
Java Collection toArray Pitfall

Posted on 2017-07-27 | Categorized in Java

Question

While using java.util.Set's toArray method, I hit a java.lang.ClassCastException and was puzzled for a moment. The code looked roughly like this:

public void testSetToArray() {
    Set<String> set = new HashSet<String>();
    set.add("First");
    set.add("Second");
    set.add("Third");
    String[] setArr = (String[]) set.toArray(); // exception thrown here
    for (String s : setArr) {
        System.out.println(s);
    }
}

Solution

After hitting the exception, I checked the JavaDoc of the Collection interface. The method description simply says it returns the collection's elements as an array, which sounds fine at first glance, but Collection also declares an overloaded method:

public interface Collection<E> extends Iterable<E> {
    Object[] toArray();
    <T> T[] toArray(T[] a);
}

Looking closely at the no-arg toArray, the cause is clear: it returns Object[], so an explicit cast to String[] fails because the runtime type of the array is Object[], not String[]. The other overload is defined generically and does not have this problem, so we can rewrite the problematic code:

public void testSetToArray() {
    Set<String> set = new HashSet<String>();
    set.add("First");
    set.add("Second");
    set.add("Third");
    String[] setArr = set.toArray(new String[0]);
    for (String s : setArr) {
        System.out.println(s);
    }
}

That solves the problem. Java's collections are convenient, but they still need care: with toArray, the wrong usage compiles without any warning and only blows up at runtime. I strongly recommend using the generic toArray overload for the conversion.
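To make the runtime-type difference concrete, here is a small self-contained sketch (my own illustration, not from the original post; the ToArrayDemo class name is made up) that prints the runtime classes of the arrays involved:

import java.util.HashSet;
import java.util.Set;

// Hypothetical demo class showing why the cast fails: the no-arg toArray()
// returns an array whose runtime type is Object[], while the generic
// overload returns a real String[].
public class ToArrayDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<String>();
        set.add("First");

        Object[] raw = set.toArray();
        System.out.println(raw.getClass());   // class [Ljava.lang.Object;

        String[] typed = set.toArray(new String[0]);
        System.out.println(typed.getClass()); // class [Ljava.lang.String;

        // Passing a pre-sized array also works; when the size matches,
        // the collection fills it in place instead of allocating a new one.
        String[] sized = set.toArray(new String[set.size()]);
        System.out.println(sized.length);     // 1
    }
}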

Tomcat 8 Code Study

Posted on 2017-07-21

I had originally planned to start reading the source from Tomcat 7, but I'm on macOS and Tomcat 7 depends on DBCP 1.4, which has to be built with JDK 1.6; too much hassle, so I moved on to Tomcat 8...

Environment

  • macOS
  • JDK7
  • Ant 1.8.x+

Download

$ wget https://github.com/apache/tomcat/archive/TOMCAT_8_0_0.tar.gz

Directory Structure

  • java
    source code directory
  • conf
    configuration directory
  • build.xml
    Ant build file
  • output
    default build output directory

Build

Tomcat is built with Ant; BUILDING.txt documents the process in detail.
The following two steps are enough to build and deploy a Tomcat container:

$ cd {tomcat.source}
$ ant

NOTE

  1. By default Tomcat builds with the properties in build.properties.default; it is recommended to create your own build.properties in the source root and build from that.
    If Ant reports an error, the first thing to check is base.path, which is the directory where Tomcat's build-time dependencies are downloaded. It defaults to /usr/share/java, and anything under /usr/* needs root, so set it to a path you can conveniently write to.
  2. If the build reports "the archive file.tar.gz doesn't exist", the dbcp2-2.0-SNAPSHOT dependency is missing. Checking the download URL configured in build.properties shows that this version no longer exists upstream; switch to dbcp2-2.0.2-SNAPSHOT instead.

Startup

  • Code statistics
    A quick count on Tomcat 8: summing the '\n' characters in every *.java file gives 494831 lines under the project root and 362858 lines under the source directory. Tomcat's code base is clearly quite large >_<
  • Startup entry class
    org.apache.catalina.startup.Bootstrap

Let's take a quick look at the class doc:

Bootstrap loader for Catalina. This application constructs a class loader
for use in loading the Catalina internal classes (by accumulating all of the
JAR files found in the “server” directory under “catalina.home”), and
starts the regular execution of the container. The purpose of this
roundabout approach is to keep the Catalina internal classes (and any
other classes they depend on, such as an XML parser) out of the system
class path and therefore not visible to application level classes.

Reading the source confirms that Bootstrap does two main things. First, it builds the ClassLoader for Catalina's (Tomcat's) core classes and their dependent libraries; this is a dedicated, application-level ClassLoader, so Tomcat's own classes and every third-party library they depend on are hidden from user applications. Second, it triggers the initialization and startup logic of the Catalina (Tomcat) container.

How the isolation works (see the sketch below):
1. A custom ClassLoader built on top of the JVM's parent-delegation class loading model
2. The key call: Thread.currentThread().setContextClassLoader
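As a rough illustration of this isolation idea, here is a minimal sketch under my own assumptions (it is not Tomcat's actual Bootstrap code; the MiniBootstrap class name and the directory handling are simplified): build a URLClassLoader over the container's jars, install it as the thread context class loader, and load the entry class reflectively through it.

import java.io.File;
import java.io.FilenameFilter;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

// Simplified sketch (not Tomcat's real Bootstrap): load the container's
// classes through a dedicated URLClassLoader so they never appear on the
// system class path visible to application code.
public class MiniBootstrap {
    public static void main(String[] args) throws Exception {
        File libDir = new File(System.getProperty("catalina.home", "."), "lib");
        File[] jars = libDir.listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                return name.endsWith(".jar");
            }
        });

        URL[] urls = new URL[jars == null ? 0 : jars.length];
        for (int i = 0; i < urls.length; i++) {
            urls[i] = jars[i].toURI().toURL();
        }

        // Parent is the system class loader; the container's classes live
        // only in this child loader.
        ClassLoader catalinaLoader =
                new URLClassLoader(urls, ClassLoader.getSystemClassLoader());
        Thread.currentThread().setContextClassLoader(catalinaLoader);

        // Load the container entry class through the custom loader and hand
        // the loader over to it, much as Bootstrap does via reflection.
        Class<?> startupClass =
                catalinaLoader.loadClass("org.apache.catalina.startup.Catalina");
        Object catalina = startupClass.newInstance();
        Method setParent =
                startupClass.getMethod("setParentClassLoader", ClassLoader.class);
        setParent.invoke(catalina, catalinaLoader);
        // ... load() and start() would then be invoked the same way.
    }
}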

Catalina mainly uses three ClassLoaders (all URLClassLoaders),
defined in catalina.properties (a quick way to inspect the resulting chain follows the list):

  • commonLoader
    common.loader, loads the jars under ${catalina.home}/lib and ${catalina.base}/lib
  • catalinaLoader
    server.loader, a user-definable loader that defaults to commonLoader
  • sharedLoader
    shared.loader, a user-definable loader that defaults to commonLoader
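To see the resulting hierarchy at runtime, one quick check (my own snippet, not from the post; the LoaderChain name is hypothetical) is to walk a class's loader chain upward:

// Walks the class loader chain of a given class; run inside a webapp to see
// the webapp loader, then the shared/common loaders, then the JVM's loaders.
public class LoaderChain {
    public static void print(Class<?> clazz) {
        ClassLoader cl = clazz.getClassLoader();
        while (cl != null) {
            System.out.println(cl);
            cl = cl.getParent();
        }
        System.out.println("<bootstrap class loader>");
    }

    public static void main(String[] args) {
        print(LoaderChain.class);
    }
}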

Main program entry class

org.apache.catalina.startup.Catalina

The two key methods in the startup path

  • load(String args[]);
    1. Parse the command-line arguments
    2. Initialize the Server instance
  • start();

LeetCode - 3Sum

Posted on 2017-06-22 | Categorized in algorithm

3Sum

Given an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero.

Note: The solution set must not contain duplicate triplets.

For example, given array S = [-1, 0, 1, 2, -1, -4],
A solution set is:
[
[-1, 0, 1],
[-1, -1, 2]
]

Solution

The problem is clear: given an array, find all triplets that sum to zero (the answer must not contain duplicate triplets).

Solution Process

First version (stupid)

Idea: the moment I read the problem it looked trivial: try every combination, sort each matching triplet, hash it, and throw away the duplicates.

public class Solution {
    public List<List<Integer>> threeSum(int[] nums) {
        List<List<Integer>> result = new ArrayList<List<Integer>>();
        List<String> hash = new ArrayList<String>();
        for (int i = 0; i < nums.length - 2; i++) {
            for (int j = i + 1; j < nums.length - 1; j++) {
                for (int k = j + 1; k < nums.length; k++) {
                    if (nums[i] + nums[j] + nums[k] == 0) {
                        List<Integer> tmp = new ArrayList<Integer>(3);
                        tmp.add(nums[i]);
                        tmp.add(nums[j]);
                        tmp.add(nums[k]);
                        tmp = this.sort(tmp);
                        String key = tmp.get(0) + "" + tmp.get(1) + "" + tmp.get(2);
                        if (this.contains(hash, key)) {
                            continue;
                        }
                        hash.add(key);
                        result.add(tmp);
                    }
                }
            }
        }
        return result;
    }

    private boolean contains(List<String> hash, String key) {
        for (String hash_key : hash) {
            if (key.equals(hash_key)) {
                return true;
            }
        }
        return false;
    }

    private List<Integer> sort(List<Integer> target) {
        for (int i = 0; i < target.size(); i++) {
            for (int j = i + 1; j < target.size(); j++) {
                if (target.get(i) > target.get(j)) {
                    int tmp = target.get(i);
                    target.set(i, target.get(j));
                    target.set(j, tmp);
                }
            }
        }
        return target;
    }
}

When I submitted it to LeetCode, the last few test cases ran forever and it finally failed with Time Limit Exceeded.
Looking at it again, the approach is far too heavy; calling it an algorithm is generous, it's more like a pile of garbage, so I reworked the solution.

Second version (better)

Idea: the first version is O(n^3), which times out on the larger test cases. Instead, sort the array first, then fix the leftmost element and compare it with a middle pointer and a rightmost pointer inside the remaining range: if the three values sum to 0, record the triplet; if the sum is below 0, the middle value is too small, so move it right; if the sum is above 0, the rightmost value is too large, so move it left. Shrinking the window this way finds every triplet and brings the overall complexity down to O(n^2).
O(n^2) => the inner window shrinks up to n times for each of the n positions of the leftmost element

public class Solution {
    public List<List<Integer>> threeSum(int[] nums) {
        List<List<Integer>> result = new ArrayList<List<Integer>>();
        nums = this.sort(nums);
        int i = 0;
        // iterative two-pointer scan
        while (i < nums.length - 2) { // search range is [i, nums.length - 1]
            int j = i + 1;            // middle pointer
            int k = nums.length - 1;  // rightmost pointer
            while (j < k) {           // inner-loop termination condition
                int sum = nums[i] + nums[j] + nums[k];
                if (sum == 0) {
                    List<Integer> tmp = new ArrayList<Integer>();
                    tmp.add(nums[i]);
                    tmp.add(nums[j]);
                    tmp.add(nums[k]);
                    result.add(tmp);
                }
                // sum <= 0: the middle value is too small, move it right
                if (sum <= 0) {
                    while (nums[j] == nums[++j] && j < k) {
                        // keep moving right while the next value is a duplicate
                    }
                }
                // sum >= 0: the rightmost value is too large, move it left
                if (sum >= 0) {
                    while (nums[k--] == nums[k] && j < k) {
                        // keep moving left while the previous value is a duplicate
                    }
                }
            }
            while (nums[i] == nums[++i] && i < j) {
                // advance the leftmost pointer (skipping duplicates) for the next round
            }
        }
        return result;
    }

    private int[] sort(int[] nums) {
        for (int i = 0; i < nums.length; i++) {
            for (int j = i + 1; j < nums.length; j++) {
                if (nums[i] > nums[j]) {
                    int tmp = nums[i];
                    nums[i] = nums[j];
                    nums[j] = tmp;
                }
            }
        }
        return nums;
    }
}

After verification, all the test cases passed. The lesson: even when a problem looks easy, it pays to think it through first ^_^
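As a quick sanity check (a driver of my own, not part of the original post), assuming the second version above is compiled as Solution, you can run it on the sample input from the problem statement:

import java.util.List;

// Tiny driver for the two-pointer Solution above (hypothetical helper class).
public class ThreeSumDemo {
    public static void main(String[] args) {
        int[] s = {-1, 0, 1, 2, -1, -4};
        List<List<Integer>> triplets = new Solution().threeSum(s);
        System.out.println(triplets); // [[-1, -1, 2], [-1, 0, 1]] (order may vary)
    }
}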

Spark 2.1 Get Started

Posted on 2017-06-09

A First Look at Spark

Check the Scala Environment

Spark 2.1 requires Scala 2.11 or later; if you don't have it, see the Scala 2.11 installation post.

$ scala -version
Scala code runner version 2.10.2 -- Copyright 2002-2013, LAMP/EPFL

Download Spark 2.1

Download the source package

$ wget http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0.tgz

Build and Install

Note: Starting version 2.0, Spark is built with Scala 2.11 by default. Scala 2.10 users should download the Spark source package
and build with Scala 2.10 support.
Because Spark builds with Scala 2.11 by default since version 2.0, and the current environment runs Scala 2.10, the package has to be built and installed manually.

Building for Scala 2.10

  • To produce a Spark package compiled with Scala 2.10, use the -Dscala-2.10 property:
    $ ./dev/change-scala-version.sh 2.10
    $ ./build/mvn -Pyarn -Phadoop-2.4 -Dscala-2.10 -DskipTests clean package

Note that support for Scala 2.10 is deprecated as of Spark 2.1.0 and may be removed in Spark 2.2.0.
This completes the full Spark build.

  • Building submodules individually
    If you only need one Spark sub-module, say Spark Streaming, you can build just that module instead of also building Spark SQL, GraphX and the others.

    It’s possible to build Spark sub-modules using the mvn -pl option.
    For instance, you can build the Spark Streaming module using:

    $ ./build/mvn -pl :spark-streaming_2.11 clean install

where spark-streaming_2.11 is the artifactId as defined in streaming/pom.xml file.

Continuous Compilation

Spark also supports continuous compilation; in other words, the build watches for source changes and recompiles incrementally.

We use the scala-maven-plugin which supports incremental and continuous compilation. E.g.

$ ./build/mvn scala:cc

should run continuous compilation (i.e. wait for changes). However, this has not been tested extensively. A couple of gotchas to note:

  • it only scans the paths src/main and src/test (see docs), so it will only work from within certain submodules that have that structure.
  • you’ll typically need to run mvn install from the project root for compilation within specific submodules to work; this is because submodules that depend on other submodules do so via the spark-parent module.

Thus, the full flow for running continuous-compilation of the core submodule may look more like:

$ ./build/mvn install -DskipTests
$ cd core
$ ../build/mvn scala:cc

Issue

  • Build failure
    The build failed with an SSL error. This is a bug in curl's SSL handling; upgrading curl fixes it.
    curl version information after the upgrade:

    curl 7.47.1 (x86_64-pc-linux-gnu) libcurl/7.47.1 OpenSSL/1.0.2g zlib/1.2.7 libssh2/1.4.3
  • Error when using the Spark shell
    Running ./bin/spark-shell or ./bin/pyspark produced the following error:

    ./spark/spark-2.1.0/bin/spark-class: line 77: syntax error near unexpected token `"$ARG"'
    ./spark/spark-2.1.0/bin/spark-class: line 77: `CMD+=("$ARG")'

A bit of googling led to an explanation in Apache's JIRA: commenters point out it is a bash issue, since the '+=' operator is only supported from bash 3.1 onward, so upgrading bash fixes it.

$ bash --version
  • Other notes
    Unless your Scala version is the 2.11+ that Spark 2.1 expects, every rebuild has to repeat the build steps above.

Command-Line Mode

  • Start
    $ ./bin/spark-shell

or

$ ./bin/spark-shell --master local[2]

The --master option specifies the master URL for a distributed cluster, or local to run locally with one thread, or local[N] to run locally with N threads. You should start by using local for testing. For a full list of options, run Spark shell with the --help option.

  • Hello World

    Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.

scala> val tf = sc.textFile("README.md")
tf: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:27
scala> tf.count()
res0: Long = 104
scala> tf.first()
res1: String = # Apache Spark

See Actions and Transformations for the rest of the syntactic sugar.

Examples

Spark Standalone Mode (single-instance mode)

  • Specify the master's IP and port
    $ cp conf/spark-env.sh.template conf/spark-env.sh
    $ vim conf/spark-env.sh

Add the following:

export SPARK_MASTER_HOST=0.0.0.0
export SPARK_MASTER_PORT=8077
export SPARK_MASTER_WEBUI_PORT=8078

  • Issue
    • Running ./bin/spark-shell throws java.net.UnknownHostException
      How it was resolved: being new to Spark, I guessed it needed a Hadoop environment, so I installed and started Hadoop, which produced the same error. Tracking it further, the real cause was that the hostname could not be resolved; fixing the hosts file solved it.

master + slave

  • Start the master
    $ ./sbin/start-master.sh

Note: if you connect to the Spark cluster at this point, it reports errors about insufficient resources, because no slaves have been started to provide resources to the cluster. (My tentative guess is that the master only manages resources and does not provide compute itself.)

  • Start a slave

    $ ./sbin/start-slave.sh spark://IP:PORT
  • Connect to the cluster

    $ ./bin/spark-shell --master spark://0.0.0.0:7077 --total-executor-cores 2

--total-executor-cores controls how many cluster cores the shell uses

Submitting a Job

  • First job
    Sample code:
/** SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md"
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numOfA = logData.filter(line => line.contains("a")).count()
    val numOfB = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numOfA, Lines with b: $numOfB")
    sc.stop()
  }
}
Project structure

simple
├── simple.sbt
├── src
│   └── main
│       └── scala
│           └── SimpleApp.scala

Build file: simple.sbt

name := "Simple Project"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"

Build command

$ sbt package

The build failed with the following error:
unresolved dependency: org.glassfish.hk2#hk2-utils;2.22.2: not found
Solution: drop the dependency that cannot be resolved; see stackoverflow for details: http://stackoverflow.com/questions/20912369/sbt-fails-to-resolve-dependency-for-jersey-container-grizzly2-http-2-5-1

Submit the job

$ spark-submit --class "SimpleApp" --master spark://0.0.0.0:7077 target/scala-2.11/simple-project_2.11-1.0.jar

Or submit in local mode:

$ spark-submit --class "SimpleApp" --master local[4] target/scala-2.11/simple-project_2.11-1.0.jar

Besides the command-line arguments, spark-submit also reads the configuration in conf/spark-defaults.conf, for example:

spark.master spark://5.6.7.8:7077
spark.executor.memory 4g
spark.eventLog.enabled true
spark.serializer org.apache.spark.serializer.KryoSerializer

Spark + YARN

// TODO

Basic Concepts

RDD (Resilient Distributed Dataset)
// TODO

Scala 2.11 Installation

Posted on 2017-04-25 | Categorized in Program Language

Install scala-2.11.8

  • Download

    curl -o scala-2.11.8.tgz https://downloads.typesafe.com/scala/2.11.8/scala-2.11.8.tgz

    NOTE:
    You can also pick a different version from the Scala website; the installation steps are the same.

  • Extract

    tar -zvxf scala-2.11.8.tgz
  • Configure environment variables
    Edit ~/.bashrc and add the following two lines:

    export SCALA_HOME=/opt/scala-2.11.8
    export PATH=$SCALA_HOME/bin:$PATH
  • Verify

    scala -version

If it reports version 2.11.8, the installation succeeded.

Hello World

Posted on 2017-01-05

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment
