Deploying a Local Spark Development Environment in IDEA
# 1. Software Downloads
- Spark download:
  Version: spark-3.5.0-bin-hadoop3.tgz (prebuilt for Hadoop 3.3.x)
  URL: https://archive.apache.org/dist/spark/spark-3.5.0/
- Scala download:
  Version: scala-2.12.18.zip (Windows build)
  URL: https://www.scala-lang.org/download/2.12.18.html
- A JDK is assumed to be installed already.
# 2. Installation
# 2.1. Unpack into your installation directory
```text
D:\app\scala-2.12.18
D:\app\spark-3.5.0-bin-hadoop3
```
# 2.2. Configure environment variables
```text
SCALA_HOME=D:\app\scala-2.12.18
SPARK_HOME=D:\app\spark-3.5.0-bin-hadoop3

### Add to PATH
%SPARK_HOME%\bin
%SCALA_HOME%\bin
```
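To verify the PATH changes took effect, open a new terminal and check that both tools resolve (the versions printed should be Scala 2.12.18 and Spark 3.5.0):
```bash
# both commands should now resolve via PATH
scala -version
spark-submit --version
```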
# 2.3. Install the Scala plugin in IDEA
You can install it online: download it straight from the IDEA plugin marketplace, then restart IDEA (the restart is mandatory).
- Offline install: download the Scala plugin manually
  Version: 2022.1.5 (pick the build matching your IDEA version)
  URL: https://plugins.jetbrains.com/plugin/1347-scala/versions#tabs
  After downloading, place it in IDEA's plugins directory:
  D:\app\idea_2022\plugins
# 3. IDEA Configuration
# 3.1. Create the project
Create a Maven project. (At this point you will find you cannot create a Scala class yet.)
File -> Project Structure -> Global Libraries -> + Scala SDK
Create a scala directory under main/ and use Mark Directory as -> Sources Root.
You can now write Scala code; a quick smoke test is shown below.
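A minimal sketch to confirm the Scala SDK is wired up (the file and object names are arbitrary):
```scala
// src/main/scala/HelloScala.scala
object HelloScala {
  def main(args: Array[String]): Unit = {
    // if this compiles and runs, the Scala SDK is configured correctly
    println("Scala " + scala.util.Properties.versionNumberString)
  }
}
```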
# 3.2. Spark configuration files
- Download core-site.xml and log4j.properties from the Spark cluster and place them under resources. core-site.xml supplies the filesystem settings (including the cosn:// access configuration used in the example below); log4j.properties controls log verbosity.
# 4. Developing with Spark
# 4.1. Configure pom.xml
```xml
<properties>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
    <scala.version>2.12.18</scala.version>
    <spark.version>3.5.0</spark.version>
    <hadoop.version>3.3.3</hadoop.version>
</properties>

<dependencies>
    <!-- Tencent COS filesystem support (cosn:// paths) -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-cos</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- external cluster support (submitting to YARN) -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-yarn_2.12</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>
```
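One optional refinement, assuming you always run the packaged job via spark-submit on a cluster that already ships the Spark jars: mark the Spark artifacts as provided so they are compiled against but not bundled, which keeps the jar small. A sketch for one dependency:
```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>${spark.version}</version>
    <!-- the cluster supplies this jar at runtime; do not bundle it -->
    <scope>provided</scope>
</dependency>
```
If you do this, local runs inside IDEA need the run-configuration option that adds provided-scope dependencies to the classpath.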
# 4.2. WordCount
```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // master: "local[*]" for a run inside the IDE, "yarn" for a cluster run
    val spark: SparkSession = SparkSession.builder().master("yarn").appName("word count").getOrCreate()
    val sc: SparkContext = spark.sparkContext
    val rdd: RDD[String] = sc.textFile("cosn://bucket/weic/bigdata/spark/hehe.txt")
    // split each line on commas, pair every word with 1, then sum the counts per word
    val counts: RDD[(String, Int)] = rdd.flatMap(_.split(",")).map((_, 1)).reduceByKey(_ + _)
    counts.collect().foreach(println)
    counts.saveAsTextFile("cosn://bucket/weic/bigdata/spark/output/WordCount")
    // repartition(1) merges the result into a single output file
    counts.repartition(1).saveAsTextFile("cosn://bucket/weic/bigdata/spark/output/WordCount2")
    spark.stop()
  }
}
```
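For quick iteration inside IDEA you can run the same logic in local mode before touching the cluster; a minimal sketch, assuming a local input file at the placeholder path data/hehe.txt:
```scala
import org.apache.spark.sql.SparkSession

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside the IDE's JVM using all available cores
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("word count local")
      .getOrCreate()
    val counts = spark.sparkContext
      .textFile("data/hehe.txt") // placeholder local path
      .flatMap(_.split(","))
      .map((_, 1))
      .reduceByKey(_ + _)
    counts.collect().foreach(println)
    spark.stop()
  }
}
```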
# 4.3. Build plugin
The original org.scala-tools:maven-scala-plugin is deprecated; the snippet below uses its maintained successor, scala-maven-plugin, and targets Java 8, the only bytecode target Scala 2.12 supports.
```xml
<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <!-- exclude the cluster config files from the jar; the cluster provides its own -->
            <excludes>
                <exclude>*</exclude>
            </excludes>
        </resource>
    </resources>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>4.8.1</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <scalaVersion>${scala.version}</scalaVersion>
                <args>
                    <arg>-target:jvm-1.8</arg>
                </args>
            </configuration>
        </plugin>
    </plugins>
</build>
```
# 4.4. Package and deploy
```bash
mvn clean package
```
- Upload the jar to the cluster, then submit:
```bash
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.spark.WordCount \
  --executor-memory 7g \
  --executor-cores 14 \
  --num-executors 9 \
  spark-extractor-1.0-SNAPSHOT.jar > ../logs/WordCount-2024-03-08.log
```
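With --deploy-mode cluster the redirect above only captures the spark-submit client output; the driver runs on the cluster, so its logs end up in YARN. Once the job finishes you can retrieve them with, for example (the application id is a placeholder; spark-submit prints the real one):
```bash
# fetch aggregated driver/executor logs; requires YARN log aggregation to be enabled
yarn logs -applicationId <application_id>
```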