
Quick Start with Spark


1. Version Notes

The examples below use the Spark 2.x+ API, where the primary abstraction is the Dataset (before Spark 2.0 it was the RDD).

3.1. Scala

Launch the shell:
./bin/spark-shell

Read a file to create a new Dataset:
val textFile = spark.read.textFile("README.md")
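spark-shell also accepts the standard --master option if you want to choose where it runs; for example, to run locally with four worker threads:
./bin/spark-shell --master "local[4]"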

Operate on the Dataset:
textFile.count()   // number of lines in the Dataset
textFile.first()   // first line
val linesWithSpark = textFile.filter(line => line.contains("Spark"))   // lines containing "Spark"
textFile.filter(line => line.contains("Spark")).count()   // chain filter and count in one expression

Note that filter is a lazy transformation: nothing is read or computed until an action such as count() runs.

3.2. Python

Launch the shell:
./bin/pyspark

textFile = spark.read.text("README.md")   # DataFrame with one row per line, in a column named "value"
textFile.count()   # number of rows (lines)
textFile.first()   # first row
linesWithSpark = textFile.filter(textFile.value.contains("Spark"))   # rows containing "Spark"
textFile.filter(textFile.value.contains("Spark")).count()   # same filter and count in one expression

4. More Dataset Operations

1. Find the line with the most words and return that word count:
textFile.map(line => line.split(" ").size).reduce((a, b) => if (a > b) a else b)
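The same maximum can also be taken with Java's Math.max instead of the hand-written if expression:
textFile.map(line => line.split(" ").size).reduce((a, b) => Math.max(a, b))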

2. Implement word count:
val wordCounts = textFile.flatMap(line => line.split(" ")).groupByKey(identity).count()   // Dataset of (word, count) pairs
wordCounts.collect()   // brings the result back to the driver as an Array[(String, Long)]
![image.png](https://upload-images.jianshu.io/upload_images/4045682-f386bbbf70242ca6.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

5. Caching

Mark the Dataset to be cached in memory; the cache is populated the first time an action runs over it:

linesWithSpark.cache()
linesWithSpark.count()
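Once the cached data is no longer needed, it can be released explicitly:
linesWithSpark.unpersist()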

6. Running from a File

Instead of the interactive shell, the same logic can be packaged and run as a standalone application:

/* SimpleApp.scala */
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}
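To run it, package the application and submit it with spark-submit. Here is a minimal sketch of an sbt build, assuming Scala 2.12 and Spark 3.2 (both versions are illustrative assumptions; adjust them to match your installation):

// build.sbt -- versions here are assumptions, not from the original post
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.15"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.2.0"

Running sbt package then produces a jar (the path below follows from the name and versions above), which can be submitted:

./bin/spark-submit --class "SimpleApp" --master "local[4]" target/scala-2.12/simple-project_2.12-1.0.jar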

Source: https://www.cnblogs.com/twodoge/p/10741446.html