Spark: Reading External Data Sources
Reading JSON files
import scala.util.parsing.json.JSON
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]): Unit = {
  val conf = new SparkConf()
    .setMaster("local[*]")
    .setAppName(this.getClass.getName)
  val sc = new SparkContext(conf)
  val inputJsonFile = sc.textFile("D:\\studyplace\\sparkBook\\chapter4\\data\\chapter4_3_2.json")
  val content = inputJsonFile.map(JSON.parseFull)
  println(content.collect.mkString(","))
  // iterate over the parse results
  content.foreach {
    case Some(map: Map[String, Any]) => println(map)
    case None => println("invalid JSON")
    case _ => println("other error...")
  }
  sc.stop()
}
Note: because the file is read line by line, every line must be a complete JSON string, and all records live in the same file.
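For reference, a file that JSON.parseFull can handle line by line might look like this (hypothetical contents of chapter4_3_2.json, one complete JSON object per line):

{"name": "Tom", "age": 20}
{"name": "Jerry", "age": 18}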
Reading CSV and TSV files
A CSV file uses commas as field separators; a TSV file uses tabs.
val inputFile = sc.textFile("file path")
inputFile.flatMap(_.split("separator"))
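A fuller sketch for a CSV file (the object name, path, and three-column layout are assumptions for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object CsvExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("CsvExample")
    val sc = new SparkContext(conf)
    // assume each line looks like: 1,Tom,20
    val lines = sc.textFile("D:\\data\\people.csv")   // hypothetical path
    val records = lines.map(_.split(","))             // keep one Array of fields per row
    records.foreach(arr => println(arr.mkString(" | ")))
    // for a TSV file, only the separator changes:
    // val tsvRecords = sc.textFile("D:\\data\\people.tsv").map(_.split("\t"))
    sc.stop()
  }
}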
Reading SequenceFiles
Only key-value data can be stored in SequenceFile format, analogous to a Map in Java or a Tuple2 in Scala.
A SequenceFile can compress records individually or compress whole blocks; compression is disabled by default.
val inputFile = sc.sequenceFile[String, String]("file path")
The type parameters are the data types of the keys and values being read; a round-trip sketch follows below.
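A minimal round-trip sketch, saving a key-value RDD as a SequenceFile and reading it back (the object name, sample data, and output path are assumptions):

import org.apache.spark.{SparkConf, SparkContext}

object SequenceFileExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("SequenceFileExample")
    val sc = new SparkContext(conf)
    val path = "D:\\data\\seqfile"   // hypothetical path
    // write: only (key, value) pairs can be saved as a SequenceFile
    sc.parallelize(Seq(("Tom", "20"), ("Jerry", "18"))).saveAsSequenceFile(path)
    // read: the type parameters must match the stored key and value types
    val inputFile = sc.sequenceFile[String, String](path)
    println(inputFile.collect.toList)
    sc.stop()
  }
}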
Reading data in ObjectFile format
Spark can read object-format data into an RDD, and each element of the RDD can be restored to the original object.
Define a class
package chapter4

case class Person(name: String, age: Int)
Read the data
import chapter4.Person
import org.apache.spark.{SparkConf, SparkContext}

object chapte4_3_5 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName(this.getClass.getName)
      .setMaster("local[*]")
    val sc = new SparkContext(conf)
    val rddData = sc.objectFile[Person]("D:\\studyplace\\sparkBook\\chapter4\\data\\chapter4_3_5.object")
    println(rddData.collect.toList)
    sc.stop()
  }
}
When an object is serialized, its original class information, including the package name, is preserved, so the type parameter Person must match the stored class exactly.
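For context, the object file read above could have been produced with saveAsObjectFile; a minimal sketch (the object name and sample data are assumptions):

import chapter4.Person
import org.apache.spark.{SparkConf, SparkContext}

object SaveObjectFileExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("SaveObjectFileExample")
    val sc = new SparkContext(conf)
    val people = sc.parallelize(Seq(Person("Tom", 20), Person("Jerry", 18)))  // hypothetical data
    // each element is serialized with its full class information, including the package name
    people.saveAsObjectFile("D:\\studyplace\\sparkBook\\chapter4\\data\\chapter4_3_5.object")
    sc.stop()
  }
}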
Reading data from HDFS (calling the Hadoop API explicitly)
import org.apache.hadoop.io.{LongWritable, Text}
// newAPIHadoopFile requires the new-API input format (mapreduce package, not mapred)
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object chapter4_3_6 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("chapter4_3_6")
    val sc = new SparkContext(conf)
    val path = "hdfs://ip:8020/path"
    val inputHadoopFile = sc.newAPIHadoopFile[LongWritable, Text, TextInputFormat](path)
    val result = inputHadoopFile.map(_._2.toString).collect()
    println(result.mkString(","))
    sc.stop()
  }
}
In newAPIHadoopFile[LongWritable, Text, TextInputFormat], the first type parameter, LongWritable, is the byte offset at which Hadoop reads each record; Text is the record content at that offset; and TextInputFormat is the input format class that tells Hadoop how to split and read the file.
Calling inputHadoopFile.collect.mkString(",") directly raises a serialization error: Writable subtypes (LongWritable, IntWritable, Text) are not serializable, so convert them first, for example with inputHadoopFile.map(_._2.toString), before collecting.
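If the file on HDFS is plain text and the offsets are not needed, sc.textFile with an hdfs:// URI is a simpler alternative that avoids the Writable types entirely; a minimal sketch (host, port, and path are placeholders):

val lines = sc.textFile("hdfs://ip:8020/path")
println(lines.collect.mkString(","))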
Reading data from MySQL
Add the dependency
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.40</version>
</dependency>
package chapter4

import java.sql.DriverManager

import org.apache.spark.rdd.JdbcRDD
import org.apache.spark.{SparkConf, SparkContext}

object chapter4_3_7 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("chapter4_3_7").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val inputMysql = new JdbcRDD(sc,
      () => {
        Class.forName("com.mysql.jdbc.Driver")
        DriverManager.getConnection("jdbc:mysql://localhost:3306/spark?" +
          "useUnicode=true&characterEncoding=utf-8", "root", "123456")
      },
      "select * from person where id >= ? and id <= ?;",
      1,  // lower bound of the query condition
      3,  // upper bound of the query condition
      1,  // number of partitions
      r => (r.getInt(1), r.getString(2), r.getInt(3)))
    println("number of records returned: " + inputMysql.count)
    inputMysql.foreach(println)
    sc.stop()
  }
}
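The row-mapping function implies a three-column person table; a hypothetical layout consistent with r.getInt(1), r.getString(2), r.getInt(3):

// assumed table layout (hypothetical):
//   id   INT      -- read with r.getInt(1)
//   name VARCHAR  -- read with r.getString(2)
//   age  INT      -- read with r.getInt(3)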