Too many small files when Spark writes to HDFS
Author: 互联网
Enabling Adaptive Execution, which sets the number of shuffle partitions dynamically, can mitigate the Spark SQL small-file problem:
- .config("spark.sql.adaptive.enabled", "true") // enable Spark SQL Adaptive Execution, which sets the number of shuffle reducers automatically
- .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "67108864b") // target amount of data each reducer reads (67108864 bytes = 64 MB)
Enabling runtime join optimization on top of Adaptive Execution:
- .config("spark.sql.adaptive.join.enabled", "true") // whether to apply CBO-style join optimization based on Adaptive Execution runtime statistics
- .config("spark.sql.autoBroadcastJoinThreshold", "20971520") // maximum table size for a broadcast join (20971520 bytes = 20 MB)
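Put together, a minimal sketch of a SparkSession built with the settings above. This assumes a Spark 2.x build where `spark.sql.adaptive.shuffle.targetPostShuffleInputSize` and `spark.sql.adaptive.join.enabled` are available (later 3.x releases replace them with the `spark.sql.adaptive.coalescePartitions.*` family); the app name, table name, and output path are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative builder combining the configs discussed above (Spark 2.x AE settings).
val spark = SparkSession.builder()
  .appName("small-file-control") // hypothetical app name
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "67108864b") // ~64 MB per reducer
  .config("spark.sql.adaptive.join.enabled", "true")
  .config("spark.sql.autoBroadcastJoinThreshold", "20971520") // 20 MB broadcast threshold
  .getOrCreate()

// With AE coalescing post-shuffle partitions toward ~64 MB each, a write after a
// shuffle produces correspondingly fewer, larger output files:
spark.sql("SELECT key, count(*) AS cnt FROM src_table GROUP BY key") // src_table is a placeholder
  .write
  .mode("overwrite")
  .parquet("hdfs:///tmp/output") // placeholder HDFS path
```

Note that Adaptive Execution only merges partitions produced by a shuffle; a plain scan-and-write with no shuffle stage is unaffected by these settings.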
Source: https://www.cnblogs.com/javalinux/p/15098875.html