#Combiner Usage and Errors #Case Study
I. About the Combiner
The Combiner in MapReduce exists to cut down the amount of data transferred between the map tasks and the reduce tasks. Hadoop lets the user specify a combine function that runs over each MapTask's output; by pre-aggregating the map output locally, it reduces both the network bandwidth consumed and the load placed on the Reducers.
Differences between Combiner and Reducer
- They run in different places: the Combiner runs on the node of each individual MapTask, while the Reducer receives the output of all Mappers globally.
- The Combiner's input key-value types are exactly the Mapper's output key-value types, and the Combiner's output key-value types must match the Reducer's input key-value types.
- The Combiner must be used with great care, because it is an optional component of the MapReduce pipeline: the framework may invoke it zero, one, or several times. The rule is therefore: whether the Combiner runs at all, and however many times it runs, must not affect the business logic or the final result.
It may not be obvious what "affecting the business logic" means, so below I describe the problem I ran into and then walk through two contrasting examples; by the end it should be clear.
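For an intuition of what a safe Combiner looks like, first consider the classic word count: addition is associative and commutative, so the reducer class itself can be registered as the combiner, and running it zero, one, or several times cannot change the final counts. A minimal sketch (class and package names are mine, not from the original article):

package com.atSchool.wordcount;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
 * A reducer that is safe to double as a combiner: partial sums of
 * partial sums still add up to the same total.
 */
public class wordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
	private IntWritable result = new IntWritable();
	@Override
	protected void reduce(Text key, Iterable<IntWritable> value,
			Reducer<Text, IntWritable, Text, IntWritable>.Context context)
			throws IOException, InterruptedException {
		int sum = 0;
		for (IntWritable v : value) {
			sum += v.get();
		}
		result.set(sum);
		context.write(key, result);
	}
}

With such a class, job.setCombinerClass(wordCountReducer.class) and job.setReducerClass(wordCountReducer.class) can safely point at the same implementation.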
II. The Problem I Ran Into with the Combiner
While writing one of my examples (the incorrect case below), the mapper, combiner, and reducer code all looked correct, yet the job never produced the expected result: in the reduce phase, values with the same key were not aggregated together. After much debugging, the root cause turned out to be that the combiner had altered the business logic. The two cases below illustrate this.
III. Correct Example
1. Data:
- Link: https://pan.baidu.com/s/1Vng4GW0J1qa9jC_-7pGE7Q (extraction code: y3sf)
2. Requirement:
Build an inverted index of the keywords in the day01~03 files.
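The data files themselves sit behind the link above, but their content can be reconstructed from the debug output later in this article (the file sizes of 34, 12 and 31 bytes quoted in the summary match these lines exactly); this reconstruction is mine, not from the original:

day01.txt: i love beijing and i love shanghai
day02.txt: i love china
day03.txt: beijing is the capital of china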
3. Code
mapperDemo.java
package com.atSchool.index;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
/**
 * Output format:
 * context.write("love:day01.txt", "1")
 * context.write("beijing:day01.txt", "1")
 * context.write("love:day02.txt", "1")
 * context.write("beijing:day01.txt", "1")
 */
public class mapperDemo extends Mapper<LongWritable, Text, Text, Text> {
@Override
protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
// Split the line on whitespace
String[] split = value.toString().split(" +");
// Get the name of the file this split came from
// 1. Get the input split
InputSplit inputSplit = context.getInputSplit();
// 2. Downcast to FileSplit
FileSplit fSplit = (FileSplit) inputSplit;
// 3. Take the file name from the path
String name = fSplit.getPath().getName();
for (String string : split) {
context.write(new Text(string + ":" + name), new Text("1"));
System.out.println("mapper:" + "<" + string + ":" + name + "," + "1" + ">");
}
}
}
combinerDemo.java
package com.atSchool.index;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
 * Input format:
 * <"love:day01.txt", {"1"}>
 * <"beijing:day01.txt", {"1","1"}>
 * <"love:day02.txt", {"1"}>
 *
 * Output format:
 * context.write("love", "day01.txt:1")
 * context.write("beijing", "day01.txt:2")
 * context.write("love", "day02.txt:1")
 */
public class combinerDemo extends Reducer<Text, Text, Text, Text> {
private Text outKey = new Text();
private Text outValue = new Text();
@Override
protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
// Sum up the "1" values for this word:file key
int sum = 0;
for (Text text : value) {
int parseInt = Integer.parseInt(text.toString().trim());
sum += parseInt;
}
// Split the key into the word and the file name
String[] split = key.toString().split(":");
// Emit <word, "file:count">
outKey.set(split[0].trim());
outValue.set(split[1].trim() + ":" + String.valueOf(sum));
context.write(outKey, outValue);
System.out
.println("combiner:" + "<" + split[0].trim() + "," + split[1].trim() + ":" + String.valueOf(sum) + ">");
}
}
reduceDemo.java
package com.atSchool.index;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
 * Input format:
 * <"love", {"day01.txt:1", "day02.txt:1"}>
 * <"beijing", {"day01.txt:2"}>
 *
 * Output format:
 * context.write("love", " day01.txt:1 day02.txt:1")
 * context.write("beijing", " day01.txt:2")
 */
public class reduceDemo extends Reducer<Text, Text, Text, Text> {
// StringBuilder stringBuilder = new StringBuilder();
private Text outValue = new Text();
@Override
protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
/**
 * Why can't the StringBuilder be defined outside this method, as a field?
 * When the MR program runs, only the reduce method is called per key; the
 * class is not re-instantiated each time. A field-level StringBuilder
 * would keep appending at the end and never discard the previous key's
 * data, whereas Text's set method overwrites the previous value.
 */
StringBuilder stringBuilder = new StringBuilder();
/**
 * Why can't the iterator be traversed more than once?
 * Each call to next() returns the element at the current index (cursor)
 * and then advances the cursor by one. Once all elements have been
 * visited, cursor == Collection.size(), so a subsequent
 * while (Iterator.hasNext()) loop gets false immediately and no second
 * pass happens. To traverse again you would have to obtain a fresh
 * iterator from the collection via iterator().
 */
// for (Text text : value) {
//     System.out.println("reduce input value: " + text.toString());
// }
for (Text text : value) {
System.out.println("reduce输入的value:" + text.toString());
stringBuilder.append(" " + text.toString());
}
// Emit
outValue.set(stringBuilder.toString());
context.write(key, outValue);
System.out.println("reduce:" + "<" + key.toString() + "," + stringBuilder.toString() + ">");
}
}
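A related caveat worth adding here (my note, not something this code runs into): besides the values iterator being single-pass, Hadoop also reuses one and the same Text instance for every element it hands out, so to keep values around for a second pass you must deep-copy them into a collection. A minimal sketch, assuming a Text-to-Text reducer like the ones in this article:

package com.atSchool.index;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
 * Illustrative only: how to traverse the values of one key twice.
 */
public class twoPassReduceDemo extends Reducer<Text, Text, Text, Text> {
	@Override
	protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context)
			throws IOException, InterruptedException {
		// The framework reuses one Text object for every element of the
		// iterator, so new Text(t) is needed to take a real copy.
		List<Text> cached = new ArrayList<>();
		for (Text t : value) {
			cached.add(new Text(t));
		}
		// cached can now be iterated any number of times.
		for (Text t : cached) {
			System.out.println("first pass: " + t.toString());
		}
		for (Text t : cached) {
			context.write(key, t); // second pass
		}
	}
}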
jobDemo.java
package com.atSchool.index;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import com.atSchool.utils.HDFSUtils;
public class jobDemo extends Configured implements Tool {
public static void main(String[] args) throws Exception {
ToolRunner.run(new jobDemo(), args);
}
@Override
public int run(String[] args) throws Exception {
// Get the Job
Configuration configuration = new Configuration();
configuration.set("fs.defaultFS", "hdfs://192.168.232.129:9000");
Job job = Job.getInstance(configuration);
// Set the jar that contains the job
job.setJarByClass(jobDemo.class);
// Tell the job where the Mapper, Combiner and Reducer are
job.setMapperClass(mapperDemo.class);
job.setCombinerClass(combinerDemo.class);
job.setReducerClass(reduceDemo.class);
// Tell the job the Mapper's output key and value types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
// Tell the job the Reducer's output key and value types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
// Tell the job the input and output paths
FileInputFormat.addInputPath(job, new Path("/index_source"));
/**
 * The output directory must not already exist, so delete it first if present.
 */
FileSystem fileSystem = HDFSUtils.getFileSystem();
Path path = new Path("/MapReduceOut");
if (fileSystem.exists(path)) {
fileSystem.delete(path, true);
System.out.println("删除成功");
}
FileOutputFormat.setOutputPath(job, path);
// Submit the job
boolean waitForCompletion = job.waitForCompletion(true);
System.out.println(waitForCompletion ? "Job succeeded" : "Job failed");
return 0;
}
}
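The HDFSUtils class imported above is a custom helper whose source the article does not show. A minimal sketch of what it presumably looks like, assuming it simply opens a FileSystem against the same NameNode address that run() configures:

package com.atSchool.utils;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
/**
 * Hypothetical reconstruction; the real helper is not included in the article.
 */
public class HDFSUtils {
	public static FileSystem getFileSystem() throws Exception {
		// Same HDFS address as the job configuration above (an assumption).
		return FileSystem.get(new URI("hdfs://192.168.232.129:9000"), new Configuration());
	}
}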
Run result
and day01.txt:1
beijing day01.txt:1 day03.txt:1
capital day03.txt:1
china day03.txt:1 day02.txt:1
i day02.txt:1 day01.txt:2
is day03.txt:1
love day01.txt:2 day02.txt:1
of day03.txt:1
shanghai day01.txt:1
the day03.txt:1
IV. Incorrect Example
1. Data:
A:B,C,D,F,E,O
B:A,C,E,K
C:F,A,D,I
D:A,E,F,L
E:B,C,D,M,L
F:A,B,C,D,E,O,M
G:A,C,D,E,F
H:A,C,D,E,O
I:A,O
J:B,O
K:A,C,D
L:D,E,F
M:E,F,G
O:A,H,I,J
2. Requirement:
- For every pair of people, find their common friends
3. Code
mapperDemo.java
package com.atSchool.friend;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/**
 * Find the common friends of every pair of people.
 *
 * Source data:
 * A:B,C,D,F,E,O
 * B:A,C,E,K
 * C:F,A,D,I
 * D:A,E,F,L
 *
 * Property: friendship is mutual, so there is no one-sided case;
 * if A's list contains B, then B's list contains A.
 */
public class mapperDemo extends Mapper<LongWritable, Text, Text, Text> {
private Text outKey = new Text();
private Text outValue = new Text();
@Override
protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
String[] split = value.toString().trim().split(":");
// Get the friend list
String[] split2 = split[1].split(",");
outValue.set(split[0]); // the current user
// Iterate over the friend list
for (String string : split2) {
/**
* Output:
* B A
* C A
* D A
* F A
* E A
* O A
* A B
* C B
* E B
* K B
*/
outKey.set(string);
context.write(outKey, outValue);
System.out.println("mapper:" + string + "\t" + split[0]);
}
}
}
combinerDemo.java
package com.atSchool.friend;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
* Input:
* A [I, K, C, B, G, F, H, O, D]
* B [A, F, J, E]
* C [A, E, B, H, F, G, K]
*
* i.e. B is a common friend of A, E, F and J
*/
public class combinerDemo extends Reducer<Text, Text, Text, Text> {
private Text outKey = new Text();
@Override
protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
// Collect the values
ArrayList<String> arrayList = new ArrayList<>();
for (Text text : value) {
arrayList.add(text.toString());
}
// Sort the values
String[] strings = arrayList.toArray(new String[arrayList.size()]);
Arrays.sort(strings);
System.out.println("combiner-in:" + key.toString() + "\t" + Arrays.toString(strings));
for (int i = 0; i < strings.length; i++) {
for (int j = i + 1; j < strings.length; j++) {
/**
* Output:
* BC A
* CD A
* DF A
* FG A
* GH A
* HI A
* IK A
* KO A
*/
outKey.set(strings[i] + strings[j]);
context.write(outKey, key);
System.out.println("combiner-out:" + outKey.toString() + "\t" + key.toString());
}
}
// context.write(new Text("a small verification"), new Text(Arrays.toString(strings)));
// System.out.println("combiner-out:" + outKey.toString() + "\t" + key.toString());
}
}
reduceDemo.java
package com.atSchool.friend;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
/**
 * Input: e.g. <"BC", {"A"}>
 */
public class reduceDemo extends Reducer<Text, Text, Text, Text> {
private Text outValue = new Text();
@Override
protected void reduce(Text key, Iterable<Text> value, Reducer<Text, Text, Text, Text>.Context context)
throws IOException, InterruptedException {
StringBuilder stringBuilder = new StringBuilder();
for (Text text : value) {
stringBuilder.append(text.toString() + ",");
}
outValue.set(stringBuilder.toString());
context.write(key, outValue);
System.out.println("reduce-in-out:" + key.toString() + "\t" + stringBuilder.toString());
}
}
jobDemo.java
package com.atSchool.friend;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import com.atSchool.utils.HDFSUtils;
public class jobDemo extends Configured implements Tool {
public static void main(String[] args) throws Exception {
ToolRunner.run(new jobDemo(), args);
}
@Override
public int run(String[] args) throws Exception {
// Get the Job
Configuration configuration = new Configuration();
configuration.set("fs.defaultFS", "hdfs://192.168.232.129:9000");
Job job = Job.getInstance(configuration);
// Set the jar that contains the job
job.setJarByClass(jobDemo.class);
// Tell the job where the Mapper, Combiner and Reducer are
job.setMapperClass(mapperDemo.class);
job.setCombinerClass(combinerDemo.class);
job.setReducerClass(reduceDemo.class);
// Tell the job the Mapper's output key and value types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
// Tell the job the Reducer's output key and value types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
// Tell the job the input and output paths
FileInputFormat.addInputPath(job, new Path("/friend.txt"));
/**
 * The output directory must not already exist, so delete it first if present.
 */
FileSystem fileSystem = HDFSUtils.getFileSystem();
Path path = new Path("/MapReduceOut");
if (fileSystem.exists(path)) {
fileSystem.delete(path, true);
System.out.println("删除成功");
}
FileOutputFormat.setOutputPath(job, path);
// Submit the job
boolean waitForCompletion = job.waitForCompletion(true);
System.out.println(waitForCompletion ? "Job succeeded" : "Job failed");
return 0;
}
}
4. Run result
BC A,
BD A,
BF A,
BG A,
BH A,
BI A,
BK A,
BO A,
CD A,
CF A,
CG A,
CH A,
CI A,
CK A,
CO A,
DF A,
DG A,
DH A,
DI A,
DK A,
DO A,
FG A,
FH A,
FI A,
FK A,
FO A,
GH A,
GI A,
GK A,
GO A,
HI A,
HK A,
HO A,
IK A,
IO A,
KO A,
AE B,
AF B,
AJ B,
EF B,
EJ B,
FJ B,
AB C,
AE C,
AF C,
AG C,
AH C,
AK C,
BE C,
BF C,
BG C,
BH C,
BK C,
EF C,
EG C,
EH C,
EK C,
FG C,
FH C,
FK C,
GH C,
GK C,
HK C,
AC D,
AE D,
AF D,
AG D,
AH D,
AK D,
AL D,
CE D,
CF D,
CG D,
CH D,
CK D,
CL D,
EF D,
EG D,
EH D,
EK D,
EL D,
FG D,
FH D,
FK D,
FL D,
GH D,
GK D,
GL D,
HK D,
HL D,
KL D,
AB E,
AD E,
AF E,
AG E,
AH E,
AL E,
AM E,
BD E,
BF E,
BG E,
BH E,
BL E,
BM E,
DF E,
DG E,
DH E,
DL E,
DM E,
FG E,
FH E,
FL E,
FM E,
GH E,
GL E,
GM E,
HL E,
HM E,
LM E,
AC F,
AD F,
AG F,
AL F,
AM F,
CD F,
CG F,
CL F,
CM F,
DG F,
DL F,
DM F,
GL F,
GM F,
LM F,
CO I,
DE L,
EF M,
AF O,
AH O,
AI O,
AJ O,
FH O,
FI O,
FJ O,
HI O,
HJ O,
IJ O,
Clearly, the results were not merged together as expected.
V. Error Analysis
1. Debugging approach:
Print every stage's input and output key-value pairs to the console and inspect them.
2. Analysis of the debug output:
Debug output of the correct example
// mapper output
mapper-out:i day01.txt,1
mapper-out:love day01.txt,1
mapper-out:beijing day01.txt,1
mapper-out:and day01.txt,1
mapper-out:i day01.txt,1
mapper-out:love day01.txt,1
mapper-out:shanghai day01.txt,1
// combiner input and output
combiner-in:and:day01.txt 1,
combiner-out:and day01.txt:1
combiner-in:beijing:day01.txt 1,
combiner-out:beijing day01.txt:1
combiner-in:i:day01.txt 1,1,
combiner-out:i day01.txt:2
combiner-in:love:day01.txt 1,1,
combiner-out:love day01.txt:2
combiner-in:shanghai:day01.txt 1,
combiner-out:shanghai day01.txt:1
mapper-out:beijing day03.txt,1
mapper-out:is day03.txt,1
mapper-out:the day03.txt,1
mapper-out:capital day03.txt,1
mapper-out:of day03.txt,1
mapper-out:china day03.txt,1
combiner-in:beijing:day03.txt 1,
combiner-out:beijing day03.txt:1
combiner-in:capital:day03.txt 1,
combiner-out:capital day03.txt:1
combiner-in:china:day03.txt 1,
combiner-out:china day03.txt:1
combiner-in:is:day03.txt 1,
combiner-out:is day03.txt:1
combiner-in:of:day03.txt 1,
combiner-out:of day03.txt:1
combiner-in:the:day03.txt 1,
combiner-out:the day03.txt:1
mapper-out:i day02.txt,1
mapper-out:love day02.txt,1
mapper-out:china day02.txt,1
combiner-in:china:day02.txt 1,
combiner-out:china day02.txt:1
combiner-in:i:day02.txt 1,
combiner-out:i day02.txt:1
combiner-in:love:day02.txt 1,
combiner-out:love day02.txt:1
// reducer input and output
reduce-in-out:and ,day01.txt:1
reduce-in-out:beijing ,day01.txt:1,day03.txt:1
reduce-in-out:capital ,day03.txt:1
reduce-in-out:china ,day03.txt:1,day02.txt:1
reduce-in-out:i ,day02.txt:1,day01.txt:2
reduce-in-out:is ,day03.txt:1
reduce-in-out:love ,day01.txt:2,day02.txt:1
reduce-in-out:of ,day03.txt:1
reduce-in-out:shanghai ,day01.txt:1
reduce-in-out:the ,day03.txt:1
// Summary
1. The input consists of three files, 34, 12 and 31 bytes in size; none exceeds the default 128 MB split size, so there are three MapTasks, one per file.
2. Map and Combiner output appears three times, once per file, and only then does Reduce appear; this confirms that the Combiner runs on the node of each MapTask, while the Reducer receives the output of all Mappers globally.
3. In the Combiner's input and output, each input line corresponds to exactly one output line: the business logic is unaffected.
Debug output of the incorrect example
// mapper output
mapper:B A
mapper:C A
mapper:D A
mapper:F A
mapper:E A
mapper:O A
mapper:A B
mapper:C B
mapper:E B
mapper:K B
mapper:F C
mapper:A C
mapper:D C
mapper:I C
mapper:A D
mapper:E D
mapper:F D
mapper:L D
mapper:B E
mapper:C E
mapper:D E
mapper:M E
mapper:L E
mapper:A F
mapper:B F
mapper:C F
mapper:D F
mapper:E F
mapper:O F
mapper:M F
mapper:A G
mapper:C G
mapper:D G
mapper:E G
mapper:F G
mapper:A H
mapper:C H
mapper:D H
mapper:E H
mapper:O H
mapper:A I
mapper:O I
mapper:B J
mapper:O J
mapper:A K
mapper:C K
mapper:D K
mapper:D L
mapper:E L
mapper:F L
mapper:E M
mapper:F M
mapper:G M
mapper:A O
mapper:H O
mapper:I O
mapper:J O
// combiner input and output
combiner-in:A [B, C, D, F, G, H, I, K, O]
combiner-out:BC A
combiner-out:BD A
combiner-out:BF A
combiner-out:BG A
combiner-out:BH A
combiner-out:BI A
combiner-out:BK A
combiner-out:BO A
combiner-out:CD A
combiner-out:CF A
combiner-out:CG A
combiner-out:CH A
combiner-out:CI A
combiner-out:CK A
combiner-out:CO A
combiner-out:DF A
combiner-out:DG A
combiner-out:DH A
combiner-out:DI A
combiner-out:DK A
combiner-out:DO A
combiner-out:FG A
combiner-out:FH A
combiner-out:FI A
combiner-out:FK A
combiner-out:FO A
combiner-out:GH A
combiner-out:GI A
combiner-out:GK A
combiner-out:GO A
combiner-out:HI A
combiner-out:HK A
combiner-out:HO A
combiner-out:IK A
combiner-out:IO A
combiner-out:KO A
combiner-in:B [A, E, F, J]
combiner-out:AE B
combiner-out:AF B
combiner-out:AJ B
combiner-out:EF B
combiner-out:EJ B
combiner-out:FJ B
combiner-in:C [A, B, E, F, G, H, K]
combiner-out:AB C
combiner-out:AE C
combiner-out:AF C
combiner-out:AG C
combiner-out:AH C
combiner-out:AK C
combiner-out:BE C
combiner-out:BF C
combiner-out:BG C
combiner-out:BH C
combiner-out:BK C
combiner-out:EF C
combiner-out:EG C
combiner-out:EH C
combiner-out:EK C
combiner-out:FG C
combiner-out:FH C
combiner-out:FK C
combiner-out:GH C
combiner-out:GK C
combiner-out:HK C
combiner-in:D [A, C, E, F, G, H, K, L]
combiner-out:AC D
combiner-out:AE D
combiner-out:AF D
combiner-out:AG D
combiner-out:AH D
combiner-out:AK D
combiner-out:AL D
combiner-out:CE D
combiner-out:CF D
combiner-out:CG D
combiner-out:CH D
combiner-out:CK D
combiner-out:CL D
combiner-out:EF D
combiner-out:EG D
combiner-out:EH D
combiner-out:EK D
combiner-out:EL D
combiner-out:FG D
combiner-out:FH D
combiner-out:FK D
combiner-out:FL D
combiner-out:GH D
combiner-out:GK D
combiner-out:GL D
combiner-out:HK D
combiner-out:HL D
combiner-out:KL D
combiner-in:E [A, B, D, F, G, H, L, M]
combiner-out:AB E
combiner-out:AD E
combiner-out:AF E
combiner-out:AG E
combiner-out:AH E
combiner-out:AL E
combiner-out:AM E
combiner-out:BD E
combiner-out:BF E
combiner-out:BG E
combiner-out:BH E
combiner-out:BL E
combiner-out:BM E
combiner-out:DF E
combiner-out:DG E
combiner-out:DH E
combiner-out:DL E
combiner-out:DM E
combiner-out:FG E
combiner-out:FH E
combiner-out:FL E
combiner-out:FM E
combiner-out:GH E
combiner-out:GL E
combiner-out:GM E
combiner-out:HL E
combiner-out:HM E
combiner-out:LM E
combiner-in:F [A, C, D, G, L, M]
combiner-out:AC F
combiner-out:AD F
combiner-out:AG F
combiner-out:AL F
combiner-out:AM F
combiner-out:CD F
combiner-out:CG F
combiner-out:CL F
combiner-out:CM F
combiner-out:DG F
combiner-out:DL F
combiner-out:DM F
combiner-out:GL F
combiner-out:GM F
combiner-out:LM F
combiner-in:G [M]
combiner-in:H [O]
combiner-in:I [C, O]
combiner-out:CO I
combiner-in:J [O]
combiner-in:K [B]
combiner-in:L [D, E]
combiner-out:DE L
combiner-in:M [E, F]
combiner-out:EF M
combiner-in:O [A, F, H, I, J]
combiner-out:AF O
combiner-out:AH O
combiner-out:AI O
combiner-out:AJ O
combiner-out:FH O
combiner-out:FI O
combiner-out:FJ O
combiner-out:HI O
combiner-out:HJ O
combiner-out:IJ O
// reducer input and output
reduce-in-out:BC A,
reduce-in-out:BD A,
reduce-in-out:BF A,
reduce-in-out:BG A,
reduce-in-out:BH A,
reduce-in-out:BI A,
reduce-in-out:BK A,
reduce-in-out:BO A,
reduce-in-out:CD A,
reduce-in-out:CF A,
reduce-in-out:CG A,
reduce-in-out:CH A,
reduce-in-out:CI A,
reduce-in-out:CK A,
reduce-in-out:CO A,
reduce-in-out:DF A,
reduce-in-out:DG A,
reduce-in-out:DH A,
reduce-in-out:DI A,
reduce-in-out:DK A,
reduce-in-out:DO A,
reduce-in-out:FG A,
reduce-in-out:FH A,
reduce-in-out:FI A,
reduce-in-out:FK A,
reduce-in-out:FO A,
reduce-in-out:GH A,
reduce-in-out:GI A,
reduce-in-out:GK A,
reduce-in-out:GO A,
reduce-in-out:HI A,
reduce-in-out:HK A,
reduce-in-out:HO A,
reduce-in-out:IK A,
reduce-in-out:IO A,
reduce-in-out:KO A,
reduce-in-out:AE B,
reduce-in-out:AF B,
reduce-in-out:AJ B,
reduce-in-out:EF B,
reduce-in-out:EJ B,
reduce-in-out:FJ B,
reduce-in-out:AB C,
reduce-in-out:AE C,
reduce-in-out:AF C,
reduce-in-out:AG C,
reduce-in-out:AH C,
reduce-in-out:AK C,
reduce-in-out:BE C,
reduce-in-out:BF C,
reduce-in-out:BG C,
reduce-in-out:BH C,
reduce-in-out:BK C,
reduce-in-out:EF C,
reduce-in-out:EG C,
reduce-in-out:EH C,
reduce-in-out:EK C,
reduce-in-out:FG C,
reduce-in-out:FH C,
reduce-in-out:FK C,
reduce-in-out:GH C,
reduce-in-out:GK C,
reduce-in-out:HK C,
reduce-in-out:AC D,
reduce-in-out:AE D,
reduce-in-out:AF D,
reduce-in-out:AG D,
reduce-in-out:AH D,
reduce-in-out:AK D,
reduce-in-out:AL D,
reduce-in-out:CE D,
reduce-in-out:CF D,
reduce-in-out:CG D,
reduce-in-out:CH D,
reduce-in-out:CK D,
reduce-in-out:CL D,
reduce-in-out:EF D,
reduce-in-out:EG D,
reduce-in-out:EH D,
reduce-in-out:EK D,
reduce-in-out:EL D,
reduce-in-out:FG D,
reduce-in-out:FH D,
reduce-in-out:FK D,
reduce-in-out:FL D,
reduce-in-out:GH D,
reduce-in-out:GK D,
reduce-in-out:GL D,
reduce-in-out:HK D,
reduce-in-out:HL D,
reduce-in-out:KL D,
reduce-in-out:AB E,
reduce-in-out:AD E,
reduce-in-out:AF E,
reduce-in-out:AG E,
reduce-in-out:AH E,
reduce-in-out:AL E,
reduce-in-out:AM E,
reduce-in-out:BD E,
reduce-in-out:BF E,
reduce-in-out:BG E,
reduce-in-out:BH E,
reduce-in-out:BL E,
reduce-in-out:BM E,
reduce-in-out:DF E,
reduce-in-out:DG E,
reduce-in-out:DH E,
reduce-in-out:DL E,
reduce-in-out:DM E,
reduce-in-out:FG E,
reduce-in-out:FH E,
reduce-in-out:FL E,
reduce-in-out:FM E,
reduce-in-out:GH E,
reduce-in-out:GL E,
reduce-in-out:GM E,
reduce-in-out:HL E,
reduce-in-out:HM E,
reduce-in-out:LM E,
reduce-in-out:AC F,
reduce-in-out:AD F,
reduce-in-out:AG F,
reduce-in-out:AL F,
reduce-in-out:AM F,
reduce-in-out:CD F,
reduce-in-out:CG F,
reduce-in-out:CL F,
reduce-in-out:CM F,
reduce-in-out:DG F,
reduce-in-out:DL F,
reduce-in-out:DM F,
reduce-in-out:GL F,
reduce-in-out:GM F,
reduce-in-out:LM F,
reduce-in-out:CO I,
reduce-in-out:DE L,
reduce-in-out:EF M,
reduce-in-out:AF O,
reduce-in-out:AH O,
reduce-in-out:AI O,
reduce-in-out:AJ O,
reduce-in-out:FH O,
reduce-in-out:FI O,
reduce-in-out:FJ O,
reduce-in-out:HI O,
reduce-in-out:HJ O,
reduce-in-out:IJ O,
// Error analysis
1. There is only one input file, about 154 bytes in size, so there is only one MapTask.
2. Accordingly, mapper, combiner and reducer each appear exactly once, which is normal.
3. In the combiner stage, however, compare against the correct example: here one input produces many outputs, and the combiner emits pair keys ("BC", "CD", ...) of an entirely different kind than the map output keys ("B", "C", ...). This is what "affecting the business logic" means here. (Do not read this rigidly as "a combiner's inputs and outputs must correspond one-to-one"; the essential rule is that the final result must be identical whether or not the combiner runs, which is what the two contrasting cases are meant to show.) Concretely, the combiner is applied to map output that has already been sorted and partitioned, and the framework assumes the combiner's output is still in sorted key order. Because this combiner invents brand-new keys, that assumption is broken: the reduce side groups equal keys only when they sit next to each other in sorted order, so identical pair keys produced by different combiner calls are never merged. A combiner cannot be forced to do this job.
// Solution
This work should be done with two MapReduce jobs instead, as sketched below.
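A sketch of how the split into two jobs could look (my reconstruction, not code from the original): job one keeps mapperDemo and runs the pair-building logic of combinerDemo as its reducer, so its output file contains one line per pair and friend, such as "BC<tab>A"; job two then needs only a trivial mapper that re-parses those lines, after which the existing reduceDemo aggregates every pair's common friends in a real, global reduce phase.

package com.atSchool.friend;
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
/**
 * Hypothetical mapper for the second job: re-reads the first job's output
 * (one "pair<tab>friend" line per record, as written by the default
 * TextOutputFormat) and re-emits <pair, friend> so that reduceDemo can
 * group all common friends of each pair.
 */
public class pairMapperDemo extends Mapper<LongWritable, Text, Text, Text> {
	private Text outKey = new Text();
	private Text outValue = new Text();
	@Override
	protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, Text>.Context context)
			throws IOException, InterruptedException {
		String[] split = value.toString().split("\t");
		outKey.set(split[0]); // the pair, e.g. "BC"
		outValue.set(split[1]); // one common friend, e.g. "A"
		context.write(outKey, outValue);
	}
}

Because the pair keys now pass through a full shuffle and sort between the two jobs, identical pairs are guaranteed to end up in the same reduce call, which is exactly the guarantee the combiner could not provide.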
// Verification
If this is still unclear, here is a small verification. Setting the business logic aside entirely, if we make the combiner's inputs and outputs correspond one-to-one and give every output record the same key (the commented-out context.write in combinerDemo above), the reducer once again aggregates all the values under that key:
// Run result
a small verification [B, C, D, F, G, H, I, K, O],[A, E, F, J],[A, B, E, F, G, H, K],[A, C, E, F, G, H, K, L],[A, B, D, F, G, H, L, M],[A, C, D, G, L, M],[M],[O],[C, O],[O],[B],[D, E],[E, F],[A, F, H, I, J],
That is the problem I ran into while using the combiner, and my understanding of it after tracking it down; there may still be mistakes in it.
Source: https://blog.csdn.net/qq_45634592/article/details/115642018