1. MapReduce Principles

This chapter builds on the Hadoop serialization covered earlier; the details here can be hard to grasp at first.

(Figures 1–3)

2. InputFormat: Data Input

(Figure 4)

2.1. FileInputFormat Split Logic (Source Code)

(Figures 5–7)
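
The figures above step through the FileInputFormat split source. The decision it makes boils down to the following sketch (a simplification, not the full Hadoop source; the two configuration keys and the 1.1 slack factor are from stock Hadoop):

```java
// Simplified sketch of FileInputFormat's split sizing (not the full Hadoop source).
// minSize corresponds to mapreduce.input.fileinputformat.split.minsize,
// maxSize to mapreduce.input.fileinputformat.split.maxsize, blockSize to the HDFS block size.
public class SplitSizeSketch {

    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;          // 128 MB HDFS block
        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);

        // A file is cut into splitSize chunks; the final chunk is only split off separately
        // if the remainder exceeds 1.1 x splitSize (SPLIT_SLOP), so no tiny trailing split is created.
        long bytesRemaining = 300L * 1024 * 1024;     // e.g. a 300 MB file
        int splits = 0;
        while ((double) bytesRemaining / splitSize > 1.1) {
            splits++;
            bytesRemaining -= splitSize;
        }
        if (bytesRemaining != 0) splits++;
        System.out.println(splits + " splits");        // 300 MB with 128 MB splits -> 3 splits
    }
}
```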

2.2. CombineTextInputFormat Split Mechanism

The framework's default TextInputFormat plans splits per file: no matter how small a file is, it becomes a separate split and is handed to its own MapTask. With a large number of small files this creates a large number of MapTasks and extremely poor efficiency.

(Figure 8)
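
To pack many small files into fewer splits, the driver switches the input format and caps the size of the virtual splits. A minimal sketch, to be called from a driver after the Job is created (the 4 MB limit is just an illustrative value):

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;

public class CombineInputConfig {
    // Call this from the driver after Job.getInstance(...).
    static void useCombineInput(Job job) {
        // Replace the default TextInputFormat with CombineTextInputFormat.
        job.setInputFormatClass(CombineTextInputFormat.class);
        // Upper bound for a virtual split: small files are packed together up to this size.
        CombineTextInputFormat.setMaxInputSplitSize(job, 4 * 1024 * 1024); // 4 MB, illustrative
    }
}
```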

2.3. Custom InputFormat

Driver

```java
package com.inputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

import java.io.IOException;

public class MyInputDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(MyInputDriver.class);

        // No Mapper/Reducer is set, so the default identity Mapper/Reducer pass the records through
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        job.setInputFormatClass(MyInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path("D:/input"));
        FileOutputFormat.setOutputPath(job, new Path("D:/output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```

InputFormat

```java
package com.inputformat;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

public class MyInputFormat extends FileInputFormat<Text, BytesWritable> {

    /**
     * Return a custom RecordReader.
     */
    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        return new MyRecordReader();
    }
}
```

RecordReader

```java
package com.inputformat;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

// Turns one whole file into a single key/value pair (key = file path, value = file contents)
public class MyRecordReader extends RecordReader<Text, BytesWritable> {

    // Whether the file has been read; defaults to false
    private boolean isRead;
    // The key/value pair
    private Text key = new Text();
    private BytesWritable value = new BytesWritable();
    private FSDataInputStream inputStream;
    private FileSplit fs;

    /**
     * Initialization: open the input stream for this split.
     */
    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        fs = (FileSplit) inputSplit;                                                   // cast to the concrete split type
        FileSystem fileSystem = FileSystem.get(taskAttemptContext.getConfiguration()); // get the FileSystem from the job configuration
        inputStream = fileSystem.open(fs.getPath());                                   // open the file backing this split
    }

    /**
     * Advance to the next key/value pair; returns whether one exists.
     */
    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!isRead) {
            // Fill the key with the file path
            key.set(fs.getPath().toString());
            // Fill the value with the entire file contents
            byte[] buffer = new byte[(int) fs.getLength()];
            IOUtils.readFully(inputStream, buffer, 0, buffer.length);
            value.set(buffer, 0, buffer.length);
            // Mark the file as read: the whole file is a single record
            isRead = true;
            return true;
        }
        return false;
    }

    // Current key
    @Override
    public Text getCurrentKey() throws IOException, InterruptedException {
        return key;
    }

    // Current value
    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    // Report progress: 1 once the single record has been read, 0 before
    @Override
    public float getProgress() throws IOException, InterruptedException {
        return isRead ? 1 : 0;
    }

    // Close resources
    @Override
    public void close() throws IOException {
        IOUtils.closeStream(inputStream);
    }
}
```

3. Shuffle: Organizing the Data

The MapReduce framework guarantees that the input to every Reducer is sorted by key. The process of sorting the map output and transferring it to the reducers is called the shuffle. Each map task has a circular in-memory buffer, 100 MB by default, and the map first writes its output into this buffer. When the buffer reaches a threshold (by default 80% of its capacity), a background thread spills the contents to disk; this is called a "spill". While a spill is in progress the map can keep writing into the buffer, but if the buffer fills up completely the map has to wait.

Everything that happens after the map method and before the reduce method is called Shuffle.

(Figure 9)

The shuffle takes the unordered key/value pairs produced by the map, partitions, sorts, and merges them, and hands them to the Reduce side.

During the shuffle the data sits in the in-memory buffer; whenever the buffer fills past the threshold it is partitioned and sorted, then merge-sorted and written to disk, which frees the buffer for new data.

Sorting once is more efficient than sorting many times, because efficiency drops with every extra merge pass; for very large data sets, however, there is no choice but to trade time for space.
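
The buffer size, spill threshold, and merge factor mentioned above are configurable. A minimal sketch using the standard Hadoop property names; the values shown are the defaults:

```java
import org.apache.hadoop.conf.Configuration;

public class ShuffleTuning {
    static Configuration shuffleDefaults() {
        Configuration conf = new Configuration();
        conf.setInt("mapreduce.task.io.sort.mb", 100);            // size of the circular sort buffer, in MB
        conf.setFloat("mapreduce.map.sort.spill.percent", 0.80f); // spilling starts when the buffer is 80% full
        conf.setInt("mapreduce.task.io.sort.factor", 10);         // how many spill files are merged at once
        return conf;
    }
}
```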

3.1. Partition

Entity class

```java
package com.flow;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class FlowBean implements Writable {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.downFlow = downFlow;
        this.upFlow = upFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    /**
     * Serialization: write the object's fields to wherever the framework directs.
     *
     * @param dataOutput the output the data goes into
     */
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    /**
     * Deserialization: read the fields back and fill the object.
     */
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Read in the same order the fields were written
        this.upFlow = dataInput.readLong();
        this.downFlow = dataInput.readLong();
        this.sumFlow = dataInput.readLong();
    }
}
```

Partitioner class

```java
package com.partitioner;

import com.flow.FlowBean;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class MyPartitioner extends Partitioner<Text, FlowBean> {

    /**
     * Return the partition number for each key/value pair.
     *
     * @param text     the phone number
     * @param flowBean the traffic record
     */
    @Override
    public int getPartition(Text text, FlowBean flowBean, int numPartitions) {
        switch (text.toString().substring(0, 3)) { // decide by the first three digits of the phone number
            case "136":
                return 0;
            case "137":
                return 1;
            case "138":
                return 2;
            case "139":
                return 3;
            default:
                return 4;
        }
    }
}
```

Driver class

```java
package com.partitioner;

import com.flow.FlowBean;
import com.flow.FlowMapper;
import com.flow.FlowReducer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class NewFlowDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(NewFlowDriver.class);

        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        job.setNumReduceTasks(5);                      // number of partitions / reduce tasks
        job.setPartitionerClass(MyPartitioner.class);  // the custom partitioner

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        FileInputFormat.setInputPaths(job, new Path("file:///d:/input"));
        FileOutputFormat.setOutputPath(job, new Path("file:///d:/output"));

        boolean completion = job.waitForCompletion(true);
        System.exit(completion ? 0 : 1);
    }
}
```

3.2. WritableComparable Sorting

WritableComparable is the interface MapReduce uses by default to sort keys; WritableComparator is the comparator class built on top of it.

Both MapTask and ReduceTask sort their data by key. This is default Hadoop behavior: the default order is lexicographic, and the in-memory sort is a quicksort.

To customize the ordering, have the entity class implement the WritableComparable interface and provide a compareTo method.

Entity class (implements WritableComparable)

```java
package com.compare;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Implements WritableComparable so the bean can be used as a sortable key
public class FlowBean implements WritableComparable<FlowBean> {

    private long upFlow;
    private long downFlow;
    private long sumFlow;

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }

    public void set(long upFlow, long downFlow) {
        this.downFlow = downFlow;
        this.upFlow = upFlow;
        this.sumFlow = upFlow + downFlow;
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    /**
     * Serialization: write the fields out.
     */
    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeLong(upFlow);
        dataOutput.writeLong(downFlow);
        dataOutput.writeLong(sumFlow);
    }

    /**
     * Deserialization: read the fields back in the same order they were written.
     */
    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.upFlow = dataInput.readLong();
        this.downFlow = dataInput.readLong();
        this.sumFlow = dataInput.readLong();
    }

    // Comparator: sort by total flow, descending.
    // Equivalent manual version:
    //   if (this.sumFlow < o.sumFlow) return 1;          // descending
    //   else if (this.sumFlow == o.sumFlow) return 0;
    //   return -1;
    @Override
    public int compareTo(FlowBean o) {
        return Long.compare(o.sumFlow, this.sumFlow);
    }
}
```

Mapper class

```java
package com.compare;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class CompareMapper extends Mapper<LongWritable, Text, FlowBean, Text> {

    private Text phone = new Text();
    private FlowBean flow = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, FlowBean, Text>.Context context) throws IOException, InterruptedException {
        // One input line
        String line = value.toString();
        // Split it
        String[] fields = line.split("\t");
        // Fill key and value: the FlowBean becomes the key so the framework sorts by it
        phone.set(fields[0]);
        flow.setUpFlow(Long.parseLong(fields[1]));
        flow.setDownFlow(Long.parseLong(fields[2]));
        flow.setSumFlow(Long.parseLong(fields[3]));
        // Write to the context
        context.write(flow, phone);
    }
}
```

Reducer class

```java
package com.compare;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

// Input: flow as key, phone number as value. Output: phone number as key, flow as value.
public class CompareReducer extends Reducer<FlowBean, Text, Text, FlowBean> {

    /**
     * The data arriving at the Reducer is already sorted; we only need to
     * swap key and value before writing them out.
     */
    @Override
    protected void reduce(FlowBean key, Iterable<Text> values, Reducer<FlowBean, Text, Text, FlowBean>.Context context) throws IOException, InterruptedException {
        for (Text value : values) {
            context.write(value, key);
        }
    }
}
```

Driver class

```java
package com.compare;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class CompareDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(CompareDriver.class);

        job.setMapperClass(CompareMapper.class);
        job.setReducerClass(CompareReducer.class);
        job.setMapOutputKeyClass(FlowBean.class);
        job.setMapOutputValueClass(Text.class);

        // job.setSortComparatorClass(WritableComparator.class);     // the default sort comparator
        // job.setGroupingComparatorClass(WritableComparator.class); // grouping also uses a WritableComparator

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        FileInputFormat.setInputPaths(job, new Path("file:///d:/output"));
        FileOutputFormat.setOutputPath(job, new Path("file:///d:/output2"));

        boolean completion = job.waitForCompletion(true);
        System.exit(completion ? 0 : 1);
    }
}
```

3.3. RawComparator Sorting

WritableComparator already implements the RawComparator interface for us, so instead of implementing RawComparator directly we can simply extend WritableComparator.

```java
package com.compare;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class FlowComparator extends WritableComparator {

    protected FlowComparator() {
        // Register the key class; true lets the parent create instances for deserialization
        super(FlowBean.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        FlowBean fa = (FlowBean) a;
        FlowBean fb = (FlowBean) b;
        return Long.compare(fb.getSumFlow(), fa.getSumFlow()); // total flow, descending
    }
}
```

In the driver, set the sort comparator to the custom class:

```java
package com.compare;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class CompareDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(CompareDriver.class);

        job.setMapperClass(CompareMapper.class);
        job.setReducerClass(CompareReducer.class);
        job.setMapOutputKeyClass(FlowBean.class);
        job.setMapOutputValueClass(Text.class);

        job.setSortComparatorClass(FlowComparator.class); // use the custom comparator for sorting

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        FileInputFormat.setInputPaths(job, new Path("file:///d:/output"));
        FileOutputFormat.setOutputPath(job, new Path("file:///d:/output2"));

        boolean completion = job.waitForCompletion(true);
        System.exit(completion ? 0 : 1);
    }
}
```

The Mapper and the entity class stay the same as before, but because the job now sets a custom sort comparator, the entity class's compareTo method is no longer used for sorting.

3.4. Combiner

1. A Combiner is a component of an MR program in addition to the Mapper and the Reducer.

2. The Combiner's parent class is Reducer.

3. A Combiner differs from a Reducer in where it runs:
   the Combiner runs on the node of each individual MapTask;
   the Reducer receives the output of all Mappers globally.

4. The point of a Combiner is to aggregate the output of each MapTask locally, reducing the amount of data sent over the network.

5. A Combiner may only be used when it does not affect the final business result, and its output KV types must line up with the Reducer's input KV types.

Summary: a Combiner merges and groups data early, on the MapTask side, so that less duplicated data has to be partitioned, sorted, and merged later. The precondition is that the early merge must not change the final result; otherwise this space-for-time trade-off is not acceptable.

Usage: enable it in the driver with setCombinerClass; a Reducer class can be reused as the Combiner only if its input and output KV types are identical, and enabling a Combiner does not change how the Reducer itself is used.

```java
job.setCombinerClass(CompareReducer.class); // pre-merge and group on the map side to cut processing time
```
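
Note that in the sorting job above, CompareReducer swaps key and value, so its output types no longer match the map output and it could not actually serve as a combiner there. For the flow-summation job from section 3.1 (Text phone number as key, FlowBean as value), a dedicated combiner would look like the following sketch; FlowCombiner is a hypothetical class name and the FlowMapper/FlowReducer it sits between are the ones assumed in that section:

```java
package com.flow;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

// Hypothetical combiner for the flow-summation job: input and output KV types are both
// (Text, FlowBean), identical to the map output, so local pre-aggregation is safe.
public class FlowCombiner extends Reducer<Text, FlowBean, Text, FlowBean> {

    private final FlowBean sum = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        long up = 0;
        long down = 0;
        for (FlowBean value : values) {
            up += value.getUpFlow();
            down += value.getDownFlow();
        }
        sum.set(up, down);        // set() also recomputes sumFlow
        context.write(key, sum);  // partial totals for this MapTask only
    }
}
```

It would be enabled in that job's driver with `job.setCombinerClass(FlowCombiner.class);`.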

3.5. GroupingComparator

A GroupingComparator is used for grouping in the reduce phase. Within one group of equal keys, reduce takes the first key as the key and iterates over all of the values. When the reduce key is a custom bean, we may want keys to be treated as equal as long as certain fields match; in that case we define a custom GroupingComparator to "trick" reduce into grouping them together.

1. Entity class: implement the WritableComparable interface and provide compareTo.

```java
package com.grouping;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class OrderBean implements WritableComparable<OrderBean> {

    private String orderId;
    private String productId;
    private double price;

    @Override
    public String toString() {
        return orderId + "\t" + productId + "\t" + price;
    }

    public String getOrderId() {
        return orderId;
    }

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }

    public String getProductId() {
        return productId;
    }

    public void setProductId(String productId) {
        this.productId = productId;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }

    // Sort by order id first; within the same order, by price descending
    @Override
    public int compareTo(OrderBean o) {
        int compare = this.orderId.compareTo(o.orderId); // compare order ids
        if (compare != 0) {
            return compare;                              // different orders: keep that ordering
        } else {
            return Double.compare(o.price, this.price);  // same order: price descending
        }
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeUTF(orderId);
        dataOutput.writeUTF(productId);
        dataOutput.writeDouble(price);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Read back in the same order the fields were written
        this.orderId = dataInput.readUTF();
        this.productId = dataInput.readUTF();
        this.price = dataInput.readDouble();
    }
}
```

2. Mapper: wrap each input line into the entity class.

```java
package com.grouping;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

// Fills an OrderBean from each input line
public class OrderMapper extends Mapper<LongWritable, Text, OrderBean, NullWritable> {

    private OrderBean order = new OrderBean();

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        order.setOrderId(split[0]);
        order.setProductId(split[1]);
        order.setPrice(Double.parseDouble(split[2]));
        // The whole OrderBean is the key
        context.write(order, NullWritable.get());
    }
}
```

3. Comparator: extend WritableComparator, overriding compare and the no-argument constructor.

```java
package com.grouping;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Groups records by order id: beans with the same orderId go into the same reduce call
public class OrderComparator extends WritableComparator {

    protected OrderComparator() {
        super(OrderBean.class, true);
    }

    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        OrderBean oa = (OrderBean) a;
        OrderBean ob = (OrderBean) b;
        return oa.getOrderId().compareTo(ob.getOrderId());
    }
}
```

4. Reducer: the key is now the entity class and the value is NullWritable.

```java
package com.grouping;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.util.Iterator;

// Emits the highest prices of each order
public class OrderReducer extends Reducer<OrderBean, NullWritable, OrderBean, NullWritable> {

    @Override
    protected void reduce(OrderBean key, Iterable<NullWritable> values, Reducer<OrderBean, NullWritable, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        Iterator<NullWritable> iterator = values.iterator();
        for (int i = 0; i < 2; i++) {      // output the two highest prices in the current order group
            if (iterator.hasNext()) {
                iterator.next();           // advancing the iterator also refreshes the fields of key
                context.write(key, NullWritable.get());
            }
        }
    }
}
```

5. Driver: enable grouping with setGroupingComparatorClass.

```java
package com.grouping;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OrderDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(OrderDriver.class);

        job.setMapperClass(OrderMapper.class);
        job.setReducerClass(OrderReducer.class);
        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setGroupingComparatorClass(OrderComparator.class); // the grouping comparator

        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.setInputPaths(job, new Path("file:///d:/input"));
        FileOutputFormat.setOutputPath(job, new Path("file:///d:/output"));

        boolean completion = job.waitForCompletion(true);
        System.exit(completion ? 0 : 1);
    }
}
```

The Reducer above can read the two highest prices of each order group because of how the shuffle deserializes data: the framework creates the key object once and, every time the value iterator advances, refills that same object from the next serialized key/value pair. Creating a new entity object for every record written to disk would be far less efficient; with object reuse only one instance is ever needed.

4. OutputFormat: Data Output

![image-20211017083334913](https://cdn.jsdelivr.net/gh/Iekrwh/images/md-images/image-20211017083334913.png)

1. RecordWriter

```java
package com.outputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Writes each line to one of two files, depending on whether it contains "atguigu"
public class MyRecordWriter extends RecordWriter<LongWritable, Text> {

    FSDataOutputStream atguigu = null;
    FSDataOutputStream other = null;

    public MyRecordWriter(TaskAttemptContext job) throws IOException {
        Configuration configuration = job.getConfiguration();           // configuration from the job
        FileSystem fileSystem = FileSystem.get(configuration);          // FileSystem from the configuration
        String outdir = configuration.get(FileOutputFormat.OUTDIR);     // output directory configured for the job
        atguigu = fileSystem.create(new Path(outdir + "/atguigu.log")); // build the two output files
        other = fileSystem.create(new Path(outdir + "/other.log"));
    }

    /**
     * Receive a key/value pair and route it to one of the files based on the value.
     *
     * @param key   the byte offset of the line that was read
     * @param value the content of that line
     */
    @Override
    public void write(LongWritable key, Text value) throws IOException, InterruptedException {
        String line = value.toString() + "\n";
        if (line.contains("atguigu")) {
            // lines containing "atguigu" go into atguigu.log
            atguigu.write(line.getBytes(StandardCharsets.UTF_8));
        } else {
            // everything else goes into other.log
            other.write(line.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Close resources
    @Override
    public void close(TaskAttemptContext context) throws IOException, InterruptedException {
        IOUtils.closeStream(atguigu);
        IOUtils.closeStream(other);
    }
}
```

2. OutputFormat class

```java
package com.outputformat;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

// With the default (identity) mapping, the output key/value types here are LongWritable, Text
public class MyOutputFormat extends FileOutputFormat<LongWritable, Text> {

    // Return the RecordWriter that actually handles the data
    @Override
    public RecordWriter<LongWritable, Text> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
        return new MyRecordWriter(job);
    }
}
```

3. Driver

```java
package com.outputformat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OutputDrive {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);
        job.setJarByClass(OutputDrive.class);

        job.setOutputFormatClass(MyOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path("d:/input"));
        // The configuration files must be set up correctly for this to run
        FileOutputFormat.setOutputPath(job, new Path("d:/output"));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
```

5. Reduce Join

A reduce-side join is the simplest way to join two data sets. The idea: in the map phase, the map function reads both files, File1 and File2, and tags every record with its source (for example tag=0 for File1 and tag=1 for File2) — the map phase's main job is labelling records from the different files. In the reduce phase, the reduce function receives, for each key, the value list coming from both File1 and File2 and joins them (a Cartesian product per key); the actual join happens in reduce.

- Map side: tag key/value pairs from the different tables or files so their origin can be told apart, use the join field as the key, and output the rest of the record plus the tag as the value.
- Reduce side: the grouping by join key has already been done, so within each group the records only need to be separated by source (using the tags added in the map phase) and then merged.
- Drawback: a large amount of data is transferred during the shuffle between the map and reduce sides, so efficiency is low.

(Figure 10)

1. Create the entity class and define its ordering.

```java
package com.reducejoin;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class OrderBean implements WritableComparable<OrderBean> {

    private String id;
    private String pid;
    private int amount;
    private String pname;

    @Override
    public String toString() {
        return id + "\t" + pname + "\t" + amount;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getPid() {
        return pid;
    }

    public void setPid(String pid) {
        this.pid = pid;
    }

    public int getAmount() {
        return amount;
    }

    public void setAmount(int amount) {
        this.amount = amount;
    }

    public String getPname() {
        return pname;
    }

    public void setPname(String pname) {
        this.pname = pname;
    }

    @Override
    public int compareTo(OrderBean o) {
        // Group by pid; within a group sort by pname descending,
        // so the record carrying the product name comes first
        int i = this.pid.compareTo(o.pid);
        if (i != 0) {
            return i;
        } else {
            return o.pname.compareTo(this.pname);
        }
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeUTF(id);
        dataOutput.writeUTF(pid);
        dataOutput.writeInt(amount);
        dataOutput.writeUTF(pname);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.id = dataInput.readUTF();
        this.pid = dataInput.readUTF();
        this.amount = dataInput.readInt();
        this.pname = dataInput.readUTF();
    }
}
```

2. Mapper: fill the entity class differently depending on which file the record comes from.

```java
package com.reducejoin;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

import java.io.IOException;

public class OrderMapper extends Mapper<LongWritable, Text, OrderBean, NullWritable> {

    private OrderBean order = new OrderBean();
    private String filename; // name of the file currently being read

    @Override
    protected void setup(Mapper<LongWritable, Text, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        // Get the name of the input file for this split
        FileSplit fs = (FileSplit) context.getInputSplit();
        filename = fs.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        // Fill the bean according to the data source
        if ("order.txt".equals(filename)) {
            // order record
            order.setId(split[0]);
            order.setPid(split[1]);
            order.setAmount(Integer.parseInt(split[2]));
            order.setPname(""); // must not be null (writeUTF cannot serialize null)
        } else {
            // product record (pd.txt)
            order.setPid(split[0]);
            order.setPname(split[1]);
            order.setAmount(0);
            order.setId("");    // must not be null
        }
        context.write(order, NullWritable.get());
    }
}
```

3. Comparator: group by pid.

```java
package com.reducejoin;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Grouping comparator: group OrderBeans by pid
public class OrderComparator extends WritableComparator {

    protected OrderComparator() {
        super(OrderBean.class, true);
    }

    // Compare a and b by pid only, so records of the same product land in the same group
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        OrderBean oa = (OrderBean) a;
        OrderBean ob = (OrderBean) b;
        return oa.getPid().compareTo(ob.getPid());
    }
}
```

4. Reducer: merge the tagged records, replacing each pid with the matching pname.

```java
package com.reducejoin;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
import java.util.Iterator;

// Replaces each pid with the corresponding product name (pname)
public class OrderReducer extends Reducer<OrderBean, NullWritable, OrderBean, NullWritable> {

    @Override
    protected void reduce(OrderBean key, Iterable<NullWritable> values, Reducer<OrderBean, NullWritable, OrderBean, NullWritable>.Context context) throws IOException, InterruptedException {
        // A simple for-each scan looking for a non-empty pname would not work,
        // because the value iterator can only be traversed once.
        // Thanks to the secondary sort (pname descending), the first record in every pid group
        // is the pd.txt record, which is the only one that carries the product name.
        Iterator<NullWritable> iterator = values.iterator();
        iterator.next();               // advance to the first record (the product record)
        String pName = key.getPname(); // remember the product name
        while (iterator.hasNext()) {
            iterator.next();           // each step refills key with the next order record
            key.setPname(pName);       // replace the empty pname with the product name
            context.write(key, NullWritable.get());
        }
    }
}
```

5. Driver

```java
package com.reducejoin;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class OrderDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(OrderDriver.class);

        job.setMapperClass(OrderMapper.class);
        job.setReducerClass(OrderReducer.class);
        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(NullWritable.class);
        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(NullWritable.class);

        job.setGroupingComparatorClass(OrderComparator.class); // the grouping comparator

        FileInputFormat.setInputPaths(job, new Path("d:/input"));
        FileOutputFormat.setOutputPath(job, new Path("d:/output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```

6. Map Join

Map Join suits the case where one table is very small and the other is very large. The small table is cached on the Map side and the join logic is handled there, which puts more work on the Map side, relieves the Reduce side of data pressure, and minimizes data skew.

With Map Join only a driver and a map class are needed; no reduce class is written, because there is no reduce phase — everything is finished in the map phase.

1. Driver: enable the distributed cache and pass in the path of the small file.

```java
package com.mapjoin;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.net.URI;

public class MJDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(MJDriver.class);
        job.setMapperClass(MJMapper.class);

        job.setNumReduceTasks(0); // a map-side join needs no reduce phase, so use 0 ReduceTasks

        // Load the small file into the distributed cache (addCacheFile can be called for multiple files)
        job.addCacheFile(URI.create("file:///d:/input/pd.txt"));

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        FileInputFormat.setInputPaths(job, new Path("D:/input/order.txt"));
        FileOutputFormat.setOutputPath(job, new Path("d:/output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```

2. Mapper: in setup, read the distributed-cache file into a Map; in map, use that Map to join and rewrite the records.

```java
package com.mapjoin;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class MJMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private Map<String, String> pMap = new HashMap<>();
    private Text k = new Text();

    @Override
    protected void setup(Mapper<LongWritable, Text, Text, NullWritable>.Context context) throws IOException, InterruptedException {
        // Read pd.txt into pMap
        URI[] cacheFiles = context.getCacheFiles();                        // paths of the distributed-cache files
        FileSystem fileSystem = FileSystem.get(context.getConfiguration());
        FSDataInputStream pd = fileSystem.open(new Path(cacheFiles[0]));   // open the pd file

        // Read the file line by line into pMap
        BufferedReader br = new BufferedReader(new InputStreamReader(pd)); // wrap the byte stream in a character stream
        String line;
        while (StringUtils.isNotEmpty(line = br.readLine())) {
            String[] split = line.split("\t");
            pMap.put(split[0], split[1]);                                  // pid -> pname
        }
        IOUtils.closeStream(br);
    }

    // Process the order.txt records
    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, NullWritable>.Context context) throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        // Look up the pname by pid and substitute it into the output line
        k.set(split[0] + "\t" + pMap.get(split[1]) + "\t" + split[2]);
        context.write(k, NullWritable.get());
    }
}
```

![image-20211017103235651](https://cdn.jsdelivr.net/gh/Iekrwh/images/md-images/image-20211017103235651.png)

1. Map Join is more efficient than Reduce Join.
2. Because Map Join caches the small table in memory up front, it cannot be used when that data is too large to fit.

7. Data Cleaning (ETL) and Counters

Before running the core business MapReduce job, the data usually has to be cleaned first to remove records that do not meet the requirements. Cleaning normally only needs a Mapper; no Reducer is required.

1. Create an enum to make building the counters convenient.

```java
package com.etl;

public enum ETL {
    PASS, FAIL
}
```

2. Mapper: build the Counter objects in setup; in map, clean the data and count how many records pass.

```java
package com.etl;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class ETLMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    private Counter pass;
    private Counter fail;

    @Override
    protected void setup(Mapper<LongWritable, Text, Text, NullWritable>.Context context) throws IOException, InterruptedException {
        // pass = context.getCounter("ETL", "PASS"); // counters can also be built from group/name strings
        // fail = context.getCounter("ETL", "FAIL");
        pass = context.getCounter(ETL.PASS);         // build the counters from the enum
        fail = context.getCounter(ETL.FAIL);
    }

    // Decide whether a log line is kept or cleaned out
    @Override
    protected void map(LongWritable key, Text value, Mapper<LongWritable, Text, Text, NullWritable>.Context context) throws IOException, InterruptedException {
        String[] splits = value.toString().split(" ");
        if (splits.length > 11) {
            context.write(value, NullWritable.get());
            pass.increment(1); // one more record that passed the check
        } else {
            fail.increment(1); // one more record filtered out; it is simply not written
        }
    }
}
```

3. Driver

```java
package com.etl;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class ETLDriver {
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(ETLDriver.class);
        job.setMapperClass(ETLMapper.class);

        job.setNumReduceTasks(0); // map-only job

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        FileInputFormat.setInputPaths(job, new Path("d:/input"));
        FileOutputFormat.setOutputPath(job, new Path("d:/output"));

        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```

(Figure 11)

PASS is the number of records that met the condition and were written out.

FAIL is the number of records that did not meet the condition and were filtered out.

8. Summary

1. Input interface: InputFormat

    - The default implementation is TextInputFormat.
    - TextInputFormat reads one line of text at a time and returns the line's starting byte offset as the key and the line's content as the value.
    - KeyValueTextInputFormat treats every line as one record, split into key and value by a separator; the default separator is \t.
    - NLineInputFormat builds splits from a fixed number of lines N.
    - CombineTextInputFormat can combine many small files into one split, improving processing efficiency.
    - A custom InputFormat can also be written.

2. Processing interface: Mapper

    - Implement map(), setup(), and cleanup() as the business logic requires.

3. Partitioner

    - The default implementation is HashPartitioner; it derives a partition number from the key's hash code and the number of reducers:
      (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks
      (see the sketch after this list)
    - A custom partitioner can be written.

4. Comparable sorting

    - When a custom object is used as the output key, it must implement the WritableComparable interface and override compareTo().
    - Partial sort: each output file is sorted internally.
    - Total sort: all data is sorted globally, usually with a single Reducer.
    - Secondary sort: the ordering uses two criteria.

5. Combiner

    - A Combiner improves efficiency by reducing the amount of data shuffled over the network, but it must not change the final business result.

6. Reduce-side grouping

    - GroupingComparator groups keys on the Reduce side. It is used when the key is a bean object and records whose keys match on one or a few fields (rather than on all fields) should enter the same reduce() call.

7. Processing interface: Reducer

    - Implement reduce(), setup(), and cleanup() as the business logic requires.

8. Output interface: OutputFormat

    - The default implementation is TextOutputFormat; it writes every key/value pair as one line to the target text file.
    - SequenceFileOutputFormat is a good choice when the output will feed a later MapReduce job, because its format is compact and easily compressed.
    - A custom OutputFormat can also be written.
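
For reference, the default HashPartitioner mentioned in point 3 is essentially the following (a minimal sketch of what the stock class does, not a custom component; the class name here is just for illustration):

```java
import org.apache.hadoop.mapreduce.Partitioner;

// Sketch of the behavior of the default HashPartitioner.
public class HashPartitionerSketch<K, V> extends Partitioner<K, V> {

    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask with Integer.MAX_VALUE to drop the sign bit, then take the remainder,
        // so the result always lands in [0, numReduceTasks).
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```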