1. Flume Advanced Topics
2. Flume Transactions

3. Agent Internals

ChannelSelector
The ChannelSelector's job is to choose which Channel an Event will be sent to. There are two types, Replicating and Multiplexing; the default is Replicating.
The ReplicatingSelector sends the same Event to every Channel, while Multiplexing routes different Events to different Channels according to configured rules, as in the sketch below.
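A minimal sketch of the two selector settings (channel names are illustrative; the two types are alternatives, shown together only for comparison):
#replicating (the default): copy every event to all listed channels
a1.sources.r1.selector.type = replicating
#multiplexing: route by the value of a header key
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = state
a1.sources.r1.selector.mapping.CZ = c1
a1.sources.r1.selector.mapping.US = c2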
SinkProcessor
There are three types of SinkProcessor: DefaultSinkProcessor, LoadBalancingSinkProcessor, and FailoverSinkProcessor.
- DefaultSinkProcessor drives a single Sink
- LoadBalancingSinkProcessor and FailoverSinkProcessor drive a Sink Group
- LoadBalancingSinkProcessor provides load balancing
- FailoverSinkProcessor provides failover (error recovery); see the sketch below
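A minimal sink-group sketch (names are illustrative; full worked examples follow in section 4.3):
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
#failover or load_balance
a1.sinkgroups.g1.processor.type = failover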
4. Flume Topologies
4.1. Simple Chaining

In this pattern multiple Flume agents are chained in sequence, from the initial source through to the final sink that writes to the destination storage system. Bridging too many agents is not recommended: the extra hops reduce throughput, and if any agent in the chain goes down, the whole pipeline is affected.
4.1.1. Implementing chaining: output via avro
flume1 config file
#agent1: netcat source --> memory channel --> avro sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 22222
#avro sink: sends data to the given host:port
a1.sinks.k1.type = avro
#destination host
a1.sinks.k1.hostname = hadoop103
#destination port
a1.sinks.k1.port = 33333
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
flume2 config file
#agent2: avro source --> memory channel --> logger sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1
#input type is avro
a1.sources.r1.type = avro
#bind address (this agent runs on hadoop103)
a1.sources.r1.bind = hadoop103
#input port
a1.sources.r1.port = 33333
#logger sink: writes events to the log for persistence
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Test
#Start hadoop103 first; otherwise data sent from hadoop102 has no receiver.
#On hadoop103:
flume-ng agent --conf conf/ --name a1 --conf-file datas/avrosource_loggersink.conf -Dflume.root.logger=INFO,console
#On hadoop102:
flume-ng agent --conf conf/ --name a1 --conf-file datas/netcatsource_avrosink.conf -Dflume.root.logger=INFO,console
#In another ssh session, send test data:
nc hadoop102 22222
4.2. Replicating and Multiplexing

Flume can send an event stream to one or more destinations. In this pattern the same data can be replicated into multiple channels, or different data can be routed to different channels, and each sink can then deliver to a different destination.
4.2.1. Implementing replication: selector=replicating

Read the log from a given file, replicate it into two channels, and let each channel forward to its own sink.
#agent1 on hadoop102
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
a1.sources.r1.type = exec
#tail the hive log file
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
#channel selector; the default is replicating, so this line could be omitted
a1.sources.r1.selector.type = replicating
#optional channel
#a1.sources.r1.selector.optional = c3
#avro sink: sends data to the given host:port
a1.sinks.k1.type = avro
#destination host
a1.sinks.k1.hostname = hadoop103
#destination port
a1.sinks.k1.port = 33333
#second sink
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 44444
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
#second channel
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
#one source feeds both channels
a1.sources.r1.channels = c1 c2
#each sink drains one channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
Sink side 1: receives data from hadoop102 and stores it in HDFS.
#agent2 on hadoop103
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop103
a1.sources.r1.port = 33333
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://hadoop102:8020/flume2/%Y%m%d/%H
#prefix for uploaded files
a1.sinks.k1.hdfs.filePrefix = flume2-
#whether to roll directories by time
a1.sinks.k1.hdfs.round = true
#how many time units before creating a new directory
a1.sinks.k1.hdfs.roundValue = 1
#the time unit
a1.sinks.k1.hdfs.roundUnit = hour
#use the local timestamp
a1.sinks.k1.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a1.sinks.k1.hdfs.batchSize = 100
#file type; compression is supported
a1.sinks.k1.hdfs.fileType = DataStream
#seconds before rolling a new file
a1.sinks.k1.hdfs.rollInterval = 600
#roll each file at roughly 128 MB
a1.sinks.k1.hdfs.rollSize = 134217700
#rolling is independent of the event count
a1.sinks.k1.hdfs.rollCount = 0
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Sink side 2: receives data from hadoop102 and writes it to a local directory via the file_roll sink.
#agent3 on hadoop104
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.bind = hadoop104
a1.sources.r1.port = 44444
#file_roll sink: stores event data in the local filesystem
a1.sinks.k1.type = file_roll
#output directory; it must already exist
a1.sinks.k1.sink.directory = /opt/module/flume/demo
#roll a new file every 30 s (the default); 0 disables rolling
a1.sinks.k1.sink.rollInterval = 30
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Startup: start the listeners on hadoop103 and hadoop104 first, then start the agent on hadoop102, as in the command sketch below.
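A startup sketch following the same pattern as in 4.1.1; the config file names are placeholders for wherever you saved the three agents above:
#On hadoop103 (HDFS sink agent):
flume-ng agent --conf conf/ --name a1 --conf-file datas/avrosource_hdfssink.conf -Dflume.root.logger=INFO,console
#On hadoop104 (file_roll sink agent):
flume-ng agent --conf conf/ --name a1 --conf-file datas/avrosource_filerollsink.conf -Dflume.root.logger=INFO,console
#Finally, on hadoop102 (exec source agent):
flume-ng agent --conf conf/ --name a1 --conf-file datas/execsource_avrosink.conf -Dflume.root.logger=INFO,console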
4.2.2. Implementing multiplexing: selector=multiplexing
agent1
#agent1 on hadoop102
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
a1.sources.r1.type = exec
#tail the hive log file
a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
#multiplexing setup
#channel selector; the default is replicating, multiplexing routes by header and needs an interceptor
a1.sources.r1.selector.type = multiplexing
#header key: the value stored under this key in the event header decides the target channel
a1.sources.r1.selector.header = state
#if the value of "state" is CZ, the event goes to channel c1
a1.sources.r1.selector.mapping.CZ = c1
#if the value is US, the event goes to channel c2
a1.sources.r1.selector.mapping.US = c2
#interceptor setup (adds the key-value pair to the event headers)
#interceptor name
a1.sources.r1.interceptors = i1
#static interceptor: adds a fixed, user-defined key-value pair to the header
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = state
#varying values require a custom interceptor; here the value is hard-coded to CZ
a1.sources.r1.interceptors.i1.value = CZ
#avro sink: sends data to the given host:port
a1.sinks.k1.type = avro
#destination host
a1.sinks.k1.hostname = hadoop103
#destination port
a1.sinks.k1.port = 33333
#second sink
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 44444
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
#second channel
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
#one source feeds both channels
a1.sources.r1.channels = c1 c2
#each sink drains one channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
agent2 and agent3 are the same as in the replication example above, or can be customized; a quick test is sketched below.
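A quick check under the paths assumed above: because the static interceptor hard-codes state=CZ, every event maps to c1 and should arrive only at the agent on hadoop103:
#on hadoop102, append a test line to the tailed log
echo "multiplexing test" >> /opt/module/hive/logs/hive.log
#expected: the event appears on hadoop103 (CZ -> c1 -> k1) and nothing arrives on hadoop104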
4.3. Load Balancing and Failover

Flume supports grouping multiple sinks logically into one sink group; combined with different SinkProcessors, a sink group provides load balancing or failover (error recovery).
4.3.1. Implementing failover: processor=failover

#agent1 on hadoop102
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 22222
#sink 1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 33333
#sink 2
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 44444
#define the sink group
a1.sinkgroups = g1
#sinks that belong to this group
a1.sinkgroups.g1.sinks = k1 k2
#failover processor; without a sink group each sink is driven one-to-one by default
a1.sinkgroups.g1.processor.type = failover
#priorities; a higher value means higher priority
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
#upper limit of the backoff period for a failed sink, in milliseconds (default 30000)
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
#each sink binds to exactly one channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
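A failover check, assuming the two receivers on hadoop103:33333 and hadoop104:44444 are avro-source agents with logger sinks, started before agent1:
#send events from another session on hadoop102
nc hadoop102 22222
#events should print on hadoop104 (priority 10 > 5); kill that agent with Ctrl+C,
#keep typing into nc, and the events should fail over to hadoop103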
4.3.2. Implementing load balancing: processor=load_balance
#agent1 hadoop102
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 22222
#sink 1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop103
a1.sinks.k1.port = 33333
#sink 2
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop104
a1.sinks.k2.port = 44444
#define the sink group
a1.sinkgroups = g1
#sinks that belong to this group
a1.sinkgroups.g1.sinks = k1 k2
#load_balance distributes events across the sinks in the group
a1.sinkgroups.g1.processor.type = load_balance
#selector defaults to round_robin; random sends each batch to a random sink
a1.sinkgroups.g1.processor.selector = random
#note: maxpenalty is a failover-processor setting; for load_balance, backoff is
#enabled with processor.backoff and capped by processor.selector.maxTimeOut (default 30000 ms)
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
#each sink binds to exactly one channel
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1
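To observe the distribution (same two downstream receivers as in the failover test):
#send several lines from another session
nc hadoop102 22222
#with selector = random, events are spread across hadoop103 and hadoop104;
#with the default round_robin, the sinks are picked in turn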
4.4. Aggregation

This is the most common and most practical pattern. A typical web application is deployed across hundreds of servers, and large ones across thousands or tens of thousands, which makes the logs they produce painful to handle. With this Flume topology, each server runs a Flume agent that collects its local logs and ships them to one central log-collecting Flume agent, which in turn uploads them to HDFS, Hive, HBase, and so on for analysis.
4.4.1. Implementing aggregation

Point the sinks of the separate agents at a single collector agent, which then persists the data, as sketched below.
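A minimal sketch under assumed hosts and ports (port 55555 and the leaf source types are illustrative): leaf agents on hadoop102 and hadoop103 ship local data to one collector on hadoop104, which persists it (a logger sink here for simplicity):
#leaf agent a1 (hadoop102): netcat source --> avro sink to the collector
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop102
a1.sources.r1.port = 22222
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop104
a1.sinks.k1.port = 55555
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

#leaf agent a2 (hadoop103): exec source --> avro sink to the same collector
a2.sources = r1
a2.sinks = k1
a2.channels = c1
a2.sources.r1.type = exec
a2.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = hadoop104
a2.sinks.k1.port = 55555
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

#collector a3 (hadoop104): one avro source receives from both leaves
a3.sources = r1
a3.sinks = k1
a3.channels = c1
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop104
a3.sources.r1.port = 55555
a3.sinks.k1.type = logger
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1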
5. Interceptors
See the official documentation for more interceptor types.
Interceptors are configured in the config file under <agent name>.sources.r1.interceptors:
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# interceptor setup (adds a timestamp to the event headers)
#interceptor name
a1.sources.r1.interceptors = i1
#timestamp interceptor: adds a timestamp header to each event
a1.sources.r1.interceptors.i1.type = timestamp
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
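A quick test sketch; the logger sink prints event headers, so the added timestamp should be visible:
#start the agent, then from another session:
nc localhost 44444
#each line you type should be logged with a header similar to:
#Event: { headers:{timestamp=...} body: ... }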

