
The Taildir Source maintains a position file in JSON format. It periodically writes the latest read offset of each monitored file into this position file, which is how it can resume from where it left off (breakpoint resume) after a restart.

Create the agent configuration:

```
vim flume-taildir-hdfs.conf
```

```properties
a3.sources = r3
a3.sinks = k3
a3.channels = c3

# Describe/configure the source
a3.sources.r3.type = TAILDIR
# Location of the position file
a3.sources.r3.positionFile = /opt/egg/apache-flume-1.7.0-bin/upload/tail_dir.json
# Define the file groups to monitor
a3.sources.r3.filegroups = f1 f2
a3.sources.r3.filegroups.f1 = /opt/egg/apache-flume-1.7.0-bin/upload/.*file.*
a3.sources.r3.filegroups.f2 = /opt/module/flume/files/.*log.*

# Describe the sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://hadoop1:9000/flume/upload/%Y%m%d/%H
# Prefix of the uploaded files
a3.sinks.k3.hdfs.filePrefix = tail-
# Whether to roll folders based on time
a3.sinks.k3.hdfs.round = true
# How many time units before creating a new folder
a3.sinks.k3.hdfs.roundValue = 1
# Redefine the time unit
a3.sinks.k3.hdfs.roundUnit = hour
# Whether to use the local timestamp
a3.sinks.k3.hdfs.useLocalTimeStamp = true
# Number of events to accumulate before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100
# File type; compression is supported
a3.sinks.k3.hdfs.fileType = DataStream
# How often (in seconds) to roll to a new file
a3.sinks.k3.hdfs.rollInterval = 60
# Roll size per file, roughly 128 MB
a3.sinks.k3.hdfs.rollSize = 134217700
# Rolling is independent of the number of events
a3.sinks.k3.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3
```

Start Flume:

```
bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-taildir-hdfs.conf
```

Append data to files in the monitored directory:

```
cd /opt/egg/apache-flume-1.7.0-bin/upload
echo hello >> file1.txt
echo atguigu >> file2.txt
```

Checking HDFS, you will see that the two Linux files were written into a single HDFS file:

```
-rw-r--r--   3 root supergroup         28 2019-09-01 16:29 /flume/upload/20190901/16/tail-.1567326496953
```
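To see how the breakpoint-resume mechanism works, it helps to look at the shape of the position file itself: one JSON record per tailed file, holding the inode, the absolute path, and the byte offset already read. The sketch below fabricates a `tail_dir.json` with illustrative values (the inodes and offsets are assumptions, not captured from a real run) and extracts the offsets the source would resume from:

```shell
# Simulate the taildir position file to show its JSON shape.
# A real tail_dir.json is written and updated by the agent itself;
# the inode and pos values below are illustrative only.
POS_FILE=$(mktemp)
cat > "$POS_FILE" <<'EOF'
[{"inode":522817,"pos":6,"file":"/opt/egg/apache-flume-1.7.0-bin/upload/file1.txt"},{"inode":522818,"pos":8,"file":"/opt/module/flume/files/app.log"}]
EOF

# On restart, the source reads "pos" for each "file" and resumes
# tailing from that byte offset instead of re-reading from the start.
offsets=$(grep -o '"pos":[0-9]*' "$POS_FILE")
echo "$offsets"
rm -f "$POS_FILE"
```

Because offsets are keyed by inode rather than just by path, the source also copes with log files that are renamed in place.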