Flume Source: A Walkthrough of the Official Documentation (Recommended)

  

   No fluff — straight to the useful material.


  Everything below comes from the official Flume documentation:

http://flume.apache.org/FlumeUserGuide.html


Flume Sources

   

Avro Source


   The example given in the official documentation:

a1.sources = r1
a1.channels = c1
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

 The configuration we commonly use in practice is:

agent1.sources = avro-source1
agent1.channels = ch1



# Define and configure an Avro source
agent1.sources.avro-source1.channels=ch1
agent1.sources.avro-source1.type=avro
agent1.sources.avro-source1.bind=0.0.0.0
agent1.sources.avro-source1.port=4141


An interceptor can also be attached to the source, for example a custom one:

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.djt.flume.interceptor.BehaviorIterceptor$BehaviorIterceptorBuilder


 

   

Thrift Source


  

Exec Source


JMS Source


Spooling Directory Source (commonly used)


  The reference example given in the official documentation:

a1.channels = ch-1
a1.sources = src-1

a1.sources.src-1.type = spooldir
a1.sources.src-1.channels = ch-1
a1.sources.src-1.spoolDir = /var/log/apache/flumeSpool
a1.sources.src-1.fileHeader = true

  The configuration we commonly use in practice is:

agent1.sources = spool-source1
agent1.channels = ch1



# Define and configure a Spooling Directory source
agent1.sources.spool-source1.channels=ch1
agent1.sources.spool-source1.type=spooldir
agent1.sources.spool-source1.spoolDir=/home/hadoop/data/flume/sqooldir
agent1.sources.spool-source1.ignorePattern=event(_\d{4}-\d{2}-\d{2}_\d{2}_\d{2})?\.log(\.COMPLETED)?
agent1.sources.spool-source1.deserializer.maxLineLength=10240
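The ignorePattern above tells the spooling directory source to skip files whose names match the regex. As a rough sanity check outside Flume, the sketch below reproduces the pattern with Python's re module (Flume itself uses Java's regex engine, and the file names here are made up for illustration, so treat this only as an approximation):

```python
import re

# The ignorePattern from the config above; Flume matches it against the file name.
ignore = re.compile(r"event(_\d{4}-\d{2}-\d{2}_\d{2}_\d{2})?\.log(\.COMPLETED)?")

def is_ignored(name):
    """True if the spooling directory source would skip this file name."""
    return ignore.fullmatch(name) is not None

# Hypothetical file names for illustration.
for name in ["event.log", "event_2024-01-31_10_30.log",
             "event.log.COMPLETED", "access.log"]:
    print(name, "->", "ignored" if is_ignored(name) else "ingested")
```

This shows the pattern skips the plain log, its timestamped variants, and the .COMPLETED markers, while other files are still ingested.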

Taildir Source


Twitter 1% firehose Source


Kafka Source (commonly used)


   The example given in the official documentation:

Example for topic subscription by comma-separated topic list.

tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.channels = channel1
tier1.sources.source1.batchSize = 5000
tier1.sources.source1.batchDurationMillis = 2000
tier1.sources.source1.kafka.bootstrap.servers = localhost:9092
tier1.sources.source1.kafka.topics = test1, test2
tier1.sources.source1.kafka.consumer.group.id = custom.g.id

 Example for topic subscription by regex

tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.channels = channel1
tier1.sources.source1.kafka.bootstrap.servers = localhost:9092
tier1.sources.source1.kafka.topics.regex = ^topic[0-9]$
# the default kafka.consumer.group.id=flume is used
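With kafka.topics.regex set to ^topic[0-9]$, the source subscribes only to topics named topic0 through topic9. A quick Python sketch (the topic names below are made up; Kafka evaluates the pattern with Java's regex engine, so this is only an approximation) illustrates which names match:

```python
import re

# kafka.topics.regex from the example above
topic_regex = re.compile(r"^topic[0-9]$")

def subscribed(topic):
    """True if the regex subscription would include this topic name."""
    return topic_regex.match(topic) is not None

# Hypothetical topic names for illustration.
for t in ["topic0", "topic9", "topic10", "test1"]:
    print(t, "->", subscribed(t))
```

Note that topic10 does not match because the $ anchor requires exactly one digit after "topic".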

  The official documentation also covers Security and Kafka Source, TLS and Kafka Source, and Kerberos and Kafka Source; see there for details.

NetCat Source


Sequence Generator Source


Syslog Sources


Syslog TCP Source


Multiport Syslog TCP Source


Syslog UDP Source


HTTP Source


   The official documentation also provides JSONHandler and BlobHandler; see there for details.

Stress Source


Legacy Sources


Avro Legacy Source


Thrift Legacy Source


Custom Source


Scribe Source


   

 I list all of these so you can explore them yourself: don't limit yourself to blog posts — broaden your view and read the official documentation, which is where everything here comes from.

  For big-data open-source projects of any kind, reading the official documentation is the best way to learn, supplemented by other people's blog posts. Don't be intimidated by the English — technical English is quite manageable. Work toward becoming an expert. Keep at it, zhouls!