[Flume] An example of using Flume to deliver web logs to HDFS

Introduction:

This example configures a Flume agent with a spooling-directory source, a memory channel, and an HDFS sink to move web logs into HDFS.

Create the directory on HDFS that will store the logs:
$ hdfs dfs -mkdir -p /test001/weblogsflume
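
A quick sanity check that the directory was created:

$ hdfs dfs -ls /test001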

Create the local directory that logs will be dropped into:
$ sudo mkdir -p /flume/weblogsmiddle

Make the directory writable by any user so logs can be dropped in freely:
$ sudo chmod a+w -R /flume
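
You can confirm that the permissions took effect:

$ ls -ld /flume /flume/weblogsmiddle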

The configuration file contains the following:

$ cat /mytraining/exercises/flume/spooldir.conf


# Name the components of agent1
agent1.sources = webserver-log-source
agent1.sinks = hdfs-sink
agent1.channels = memory-channel

# Source: watch the spool directory for new log files
agent1.sources.webserver-log-source.type = spooldir
agent1.sources.webserver-log-source.spoolDir = /flume/weblogsmiddle
agent1.sources.webserver-log-source.channels = memory-channel

# Sink: write events to HDFS, rolling files by size only
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = /test001/weblogsflume/
agent1.sinks.hdfs-sink.channel = memory-channel
# 0 disables time-based file rolling
agent1.sinks.hdfs-sink.hdfs.rollInterval = 0
# roll to a new file once it reaches 512 KB (524288 bytes)
agent1.sinks.hdfs-sink.hdfs.rollSize = 524288
# 0 disables rolling by event count
agent1.sinks.hdfs-sink.hdfs.rollCount = 0
# DataStream writes the raw event data rather than a SequenceFile
agent1.sinks.hdfs-sink.hdfs.fileType = DataStream

# Channel: buffer events in memory between source and sink
agent1.channels.memory-channel.type = memory
agent1.channels.memory-channel.capacity = 100000
agent1.channels.memory-channel.transactionCapacity = 1000

 

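As an aside, the memory channel is fast but loses any buffered events if the agent dies. If durability matters more than throughput, Flume's file channel can be substituted; a minimal sketch, where the checkpoint and data paths are illustrative assumptions:

agent1.channels = file-channel
# file channel persists events to local disk so they survive an agent restart
agent1.channels.file-channel.type = file
agent1.channels.file-channel.checkpointDir = /flume/checkpoint
agent1.channels.file-channel.dataDirs = /flume/data
agent1.sources.webserver-log-source.channels = file-channel
agent1.sinks.hdfs-sink.channel = file-channel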

 

Change to the directory that contains the configuration file:

$ cd /mytraining/exercises/flume

Start Flume:

$ flume-ng agent --conf /etc/flume-ng/conf \
> --conf-file spooldir.conf \
> --name agent1 -Dflume.root.logger=INFO,console

 

The agent starts and prints output similar to the following:

Info: Sourcing environment configuration script /etc/flume-ng/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12.jar from classpath
Info: Including Hive libraries found via () for Hive access

...

-Djava.library.path=:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64 org.apache.flume.node.Application --conf-file spooldir.conf --name agent1
2017-10-20 21:07:08,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2017-10-20 21:07:09,057 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:spooldir.conf
2017-10-20 21:07:09,300 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:hdfs-sink
2017-10-20 21:07:09,302 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:hdfs-sink
2017-10-20 21:07:09,302 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: hdfs-sink Agent: agent1

...

2017-10-20 21:07:09,304 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:hdfs-sink
2017-10-20 21:07:09,306 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:hdfs-sink
2017-10-20 21:07:09,310 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:hdfs-sink
...

2017-10-20 21:07:10,398 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{webserver-log-source=EventDrivenSourceRunner: { source:Spool Directory source webserver-log-source: { spoolDir: /flume/weblogsmiddle } }} sinkRunners:{hdfs-sink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@12c67180 counterGroup:{ name:null counters:{} } }} channels:{memory-channel=org.apache.flume.channel.MemoryChannel{name: memory-channel}} }

...

2017-10-20 21:10:25,268 (pool-6-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:238)] Last read was never committed - resetting mark position.


 

Feed logs into /flume/weblogsmiddle. Copying to a temporary directory first and then moving means each file appears in the spool directory atomically; the spooling-directory source expects files to be complete and unchanging once they show up:

$ cp -r /mytest/weblogs /tmp/tmpweblogs
$ mv /tmp/tmpweblogs/* /flume/weblogsmiddle
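
Once Flume has fully ingested a file, the spooling-directory source renames it with a .COMPLETED suffix (its default fileSuffix), so listing the spool directory shows ingestion progress:

$ ls /flume/weblogsmiddle | head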


After waiting a few minutes, check HDFS for the new files:


$ hdfs dfs -ls /test001/weblogsflume

-rw-rw-rw- 1 training supergroup 527909 2017-10-20 21:10 /test001/weblogsflume/FlumeData.1508558917884
-rw-rw-rw- 1 training supergroup 527776 2017-10-20 21:10 /test001/weblogsflume/FlumeData.1508558917885
...

$ 
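
To spot-check what landed, print the first lines of one of the files (the file name is taken from the listing above):

$ hdfs dfs -cat /test001/weblogsflume/FlumeData.1508558917884 | head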

 

In the window where flume-ng is running, press Ctrl+C and then Ctrl+Z to stop Flume:

^C
^Z
[1]+ Stopped 
flume-ng agent --conf /etc/flume-ng/conf --conf-file spooldir.conf --name agent1 -Dflume.root.logger=INFO,console
[training@localhost flume]$
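
Note that Ctrl+Z only suspends the job (hence the "Stopped" message above); to terminate the agent completely, kill the stopped job:

$ kill %1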

 





This article was reposted from the 健哥的数据花园 blog on cnblogs; original link: http://www.cnblogs.com/gaojian/p/7706497.html. Please contact the original author before reprinting.
