Nuri IT Co., Ltd. (주식회사 누리아이티)

A company specializing in multi-layer authentication software (BaroPAM) for strengthening the security of information assets.


Managing BaroCollector, a real-time log collector for detecting and blocking anomalous access to information assets

Nuri IT · Feb. 25, 2020, 10:57

 

 

1. Installing BaroCollector

 

To install BaroCollector, copy the flume-ng-jdbc-source-2.0.jar file produced by the build into the $FLUME_HOME/lib directory as follows.

 

[root] /home/flume-ng-sources/target > cp flume-ng-jdbc-source-2.0.jar $FLUME_HOME/lib/.
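
A quick way to confirm the copy succeeded is to list the file in the target directory (a simple check, assuming $FLUME_HOME is already set in the current shell):

[root] /home/flume-ng-sources/target > ls -l $FLUME_HOME/lib/flume-ng-jdbc-source-2.0.jar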

 

 

2. Setting Environment Variables

 

To start BaroCollector, the following environment variables must be defined in the flume-env.sh configuration file.

 

Variable          Description                                                                  Notes
FLUME_HOME        Directory where Apache Flume is installed
FLUME_CLASSPATH   Directory containing the Apache Flume libraries
JAVA_HOME         Directory where the JDK is installed
CLASSPATH         Classes referenced when compiling (javac) or running (java) Java programs
LANG              Locale needed to support the same language as the collected logs
PATH              $FLUME_HOME/bin, $JAVA_HOME/bin                                              Must be included in PATH

 

[root] /usr/baropam/agent > vi flume-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
 
# If this file is placed at FLUME_CONF_DIR/flume-env.sh, it will be sourced
# during Flume startup.
 
# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
#JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"
JAVA_OPTS="-XX:MaxDirectMemorySize=128m"
 
# Note that the Flume conf directory is always included in the classpath.
FLUME_HOME=/home/apache-flume-1.7.0-bin
FLUME_CLASSPATH=$FLUME_HOME/lib
 
# Java variables can be set here
JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64
CLASSPATH=$CLASSPATH:$FLUME_CLASSPATH:$JAVA_HOME/lib:
 
# Environment variables can be set here.
LANG=ko_KR.euckr
#LANG=ko_KR.utf8
PATH=$PATH:$FLUME_HOME/bin:$JAVA_HOME/bin:/etc/alternatives
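
As a simple sanity check (flume-ng sources this file itself at startup, so this is only for manual verification), the file can be sourced in the current shell and a few of the variables echoed:

[root] /usr/baropam/agent > . ./flume-env.sh
[root] /usr/baropam/agent > echo $FLUME_HOME $JAVA_HOME
/home/apache-flume-1.7.0-bin /usr/lib/jvm/jre-1.7.0-openjdk.x86_64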

 

 

3. Configuring Log4j Properties

 

log4j is a Java-based logging utility used to record log messages while a program runs; it is mainly used as a debugging tool.

 

Recent versions of log4j define six log levels, from highest to lowest: FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. A level can be set per target (a package, in Java) in the configuration file, and only messages at or above that level are recorded.

 

[root] /usr/baropam/agent > vi log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
 
# Define some default values that can be overridden by system properties.
#
# For testing, it may also be convenient to specify
# -Dflume.root.logger=DEBUG,console when launching flume.
 
#flume.root.logger=DEBUG,console
flume.root.logger=INFO,LOGFILE
flume.log.dir=./logs
flume.log.file=flume.log
 
log4j.logger.org.apache.flume.lifecycle = INFO
log4j.logger.org.jboss = WARN
log4j.logger.org.mortbay = INFO
log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
log4j.logger.org.apache.hadoop = INFO
 
# Define the root logger to the system property "flume.root.logger".
log4j.rootLogger=${flume.root.logger}
 
 
# Stock log4j rolling file appender
# Default log rotation configuration
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
 
 
# Warning: If you enable the following appender it will fill up your disk if you don't have a cleanup job!
# This uses the updated rolling file appender from log4j-extras that supports a reliable time-based rolling policy.
# See http://logging.apache.org/log4j/companions/extras/apidocs/org/apache/log4j/rolling/TimeBasedRollingPolicy.html
# Add "DAILY" to flume.root.logger above if you want to use this
log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
 
 
# console
# Add "console" to flume.root.logger above if you want to use this
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
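
As the comments in the file note, the root logger can also be overridden at launch without editing this file. For example, to send DEBUG output to the console while testing (using the same agent name and configuration directory as the startup script later in this guide):

[root] /usr/baropam/agent > flume-ng agent -n avro-exec -c /usr/baropam/agent/ -f flume.conf -Dflume.root.logger=DEBUG,console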

 

 

4. Configuring BaroCollector Properties

 

To use the BaroCollector JDBC source, the following properties must be defined in the flume.conf configuration file.

 

[root] /usr/baropam/agent > vi flume.conf
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
 
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'
 
avro-exec.sources = agent1
avro-exec.channels = mem-channel1
avro-exec.sinks = avroSink1
#avro-exec.sinks = avroSink1 avroSink2
 
# For each one of the sources, the type is defined
avro-exec.sources.agent1.type = exec
avro-exec.sources.agent1.command = tail -F /var/log/secure | egrep "Did not receive identification string|Invalid user|authentication failure;|keyboard-interactive|subsystem request|reverse mapping checking|Received disconnect"
avro-exec.sources.agent1.restartThrottle = 1000
avro-exec.sources.agent1.restart = true
avro-exec.sources.agent1.batchSize = 1
avro-exec.sources.agent1.charset.default = euc-kr
 
# The channel can be defined as follows.
avro-exec.sources.agent1.channels = mem-channel1
 
# Static Interceptor
avro-exec.sources.agent1.interceptors = i1
avro-exec.sources.agent1.interceptors.i1.type = static
avro-exec.sources.agent1.interceptors.i1.key = task_type
avro-exec.sources.agent1.interceptors.i1.value = 100
 
# Each sink's type must be defined
avro-exec.sinks.avroSink1.type = avro
avro-exec.sinks.avroSink1.hostname = 1.234.83.169
avro-exec.sinks.avroSink1.port = 61616
avro-exec.sinks.avroSink1.maxIoWorkers = 100
avro-exec.sinks.avroSink1.charset=euc-kr
#avro-exec.sinks.avroSink1.charset=utf-8
 
#avro-exec.sinks.avroSink2.type = avro
#avro-exec.sinks.avroSink2.hostname = 1.234.83.169
#avro-exec.sinks.avroSink2.port = 61617
#avro-exec.sinks.avroSink2.maxIoWorkers = 100
#avro-exec.sinks.avroSink2.charset=euc-kr
##avro-exec.sinks.avroSink2.charset=utf-8
 
# Load balance.
#avro-exec.sinkgroups = sinkgroup1
#avro-exec.sinkgroups.sinkgroup1.sinks = avroSink2 avroSink1
#avro-exec.sinkgroups.sinkgroup1.processor.type = load_balance
#avro-exec.sinkgroups.sinkgroup1.processor.backoff = true
##avro-exec.sinkgroups.sinkgroup1.processor.backoff = false
#avro-exec.sinkgroups.sinkgroup1.processor.selector = random
##avro-exec.sinkgroups.sinkgroup1.processor.selector = round_robin
 
# Failover Sink Processor
#avro-exec.sinkgroups = sinkgroup2
#avro-exec.sinkgroups.sinkgroup2.sinks = avroSink2 avroSink1
#avro-exec.sinkgroups.sinkgroup2.processor.type = failover
#avro-exec.sinkgroups.sinkgroup2.processor.priority.avroSink1 = 10
#avro-exec.sinkgroups.sinkgroup2.processor.priority.avroSink2 = 5
#avro-exec.sinkgroups.sinkgroup2.processor.maxpenalty = 10000
 
# Specify the channel the sink should use
avro-exec.sinks.avroSink1.channel = mem-channel1
#avro-exec.sinks.avroSink2.channel = mem-channel1
 
# Each channel's type is defined.
#avro-exec.channels.mem-channel1.type = memory
avro-exec.channels.mem-channel1.type = file
avro-exec.channels.mem-channel1.checkpointDir = ./checkpoint1
avro-exec.channels.mem-channel1.dataDirs = ./checkdata1
 
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
avro-exec.channels.mem-channel1.capacity = 10000
avro-exec.channels.mem-channel1.transactionCapacity = 10000
avro-exec.channels.mem-channel1.keep-alive = 3
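
Because the exec source simply tails /var/log/secure through egrep, the filter can be tried by hand first to see which sshd messages will actually be collected (run on the host being monitored; the pattern below is the one from the source definition above):

[root] /usr/baropam/agent > tail -n 200 /var/log/secure | egrep "Did not receive identification string|Invalid user|authentication failure;|keyboard-interactive|subsystem request|reverse mapping checking|Received disconnect"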

 

 

5. Starting BaroCollector

 

The startup.sh shell script that starts BaroCollector is as follows.

 

[root] /usr/baropam/agent > vi startup.sh
#!/bin/sh
 
export FLUME_HOME=/home/apache-flume-1.7.0-bin;
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64;
 
export CLASSPATH=$CLASSPATH:$FLUME_HOME/lib:$JAVA_HOME/lib
export PATH=$PATH:$FLUME_HOME/bin:$JAVA_HOME/bin
 
export LANG=ko_KR.euckr
#export LANG=ko_KR.utf8
 
\rm /usr/baropam/agent/logs/flume*
 
flume-ng agent -n avro-exec -c /usr/baropam/agent/ -f flume.conf -Dflume.monitoring.type=http -Dflume.monitoring.port=61615 &
 

 

To start BaroCollector, run the startup.sh shell script as a background process, as follows.

 

[root] /usr/baropam/agent > sh startup.sh &

 

To check whether BaroCollector is running, execute the following command.

 

[root] /usr/baropam/agent > ps -ef|grep flume | grep avro-exec | grep -v grep

 

You should then be able to confirm that the BaroCollector process exists, as shown below.

 

[root] /usr/baropam/agent > ps -ef|grep flume | grep avro-exec | grep -v grep
root     11275     1  0 14:04 pts/5    00:00:02 /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java -XX:MaxDirectMemorySize=128m -Dflume.monitoring.type=http -Dflume.monitoring.port=61615 -cp /usr/baropam/agent:/home/apache-flume-1.7.0-bin/lib/*:/home/apache-flume-1.7.0-bin/lib:/lib/* -Djava.library.path= org.apache.flume.node.Application -n avro-exec -f flume.conf
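
Since startup.sh starts the agent with -Dflume.monitoring.type=http -Dflume.monitoring.port=61615, the source, channel, and sink counters can also be inspected over HTTP (assuming curl is available on the host):

[root] /usr/baropam/agent > curl http://localhost:61615/metrics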

 

 

6. Stopping BaroCollector

 

The shutdown.sh shell script that stops BaroCollector is as follows.

 

[root] /usr/baropam/agent > vi shutdown.sh
#!/bin/sh
 
ps -ef|grep flume | grep avro-exec | grep -v grep |awk '{print "kill -9 "$2}'|sh -v
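
Note that the script above uses kill -9, which terminates the agent immediately. If you prefer to give the agent a chance to stop its source, channel, and sink cleanly, a variant that sends a normal SIGTERM instead could look like this (an alternative sketch, not part of the original script):

ps -ef|grep flume | grep avro-exec | grep -v grep |awk '{print "kill "$2}'|sh -v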

 

To stop BaroCollector, run the shutdown.sh shell script as follows.

 

[root] /usr/baropam/agent > sh shutdown.sh

 

 

7. BaroCollector Logs

 

The BaroCollector log contains the messages (INFO, WARN, ERROR) generated while BaroCollector runs, as well as messages recorded during collection; it is an important log for checking BaroCollector's state and for identifying the cause of any failures.

 

[root] /usr/baropam/agent/logs > ls -al
total 20
drwxr-xr-x 2 root root  4096 12  4 11:05 .
drwxr-xr-x 6 root root  4096 12  4 11:04 ..
-rw-r--r-- 1 root root 12188 12  4 11:05 flume.log
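
To watch the collector in real time and surface only problems, the log file can be followed and filtered (a simple example using the log path configured in log4j.properties):

[root] /usr/baropam/agent/logs > tail -f flume.log | egrep "WARN|ERROR"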

 

[root] /usr/baropam/agent/logs > vi flume.log
04 12 2015 11:05:19,976 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start:61)  - Configuration provider starting
04 12 2015 11:05:19,981 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133)  - Reloading configuration file:flume.conf
04 12 2015 11:05:20,007 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,009 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,009 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,009 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:931)  - Added sinks: avroSink1 Agent: spooldir
04 12 2015 11:05:20,010 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,010 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,010 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1017)  - Processing:avroSink1
04 12 2015 11:05:20,032 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:141)  - Post-validation flume configuration contains configuration for agents: [spooldir]
04 12 2015 11:05:20,033 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:145)  - Creating channels
04 12 2015 11:05:20,058 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:42)  - Creating instance of channel mem-channel1 type file
04 12 2015 11:05:20,102 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:200)  - Created channel mem-channel1
04 12 2015 11:05:20,103 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:41)  - Creating instance of source agent1, type spooldir
04 12 2015 11:05:20,132 INFO  [conf-file-poller-0] (org.apache.flume.interceptor.StaticInterceptor$Builder.build:133)  - Creating StaticInterceptor: preserveExisting=true,key=task_type,value=700
04 12 2015 11:05:20,133 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:42)  - Creating instance of sink: avroSink1, type: avro
04 12 2015 11:05:20,138 INFO  [conf-file-poller-0] (org.apache.flume.sink.AbstractRpcSink.configure:183)  - Connection reset is set to 0. Will not reset connection to next hop
04 12 2015 11:05:20,139 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:114)  - Channel mem-channel1 connected to [agent1, avroSink1]
04 12 2015 11:05:20,144 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138)  - Starting new configuration:{ sourceRunners:{agent1=EventDrivenSourceRunner: { source:Spool Directory source agent1: { spoolDir: ./file } }} sinkRunners:{avroSink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7209d9af counterGroup:{ name:null counters:{} } }} channels:{mem-channel1=FileChannel mem-channel1 { dataDirs: [./checkdata1] }} }
04 12 2015 11:05:20,145 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel mem-channel1
04 12 2015 11:05:20,146 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:273)  - Starting FileChannel mem-channel1 { dataDirs: [./checkdata1] }...
04 12 2015 11:05:20,186 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.<init>:344)  - Encryption is not enabled
04 12 2015 11:05:20,187 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:392)  - Replay started
04 12 2015 11:05:20,209 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:404)  - Found NextFileID 15, from [./checkdata1/log-14, ./checkdata1/log-15, ./checkdata1/log-12, ./checkdata1/log-13]
04 12 2015 11:05:20,226 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:55)  - Starting up with ./checkpoint1/checkpoint and ./checkpoint1/checkpoint.meta
04 12 2015 11:05:20,226 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:59)  - Reading checkpoint metadata from ./checkpoint1/checkpoint.meta
04 12 2015 11:05:20,455 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FlumeEventQueue.<init>:116)  - QueueSet population inserting 0 took 0
04 12 2015 11:05:20,496 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:443)  - Last Checkpoint Thu Nov 26 11:19:29 KST 2015, queue depth = 0
04 12 2015 11:05:20,502 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.doReplay:528)  - Replaying logs with v2 replay logic
04 12 2015 11:05:20,533 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:249)  - Starting replay of [./checkdata1/log-12, ./checkdata1/log-13, ./checkdata1/log-14, ./checkdata1/log-15]
04 12 2015 11:05:20,546 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262)  - Replaying ./checkdata1/log-12
04 12 2015 11:05:20,586 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.tools.DirectMemoryUtils.getDefaultDirectMemorySize:114)  - Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
04 12 2015 11:05:20,593 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.tools.DirectMemoryUtils.allocate:48)  - Direct Memory Allocation:  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 134217728, Remaining = 134217728
04 12 2015 11:05:20,620 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:632)  - fast-forward to checkpoint position: 6610
04 12 2015 11:05:20,628 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.next:657)  - Encountered EOF at 6610 in ./checkdata1/log-12
04 12 2015 11:05:20,628 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262)  - Replaying ./checkdata1/log-13
04 12 2015 11:05:20,629 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:632)  - fast-forward to checkpoint position: 10491
04 12 2015 11:05:20,638 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.next:657)  - Encountered EOF at 10491 in ./checkdata1/log-13
04 12 2015 11:05:20,638 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262)  - Replaying ./checkdata1/log-14
04 12 2015 11:05:20,646 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:634)  - Checkpoint for file(/home/apache-flume-1.7.0-bin/agent/spooldir/./checkdata1/log-14) is: 1448504287314, which is beyond the requested checkpoint time: 1448504339289 and position 0
04 12 2015 11:05:20,660 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:262)  - Replaying ./checkdata1/log-15
04 12 2015 11:05:20,664 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:632)  - fast-forward to checkpoint position: 3417
04 12 2015 11:05:20,672 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.next:657)  - Encountered EOF at 3417 in ./checkdata1/log-15
04 12 2015 11:05:20,672 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$SequentialReader.next:657)  - Encountered EOF at 155 in ./checkdata1/log-14
04 12 2015 11:05:20,673 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:346)  - read: 5, put: 0, take: 0, rollback: 0, commit: 0, skip: 5, eventCount:0
04 12 2015 11:05:20,673 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FlumeEventQueue.replayComplete:409)  - Search Count = 0, Search Time = 0, Copy Count = 0, Copy Time = 0
04 12 2015 11:05:20,702 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:491)  - Rolling ./checkdata1
04 12 2015 11:05:20,702 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.roll:961)  - Roll start ./checkdata1
04 12 2015 11:05:20,705 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.LogFile$Writer.<init>:214)  - Opened ./checkdata1/log-16
04 12 2015 11:05:20,746 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.roll:977)  - Roll end
04 12 2015 11:05:20,747 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:230)  - Start checkpoint for ./checkpoint1/checkpoint, elements to sync = 0
04 12 2015 11:05:20,772 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:255)  - Updating checkpoint metadata: logWriteOrderID: 1449194720258, queueSize: 0, queueHead: 731
04 12 2015 11:05:20,836 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.writeCheckpoint:1034)  - Updated checkpoint for file: ./checkdata1/log-16 position: 0 logWriteOrderID: 1449194720258
04 12 2015 11:05:20,836 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:301)  - Queue Size after replay: 0 [channel=mem-channel1]
04 12 2015 11:05:20,886 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: CHANNEL, name: mem-channel1: Successfully registered new MBean.
04 12 2015 11:05:20,886 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: CHANNEL, name: mem-channel1 started
04 12 2015 11:05:20,886 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink avroSink1
04 12 2015 11:05:20,886 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.sink.AbstractRpcSink.start:289)  - Starting RpcSink avroSink1 { host: 1.234.83.169, port: 61616 }...
04 12 2015 11:05:20,886 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source agent1
04 12 2015 11:05:20,887 INFO  [lifecycleSupervisor-1-4] (org.apache.flume.source.SpoolDirectorySource.start:78)  - SpoolDirectorySource source starting with directory: ./file
04 12 2015 11:05:20,887 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SINK, name: avroSink1: Successfully registered new MBean.
04 12 2015 11:05:20,887 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SINK, name: avroSink1 started
04 12 2015 11:05:20,887 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.sink.AbstractRpcSink.createConnection:206)  - Rpc sink avroSink1: Building RpcClient with hostname: 1.234.83.169, port: 61616
04 12 2015 11:05:20,888 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.sink.AvroSink.initializeRpcClient:126)  - Attempting to create Avro Rpc client.
04 12 2015 11:05:21,214 INFO  [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:120)  - Monitored counter group for type: SOURCE, name: agent1: Successfully registered new MBean.
04 12 2015 11:05:21,214 INFO  [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:96)  - Component type: SOURCE, name: agent1 started
04 12 2015 11:05:21,441 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.sink.AbstractRpcSink.start:303)  - Rpc sink avroSink1 started.