Documentation ¶
Index ¶
- Constants
- Variables
- func ExectionTimeCost(title string, start time.Time)
- func InitLogger(opt *LogOption)
- func Retry(times int, interval time.Duration, method func() error) error
- func UnsafeSliceToString(b []byte) string
- func UnsafeStringToSlice(s string) (b []byte)
- func WalkDirs(dir string, depth int, filter func(p string) bool, hook func(fs fs.FileInfo)) ([]string, error)
- type BlockingBuffer
- type Event
- type FileMeta
- type FileSink
- type FileSinkConf
- type LogCollector
- type LogConf
- type LogFileEvent
- type LogMeta
- type LogOption
- type LogReader
- type LogWatchOption
- type LogWatcher
- type LogWatcherOption
- type Sink
- type SinkConf
- type SinkType
- type TaskState
Constants ¶
const (
	MaxBufferSize = 4 * MB
	MB            = 1024 * 1024
	KB            = 1024
)
const (
	// rename rotate: rename the old log into an archived one, then create a new file to serve the log
	LogFileRenameRotate = LogFileEvent(1) << iota
	LogFileModify
	LogFileChomd
	LogFileRemove

	LogFileDiscover        = LogFileEvent(1) << 62
	LogFileEventNotEncoded = LogFileEvent(1) << 63

	WindowSize      = 32
	SignContentSize = 1024
)
Variables ¶
var (
ErrNoProgress = errors.New("Buffer Read From Line, No Prgress")
)
Functions ¶
func ExectionTimeCost ¶
func ExectionTimeCost(title string, start time.Time)
func InitLogger ¶
func InitLogger(opt *LogOption)
func Retry ¶
func Retry(times int, interval time.Duration, method func() error) error
func UnsafeSliceToString ¶
func UnsafeSliceToString(b []byte) string
UnsafeSliceToString performs a zero-copy conversion of a byte slice to a string.
func UnsafeStringToSlice ¶
func UnsafeStringToSlice(s string) (b []byte)
func WalkDirs ¶
func WalkDirs(dir string, depth int, filter func(p string) bool, hook func(fs fs.FileInfo)) ([]string, error)
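A sketch of how such zero-copy conversions are typically written in Go 1.20+, using the standard `unsafe.String`/`unsafe.Slice` helpers; this is illustrative, not the package's own implementation. The usual caveat applies: the byte slice and the string alias the same memory, so the slice must not be mutated afterwards.

```go
package main

import (
	"fmt"
	"unsafe"
)

// sliceToString returns a string sharing b's backing array (no copy).
// The caller must not mutate b afterwards.
func sliceToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// stringToSlice returns a byte slice aliasing s's bytes (no copy).
// The result must be treated as read-only: string data is immutable.
func stringToSlice(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	b := []byte("hello")
	fmt.Println(sliceToString(b))               // hello
	fmt.Println(string(stringToSlice("world"))) // world
}
```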
Types ¶
type BlockingBuffer ¶
type BlockingBuffer struct {
// contains filtered or unexported fields
}
BlockingBuffer is a thread-safe buffer with a blocking API. It blocks writers when the buffer is full.
func NewBlockingBuffer ¶
func NewBlockingBuffer(bufferSize int) *BlockingBuffer
func (*BlockingBuffer) Close ¶
func (b *BlockingBuffer) Close()
func (*BlockingBuffer) Fetch ¶
func (b *BlockingBuffer) Fetch() []byte
Fetch returns the data in the buffer and notifies blocked writers.
func (*BlockingBuffer) IfFullThenWait ¶
func (b *BlockingBuffer) IfFullThenWait()
func (*BlockingBuffer) ReadLinesFrom ¶
ReadLinesFrom ensures the buffer reads at least one complete line, or nothing at all.
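The blocking behavior described above is commonly built on a condition variable: writers wait while the buffer is at capacity, and Fetch drains the data and wakes them. A minimal self-contained sketch of that pattern (hypothetical names; not the package's implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// blockingBuffer blocks writers while it is full; Fetch drains the
// buffered bytes and signals blocked writers to continue.
type blockingBuffer struct {
	mu      sync.Mutex
	notFull *sync.Cond
	buf     []byte
	max     int
	closed  bool
}

func newBlockingBuffer(max int) *blockingBuffer {
	b := &blockingBuffer{max: max}
	b.notFull = sync.NewCond(&b.mu)
	return b
}

// Write appends p, waiting while the buffer is full and not closed.
func (b *blockingBuffer) Write(p []byte) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.buf) >= b.max && !b.closed {
		b.notFull.Wait()
	}
	b.buf = append(b.buf, p...)
}

// Fetch returns the buffered data and notifies blocked writers.
func (b *blockingBuffer) Fetch() []byte {
	b.mu.Lock()
	defer b.mu.Unlock()
	out := b.buf
	b.buf = nil
	b.notFull.Broadcast()
	return out
}

// Close releases any writers still waiting on a full buffer.
func (b *blockingBuffer) Close() {
	b.mu.Lock()
	b.closed = true
	b.notFull.Broadcast()
	b.mu.Unlock()
}

func main() {
	b := newBlockingBuffer(8)
	b.Write([]byte("line1\n"))
	fmt.Printf("%q\n", b.Fetch()) // "line1\n"
	b.Close()
}
```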
type FileSinkConf ¶
type FileSinkConf struct {
Dst string `json:"dst"`
}
type LogCollector ¶
type LogCollector struct {
// contains filtered or unexported fields
}
func NewLogCollector ¶
func NewLogCollector() *LogCollector
TODO(link.xk): add some option: filter interval, buffer size
func (*LogCollector) Close ¶
func (lc *LogCollector) Close() error
TODO(link.xk): make sure all resource has been released
func (*LogCollector) Init ¶
func (lc *LogCollector) Init(conf LogConf) error
Init requires that the log files already exist; otherwise Init returns an error and no logs will be collected. Once log files are registered, they cannot be changed or added to. Log paths are registered only during Init.
func (*LogCollector) Join ¶
func (lc *LogCollector) Join()
func (*LogCollector) Run ¶
func (lc *LogCollector) Run() error
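The Init/Run/Join/Close methods above describe a common goroutine lifecycle: Run starts a worker, Close signals it to stop, and Join blocks until it exits. A hypothetical, self-contained sketch of that lifecycle (not the actual LogCollector):

```go
package main

import (
	"fmt"
	"sync"
)

// collector illustrates the Init/Run/Close/Join lifecycle: Run starts
// a worker goroutine, Close asks it to stop (draining pending events
// first), and Join waits for it to finish.
type collector struct {
	events chan string
	done   chan struct{}
	wg     sync.WaitGroup
}

func (c *collector) Init() error {
	c.events = make(chan string, 16)
	c.done = make(chan struct{})
	return nil
}

func (c *collector) Run() error {
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case e := <-c.events:
				fmt.Println("collected:", e)
			case <-c.done:
				// drain anything still queued, then exit
				for {
					select {
					case e := <-c.events:
						fmt.Println("collected:", e)
					default:
						return
					}
				}
			}
		}
	}()
	return nil
}

func (c *collector) Close() error { close(c.done); return nil }
func (c *collector) Join()        { c.wg.Wait() }

func main() {
	var c collector
	c.Init()
	c.Run()
	c.events <- "app.log modified"
	c.Close()
	c.Join() // returns once the worker has exited
}
```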
type LogConf ¶
type LogConf struct {
	Dir     string           `json:"dir"`
	Pattern string           `json:"pattern"`
	LineSep string           `json:"lineSeperator"`
	Sink    SinkConf         `json:"sink"`
	Watcher LogWatcherOption `json:"watcher"`
}
func ReadLogCollectorConf ¶
type LogFileEvent ¶
type LogFileEvent uint64
func (LogFileEvent) String ¶
func (e LogFileEvent) String() string
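The constants above assign each event kind its own bit, so several kinds can be OR-ed into one LogFileEvent value and String can decode whichever bits are set. A sketch of that bit-flag pattern (the decoding logic and lowercase names are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// logFileEvent mirrors the bit-flag layout of LogFileEvent: one bit
// per event kind, assigned with a shifted iota.
type logFileEvent uint64

const (
	eventRenameRotate logFileEvent = 1 << iota
	eventModify
	eventChmod
	eventRemove
)

// String decodes every set bit into a name, joined with "|".
func (e logFileEvent) String() string {
	flags := []struct {
		bit  logFileEvent
		name string
	}{
		{eventRenameRotate, "RenameRotate"},
		{eventModify, "Modify"},
		{eventChmod, "Chmod"},
		{eventRemove, "Remove"},
	}
	var parts []string
	for _, f := range flags {
		if e&f.bit != 0 {
			parts = append(parts, f.name)
		}
	}
	if len(parts) == 0 {
		return "None"
	}
	return strings.Join(parts, "|")
}

func main() {
	e := eventModify | eventChmod
	fmt.Println(e) // Modify|Chmod
}
```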
type LogWatchOption ¶
type LogWatcher ¶
type LogWatcher struct {
	EventC chan *Event
	// contains filtered or unexported fields
}
LogWatcher watches log files based on fsnotify. It mainly consists of an event watcher and an event-transform pattern that turns fsnotify events into LogFileEvents. Basically, it listens for inotify events on log files, transforms them into LogFileEvents, and passes them to the upper collector to handle. How often an event is sent is determined by filterInterval and by changes in the event kind. It manages two goroutines, one event-driven and one polling. The event-driven goroutine handles events from fsnotify.Watcher, while the poller goroutine, which is triggered much less frequently, handles operations that need to resume after some time, such as a file having been removed by mistake.
Generally speaking, a log file is watched as follows. Step 1: register files and add them to the fsnotify watcher. The file pattern supports glob. Note that a log file's path is fixed once determined; any later operation involving the log's path uses that determined path rather than re-searching for files. A Discover event is generated for each determined log file. Step 2: watch events. Events are passed to LogWatcher via a channel. LogWatcher transforms each raw event into a LogFileEvent based on an event pattern (Linux only for now) or the file's status. In particular, when a file is removed we consider it a mistake and put the path into the poller queue so that it will be re-watched when the poller triggers. Step 3: send the event.
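Steps 2 and 3 above can be sketched as a small transform plus a poller queue: raw watcher events map to log-file events, and a removed path is queued for re-watching instead of being dropped. The event names and transform rules here are simplified assumptions, not the package's actual mapping:

```go
package main

import "fmt"

// rawEvent stands in for an fsnotify-style event.
type rawEvent struct {
	path string
	op   string // "WRITE", "RENAME", "CHMOD", "REMOVE"
}

type watcher struct {
	pollQueue []string // paths to re-watch on the next poll tick
}

// transform maps a raw event to a log-file event name; a REMOVE is
// treated as a mistake, so the path is queued for the poller.
func (w *watcher) transform(ev rawEvent) string {
	switch ev.op {
	case "WRITE":
		return "LogFileModify"
	case "RENAME":
		return "LogFileRenameRotate"
	case "CHMOD":
		return "LogFileChomd"
	case "REMOVE":
		w.pollQueue = append(w.pollQueue, ev.path)
		return "LogFileRemove"
	}
	return "Unknown"
}

// poll re-watches every queued path (here it only reports them).
func (w *watcher) poll() {
	for _, p := range w.pollQueue {
		fmt.Println("re-watching", p)
	}
	w.pollQueue = nil
}

func main() {
	var w watcher
	fmt.Println(w.transform(rawEvent{"app.log", "WRITE"}))  // LogFileModify
	fmt.Println(w.transform(rawEvent{"app.log", "REMOVE"})) // LogFileRemove
	w.poll() // re-watching app.log
}
```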
func NewLogWatcher ¶
func NewLogWatcher(option *LogWatchOption) *LogWatcher
func (*LogWatcher) Close ¶
func (lw *LogWatcher) Close()
func (*LogWatcher) Init ¶
func (lw *LogWatcher) Init() error
func (*LogWatcher) RegisterAndWatch ¶
func (lw *LogWatcher) RegisterAndWatch(dir, pattern string) error
RegisterAndWatch finds matching files, then registers and watches them.
func (*LogWatcher) RunEventHandler ¶
func (lw *LogWatcher) RunEventHandler()
RunEventHandler starts the event-handler goroutine.
type LogWatcherOption ¶
type Sink ¶
func NewFileSink ¶
func NewFileSink() Sink