ratchet: github.com/dailyburn/ratchet/processors

package processors

import "github.com/dailyburn/ratchet/processors"

Package processors contains built-in DataProcessor implementations that are generic and potentially useful across any ETL project.

Package Files

big_query_reader.go big_query_writer.go csv_transformer.go csv_writer.go doc.go file_reader.go ftp_writer.go func_transformer.go http_request.go io_reader.go io_reader_writer.go io_writer.go passthrough.go regexp_matcher.go s3_reader.go s3_writer.go scp.go sftp_reader.go sftp_writer.go sql_executor.go sql_reader.go sql_reader_writer.go sql_writer.go

type BigQueryConfig Uses

type BigQueryConfig struct {
    JsonPemPath string
    ProjectID   string
    DatasetID   string
}

BigQueryConfig holds the configuration used when initializing new BigQueryReader and BigQueryWriter instances.

type BigQueryReader Uses

type BigQueryReader struct {
    PageSize         int    // defaults to 5000
    AggregateResults bool   // determines whether to send data as soon as available or to aggregate and send all query results, defaults to false
    UnflattenResults bool   // defaults to false
    TmpTableName     string // Used when UnflattenResults is true. Defaults to "_ratchet_tmp"
    ConcurrencyLevel int    // See ConcurrentDataProcessor
    // contains filtered or unexported fields
}

BigQueryReader is used to query data from Google's BigQuery, and it behaves similarly to SQLReader. See the SQLReader docs for an explanation of static vs dynamic querying.

Note: If your data set contains nested/repeated fields, you will likely want to get results back "unflattened." By default BigQuery returns results in a flattened format, which duplicates rows for each repeated value. This can be annoying to deal with, so BigQueryReader provides an "UnflattenResults" flag that will handle querying in such a way that you get back unflattened results. This involves using a temporary table setting and a couple of other special query settings - read the BigQuery docs related to flattening and repeated fields for more info.
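
For illustration, here is a minimal sketch of a static-query pipeline that dumps query results to stdout. It assumes the top-level ratchet package's NewPipeline/Run wiring (as shown in the project README); the credentials path, project, dataset, and query are placeholders.

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    config := &processors.BigQueryConfig{
        JsonPemPath: "/path/to/service-account.json", // placeholder
        ProjectID:   "my-project",
        DatasetID:   "my_dataset",
    }

    // Static query; UnflattenResults helps when the table has nested/repeated fields.
    read := processors.NewBigQueryReader(config, "SELECT id, payload FROM my_dataset.events LIMIT 1000")
    read.UnflattenResults = true
    read.PageSize = 1000

    // Send each batch of results to stdout.
    write := processors.NewIoWriter(os.Stdout)
    write.AddNewline = true

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}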

func NewBigQueryReader Uses

func NewBigQueryReader(config *BigQueryConfig, query string) *BigQueryReader

NewBigQueryReader returns an instance of a BigQueryReader ready to run a static query.

func NewDynamicBigQueryReader Uses

func NewDynamicBigQueryReader(config *BigQueryConfig, sqlGenerator func(data.JSON) (string, error)) *BigQueryReader

NewDynamicBigQueryReader returns an instance of a BigQueryReader ready to run a dynamic query based on the sqlGenerator function.

func (*BigQueryReader) Concurrency Uses

func (r *BigQueryReader) Concurrency() int

Concurrency defers to ConcurrentDataProcessor

func (*BigQueryReader) Finish Uses

func (r *BigQueryReader) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*BigQueryReader) ForEachQueryData Uses

func (r *BigQueryReader) ForEachQueryData(d data.JSON, killChan chan error, forEach func(d data.JSON))

ForEachQueryData handles generating the SQL (in case of dynamic mode), running the query and retrieving the data in data.JSON format, and then passing the results back with the function call to forEach.

func (*BigQueryReader) ProcessData Uses

func (r *BigQueryReader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData defers to ForEachQueryData

func (*BigQueryReader) String Uses

func (r *BigQueryReader) String() string

type BigQueryWriter Uses

type BigQueryWriter struct {
    ConcurrencyLevel int // See ConcurrentDataProcessor
    // contains filtered or unexported fields
}

BigQueryWriter is used to write data to Google's BigQuery. If the table you want to write to already exists, use NewBigQueryWriter; otherwise use NewBigQueryWriterForNewTable and the desired table structure will be created when the client is initialized.
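
A hedged sketch of copying query results into a table that does not exist yet. The field-map semantics (column name mapped to BigQuery column type) are an assumption, as are the placeholder config values and the ratchet.NewPipeline/Run wiring.

package main

import (
    "log"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    config := &processors.BigQueryConfig{
        JsonPemPath: "/path/to/service-account.json", // placeholder
        ProjectID:   "my-project",
        DatasetID:   "my_dataset",
    }

    read := processors.NewBigQueryReader(config, "SELECT id, name, created_at FROM my_dataset.users")

    // The destination table does not exist yet, so describe its schema up front.
    // Assumption: keys are column names and values are BigQuery column types.
    fields := map[string]string{
        "id":         "INTEGER",
        "name":       "STRING",
        "created_at": "TIMESTAMP",
    }
    write := processors.NewBigQueryWriterForNewTable(config, "users_copy", fields)

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}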

func NewBigQueryWriter Uses

func NewBigQueryWriter(config *BigQueryConfig, tableName string) *BigQueryWriter

NewBigQueryWriter instantiates a new instance of BigQueryWriter

func NewBigQueryWriterForNewTable Uses

func NewBigQueryWriterForNewTable(config *BigQueryConfig, tableName string, fields map[string]string) *BigQueryWriter

NewBigQueryWriterForNewTable instantiates a new instance of BigQueryWriter and prepares to write results to a new table

func (*BigQueryWriter) Concurrency Uses

func (w *BigQueryWriter) Concurrency() int

Concurrency delegates to ConcurrentDataProcessor

func (*BigQueryWriter) Finish Uses

func (w *BigQueryWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*BigQueryWriter) ProcessData Uses

func (w *BigQueryWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData defers to WriteBatch

func (*BigQueryWriter) String Uses

func (w *BigQueryWriter) String() string

func (*BigQueryWriter) WriteBatch Uses

func (w *BigQueryWriter) WriteBatch(queuedRows []map[string]interface{}) (err error)

WriteBatch inserts the supplied data into BigQuery

type CSVTransformer Uses

type CSVTransformer struct {
    Parameters util.CSVParameters
}

CSVTransformer converts data.JSON objects into a CSV string object and sends it on to the next stage. In use-cases where you simply want to write to a CSV file, use CSVWriter instead.

CSVTransformer is for more complex use-cases where you need to generate CSV data and perhaps send it to multiple output stages.

func NewCSVTransformer Uses

func NewCSVTransformer() *CSVTransformer

NewCSVTransformer returns a new CSVTransformer.

func (*CSVTransformer) Finish Uses

func (w *CSVTransformer) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*CSVTransformer) ProcessData Uses

func (w *CSVTransformer) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData defers to util.CSVProcess

func (*CSVTransformer) String Uses

func (w *CSVTransformer) String() string

type CSVWriter Uses

type CSVWriter struct {
    Parameters util.CSVParameters
}

CSVWriter handles converting data.JSON objects into CSV format and writing them to the given io.Writer. The data must be a valid JSON object or a slice of valid JSON objects. If you already have data formatted as a CSV string, you can use an IoWriter instead.
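
A minimal sketch of the common SQL-to-CSV case: a SQLReader feeding a CSVWriter that writes to a local file. The MySQL driver and connection string are hypothetical stand-ins, and the wiring again assumes ratchet.NewPipeline/Run.

package main

import (
    "database/sql"
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
    _ "github.com/go-sql-driver/mysql" // hypothetical driver choice
)

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/app")
    if err != nil {
        log.Fatal(err)
    }

    out, err := os.Create("users.csv")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    read := processors.NewSQLReader(db, "SELECT id, email FROM users")
    write := processors.NewCSVWriter(out) // JSON objects are converted to CSV rows

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}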

func NewCSVWriter Uses

func NewCSVWriter(w io.Writer) *CSVWriter

NewCSVWriter returns a new CSVWriter wrapping the given io.Writer object

func (*CSVWriter) Finish Uses

func (w *CSVWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*CSVWriter) ProcessData Uses

func (w *CSVWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData defers to util.CSVProcess

func (*CSVWriter) String Uses

func (w *CSVWriter) String() string

type FileReader Uses

type FileReader struct {
    // contains filtered or unexported fields
}

FileReader opens and reads the contents of the given filename.

func NewFileReader Uses

func NewFileReader(filename string) *FileReader

NewFileReader returns a new FileReader that will read the entire contents of the given file path and send it at once. For buffered or line-by-line reading try using IoReader.

func (*FileReader) Finish Uses

func (r *FileReader) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*FileReader) ProcessData Uses

func (r *FileReader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData reads a file and sends its contents to outputChan

func (*FileReader) String Uses

func (r *FileReader) String() string

type FtpWriter Uses

type FtpWriter struct {
    // contains filtered or unexported fields
}

FtpWriter represents an FTP writer processor.

func NewFtpWriter Uses

func NewFtpWriter(host, username, password, path string) *FtpWriter

NewFtpWriter instantiates a new instance of FtpWriter

func (*FtpWriter) Finish Uses

func (f *FtpWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish closes open references to the remote file and server

func (*FtpWriter) ProcessData Uses

func (f *FtpWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData writes data as is directly to the output file

func (*FtpWriter) String Uses

func (f *FtpWriter) String() string

type FuncTransformer Uses

type FuncTransformer struct {
    Name             string // can be set for more useful log output
    ConcurrencyLevel int    // See ConcurrentDataProcessor
    // contains filtered or unexported fields
}

FuncTransformer executes the given function on each data payload, sending the resulting data to the next stage.

While FuncTransformer is useful for simple data transformation, more complicated tasks justify building a custom implementation of DataProcessor.
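
For example, here is a sketch of a line-oriented transformation that tags each JSON object flowing through the pipeline. It assumes data.JSON has a []byte underlying type (per the ratchet/data package) and the same ratchet.NewPipeline/Run wiring as the earlier sketches.

package main

import (
    "encoding/json"
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/data"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    read := processors.NewIoReader(os.Stdin) // LineByLine defaults to true

    transform := processors.NewFuncTransformer(func(d data.JSON) data.JSON {
        var row map[string]interface{}
        if err := json.Unmarshal(d, &row); err != nil {
            return d // pass non-object payloads through untouched
        }
        row["processed"] = true
        out, _ := json.Marshal(row)
        return data.JSON(out)
    })
    transform.Name = "mark-processed" // shows up in log output

    write := processors.NewIoWriter(os.Stdout)
    write.AddNewline = true

    pipeline := ratchet.NewPipeline(read, transform, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}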

func NewFuncTransformer Uses

func NewFuncTransformer(transform func(d data.JSON) data.JSON) *FuncTransformer

NewFuncTransformer instantiates a new instance of FuncTransformer

func (*FuncTransformer) Concurrency Uses

func (t *FuncTransformer) Concurrency() int

Concurrency defers to ConcurrentDataProcessor

func (*FuncTransformer) Finish Uses

func (t *FuncTransformer) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*FuncTransformer) ProcessData Uses

func (t *FuncTransformer) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData runs the supplied func and sends the returned value to outputChan

func (*FuncTransformer) String Uses

func (t *FuncTransformer) String() string

type HTTPRequest Uses

type HTTPRequest struct {
    Request *http.Request
    Client  *http.Client
}

HTTPRequest executes an HTTP request and passes along the response body. It simply wraps an http.Request and http.Client object. See the net/http docs for more info: https://golang.org/pkg/net/http
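
A short sketch that fetches a URL and writes the response body to stdout; the endpoint is a placeholder and the wiring assumes ratchet.NewPipeline/Run as before.

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    req, err := processors.NewHTTPRequest("GET", "https://api.example.com/items", nil)
    if err != nil {
        log.Fatal(err)
    }
    // Request and Client are exported, so headers, auth, timeouts, etc.
    // can be adjusted before the pipeline runs.
    req.Request.Header.Set("Accept", "application/json")

    write := processors.NewIoWriter(os.Stdout)

    pipeline := ratchet.NewPipeline(req, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}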

func NewHTTPRequest Uses

func NewHTTPRequest(method, url string, body io.Reader) (*HTTPRequest, error)

NewHTTPRequest creates a new HTTPRequest and is essentially wrapping net/http's NewRequest function. See https://golang.org/pkg/net/http/#NewRequest

func (*HTTPRequest) Finish Uses

func (r *HTTPRequest) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*HTTPRequest) ProcessData Uses

func (r *HTTPRequest) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData sends data to outputChan if the response body is not null

func (*HTTPRequest) String Uses

func (r *HTTPRequest) String() string

type IoReader Uses

type IoReader struct {
    Reader     io.Reader
    LineByLine bool // defaults to true
    BufferSize int
    Gzipped    bool
}

IoReader wraps an io.Reader and reads it.
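
A sketch showing a typical configuration: reading a gzipped log file line by line and echoing it to stdout. The file name is a placeholder and the pipeline wiring assumes ratchet.NewPipeline/Run.

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    f, err := os.Open("events.log.gz") // placeholder path
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    read := processors.NewIoReader(f)
    read.Gzipped = true    // decompress before reading
    read.LineByLine = true // one data payload per line (the default)

    write := processors.NewIoWriter(os.Stdout)
    write.AddNewline = true

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}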

func NewIoReader Uses

func NewIoReader(reader io.Reader) *IoReader

NewIoReader returns a new IoReader wrapping the given io.Reader object.

func (*IoReader) Finish Uses

func (r *IoReader) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*IoReader) ForEachData Uses

func (r *IoReader) ForEachData(killChan chan error, foo func(d data.JSON))

ForEachData either reads by line or by buffered stream, sending the data back to the anonymous func that ultimately shoves it onto the outputChan

func (*IoReader) ProcessData Uses

func (r *IoReader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData overwrites the reader if the content is Gzipped, then defers to ForEachData

func (*IoReader) String Uses

func (r *IoReader) String() string

type IoReaderWriter Uses

type IoReaderWriter struct {
    IoReader
    IoWriter
}

IoReaderWriter performs both the job of an IoReader and an IoWriter. It will read data from the given io.Reader, write the resulting data to the given io.Writer, and (if the write was successful) send the data to the next stage of processing.

IoReaderWriter is composed of both an IoReader and an IoWriter, so it supports all of the same properties and usage options.

func NewIoReaderWriter Uses

func NewIoReaderWriter(reader io.Reader, writer io.Writer) *IoReaderWriter

NewIoReaderWriter returns a new IoReaderWriter wrapping the given io.Reader object

func (*IoReaderWriter) Finish Uses

func (r *IoReaderWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*IoReaderWriter) ProcessData Uses

func (r *IoReaderWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData grabs data from IoReader.ForEachData, then sends it to IoWriter.ProcessData in addition to sending it on to the next stage via the outputChan

func (*IoReaderWriter) String Uses

func (r *IoReaderWriter) String() string

type IoWriter Uses

type IoWriter struct {
    Writer     io.Writer
    AddNewline bool
}

IoWriter wraps any io.Writer object. It can be used to write data out to a File, os.Stdout, or any other task that can be supported via io.Writer.

func NewIoWriter Uses

func NewIoWriter(writer io.Writer) *IoWriter

NewIoWriter returns a new IoWriter wrapping the given io.Writer object

func (*IoWriter) Finish Uses

func (w *IoWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*IoWriter) ProcessData Uses

func (w *IoWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData writes the data

func (*IoWriter) String Uses

func (w *IoWriter) String() string

type Passthrough Uses

type Passthrough struct {
    // contains filtered or unexported fields
}

Passthrough simply passes the data on to the next stage. A placeholder field is required - if the struct were left empty, it would have comparison and memory-addressing properties that are undesirable and cause comparison bugs (see: http://dave.cheney.net/2014/03/25/the-empty-struct)

func NewPassthrough Uses

func NewPassthrough() *Passthrough

NewPassthrough instantiates a new instance of Passthrough

func (*Passthrough) Finish Uses

func (r *Passthrough) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*Passthrough) ProcessData Uses

func (r *Passthrough) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData blindly sends whatever it receives to the outputChan

func (*Passthrough) String Uses

func (r *Passthrough) String() string

type RegexpMatcher Uses

type RegexpMatcher struct {

    // Set to true to log each match attempt (logger must be in debug mode).
    DebugLog bool
    // contains filtered or unexported fields
}

RegexpMatcher checks if incoming data matches the given Regexp, and sends it on to the next stage only if it matches. It uses regexp.Match under the covers: https://golang.org/pkg/regexp/#Match
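
For instance, a sketch of a grep-like pipeline that only forwards lines containing "ERROR" (wired with the same assumed ratchet.NewPipeline/Run calls as the earlier sketches).

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    read := processors.NewIoReader(os.Stdin)      // line-by-line by default
    match := processors.NewRegexpMatcher("ERROR") // non-matching lines are dropped
    write := processors.NewIoWriter(os.Stdout)
    write.AddNewline = true

    pipeline := ratchet.NewPipeline(read, match, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}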

func NewRegexpMatcher Uses

func NewRegexpMatcher(pattern string) *RegexpMatcher

NewRegexpMatcher returns a new RegexpMatcher initialized with the given pattern to match.

func (*RegexpMatcher) Finish Uses

func (r *RegexpMatcher) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*RegexpMatcher) ProcessData Uses

func (r *RegexpMatcher) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData sends the data it receives to the outputChan only if it matches the supplied regex

func (*RegexpMatcher) String Uses

func (r *RegexpMatcher) String() string

type S3Reader Uses

type S3Reader struct {
    IoReader // embeds IoReader

    DeleteObjects bool
    // contains filtered or unexported fields
}

S3Reader handles retrieving objects from S3. Use NewS3ObjectReader to read a single object, or NewS3PrefixReader to read all objects matching the same prefix in your bucket. S3Reader embeds an IoReader, so it supports the same configuration options as IoReader.
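
A sketch of a prefix read; the bucket, region, and prefix are placeholders, credentials come from environment variables here, and the pipeline wiring assumes ratchet.NewPipeline/Run.

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    read := processors.NewS3PrefixReader(
        os.Getenv("AWS_ACCESS_KEY_ID"),
        os.Getenv("AWS_SECRET_ACCESS_KEY"),
        "us-east-1",
        "my-bucket",
        "exports/2016/08/", // every object under this prefix is read
    )
    read.LineByLine = false   // embedded IoReader options apply; send each object whole
    read.DeleteObjects = true // remove objects after their contents have been sent on

    write := processors.NewIoWriter(os.Stdout)

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}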

func NewS3ObjectReader Uses

func NewS3ObjectReader(awsID, awsSecret, awsRegion, bucket, object string) *S3Reader

NewS3ObjectReader reads a single object from the given S3 bucket

func NewS3PrefixReader Uses

func NewS3PrefixReader(awsID, awsSecret, awsRegion, bucket, prefix string) *S3Reader

NewS3PrefixReader reads all objects from the given S3 bucket that match a prefix. See http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html. The S3 delimiter will be "/".

func (*S3Reader) Finish Uses

func (r *S3Reader) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*S3Reader) ProcessData Uses

func (r *S3Reader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData reads an entire directory if a prefix is provided (sending each file in that directory to outputChan), or just sends the single file to outputChan if a complete file path is provided (not a prefix/directory).

It optionally deletes all processed objects once the contents have been sent to outputChan

func (*S3Reader) String Uses

func (r *S3Reader) String() string

type S3Writer Uses

type S3Writer struct {
    Compress      bool
    LineSeparator string
    // contains filtered or unexported fields
}

S3Writer uploads data to S3. By default, we will not compress data before sending it. Set the `Compress` flag to true to use gzip compression before storing in S3 (if this flag is set to true, ".gz" will automatically be appended to the key name specified).

By default, we will separate each iteration of data sent to `ProcessData` with a newline when piecing it back together to send to S3. Change the `LineSeparator` attribute to change this behavior.
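
A sketch that uploads a local file to S3 with compression enabled; bucket, key, and file name are placeholders, and the wiring assumes ratchet.NewPipeline/Run.

package main

import (
    "log"
    "os"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    read := processors.NewFileReader("report.json") // placeholder path

    write := processors.NewS3Writer(
        os.Getenv("AWS_ACCESS_KEY_ID"),
        os.Getenv("AWS_SECRET_ACCESS_KEY"),
        "us-east-1",
        "my-bucket",
        "exports/report.json",
    )
    write.Compress = true // the object is stored as exports/report.json.gz

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}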

func NewS3Writer Uses

func NewS3Writer(awsID, awsSecret, awsRegion, bucket, key string) *S3Writer

NewS3Writer instantiates a new S3Writer

func (*S3Writer) Finish Uses

func (w *S3Writer) Finish(outputChan chan data.JSON, killChan chan error)

Finish writes all enqueued data to S3, deferring to util.WriteS3Object

func (*S3Writer) ProcessData Uses

func (w *S3Writer) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData enqueues all received data

func (*S3Writer) String Uses

func (w *S3Writer) String() string

type SCP Uses

type SCP struct {
    Port        string // e.g., "2222" -- only set for non-standard ports
    Object      string // e.g., "/path/to/file.txt"
    Destination string // e.g., "user@host:/path/to/destination/"
}

SCP executes the scp command, sending the given file to the given destination.

func NewSCP Uses

func NewSCP(obj string, destination string) *SCP

NewSCP instantiates a new instance of SCP

func (*SCP) Finish Uses

func (s *SCP) Finish(outputChan chan data.JSON, killChan chan error)

Finish defers to Run

func (*SCP) ProcessData Uses

func (s *SCP) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData sends all data to outputChan

func (*SCP) Run Uses

func (s *SCP) Run(killChan chan error)

Run executes the scp command from the attributes of the SCP struct

type SQLExecutor Uses

type SQLExecutor struct {
    // contains filtered or unexported fields
}

SQLExecutor runs the given SQL and swallows any returned data.

It can operate in 2 modes: 1) Static - runs the given SQL query and ignores any received data. 2) Dynamic - generates a SQL query for each data payload it receives.

The dynamic SQL generation is implemented by passing in a "sqlGenerator" function to NewDynamicSQLExecutor. This allows you to write whatever code is needed to generate SQL based upon data flowing through the pipeline.
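
As an illustration of dynamic mode, here is a sketch where a SQLReader drives per-payload DELETE statements. It assumes each payload is a JSON array of row objects (reflecting SQLReader's batching), a hypothetical Postgres driver, and the ratchet.NewPipeline/Run wiring used in the other sketches.

package main

import (
    "database/sql"
    "encoding/json"
    "errors"
    "fmt"
    "log"
    "strings"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/data"
    "github.com/dailyburn/ratchet/processors"
    _ "github.com/lib/pq" // hypothetical driver choice
)

func main() {
    db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    // Read the IDs to act on...
    read := processors.NewSQLReader(db, "SELECT id FROM stale_users")

    // ...and generate a DELETE for each batch of rows received.
    del := processors.NewDynamicSQLExecutor(db, func(d data.JSON) (string, error) {
        var rows []struct {
            ID int `json:"id"`
        }
        if err := json.Unmarshal(d, &rows); err != nil {
            return "", err
        }
        if len(rows) == 0 {
            // A returned error is presumably routed to the pipeline's kill channel.
            return "", errors.New("empty batch")
        }
        ids := make([]string, len(rows))
        for i, r := range rows {
            ids[i] = fmt.Sprintf("%d", r.ID)
        }
        return "DELETE FROM sessions WHERE user_id IN (" + strings.Join(ids, ",") + ")", nil
    })

    pipeline := ratchet.NewPipeline(read, del)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}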

func NewDynamicSQLExecutor Uses

func NewDynamicSQLExecutor(dbConn *sql.DB, sqlGenerator func(data.JSON) (string, error)) *SQLExecutor

NewDynamicSQLExecutor returns a new SQLExecutor operating in dynamic mode.

func NewSQLExecutor Uses

func NewSQLExecutor(dbConn *sql.DB, sql string) *SQLExecutor

NewSQLExecutor returns a new SQLExecutor

func (*SQLExecutor) Finish Uses

func (s *SQLExecutor) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*SQLExecutor) ProcessData Uses

func (s *SQLExecutor) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData runs the SQL statements, deferring to util.ExecuteSQLQuery

func (*SQLExecutor) String Uses

func (s *SQLExecutor) String() string

type SQLReader Uses

type SQLReader struct {
    BatchSize         int
    StructDestination interface{}
    ConcurrencyLevel  int // See ConcurrentDataProcessor
    // contains filtered or unexported fields
}

SQLReader runs the given SQL and passes the resulting data to the next stage of processing.

It can operate in 2 modes: 1) Static - runs the given SQL query and ignores any received data. 2) Dynamic - generates a SQL query for each data payload it receives.

The dynamic SQL generation is implemented by passing in a "sqlGenerator" function to NewDynamicSQLReader. This allows you to write whatever code is needed to generate SQL based upon data flowing through the pipeline.
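
To make the two modes concrete, here is a sketch of one reader per mode, written as a library-style snippet rather than a full pipeline. The assumption that each payload handed to the dynamic generator is a single JSON object with an "id" key, and that BatchSize controls rows per payload, is for illustration only.

package readers

import (
    "database/sql"
    "encoding/json"
    "fmt"

    "github.com/dailyburn/ratchet/data"
    "github.com/dailyburn/ratchet/processors"
)

// BuildReaders shows both constructors; wiring into a pipeline is omitted.
func BuildReaders(db *sql.DB) (*processors.SQLReader, *processors.SQLReader) {
    // Static mode: one fixed query.
    users := processors.NewSQLReader(db, "SELECT id, email FROM users")
    users.BatchSize = 500 // assumed to control rows per data.JSON payload

    // Dynamic mode: a fresh query is generated for every payload received.
    orders := processors.NewDynamicSQLReader(db, func(d data.JSON) (string, error) {
        var u struct {
            ID int `json:"id"`
        }
        if err := json.Unmarshal(d, &u); err != nil {
            return "", err
        }
        return fmt.Sprintf("SELECT * FROM orders WHERE user_id = %d", u.ID), nil
    })

    return users, orders
}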

func NewDynamicSQLReader Uses

func NewDynamicSQLReader(dbConn *sql.DB, sqlGenerator func(data.JSON) (string, error)) *SQLReader

NewDynamicSQLReader returns a new SQLReader operating in dynamic mode.

func NewSQLReader Uses

func NewSQLReader(dbConn *sql.DB, sql string) *SQLReader

NewSQLReader returns a new SQLReader operating in static mode.

func (*SQLReader) Concurrency Uses

func (s *SQLReader) Concurrency() int

Concurrency defers to ConcurrentDataProcessor

func (*SQLReader) Finish Uses

func (s *SQLReader) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*SQLReader) ForEachQueryData Uses

func (s *SQLReader) ForEachQueryData(d data.JSON, killChan chan error, forEach func(d data.JSON))

ForEachQueryData handles generating the SQL (in case of dynamic mode), running the query and retrieving the data in data.JSON format, and then passing the results back with the function call to forEach.

func (*SQLReader) ProcessData Uses

func (s *SQLReader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData - see interface for documentation.

func (*SQLReader) String Uses

func (s *SQLReader) String() string

type SQLReaderWriter Uses

type SQLReaderWriter struct {
    SQLReader
    SQLWriter
    ConcurrencyLevel int // See ConcurrentDataProcessor
}

SQLReaderWriter performs both the job of a SQLReader and SQLWriter. This means it will run a SQL query, write the resulting data into a SQL database, and (if the write was successful) send the queried data to the next stage of processing.

SQLReaderWriter is composed of both a SQLReader and SQLWriter, so it supports all of the same properties and usage options (such as static versus dynamic SQL querying).

func NewDynamicSQLReaderWriter Uses

func NewDynamicSQLReaderWriter(readConn *sql.DB, writeConn *sql.DB, sqlGenerator func(data.JSON) (string, error), writeTable string) *SQLReaderWriter

NewDynamicSQLReaderWriter returns a new SQLReaderWriter ready for dynamic querying.

func NewSQLReaderWriter Uses

func NewSQLReaderWriter(readConn *sql.DB, writeConn *sql.DB, readQuery, writeTable string) *SQLReaderWriter

NewSQLReaderWriter returns a new SQLReaderWriter ready for static querying.

func (*SQLReaderWriter) Concurrency Uses

func (s *SQLReaderWriter) Concurrency() int

Concurrency defers to ConcurrentDataProcessor

func (*SQLReaderWriter) Finish Uses

func (s *SQLReaderWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*SQLReaderWriter) ProcessData Uses

func (s *SQLReaderWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData uses SQLReader methods for processing data - this works via composition

func (*SQLReaderWriter) String Uses

func (s *SQLReaderWriter) String() string

type SQLWriter Uses

type SQLWriter struct {
    TableName        string
    OnDupKeyUpdate   bool
    OnDupKeyFields   []string
    ConcurrencyLevel int // See ConcurrentDataProcessor
    BatchSize        int
    // contains filtered or unexported fields
}

SQLWriter handles INSERTing data.JSON into a specified SQL table. If an error occurs while building or executing the INSERT, the error will be sent to the killChan.

Note that the data.JSON must be a valid JSON object or a slice of valid objects, where the keys are column names and the values are the SQL values to be inserted into those columns.

For use-cases where a SQLWriter instance needs to write to multiple tables, you can pass in SQLWriterData.
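
A sketch of table-to-table replication between two databases, using the batching and duplicate-key options above. The driver and DSNs are hypothetical, OnDupKeyUpdate is assumed to map to MySQL's ON DUPLICATE KEY UPDATE clause, and the wiring assumes ratchet.NewPipeline/Run.

package main

import (
    "database/sql"
    "log"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
    _ "github.com/go-sql-driver/mysql" // hypothetical driver choice
)

func main() {
    srcDB, err := sql.Open("mysql", "user:pass@tcp(src-host:3306)/app")
    if err != nil {
        log.Fatal(err)
    }
    dstDB, err := sql.Open("mysql", "user:pass@tcp(dst-host:3306)/warehouse")
    if err != nil {
        log.Fatal(err)
    }

    read := processors.NewSQLReader(srcDB, "SELECT id, email, updated_at FROM users")

    write := processors.NewSQLWriter(dstDB, "users")
    write.BatchSize = 100
    write.OnDupKeyUpdate = true // assumed to emit an ON DUPLICATE KEY UPDATE clause
    write.OnDupKeyFields = []string{"email", "updated_at"}

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}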

func NewSQLWriter Uses

func NewSQLWriter(db *sql.DB, tableName string) *SQLWriter

NewSQLWriter returns a new SQLWriter

func (*SQLWriter) Concurrency Uses

func (s *SQLWriter) Concurrency() int

Concurrency defers to ConcurrentDataProcessor

func (*SQLWriter) Finish Uses

func (s *SQLWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish - see interface for documentation.

func (*SQLWriter) ProcessData Uses

func (s *SQLWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData defers to util.SQLInsertData

func (*SQLWriter) String Uses

func (s *SQLWriter) String() string

type SQLWriterData Uses

type SQLWriterData struct {
    TableName  string      `json:"table_name"`
    InsertData interface{} `json:"insert_data"`
}

SQLWriterData is a custom data structure you can send into a SQLWriter stage if you need to specify TableName on a per-data payload basis. No extra configuration is needed to use SQLWriterData; each data payload received is first checked for this structure before processing.
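
One way to use it, sketched as a FuncTransformer that wraps each row so a downstream SQLWriter routes it to a per-type table. The routing convention (a "type" field choosing the table name) is purely illustrative, and the sketch assumes the wrapped struct is simply re-marshaled to JSON before being sent on.

package transforms

import (
    "encoding/json"

    "github.com/dailyburn/ratchet/data"
    "github.com/dailyburn/ratchet/processors"
)

// NewTableRouter wraps each incoming JSON object in SQLWriterData so a
// downstream SQLWriter inserts it into a table derived from its "type" field.
func NewTableRouter() *processors.FuncTransformer {
    return processors.NewFuncTransformer(func(d data.JSON) data.JSON {
        var row map[string]interface{}
        if err := json.Unmarshal(d, &row); err != nil {
            return d // leave non-object payloads alone
        }
        table, _ := row["type"].(string)
        if table == "" {
            table = "unknown"
        }
        wrapped := processors.SQLWriterData{
            TableName:  "events_" + table,
            InsertData: row,
        }
        out, _ := json.Marshal(wrapped)
        return data.JSON(out)
    })
}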

type SftpReader Uses

type SftpReader struct {
    IoReader // embeds IoReader

    DeleteObjects bool
    Walk          bool
    FileNamesOnly bool

    CloseOnFinish bool
    // contains filtered or unexported fields
}

SftpReader reads a single object at a given path, or walks through the directory specified by the path (SftpReader.Walk must be set to true).

To only send full paths (and not file contents), set FileNamesOnly to true. If FileNamesOnly is set to true, DeleteObjects will be ignored.
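
A sketch of walking a remote directory; the host, user, and path are placeholders, password auth is used only for brevity, and the host:port server string format is an assumption. Pipeline wiring again assumes ratchet.NewPipeline/Run.

package main

import (
    "log"
    "os"

    "golang.org/x/crypto/ssh"

    "github.com/dailyburn/ratchet"
    "github.com/dailyburn/ratchet/processors"
)

func main() {
    auth := []ssh.AuthMethod{ssh.Password(os.Getenv("SFTP_PASSWORD"))}

    // "sftp.example.com:22" assumes a host:port server string.
    read := processors.NewSftpReader("sftp.example.com:22", "deploy", "/exports/latest/", auth...)
    read.Walk = true           // descend into the directory at the given path
    read.FileNamesOnly = false // send file contents, not just paths

    write := processors.NewIoWriter(os.Stdout)
    write.AddNewline = true

    pipeline := ratchet.NewPipeline(read, write)
    if err := <-pipeline.Run(); err != nil {
        log.Fatal(err)
    }
}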

func NewSftpReader Uses

func NewSftpReader(server string, username string, path string, authMethods ...ssh.AuthMethod) *SftpReader

NewSftpReader instantiates a new SftpReader. A connection to the remote server is delayed until data is received by the reader. By default, the connection to the remote client will be closed in the Finish() func. Set CloseOnFinish to false to manage the connection manually.

func NewSftpReaderByClient Uses

func NewSftpReaderByClient(client *sftp.Client, path string) *SftpReader

NewSftpReaderByClient instantiates a new sftp reader using an existing connection to the remote server. By default, the connection to the remote client will *not* be closed in the Finish() func. Set CloseOnFinish to true to have this processor clean up the connection when it's done.

func (*SftpReader) CloseClient Uses

func (r *SftpReader) CloseClient()

CloseClient allows you to manually close the connection to the remote client (as the remote client itself is not exported)

func (*SftpReader) Finish Uses

func (r *SftpReader) Finish(outputChan chan data.JSON, killChan chan error)

Finish optionally closes open references to the remote server

func (*SftpReader) ProcessData Uses

func (r *SftpReader) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData optionally walks through the tree to send each object separately, or sends the single object on to the next stage

func (*SftpReader) String Uses

func (r *SftpReader) String() string

type SftpWriter Uses

type SftpWriter struct {
    CloseOnFinish bool
    // contains filtered or unexported fields
}

SftpWriter is an inline writer to a remote SFTP server

func NewSftpWriter Uses

func NewSftpWriter(server string, username string, path string, authMethods ...ssh.AuthMethod) *SftpWriter

NewSftpWriter instantiates a new SftpWriter. A connection to the remote server is delayed until data is received by the writer. By default, the connection to the remote client will be closed in the Finish() func. Set CloseOnFinish to false to manage the connection manually.

func NewSftpWriterByFile Uses

func NewSftpWriterByFile(file *sftp.File) *SftpWriter

NewSftpWriterByFile allows you to manually manage the connection to the remote file object. Use this if you want to write to the same file object across multiple pipelines. By default, the connection to the remote client will *not* be closed in the Finish() func. Set CloseOnFinish to true to have this processor clean up the connection when it's done.

func (*SftpWriter) Finish Uses

func (w *SftpWriter) Finish(outputChan chan data.JSON, killChan chan error)

Finish optionally closes open references to the remote file and server

func (*SftpWriter) ProcessData Uses

func (w *SftpWriter) ProcessData(d data.JSON, outputChan chan data.JSON, killChan chan error)

ProcessData writes data as is directly to the output file

func (*SftpWriter) String Uses

func (w *SftpWriter) String() string

Package processors imports 21 packages and is imported by 1 package. Updated 2016-08-25.