Documentation ¶
Index ¶
- type Target
- func (t *Target) Close(context.Context) error
- func (t *Target) InitializeRelation(ctx context.Context, relation *db.Relation, source io.Reader) error
- func (t *Target) String() string
- func (t *Target) VerifyRelation(ctx context.Context, relation *db.Relation) (bool, error)
- func (t *Target) Write(ctx context.Context, batch []*db.WalTransaction) error
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Target ¶
type Target struct {
// contains filtered or unexported fields
}
func (*Target) InitializeRelation ¶
func (t *Target) InitializeRelation(ctx context.Context, relation *db.Relation, source io.Reader) error
InitializeRelation creates a relation and populates it with initial data from source. To avoid leaving any relation in an intermediate state if interrupted by errors, the strategy is to create a table with `_SCRATCH` appended to the name and load data into that scratch table. When loading is complete, the table is atomically made live (by removing the `_SCRATCH` suffix). This would be simpler if it could be done in a transaction, but Snowflake ends any current transaction as soon as a DDL statement is executed (such as CREATE TABLE or ALTER TABLE). https://docs.snowflake.com/en/sql-reference/transactions.html#ddl
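The scratch-table strategy can be sketched as the DDL sequence below. This is illustrative, not the package's implementation: the helper names (`scratchName`, `initStatements`) and the column-list parameter are assumptions, and the actual data-loading step is elided. Because each DDL statement ends any open Snowflake transaction, safety comes from the ordering of statements (load fully, then rename) rather than from transactional rollback.

```go
package main

import (
	"fmt"
	"strings"
)

// scratchName returns the temporary name used while loading.
// (Hypothetical helper; the real package keeps these details unexported.)
func scratchName(table string) string {
	return table + "_SCRATCH"
}

// initStatements sketches the DDL sequence: create the scratch table,
// load the initial data into it (not shown), then make it live by
// renaming it. If loading fails, only the scratch table is left behind
// and the live name is never in an intermediate state.
func initStatements(table, columns string) []string {
	scratch := scratchName(table)
	return []string{
		fmt.Sprintf("CREATE OR REPLACE TABLE %s (%s)", scratch, columns),
		// ... COPY INTO / INSERT the initial data into the scratch table ...
		fmt.Sprintf("ALTER TABLE %s RENAME TO %s", scratch, table),
	}
}

func main() {
	fmt.Println(strings.Join(initStatements("items", "id NUMBER"), ";\n"))
}
```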
func (*Target) VerifyRelation ¶
func (t *Target) VerifyRelation(ctx context.Context, relation *db.Relation) (bool, error)
func (*Target) Write ¶
func (t *Target) Write(ctx context.Context, batch []*db.WalTransaction) error
Write pushes events from Postgres into Snowflake. Each batch is a list of complete Postgres transactions, so referential integrity is maintained after every complete batch. Each action affects exactly one row, even when a single statement issued in Postgres affected more than one row. For example, `INSERT INTO items(id) SELECT i FROM generate_series(0, 100) as t(i);` is one Postgres statement that inserts 101 rows; it would generate an Actions object with 101 individual Insert actions. To execute this efficiently in Snowflake, actions must be grouped so that many of them run as a single Snowflake statement.
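The grouping step described above can be sketched as follows. The `insertAction` struct and `groupInserts` function are hypothetical stand-ins (the package's real Actions type is unexported), and values are shown as pre-rendered SQL literals for brevity; the idea is simply that many single-row Insert actions against the same table collapse into one multi-row INSERT statement.

```go
package main

import (
	"fmt"
	"strings"
)

// insertAction is a stand-in for one single-row Insert action decoded
// from the WAL. (Illustrative; not the package's actual type.)
type insertAction struct {
	table  string
	values []string // one SQL literal per column
}

// groupInserts collapses many single-row inserts into one multi-row
// INSERT per table, so e.g. 101 WAL actions against "items" become a
// single Snowflake statement. Table order of first appearance is kept.
func groupInserts(actions []insertAction) []string {
	rowsByTable := map[string][]string{}
	var order []string
	for _, a := range actions {
		if _, seen := rowsByTable[a.table]; !seen {
			order = append(order, a.table)
		}
		row := "(" + strings.Join(a.values, ", ") + ")"
		rowsByTable[a.table] = append(rowsByTable[a.table], row)
	}
	stmts := make([]string, 0, len(order))
	for _, t := range order {
		stmts = append(stmts, fmt.Sprintf("INSERT INTO %s VALUES %s",
			t, strings.Join(rowsByTable[t], ", ")))
	}
	return stmts
}

func main() {
	actions := []insertAction{
		{"items", []string{"0"}},
		{"items", []string{"1"}},
		{"items", []string{"2"}},
	}
	fmt.Println(groupInserts(actions)[0])
	// INSERT INTO items VALUES (0), (1), (2)
}
```

A real implementation would also have to respect transaction boundaries within the batch and interleave Update and Delete actions correctly, which limits how far grouping can reorder work.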