pgx


README


pgx - PostgreSQL Driver and Toolkit

pgx is a pure Go driver and toolkit for PostgreSQL. pgx is different from other drivers such as pq because, while it can operate as a database/sql compatible driver, pgx is also usable directly. It offers a native interface similar to database/sql that provides better performance and more features.

var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Features

pgx supports many additional features beyond what is available through database/sql.

  • Support for approximately 60 different PostgreSQL types
  • Batch queries
  • Single-round trip query mode
  • Full TLS connection control
  • Binary format support for custom types (can be much faster)
  • Copy protocol support for faster bulk data loads
  • Extendable logging support including built-in support for log15 and logrus
  • Connection pool with after connect hook to do arbitrary connection setup
  • Listen / notify
  • PostgreSQL array to Go slice mapping for integers, floats, and strings
  • Hstore support
  • JSON and JSONB support
  • Maps inet and cidr PostgreSQL types to net.IPNet and net.IP
  • Large object support
  • NULL mapping to Null* struct or pointer to pointer.
  • Supports database/sql.Scanner and database/sql/driver.Valuer interfaces for custom types
  • Logical replication connections, including receiving WAL and sending standby status updates
  • Notice response handling (this is different from listen / notify)

Performance

pgx performs roughly equivalent to go-pg and is almost always faster than pq. When parsing large result sets the percentage difference can be significant (16483 queries/sec for pgx vs. 10106 queries/sec for pq -- 63% faster).

In many use cases a significant cause of latency is network round trips between the application and the server. pgx supports query batching to bundle multiple queries into a single round trip. Even in the case of a connection with the lowest possible latency, a local Unix domain socket, batching as few as three queries together can yield an improvement of 57%. With a typical network connection the results can be even more substantial.
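
For illustration, a batch might be used like this (a minimal sketch assuming a *pgx.Conn named conn and a hypothetical widgets table; imports omitted as in the other examples):

batch := conn.BeginBatch()
batch.Queue("insert into widgets(name) values('sprocket')", nil, nil, nil)
batch.Queue("select count(*) from widgets", nil, nil, []int16{pgx.BinaryFormatCode})

// Send all queued queries in a single round trip.
if err := batch.Send(context.Background(), nil); err != nil {
    return err
}

// Read results in the order the queries were queued.
if _, err := batch.ExecResults(); err != nil {
    return err
}

var count int64
if err := batch.QueryRowResults().Scan(&count); err != nil {
    return err
}

if err := batch.Close(); err != nil {
    return err
}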

See this gist for the underlying benchmark results or check out go_db_bench to run tests for yourself.

In addition to the native driver, pgx also includes a number of packages that provide additional functionality.

github.com/jackc/pgx/stdlib

database/sql compatibility layer for pgx. pgx can be used as a normal database/sql driver, but at any time the native interface may be acquired for more performance or PostgreSQL specific functionality.
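
As a rough sketch (assuming the stdlib package registers a database/sql driver named "pgx" and that the connection URI below points at a reachable server):

import (
    "database/sql"

    _ "github.com/jackc/pgx/stdlib"
)

func openDB() (*sql.DB, error) {
    // The connection string may be a URI or DSN.
    return sql.Open("pgx", "postgres://pgx_md5:secret@127.0.0.1:5432/pgx_test")
}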

github.com/jackc/pgx/pgtype

Approximately 60 PostgreSQL types are supported including uuid, hstore, json, bytea, numeric, interval, inet, and arrays. These types support database/sql interfaces and are usable even outside of pgx. They are fully tested in pgx and pq. They also support a higher performance interface when used with the pgx driver.
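
For example, a pgtype value can be used as a Scan destination with the native pgx interface and then assigned to a plain Go type (a small sketch assuming a *pgx.Conn named conn):

var n pgtype.Numeric
if err := conn.QueryRow("select 1.23::numeric").Scan(&n); err != nil {
    return err
}

var f float64
if err := n.AssignTo(&f); err != nil {
    return err
}
// f now holds 1.23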

github.com/jackc/pgx/pgproto3

pgproto3 provides standalone encoding and decoding of the PostgreSQL v3 wire protocol. This is useful for implementing very low level PostgreSQL tooling.

github.com/jackc/pgx/pgmock

pgmock offers the ability to create a server that mocks the PostgreSQL wire protocol. This is used internally to test pgx by purposely inducing unusual errors. pgproto3 and pgmock together provide most of the foundational tooling required to implement a PostgreSQL proxy or MitM (such as for a custom connection pooler).

Documentation

pgx includes extensive documentation in the godoc format. It is viewable online at godoc.org.

Testing

pgx supports multiple connection and authentication types. Setting up a test environment that can test all of them can be cumbersome. In particular, Windows cannot test Unix domain socket connections. Because of this, pgx will skip tests for connection types that are not configured.

Normal Test Environment

To setup the normal test environment, first install these dependencies:

go get github.com/cockroachdb/apd
go get github.com/hashicorp/go-version
go get github.com/jackc/fake
go get github.com/lib/pq
go get github.com/pkg/errors
go get github.com/satori/go.uuid
go get github.com/shopspring/decimal
go get github.com/sirupsen/logrus
go get go.uber.org/zap
go get gopkg.in/inconshreveable/log15.v2

Then run the following SQL:

create user pgx_md5 password 'secret';
create user " tricky, ' } "" \ test user " password 'secret';
create database pgx_test;
create user pgx_replication with replication password 'secret';

Connect to database pgx_test and run:

create extension hstore;

Next open conn_config_test.go.example and make a copy without the .example suffix. If your PostgreSQL server is accepting connections on 127.0.0.1, then you are done.

Connection and Authentication Test Environment

Complete the normal test environment setup and also do the following.

Run the following SQL:

create user pgx_none;
create user pgx_pw password 'secret';

Add the following to your pg_hba.conf:

If you are developing on Unix with domain socket connections:

local  pgx_test  pgx_none  trust
local  pgx_test  pgx_pw    password
local  pgx_test  pgx_md5   md5

If you are developing on Windows with TCP connections:

host  pgx_test  pgx_none  127.0.0.1/32 trust
host  pgx_test  pgx_pw    127.0.0.1/32 password
host  pgx_test  pgx_md5   127.0.0.1/32 md5
Replication Test Environment

Add a replication user:

create user pgx_replication with replication password 'secret';

Add a replication line to your pg_hba.conf:

host replication pgx_replication 127.0.0.1/32 md5

Change the following settings in your postgresql.conf:

wal_level=logical
max_wal_senders=5
max_replication_slots=5

Set replicationConnConfig appropriately in conn_config_test.go.

Version Policy

pgx follows semantic versioning for the documented public API on stable releases. Branch v3 is the latest stable release. master can contain new features or behavior that will change or be removed before being merged to the stable v3 branch (in practice, this occurs very rarely). v2 is the previous stable release.

Documentation

Overview

Package pgx is a PostgreSQL database driver.

pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.

Query Interface

pgx implements Query and Scan in the familiar database/sql style.

var sum int32

// Send the query to the server. The returned rows MUST be closed
// before conn can be used again.
rows, err := conn.Query("select generate_series(1,$1)", 10)
if err != nil {
    return err
}

// rows.Close is called by rows.Next when all rows are read
// or an error occurs in Next or Scan. So it may optionally be
// omitted if nothing in the rows.Next loop can panic. It is
// safe to close rows multiple times.
defer rows.Close()

// Iterate through the result set
for rows.Next() {
    var n int32
    err = rows.Scan(&n)
    if err != nil {
        return err
    }
    sum += n
}

// Any errors encountered by rows.Next or rows.Scan will be returned here
if rows.Err() != nil {
    return rows.Err()
}

// No errors found - do something with sum

pgx also implements QueryRow in the same style as database/sql.

var name string
var weight int64
err := conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Use Exec to execute a query that does not return a result set.

commandTag, err := conn.Exec("delete from widgets where id=$1", 42)
if err != nil {
    return err
}
if commandTag.RowsAffected() != 1 {
    return errors.New("No row found to delete")
}

Connection Pool

Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum connections can be used. The connection pool offers an after connect hook that allows every connection to be automatically setup before being made available in the connection pool.
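
For illustration, a pool might be configured like this (a sketch; the connection parameters and prepared statement are placeholders):

config := pgx.ConnPoolConfig{
    ConnConfig: pgx.ConnConfig{
        Host:     "127.0.0.1",
        User:     "pgx_md5",
        Password: "secret",
        Database: "pgx_test",
    },
    MaxConnections: 5,
    // AfterConnect runs on every new connection before it enters the pool.
    AfterConnect: func(conn *pgx.Conn) error {
        _, err := conn.Prepare("getWidget", "select name, weight from widgets where id=$1")
        return err
    },
}

pool, err := pgx.NewConnPool(config)
if err != nil {
    return err
}
defer pool.Close()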

It delegates methods such as QueryRow to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control.

var name string
var weight int64
err := pool.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
    return err
}

Base Type Mapping

pgx maps between all common base types directly between Go and PostgreSQL. In particular:

Go           PostgreSQL
-----------------------
string       varchar
             text

// Integers are automatically converted to any other integer type if
// it can be done without overflow or underflow.
int8
int16        smallint
int32        int
int64        bigint
int
uint8
uint16
uint32
uint64
uint

// Floats are strict and do not automatically convert like integers.
float32      float4
float64      float8

time.Time   date
            timestamp
            timestamptz

[]byte      bytea

Null Mapping

pgx can map nulls in two ways. The first is the pgtype package, which provides types that have a data field and a status field and work in a similar fashion to database/sql. The second is to use a pointer to a pointer.

var foo pgtype.Varchar
var bar *string
err := conn.QueryRow("select foo, bar from widgets where id=$1", 42).Scan(&foo, &bar)
if err != nil {
    return err
}

Array Mapping

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types.
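
For example (a short sketch assuming a *pgx.Conn named conn):

var numbers []int64
if err := conn.QueryRow("select array[1, 2, 3]::bigint[]").Scan(&numbers); err != nil {
    return err
}
// numbers is []int64{1, 2, 3}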

JSON and JSONB Mapping

pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types.
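
For example, a JSONB value can be scanned into an ordinary Go struct (a sketch; the widget struct is illustrative and relies on pgx falling back to encoding/json for unregistered destination types):

type widget struct {
    Name   string `json:"name"`
    Weight int64  `json:"weight"`
}

var w widget
err := conn.QueryRow(`select '{"name": "sprocket", "weight": 5}'::jsonb`).Scan(&w)
if err != nil {
    return err
}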

Inet and CIDR Mapping

pgx maps between net.IPNet and the PostgreSQL inet and cidr types. In addition, as a convenience, pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6.
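
For example (a brief sketch assuming a *pgx.Conn named conn and the net package imported):

var loopback net.IPNet
if err := conn.QueryRow("select '127.0.0.1/32'::inet").Scan(&loopback); err != nil {
    return err
}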

Custom Type Support

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types.

See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type.

pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces.
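
As an illustration of the database/sql route (the Color type here is hypothetical; it needs the database/sql/driver and fmt imports):

type Color string

// Scan implements database/sql.Scanner.
func (c *Color) Scan(src interface{}) error {
    switch s := src.(type) {
    case string:
        *c = Color(s)
    case []byte:
        *c = Color(s)
    default:
        return fmt.Errorf("cannot scan %T into Color", src)
    }
    return nil
}

// Value implements database/sql/driver.Valuer.
func (c Color) Value() (driver.Value, error) {
    return string(c), nil
}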

Raw Bytes Mapping

[]byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL.

Transactions

Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level.

tx, err := conn.Begin()
if err != nil {
    return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback()

_, err = tx.Exec("insert into foo(id) values (1)")
if err != nil {
    return err
}

err = tx.Commit()
if err != nil {
    return err
}
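
For example, BeginEx can request a serializable, read-only transaction (a brief sketch; the context import is assumed):

tx, err := conn.BeginEx(context.Background(), &pgx.TxOptions{
    IsoLevel:   pgx.Serializable,
    AccessMode: pgx.ReadOnly,
})
if err != nil {
    return err
}
defer tx.Rollback()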

Copy Protocol

Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface. Or implement CopyFromSource to avoid buffering the entire data set in memory.

rows := [][]interface{}{
    {"John", "Smith", int32(36)},
    {"Jane", "Doe", int32(29)},
}

copyCount, err := conn.CopyFrom(
    pgx.Identifier{"people"},
    []string{"first_name", "last_name", "age"},
    pgx.CopyFromRows(rows),
)

CopyFrom can be faster than an insert with as few as 5 rows.

Listen and Notify

pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It blocks until a notification is received or the supplied context is canceled or times out.

err := conn.Listen("channelname")
if err != nil {
    return err
}

ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()

notification, err := conn.WaitForNotification(ctx)
if err != nil {
    return err
}
// do something with notification

TLS

The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection.
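
For example (an illustrative sketch; the host and credentials are placeholders, the crypto/tls import is assumed, and server certificates are verified against the system root CAs):

config := pgx.ConnConfig{
    Host:     "db.example.com",
    User:     "pgx_md5",
    Password: "secret",
    Database: "pgx_test",
    TLSConfig: &tls.Config{
        ServerName: "db.example.com",
    },
}

conn, err := pgx.Connect(config)
if err != nil {
    return err
}
defer conn.Close()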

Logging

pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
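
A minimal Logger implementation might look like this (a sketch that writes to the standard library log package):

type stdLogger struct{}

func (l stdLogger) Log(level pgx.LogLevel, msg string, data map[string]interface{}) {
    log.Printf("%s: %s %v", level, msg, data)
}

// Then, on your ConnConfig:
config.Logger = stdLogger{}
config.LogLevel = pgx.LogLevelInfo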

Index

Constants

const (
	LogLevelTrace = 6
	LogLevelDebug = 5
	LogLevelInfo  = 4
	LogLevelWarn  = 3
	LogLevelError = 2
	LogLevelNone  = 1
)

The values for log levels are chosen such that the zero value means that no log level was specified.

const (
	Serializable    = TxIsoLevel("serializable")
	RepeatableRead  = TxIsoLevel("repeatable read")
	ReadCommitted   = TxIsoLevel("read committed")
	ReadUncommitted = TxIsoLevel("read uncommitted")
)

Transaction isolation levels

const (
	ReadWrite = TxAccessMode("read write")
	ReadOnly  = TxAccessMode("read only")
)

Transaction access modes

const (
	Deferrable    = TxDeferrableMode("deferrable")
	NotDeferrable = TxDeferrableMode("not deferrable")
)

Transaction deferrable modes

const (
	TxStatusInProgress      = 0
	TxStatusCommitFailure   = -1
	TxStatusRollbackFailure = -2
	TxStatusCommitSuccess   = 1
	TxStatusRollbackSuccess = 2
)
const (
	TextFormatCode   = 0
	BinaryFormatCode = 1
)

PostgreSQL format codes

Variables

var ErrAcquireTimeout = errors.New("timeout acquiring connection from pool")

ErrAcquireTimeout occurs when an attempt to acquire a connection times out.

var ErrClosedPool = errors.New("cannot acquire from closed pool")

ErrClosedPool occurs on an attempt to acquire a connection from a closed pool.

var ErrConnBusy = errors.New("conn is busy")

ErrConnBusy occurs when the connection is busy (for example, in the middle of reading query results) and another action is attempted.

var ErrDeadConn = errors.New("conn is dead")

ErrDeadConn occurs on an attempt to use a dead connection

var ErrInvalidLogLevel = errors.New("invalid log level")

ErrInvalidLogLevel occurs on attempt to set an invalid log level.

var ErrNoRows = errors.New("no rows in result set")

ErrNoRows occurs when rows are expected but none are returned.

var ErrTLSRefused = errors.New("server refused TLS connection")

ErrTLSRefused occurs when the connection attempt requires TLS and the PostgreSQL server refuses to use TLS

var ErrTxClosed = errors.New("tx is closed")
var ErrTxCommitRollback = errors.New("commit unexpectedly resulted in rollback")

ErrTxCommitRollback occurs when an error has occurred in a transaction and Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but it is treated as ROLLBACK.

Functions

func FormatLSN

func FormatLSN(lsn uint64) string

Format the given 64-bit LSN value into the XXX/XXX format, which is the format reported by PostgreSQL.

func ParseLSN

func ParseLSN(lsn string) (outputLsn uint64, err error)

Parse the given XXX/XXX format LSN as reported by PostgreSQL into a 64-bit integer as used internally by the wire protocol.

Types

type Batch

type Batch struct {
	// contains filtered or unexported fields
}

Batch queries are a way of bundling multiple queries together to avoid unnecessary network round trips.

func (*Batch) Close

func (b *Batch) Close() (err error)

Close closes the batch operation. Any error that occurred during a batch operation may have made it impossible to resynchronize the connection with the server. In this case the underlying connection will have been closed.

func (*Batch) Conn

func (b *Batch) Conn() *Conn

Conn returns the underlying connection that b will or was performed on.

func (*Batch) ExecResults

func (b *Batch) ExecResults() (CommandTag, error)

ExecResults reads the results from the next query in the batch as if the query has been sent with Exec.

func (*Batch) QueryResults

func (b *Batch) QueryResults() (*Rows, error)

QueryResults reads the results from the next query in the batch as if the query has been sent with Query.

func (*Batch) QueryRowResults

func (b *Batch) QueryRowResults() *Row

QueryRowResults reads the results from the next query in the batch as if the query has been sent with QueryRow.

func (*Batch) Queue

func (b *Batch) Queue(query string, arguments []interface{}, parameterOIDs []pgtype.OID, resultFormatCodes []int16)

Queue queues a query to batch b. parameterOIDs are required if there are parameters and query is not the name of a prepared statement. resultFormatCodes are required if there is a result.

func (*Batch) Send

func (b *Batch) Send(ctx context.Context, txOptions *TxOptions) error

Send sends all queued queries to the server at once. If the batch was created from a *Conn, then all queries are wrapped in a transaction. The transaction can optionally be configured with txOptions. The context is in effect until the Batch is closed.

Warning: Send writes all queued queries before reading any results. This can cause a deadlock if an excessive number of queries are queued. It is highly advisable to use a timeout context to protect against this possibility. Unfortunately, this excessive number can vary based on operating system, connection type (TCP or Unix domain socket), and type of query. Unix domain sockets seem to be much more susceptible to this issue than TCP connections. However, it usually is at least several thousand.

The deadlock occurs when the batched queries to be sent are so large that the PostgreSQL server cannot receive them all at once. PostgreSQL receives some of the queued queries and starts executing them. As PostgreSQL executes the queries it sends responses back. pgx will not read any of these responses until it has finished sending. Therefore, if all network buffers are full, pgx will not be able to finish sending the queries and PostgreSQL will not be able to finish sending the responses.

See https://github.com/jackc/pgx/issues/374.

type CommandTag

type CommandTag string

CommandTag is the result of an Exec function

func (CommandTag) RowsAffected

func (ct CommandTag) RowsAffected() int64

RowsAffected returns the number of rows affected. If the CommandTag was not for a row affecting command (such as "CREATE TABLE") then it returns 0

type Conn

type Conn struct {
	RuntimeParams map[string]string // parameters that have been reported by the server

	ConnInfo *pgtype.ConnInfo
	// contains filtered or unexported fields
}

Conn is a PostgreSQL connection handle. It is not safe for concurrent usage. Use ConnPool to manage access to multiple database connections from multiple goroutines.

func Connect

func Connect(config ConnConfig) (c *Conn, err error)

Connect establishes a connection with a PostgreSQL server using config. config.Host must be specified. config.User will default to the OS user name. Other config fields are optional.

func (*Conn) Begin

func (c *Conn) Begin() (*Tx, error)

Begin starts a transaction with the default transaction mode for the current connection. To use a specific transaction mode see BeginEx.

func (*Conn) BeginBatch

func (c *Conn) BeginBatch() *Batch

BeginBatch returns a *Batch query for c.

func (*Conn) BeginEx

func (c *Conn) BeginEx(ctx context.Context, txOptions *TxOptions) (*Tx, error)

BeginEx starts a transaction with txOptions determining the transaction mode. Unlike database/sql, the context only affects the begin command. i.e. there is no auto-rollback on context cancelation.

func (*Conn) CauseOfDeath

func (c *Conn) CauseOfDeath() error

func (*Conn) Close

func (c *Conn) Close() (err error)

Close closes a connection. It is safe to call Close on a already closed connection.

func (*Conn) CopyFrom

func (c *Conn) CopyFrom(tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int, error)

CopyFrom uses the PostgreSQL copy protocol to perform bulk data insertion. It returns the number of rows copied and an error.

CopyFrom requires all values use the binary format. Almost all types implemented by pgx use the binary format by default. Types implementing Encoder can only be used if they encode to the binary format.

func (*Conn) Deallocate

func (c *Conn) Deallocate(name string) error

Deallocate releases a prepared statement.

func (*Conn) Exec

func (c *Conn) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec executes sql. sql can be either a prepared statement name or an SQL string. arguments should be referenced positionally from the sql string as $1, $2, etc.

func (*Conn) ExecEx

func (c *Conn) ExecEx(ctx context.Context, sql string, options *QueryExOptions, arguments ...interface{}) (CommandTag, error)

func (*Conn) IsAlive

func (c *Conn) IsAlive() bool

func (*Conn) Listen

func (c *Conn) Listen(channel string) error

Listen establishes a PostgreSQL listen/notify to channel

func (*Conn) PID

func (c *Conn) PID() uint32

PID returns the backend PID for this connection.

func (*Conn) Ping

func (c *Conn) Ping(ctx context.Context) error

func (*Conn) Prepare

func (c *Conn) Prepare(name, sql string) (ps *PreparedStatement, err error)

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec without concern for whether the statement has already been prepared.

func (*Conn) PrepareEx

func (c *Conn) PrepareEx(ctx context.Context, name, sql string, opts *PrepareExOptions) (ps *PreparedStatement, err error)

PrepareEx creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc. It differs from Prepare in that it allows additional options (such as parameter OIDs) to be passed via a struct.

PrepareEx is idempotent; i.e. it is safe to call PrepareEx multiple times with the same name and sql arguments. This allows a code path to PrepareEx and Query/Exec without concern for whether the statement has already been prepared.

func (*Conn) Query

func (c *Conn) Query(sql string, args ...interface{}) (*Rows, error)

Query executes sql with args. If there is an error, the returned *Rows will be in an error state, so it is allowed to ignore the error returned from Query and handle it in *Rows.

func (*Conn) QueryEx

func (c *Conn) QueryEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) (rows *Rows, err error)

func (*Conn) QueryRow

func (c *Conn) QueryRow(sql string, args ...interface{}) *Row

QueryRow is a convenience wrapper over Query. Any error that occurs while querying is deferred until calling Scan on the returned *Row. That *Row will error with ErrNoRows if no rows are returned.

func (*Conn) QueryRowEx

func (c *Conn) QueryRowEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) *Row

func (*Conn) SetLogLevel

func (c *Conn) SetLogLevel(lvl int) (int, error)

SetLogLevel replaces the current log level and returns the previous log level.

func (*Conn) SetLogger

func (c *Conn) SetLogger(logger Logger) Logger

SetLogger replaces the current logger and returns the previous logger.

func (*Conn) Unlisten

func (c *Conn) Unlisten(channel string) error

Unlisten unsubscribes from a listen channel

func (*Conn) WaitForNotification

func (c *Conn) WaitForNotification(ctx context.Context) (notification *Notification, err error)

WaitForNotification waits for a PostgreSQL notification.

type ConnConfig

type ConnConfig struct {
	Host              string // host (e.g. localhost) or path to unix domain socket directory (e.g. /private/tmp)
	Port              uint16 // default: 5432
	Database          string
	User              string // default: OS user name
	Password          string
	TLSConfig         *tls.Config // config for TLS connection -- nil disables TLS
	UseFallbackTLS    bool        // Try FallbackTLSConfig if connecting with TLSConfig fails. Used for preferring TLS, but allowing unencrypted, or vice-versa
	FallbackTLSConfig *tls.Config // config for fallback TLS connection (only used if UseFallBackTLS is true)-- nil disables TLS
	Logger            Logger
	LogLevel          int
	Dial              DialFunc
	RuntimeParams     map[string]string                     // Run-time parameters to set on connection as session default values (e.g. search_path or application_name)
	OnNotice          NoticeHandler                         // Callback function called when a notice response is received.
	CustomConnInfo    func(*Conn) (*pgtype.ConnInfo, error) // Callback function to implement connection strategies for different backends. crate, pgbouncer, pgpool, etc.

	// PreferSimpleProtocol disables implicit prepared statement usage. By default
	// pgx automatically uses the unnamed prepared statement for Query and
	// QueryRow. It also uses a prepared statement when Exec has arguments. This
	// can improve performance due to being able to use the binary format. It also
	// does not rely on client side parameter sanitization. However, it does incur
	// two round-trips per query and may be incompatible with proxies such as
	// PGBouncer. Setting PreferSimpleProtocol causes the simple protocol to be
	// used by default. The same functionality can be controlled on a per query
	// basis by setting QueryExOptions.SimpleProtocol.
	PreferSimpleProtocol bool
}

ConnConfig contains all the options used to establish a connection.

func ParseConnectionString

func ParseConnectionString(s string) (ConnConfig, error)

ParseConnectionString parses either a URI or a DSN connection string. See ParseURI and ParseDSN for details.

func ParseDSN

func ParseDSN(s string) (ConnConfig, error)

ParseDSN parses a database DSN (data source name) into a ConnConfig

e.g. ParseDSN("user=username password=password host=1.2.3.4 port=5432 dbname=mydb sslmode=disable")

Any options not used by the connection process are parsed into ConnConfig.RuntimeParams.

e.g. ParseDSN("application_name=pgxtest search_path=admin user=username password=password host=1.2.3.4 dbname=mydb")

ParseDSN tries to match libpq behavior with regard to sslmode. See comments for ParseEnvLibpq for more information on the security implications of sslmode options.

func ParseEnvLibpq

func ParseEnvLibpq() (ConnConfig, error)

ParseEnvLibpq parses the environment like libpq does into a ConnConfig

See http://www.postgresql.org/docs/9.4/static/libpq-envars.html for details on the meaning of environment variables.

ParseEnvLibpq currently recognizes the following environment variables: PGHOST PGPORT PGDATABASE PGUSER PGPASSWORD PGSSLMODE PGAPPNAME PGCONNECT_TIMEOUT

Important TLS Security Notes: ParseEnvLibpq tries to match libpq behavior with regard to PGSSLMODE. This includes defaulting to "prefer" behavior if no environment variable is set.

See http://www.postgresql.org/docs/9.4/static/libpq-ssl.html#LIBPQ-SSL-PROTECTION for details on what level of security each sslmode provides.

"verify-ca" mode currently is treated as "verify-full". e.g. It has stronger security guarantees than it would with libpq. Do not rely on this behavior as it may be possible to match libpq in the future. If you need full security use "verify-full".

Several of the PGSSLMODE options (including the default behavior of "prefer") will set UseFallbackTLS to true and FallbackTLSConfig to a disabled or weakened TLS mode. This means that if ParseEnvLibpq is used but TLSConfig is later set from a different source, then UseFallbackTLS MUST be set to false to avoid the possibility of falling back to weaker or disabled security.

func ParseURI

func ParseURI(uri string) (ConnConfig, error)

ParseURI parses a database URI into ConnConfig

Query parameters not used by the connection process are parsed into ConnConfig.RuntimeParams.

func (ConnConfig) Merge

func (old ConnConfig) Merge(other ConnConfig) ConnConfig

Merge returns a new ConnConfig with the attributes of old and other combined. When an attribute is set on both, other takes precedence.

As a security precaution, if the other TLSConfig is nil, all old TLS attributes will be preserved.

type ConnPool

type ConnPool struct {
	// contains filtered or unexported fields
}

func NewConnPool

func NewConnPool(config ConnPoolConfig) (p *ConnPool, err error)

NewConnPool creates a new ConnPool. config.ConnConfig is passed through to Connect directly.

func (*ConnPool) Acquire

func (p *ConnPool) Acquire() (*Conn, error)

Acquire takes exclusive use of a connection until it is released.

func (*ConnPool) Begin

func (p *ConnPool) Begin() (*Tx, error)

Begin acquires a connection and begins a transaction on it. When the transaction is closed the connection will be automatically released.

func (*ConnPool) BeginBatch

func (p *ConnPool) BeginBatch() *Batch

BeginBatch acquires a connection and begins a batch on that connection. When *Batch is finished, the connection is released automatically.

func (*ConnPool) BeginEx

func (p *ConnPool) BeginEx(ctx context.Context, txOptions *TxOptions) (*Tx, error)

BeginEx acquires a connection and starts a transaction with txOptions determining the transaction mode. When the transaction is closed the connection will be automatically released.

func (*ConnPool) Close

func (p *ConnPool) Close()

Close ends the use of a connection pool. It prevents any new connections from being acquired and closes available underlying connections. Any acquired connections will be closed when they are released.

func (*ConnPool) CopyFrom

func (p *ConnPool) CopyFrom(tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int, error)

CopyFrom acquires a connection, delegates the call to that connection, and releases the connection

func (*ConnPool) Deallocate

func (p *ConnPool) Deallocate(name string) (err error)

Deallocate releases a prepared statement from all connections in the pool.

func (*ConnPool) Exec

func (p *ConnPool) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec acquires a connection, delegates the call to that connection, and releases the connection

func (*ConnPool) ExecEx

func (p *ConnPool) ExecEx(ctx context.Context, sql string, options *QueryExOptions, arguments ...interface{}) (commandTag CommandTag, err error)

func (*ConnPool) Prepare

func (p *ConnPool) Prepare(name, sql string) (*PreparedStatement, error)

Prepare creates a prepared statement on a connection in the pool to test the statement is valid. If it succeeds all connections accessed through the pool will have the statement available.

Prepare creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc.

Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec/PrepareEx without concern for whether the statement has already been prepared.

func (*ConnPool) PrepareEx

func (p *ConnPool) PrepareEx(ctx context.Context, name, sql string, opts *PrepareExOptions) (*PreparedStatement, error)

PrepareEx creates a prepared statement on a connection in the pool to test the statement is valid. If it succeeds all connections accessed through the pool will have the statement available.

PrepareEx creates a prepared statement with name and sql. sql can contain placeholders for bound parameters. These placeholders are referenced positionally as $1, $2, etc. It differs from Prepare in that it allows additional options (such as parameter OIDs) to be passed via a struct.

PrepareEx is idempotent; i.e. it is safe to call PrepareEx multiple times with the same name and sql arguments. This allows a code path to PrepareEx and Query/Exec/Prepare without concern for whether the statement has already been prepared.

func (*ConnPool) Query

func (p *ConnPool) Query(sql string, args ...interface{}) (*Rows, error)

Query acquires a connection and delegates the call to that connection. When *Rows are closed, the connection is released automatically.

func (*ConnPool) QueryEx

func (p *ConnPool) QueryEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) (*Rows, error)

func (*ConnPool) QueryRow

func (p *ConnPool) QueryRow(sql string, args ...interface{}) *Row

QueryRow acquires a connection and delegates the call to that connection. The connection is released automatically after Scan is called on the returned *Row.

func (*ConnPool) QueryRowEx

func (p *ConnPool) QueryRowEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) *Row

func (*ConnPool) Release

func (p *ConnPool) Release(conn *Conn)

Release gives up use of a connection.

func (*ConnPool) Reset

func (p *ConnPool) Reset()

Reset closes all open connections, but leaves the pool open. It is intended for use when an error is detected that would disrupt all connections (such as a network interruption or a server state change).

It is safe to reset a pool while connections are checked out. Those connections will be closed when they are returned to the pool.

func (*ConnPool) Stat

func (p *ConnPool) Stat() (s ConnPoolStat)

Stat returns connection pool statistics

type ConnPoolConfig

type ConnPoolConfig struct {
	ConnConfig
	MaxConnections int               // max simultaneous connections to use, default 5, must be at least 2
	AfterConnect   func(*Conn) error // function to call on every new connection
	AcquireTimeout time.Duration     // max wait time when all connections are busy (0 means no timeout)
}

type ConnPoolStat

type ConnPoolStat struct {
	MaxConnections       int // max simultaneous connections to use
	CurrentConnections   int // current live connections
	AvailableConnections int // unused live connections
}

type CopyFromSource

type CopyFromSource interface {
	// Next returns true if there is another row and makes the next row data
	// available to Values(). When there are no more rows available or an error
	// has occurred it returns false.
	Next() bool

	// Values returns the values for the current row.
	Values() ([]interface{}, error)

	// Err returns any error that has been encountered by the CopyFromSource. If
	// this is not nil *Conn.CopyFrom will abort the copy.
	Err() error
}

CopyFromSource is the interface used by *Conn.CopyFrom as the source for copy data.
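
A custom implementation can stream rows instead of buffering them all in memory; a rough sketch (the channel-fed source here is hypothetical):

type chanCopySource struct {
    rowChan <-chan []interface{}
    current []interface{}
}

// Next blocks until another row is available or the channel is closed.
func (s *chanCopySource) Next() bool {
    row, ok := <-s.rowChan
    if !ok {
        return false
    }
    s.current = row
    return true
}

// Values returns the row made available by the last call to Next.
func (s *chanCopySource) Values() ([]interface{}, error) {
    return s.current, nil
}

// Err reports any error encountered by the source; this sketch has none.
func (s *chanCopySource) Err() error {
    return nil
}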

func CopyFromRows

func CopyFromRows(rows [][]interface{}) CopyFromSource

CopyFromRows returns a CopyFromSource interface over the provided rows slice making it usable by *Conn.CopyFrom.

type DialFunc

type DialFunc func(network, addr string) (net.Conn, error)

DialFunc is a function that can be used to connect to a PostgreSQL server

type FieldDescription

type FieldDescription struct {
	Name            string
	Table           pgtype.OID
	AttributeNumber uint16
	DataType        pgtype.OID
	DataTypeSize    int16
	DataTypeName    string
	Modifier        uint32
	FormatCode      int16
}

func (FieldDescription) Length

func (fd FieldDescription) Length() (int64, bool)

func (FieldDescription) PrecisionScale

func (fd FieldDescription) PrecisionScale() (precision, scale int64, ok bool)

func (FieldDescription) Type

func (fd FieldDescription) Type() reflect.Type

type Identifier

type Identifier []string

Identifier represents a PostgreSQL identifier or name. Identifiers can be composed of multiple parts such as ["schema", "table"] or ["table", "column"].

func (Identifier) Sanitize

func (ident Identifier) Sanitize() string

Sanitize returns a sanitized string safe for SQL interpolation.

type LargeObject

type LargeObject struct {
	// contains filtered or unexported fields
}

A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized in. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

func (*LargeObject) Close

func (o *LargeObject) Close() error

Close closes the large object descriptor.

func (*LargeObject) Read

func (o *LargeObject) Read(p []byte) (int, error)

Read reads up to len(p) bytes into p returning the number of bytes read.

func (*LargeObject) Seek

func (o *LargeObject) Seek(offset int64, whence int) (n int64, err error)

Seek moves the current location pointer to the new location specified by offset.

func (*LargeObject) Tell

func (o *LargeObject) Tell() (n int64, err error)

Tell returns the current read or write location of the large object descriptor.

func (*LargeObject) Truncate

func (o *LargeObject) Truncate(size int64) (err error)

Truncate truncates the large object to size.

func (*LargeObject) Write

func (o *LargeObject) Write(p []byte) (int, error)

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.

type LargeObjectMode

type LargeObjectMode int32
const (
	LargeObjectModeWrite LargeObjectMode = 0x20000
	LargeObjectModeRead  LargeObjectMode = 0x40000
)

type LargeObjects

type LargeObjects struct {
	// Has64 is true if the server is capable of working with 64-bit numbers
	Has64 bool
	// contains filtered or unexported fields
}

LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html
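
For example (an abbreviated sketch assuming an open *pgx.Tx named tx):

lo, err := tx.LargeObjects()
if err != nil {
    return err
}

oid, err := lo.Create(0) // 0 lets the server assign an unused OID
if err != nil {
    return err
}

obj, err := lo.Open(oid, pgx.LargeObjectModeWrite)
if err != nil {
    return err
}

if _, err := obj.Write([]byte("hello")); err != nil {
    return err
}

if err := obj.Close(); err != nil {
    return err
}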

func (*LargeObjects) Create

func (o *LargeObjects) Create(id pgtype.OID) (pgtype.OID, error)

Create creates a new large object. If id is zero, the server assigns an unused OID.

func (*LargeObjects) Open

func (o *LargeObjects) Open(oid pgtype.OID, mode LargeObjectMode) (*LargeObject, error)

Open opens an existing large object with the given mode.

func (*LargeObjects) Unlink

func (o *LargeObjects) Unlink(oid pgtype.OID) error

Unlink removes a large object from the database.

type LogLevel

type LogLevel int

LogLevel represents the pgx logging level. See LogLevel* constants for possible values.

func LogLevelFromString

func LogLevelFromString(s string) (LogLevel, error)

LogLevelFromString converts log level string to constant

Valid levels:

trace
debug
info
warn
error
none

func (LogLevel) String

func (ll LogLevel) String() string

type Logger

type Logger interface {
	// Log a message at the given level with data key/value pairs. data may be nil.
	Log(level LogLevel, msg string, data map[string]interface{})
}

Logger is the interface used to get logging from pgx internals.

type Notice

type Notice PgError

Notice represents a notice response message reported by the PostgreSQL server. Be aware that this is distinct from LISTEN/NOTIFY notification.

type NoticeHandler

type NoticeHandler func(*Conn, *Notice)

NoticeHandler is a function that can handle notices received from the PostgreSQL server. Notices can be received at any time, usually during handling of a query response. The *Conn is provided so the handler is aware of the origin of the notice, but it must not invoke any query method. Be aware that this is distinct from LISTEN/NOTIFY notification.
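
For example, a handler can simply log notices (a sketch using the standard library log package on a ConnConfig named config):

config.OnNotice = func(c *pgx.Conn, n *pgx.Notice) {
    log.Printf("PostgreSQL %s: %s", n.Severity, n.Message)
}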

type Notification

type Notification struct {
	PID     uint32 // backend pid that sent the notification
	Channel string // channel from which notification was received
	Payload string
}

Notification is a message received from the PostgreSQL LISTEN/NOTIFY system

type PgError

type PgError struct {
	Severity         string
	Code             string
	Message          string
	Detail           string
	Hint             string
	Position         int32
	InternalPosition int32
	InternalQuery    string
	Where            string
	SchemaName       string
	TableName        string
	ColumnName       string
	DataTypeName     string
	ConstraintName   string
	File             string
	Line             int32
	Routine          string
}

PgError represents an error reported by the PostgreSQL server. See http://www.postgresql.org/docs/9.3/static/protocol-error-fields.html for detailed field description.

func (PgError) Error

func (pe PgError) Error() string

type PrepareExOptions

type PrepareExOptions struct {
	ParameterOIDs []pgtype.OID
}

PrepareExOptions is an option struct that can be passed to PrepareEx

type PreparedStatement

type PreparedStatement struct {
	Name              string
	SQL               string
	FieldDescriptions []FieldDescription
	ParameterOIDs     []pgtype.OID
}

PreparedStatement is a description of a prepared statement

type ProtocolError

type ProtocolError string

ProtocolError occurs when unexpected data is received from PostgreSQL

func (ProtocolError) Error

func (e ProtocolError) Error() string

type QueryArgs

type QueryArgs []interface{}

QueryArgs is a container for arguments to an SQL query. It is helpful when building SQL statements where the number of arguments is variable.

func (*QueryArgs) Append

func (qa *QueryArgs) Append(v interface{}) string

Append adds a value to qa and returns the placeholder value for the argument. e.g. $1, $2, etc.
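
For example (an illustrative sketch; conn and the filters map are assumed, and fmt is imported):

args := pgx.QueryArgs{}
sql := "select id from widgets where true"
for column, value := range filters {
    // Sanitize guards the identifier; Append returns the next placeholder ($1, $2, ...).
    sql += fmt.Sprintf(" and %s = %s", pgx.Identifier{column}.Sanitize(), args.Append(value))
}

rows, err := conn.Query(sql, args...)
if err != nil {
    return err
}
defer rows.Close()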

type QueryExOptions

type QueryExOptions struct {
	// When ParameterOIDs are present and the query is not a prepared statement,
	// then ParameterOIDs and ResultFormatCodes will be used to avoid an extra
	// network round-trip.
	ParameterOIDs     []pgtype.OID
	ResultFormatCodes []int16

	SimpleProtocol bool
}

type ReplicationConn

type ReplicationConn struct {
	// contains filtered or unexported fields
}

func ReplicationConnect

func ReplicationConnect(config ConnConfig) (r *ReplicationConn, err error)

func (*ReplicationConn) CauseOfDeath

func (rc *ReplicationConn) CauseOfDeath() error

func (*ReplicationConn) Close

func (rc *ReplicationConn) Close() error

func (*ReplicationConn) CreateReplicationSlot

func (rc *ReplicationConn) CreateReplicationSlot(slotName, outputPlugin string) (err error)

Create the replication slot, using the given name and output plugin.

func (*ReplicationConn) CreateReplicationSlotEx

func (rc *ReplicationConn) CreateReplicationSlotEx(slotName, outputPlugin string) (consistentPoint string, snapshotName string, err error)

Create the replication slot, using the given name and output plugin, and return the consistent_point and snapshot_name values.

func (*ReplicationConn) DropReplicationSlot

func (rc *ReplicationConn) DropReplicationSlot(slotName string) (err error)

Drop the replication slot for the given name

func (*ReplicationConn) IdentifySystem

func (rc *ReplicationConn) IdentifySystem() (r *Rows, err error)

Execute the "IDENTIFY_SYSTEM" command as documented here: https://www.postgresql.org/docs/9.5/static/protocol-replication.html

This will return (if successful) a result set that has a single row that contains the systemid, current timeline, xlogpos and database name.

NOTE: Because this is a replication mode connection, we don't have type names, so the field descriptions in the result will have only OIDs and no DataTypeName values

func (*ReplicationConn) IsAlive

func (rc *ReplicationConn) IsAlive() bool

func (*ReplicationConn) SendStandbyStatus

func (rc *ReplicationConn) SendStandbyStatus(k *StandbyStatus) (err error)

SendStandbyStatus sends a standby status to the server, which both acts as a keepalive message and carries the WAL position of the client, allowing the server to update its replication slot position.

func (*ReplicationConn) StartReplication

func (rc *ReplicationConn) StartReplication(slotName string, startLsn uint64, timeline int64, pluginArguments ...string) (err error)

Start a replication connection, sending WAL data to the given replication receiver. This function wraps a START_REPLICATION command as documented here: https://www.postgresql.org/docs/9.5/static/protocol-replication.html

Once started, the client needs to invoke WaitForReplicationMessage() in order to fetch the WAL and standby status. Also, it is the responsibility of the caller to periodically send StandbyStatus messages to update the replication slot position.

This function assumes that slotName has already been created. In order to omit the timeline argument pass a -1 for the timeline to get the server default behavior.

func (*ReplicationConn) TimelineHistory

func (rc *ReplicationConn) TimelineHistory(timeline int) (r *Rows, err error)

Execute the "TIMELINE_HISTORY" command as documented here: https://www.postgresql.org/docs/9.5/static/protocol-replication.html

This will return (if successful) a result set that has a single row that contains the filename of the history file and the content of the history file. If called for timeline 1, typically this will generate an error that the timeline history file does not exist.

NOTE: Because this is a replication mode connection, we don't have type names, so the field descriptions in the result will have only OIDs and no DataTypeName values

func (*ReplicationConn) WaitForReplicationMessage

func (rc *ReplicationConn) WaitForReplicationMessage(ctx context.Context) (*ReplicationMessage, error)

Wait for a single replication message.

Properly using this requires some knowledge of the postgres replication mechanisms, as the client can receive both WAL data (the ultimate payload) and server heartbeat updates. The caller also must send standby status updates in order to keep the connection alive and working.

This returns the context error when there is no replication message before the context is canceled.
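
A skeletal receive loop might look like this (a sketch; the slot name and start position are placeholders, and error handling and status-update pacing are simplified):

err := replConn.StartReplication("pgx_slot", 0, -1)
if err != nil {
    return err
}

for {
    msg, err := replConn.WaitForReplicationMessage(context.Background())
    if err != nil {
        return err
    }

    if msg.WalMessage != nil {
        // Process msg.WalMessage.WalData here, then acknowledge the position.
        status, err := pgx.NewStandbyStatus(msg.WalMessage.WalStart)
        if err != nil {
            return err
        }
        if err := replConn.SendStandbyStatus(status); err != nil {
            return err
        }
    }

    if msg.ServerHeartbeat != nil && msg.ServerHeartbeat.ReplyRequested == 1 {
        // The server asked for an immediate status update; send one here as well.
    }
}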

type ReplicationMessage

type ReplicationMessage struct {
	WalMessage      *WalMessage
	ServerHeartbeat *ServerHeartbeat
}

ReplicationMessage wraps all possible messages received from the server during replication. At most one of WalMessage or ServerHeartbeat will be non-nil.

type Row

type Row Rows

Row is a convenience wrapper over Rows that is returned by QueryRow.

func (*Row) Scan

func (r *Row) Scan(dest ...interface{}) (err error)

Scan works the same as (*Rows).Scan with the following exceptions. If no rows were found it returns ErrNoRows. If multiple rows are returned it ignores all but the first.

type Rows

type Rows struct {
	// contains filtered or unexported fields
}

Rows is the result set returned from *Conn.Query. Rows must be closed before the *Conn can be used again. Rows are closed by explicitly calling Close(), calling Next() until it returns false, or when a fatal error occurs.

func (*Rows) Close

func (rows *Rows) Close()

Close closes the rows, making the connection ready for use again. It is safe to call Close after rows is already closed.

func (*Rows) Err

func (rows *Rows) Err() error

func (*Rows) FieldDescriptions

func (rows *Rows) FieldDescriptions() []FieldDescription

func (*Rows) Next

func (rows *Rows) Next() bool

Next prepares the next row for reading. It returns true if there is another row and false if no more rows are available. It automatically closes rows when all rows are read.

func (*Rows) Scan

func (rows *Rows) Scan(dest ...interface{}) (err error)

Scan reads the values from the current row into dest values positionally. dest can include pointers to core types, values implementing the Scanner interface, []byte, and nil. []byte will skip the decoding process and directly copy the raw bytes received from PostgreSQL. nil will skip the value entirely.

func (*Rows) Values

func (rows *Rows) Values() ([]interface{}, error)

Values returns an array of the row values

type SerializationError

type SerializationError string

SerializationError occurs on failure to encode or decode a value

func (SerializationError) Error

func (e SerializationError) Error() string

type ServerHeartbeat

type ServerHeartbeat struct {
	// The current max wal position on the server,
	// used for lag tracking
	ServerWalEnd uint64
	// The server time, in microseconds since jan 1 2000
	ServerTime uint64
	// If 1, the server is requesting a standby status message
	// to be sent immediately.
	ReplyRequested byte
}

The server heartbeat is sent periodically from the server, including server status, and a reply request field

func (*ServerHeartbeat) String

func (s *ServerHeartbeat) String() string

func (*ServerHeartbeat) Time

func (s *ServerHeartbeat) Time() time.Time

type StandbyStatus

type StandbyStatus struct {
	// The WAL position that's been locally written
	WalWritePosition uint64
	// The WAL position that's been locally flushed
	WalFlushPosition uint64
	// The WAL position that's been locally applied
	WalApplyPosition uint64
	// The client time in microseconds since jan 1 2000
	ClientTime uint64
	// If 1, requests the server to immediately send a
	// server heartbeat
	ReplyRequested byte
}

The standby status is the client-side heartbeat sent to the PostgreSQL server to track the client WAL positions. For practical purposes, all WAL positions are typically set to the same value.

func NewStandbyStatus

func NewStandbyStatus(walPositions ...uint64) (status *StandbyStatus, err error)

NewStandbyStatus creates a standby status struct, which sets all the WAL positions to the given WAL position and the client time to the current time. The WAL positions are, in order: WalFlushPosition, WalApplyPosition, WalWritePosition.

If only one position is provided, it will be used as the value for all three status fields. Note that you must provide either one WAL position or all three in order to initialize the standby status.

type Tx

type Tx struct {
	// contains filtered or unexported fields
}

Tx represents a database transaction.

All Tx methods return ErrTxClosed if Commit or Rollback has already been called on the Tx.

func (*Tx) BeginBatch

func (tx *Tx) BeginBatch() *Batch

BeginBatch returns a *Batch query for tx. Since this *Batch is already part of a transaction it will not automatically be wrapped in a transaction.

func (*Tx) Commit

func (tx *Tx) Commit() error

Commit commits the transaction

func (*Tx) CommitEx

func (tx *Tx) CommitEx(ctx context.Context) error

CommitEx commits the transaction with a context.

func (*Tx) CopyFrom

func (tx *Tx) CopyFrom(tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int, error)

CopyFrom delegates to the underlying *Conn

func (*Tx) Err

func (tx *Tx) Err() error

Err returns the final error state, if any, of calling Commit or Rollback.

func (*Tx) Exec

func (tx *Tx) Exec(sql string, arguments ...interface{}) (commandTag CommandTag, err error)

Exec delegates to the underlying *Conn

func (*Tx) ExecEx

func (tx *Tx) ExecEx(ctx context.Context, sql string, options *QueryExOptions, arguments ...interface{}) (commandTag CommandTag, err error)

ExecEx delegates to the underlying *Conn

func (*Tx) LargeObjects

func (tx *Tx) LargeObjects() (*LargeObjects, error)

LargeObjects returns a LargeObjects instance for the transaction.

func (*Tx) Prepare

func (tx *Tx) Prepare(name, sql string) (*PreparedStatement, error)

Prepare delegates to the underlying *Conn

func (*Tx) PrepareEx

func (tx *Tx) PrepareEx(ctx context.Context, name, sql string, opts *PrepareExOptions) (*PreparedStatement, error)

PrepareEx delegates to the underlying *Conn

func (*Tx) Query

func (tx *Tx) Query(sql string, args ...interface{}) (*Rows, error)

Query delegates to the underlying *Conn

func (*Tx) QueryEx

func (tx *Tx) QueryEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) (*Rows, error)

QueryEx delegates to the underlying *Conn

func (*Tx) QueryRow

func (tx *Tx) QueryRow(sql string, args ...interface{}) *Row

QueryRow delegates to the underlying *Conn

func (*Tx) QueryRowEx

func (tx *Tx) QueryRowEx(ctx context.Context, sql string, options *QueryExOptions, args ...interface{}) *Row

QueryRowEx delegates to the underlying *Conn

func (*Tx) Rollback

func (tx *Tx) Rollback() error

Rollback rolls back the transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a non-error condition.

func (*Tx) RollbackEx

func (tx *Tx) RollbackEx(ctx context.Context) error

RollbackEx is the context version of Rollback

func (*Tx) Status

func (tx *Tx) Status() int8

Status returns the status of the transaction from the set of pgx.TxStatus* constants.

type TxAccessMode

type TxAccessMode string

type TxDeferrableMode

type TxDeferrableMode string

type TxIsoLevel

type TxIsoLevel string

type TxOptions

type TxOptions struct {
	IsoLevel       TxIsoLevel
	AccessMode     TxAccessMode
	DeferrableMode TxDeferrableMode
}

type WalMessage

type WalMessage struct {
	// The WAL start position of this data. This
	// is the WAL position we need to track.
	WalStart uint64
	// The server wal end and server time are
	// documented to track the end position and current
	// time of the server, both of which appear to be
	// unimplemented in pg 9.5.
	ServerWalEnd uint64
	ServerTime   uint64
	// The WAL data is the raw unparsed binary WAL entry.
	// The contents of this are determined by the output
	// logical encoding plugin.
	WalData []byte
}

The WAL message contains WAL payload entry data

func (*WalMessage) ByteLag

func (w *WalMessage) ByteLag() uint64

func (*WalMessage) String

func (w *WalMessage) String() string

func (*WalMessage) Time

func (w *WalMessage) Time() time.Time

Directories

Path Synopsis
examples
internal
log
    log15adapter    Package log15adapter provides a logger that writes to a github.com/inconshreveable/log15.Logger log.
    logrusadapter   Package logrusadapter provides a logger that writes to a github.com/sirupsen/logrus.Logger log.
    testingadapter  Package testingadapter provides a logger that writes to a test or benchmark log.
    zapadapter      Package zapadapter provides a logger that writes to a go.uber.org/zap.Logger.
pgio    Package pgio is a low-level toolkit for building messages in the PostgreSQL wire protocol.
stdlib  Package stdlib is the compatibility layer from pgx to database/sql.
