sqlr

package module
v0.6.1
Published: Apr 28, 2019 License: MIT Imports: 19 Imported by: 0

README

sqlr: SQL API for Go


Package sqlr is designed to reduce the effort required to work with SQL databases. It is intended for programmers who are comfortable with writing SQL, but would like assistance with the sometimes tedious process of preparing SQL queries for tables that have a large number of columns, or have a variable number of input parameters.

This package is designed to work seamlessly with the standard library "database/sql" package. It does not provide any layer on top of *sql.DB or *sql.Tx. If the calling program has a need to execute queries independently of this package, it can use "database/sql" directly, or make use of any other third party package that uses "database/sql".

This README provides an overview of how to use this package. For more detailed documentation, see https://jjeffery.github.io/sqlr, or consult the GoDoc documentation.

Obtaining the package

go get github.com/jjeffery/sqlr

Note that if you are interested in running the tests, you will need to get additional database driver packages and set up a test database. See the detailed documentation for more information.

Prepare SQL queries based on row structures

Preparing SQL queries with many placeholder arguments is tedious and error-prone. The following insert query has a dozen placeholders, and it is difficult to match up the columns with the placeholders. It is not uncommon to have tables with many more columns than this example, and the level of difficulty increases with the number of columns in the table.

insert into users(id,given_name,family_name,dob,ssn,street,locality,postcode,country,phone,mobile,fax)
values(?,?,?,?,?,?,?,?,?,?,?,?)

This package uses reflection to simplify the construction of SQL queries. Supplementary information about each database column is stored in the structure tag of the associated field.

type User struct {
    ID          int       `sql:"primary key"`
    GivenName   string
    FamilyName  string
    DOB         time.Time
    SSN         string
    Street      string
    Locality    string
    Postcode    string
    Country     string
    Phone       string
    Mobile      string
    Facsimile   string    `sql:"fax"` // "fax" overrides the column name
}

The calling program creates a schema, which describes rules for generating SQL statements. These rules include specifying the SQL dialect (eg MySQL, Postgres, SQLite) and the naming convention used to convert Go struct field names into column names (eg "GivenName" => "given_name"). The schema is usually created during program initialization. Once created, a schema is immutable and can be called concurrently from multiple goroutines.

schema := NewSchema(
  WithDialect(MySQL),
  WithNamingConvention(SnakeCase),
)

A session is created using a context, a database connection (eg *sql.DB, *sql.Tx, *sql.Conn), and a schema. A session is inexpensive to create, and is intended to last no longer than a single request (which might be an HTTP request, in the case of an HTTP server). A session is bounded by the lifetime of its context. The most common pattern is to create a new session for each database transaction.

sess := NewSession(ctx, tx, schema)

With a session, it is possible to create simple CRUD statements with minimal effort.

 var row User
 // ... populate row with data here and then ...

 // generates the correct SQL to insert a row into the users table
 result, err := sess.InsertRow(row)

 // ... and then later on ...

 // generates the correct SQL to update the matching row in the users table
 result, err := sess.UpdateRow(row)

In the example above, the generated insert and update statements would look like:

 insert into users(`id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,`postcode`,
 `country`,`phone`,`mobile`,`fax`) values(?,?,?,?,?,?,?,?,?,?,?,?)

 update users set `given_name`=?,`family_name`=?,`dob`=?,`ssn`=?,`street`=?,`locality`=?,
 `postcode`=?,`country`=?,`phone`=?,`mobile`=?,`fax`=? where `id`=?

If the schema is created with a different dialect then the generated SQL will be different. For example if the Postgres dialect was used the insert and update queries would look more like:

 insert into users("id","given_name","family_name","dob","ssn","street","locality","postcode",
 "country","phone","mobile","fax") values($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12)

 update users set "given_name"=$1,"family_name"=$2,"dob"=$3,"ssn"=$4,"street"=$5,"locality"=$6,
 "postcode"=$7,"country"=$8,"phone"=$9,"mobile"=$10,"fax"=$11 where "id"=$12

More complex update queries are handled by the Session.Exec method.

Select queries are handled by the Session.Select method:

 var rows []*User

 // will populate rows slice with the results of the query
 rowCount, err := sess.Select(&rows, "select {} from users where postcode = ?", postcode)

 var row User

 // will populate row with the first row returned by the query
 rowCount, err = sess.Select(&row, "select {} from users where {}", userID)

 // more complex query involving joins and aliases
 rowCount, err = sess.Select(&rows, `
     select {alias u}
     from users u
     inner join user_search_terms ust on ust.user_id = u.id
     where ust.search_term like ?
     order by {alias u}`, searchTermText)

The SQL queries prepared in the above example would look like the following:

 select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,`postcode`,
 `country`,`phone`,`mobile`,`fax` from users where postcode=?

 select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,`postcode`,`country`,
 `phone`,`mobile`,`fax` from users where id=?

 select u.`id`,u.`given_name`,u.`family_name`,u.`dob`,u.`ssn`,u.`street`,u.`locality`,
 u.`postcode`,u.`country`,u.`phone`,u.`mobile`,u.`fax` from users u inner join
 user_search_terms ust on ust.user_id = u.id where ust.search_term like ? order by u.`id`

The examples are using a MySQL dialect. If the schema had been set up for, say, a Postgres dialect, a generated query would look more like:

 select "id","given_name","family_name","dob","ssn","street","locality","postcode","country",
 "phone","mobile","fax" from users where postcode=$1

It is an important point to note that this feature is not about writing the SQL for the programmer. Rather it is about "filling in the blanks": allowing the programmer to specify as much of the SQL query as they want without having to write the tiresome bits.

For more information on preparing queries, see the detailed documentation.

Autoincrement Column Values

When inserting rows using InsertRow, if a column is defined as an autoincrement column, then the generated value will be retrieved from the database server, and the corresponding field in the row structure will be updated.

 type Row struct {
   ID   int    `sql:"primary key autoincrement"`
   Name string
 }

 row := &Row{Name: "some name"}
 _, err := sess.InsertRow(row)
 if err != nil {
   log.Fatal(err)
 }

 // row.ID will contain the auto-generated value
 fmt.Println(row.ID)

Null Columns

Most SQL database tables have columns that are nullable, and it can be tiresome to always map to pointer types or special nullable types such as sql.NullString. In many cases it is acceptable to map the zero value for the field to a database NULL in the corresponding database column.

Where it is acceptable to map a zero value to a NULL database column, the Go struct field can be marked with the "null" keyword in the field's struct tag.

 type Employee struct {
     ID        int     `sql:"primary key"`
     Name      string
     ManagerID int     `sql:"null"`
     Phone     string  `sql:"null"`
 }

In the above example the manager_id column can be null, but if all valid IDs are non-zero, it is unambiguous to map the zero value to a database NULL. Similarly, if the phone column is an empty string it will be stored as a NULL in the database.

Care should be taken, because there are cases where an empty value and a database NULL do not represent the same thing. There are many cases, however, where this feature can be applied, and the result is simpler code that is easier to read.

JSON Columns

It is not uncommon to serialize complex objects as JSON text for storage in an SQL database. Native support for JSON is available in some database servers: in particular, Postgres has excellent support for JSON.

It is straightforward to use this package to serialize a structure field to JSON:

 type SomethingComplex struct {
     Name       string
     Values     []int
     MoreValues map[string]float64
     // ... and more fields here ...
 }

 type Row struct {
     ID    int                `sql:"primary key"`
     Name  string
     Cmplx *SomethingComplex  `sql:"json"`
 }

In the example above the Cmplx field will be marshaled as JSON text when writing to the database, and unmarshaled into the struct when reading from the database.

WHERE IN Clauses with Multiple Values

Most SQL queries accept a fixed number of parameters, but a query containing a WHERE IN clause requires additional string manipulation to match the number of placeholders in the query with the number of arguments.

This package simplifies queries with a variable number of arguments. When processing an SQL query, it detects if any of the arguments are slices:

 // GetWidgets returns all the widgets associated with the supplied IDs.
 func GetWidgets(sess *sqlr.Session, ids ...int) ([]*Widget, error) {
     var rows []*Widget
     _, err := sess.Select(&rows, `select {} from widgets where id in (?)`, ids)
     if err != nil {
       return nil, err
     }
     return rows, nil
 }

In the above example, the number of placeholders ("?") in the query will be increased to match the number of values in the ids slice. The expansion logic can handle any mix of slice and scalar arguments.
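The expansion itself can be sketched in plain Go. The helper below is a hypothetical illustration of the idea, not the package's actual implementation, and it handles only []int slice arguments for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// expandPlaceholders illustrates the expansion: each "?" that corresponds
// to a slice argument is replaced by a comma-separated run of
// placeholders, and the slice elements are flattened into the final
// argument list. Scalar arguments pass through unchanged.
func expandPlaceholders(query string, args ...interface{}) (string, []interface{}) {
	var out strings.Builder
	var flat []interface{}
	i := 0
	for _, r := range query {
		if r != '?' {
			out.WriteRune(r)
			continue
		}
		arg := args[i]
		i++
		if vals, ok := arg.([]int); ok {
			ph := make([]string, len(vals))
			for j, v := range vals {
				ph[j] = "?"
				flat = append(flat, v)
			}
			out.WriteString(strings.Join(ph, ","))
		} else {
			out.WriteString("?")
			flat = append(flat, arg)
		}
	}
	return out.String(), flat
}

func main() {
	q, args := expandPlaceholders(
		"select id from widgets where status = ? and id in (?)",
		"active", []int{1, 2, 3})
	fmt.Println(q)         // select id from widgets where status = ? and id in (?,?,?)
	fmt.Println(len(args)) // 4
}
```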

Type-Safe Query Functions

A session can create type-safe query functions. This is a powerful feature that makes it easy to create type-safe data access objects.

var getWidget func(id int64) (*Widget, error)

// a session can make a typesafe function to retrieve an individual widget
sess.MakeQuery(&getWidget)

// now use the created function
widget, err := getWidget(42)
if err != nil {
    return err
}

// ... now use the widget ...

See Session.MakeQuery in the GoDoc for examples.

Documentation

Overview

Package sqlr is designed to reduce the effort required to work with SQL databases. It is intended for programmers who are comfortable with writing SQL, but would like assistance with the sometimes tedious process of preparing SQL queries for tables that have a large number of columns, or have a variable number of input parameters.

This GoDoc summary provides an overview of how to use this package. For more detailed documentation, see https://jjeffery.github.io/sqlr.

Prepare SQL queries based on row structures

Preparing SQL queries with many placeholder arguments is tedious and error-prone. The following insert query has a dozen placeholders, and it is difficult to match up the columns with the placeholders. It is not uncommon to have tables with many more columns than this example, and the level of difficulty increases with the number of columns in the table.

insert into users(id,given_name,family_name,dob,ssn,street,locality,postcode,
country,phone,mobile,fax) values(?,?,?,?,?,?,?,?,?,?,?,?)

This package uses reflection to simplify the construction of SQL queries. Supplementary information about each database column is stored in the structure tag of the associated field.

type User struct {
    ID          int       `sql:"primary key"`
    GivenName   string
    FamilyName  string
    DOB         time.Time
    SSN         string
    Street      string
    Locality    string
    Postcode    string
    Country     string
    Phone       string
    Mobile      string
    Facsimile   string    `sql:"fax"` // "fax" overrides the column name
}

The calling program creates a schema, which describes rules for generating SQL statements. These rules include specifying the SQL dialect (eg MySQL, Postgres, SQLite) and the naming convention used to convert Go struct field names into column names (eg "GivenName" => "given_name"). The schema is usually created during program initialization. Once created, a schema is immutable and can be called concurrently from multiple goroutines.

schema := NewSchema(
  WithDialect(MySQL),
  WithNamingConvention(SnakeCase),
)

A session is created using a context, a database connection (eg *sql.DB, *sql.Tx, *sql.Conn), and a schema. A session is inexpensive to create, and is intended to last no longer than a single request (which might be an HTTP request, in the case of an HTTP server). A session is bounded by the lifetime of its context.

session := NewSession(ctx, tx, schema)

Once a session has been created, it is possible to create simple row insert/update statements with minimal effort.

var row User
// ... populate row with data here and then ...

// generates the correct SQL to insert a row into the users table
result, err := session.Row(row).Exec("insert into users({}) values({})")

// ... and then later on ...

// generates the correct SQL to update the matching row in the users table
result, err := session.Row(row).Exec("update users set {} where {}")

The Exec method parses the SQL query and replaces occurrences of "{}" with the column names or placeholders that make sense for the SQL clause in which they occur. In the example above, the insert and update statements would look like:

insert into users(`id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,
`locality`,`postcode`,`country`,`phone`,`mobile`,`fax`) values(?,?,?,?,
?,?,?,?,?,?,?,?)

update users set `given_name`=?,`family_name`=?,`dob`=?,`ssn`=?,`street`=?,
`locality`=?,`postcode`=?,`country`=?,`phone`=?,`mobile`=?,`fax`=? where `id`=?

If the schema is created with a different dialect then the generated SQL will be different. For example if the Postgres dialect was used the insert and update queries would look more like:

insert into users("id","given_name","family_name","dob","ssn","street","locality",
"postcode","country","phone","mobile","fax") values($1,$2,$3,$4,$5,$6,$7,$8,$9,
$10,$11,$12)

update users set "given_name"=$1,"family_name"=$2,"dob"=$3,"ssn"=$4,"street"=$5,
"locality"=$6,"postcode"=$7,"country"=$8,"phone"=$9,"mobile"=$10,"fax"=$11
where "id"=$12

Inserting and updating a single row are common enough operations that the session has methods that make it very simple:

session.InsertRow(row)
session.UpdateRow(row)

Select queries can be performed using the session's Select method:

var rows []*User

// will populate rows slice with the results of the query
rowCount, err := session.Select(&rows, "select {} from users where postcode = ?", postcode)

var row User

// will populate row with the first row returned by the query
rowCount, err = session.Select(&row, "select {} from users where {}", userID)

// more complex query involving joins and aliases
rowCount, err = session.Select(&rows, `
    select {alias u}
    from users u
    inner join user_search_terms ust on ust.user_id = u.id
    where ust.search_term like ?
    order by {alias u}`, searchTermText + "%")

The SQL queries prepared in the above example would look like the following:

select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,
`postcode`,`country`,`phone`,`mobile`,`fax` from users where postcode=?

select `id`,`given_name`,`family_name`,`dob`,`ssn`,`street`,`locality`,
`postcode`,`country`,`phone`,`mobile`,`fax` from users where id=?

select u.`id`,u.`given_name`,u.`family_name`,u.`dob`,u.`ssn`,u.`street`,
u.`locality`,u.`postcode`,u.`country`,u.`phone`,u.`mobile`,u.`fax`
from users u inner join user_search_terms ust on ust.user_id = u.id where
ust.search_term like ? order by u.`id`

The examples are using a MySQL dialect. If the schema had been set up for, say, a Postgres dialect, a generated query would look more like:

select "id","given_name","family_name","dob","ssn","street","locality",
"postcode","country","phone","mobile","fax" from users where postcode=$1

It is an important point to note that this feature is not about writing the SQL for the programmer. Rather it is about "filling in the blanks": allowing the programmer to specify as much of the SQL query as they want without having to write the tiresome bits.

Autoincrement Column Values

When inserting rows, if a column is defined as an autoincrement column, then the generated value will be retrieved from the database server, and the corresponding field in the row structure will be updated.

type Row struct {
  ID   int    `sql:"primary key autoincrement"`
  Name string
}

row := &Row{Name: "some name"}
_, err := session.InsertRow(row)
if err != nil {
  log.Fatal(err)
}

// row.ID will contain the auto-generated value
fmt.Println(row.ID)

Autoincrement column values work for all supported databases (PostgreSQL, MySQL, Microsoft SQL Server and SQLite).

Null Columns

Most SQL database tables have columns that are nullable, and it can be tiresome to always map to pointer types or special nullable types such as sql.NullString. In many cases it is acceptable to map the zero value for the field to a database NULL in the corresponding database column.

Where it is acceptable to map a zero value to a NULL database column, the Go struct field can be marked with the "null" keyword in the field's struct tag.

type Employee struct {
    ID        int     `sql:"primary key"`
    Name      string
    ManagerID int     `sql:"null"`
    Phone     string  `sql:"null"`
}

In the above example the `manager_id` column can be null, but if all valid IDs are non-zero, it is unambiguous to map the zero value to a database NULL. Similarly, if the `phone` column is an empty string it will be stored as a NULL in the database.

Care should be taken, because there are cases where a zero value and a database NULL do not represent the same thing. There are many cases, however, where this feature can be applied, and the result is simpler code that is easier to read.

JSON Columns

It is not uncommon to serialize complex objects as JSON text for storage in an SQL database. Native support for JSON is available in some database servers: in particular, Postgres has excellent support for JSON.

It is straightforward to use this package to serialize a structure field to JSON:

type SomethingComplex struct {
    Name       string
    Values     []int
    MoreValues map[string]float64
    // ... and more fields here ...
}

type Row struct {
    ID    int                `sql:"primary key"`
    Name  string
    Cmplx *SomethingComplex  `sql:"json"`
}

In the example above the `Cmplx` field will be marshaled as JSON text when writing to the database, and unmarshaled into the struct when reading from the database.

WHERE IN Clauses with Multiple Values

Most SQL queries accept a fixed number of parameters, but a query containing a `WHERE IN` clause requires additional string manipulation to match the number of placeholders in the query with the number of arguments.

This package simplifies queries with a variable number of arguments. When processing an SQL query, it detects if any of the arguments are slices:

// GetWidgets returns all the widgets associated with the supplied IDs.
func GetWidgets(session *sqlr.Session, ids ...int) ([]*Widget, error) {
    var rows []*Widget
    _, err := session.Select(&rows, `select {} from widgets where id in (?)`, ids)
    if err != nil {
      return nil, err
    }
    return rows, nil
}

In the above example, the number of placeholders ("?") in the query will be increased to match the number of values in the `ids` slice. The expansion logic can handle any mix of slice and scalar arguments.

Type-Safe Query Functions

A session can create type-safe query functions. This is a powerful feature that makes it easy to create type-safe data access code. See the Session.MakeQuery function for examples.

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Column added in v0.6.0

type Column struct {
	// contains filtered or unexported fields
}

Column contains meta-data about a column in a database table.

func (*Column) AutoIncrement added in v0.6.0

func (col *Column) AutoIncrement() bool

AutoIncrement returns true if this column is an auto-increment column.

func (*Column) EmptyNull added in v0.6.0

func (col *Column) EmptyNull() bool

EmptyNull returns true if the zero value for the associated field type should be stored as NULL in the database.

This is commonly set for string and time.Time values, where an empty string or a zero time.Time is often represented as a database NULL.

func (*Column) JSON added in v0.6.0

func (col *Column) JSON() bool

JSON returns true if the column's value is unmarshaled from JSON into the associated struct field, and the struct field is marshaled into JSON to be stored in the database column.

func (*Column) Name added in v0.6.0

func (col *Column) Name() string

Name returns the name of the database column.

func (*Column) PrimaryKey added in v0.6.0

func (col *Column) PrimaryKey() bool

PrimaryKey returns true if this column is the primary key, or forms part of the primary key.

func (*Column) Version added in v0.6.0

func (col *Column) Version() bool

Version returns true if this column is an optimistic locking version column.

type ColumnConfig added in v0.6.0

type ColumnConfig struct {
	// ColumnName optionally specifies the database column
	// associated with the field.
	ColumnName string

	// Ignore optionally indicates that there is no database
	// column associated with this field.
	Ignore bool

	// PrimaryKey optionally indicates that this field forms
	// part of the primary key. When more than one field forms
	// part of the primary key, the order of columns is determined
	// by the order in the corresponding row struct.
	PrimaryKey bool

	// AutoIncrement optionally indicates that the column associated with
	// this field is an auto-incrementing column (aka an identity column).
	AutoIncrement bool

	// Version optionally indicates that the column associated with this
	// field is used for optimistic locking. The value of the field is incremented
	// every time the row is updated.
	Version bool

	// EmptyNull optionally indicates that an empty value in the field
	// should be interpreted as a NULL value in the database column, and a
	// NULL value in the database should be converted to an empty value.
	//
	// This setting is most applicable to strings and timestamps: it is
	// common to convert an empty string ("") and a zero time (time.Time{})
	// to a NULL value in the database.
	EmptyNull bool

	// JSON optionally indicates that the value of the field should be
	// marshaled into JSON before storing in the database, and that the
	// value in the database should be unmarshaled from JSON before
	// being stored in the field.
	JSON bool

	// NaturalKey optionally indicates that the column forms part of a
	// natural key for the row. When a column forms part of a natural
	// key, then the value in that field is included in any error message
	// generated for row-level error conditions. This can be helpful
	// for diagnostics and debugging.
	NaturalKey bool

	// OverrideStructTag optionally specifies that the configuration
	// in this struct should override all configuration present in
	// the field's struct tag. This would only be used in unusual
	// circumstances.
	OverrideStructTag bool
}

ColumnConfig contains configuration for an individual database column.

Column configuration is typically specified in the struct tag of the relevant struct field, so it is not usually necessary to specify column configuration using this structure.

Example
type Address struct {
	Street   string
	Locality string
	City     string
	State    string
	Country  string
}

type PersonRow struct {
	ID      int64 `sql:"primary key"`
	Name    string
	DOB     time.Time
	Address Address
}

schema := NewSchema(
	WithTables(TablesConfig{
		(*PersonRow)(nil): {
			Columns: ColumnsConfig{
				"Address.Locality": {
					ColumnName: "address_suburb",
				},
				"DOB": {
					ColumnName: "date_of_birth",
					EmptyNull:  true,
				},
			},
		},
	}),
)

doSomethingWith(schema)
Output:

type ColumnsConfig added in v0.6.0

type ColumnsConfig map[string]ColumnConfig

ColumnsConfig is a map of individual column configurations, keyed by the field path to the field.

The field path is the field name when the field is a simple, scalar type. When the row type contains embedded struct fields, the field path is all of the field names required to navigate to the field, separated by a period.

type Row struct {
    Name string         // field path = "Name"
    Address struct {
        Street   string // field path = "Address.Street"
        Locality string	// field path = "Address.Locality"
    }
}
Example
type Address struct {
	Street   string
	Locality string
	City     string
	State    string
	Country  string
}

type PersonRow struct {
	ID      int64 `sql:"primary key"`
	Name    string
	DOB     time.Time
	Address Address
}

schema := NewSchema(
	WithTables(TablesConfig{
		(*PersonRow)(nil): {
			Columns: ColumnsConfig{
				"Address.Locality": {
					ColumnName: "address_suburb",
				},
				"DOB": {
					ColumnName: "date_of_birth",
					EmptyNull:  true,
				},
			},
		},
	}),
)

doSomethingWith(schema)
Output:

type Dialect

type Dialect interface {
	// Quote a table name or column name so that it does
	// not clash with any reserved words. The SQL-99 standard
	// specifies double quotes (eg "table_name"), but many
	// dialects, including MySQL, use the backtick (eg `table_name`).
	// SQL server uses square brackets (eg [table_name]).
	Quote(column string) string

	// Return the placeholder for binding a variable value.
	// Most SQL dialects support a single question mark (?), but
	// PostgreSQL uses numbered placeholders (eg $1).
	Placeholder(n int) string
}

Dialect is an interface used to handle differences in SQL dialects.

var (
	Postgres Dialect // Quote: "column_name", Placeholders: $1, $2, $3
	MySQL    Dialect // Quote: `column_name`, Placeholders: ?, ?, ?
	MSSQL    Dialect // Quote: [column_name], Placeholders: ?, ?, ?
	SQLite   Dialect // Quote: `column_name`, Placeholders: ?, ?, ?
	ANSISQL  Dialect // Quote: "column_name", Placeholders: ?, ?, ?
)

Pre-defined dialects
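Because Dialect is a two-method interface, supplying a custom dialect is straightforward. Below is a minimal sketch (the bracketDialect type is hypothetical, mirroring the SQL Server conventions described in the interface comments):

```go
package main

import "fmt"

// bracketDialect is a hypothetical dialect sketch: it quotes names with
// square brackets (SQL Server style) and uses "?" placeholders.
type bracketDialect struct{}

// Quote wraps a table or column name in square brackets.
func (bracketDialect) Quote(column string) string {
	return "[" + column + "]"
}

// Placeholder ignores the ordinal, because this dialect uses "?" for
// every bound parameter.
func (bracketDialect) Placeholder(n int) string {
	return "?"
}

func main() {
	var d bracketDialect
	fmt.Println(d.Quote("given_name")) // [given_name]
	fmt.Println(d.Placeholder(3))      // ?
}
```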

func DefaultDialect

func DefaultDialect() Dialect

DefaultDialect is the dialect used by a schema if none is specified. It is chosen from the first driver in the list of drivers returned by the sql.Drivers() function.

Many programs only load one database driver, and in this case the default dialect should be the correct choice.

func DialectFromDB added in v0.6.0

func DialectFromDB(db *sql.DB) Dialect

DialectFromDB returns the appropriate dialect for a database handle.

func DialectFromDriver added in v0.6.0

func DialectFromDriver(driver driver.Driver) Dialect

DialectFromDriver returns the appropriate dialect for a database driver.

type NamingConvention

type NamingConvention interface {
	// Convert converts a Go struct field name according to the naming convention.
	Convert(fieldName string) string

	// Join joins two or more converted names to form a column name.
	// Used for naming columns based on fields within embedded
	// structures.
	Join(names []string) string

	// TableName converts the name of the row type into a table name.
	TableName(typeName string) string
}

The NamingConvention interface provides methods that are used to infer a database column name from its associated Go struct field, and a database table name from the name of its associated row type.

var (
	SnakeCase NamingConvention // eg "FieldName" -> "field_name"
	SameCase  NamingConvention // eg "FieldName" -> "FieldName"
	LowerCase NamingConvention // eg "FieldName" -> "fieldname"
)

Pre-defined naming conventions. If a naming convention is not specified for a schema, it defaults to snake_case.
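A custom naming convention implements the same three methods. Below is a sketch of a hypothetical upperSnake convention producing UPPER_SNAKE_CASE names (naive by design: initialisms such as "DOB" come out as "D_O_B", and TableName simply reuses Convert without pluralizing):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// upperSnake is a hypothetical naming convention sketch that maps Go
// names to UPPER_SNAKE_CASE. It is not part of this package.
type upperSnake struct{}

// Convert inserts an underscore before each interior upper-case rune and
// upper-cases the result, eg "GivenName" -> "GIVEN_NAME".
func (upperSnake) Convert(fieldName string) string {
	var b strings.Builder
	for i, r := range fieldName {
		if unicode.IsUpper(r) && i > 0 {
			b.WriteRune('_')
		}
		b.WriteRune(unicode.ToUpper(r))
	}
	return b.String()
}

// Join joins converted names for fields reached through embedded structs.
func (upperSnake) Join(names []string) string {
	return strings.Join(names, "_")
}

// TableName naively reuses Convert; a real convention might also
// pluralize the name.
func (c upperSnake) TableName(typeName string) string {
	return c.Convert(typeName)
}

func main() {
	var nc upperSnake
	fmt.Println(nc.Convert("GivenName"))                // GIVEN_NAME
	fmt.Println(nc.Join([]string{"ADDRESS", "STREET"})) // ADDRESS_STREET
}
```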

type OptimisticLockingError added in v0.6.0

type OptimisticLockingError struct {
	Table           *Table
	Row             interface{}
	ExpectedVersion int64
	ActualVersion   int64
}

OptimisticLockingError is an error generated during an Update or Upsert operation, where the value of the row struct version field does not match the value of the corresponding row in the database.

func (*OptimisticLockingError) Error added in v0.6.0

func (e *OptimisticLockingError) Error() string

type Querier

type Querier interface {
	// ExecContext executes a query without returning any rows.
	// The args are for any placeholder parameters in the query.
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)

	// QueryContext executes a query that returns rows, typically a SELECT.
	// The args are for any placeholder parameters in the query.
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}

The Querier interface defines the SQL database access methods used by this package.

The *DB, *Tx and *Conn types in the standard library package "database/sql" all implement this interface. This interface is based on https://godoc.org/github.com/golang-sql/sqlexp#Querier.

type Schema

type Schema struct {
	// contains filtered or unexported fields
}

Schema contains information known about the database schema and is used when generating SQL statements.

Information stored in the schema includes the SQL dialect, and the naming convention used to convert Go struct field names into database column names, and Go type names into database table names.

Although the zero value schema can be used and represents a database schema with default values, it is also common to use the NewSchema function to create a schema with options.

A schema maintains an internal cache, which is used to store details of frequently called SQL commands for improved performance. All methods are safe to call concurrently from different goroutines.

func NewSchema

func NewSchema(opts ...SchemaOption) *Schema

NewSchema creates a schema with options.

If the schema has any inconsistencies, then this function will panic. If there is an expectation that the options contain invalid data, call NewSchemaE instead. Errors due to inconsistencies are only possible if the WithTables option is specified.

func NewSchemaE added in v0.6.0

func NewSchemaE(opts ...SchemaOption) (*Schema, error)

NewSchemaE creates a new schema with options. If the schema contains any inconsistencies, then an error is returned.

Because a schema is usually created during program initialization, it is more common for a program to call NewSchema, which will panic if there are any inconsistencies in the schema configuration. Errors due to inconsistencies are only possible if the WithTables option is specified.

func (*Schema) Key

func (s *Schema) Key() string

Key returns the key associated with the schema, which is specified using the WithKey schema option.

func (*Schema) Prepare

func (s *Schema) Prepare(row interface{}, query string) (*Stmt, error)

Prepare creates a prepared statement for later queries or executions. Multiple queries or executions may be run concurrently from the returned statement.

Statements are low-level, and most programs do not need to use them directly. This method may be removed in a future version of the API.

Example
type UserRow struct {
	ID         int `sql:"primary key autoincrement"`
	GivenName  string
	FamilyName string
}

// Define different schemas for different dialects and naming conventions
schemas := []*Schema{
	NewSchema(
		WithDialect(MSSQL),
		WithNamingConvention(SameCase),
	),
	NewSchema(
		WithDialect(MySQL),
		WithNamingConvention(LowerCase),
	),
	NewSchema(
		WithDialect(Postgres),
		WithNamingConvention(SnakeCase),
	),
}

// for each schema, print the SQL generated for each statement
for _, schema := range schemas {
	stmt, err := schema.Prepare(UserRow{}, `insert into users({}) values({})`)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(stmt)
}
Output:

insert into users([GivenName], [FamilyName]) values(?, ?)
insert into users(`givenname`, `familyname`) values(?, ?)
insert into users("given_name", "family_name") values($1, $2)

func (*Schema) TableFor added in v0.6.0

func (s *Schema) TableFor(row interface{}) *Table

TableFor returns the table information associated with row, which should be an instance of a struct type or a pointer to a struct type. If row does not refer to a struct type then a panic results.
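A sketch of inspecting a table via TableFor; the row type is hypothetical, and the derived table name depends on the naming convention:

```go
type UserRow struct {
	ID   int `sql:"primary key autoincrement"`
	Name string
}

schema := NewSchema(WithNamingConvention(SnakeCase))

tbl := schema.TableFor(UserRow{}) // a pointer such as &UserRow{} also works
fmt.Println(tbl.Name())            // table name derived from the row type
fmt.Println(tbl.RowType())         // reflect type of UserRow
fmt.Println(len(tbl.PrimaryKey())) // number of primary key columns
```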

type SchemaOption

type SchemaOption func(schema *Schema) error

A SchemaOption provides optional configuration and is supplied when creating a new Schema.

func ForDB

func ForDB(db *sql.DB) SchemaOption

ForDB creates an option that sets the dialect based on the DB handle.

func WithDialect

func WithDialect(dialect Dialect) SchemaOption

WithDialect provides an option that sets the schema's dialect.

func WithField deprecated

func WithField(fieldName string, columnName string) SchemaOption

WithField creates an option that maps a Go field name to a database column name.

Deprecated: define field/column name mappings using TableConfig instead.

It is more common to override column names in the struct tag of the field, but there are some cases where it makes sense to declare column name overrides directly with the schema. One situation is with fields within embedded structures. For example, with the following structures:

type UserRow struct {
    Name string
    HomeAddress Address
    WorkAddress Address
}

type Address struct {
    Street   string
    Locality string
    State    string
}

If the column name for HomeAddress.Locality is called "home_suburb" for historical reasons, then it is not possible to specify a rename in the structure tag without also affecting the WorkAddress.Locality field. In this situation it is only possible to specify the column name override using the WithField option:

schema := NewSchema(
    WithField("HomeAddress.Locality", "home_suburb"),
)

func WithIdentifier deprecated

func WithIdentifier(identifier string, meaning string) SchemaOption

WithIdentifier creates an option that performs a global rename of an identifier when preparing SQL queries. This option is not needed very often: its main purpose is for helping a program operate against two different database schemas where table and column names follow a different naming convention.

The example shows a situation where a program operates against an SQL Server database where a table is named "[User]", but the same table is named "users" in the Postgres schema.

Deprecated: Use (TODO) instead. This is a confusing API, and another, clearer option will be provided.

Example
// Take an example of a program that operates against an SQL Server
// database where a table is named "[User]", but the same table is
// named "users" in the Postgres schema.
mssql := NewSchema(
	WithDialect(MSSQL),
	WithNamingConvention(SameCase),
	WithIdentifier("[User]", "users"),
	WithIdentifier("UserId", "user_id"),
	WithIdentifier("[Name]", "name"),
)
postgres := NewSchema(
	WithDialect(Postgres),
	WithNamingConvention(SnakeCase),
)

type User struct {
	UserId int `sql:"primary key"`
	Name   string
}

// If a statement is prepared and executed for both
const query = "select {} from users where user_id = ?"

mssqlStmt, err := mssql.Prepare(User{}, query)
if err != nil {
	log.Fatal(err)
}
fmt.Println(mssqlStmt)
postgresStmt, err := postgres.Prepare(User{}, query)
if err != nil {
	log.Fatal(err)
}
fmt.Println(postgresStmt)
Output:

select [UserId], [Name] from [User] where UserId = ?
select "user_id", "name" from users where user_id = $1

func WithKey

func WithKey(key string) SchemaOption

WithKey creates an option that associates the schema with a key in struct field tags. This option is not needed very often: its main purpose is for helping a program operate against two different database schemas.

func WithNamingConvention

func WithNamingConvention(convention NamingConvention) SchemaOption

WithNamingConvention creates an option that sets the schema's naming convention.

func WithTables added in v0.6.0

func WithTables(tables TablesConfig) SchemaOption

WithTables creates an option that sets configuration for one or more tables and their associated row types.
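A sketch of overriding a table name with WithTables; keying the TablesConfig map with a typed nil pointer is one plausible way to identify the row type, and the names used are illustrative:

```go
type UserRow struct {
	ID   int `sql:"primary key"`
	Name string
}

schema := NewSchema(
	WithNamingConvention(SnakeCase),
	WithTables(TablesConfig{
		(*UserRow)(nil): {
			// override the table name that would otherwise
			// be derived from the row type name
			TableName: "app_users",
		},
	}),
)
```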

type Session added in v0.6.0

type Session struct {
	// contains filtered or unexported fields
}

A Session is a request-scoped database session. It can execute queries and it can construct strongly-typed query functions.

func NewSession added in v0.6.0

func NewSession(ctx context.Context, querier Querier, schema *Schema) *Session

NewSession returns a new, request-scoped session.

Although it is not mandatory, it is a good practice to call a session's Close method at the end of a request.
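In outline, a request handler might create and close a session like this; the sketch assumes that *sql.DB satisfies Querier, as *sql.Tx does in the MakeQuery example:

```go
func handleRequest(ctx context.Context, db *sql.DB, schema *Schema) error {
	sess := NewSession(ctx, db, schema)
	defer sess.Close() // release session resources at the end of the request

	// ... run queries with sess ...
	return nil
}
```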

func (*Session) Close added in v0.6.0

func (sess *Session) Close() error

Close releases resources associated with the session. Any attempt to query using the session will fail after Close has been called.

Because a session is request-scoped, it should never be used once a request has completed. Calling a session's Close method at the end of a request is an effective way to release resources associated with the session and to ensure that it can no longer be used.

Close implements the io.Closer interface. It always returns nil.

func (*Session) Context added in v0.6.0

func (sess *Session) Context() context.Context

Context returns the context associated with this session.

Note that the context returned by this function is not identical to the context passed to the NewSession function. The context returned by this method is canceled when the Close method is called.

func (*Session) Exec added in v0.6.0

func (sess *Session) Exec(query string, args ...interface{}) (sql.Result, error)

Exec executes a query without returning any rows. The args are for any placeholder parameters in the query.

func (*Session) HandleRows added in v0.6.0

func (sess *Session) HandleRows(callback interface{})

HandleRows supplies a callback function for the session to call when one or more rows are retrieved during a select query. The callback must have a function signature like the following:

func(rows []*RowType)

Where RowType is a structure that represents a row. Whenever a select query returns one or more RowType instances, this callback is called.

If callback is not a function as described above, this method will panic.
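One plausible use of HandleRows is priming a request-scoped cache; the row type and cache shown are hypothetical:

```go
type UserRow struct {
	ID   int `sql:"primary key"`
	Name string
}

cache := make(map[int]*UserRow)

// Called whenever any select query in this session returns UserRow rows.
sess.HandleRows(func(rows []*UserRow) {
	for _, row := range rows {
		cache[row.ID] = row
	}
})
```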

func (*Session) InsertRow added in v0.6.0

func (sess *Session) InsertRow(row interface{}) error

InsertRow inserts one row into the database.

If the row has an auto-increment field, then that field is updated with the value of the auto-increment column.
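A minimal sketch, assuming a row type with an auto-increment primary key (the type and column names are illustrative):

```go
type UserRow struct {
	ID   int `sql:"primary key autoincrement"`
	Name string
}

row := &UserRow{Name: "Alice"} // ID left as zero: the database assigns it
if err := sess.InsertRow(row); err != nil {
	log.Fatal(err)
}
log.Println("assigned ID:", row.ID) // auto-increment field now populated
```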

func (*Session) MakeQuery added in v0.6.0

func (sess *Session) MakeQuery(funcPtr ...interface{})

MakeQuery makes one or more functions that can be used to query in a type-safe manner. Each funcPtr is a pointer to a function that will be created by this function.

If "Row" is the row type, and "RowID" is the primary key type for Row objects, then the function can be any one of the following signatures:

// Get one row based on its ID
func(id RowID) (*Row, error)

// Get multiple rows given multiple IDs
func(ids []RowID) ([]*Row, error)
func(ids ...RowID) ([]*Row, error)

// Get one row returning a thunk: batches multiple requests into
// one query using the dataloader pattern
func(id RowID) func() (*Row, error)

// Execute a query that will return multiple Row objects
func(query string, args ...interface{}) ([]*Row, error)

// Execute a query that will return a single Row object.
func(query string, args ...interface{}) (*Row, error)

// Execute a query that will return a single integer value.
// Useful for select count(*) queries.
func(query string, args ...interface{}) (int, error)
func(query string, args ...interface{}) (int64, error)

If any of the funcPtr arguments are not pointers to a function, or do not fit one of the known function prototypes, then this function will panic.

Example
var schema Schema
ctx := context.Background()
tx := beginTransaction() // get a DB transaction, assumes no errors
defer tx.Commit()        // WARNING: no error handling here: example code only

type Row struct {
	ID            int64 `sql:"primary key"`
	Name          string
	FavoriteColor string
}

// begin a request-scoped database session
sess := NewSession(ctx, tx, &schema)

// data access object
var dao struct {
	Get    func(id int64) (*Row, error)
	Select func(query string, args ...interface{}) ([]*Row, error)
}

sess.MakeQuery(&dao.Get, &dao.Select)

// can now use the type-safe data access functions
row42, err := dao.Get(42)
if err != nil {
	log.Fatal(err)
}
log.Println("Row 42:", row42)

redRows, err := dao.Select("select {} from rows where favorite_color = ?", "red")
if err != nil {
	log.Fatal(err)
}
for _, row := range redRows {
	log.Println("Likes red:", row)
}
Output:

func (*Session) Querier added in v0.6.0

func (sess *Session) Querier() Querier

Querier returns the database querier associated with this session.

func (*Session) Query added in v0.6.0

func (sess *Session) Query(query string, args ...interface{}) (*sql.Rows, error)

Query performs a query that is not row-based. The query placeholders are converted to the format suitable for the SQL dialect, and any args that are slices are expanded (e.g. for WHERE IN (...) clauses).
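A sketch of slice expansion with Query, assuming the Postgres dialect so that the single placeholder expands to $1, $2, $3 (the table and columns are illustrative):

```go
ids := []int{1, 2, 3}

// The slice argument is expanded, e.g. to:
//  select id, name from users where id in ($1, $2, $3)
rows, err := sess.Query("select id, name from users where id in (?)", ids)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
	var (
		id   int
		name string
	)
	if err := rows.Scan(&id, &name); err != nil {
		log.Fatal(err)
	}
	log.Println(id, name)
}
```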

func (*Session) Row added in v0.6.0

func (sess *Session) Row(row interface{}) *SessionRow

Row returns a session row, which is then used to execute a query based on the contents of row.

Example
var (
	session *Session
)

// start a session
session = newSession()

// ExampleRow is an example of a row structure.
type ExampleRow struct {
	ID    int `sql:"primary key"`
	Name  string
	Value int
}

var row = &ExampleRow{
	ID:    1,
	Name:  "first row",
	Value: 10,
}

// Insert a row
result, err := session.Row(row).Exec("insert into examples({}) values({})")
checkError(err)
count, err := result.RowsAffected()
checkError(err)
log.Printf("row inserted, count=%d", count)

// Delete the row
result, err = session.Row(row).Exec("delete from examples where {}")
checkError(err)
count, err = result.RowsAffected()
checkError(err)
log.Printf("row deleted, count=%d", count)
Output:

func (*Session) Schema added in v0.6.0

func (sess *Session) Schema() *Schema

Schema returns the schema used for this session.

func (*Session) Select added in v0.6.0

func (sess *Session) Select(rows interface{}, query string, args ...interface{}) (int, error)

Select executes a SELECT query and stores the result in rows. The argument passed to rows can be one of the following:

A pointer to an array of structs; or
a pointer to an array of struct pointers; or
a pointer to a struct.

When rows is a pointer to an array it is populated with one item for each row returned by the SELECT query.

When rows is a pointer to a struct, it is populated with the first row returned from the query. This is a good option when the query will only return one row.

Select returns the number of rows returned by the SELECT query.
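A sketch of two of the argument forms (the row type, table, and filters are illustrative):

```go
type UserRow struct {
	ID   int `sql:"primary key"`
	Name string
}

// Pointer to a slice of struct pointers: one item per row returned.
var users []*UserRow
n, err := sess.Select(&users, "select {} from users where name like ?", "A%")
if err != nil {
	log.Fatal(err)
}
log.Printf("%d rows selected", n)

// Pointer to a struct: populated with the first row only.
var user UserRow
if _, err := sess.Select(&user, "select {} from users where id = ?", 42); err != nil {
	log.Fatal(err)
}
```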

func (*Session) UpdateRow added in v0.6.0

func (sess *Session) UpdateRow(row interface{}) (int, error)

UpdateRow updates one row in the database. It returns the number of rows updated, which should be zero or one.

If the row has a version field, then that field is incremented during the update. If the row being updated does not match the original value of the version field, then an OptimisticLockingError will be returned.
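In outline, a caller can distinguish a missing row from a successful update; the handling of OptimisticLockingError is sketched, not prescriptive:

```go
n, err := sess.UpdateRow(row)
if err != nil {
	// err may be an OptimisticLockingError if the row has a version
	// field and another session updated the row first
	log.Fatal(err)
}
if n == 0 {
	log.Println("no row updated: it may have been deleted")
}
```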

type SessionRow added in v0.6.0

type SessionRow struct {
	Session *Session
	Row     interface{}
}

SessionRow is used to perform queries that apply to a single row.

func (*SessionRow) Exec added in v0.6.0

func (row *SessionRow) Exec(query string, args ...interface{}) (sql.Result, error)

Exec executes a query using the row as parameters to the query.

Example
var (
	session *Session
)

// start a session
session = newSession()

// ExampleRow is an example of a row structure.
type ExampleRow struct {
	ID    int `sql:"primary key"`
	Name  string
	Value int
}

var row = &ExampleRow{
	ID:    1,
	Name:  "first row",
	Value: 10,
}

// Performs an insert, bypassing any of the cleverness of the
// InsertRow() method.
//
// SQL looks something like:
//  insert into the_table("id", "name", "value") values ($1, $2, $3)
// Arguments are:
//  [ 1, "first row", 10 ]
result, err := session.Row(row).Exec("insert into the_table({}) values({})")
checkError(err)
n, err := result.RowsAffected()
checkError(err)
log.Printf("row inserted, count=%d", n)

// Perform an update with an additional condition
//
// SQL looks like:
//  update the_table set name = $1, value = $2 where id = $3 and value > $4
// Arguments are:
//  [ "first row", 10, 1, 10 ]
result, err = session.Row(row).Exec("update the_table set {} where {} and value > ?", 10)
checkError(err)
n, err = result.RowsAffected()
checkError(err)
log.Printf("row updated, count=%d", n)
Output:

type Stmt

type Stmt struct {
	// contains filtered or unexported fields
}

Stmt is a prepared statement. A Stmt is safe for concurrent use by multiple goroutines.

Stmt is important for the implementation, but exports few public methods: currently the only public operation is to print the SQL. It may be removed from the public API in a future version.

func (*Stmt) String

func (stmt *Stmt) String() string

String prints the SQL query associated with the statement.

type Table added in v0.6.0

type Table struct {
	// contains filtered or unexported fields
}

Table represents the information known about a database table. The information known about a table is derived from the row type (which must be a struct), and any configuration that was provided via the WithTables option when the schema was created.

func (*Table) Columns added in v0.6.0

func (tbl *Table) Columns() []*Column

Columns returns all columns defined for the table.

func (*Table) Name added in v0.6.0

func (tbl *Table) Name() string

Name returns the name of the table.

func (*Table) NaturalKey added in v0.6.0

func (tbl *Table) NaturalKey() []*Column

NaturalKey returns the natural key columns for the table. Returns nil if no natural key columns have been defined. Natural key columns are columns that are useful for identifying a row. They are used in error messages only, and may be removed in a future version to simplify the API.

func (*Table) PrimaryKey added in v0.6.0

func (tbl *Table) PrimaryKey() []*Column

PrimaryKey returns the column or columns that form the primary key for the table. Returns nil if no primary key has been defined.

func (*Table) RowType added in v0.6.0

func (tbl *Table) RowType() reflect.Type

RowType returns the row type, which is always a struct.

type TableConfig added in v0.6.0

type TableConfig struct {
	// TableName optionally specifies the name of the database
	// table associated with the row type.
	TableName string

	// Columns is an optional list of column configurations.
	// Only columns with non-default configuration need to
	// be included in this list.
	Columns ColumnsConfig
}

TableConfig contains configuration for an individual database table. It is not necessary to specify configuration for a table if the default settings apply.

type TablesConfig added in v0.6.0

type TablesConfig map[interface{}]TableConfig

TablesConfig is a map of table configurations, keyed by the row type that represents the table.

Directories

Path Synopsis
dataloader
Package dataloader provides an implementation of the data loader pattern, which is useful for batching up requests to a database rather than making a large number of small queries.
docs
Directory docs contains detailed documentation.
private
Package private and subdirectories have no backward compatibility guarantees.
column
Package column extracts database column information from Go struct fields.
dialect
Package dialect handles differences in various SQL dialects.
naming
Package naming provides naming conventions used to convert Go struct field names to database columns.
scanner
Package scanner implements a simple lexical scanner for SQL statements.
wherein
Package wherein expands SQL statements that have placeholders that can accept slices of arguments.
