godb

package module
v1.0.15
Published: Oct 10, 2020 License: MIT Imports: 11 Imported by: 11

README

godb - a Go query builder and struct mapper


godb is a simple Go query builder and struct mapper, not a full-featured ORM. godb does not manage relationships.

Initially, godb was a learning project. The goal was to improve my Go skills by doing some useful things. But more and more features have been added and godb has become a serious project that can be used by others.

godb is a project that is still young and evolving. The API is almost stable, but it can still change slightly from one version to another. Each new version is associated with a tag, so it is possible to target a particular one if necessary.

Features

  • Query builder.
  • Mapping between structs and tables (or views).
  • Mapping with nested structs.
  • Execution of custom SELECT, INSERT, UPDATE and DELETE queries with structs and slices.
  • Optional execution of SELECT queries with an iterator to limit memory consumption if needed (e.g. batches).
  • Execution of raw queries, mapping rows to structs.
  • Optimistic locking.
  • Logging of SQL queries and their durations.
  • Two adjustable prepared statements caches (with/without transaction).
  • RETURNING support for PostgreSQL.
  • OUTPUT support for SQL Server.
  • Optional common DB error handling for backend databases (db.UseErrorParser()).
  • Define your own logger (it should have a Println(...) method).
  • Define how model struct names map to DB table names with db.SetDefaultTableNamer(yourFn). Supported namers are: Plural, Snake, SnakePlural. You can also define a TableName() string method on your struct and return whatever the table name should be.
  • Blacklisting or whitelisting columns for struct-based inserts and updates.
  • Can be used with
    • SQLite
    • PostgreSQL
    • MySQL / MariaDB
    • MS SQL Server
    • any other compatible database if you write an adapter.

I tested godb on different architectures and operating systems: macOS, Windows, Linux, ARM (Cortex A7) and Intel x64.

The current version of godb is compatible with Go 1.13 to 1.15. Older Go versions, from 1.10 to 1.12, are supported by the v1.0.14 tag.

Documentation

There are three forms of documentation :

  • This README with the example presented below, which gives an overview of what godb allows.
  • The tests in dbtests/common, which are run on the different databases supported.
  • Detailed documentation on GoDoc: https://godoc.org/github.com/samonzeweb/godb

Install

go get github.com/samonzeweb/godb

Install the required driver (see tests). You can use multiple databases if needed.

Of course you can also use a dependency management tool like dep.

Running Tests

godb tests use GoConvey and at least SQLite :

go get github.com/smartystreets/goconvey
go get github.com/mattn/go-sqlite3

To run tests, go into the godb directory and execute go test ./...

SQLite tests are run with an in-memory database, so they are fast. You can run tests with other databases, see below.

With the exception of SQLite, all drivers are pure Go and do not require external dependencies.

Test with PostgreSQL

Install the driver and set the GODB_POSTGRESQL environment variable with the PostgreSQL connection string.

go get github.com/lib/pq
GODB_POSTGRESQL="your connection string" go test ./...
Test with MySQL / MariaDB

Install the driver and set the GODB_MYSQL environment variable with the MySQL connection string.

go get github.com/go-sql-driver/mysql
GODB_MYSQL="your connection string" go test ./...
Test with MS SQL Server

Install the driver and set the GODB_MSSQL environment variable with the SQL Server connection string.

go get github.com/denisenkom/go-mssqldb
GODB_MSSQL="your connection string" go test ./...
Test all with Docker

Using Docker you can test with SQLite, PostgreSQL, MariaDB and SQL Server with the testallwithdocker.sh shell script.

SQL Server is greedy; on macOS allow at least 4 GB of memory for Docker.

If the containers are slow to start, the script can wait before accessing them. Simply pass the time to wait, in seconds, as an argument:

./testallwithdocker.sh 15

Example

The example below illustrates the main features of godb.

You can copy the code into an example.go file and run it. You need to create the database and the books table as explained in the code.

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/samonzeweb/godb"
	"github.com/samonzeweb/godb/adapters/sqlite"
	"github.com/samonzeweb/godb/tablenamer"
)

/*
  To run this example, initialize a SQLite3 DB called 'library.db' and add
  a 'books' table like this :

  create table books (
  	id        integer not null primary key autoincrement,
  	title     text    not null,
  	author    text    not null,
  	published date    not null);
*/

// Struct and its mapping
type Book struct {
	Id        int       `db:"id,key,auto"`
	Title     string    `db:"title"`
	Author    string    `db:"author"`
	Published time.Time `db:"published"`
}

// Optional, defaults to the struct name (Book)
func (*Book) TableName() string {
	return "books"
}

// See "group by" example
type CountByAuthor struct {
	Author string `db:"author"`
	Count  int    `db:"count"`
}

func main() {
	// Examples fixtures
	var authorTolkien = "J.R.R. tolkien"

	var bookTheHobbit = Book{
		Title:     "The Hobbit",
		Author:    authorTolkien,
		Published: time.Date(1937, 9, 21, 0, 0, 0, 0, time.UTC),
	}

	var bookTheFellowshipOfTheRing = Book{
		Title:     "The Fellowship of the Ring",
		Author:    authorTolkien,
		Published: time.Date(1954, 7, 29, 0, 0, 0, 0, time.UTC),
	}

	var bookTheTwoTowers = Book{
		Title:     "The Two Towers",
		Author:    authorTolkien,
		Published: time.Date(1954, 11, 11, 0, 0, 0, 0, time.UTC),
	}

	var bookTheReturnOfTheKing = Book{
		Title:     "The Return of the King",
		Author:    authorTolkien,
		Published: time.Date(1955, 10, 20, 0, 0, 0, 0, time.UTC),
	}

	var setTheLordOfTheRing = []Book{
		bookTheFellowshipOfTheRing,
		bookTheTwoTowers,
		bookTheReturnOfTheKing,
	}

	// Connect to the DB
	db, err := godb.Open(sqlite.Adapter, "./library.db")
	panicIfErr(err)
	// OPTIONAL: Set logger to show SQL execution logs
	db.SetLogger(log.New(os.Stderr, "", 0))
	// OPTIONAL: Set how table names are built from struct names (used when a struct has no TableName() method)
	db.SetDefaultTableNamer(tablenamer.Plural())
	// Single insert (id will be updated)
	err = db.Insert(&bookTheHobbit).Do()
	panicIfErr(err)

	// Multiple insert
	// Warning : BulkInsert only updates ids with PostgreSQL and SQL Server!
	err = db.BulkInsert(&setTheLordOfTheRing).Do()
	panicIfErr(err)

	// Count
	count, err := db.SelectFrom("books").Count()
	panicIfErr(err)
	fmt.Println("Books count : ", count)

	// Custom select
	countByAuthor := make([]CountByAuthor, 0, 0)
	err = db.SelectFrom("books").
		Columns("author", "count(*) as count").
		GroupBy("author").
		Having("count(*) > 3").
		Do(&countByAuthor)
	panicIfErr(err)
	fmt.Println("Count by authors : ", countByAuthor)

	// Select single object
	singleBook := Book{}
	err = db.Select(&singleBook).
		Where("title = ?", bookTheHobbit.Title).
		Do()
	if err == sql.ErrNoRows {
		// sql.ErrNoRows is only returned when the target is a single instance
		fmt.Println("Book not found !")
	} else {
		panicIfErr(err)
	}

	// Select single record values
	authorName := ""
	title := ""
	err = db.SelectFrom("books").
		Where("title = ?", bookTheHobbit.Title).
		Columns("author", "title").
		Scanx(&authorName, &title)
	if err == sql.ErrNoRows {
		// sql.ErrNoRows is only returned when the target is a single instance
		fmt.Println("Book not found !")
	} else {
		panicIfErr(err)
	}

	// Select multiple objects
	multipleBooks := make([]Book, 0, 0)
	err = db.Select(&multipleBooks).Do()
	panicIfErr(err)
	fmt.Println("Books found : ", len(multipleBooks))

	// Iterator
	iter, err := db.SelectFrom("books").
		Columns("id", "title", "author", "published").
		DoWithIterator()
	panicIfErr(err)
	for iter.Next() {
		book := Book{}
		err := iter.Scan(&book)
		panicIfErr(err)
		fmt.Println(book)
	}
	panicIfErr(iter.Err())
	panicIfErr(iter.Close())

	// Raw query
	subQuery := godb.NewSQLBuffer(0, 0). // sizes are indicative
						Write("select author ").
						Write("from books ").
						WriteCondition(godb.Q("where title = ?", bookTheHobbit.Title))

	queryBuffer := godb.NewSQLBuffer(64, 0).
		Write("select * ").
		Write("from books ").
		Write("where author in (").
		Append(subQuery).
		Write(")")

	panicIfErr(queryBuffer.Err())

	books := make([]Book, 0, 0)
	err = db.RawSQL(queryBuffer.SQL(), queryBuffer.Arguments()...).Do(&books)
	panicIfErr(err)
	fmt.Printf("Raw query found %d books\n", len(books))

	// Update and transactions
	err = db.Begin()
	panicIfErr(err)

	updated, err := db.UpdateTable("books").Set("author", "Tolkien").Do()
	panicIfErr(err)
	fmt.Println("Books updated : ", updated)

	bookTheHobbit.Author = "Tolkien"
	err = db.Update(&bookTheHobbit).Do()
	panicIfErr(err)
	fmt.Println("Books updated : ", updated)

	err = db.Rollback()
	panicIfErr(err)

	// Delete
	deleted, err := db.Delete(&bookTheHobbit).Do()
	panicIfErr(err)
	fmt.Println("Books deleted : ", deleted)

	deleted, err = db.DeleteFrom("books").
		WhereQ(godb.Or(
			godb.Q("author = ?", authorTolkien),
			godb.Q("author = ?", "Georged Orwell"),
		)).
		Do()
	panicIfErr(err)
	fmt.Println("Books deleted : ", deleted)

	// Bye
	err = db.Close()
	panicIfErr(err)
}

// It's just an example, what did you expect ? (never do that in real code)
func panicIfErr(err error) {
	if err != nil {
		panic(err)
	}
}

Licence

Released under the MIT License, see LICENSE.txt for more information.

Documentation

Overview

Package godb is a query builder and struct mapper.

godb does not manage relationships like Active Record or Entity Framework, it's not a full-featured ORM. Its goal is to be more productive than manually doing the mapping between Go structs and database tables.

godb needs adapters to use databases; adapters are packaged with godb for:

  • SQLite
  • PostgreSQL
  • MySQL
  • SQL Server

Start with an adapter and the Open function, which returns a godb.DB pointer:

import (
	"log"

	"github.com/samonzeweb/godb"
	"github.com/samonzeweb/godb/adapters/sqlite"
)

func main() {
	db, err := godb.Open(sqlite.Adapter, "./library.db")
	if err != nil {
		log.Fatal(err)
	}
	…
}

There are three ways to execute SQL with godb:

  • the statements tools
  • the structs tools
  • and raw queries

Using raw queries you can execute any SQL queries and get the results into a slice of structs (or single struct) using the automatic mapping.

The structs tools look more 'ORM-ish', as they take instances of objects or slices to run select, insert, update and delete queries.

The statements tools stand between raw queries and structs tools. They are easier to use than raw queries, but are limited to simpler cases.

Statements tools

The statements tools are based on the following types:

  • SelectStatement : initialize it with db.SelectFrom
  • InsertStatement : initialize it with db.InsertInto
  • UpdateStatement : initialize it with db.UpdateTable
  • DeleteStatement : initialize it with db.DeleteFrom

Example :

type CountByAuthor struct {
	Author string `db:"author"`
	Count  int    `db:"count"`
}
…

count, err := db.SelectFrom("books").Count()
…

countByAuthor := make([]CountByAuthor, 0, 0)
err = db.SelectFrom("books").
	Columns("author", "count(*) as count").
	GroupBy("author").
	Having("count(*) > 3").
	Do(&countByAuthor)
…

newId, err := db.InsertInto("dummies").
	Columns("foo", "bar", "baz").
	Values(1, 2, 3).
	Do()
…

The SelectStatement type can also build a query using columns from a struct. It facilitates building queries that return values from multiple tables (or views). See the struct mapping explanations, in particular the `rel` part.

Example :

type Book struct {
	Id        int       `db:"id,key,auto"`
	Title     string    `db:"title"`
	Author    string    `db:"author"`
	Published time.Time `db:"published"`
	Version   int       `db:"version,oplock"`
}

type InventoryPart struct {
	Id       sql.NullInt64 `db:"id"`
	Counting sql.NullInt64 `db:"counting"`
}

type BooksWithInventories struct {
	Book          `db:",rel=books"`
	InventoryPart `db:",rel=inventories"`
}
…

booksWithInventories := make([]BooksWithInventories, 0, 0)
err = db.SelectFrom("books").
	ColumnsFromStruct(&booksWithInventories).
	LeftJoin("inventories", "inventories", godb.Q("inventories.book_id = books.id")).
	Do(&booksWithInventories)

Structs tools

The structs tools are based on the following types:

  • StructSelect : initialize it with db.Select
  • StructInsert : initialize it with db.Insert or db.BulkInsert
  • StructUpdate : initialize it with db.Update
  • StructDelete : initialize it with db.Delete

Examples :

// Struct and its mapping
type Book struct {
	Id        int       `db:"id,key,auto"`
	Title     string    `db:"title"`
	Author    string    `db:"author"`
	Published time.Time `db:"published"`
}

// Optional, defaults to the struct name (Book)
func (*Book) TableName() string {
	return "books"
}

…

bookTheHobbit := Book{
	Title:     "The Hobbit",
	Author:    authorTolkien,
	Published: time.Date(1937, 9, 21, 0, 0, 0, 0, time.UTC),
}

err = db.Insert(&bookTheHobbit).Do()
…

singleBook := Book{}
err = db.Select(&singleBook).
	Where("title = ?", bookTheHobbit.Title).
	Do()
…

multipleBooks := make([]Book, 0, 0)
err = db.Select(&multipleBooks).Do()
…

Raw queries

Raw queries are executed using the RawSQL type.

The query can be a simple hand-written string, or something more complex built using SQLBuffer and Conditions.

Example :

books := make([]Book, 0, 0)
err = db.RawSQL("select * from books where author = ?", authorAssimov).Do(&books)

Structs mapping

Struct contents are mapped to database columns with tags, like in the previous example with the Book struct. The tag is 'db' and its content is:

  • The column name (mandatory, there is no default rule).
  • The 'key' keyword if the field/column is a part of the table key.
  • The 'auto' keyword if the field/column value is set by the database.

For an autoincrement identifier simply use both 'key' and 'auto'.

Example :

type SimpleStruct struct {
	ID    int    `db:"id,key,auto"`
	Text  string `db:"my_text"`
	// ignored
	Other string
}

More than one field can have the 'key' keyword, but with most database drivers none of them can have the 'auto' keyword, because executing an insert query only returns one value: the last inserted id (https://golang.org/pkg/database/sql/driver/#RowsAffected.LastInsertId).

With PostgreSQL you can have multiple fields with 'key' and 'auto' options.
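
For illustration only, a hypothetical mapping with two database-generated key columns (valid with PostgreSQL only; the struct and column names are made up):

// Hypothetical PostgreSQL-only mapping: both key columns are filled by the database.
type Measure struct {
	ID       int    `db:"id,key,auto"`
	Revision int    `db:"revision,key,auto"`
	Value    string `db:"value"`
}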

Structs can be nested. A nested struct is mapped only if it has the 'db' tag. The tag value is a column prefix applied to all mapped fields of the nested struct. The prefix is not mandatory, a blank string is allowed (no prefix).

A nested struct can also have an optional `rel` attribute of the form `rel=relationname`. It's useful to build a select query using multiple relations (tables, views, ...). See the example using the BooksWithInventories type.

Example

type KeyStruct struct {
	ID    int    `db:"id,key,auto"`
}

type ComplexStruct struct {
	KeyStruct          `db:""`
	Foobar   SubStruct `db:"nested_"`
	Ignored  SubStruct
}

type SubStruct struct {
	Foo string `db:"foo"`
	Bar string `db:"bar"`
}

Databases columns are :

  • id (no prefix)
  • nested_foo
  • nested_bar

The mapping is managed by the 'dbreflect' subpackage. Normally its direct use is not necessary, except in one case: some structs are scannable and have to be considered like fields, and mapped to database columns. Common cases are time.Time, sql.NullString, ... You can register a custom scannable struct with `RegisterScannableStruct` and a struct instance; for example time.Time is registered like this:

dbreflect.RegisterScannableStruct(time.Time{})

The structs statements use the struct name as the table name. But you can override this simply by implementing a TableName method:

func (*Book) TableName() string {
	return "books"
}

Conditions

Statements and structs tools manage 'where' and 'group by' SQL clauses. These conditional clauses are built either with raw SQL code, or with the Condition struct like this:

q := godb.Or(godb.Q("foo is null"), godb.Q("foo > ?", 123))
count, err := db.SelectFrom("bar").WhereQ(q).Count()

WhereQ methods take a Condition instance built by godb.Q. Where methods take raw SQL, but are just syntactic sugar. These calls are equivalent:

…WhereQ(godb.Q("id = ?", 123))…
…Where("id = ?", 123)…

Multiple calls to Where or WhereQ are allowed, these calls are equivalent:

…Where("id = ?", 123).Where("foo is null")…
…WhereQ(godb.And(godb.Q("id = ?", 123), godb.Q("foo is null")))…

Slices are managed in a particular way : a single placeholder is replaced with multiple ones. This allows code like :

count, err := db.SelectFrom("bar").Where("foo in (?)", fooSlice).Count()

SQLBuffer

The SQLBuffer exists to ease the building of complex raw queries. It's also used internally by godb. Its use and purpose are simple: concatenate SQL parts (accompanied by their arguments) in an efficient way.

Example :

// see NewSQLBuffer for details about sizes
subQuery := godb.NewSQLBuffer(32, 0).
	Write("select author ").
	Write("from books ").
	WriteCondition(godb.Q("where title = ?", bookFoundation.Title))

queryBuffer := godb.NewSQLBuffer(64, 0).
	Write("select * ").
	Write("from books ").
	Write("where author in (").
	Append(subQuery).
	Write(")")

if queryBuffer.Err() != nil {
	…
}

err = db.RawSQL(queryBuffer.SQL(), queryBuffer.Arguments()...).Do(&books)
if err != nil {
	…
}

Optimistic Locking

For all databases, struct updates and deletes manage optimistic locking when a dedicated integer column is present. Simply tag it with `oplock`:

type KeyStruct struct {
	...
	Version    int    `db:"version,oplock"`
	...
}

When an update or delete operation fails, Do() returns the `ErrOpLock` error.
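
A minimal sketch of handling that failure (the retry policy is up to you):

err := db.Update(&book).Do()
if err == godb.ErrOpLock {
	// The row was modified or deleted since it was read:
	// reload it and decide whether to retry or give up.
}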

With PostgreSQL and SQL Server, godb manages optimistic locking with automatic fields. Just add a dedicated field in the struct and tag it with `auto,oplock`.

With PostgreSQL you can use the `xmin` system column like this :

type KeyStruct struct {
	...
	Version    int    `db:"xmin,auto,oplock"`
	...
}

For more information about `xmin` see https://www.postgresql.org/docs/10/static/ddl-system-columns.html

With SQL Server you can use a `rowversion` field with the `mssql.Rowversion` type like this :

type KeyStruct struct {
	...
	Version   mssql.Rowversion `db:"version,auto,oplock"`
	...
}

For more information about the `rowversion` data type see https://docs.microsoft.com/en-us/sql/t-sql/data-types/rowversion-transact-sql

Consumed Time

godb keeps track of the time consumed while executing queries. You can reset it and get the time consumed since Open or the previous reset:

fmt.Printf("Consumed time : %v\n", db.ConsumedTime())
db.ResetConsumedTime()

Logger

You can log all executed queries and details of consumed time. Simply add a logger:

db.SetLogger(log.New(os.Stderr, "", 0))

RETURNING and OUTPUT Clauses

godb takes advantage of PostgreSQL RETURNING clause, and SQL Server OUTPUT clause.

With statements tools you have to add a RETURNING clause with the Returning or Suffix method and call the DoWithReturning method instead of Do(). It's optional.

With StructInsert it's transparent: the RETURNING or OUTPUT clause is added for all 'auto' columns and managed for you. One of the big advantages is with BulkInsert: with other databases the rows are inserted but the new keys are unknown, while with PostgreSQL and SQL Server the slice is updated for all inserted rows.

It also enables optimistic locking with *automatic* columns.
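
As a hedged sketch with the statements tools on PostgreSQL, using Returning and DoWithReturning (the 'version' column is only an assumption for this example):

book := Book{}
rowCount, err := db.UpdateTable("books").
	Set("author", "Tolkien").
	Where("id = ?", 123).
	Returning("version").
	DoWithReturning(&book)
// The value of the 'version' column is scanned back into the struct.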

Prepared statements cache

godb has two prepared statements caches, one to use during transactions, and one to use outside of a transaction. Both use an LRU algorithm.

The transaction cache is enabled by default, but not the other one. A transaction (sql.Tx) isn't shared between goroutines, so using prepared statements with it has predictable behavior. But without a transaction a prepared statement could have to be re-prepared on a different connection, leading to unpredictable performance in high concurrency scenarios.

Enabling the non-transaction cache can improve performance for single-goroutine batches. With multiple goroutines accessing the same database: it depends! A benchmark would be wise.
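
A small sketch showing how the caches can be tuned (the sizes here are arbitrary):

// The transaction cache is enabled by default; enable the non-transaction one too.
db.StmtCacheDB().Enable()

// Optionally adjust the cache size (bounded by MaximumStmtCacheSize).
if err := db.StmtCacheDB().SetSize(128); err != nil {
	...
}

// Or disable the transaction cache if you prefer.
db.StmtCacheTx().Disable()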

Iterator

Using statements tools and structs tools you can execute select queries and get an iterator instead of filling a slice of struct instances. This can be useful if the query result is big and you don't want to allocate too much memory. On the other hand you will write almost as much code as with the `sql` package, but with automatic struct mapping and a query builder.

Iterators are also available with raw queries. In this case you can execute any kind of SQL code, not just select queries.

To get an iterator simply use the `DoWithIterator` method instead of `Do`. The iterator usage is similar to the standard `sql.Rows` type. Don't forget to check for errors with the `Err` method, and don't forget to call `Close` when the iterator is no longer useful, especially if you don't scan the whole resultset.

iter, err := db.SelectFrom("books").
	Columns("id", "title", "author", "published").
	OrderBy("author").OrderBy("title").
	DoWithIterator()
if err != nil {
	...
}
defer iter.Close()

for iter.Next() {
	book := Book{}
	if err := iter.Scan(&book); err != nil {
		...
	}
	// do something with the book
	...
}

if iter.Err() != nil {
	...
}

Concurrency

To avoid performance cost godb.DB does not implement synchronization. So a given instance of godb.DB should not be used by multiple goroutines. But a godb.DB instance can be created and used as a blueprint and cloned for each goroutine. See Clone and Clear methods.

A typical use case is a web server. When the application starts a godb.DB is created; it is cloned in each HTTP handler with Clone, and resources are freed by calling Clear (use a defer statement).
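
A minimal sketch of that pattern with net/http (the handler shape and names are illustrative, not part of godb):

func booksHandler(baseDB *godb.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		db := baseDB.Clone() // per-request copy, no state shared with other goroutines
		defer db.Clear()     // rollback a pending transaction and free the statements caches

		books := make([]Book, 0)
		if err := db.Select(&books).Do(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "%d books found\n", len(books))
	}
}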

Index

Constants

const Placeholder string = "?"

Placeholder is the placeholder string, use it to build queries. Adapters could change it before queries are executed.

Variables

var DefaultStmtCacheSize = 64

DefaultStmtCacheSize is the default size of prepared statements LRU cache.

var ErrOpLock = errors.New("optimistic locking failure")

ErrOpLock is the error returned when an optimistic locking failure occurs.

var MaximumStmtCacheSize = 4096

MaximumStmtCacheSize is the maximum allowed size for the prepared statements cache. The value is arbitrary, but big enough for most uses.

Functions

This section is empty.

Types

type Condition

type Condition struct {
	// contains filtered or unexported fields
}

Condition is a struct allowing complex condition building, composing SQL predicates, and managing associated arguments.

func And

func And(conditions ...*Condition) *Condition

And combines two or more conditions, inserting 'AND' between the given conditions.

func Not

func Not(condition *Condition) *Condition

Not negates a given condition surrounding it with 'NOT (' and ')'.

func Or

func Or(conditions ...*Condition) *Condition

Or combines two or more conditions, inserting 'OR' between the given conditions, and surrounding all with parentheses.

func Q

func Q(sql string, args ...interface{}) *Condition

Q builds a simple condition, managing slices in a particular way: it replaces a single placeholder with multiple ones according to the number of arguments.
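
For example, a short sketch combining Q, And and a slice argument (the table and column names are illustrative):

ids := []int{1, 2, 3}
// The single placeholder in "id in (?)" is expanded to one placeholder per slice element.
cond := godb.And(
	godb.Q("author = ?", "Tolkien"),
	godb.Q("id in (?)", ids),
)
count, err := db.SelectFrom("books").WhereQ(cond).Count()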

func (*Condition) Err added in v1.0.7

func (c *Condition) Err() error

Err returns the error of the given condition.

type DB

type DB struct {
	// contains filtered or unexported fields
}

DB stores a connection to the database, the current transaction, logger, ... Everything starts with a DB. DB is not thread safe (see Clone).

func Open

func Open(adapter adapters.Adapter, dataSourceName string) (*DB, error)

Open creates a new DB struct and initializes a sql.DB connection.

func Wrap added in v1.0.4

func Wrap(adapter adapters.Adapter, dbInst *sql.DB) *DB

Wrap creates a godb.DB using a provided and initialized sql.DB. Helpful for using a custom configured sql.DB instance with godb. Can be used before starting a goroutine.
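
A hedged sketch of Wrap with a sql.DB you configured yourself (the SQLite driver and the settings are only an example):

sqlDB, err := sql.Open("sqlite3", "./library.db")
if err != nil {
	...
}
sqlDB.SetMaxOpenConns(4) // custom configuration done directly on the sql.DB

db := godb.Wrap(sqlite.Adapter, sqlDB)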

func (*DB) Adapter

func (db *DB) Adapter() adapters.Adapter

Adapter returns the current adapter.

func (*DB) Begin

func (db *DB) Begin() error

Begin starts a new transaction, fails if there is already one.

func (*DB) BulkInsert

func (db *DB) BulkInsert(record interface{}) *StructInsert

BulkInsert initializes an INSERT sql statement for a slice.

Warning : not all databases are able to update the auto columns in the case of insert with multiple rows. Only adapters implementing the InsertReturningSuffix interface will have auto columns updated.

func (*DB) Clear added in v1.0.5

func (db *DB) Clear() error

Clear closes the current transaction (rollback) and frees the statements caches. It does not close the underlying database connection. Use Clear when a clone of godb is no longer useful, or when you no longer use godb but want to keep the underlying database connection open.

func (*DB) Clone

func (db *DB) Clone() *DB

Clone creates a copy of an existing DB, without the current transaction. The clone has consumedTime set to zero, and new prepared statements caches with the same characteristics. Use it to create a new DB object before starting a goroutine. Use Clear when a clone is no longer useful, to free resources.

func (*DB) Close

func (db *DB) Close() error

Close closes an existing DB created by Open. Don't close a cloned DB still used by other goroutines, as the sql.DB is shared! Don't use a DB after a call to Close.

func (*DB) Commit

func (db *DB) Commit() error

Commit commits an existing transaction, fails if none exists.
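
For example, a minimal transaction sketch:

if err := db.Begin(); err != nil {
	...
}
if _, err := db.UpdateTable("books").Set("author", "Tolkien").Do(); err != nil {
	db.Rollback()
	...
}
if err := db.Commit(); err != nil {
	...
}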

func (*DB) ConsumedTime

func (db *DB) ConsumedTime() time.Duration

ConsumedTime returns the time consumed by SQL query executions. The duration is reset when the DB is cloned.

func (*DB) CurrentDB

func (db *DB) CurrentDB() *sql.DB

CurrentDB returns the current *sql.DB. Use it wisely.

func (*DB) CurrentTx

func (db *DB) CurrentTx() *sql.Tx

CurrentTx returns the current Tx (or nil). Don't commit or rollback it directly !

func (*DB) Delete

func (db *DB) Delete(record interface{}) *StructDelete

Delete initializes a DELETE sql statement for the given object.

func (*DB) DeleteFrom

func (db *DB) DeleteFrom(tableName string) *DeleteStatement

DeleteFrom initializes a DELETE statement builder.

func (*DB) Insert

func (db *DB) Insert(record interface{}) *StructInsert

Insert initializes an INSERT sql statement for the given object.

func (*DB) InsertInto

func (db *DB) InsertInto(tableName string) *InsertStatement

InsertInto initializes an INSERT statement builder.

func (*DB) RawSQL added in v1.0.7

func (db *DB) RawSQL(sql string, args ...interface{}) *RawSQL

RawSQL creates a RawSQL structure, allowing the execution of a custom SQL query and the mapping of its result.

func (*DB) ResetConsumedTime

func (db *DB) ResetConsumedTime()

ResetConsumedTime resets the time consumed by SQL query executions.

func (*DB) Rollback

func (db *DB) Rollback() error

Rollback rolls back an existing transaction, fails if none exists.

func (*DB) Select

func (db *DB) Select(record interface{}) *StructSelect

Select initializes a SELECT statement with the given pointer as target. The pointer could point to a single instance or a slice.

func (*DB) SelectFrom

func (db *DB) SelectFrom(tableNames ...string) *SelectStatement

SelectFrom initializes a SELECT statement builder.

func (*DB) SetDefaultTableNamer added in v1.0.8

func (db *DB) SetDefaultTableNamer(tnamer tablenamer.NamerFn)

SetDefaultTableNamer sets the default table naming function.

func (*DB) SetLogger

func (db *DB) SetLogger(logger Logger)

SetLogger sets the logger for the given DB. By default there is no logger.

func (*DB) StmtCacheDB

func (db *DB) StmtCacheDB() *StmtCache

StmtCacheDB returns the prepared statement cache for queries outside a transaction (run with sql.DB).

func (*DB) StmtCacheTx

func (db *DB) StmtCacheTx() *StmtCache

StmtCacheTx returns the prepared statement cache for queries inside a transaction (run with sql.Tx).

func (*DB) Update

func (db *DB) Update(record interface{}) *StructUpdate

Update initializes an UPDATE sql statement for the given object.

func (*DB) UpdateTable

func (db *DB) UpdateTable(tableName string) *UpdateStatement

UpdateTable creates an UpdateStatement and specify table to update. It's the entry point to build an UPDATE query.

func (*DB) UseErrorParser added in v1.0.8

func (db *DB) UseErrorParser()

UseErrorParser allows adapters to parse and wrap errors returned by drivers.

type DeleteStatement

type DeleteStatement struct {
	// contains filtered or unexported fields
}

DeleteStatement is a DELETE sql statement builder. Initialize it with the DeleteFrom method.

Example :

count, err := db.DeleteFrom("bar").Where("foo = 1").Do()

func (*DeleteStatement) Do

func (ds *DeleteStatement) Do() (int64, error)

Do executes the built query, and returns the affected rows count.

func (*DeleteStatement) DoWithReturning

func (ds *DeleteStatement) DoWithReturning(record interface{}) (int64, error)

DoWithReturning executes the statement and fills the fields according to the columns in RETURNING clause.

func (*DeleteStatement) Returning added in v1.0.4

func (ds *DeleteStatement) Returning(columns ...string) *DeleteStatement

Returning adds a RETURNING or OUTPUT clause to the statement. Use it with PostgreSQL and SQL Server.

func (*DeleteStatement) Suffix

func (ds *DeleteStatement) Suffix(suffix string) *DeleteStatement

Suffix adds an expression to suffix the statement. Use it to add a RETURNING clause with PostgreSQL (or whatever you need).

func (*DeleteStatement) ToSQL

func (ds *DeleteStatement) ToSQL() (string, []interface{}, error)

ToSQL returns a string with the SQL statement (containing placeholders), the arguments slices, and an error.

func (*DeleteStatement) Where

func (ds *DeleteStatement) Where(sql string, args ...interface{}) *DeleteStatement

Where adds a condition using string and arguments.

func (*DeleteStatement) WhereQ

func (ds *DeleteStatement) WhereQ(condition *Condition) *DeleteStatement

WhereQ adds a simple or complex predicate generated with Q and conjunctions.

type InsertStatement

type InsertStatement struct {
	// contains filtered or unexported fields
}

InsertStatement is an INSERT statement builder. Initialize it with the InsertInto method.

Examples :

id, err := db.InsertInto("bar").
	Columns("foo", "baz").
	Values(2, "something").
	Do()

func (*InsertStatement) Columns

func (is *InsertStatement) Columns(columns ...string) *InsertStatement

Columns adds columns to insert.

func (*InsertStatement) Do

func (is *InsertStatement) Do() (int64, error)

Do executes the built INSERT statement and returns the created 'id' if the adapter does not implement InsertReturningSuffixer.

func (*InsertStatement) DoWithReturning

func (is *InsertStatement) DoWithReturning(record interface{}) (int64, error)

DoWithReturning executes the statement and fills the fields according to the columns in RETURNING clause.

func (*InsertStatement) Returning added in v1.0.4

func (is *InsertStatement) Returning(columns ...string) *InsertStatement

Returning adds a RETURNING or OUTPUT clause to the statement. Use it with PostgreSQL and SQL Server.

func (*InsertStatement) Suffix

func (is *InsertStatement) Suffix(suffix string) *InsertStatement

Suffix adds an expression to suffix the statement.

func (*InsertStatement) ToSQL

func (is *InsertStatement) ToSQL() (string, []interface{}, error)

ToSQL returns a string with the SQL statement (containing placeholders), the arguments slices, and an error.

func (*InsertStatement) Values

func (is *InsertStatement) Values(values ...interface{}) *InsertStatement

Values adds values to insert.

type Iterator added in v1.0.5

type Iterator interface {
	Next() bool
	Scan(interface{}) error
	Scanx(...interface{}) error
	Close() error
	Err() error
}

Iterator is an interface to iterate over the result of a SQL query and scan each row one at a time instead of getting them all into one slice. The principle is similar to the standard sql.Rows type.

type Logger added in v1.0.8

type Logger interface {
	Println(v ...interface{})
}

Logger is the interface for a custom logger. It should implement a `Println(v ...interface{})` method.

type RawSQL added in v1.0.7

type RawSQL struct {
	// contains filtered or unexported fields
}

RawSQL allows the execution of a custom SQL query. Initialize it with the RawSQL method.

WARNING: the arguments are used 'as is' by the Go sql package; placeholders are not duplicated if you use a slice.

Note: the API could have been built without an intermediate struct, but this produces a more homogeneous API and allows later evolutions without breaking it.

func (*RawSQL) Do added in v1.0.7

func (raw *RawSQL) Do(record interface{}) error

Do executes the raw query. The record argument has to be a pointer to a struct or a slice. If the argument is not a slice, a single row is expected, and Do returns sql.ErrNoRows if none was found.

func (*RawSQL) DoWithIterator added in v1.0.7

func (raw *RawSQL) DoWithIterator() (Iterator, error)

DoWithIterator executes the select query and returns an Iterator allowing the caller to fetch rows one at a time. Warning: it does not use an existing transaction (to avoid some pitfalls with drivers), nor prepared statements.

type SQLBuffer added in v1.0.7

type SQLBuffer struct {
	// contains filtered or unexported fields
}

SQLBuffer is a buffer for efficiently creating SQL queries with their arguments, using bytes.Buffer.

Create a buffer, add SQL parts with their arguments with the Write method and its friends, and get the result with SQL() and Arguments(). Check the presence of an error with Err().

Unlike godb.Q, SQLBuffer does not check if the argument count and placeholder count match, because strings similar to placeholders could be valid in other circumstances.

godb uses it internally, but you can use it yourself to create raw queries.

func NewSQLBuffer added in v1.0.7

func NewSQLBuffer(sqlLength int, argsLength int) *SQLBuffer

NewSQLBuffer creates a new SQLBuffer, preallocating sqlLength bytes for the SQL parts, and argsLength for the arguments list. The lengths can be zero, but it's more efficient to give an approximate size.

func (*SQLBuffer) Append added in v1.0.7

func (b *SQLBuffer) Append(other *SQLBuffer) *SQLBuffer

Append adds the SQL string and arguments from another buffer to the buffer. It does not add a separator (like a space) between the SQL parts; if needed, add it yourself.

func (*SQLBuffer) Arguments added in v1.0.7

func (b *SQLBuffer) Arguments() []interface{}

Arguments returns the arguments given while building the SQL query.

func (*SQLBuffer) Err added in v1.0.7

func (b *SQLBuffer) Err() error

Err returns the error that may have occurred during the build.

func (*SQLBuffer) SQL added in v1.0.7

func (b *SQLBuffer) SQL() string

SQL returns the string for the SQL part. It creates the string from the buffer, so avoid calling it multiple times.

func (*SQLBuffer) SQLLen added in v1.0.7

func (b *SQLBuffer) SQLLen() int

SQLLen returns the length of the SQL part (it's a byte count, not a character count, beware of Unicode!). Use it instead of len(myBuffer.SQL()); it's faster and does not allocate memory.

func (*SQLBuffer) Write added in v1.0.7

func (b *SQLBuffer) Write(sql string, args ...interface{}) *SQLBuffer

Write adds a sql string and its arguments into the buffer.

func (*SQLBuffer) WriteBytes added in v1.0.7

func (b *SQLBuffer) WriteBytes(sql []byte, args ...interface{}) *SQLBuffer

WriteBytes adds the given bytes to the internal SQL buffer, and appends the given arguments to the existing ones. It's useful when you have built something with a bytes.Buffer.

func (*SQLBuffer) WriteCondition added in v1.0.7

func (b *SQLBuffer) WriteCondition(condition *Condition) *SQLBuffer

WriteCondition writes single conditional expressions.

func (*SQLBuffer) WriteIfNotEmpty added in v1.0.7

func (b *SQLBuffer) WriteIfNotEmpty(sql string, args ...interface{}) *SQLBuffer

WriteIfNotEmpty writes the given string only if the sql buffer isn't empty.

func (*SQLBuffer) WriteStrings added in v1.0.7

func (b *SQLBuffer) WriteStrings(separator string, sqlParts ...string) *SQLBuffer

WriteStrings writes strings separated by the given separator.
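
A small sketch combining these helpers (the column and table names are illustrative):

buffer := godb.NewSQLBuffer(64, 0).
	Write("select ").
	WriteStrings(", ", "id", "title", "author").
	WriteIfNotEmpty(" from books").
	WriteCondition(godb.Q(" where author = ?", "Tolkien"))

if buffer.Err() != nil {
	...
}

err = db.RawSQL(buffer.SQL(), buffer.Arguments()...).Do(&books)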

type SelectStatement

type SelectStatement struct {
	// contains filtered or unexported fields
}

SelectStatement is a SELECT sql statement builder. Initialize it with the SelectFrom method.

Examples :

err := db.SelectFrom("bar").
	Columns("foo", "baz").
	Where("foo > 2").
	Do(&target)

func (*SelectStatement) ColumnAlias added in v1.0.9

func (ss *SelectStatement) ColumnAlias(column, alias string) *SelectStatement

ColumnAlias allows defining an alias for a column. Useful if the selectable columns are built with ColumnsFromStruct, and when using joins.

func (*SelectStatement) Columns

func (ss *SelectStatement) Columns(columns ...string) *SelectStatement

Columns adds columns to select. Multiple calls to Columns are allowed.

func (*SelectStatement) ColumnsFromStruct added in v1.0.6

func (ss *SelectStatement) ColumnsFromStruct(record interface{}) *SelectStatement

ColumnsFromStruct adds columns to select, extracting them from the given struct (or slice of structs). Always use a pointer as argument. You can't mix the ColumnsFromStruct and Columns methods.

func (*SelectStatement) Count

func (ss *SelectStatement) Count() (int64, error)

Count runs the request with COUNT(*) (removing other columns) and returns the count.

func (*SelectStatement) Distinct

func (ss *SelectStatement) Distinct() *SelectStatement

Distinct adds the DISTINCT keyword to the generated statement.

func (*SelectStatement) Do

func (ss *SelectStatement) Do(record interface{}) error

Do executes the select statement. The record argument has to be a pointer to a struct or a slice. If no columns are defined for the current select statement, all columns are added from the record parameter's struct. If the argument is not a slice, a single row is expected, and Do returns sql.ErrNoRows if none was found.

func (*SelectStatement) DoWithIterator added in v1.0.5

func (ss *SelectStatement) DoWithIterator() (Iterator, error)

DoWithIterator executes the select query and returns an Iterator allowing the caller to fetch rows one at a time. Warning: it does not use an existing transaction (to avoid some pitfalls with drivers), nor prepared statements.

func (*SelectStatement) From

func (ss *SelectStatement) From(tableNames ...string) *SelectStatement

From adds table to the select statement. It can be called multiple times.

func (*SelectStatement) GroupBy

func (ss *SelectStatement) GroupBy(groupBy string) *SelectStatement

GroupBy adds a GROUP BY clause. You can call GroupBy multiple times.

func (*SelectStatement) Having

func (ss *SelectStatement) Having(sql string, args ...interface{}) *SelectStatement

Having adds a HAVING clause with a condition built with an SQL string and its arguments (like Where).

func (*SelectStatement) HavingQ

func (ss *SelectStatement) HavingQ(condition *Condition) *SelectStatement

HavingQ adds a simple or complex predicate generated with Q and conjunctions (like WhereQ).

func (*SelectStatement) InnerJoin added in v1.0.6

func (ss *SelectStatement) InnerJoin(tableName string, as string, on *Condition) *SelectStatement

InnerJoin adds an INNER JOIN clause, which will be inserted between the FROM and WHERE clauses.

func (*SelectStatement) LeftJoin

func (ss *SelectStatement) LeftJoin(tableName string, as string, on *Condition) *SelectStatement

LeftJoin adds a LEFT JOIN clause, which will be inserted between FROM and WHERE clauses.

func (*SelectStatement) Limit

func (ss *SelectStatement) Limit(limit int) *SelectStatement

Limit specifies the value for the LIMIT clause.

func (*SelectStatement) Offset

func (ss *SelectStatement) Offset(offset int) *SelectStatement

Offset specifies the value for the OFFSET clause.

func (*SelectStatement) OrderBy

func (ss *SelectStatement) OrderBy(orderBy string) *SelectStatement

OrderBy adds an expression for the ORDER BY clause. You can call OrderBy multiple times.

func (*SelectStatement) Scanx added in v1.0.7

func (ss *SelectStatement) Scanx(dest ...interface{}) error

Scanx runs the request and scans the results into the dest parameters.

func (*SelectStatement) Suffix

func (ss *SelectStatement) Suffix(suffix string) *SelectStatement

Suffix adds an expression to suffix the query.

func (*SelectStatement) ToSQL

func (ss *SelectStatement) ToSQL() (string, []interface{}, error)

ToSQL returns a string with the SQL request (containing placeholders), the arguments slices, and an error.

func (*SelectStatement) Where

func (ss *SelectStatement) Where(sql string, args ...interface{}) *SelectStatement

Where adds a condition using string and arguments.

func (*SelectStatement) WhereQ

func (ss *SelectStatement) WhereQ(condition *Condition) *SelectStatement

WhereQ adds a simple or complex predicate generated with Q and conjunctions.

type StmtCache

type StmtCache struct {
	// contains filtered or unexported fields
}

StmtCache is a LRU cache for prepared statements.

func (*StmtCache) Clear

func (cache *StmtCache) Clear() error

Clear properly closes the cached statements, and clears the cache.

func (*StmtCache) Disable

func (cache *StmtCache) Disable()

Disable disables the cache. Disabling the cache does not clear it.

func (*StmtCache) Enable

func (cache *StmtCache) Enable()

Enable enables the cache.

func (*StmtCache) GetSize

func (cache *StmtCache) GetSize() int

GetSize returns the maximum cache size

func (*StmtCache) IsEnabled

func (cache *StmtCache) IsEnabled() bool

IsEnabled returns true if the cache is enabled, false otherwise.

func (*StmtCache) SetSize

func (cache *StmtCache) SetSize(size int) error

SetSize sets the maximum cache size, and removes exceeding entries if needed.

type StructDelete

type StructDelete struct {
	// contains filtered or unexported fields
}

StructDelete builds a DELETE statement for the given object.

Example (book is a struct instance):

count, err := db.Delete(&book).Do()

func (*StructDelete) Do

func (sd *StructDelete) Do() (int64, error)

Do executes the DELETE statement for the struct given to the Delete method, and returns the count of deleted rows and an error.

type StructInsert

type StructInsert struct {
	// contains filtered or unexported fields
}

StructInsert builds an INSERT statement for the given object.

Example (book is a struct instance, books a slice) :

err := db.Insert(&book).Do()

err = db.BulkInsert(&books).Do()

func (*StructInsert) Blacklist added in v1.0.8

func (si *StructInsert) Blacklist(columns ...string) *StructInsert

Blacklist saves columns not to be inserted from the struct. It adds columns to the list each time it is called. If a column defined in the whitelist is also given in the blacklist, that column will be blacklisted.
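
For example, a short sketch excluding a column from a struct-based insert (the column name is illustrative):

// The 'published' column will not be part of the generated INSERT statement.
err := db.Insert(&book).Blacklist("published").Do()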

func (*StructInsert) BlacklistReset added in v1.0.8

func (si *StructInsert) BlacklistReset() *StructInsert

BlacklistReset resets blacklist

func (*StructInsert) Do

func (si *StructInsert) Do() error

Do executes the insert statement.

The behavior differs according to the adapter. If it implements the InsertReturningSuffixer interface it will use it and fill all auto fields of the given struct. Otherwise it only fills the key with LastInsertId.

With BulkInsert the behavior changes according to the adapter, see the BulkInsert documentation for more information.

func (*StructInsert) Whitelist added in v1.0.8

func (si *StructInsert) Whitelist(columns ...string) *StructInsert

Whitelist saves columns to be inserted from the struct. It adds columns to the list each time it is called. The whitelist should not include auto key tagged columns.

func (*StructInsert) WhitelistReset added in v1.0.8

func (si *StructInsert) WhitelistReset() *StructInsert

WhitelistReset resets whiteList

type StructSelect

type StructSelect struct {
	// contains filtered or unexported fields
}

StructSelect builds a SELECT statement for the given object.

Example (book is a struct instance, books a slice) :

err := db.Select(&book).Where("id = ?", 123).Do()

err = db.Select(&books).Where("id > 1").Where("id < 10").Do()

func (*StructSelect) Count

func (ss *StructSelect) Count() (int64, error)

Count runs the request with COUNT(*) and returns the count.

func (*StructSelect) Do

func (ss *StructSelect) Do() error

Do executes the select statement, the record given to Select will contain the data.

func (*StructSelect) DoWithIterator added in v1.0.5

func (ss *StructSelect) DoWithIterator() (Iterator, error)

DoWithIterator executes the select query and returns an Iterator allowing the caller to fetch rows one at a time. Warning: it does not use an existing transaction (to avoid some pitfalls with drivers), nor prepared statements.

func (*StructSelect) Limit

func (ss *StructSelect) Limit(limit int) *StructSelect

Limit specifies the value for the LIMIT clause.

func (*StructSelect) Offset

func (ss *StructSelect) Offset(offset int) *StructSelect

Offset specifies the value for the OFFSET clause.

func (*StructSelect) OrderBy

func (ss *StructSelect) OrderBy(orderBy string) *StructSelect

OrderBy adds an expression for the ORDER BY clause.

func (*StructSelect) Where

func (ss *StructSelect) Where(sql string, args ...interface{}) *StructSelect

Where adds a condition using string and arguments.

func (*StructSelect) WhereQ

func (ss *StructSelect) WhereQ(condition *Condition) *StructSelect

WhereQ adds a simple or complex predicate generated with Q and conjunctions.

type StructUpdate

type StructUpdate struct {
	// contains filtered or unexported fields
}

StructUpdate builds an UPDATE statement for the given object.

Example (book is a struct instance):

count, err := db.Update(&book).Do()

func (*StructUpdate) Blacklist added in v1.0.8

func (su *StructUpdate) Blacklist(columns ...string) *StructUpdate

Blacklist saves columns not to be updated from the struct. It adds columns to the list each time it is called. If a column defined in the whitelist is also given in the blacklist, that column will be blacklisted.

func (*StructUpdate) BlacklistReset added in v1.0.8

func (su *StructUpdate) BlacklistReset() *StructUpdate

BlacklistReset resets blacklist

func (*StructUpdate) Do

func (su *StructUpdate) Do() error

Do executes the UPDATE statement for the struct given to the Update method.

func (*StructUpdate) Whitelist added in v1.0.8

func (su *StructUpdate) Whitelist(columns ...string) *StructUpdate

Whitelist saves columns to be updated from the struct. It adds columns to the list each time it is called.

The whitelist should not include auto key tagged columns.

func (*StructUpdate) WhitelistReset added in v1.0.8

func (su *StructUpdate) WhitelistReset() *StructUpdate

WhitelistReset resets whiteList

type UpdateStatement

type UpdateStatement struct {
	// contains filtered or unexported fields
}

UpdateStatement contains all the parts needed to build an UPDATE statement. Initialize it with the UpdateTable method.

Example :

count, err := db.UpdateTable("bar").
	Set("foo", 1).
	Where("foo = ?", 2).
	Do()

func (*UpdateStatement) Do

func (us *UpdateStatement) Do() (int64, error)

Do executes the built query, and returns RowsAffected().

func (*UpdateStatement) DoWithReturning

func (us *UpdateStatement) DoWithReturning(record interface{}) (int64, error)

DoWithReturning executes the statement and fills the fields according to the columns in RETURNING clause.

func (*UpdateStatement) Returning added in v1.0.4

func (us *UpdateStatement) Returning(columns ...string) *UpdateStatement

Returning adds a RETURNING or OUTPUT clause to the statement. Use it with PostgreSQL and SQL Server.

func (*UpdateStatement) Set

func (us *UpdateStatement) Set(column string, value interface{}) *UpdateStatement

Set adds a part of SET clause to the query.

func (*UpdateStatement) SetRaw

func (us *UpdateStatement) SetRaw(rawSQL string) *UpdateStatement

SetRaw adds a raw SET clause to the query.
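
A small sketch mixing Set and SetRaw (the 'version' column is illustrative):

count, err := db.UpdateTable("books").
	Set("author", "Tolkien").
	SetRaw("version = version + 1").
	Where("id = ?", 123).
	Do()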

func (*UpdateStatement) Suffix

func (us *UpdateStatement) Suffix(suffix string) *UpdateStatement

Suffix adds an expression to suffix the statement. Use it to add a RETURNING clause with PostgreSQL (or whatever you need).

func (*UpdateStatement) ToSQL

func (us *UpdateStatement) ToSQL() (string, []interface{}, error)

ToSQL returns a string with the SQL statement (containing placeholders), the arguments slices, and an error.

func (*UpdateStatement) Where

func (us *UpdateStatement) Where(sql string, args ...interface{}) *UpdateStatement

Where adds a condition using string and arguments.

func (*UpdateStatement) WhereQ

func (us *UpdateStatement) WhereQ(condition *Condition) *UpdateStatement

WhereQ adds a simple or complex predicate generated with Q and conjunctions.

Directories

Path Synopsis
Package adapters contains database specific code, mainly in sub-packages.
dbtests
