schema

package module
v1.3.6
Published: Apr 21, 2024 License: MIT Imports: 15 Imported by: 14

README

Schema - Database Migrations for Go

An embeddable library for applying changes to your Go application's database/sql schema.


Features

  • Cloud-friendly design tolerates embedded use in clusters
  • Supports migrations in embed.FS (requires go:embed in Go 1.16+)
  • Depends only on Go standard library (Note that all go.mod dependencies are used only in tests)
  • Unidirectional migrations (no "down" migration complexity)

Usage Instructions

Create a schema.Migrator in your bootstrap/config/database connection code, then call its Apply() method with your database connection and a slice of *schema.Migration structs.

The .Apply() function figures out which of the supplied Migrations have not yet been executed in the database (based on the ID), and executes the Script for each, in alphabetical order by ID.

The []*schema.Migration can be created manually, but the package has some utility functions to make it easier to parse .sql files into structs with the filename as the ID and the file contents as the Script.
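
If you prefer to load .sql files from a directory on disk at runtime rather than embedding them, a minimal sketch using the MigrationsFromDirectoryPath helper (documented below) looks like this. The driver name, connection string, and ./migrations path are placeholders:

db, err := sql.Open("postgres", connString) // placeholder driver/connection details
if err != nil {
   log.Fatal(err)
}

// Each ./migrations/*.sql file becomes a Migration: filename => ID, contents => Script.
migrations, err := schema.MigrationsFromDirectoryPath("./migrations")
if err != nil {
   log.Fatal(err)
}

migrator := schema.NewMigrator() // Postgres is the default dialect
if err := migrator.Apply(db, migrations); err != nil {
   log.Fatal(err)
}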

Using go:embed (requires Go 1.16+)

Go 1.16 added features to embed a directory of files into the binary as an embedded filesystem (embed.FS).

Assuming you have a directory of SQL files called my-migrations/ next to your main.go file, you'll run something like this:

//go:embed my-migrations
var MyMigrations embed.FS

func main() {
   db, err := sql.Open(...) // Or however you get a *sql.DB
   if err != nil {
      log.Fatal(err)
   }

   migrations, err := schema.FSMigrations(MyMigrations, "my-migrations/*.sql")
   if err != nil {
      log.Fatal(err)
   }

   migrator := schema.NewMigrator(schema.WithDialect(schema.MySQL))
   if err = migrator.Apply(db, migrations); err != nil {
      log.Fatal(err)
   }
}

The WithDialect() option accepts: schema.MySQL, schema.Postgres, schema.SQLite or schema.MSSQL. These dialects all use only database/sql calls, so you may have success with other databases which are SQL-compatible with the above dialects.

You can also provide your own custom Dialect. See dialect.go for the definition of the Dialect interface, and the optional Locker interface. Note that Locker is critical for clustered operation to ensure that only one of many processes is attempting to run migrations simultaneously.
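
As a rough sketch of what a custom Dialect can look like (this is not the package's built-in implementation: the ansiDialect name, the tracking-table column names, and the Postgres-style $N placeholders are all illustrative assumptions; imports are omitted as in the examples above):

// ansiDialect is a hypothetical dialect for an ANSI-ish database.
type ansiDialect struct{}

func (d ansiDialect) QuotedTableName(schemaName, tableName string) string {
   if schemaName == "" {
      return `"` + tableName + `"`
   }
   return `"` + schemaName + `"."` + tableName + `"`
}

func (d ansiDialect) CreateMigrationsTable(ctx context.Context, tx schema.Queryer, tableName string) error {
   _, err := tx.ExecContext(ctx, `CREATE TABLE IF NOT EXISTS `+tableName+` (
      id VARCHAR(255) NOT NULL PRIMARY KEY,
      checksum VARCHAR(32) NOT NULL DEFAULT '',
      execution_time_in_millis BIGINT NOT NULL DEFAULT 0,
      applied_at TIMESTAMP NOT NULL
   )`)
   return err
}

func (d ansiDialect) GetAppliedMigrations(ctx context.Context, tx schema.Queryer, tableName string) ([]*schema.AppliedMigration, error) {
   rows, err := tx.QueryContext(ctx, `SELECT id, checksum, execution_time_in_millis, applied_at FROM `+tableName)
   if err != nil {
      return nil, err
   }
   defer rows.Close()

   applied := []*schema.AppliedMigration{}
   for rows.Next() {
      m := schema.AppliedMigration{}
      if err := rows.Scan(&m.ID, &m.Checksum, &m.ExecutionTimeInMillis, &m.AppliedAt); err != nil {
         return applied, err
      }
      applied = append(applied, &m)
   }
   return applied, rows.Err()
}

func (d ansiDialect) InsertAppliedMigration(ctx context.Context, tx schema.Queryer, tableName string, m *schema.AppliedMigration) error {
   _, err := tx.ExecContext(ctx,
      `INSERT INTO `+tableName+` (id, checksum, execution_time_in_millis, applied_at) VALUES ($1, $2, $3, $4)`,
      m.ID, m.Checksum, m.ExecutionTimeInMillis, m.AppliedAt)
   return err
}

It would then be installed with schema.NewMigrator(schema.WithDialect(ansiDialect{})).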

Using Inline Migration Structs

If you're using an earlier version of Go, the Migration structs will need to be created manually:

db, err := sql.Open(...)
if err != nil {
   log.Fatal(err)
}

migrator := schema.NewMigrator() // Postgres is the default Dialect
err = migrator.Apply(db, []*schema.Migration{
   &schema.Migration{
      ID: "2019-09-24 Create Albums",
      Script: `
      CREATE TABLE albums (
         id SERIAL PRIMARY KEY,
         title CHARACTER VARYING (255) NOT NULL
      )
      `,
   },
})
if err != nil {
   log.Fatal(err)
}

Constructor Options

The NewMigrator() function accepts option arguments to customize the dialect and the name of the migration tracking table. By default, the tracking table will be named schema_migrations. To change it to my_migrations instead:

migrator := schema.NewMigrator(schema.WithTableName("my_migrations"))

It is theoretically possible to create multiple Migrators and to use multiple migration tracking tables within the same application and database.

It is also OK for multiple processes to run Apply on identically configured migrators simultaneously. The Migrator only creates the tracking table if it does not exist, and then locks it against modifications while building and running the migration plan. This means that the first-arriving process will win and will perform its migrations on the database.

Supported Databases

This package was extracted from a PostgreSQL project. Other databases have solid automated test coverage, but should be considered somewhat experimental in production use cases. Contributions are welcome for additional databases or feature enhancements / bug fixes.

  • PostgreSQL (database/sql driver only, see adlio/pgxschema if you use jackc/pgx)
  • SQLite (thanks kalafut!)
  • MySQL / MariaDB
  • SQL Server
  • CockroachDB, Redshift, Snowflake, etc (open a Pull Request)

Package Opinions

There are many other schema migration tools. This one exists because of a particular set of opinions:

  1. Database credentials are runtime configuration details, but database schema is a build-time application dependency, which means it should be "compiled in" to the build, and should not rely on external tools.
  2. Using an external command-line tool for schema migrations needlessly complicates testing and deployment.
  3. SQL is the best language to use to specify changes to SQL schemas.
  4. "Down" migrations add needless complication, aren't often used, and are tedious to properly test when they are used. In the unlikely event you need to migrate backwards, it's possible to write the "rollback" migration as a separate "up" migration.
  5. Deep dependency chains should be avoided, especially in a compiled binary. We don't want to import an ORM into our binaries just to get the features of this package. The schema package imports only standard library packages (NOTE: we do import ory/dockertest in our tests).
  6. Sequentially-numbered integer migration IDs will create too many unnecessary schema collisions on a distributed, asynchronously-communicating team (this is not yet strictly enforced, but may be later).

Rules of Applying Migrations

  1. Never, ever change the ID (filename) or Script (file contents) of a Migration which has already been executed on your database. If you've made a mistake, you'll need to correct it in a subsequent migration.

  2. Use a consistent, but descriptive format for migration IDs/filenames. Consider prefixing them with today's timestamp. Examples:

     ID: "2019-01-01T13:45:00 Creates Users"
     ID: "2001-12-18 001 Changes the Default Value of User Affiliate ID"
    

    Do not use simple sequential numbers like ID: "1".

Migration Ordering

Migrations are not executed in the order they appear in the slice. They are re-sorted alphabetically by their IDs before being executed.

Contributions

... are welcome. Please include tests with your contribution. We've integrated dockertest to automate the process of creating clean test databases.

Before contributing, please read the package opinions section. If your contribution is in disagreement with those opinions, then there's a good chance a different schema migration tool is more appropriate.

Roadmap

  • Enhancements and documentation to facilitate asset embedding via go:embed
  • Add a Validate() method to allow checking migration names for consistency and to detect problematic changes in the migrations list.
  • SQL Server support
  • SQL Server support for the Locker interface to protect against simultaneous migrations from clusters of servers.

Version History

1.3.4 - Apr 9, 2023
  • Update downstream dependencies to address vulnerabilities in test dependencies.
1.3.3 - Jun 19, 2022
  • Update downstream dependencies of ory/dockertest due to security issues.
1.3.0 - Mar 25, 2022
  • Basic SQL Server support (no locking, not recommended for use in clusters)
  • Improved support for running tests on ARM64 machines (M1 Macs)
1.2.3 - Dec 10, 2021
  • BUGFIX: Restore the ability to chain NewMigrator().Apply
1.2.2 - Dec 9, 2021
  • Add support for migrations in an embed.FS (FSMigrations(filesystem fs.FS, glob string))
  • Add MySQL/MariaDB support (experimental)
  • Add SQLite support (experimental)
  • Update go.mod to go 1.17.
1.1.14 - Nov 18, 2021
  • Security patches in upstream dependencies.
1.1.13 - May 22, 2020
  • Bugfix for error with advisory lock being held open. Improved test coverage for simultaneous execution.
1.1.11 - May 19, 2020
  • Use a database-held lock for all migrations, not just the initial table creation.
1.1.9 - May 17, 2020
  • Add the ability to attach a logger.
1.1.8 - Nov 24, 2019
  • Switch to filepath package for improved cross-platform filesystem support.
1.1.7 - Oct 1, 2019
  • Began using pg_advisory_lock() to prevent race conditions when multiple processes/machines try to simultaneously create the migrations table.
1.1.1 - Sep 28, 2019
  • First published version.

Documentation

Overview

Package schema provides tools to manage database schema changes ("migrations") as embedded functionality inside another application which uses database/sql.

Basic usage instructions involve creating a schema.Migrator via the schema.NewMigrator() function, and then passing your *sql.DB to its .Apply() method.

See the package's README.md file for more usage instructions.

Index

Constants

const DefaultTableName = "schema_migrations"

DefaultTableName defines the name of the database table which will hold the status of applied migrations

Variables

var ErrNilDB = errors.New("DB pointer is nil")

ErrNilDB is thrown when the database pointer is nil

var MSSQL = mssqlDialect{}

MSSQL is the dialect for MS SQL-compatible databases

var MySQL = mysqlDialect{}

MySQL is the dialect which should be used for MySQL/MariaDB databases

var Postgres = postgresDialect{}

Postgres is the dialect for Postgres-compatible databases

var SQLite = &sqliteDialect{}

SQLite is the dialect for sqlite3 databases

Functions

func MigrationIDFromFilename

func MigrationIDFromFilename(filename string) string

MigrationIDFromFilename removes directory paths and extensions from the filename to make a friendlier Migration ID

func SortMigrations

func SortMigrations(migrations []*Migration)

SortMigrations sorts a slice of migrations by their IDs
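
Example usage (a small illustrative sketch):

migrations := []*schema.Migration{
	{ID: "2021-02-01 create orders", Script: "CREATE TABLE orders (id SERIAL PRIMARY KEY)"},
	{ID: "2021-01-01 create users", Script: "CREATE TABLE users (id SERIAL PRIMARY KEY)"},
}
schema.SortMigrations(migrations)
// migrations[0].ID is now "2021-01-01 create users"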

Types

type AppliedMigration

type AppliedMigration struct {
	Migration

	// Checksum is the MD5 hash of the Script for this migration
	Checksum string

	// ExecutionTimeInMillis is populated after the migration is run, indicating
	// how much time elapsed while the Script was executing.
	ExecutionTimeInMillis int64

	// AppliedAt is the time at which this particular migration's Script began
	// executing (not when it completed executing).
	AppliedAt time.Time
}

AppliedMigration represents a successfully-executed migration. It embeds Migration, and adds fields for execution results. Records persisted in the schema_migrations tracking table correspond to this type.

type Connection added in v1.2.0

type Connection interface {
	Transactor
	Queryer
}

Connection defines the interface for a *sql.Conn, which can both start a new transaction and run queries.

type DB added in v1.2.0

type DB interface {
	Conn(ctx context.Context) (*sql.Conn, error)
}

DB defines the interface for a *sql.DB, which can be used to get a concrete connection to the database.

type Dialect

type Dialect interface {
	QuotedTableName(schemaName, tableName string) string

	CreateMigrationsTable(ctx context.Context, tx Queryer, tableName string) error
	GetAppliedMigrations(ctx context.Context, tx Queryer, tableName string) (applied []*AppliedMigration, err error)
	InsertAppliedMigration(ctx context.Context, tx Queryer, tableName string, migration *AppliedMigration) error
}

Dialect defines the minimal interface for a database dialect. All dialects must implement functions to create the migrations table, get all applied migrations, insert a new migration tracking record, and perform escaping for the tracking table's name

type File

type File interface {
	Name() string
	Read(b []byte) (n int, err error)
}

File combines the standard library io.Reader interface with the Name method of os.File

type Locker added in v1.2.0

type Locker interface {
	Lock(ctx context.Context, tx Queryer, tableName string) error
	Unlock(ctx context.Context, tx Queryer, tableName string) error
}

Locker defines an optional Dialect extension for obtaining and releasing a global database lock during the running of migrations. This feature is supported by PostgreSQL and MySQL, but not SQLite.
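
Example (a hypothetical sketch of a Postgres-flavored Locker using advisory locks; the myPGDialect name is invented here, and the built-in Postgres dialect's real implementation may differ):

// myPGDialect is assumed to also implement Dialect; only the Locker
// methods are shown.
type myPGDialect struct{}

func (d myPGDialect) Lock(ctx context.Context, tx schema.Queryer, tableName string) error {
	// Block until a cluster-wide advisory lock keyed on the table name is held.
	_, err := tx.ExecContext(ctx, "SELECT pg_advisory_lock(hashtext($1))", tableName)
	return err
}

func (d myPGDialect) Unlock(ctx context.Context, tx schema.Queryer, tableName string) error {
	_, err := tx.ExecContext(ctx, "SELECT pg_advisory_unlock(hashtext($1))", tableName)
	return err
}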

type Logger added in v1.1.9

type Logger interface {
	Print(...interface{})
}

Logger is the interface for the migrator's optional logging output. By default the migrator operates silently; providing a Logger enables output of the migrator's operations.
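
Example usage (a sketch; the standard library's *log.Logger satisfies this interface because it provides a Print method):

logger := log.New(os.Stdout, "schema: ", log.LstdFlags)
migrator := schema.NewMigrator(schema.WithLogger(logger))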

type Migration

type Migration struct {
	ID     string
	Script string
}

Migration is a yet-to-be-run change to the schema. This is the type which is provided to Migrator.Apply to request a schema change.

func FSMigrations added in v1.2.0

func FSMigrations(filesystem fs.FS, glob string) (migrations []*Migration, err error)

FSMigrations receives a filesystem (such as an embed.FS) and extracts all files matching the provided glob as Migrations, with the filename (without extension) being the ID and the file's contents being the Script.

Example usage:

FSMigrations(embeddedFS, "my-migrations/*.sql")

func MigrationFromFile

func MigrationFromFile(file File) (migration *Migration, err error)

MigrationFromFile builds a migration by reading from an open File-like object. The migration's ID will be based on the file's name. The file will *not* be closed after being read.
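
Example usage (a sketch; the file path is hypothetical, and an *os.File satisfies the File interface):

f, err := os.Open("my-migrations/2021-01-01 create users.sql")
if err != nil {
	log.Fatal(err)
}
defer f.Close() // MigrationFromFile reads the file but does not close it

migration, err := schema.MigrationFromFile(f)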

func MigrationFromFilePath

func MigrationFromFilePath(filename string) (migration *Migration, err error)

MigrationFromFilePath creates a Migration from a path on disk

func MigrationsFromDirectoryPath

func MigrationsFromDirectoryPath(dirPath string) (migrations []*Migration, err error)

MigrationsFromDirectoryPath retrieves a slice of Migrations from the contents of the directory. Only .sql files are read

func (*Migration) MD5 added in v1.2.0

func (m *Migration) MD5() string

MD5 computes the MD5 hash of the Script for this migration so that it can be uniquely identified later.

type Migrator

type Migrator struct {
	SchemaName string
	TableName  string
	Dialect    Dialect
	Logger     Logger
	// contains filtered or unexported fields
}

Migrator is an instance customized to perform migrations on a particular database against a particular tracking table and with a particular dialect defined.

func NewMigrator

func NewMigrator(options ...Option) *Migrator

NewMigrator creates a new Migrator with the supplied options

func (*Migrator) Apply

func (m *Migrator) Apply(db DB, migrations []*Migration) (err error)

Apply takes a slice of Migrations and applies any which have not yet been applied against the provided database. Apply can be re-called sequentially with the same Migrations and different databases, but it is not threadsafe... if concurrent applies are desired, multiple Migrators should be used.

func (Migrator) GetAppliedMigrations

func (m Migrator) GetAppliedMigrations(db Queryer) (applied map[string]*AppliedMigration, err error)

GetAppliedMigrations retrieves all already-applied migrations in a map keyed by the migration IDs
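
Example usage (a sketch; db is assumed to be a *sql.DB, which satisfies Queryer):

applied, err := migrator.GetAppliedMigrations(db)
if err != nil {
	log.Fatal(err)
}
for id, m := range applied {
	fmt.Printf("%s applied at %s in %d ms\n", id, m.AppliedAt, m.ExecutionTimeInMillis)
}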

func (*Migrator) QuotedTableName

func (m *Migrator) QuotedTableName() string

QuotedTableName returns the dialect-quoted fully-qualified name for the migrations tracking table

type Option

type Option func(m Migrator) Migrator

Option supports option chaining when creating a Migrator. An Option is a function which takes a Migrator and returns a modified copy of that Migrator.
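
Example (a hypothetical custom Option, shown only to illustrate the signature; the built-in With... options already cover the Migrator's exported fields):

// WithSchemaName is a hypothetical custom Option which sets the
// Migrator's exported SchemaName field.
func WithSchemaName(schemaName string) schema.Option {
	return func(m schema.Migrator) schema.Migrator {
		m.SchemaName = schemaName
		return m
	}
}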

func WithContext added in v1.2.0

func WithContext(ctx context.Context) Option

WithContext is an Option which sets the Migrator to run within the provided Context
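
Example usage (a sketch; db and migrations are assumed to exist already):

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

migrator := schema.NewMigrator(
	schema.WithContext(ctx),
	schema.WithDialect(schema.Postgres),
)
err := migrator.Apply(db, migrations)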

func WithDialect

func WithDialect(dialect Dialect) Option

WithDialect builds an Option which will set the supplied dialect on a Migrator. Usage: NewMigrator(WithDialect(MySQL))

func WithLogger added in v1.1.9

func WithLogger(logger Logger) Option

WithLogger builds an Option which will set the supplied Logger on a Migrator. Usage: NewMigrator(WithLogger(logrus.New()))

func WithTableName

func WithTableName(names ...string) Option

WithTableName is an option which customizes the name of the schema_migrations tracking table. It can be called with either 1 or 2 string arguments. If called with 2 arguments, the first argument is assumed to be a schema qualifier (for example, WithTableName("public", "schema_migrations") would assign the table named "schema_migrations" in the default "public" schema for Postgres)
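
For example:

// Track migrations in a table named "my_migrations" in the default schema:
m1 := schema.NewMigrator(schema.WithTableName("my_migrations"))

// Track migrations in "public"."schema_migrations":
m2 := schema.NewMigrator(schema.WithTableName("public", "schema_migrations"))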

type Queryer

type Queryer interface {
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}

Queryer is something which can execute a Query (either a sql.DB or a sql.Tx)

type Transactor added in v1.2.0

type Transactor interface {
	BeginTx(ctx context.Context, opts *sql.TxOptions) (*sql.Tx, error)
}

Transactor defines the interface for the BeginTx method of *sql.DB
