Documentation ¶
Overview ¶
Package db is a way to interface with sql databases as more than just simple data stores. At the moment, it is an ORM-like library in an alpha state.
Guiding Ideas ¶
A Connection is both a sql.DB connection to an active database and a mechanism for translating that database connection into structs.
Structs can be configured by the Connection.Config mechanism or struct tags for special cases.
Mappers turn rows from sql tables into structs. Mappers can be used to save structs to databases or retrieve subsets of the table using scopes. Scopes are implemented on Mappers, which return a Queryable that you can then chain off of to further filter the results that would be returned with RetrieveAll.
Scopes are useful for more than just filtering results; you can also act on them. You will be able to pull specific attributes out into arrays of simple types. For example, you should be able to run the following code to get a list of users who meet certain conditions and pull back their email addresses. Either option would work.
Option 1
	User.Where("welcomed_at", nil).
		Where("created_at >=", time.Now().Add(-15*time.Minute)).
		Pluck("email").([]string)
Option 2
	var emails []string
	User.Unwelcomed().AddedSince(-15 * time.Minute).Pluck("email", &emails)

	type UserMapper struct {
		MapperPlus
	}

	func (um *UserMapper) Unwelcomed() *UserMapper {
		return &UserMapper{um.Where("welcomed_at", nil)}
	}

	func (um *UserMapper) AddedSince(d time.Duration) *UserMapper {
		return &UserMapper{um.Where("created_at >= ", time.Now().Add(d))}
	}
But wait, there's more. You can also run delete or update statements from scopes. Updates may be given as a column and value, a map of columns to values, or straight sql; you can likewise run a sql delete from a scope.
Scopes are used for more than just that. Has Many relations can have both an array of related structs and a scope, both of which are filled in by the retrieval process. Calling user.Posts.Save(post) would set the user relation on the post before saving it. user.Posts.Where(...) would start with a queryable scoped to the user's posts before applying the rest of the scope.
Connections ¶
Start building an environment by calling NewConnection with your dialect name ("mysql", "sqlite", "postgres"), the name of the database to use, and the connector string for the database adapter (connector string formats still need to be documented).
There is code to manage Transactions using Connections, but it is not well supported and may be removed until EphemeralConnections come around.
Mappers ¶
Mappers are created by running CreateMapper on a Connection. You will receive back a Mapper object that you can then use to retrieve database rows, save structs into the database and create scopes of the struct.
	Posts := Connection.MustCreateMapper("Post", &Post{})

	// Retrieve posts created from ~14 days ago to present
	timeCutoff := time.Now().Add(-14 * 24 * time.Hour)
	var recentPosts []Post
	Posts.Cond("created_at", db.GT, timeCutoff).RetrieveAll(&recentPosts)

	var myRecentPosts []Post
	Posts.EqualTo("author_id", Current.User.Id).Order("hits_counter").RetrieveAll(&myRecentPosts)
Mappers can also save structs to the database using the SaveAll function. You can pass either a single instance or multiple instances to it, and it will use the primary key value of the passed instance(s) to determine whether it needs to update existing records or create new records.
	expiration := time.Now().Add(7 * 24 * time.Hour)
	for _, post := range newPosts {
		if post.AboveAverage(newPosts) {
			post.FeaturedUntil = expiration
		}
	}
	Posts.SaveAll(newPosts)
MapperPlus ¶
Mapper+'s are like Mappers, but they are designed to be embedded in a user-defined struct on which the user defines custom scopes, each composed of one or more regular scopes. The main difference between a Mapper and a Mapper+ is that a Mapper returns a Scope when you call a Queryable method on it, while a Mapper+ returns a new Mapper+ for the first scope and applies all further scopes to that Mapper+ object. This way user structs do not have to be continually cast or re-created; each scope simply adds to the current one.
	type UserMapper struct {
		MapperPlus
	}

	func (um *UserMapper) Activated() *UserMapper {
		return &UserMapper{um.Cond("activated_at", db.NE, nil)}
	}

	func (um *UserMapper) RecentlyActive() *UserMapper {
		timeCutoff := time.Now().Add(-30 * 24 * time.Hour)
		return &UserMapper{um.Cond("last_login", db.GTE, timeCutoff)}
	}

	var Users UserMapper
	Connection.InitMapperPlus("User", &Users)

	goodUsers := Users.RecentlyActive()
	moreGoodUsers := goodUsers.Activated()
	// at this moment the results of moreGoodUsers == goodUsers,
	// but they are different instances
When you need to duplicate a Mapper+ scope, you could use Identity, but that returns a Scope, not a MapperPlus. To assist in this situation, the MapperPlus interface has a Dupe method that returns a MapperPlus for you to use.
	type UserMapper struct {
		db.MapperPlus
	}

	var Users *UserMapper
	db.InitMapperPlus("User", &Users)

	// will error out
	&UserMapper{Users.Where(...).Identity()}

	// will compile correctly
	&UserMapper{Users.Where(...).Dupe()}
Scopes ¶
Scopes are the cornerstone of db. Scopes are the common case for creating SQL queries. Scopes come in different forms, depending on what you are wanting to accomplish.
For instance, let's say I need all students who have attended at least one class and have an average score of at least 90 on the quizzes they've taken.
	myClass.Students.
		Join(myClass.Meetings).
		EqualTo("meeting_attendance.attended", true).
		Join(myClass.Quizzes).
		GroupBy(myClass.Students).
		Having("AVG(quiz_score.overall) >= ?", 90)
Or perhaps you would rather see the top 5 most popular posts on your blog from the articles you released in the past month.
	Posts.Cond("created_at", db.GTE, time.Now().AddDate(0, -1, 0)).Order("hits").Limit(5)
For detailed descriptions of each Scope function, please see the Queryable interface documentation.
Database Mapping ¶
There are multiple ways to retrieve data and map the data into struct instances.
The Find function takes two parameters, the primary key of the record you want and a pointer to the struct instance you want to map to. The Find function may start from Mappers, Scopes and Mapper+'s. Note that the Find will still respect any conditions on the Scope or Mapper+ if you are calling it from one of them.
	// find user by id
	var CurrentUser User
	err := Users.Find(session["user_id"].Int(), &CurrentUser)

	// find user if the user is an admin
	var AdminUser User
	err = Users.Admins().Find(session["user_id"].Int(), &AdminUser)
The Retrieve function takes one parameter: the struct instance to map the first record's data into. If more than one record would be returned by the current Scope, Mapper, or Mapper+, the first record is the one mapped.
	// retrieve head writer for section
	var SectionHead User
	Users.Joins(SectionAssignments).EqualTo("section_assignments.section_id", section.Id).Retrieve(&SectionHead)

	// retrieve first created user
	var FirstUser User
	Users.OrderBy("created_at", "ASC").Retrieve(&FirstUser)
The RetrieveAll function takes 1 parameter, which is a pointer to an array of the struct you want to map into.
	// get all the Users
	var Many []User
	Users.RetrieveAll(&Many)

	// get all the commentors for an article
	var Commentors []User
	Users.Joins(Comments).EqualTo("comments.article_id", CurrentArticle.Id).RetrieveAll(&Commentors)
Saving and Updating Values ¶
You can save slices of new structs into the database through a Mapper's SaveAll call. You can also save a single struct instance with SaveAll, but you will need to pass a pointer to the instance so the mapper can update it with the primary key assigned to that struct.
	// newPost is an unsaved post and newPost.Id is the zero value
	Posts.SaveAll(&newPost)
	// now newPost.Id will equal the primary key of the db record associated with it

	// otherPosts is an array of posts, some of which are new, some of which need to be updated
	Posts.SaveAll(otherPosts)
	// posts that need to be saved will be saved, and their slice instances will be updated,
	// no matter whether the slice is of []Post or []*Post
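The insert-or-update decision above comes down to a zero-value check on the primary key field. A minimal sketch of such a check, using reflection; the Post struct and isNew helper here are illustrative and not part of db's API:

```go
package main

import (
	"fmt"
	"reflect"
)

// Post is a hypothetical mapped struct; only the Id field matters here.
type Post struct {
	Id    int
	Title string
}

// isNew reports whether a struct instance still has a zero-valued
// primary key, which is the signal SaveAll could use to decide
// between an INSERT and an UPDATE.
func isNew(v interface{}, pkField string) bool {
	return reflect.Indirect(reflect.ValueOf(v)).FieldByName(pkField).IsZero()
}

func main() {
	fresh := &Post{Title: "unsaved"}
	saved := &Post{Id: 42, Title: "already persisted"}
	fmt.Println(isNew(fresh, "Id")) // true  -> would INSERT
	fmt.Println(isNew(saved, "Id")) // false -> would UPDATE
}
```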
You can also update columns in the database off of a Scope or a Mapper. There are three functions, UpdateAttribute, UpdateAttributes, and UpdateSql, that will do this for you. UpdateAttribute takes a column name and a value, and updates that column to the value for all the database rows that match the scope. UpdateAttributes takes a map of column names to values so you may update more than one column at once. UpdateSql takes a sql fragment and allows you to write sql that uses sql functions instead of plain values. UpdateSql will see less use once db.Formula objects are implemented; note also that UpdateSql is not yet implemented.
Posts.EqualTo("late", true).UpdateAttribute("delete_on", time.Now().Add(10 * time.Minute))
The Count method allows you to retrieve a count of the rows that would be retrieved from a Scope or Mapper.
	// Returns the number of Posts saved in the database
	Posts.Count()

	// Returns the number of Posts written by a specific user
	user.Posts.Count()
The Pluck method allows you to retrieve a selected column from a Scope, Mapper, etc. It is then mapped into a simple array value that was passed as the second value.
	// get emails for users who haven't paid for last month
	var emails []string
	Users.Joins(Payments).Where("payments.month = ? AND payments.paid_on IS NULL", month).Pluck("email", &emails)
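The mapping step behind Pluck can be pictured as copying one column of already-fetched rows into the caller's slice via reflection. A sketch under stated assumptions; the map-based row representation and the pluck helper are illustrative, not db's internal types:

```go
package main

import (
	"fmt"
	"reflect"
)

// pluck copies the named column out of already-fetched rows into the
// slice that dest points to. A real implementation would work from
// scanned sql.Rows; plain maps stand in for rows here.
func pluck(rows []map[string]interface{}, column string, dest interface{}) {
	slice := reflect.ValueOf(dest).Elem()
	for _, row := range rows {
		slice.Set(reflect.Append(slice, reflect.ValueOf(row[column])))
	}
}

func main() {
	rows := []map[string]interface{}{
		{"email": "a@example.com"},
		{"email": "b@example.com"},
	}
	var emails []string
	pluck(rows, "email", &emails)
	fmt.Println(emails) // [a@example.com b@example.com]
}
```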
Possible Future Retrieval Methods ¶
The CountOn method is a user controlled version of Count. If you would like to specify a specific column, perhaps to do a DISTINCT count on, this is what you want.
	// get the number of distinct categories among posts
	Posts.CountOn("DISTINCT category_id")
The PluckSeveral is similar to Pluck, but allows you to specify multiple parameters and arrays to map results into. It uses a string array for the first parameters, then a variable amount of pointers to the arrays for the data.
	// get emails and names for users who have paid for last month
	var emails, names []string
	Users.
		Joins(Payments).
		Where("payments.month = ? AND payments.paid_on IS NOT NULL", month).
		PluckSeveral([]string{"name", "email"}, &names, &emails)
The Select function allows you to map specially selected columns and/or formulas into purpose-written or anonymous structs. If a table has many columns, or you are returning quite a bit of data, this can be a performance boost to use special structs instead of the default mapper.
	// get weekly newsletter readers
	type weeklyReaders struct {
		Name, Email string
		Sections    string
	}

	var readers []weeklyReaders
	columns := "users.name, users.email, GROUP_CONCAT(subscription_sections.name SEPARATOR '|')"
	Users.Joins(Subscriptions).Joins(SubscriptionSections).GroupBy("users.id").Select(columns, &readers)
There are also TableInformation and ScopeInformation interfaces. I would caution against using these two interfaces at the moment, as they are intended to be improved heavily before a stable release of db. A stable version of db will provide a comprehensive informational interface for both Scopes and Mappers, but there are more pressing features at the moment.
Mixin Functionality ¶
If your use case involves significant use of the database, instead of using the database as a simple persistence mechanism, you will enjoy the Mixin functionality offered by db. When you add the db.Mixin struct as an embedded field to your structs, you gain the ability to Save, Delete, and UpdateAttribute(s) from struct instances directly instead of having to use the mapper objects.
Mixins need to be initialized explicitly. You can do this by passing instances individually, as a slice, or as any number of individual instances to the Initialize function of the mapper for that struct type. You can also initialize an individual instance by calling its Init function with a pointer to the instance. This is only required if you construct your instances manually rather than through the Find/Retrieve/RetrieveAll Scope/Mapper functions; Find, Retrieve, and RetrieveAll will all initialize the instances they retrieve if those instances embed Mixin. Instances do not need to be resident in the database for initialization to succeed. Instances also don't need to be initialized to be saved using the Mapper.SaveAll function.
	// initialize an instance
	post := new(Post)
	post.Init(post)
	post.Name = "Hello World"

	// Save the post to a new record in the corresponding database table
	post.Save()

	// initialize an instance that is mapped on multiple connections;
	// this is only necessary when a struct is mapped on different connections,
	// and Init will return an error in situations when you must use this function
	post = new(Post)
	post.InitWithConn(pgConn, post)

	post = new(Post)
	post.InitWithConn(myConn, post)

	// initialize three instances at once
	post1, post2, post3 := new(Post), new(Post), new(Post)
	Posts.Initialize(post1, post2, post3)

	var newPosts []Post
	... // code that adds instances to newPosts

	// initialize all instances in newPosts
	Posts.Initialize(newPosts)
Joining and Sub-struct Operations ¶
While joining in db can be divided multiple ways, the simplest division may be between automatic joins and manual joins. Manual joins are specified by the user in the join query; they may add specifiers to the join call or join on non-intuitive columns. Automatic joins are discovered during mapping by db and can then be retrieved using the mapper or mixin, or using the Include Scope method.
Manual joins are intended for cases where you need filtering that arises from the existence of the join, or where you need to select columns/formulas/etc. from the query using the Select method of retrieval.
	// find posts that have not been commented on by a specific user
	Posts.Join("LEFT JOIN comments ON comments.user_id = 123456 AND comments.post_id = posts.id").
		EqualTo("comments.id", nil)

	// find the average score of good and bad comments for posts that have both
	var scoredPosts []struct {
		Post
		BadAverage, GoodAverage float64
	}
	Posts.
		Join("INNER JOIN comments AS bad_comments ON bad_comments.post_id = posts.id AND bad_comments.score < 0").
		Join("INNER JOIN comments AS good_comments ON good_comments.post_id = posts.id AND good_comments.score > 0").
		Group("posts.id").
		Select(db.Selections{
			db.TableSelection("posts.*"),
			db.FormulaSelection("AVG(bad_comments.score)", "BadAverage"),
			db.FormulaSelection("AVG(good_comments.score)", "GoodAverage"),
		}, &scoredPosts)
Automatic joins are declared as part of the struct, and can then be used in Join calls by simply passing in a string or mapper corresponding to the joined struct. See the example below. By default, joins are implemented as outer joins, but you can default specific joins to inner joins in the sql statement by setting the db_join struct tag to inner. You can also use the alternative Join function, InnerJoin, to have the join run as an inner join. Finally, if you have set a db_join to default to inner but want an outer join instead, you can use the OuterJoin function.
	type Post struct {
		Title, Body string
		AuthorId    int
		Author      *User `db_join:"inner"`
		Comments    []*Comment
	}

	// find all posts with authors that are guests
	Posts.Join("Author").EqualTo("author.role", Author_GUEST)

	// retrieve the posts from a specific author that have featured comments
	Posts.EqualTo("Author", theAuthor).Join(Comments).EqualTo("comments.featured", true)
Possible Future Join Enhancements ¶
The FullJoin function allows you to retrieve records that don't have a match to the primary mapped struct. You pass the normal Join parameters, plus a pointer to an array of the struct being joined, which will be filled with the non-matching records when the first Retrieve/RetrieveAll call is made.
	// find all authors' posts, and also collect posts that are missing an author
	var authors []User
	var orphaned []Post
	User.FullJoin(&orphaned, Posts).Cond("posts.created_at", GTE, oneYearAgo).RetrieveAll(&authors)
SQL Sundries ¶
If you need functions evaluated by the sql server as part of conditions, you can pass a formula created with the Func function. Funcs can have their own parameters, which you should specify using ?'s to denote where the values should appear. Where scopings do not respect Funcs at the moment, but they will in the future.
	// Find all appointments with a length of less than 5 minutes
	var tooShort []Appointment
	shortFunc := db.Func("TIMESTAMPADD(MINUTE, 5, appointments.begin_date_time)")
	Appointments.Cond("end_date_time", db.LT, shortFunc).RetrieveAll(&tooShort)

	// Where calls do not need Func instances to use functions
	Appointments.Where("end_date_time < TIMESTAMPADD(MINUTE, 5, appointments.begin_date_time)")
The Col function allows you to specify a column to be used as a parameter in the same manner as a value or Func.
	// Find all appointments that have been updated
	Appointment.Cond("begin_date_time", db.LT, db.Col("end_date_time"))
Dialects ¶
A Dialect creates a way for db to talk to a specific RDBMS. The current internal ones are mysql and sqlite3, with postgres planned for the near future. You can replace existing dialects or add your own dialects by writing a struct that corresponds to the Dialect interface and then calling RegisterDialect with the name you want the dialect to be accessible under and an instance of your dialect struct.
Logging ¶
Before a public announcement of a db version, I need to implement the Logging facilities. It won't be difficult, but it takes time. Time that I haven't invested yet.
Index ¶
- Constants
- func RegisterDialect(name string, dialect Dialect)
- type Attributes
- type Base
- func (d Base) ColumnsInTable(db *sql.DB, dbName string, table string) map[string]*ColumnInfo
- func (d Base) Create(mapper Mapper, values map[string]interface{}) (string, []interface{})
- func (d Base) CreateExec() bool
- func (d Base) Delete(scope Scope) (string, []interface{})
- func (d Base) ExpandGroupBy() bool
- func (d Base) FormatQuery(query string) string
- func (d Base) Query(scope Scope) (string, []interface{})
- func (d Base) Update(scope Scope, values map[string]interface{}) (string, []interface{})
- type COND
- type ColumnInfo
- type Config
- type Connection
- func (c *Connection) CacheSize(n int)
- func (c *Connection) Close() error
- func (c *Connection) CreateMapper(name string, mapee interface{}) (Mapper, error)
- func (c *Connection) Exec(query string, args ...interface{}) (sql.Result, error)
- func (c *Connection) InitMapperPlus(name string, v interface{})
- func (c *Connection) MustCreateMapper(name string, v interface{}) Mapper
- func (c *Connection) Query(query string, args ...interface{}) (*sql.Rows, error)
- func (c *Connection) QueryRow(query string, args ...interface{}) *sql.Row
- func (c *Connection) SetLogger(logger Logger, logType int)
- type Dialect
- type IncludeList
- type Logger
- type Mapper
- type MapperPlus
- type Mixin
- func (m *Mixin) Delete() error
- func (m *Mixin) Init(instance interface{}) error
- func (m *Mixin) InitWithConn(conn *Connection, instance interface{}) error
- func (m *Mixin) IsNull(column string) bool
- func (m *Mixin) Save() error
- func (m *Mixin) SetNull(column string)
- func (m *Mixin) UpdateAttribute(attr string, value interface{}) error
- func (m *Mixin) UpdateAttributes(values Attributes) error
- type Queryable
- type Scope
- type ScopeInformation
- type SqlBit
- type SqlCol
- type SqlFunc
- type TableInformation
Constants ¶
	const (
		LOG_ALL = iota
		LOG_QUERY
		LOG_ERROR
	)
Each Connection can have multiple loggers, and each logger is logged to based on the logType that is passed to SetLogger. The constants above are the possible log types.
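One way a Connection might route messages by log type can be sketched as a simple match rule; the shouldLog helper and its dispatch logic are illustrative assumptions, only the constants come from db:

```go
package main

import "fmt"

// Mirror of db's log type constants (LOG_ALL = 0, LOG_QUERY = 1, LOG_ERROR = 2).
const (
	LOG_ALL = iota
	LOG_QUERY
	LOG_ERROR
)

// shouldLog reports whether a logger registered with registeredType
// should receive a message of messageType: a LOG_ALL logger sees
// queries and errors alike, while the others only see their own type.
func shouldLog(registeredType, messageType int) bool {
	return registeredType == LOG_ALL || registeredType == messageType
}

func main() {
	fmt.Println(shouldLog(LOG_ALL, LOG_QUERY))   // true
	fmt.Println(shouldLog(LOG_ERROR, LOG_QUERY)) // false
	fmt.Println(shouldLog(LOG_QUERY, LOG_QUERY)) // true
}
```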
Variables ¶
This section is empty.
Functions ¶
func RegisterDialect ¶
If you have an external dialect, use this function to load it in so that consumers can create Connections using it. You can override builtin dialects by naming your dialect the same as a builtin dialect.
Types ¶
type Attributes ¶
type Attributes map[string]interface{}
This is used to pass multiple columns and values to UpdateAttributes functions
	// Update multiple attributes
	somePostsScope.UpdateAttributes(db.Attributes{
		"created_at": time.Now(),
		"building":   true,
		"state":      "build",
	})
type Base ¶
type Base struct {
Dialect Dialect
}
The Base dialect provides default implementations of a Dialect. Each of the builtin dialects embeds a Base, which reduces their implementation to database-specific functions. An example implementation using Base would look like:
	type OracleDialect struct {
		Base
	}

	// Dialects need to be circular in structure, mainly so that
	// FormatQuery can be called from Base into your Dialect
	func NewOracle() Dialect {
		d := &OracleDialect{}
		d.Base.Dialect = d
		return d
	}

	func (d OracleDialect) CompatibleSqlTypes(f reflect.Type) []string {
		...
	}

	func ...
func (Base) ColumnsInTable ¶
The Base ColumnsInTable will attempt to use information_schema to retrieve column names; it will not try to guess types for columns. It is in your best interest to implement this per database.
func (Base) Create ¶
The Base Create function uses the syntax INSERT INTO `table` (col...) VALUES (...). If this syntax will not work, or you need a RETURNING predicate to get the id of the inserted records, you should override this.
func (Base) CreateExec ¶
The Base CreateExec return value is true, so INSERT statements will be run by sql.Exec calls.
func (Base) ExpandGroupBy ¶
The Base ExpandGroupBy will return true
func (Base) FormatQuery ¶
The Base FormatQuery does nothing; it just returns the input string. This works for databases that parameterize with ?, but postgres and similar databases will need to implement a transformation in this function.
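The transformation such a dialect would perform can be sketched as a rewrite of ? placeholders into numbered $N parameters. This formatQuery helper is a simplified assumption, not the package's implementation; among other things, it ignores ? characters inside string literals:

```go
package main

import (
	"fmt"
	"strings"
)

// formatQuery rewrites ? placeholders into postgres-style $1, $2, ...
// numbered parameters. A production version would have to skip ?
// characters that appear inside quoted literals.
func formatQuery(query string) string {
	var b strings.Builder
	n := 0
	for _, r := range query {
		if r == '?' {
			n++
			fmt.Fprintf(&b, "$%d", n)
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(formatQuery("SELECT * FROM users WHERE id = ? AND role = ?"))
	// SELECT * FROM users WHERE id = $1 AND role = $2
}
```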
type ColumnInfo ¶
	type ColumnInfo struct {
		// Name will be set to the struct field name; you can set it
		// to the column name if you wish without harming anything
		Name string

		// The table that this column belongs to
		SqlTable string

		// The name for the database column
		SqlColumn string

		// The type; this should correlate to a type given by the Dialect
		// function CompatibleSqlTypes
		SqlType string

		// The length of the field; you should set this to -1 for fields
		// that have no effective limit
		Length int

		// Whether this field could return a NULL value; this is safe to
		// mark as true if in doubt. It activates nil protection for mapping
		Nullable bool

		// The index of the column within the table; it is an optional field
		Number int
	}
ColumnInfo is the data returned by a ColumnsInTable function, which is implemented by the database Dialects.
type Config ¶
	type Config struct {
		// A function that gives the default table name for a struct
		// in the database; for instance, Rails would take "Post" and
		// return "posts"
		StructToTable func(structName string) string

		// A function that gives a default name for a database column
		// based on a struct field
		FieldToColumn func(structName, fieldName string) string

		// A function that guesses the default name of a foreign key
		// field based on the field name from the struct and the struct
		// that was embedded in the larger struct
		ForeignKeyName func(fieldName string, structName string) string

		// The default name for a primary key field; this turns the
		// name of the struct into the primary key name. Note that
		// this returns the struct field name first and the database
		// column name second
		IdName func(structName string) (string, string)

		// When timestamping is added, CreatedColumn is the default column
		// to set to the current time when creating records in the database
		CreatedColumn string

		// When timestamping is added, UpdatedColumn is the default column
		// to set to the current time when saving records in the database.
		// Update* calls would not update this.
		UpdatedColumn string
	}
A Config is the way you can define defaults for the database as far as table, column, foreignkey, and primary key naming.
func NewRailsConfig ¶
func NewRailsConfig() *Config
The RailsConfig tries to replicate Rails practices for naming.
Table names are pluralized lowercase versions of struct names, and column names are lowercased versions of field names. Foreign key names are the lowercased field name with '_id' appended.
Ids are identified with the field name "Id" and the database column "id", while the timestamp fields are "CreatedAt" and "UpdatedAt".
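The naming functions a Rails-style Config installs can be sketched as simple string transforms. These helpers are illustrative assumptions; in particular, real Rails pluralization handles irregular words, while this sketch only appends "s":

```go
package main

import (
	"fmt"
	"strings"
)

// structToTable mimics the Rails table-name convention:
// lowercase the struct name and naively pluralize it.
func structToTable(structName string) string {
	return strings.ToLower(structName) + "s"
}

// foreignKeyName mimics the Rails foreign-key convention:
// lowercase the field name and append "_id".
func foreignKeyName(fieldName string) string {
	return strings.ToLower(fieldName) + "_id"
}

func main() {
	fmt.Println(structToTable("Post"))    // posts
	fmt.Println(foreignKeyName("Author")) // author_id
}
```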
func NewSimpleConfig ¶
func NewSimpleConfig() *Config
Simple config is the simplest Config implementation I could think of. Table and Column names are lowercase versions of struct and field names, foreign keys are lowercased field names with 'id' appended. Primary key names are Id and id, with timestamps of Creation and Modified.
type Connection ¶
	type Connection struct {
		// This is the main database connection used by the Connection
		DB *sql.DB

		// The Dialect that the Connection uses to format queries
		Dialect Dialect

		// The Config for mapping structs to database tables and records
		Config *Config

		// The QueryCache for saving and reusing queries. To disable, simply
		// set this to nil. By default, it stores up to 4096 distinct queries;
		// use CacheSize(n int) to change the query storage number
		QueryCache *cache.Cache
		// contains filtered or unexported fields
	}
func NewConnection ¶
func NewConnection(dialectName, dbName, connector string) (*Connection, error)
NewConnection creates a connection to a specific database server with a specific database. Connections are used to create Mappers, which are then used to retrieve records. Note that NewConnection will set the MaxIdleConns to 100 for the database connection.
func (*Connection) CacheSize ¶
func (c *Connection) CacheSize(n int)
Set the number of queries that may be present in the query cache at any time. The default is 4096, for the arbitrary reason that I like that number.
func (*Connection) Close ¶
func (c *Connection) Close() error
A Connection can be closed, which essentially means that the *sql.DB connection is closed; you should not count on whether or not the Close operation also clears out other structures.
func (*Connection) CreateMapper ¶
func (c *Connection) CreateMapper(name string, mapee interface{}) (Mapper, error)
CreateMapper returns a Mapper instance for the mapee struct you passed
func (*Connection) Exec ¶
func (c *Connection) Exec(query string, args ...interface{}) (sql.Result, error)
This is almost the same as Connection.DB.Exec(query, args), but will prepare the statement and cache it in the connection's query cache. The resemblance to the database/sql DB interface is intentional.
result, e := pgConn.Exec(createThings, values...)
func (*Connection) InitMapperPlus ¶
func (c *Connection) InitMapperPlus(name string, v interface{})
Initialize a MapperPlus instance; this still needs more documentation and tests.
func (*Connection) MustCreateMapper ¶
func (c *Connection) MustCreateMapper(name string, v interface{}) Mapper
This is the same as CreateMapper, but will panic on an error instead of returning it
func (*Connection) Query ¶
func (c *Connection) Query(query string, args ...interface{}) (*sql.Rows, error)
This is almost the same as Connection.DB.Query(query, args), but will prepare the statement and cache it in the connection's query cache. The resemblance to the database/sql DB interface is intentional.
rows, e := pgConn.Query(bigQuery, values...)
func (*Connection) QueryRow ¶
func (c *Connection) QueryRow(query string, args ...interface{}) *sql.Row
This is almost the same as Connection.DB.QueryRow(query, args), but will prepare the statement and cache it in the connection's query cache. The resemblance to the database/sql DB interface is intentional.
row := pgConn.QueryRow(complicated, values...)
func (*Connection) SetLogger ¶
func (c *Connection) SetLogger(logger Logger, logType int)
A Logger is a struct that has a subset of a log.Logger, you can use a log.Logger for it, but you can substitute a different struct of your own imagining if you wish.
type Dialect ¶
	type Dialect interface {
		// Given a database name and table name, output the information
		// for each column listed in the table
		ColumnsInTable(conn *Connection, dbName string, tableName string) map[string]*ColumnInfo

		// This function isn't used at the moment, but when db gets
		// migration support, it will be used
		CompatibleSqlTypes(f reflect.Type) []string

		// FormatQuery takes in a query that uses ?'s as placeholders for
		// query parameters, and formats the query as needed for the database
		FormatQuery(query string) string

		// Query creates a simple select query
		Query(scope Scope) (string, []interface{})

		// Create takes a mapper and a map of the sql column names to the
		// values set on a struct instance for the mapped fields
		// corresponding to those columns, and returns the query and any
		// necessary parameters
		Create(mapper Mapper, values map[string]interface{}) (string, []interface{})

		// If the database doesn't respond to LastInsertId when using Exec,
		// you can add a RETURNING predicate in Create and return false
		// here so struct instances can get their primary key values
		CreateExec() bool

		// Write an update statement similar to the create statement and
		// return the parameterized query form
		Update(scope Scope, values map[string]interface{}) (string, []interface{})

		// Formulate a DELETE FROM statement from a scope and return
		// the sql and parameters
		Delete(scope Scope) (string, []interface{})

		// The GROUP BY sql syntax has some differences from database to
		// database. Mysql allows you to specify a single column that
		// determines the table to group by, while sqlite, postgres, and
		// others require every non-aggregated field to be present. It
		// would be onerous to ask developers to write every field for
		// database systems that require it, so we'll do it for them, but
		// we need to know whether we need to do it for this database
		// system
		ExpandGroupBy() bool
	}
New dialects must fulfill this interface; sqlite, mysql and postgres have internal dialects that implement it.
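The GROUP BY expansion that ExpandGroupBy controls can be sketched as follows. The plain string slices for column lists are an assumption for illustration; db would work from its mapped struct metadata:

```go
package main

import (
	"fmt"
	"strings"
)

// expandGroupBy builds a GROUP BY clause. When expand is false (e.g. a
// mysql-like dialect), grouping on the single key column is enough;
// when expand is true, every non-aggregated selected column is listed,
// as sqlite- and postgres-like databases require.
func expandGroupBy(groupCol string, selected []string, expand bool) string {
	if !expand {
		return "GROUP BY " + groupCol
	}
	return "GROUP BY " + strings.Join(selected, ", ")
}

func main() {
	cols := []string{"posts.id", "posts.title", "posts.author_id"}
	fmt.Println(expandGroupBy("posts.id", cols, false))
	// GROUP BY posts.id
	fmt.Println(expandGroupBy("posts.id", cols, true))
	// GROUP BY posts.id, posts.title, posts.author_id
}
```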
type IncludeList ¶
type IncludeList []interface{}
type Logger ¶
	type Logger interface {
		Println(v ...interface{})
		Printf(format string, v ...interface{})
	}
I need to write the code for this; while I'm at it, I can add a quick performance tweak of a prepared query cache.
type Mapper ¶
	type Mapper interface {
		Queryable
		TableInformation

		Initialize(val ...interface{}) error
		SaveAll(val interface{}) error
	}
Mappers are the basic mechanism to turn database rows into struct instances. You can Scope queries off of them, you can get basic SQL information about their database table, and you can act on multiple instances of the mapped struct.
Initialize allows you to initialize Mixins on struct instances, ranging from just one instance to many instances in an array. SaveAll is either a shortcut to calling Save on structs that have Mixins, or the main way to save records when the mapped struct doesn't have a Mixin.
type MapperPlus ¶
	type MapperPlus interface {
		Mapper
		ScopeInformation

		Dupe() MapperPlus
	}
A MapperPlus is both a Scope-like interface and the Mapper for a struct. It has to be both in order to let you specify custom Scope-like functions. I'll add examples soon.
type Mixin ¶
type Mixin struct {
// contains filtered or unexported fields
}
The Mixin struct can be embedded into your structs to allow for ActiveRecord (the pattern) like operations. It does need to be initialized, which you can do manually when you create instances with User{} or new(User), or which will be done automatically when you map instances using Find, Retrieve, or RetrieveAll. If you are using constructor functions for your structs, you should add this to the function.
func NewUser(p UserParams) *User {
	u := &User{}
	u.Init(u)
	u.SetFromParams(p)
	return u
}
func (*Mixin) Init ¶
Manually initialize a Mixin. Pass a pointer to the current instance in to initialize.
user.Init(&user)
func (*Mixin) InitWithConn ¶
func (m *Mixin) InitWithConn(conn *Connection, instance interface{}) error
Manually initialize a struct that is mapped on multiple connections.
user.InitWithConn(pgConn, &user)
func (*Mixin) IsNull ¶
Return whether a column had the value NULL when retrieved from the database. In this manner, you don't need to use sql.NullString or *string values in your structs for fields that may be nullable in the database.
if userAttendance.IsNull("arrival") {
	fmt.Println(userAttendance.User.Name, "did not attend")
}
func (*Mixin) Save ¶
Save the instance to the database. If the primary key field isn't set, this will create the record; otherwise it will use the primary key field to update the other fields. Mixin should track changed fields so we don't have to update all fields, but that hasn't been written yet.
user.FirstName = "Bob"
user.LastName = "Zealot"
e := user.Save()
func (*Mixin) SetNull ¶
Sets whether a column should have a NULL value in the database. TODO: this only works for IsNull at the moment; it should also be used for saving.
func (*Mixin) UpdateAttribute ¶
Update the database record, setting the column named attr to the value passed. Note: the instance that you are calling this on will not get the updated values.
user.UpdateAttribute("visit_time", time.Now())
func (*Mixin) UpdateAttributes ¶
func (m *Mixin) UpdateAttributes(values Attributes) error
Update multiple columns of the database record with the passed values. Note: the instance you are calling this on will not get the updated values.
apptAttendance.UpdateAttributes(db.Attributes{
	"attended": true,
	"arrival":  time.Now(),
})
type Queryable ¶
type Queryable interface {
	// Identity is the canonical way to duplicate a Scope, it doesn't do anything else
	Identity() Scope

	// Cond is a quick interface to the simple compare operations
	Cond(column string, condition COND, val interface{}) Scope

	// The EqualTo scope is a shorthand way of saying Cond(column, db.EQUAL_TO, value)
	EqualTo(column string, val interface{}) Scope

	// The Between Scope is a way to specify a SQL BETWEEN without writing a Where fragment
	Between(column string, lower, upper interface{}) Scope

	// The In Scope is a way to specify an SQL IN for variable sized arrays of items
	In(column string, items interface{}) Scope

	// The Where Scope is a generalized way to write SQL Expressions. It can do
	// simple matching like EqualTo, Cond, Between or In written in SQL. It will
	// also handle binding variables within a SQL statement.
	Where(fragment string, args ...interface{}) Scope

	// The Having SQL clause allows you to filter on aggregated values from a
	// GROUP BY. Since Having is always using SQL functions, it should be
	// simpler to just write the SQL fragment directly instead of using a
	// SqlFunc constructor.
	Having(fragment string, values ...interface{}) Scope

	// GroupBy allows you to group by a table or column, this is necessary
	// for aggregation functions
	GroupBy(groupItem string) Scope

	// Limit sets the number of results to return from the full query
	Limit(limit int) Scope

	// Offset sets the number of results to skip over before returning results
	Offset(offset int) Scope

	// Order sets an ordering column for the query, direction is ASC unless specified
	Order(ordering string) Scope

	// OrderBy specifies both an ordering and a direction as separate parameters
	OrderBy(column, direction string) Scope

	// Reorder drops all previous order declarations and only orders by the
	// parameter passed
	Reorder(ordering string) Scope

	// Find searches for a record with the primary key of id, then places the
	// result in the val pointer
	Find(id, val interface{}) error

	// Retrieve returns the first result from the scope and places it into the
	// val pointer
	Retrieve(val interface{}) error

	// RetrieveAll returns all results from the Scope and puts them in the
	// array pointed at by the dest parameter
	RetrieveAll(dest interface{}) error

	// Count returns the count of results, using the primary key of the
	// originating mapper to count on.
	// Not distinct, need to add a CountSql function
	Count() (int64, error)

	// Pluck retrieves a single column using joins, limits, conditions from
	// the Scope and places the results into the array pointed at by values
	Pluck(column, values interface{}) error

	// Delete runs a DELETE FROM query using the conditions from the Scope
	Delete() error

	/* WARNING: None of the Update* functions update timestamps */

	// UpdateAttribute updates a single column using an UPDATE query
	UpdateAttribute(column string, val interface{}) error

	// UpdateAttributes updates multiple columns at once using an Attributes map
	UpdateAttributes(values Attributes) error

	// UpdateSql updates however you want off of a scope, go wild with SQL;
	// you can pass in some values to be substituted if you need them
	UpdateSql(sql string, vals ...interface{}) error

	// LeftJoin is a Left Outer Join to the Mapper or Scoped Mapper String
	LeftJoin(joins ...interface{}) Scope

	// InnerJoin will require both sides of the relationship to exist
	InnerJoin(joins ...interface{}) Scope

	// FullJoin will make a Full Outer Join query using the passed arguments
	FullJoin(joins ...interface{}) Scope

	// RightJoin will do a right join, if you need it
	RightJoin(joins ...interface{}) Scope

	// JoinSql will allow you to write straight SQL for the JOIN
	JoinSql(sql string, args ...interface{}) Scope

	// LeftInclude retrieves related records using a Left Outer Join
	LeftInclude(include ...interface{}) Scope

	// InnerInclude retrieves related records using an Inner Join
	InnerInclude(include ...interface{}) Scope

	// FullInclude retrieves related records using a Full Outer Join; records
	// that can't be joined are placed in nullRecords
	FullInclude(include interface{}, nullRecords interface{}) Scope

	// RightInclude retrieves related records using a Right Join; records
	// that can't be joined are placed in nullRecords
	RightInclude(include interface{}, nullRecords interface{}) Scope

	// IncludeSql, magic come true
	IncludeSql(il IncludeList, query string, args ...interface{}) Scope
}
The Identity Scope will always return a copy of the current scope, whether it is on a Mapper, Scope or Mapper+. Internally it is the canonical method to duplicate a Scope.
recent := Posts.Where(...).Joins(...).Order(...)
recent2 := recent.Identity()
// recent != recent2
// recent.RetrieveAll(...) == recent2.RetrieveAll(...)
The Cond Scope is the matching shorthand scope. It allows you to run common SQL matching conditions without having to write out the SQL like you would for the Where condition. Cond supports the matchers: Equal, Not Equal, Less Than, Less Than or Equal To, Greater Than, and Greater Than or Equal To.
// find all posts published since recentTime
recentScope := Posts.Cond("publish_date", db.GT, recentTime)

// find all users created before longTimeAgo
ancientScope := Users.Cond("created_at", db.LESS_OR_EQUAL, longTimeAgo)

// find all appointments happening today
todaysAppointments := Appointments.
	Cond("end_time", db.GTE, beginningOfDay).
	Cond("begin_time", db.LTE, endingOfDay)
The EqualTo scope is a short hand way of saying Cond(column, db.EQUAL_TO, value).
// find all admins
admins := Users.EqualTo("is_admin", true)

// find nicer users
niceUsers := Users.Joins(Bans).EqualTo("bans.id", nil)

// find non-permabanned users
meanUsers := Users.EqualTo("permabanned_on", nil)
The Between Scope is a way to specify a SQL BETWEEN without writing a Where fragment.
// find last week's posts
lastWeek := Posts.Between("publish_date", twoWeeksAgo, oneWeekAgo)

// find the day's appointments
appts := Appointments.Between("begin_time", beginningOfDay, endingOfDay)
The In Scope allows you to specify that a value should match 1 or more values in an array
// find appointment conflicts
conflicts := Appointments.Joins(Attendees).In("attendees.user_id", userIds)

// find powerful users
daPower := Users.In("user_type", []string{ADMIN, AUTHOR, BDFL, CTHULHU})
The Where Scope is a generalized way to write SQL Expressions. It can do simple matching like EqualTo, Cond, Between or In written in SQL. It will also handle binding variables within a SQL statement.
// find awesome users
Users.Where("kudos_count >= ?", coolnessQuotient)

// find users with names like "mike"
Users.Where("first_name LIKE ?", "mike")

// find appointments that will be missed
Attendees.Where(
	"user_ids IN (:students:) AND cancelled_on IS NULL AND begin_time BETWEEN :begin: AND :end:",
	map[string]interface{}{
		"students": userIds,
		"begin":    tripBeginning,
		"end":      tripEnding,
	},
)
The Limit Scope allows you to specify the maximum number of records returned in a RetrieveAll call.
// find the 10 newest users Users.Limit(10).Order("created_at DESC")
The Offset Scope specifies the number of records that will be skipped when returning records.
// find the users for the grid
Users.Limit(25).Offset((pageNum - 1) * 25)
The ordering scopes are three different Scopes. The simplest is Order, which takes a sql fragment or just a column name; if you do not pass a specific direction, the direction will be set to ascending. OrderBy allows you to specify the column and the direction as separate parameters. Reorder will wipe any previous orders and replace them with the passed ordering.
// order by age
Users.Order("birth_date")

// order by creation date
Users.OrderBy("created_at", "DESC")

// order only by beginning time
Appointments.Reorder("begin_time")
Joining and Including Related Tables ¶
There are two different ways of dealing with joining in db: joining for use in Scopes, and joining to return related data. When you just want to use the data for scoping conditions or for Pluck/custom selections, you can use the *Join functions (InnerJoin, LeftJoin, FullJoin, RightJoin).
If you need to retrieve the joined records and have them present in the data structures you pull back, you can use the *Include functions. LeftInclude and InnerInclude you can specify normally, but RightInclude and FullInclude are different in that you are expected to send along an array to hold the records that can't be joined. Read up on how Right and Full Outer Joins work if you don't understand why you have to send in the array.
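A sketch of the difference (the Users and Posts mappers and the orphanedPosts array are illustrative assumptions):

	// A left include fills each user's related posts into the
	// retrieved structs; no extra array is needed.
	Users.LeftInclude(Posts).RetrieveAll(&users)

	// A full outer join can also produce posts with no matching user,
	// so FullInclude asks for an array to hold those records.
	var orphanedPosts []*Post
	Users.FullInclude(Posts, &orphanedPosts).RetrieveAll(&users)
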
Specifying Joins or Includes without writing SQL is done in several ways. The simplest is to pass in the Mapper or MapperPlus for the table you wish to join or include. That works fine when the join is a simple unaliased join from one mapper to another. Passing in the mapper will work whether the relationship is 1-to-1 or 1-to-many.
But not every join is simple: perhaps you have multiple joins of a mapper in a single struct, some or all of them aliased. In that case you can pass in a string that describes the join. For instance, if an Appointments mapper had a Coordinator User as well as Users through an Attendee join Mapper, you could use the string "Coordinator" to indicate the Coordinator join, as opposed to the Attendee Users join that would happen if you just passed in the Users Mapper.
Simple Join Examples
// Join Comments to Users for a recentPost, then select some things
Comments.EqualTo("post_id", recentPost.Id).InnerJoin(User).Select(...)

// Join from Threads to Entries as well as the Original Post (first Entry)
Threads.LeftJoin(Entries).InnerJoin("OP")
Aliased Join Examples
// Join all supervised departments, then if there are any recent
// employees add them
Departments.InnerJoin("Supervisor").LeftJoin(Employees)

// Join students and the creator of an enrollment (an instance of a
// college class in a specific semester)
Enrollment.InnerJoin("Students").InnerJoin("Creator")
SQL Join Examples
// Custom polymorphic join
Meeting.JoinSql(`INNER JOIN calendared
	ON calendared.parent_id = meeting.id
	AND calendared.parent_type = 'Meeting'`)
type Scope ¶
type Scope interface {
	Queryable
	TableInformation
	ScopeInformation
}
A Scope is an auto-duplicating Query construction mechanism
type ScopeInformation ¶
type ScopeInformation interface {
	SelectorSql() string
	ConditionSql() (string, []interface{})
	JoinsSql() string
	EndingSql() (string, []interface{})
}
These functions are used by Dialects to construct queries. As Dialects could be written without explicit support from db, these functions (though we'll get around to adding more) need to be publicly accessible.
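As a sketch of how an external dialect might consume these pieces (the myDialect type is hypothetical, and the fromTable helper stands in for whatever TableInformation exposes, since those details aren't documented yet):

	// A hypothetical dialect assembling a SELECT from the
	// ScopeInformation parts; a real dialect would be more careful
	// about empty fragments and placeholder formatting.
	func (d *myDialect) Query(scope Scope) (string, []interface{}) {
		cond, condVals := scope.ConditionSql()
		ending, endVals := scope.EndingSql()

		sql := "SELECT " + scope.SelectorSql() +
			" FROM " + fromTable(scope) + // stand-in helper, not real API
			" " + scope.JoinsSql()
		if cond != "" {
			sql += " WHERE " + cond
		}
		sql += " " + ending

		return d.FormatQuery(sql), append(condVals, endVals...)
	}
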
type SqlBit ¶
SqlBit is a common interface shared by many internal SQL fragments like joins, SqlFunc's, etc.
type SqlFunc ¶
type SqlFunc interface {
	// The SQL code with ?'s standing in for parameters
	Fragment() string

	// The parameters to replace the ?'s in Fragment
	Values() []interface{}

	// A version of Fragment with the parameters inside that
	// is suitable for logging
	String() string
}
Interface for a SQL Function condition
type TableInformation ¶
Information used in query generation and Mapper interrogation. Much more information will be exposed at a later time