noms: github.com/attic-labs/noms/go/datas

package datas

import "github.com/attic-labs/noms/go/datas"

Package datas defines and implements the database layer used in Noms.

Index

Package Files

caching_chunk_haver.go commit.go commit_options.go database.go database_common.go database_server.go dataset.go factory.go http_batch_store.go local_batch_store.go local_database.go pull.go put_cache.go remote_database_client.go remote_database_handlers.go serialize_hints.go

Constants

const (
    ParentsField = "parents"
    ValueField   = "value"
    MetaField    = "meta"
)
const (
    // NomsVersionHeader is the name of the header that Noms clients and
    // servers must set in every request/response.
    NomsVersionHeader = "x-noms-vers"
)

Variables

var (
    ErrOptimisticLockFailed = errors.New("Optimistic lock failed on database Root update")
    ErrMergeNeeded          = errors.New("Dataset head is not ancestor of commit")
)
var (
    // HandleWriteValue is meant to handle HTTP POST requests to the
    // writeValue/ server endpoint. The payload should be an appropriately-
    // ordered sequence of Chunks to be validated and stored on the server.
    // TODO: Nice comment about what headers it expects/honors, payload
    // format, and error responses.
    HandleWriteValue = createHandler(handleWriteValue, true)

    // HandleGetRefs is meant to handle HTTP POST requests to the getRefs/
    // server endpoint. Given a sequence of Chunk hashes, the server will
    // fetch and return them.
    // TODO: Nice comment about what headers it
    // expects/honors, payload format, and responses.
    HandleGetRefs = createHandler(handleGetRefs, true)

    // HandleGetBlob is a custom endpoint whose sole purpose is to directly
    // fetch the *bytes* contained in a Blob value. It expects a single query
    // param of `h` to be the ref of the Blob.
    // TODO: Support retrieving blob contents via a path.
    HandleGetBlob = createHandler(handleGetBlob, false)

    // HandleHasRefs is meant to handle HTTP POST requests to the hasRefs/
    // server endpoint. Given a sequence of Chunk hashes, the server checks
    // for their presence and returns a list of true/false responses.
    // TODO: Nice comment about what headers it expects/honors, payload
    // format, and responses.
    HandleHasRefs = createHandler(handleHasRefs, true)

    // HandleRootGet is meant to handle HTTP GET requests to the root/ server
    // endpoint. The server returns the hash of the Root as a string.
    // TODO: Nice comment about what headers it expects/honors, payload
    // format, and responses.
    HandleRootGet = createHandler(handleRootGet, true)

    // HandleRootPost is meant to handle HTTP POST requests to the root/
    // server endpoint. This is used to update the Root to point to a new
    // Chunk.
    // TODO: Nice comment about what headers it expects/honors, payload
    // format, and error responses.
    HandleRootPost = createHandler(handleRootPost, true)

    // HandleBaseGet is meant to handle HTTP GET requests to the / server
    // endpoint. This is used to give a friendly message to users.
    // TODO: Nice comment about what headers it expects/honors, payload
    // format, and error responses.
    HandleBaseGet = handleBaseGet

    HandleGraphQL = createHandler(handleGraphQL, false)
)
var DatasetFullRe = regexp.MustCompile("^" + DatasetRe.String() + "$")

DatasetFullRe is a regexp that matches only a target string that is, in its entirety, a legal Dataset name.

var DatasetRe = regexp.MustCompile(`[a-zA-Z0-9\-_/]+`)

DatasetRe is a regexp that matches a legal Dataset name anywhere within the target string.

func FindCommonAncestor Uses

func FindCommonAncestor(c1, c2 types.Ref, vr types.ValueReader) (a types.Ref, ok bool)

FindCommonAncestor returns the most recent common ancestor of c1 and c2, if one exists, setting ok to true. If there is no common ancestor, ok is set to false.

func IsCommitType Uses

func IsCommitType(t *types.Type) bool

func IsRefOfCommitType Uses

func IsRefOfCommitType(t *types.Type) bool

func IsValidDatasetName Uses

func IsValidDatasetName(name string) bool

func NewCommit Uses

func NewCommit(value types.Value, parents types.Set, meta types.Struct) types.Struct

NewCommit creates a new commit object. The type of Commit is computed based on the type of the value, the type of the meta info as well as the type of the parents.

For the first commit we get:

```
struct Commit {
  meta: M,
  parents: Set<Ref<Cycle<0>>>,
  value: T,
}
```

As long as we continue to commit values with type T and meta of type M that type stays the same.

When we later do a commit with value of type U and meta of type N we get:

```
struct Commit {
  meta: N,
  parents: Set<Ref<struct Commit {
    meta: M | N,
    parents: Set<Ref<Cycle<0>>>,
    value: T | U
  }>>,
  value: U,
}
```

Similarly, if we do a commit with a different type for the meta info, the new type gets combined as a union type for the value/meta fields of the inner commit struct.

func NewHTTPBatchStore Uses

func NewHTTPBatchStore(baseURL, auth string) *httpBatchStore

func Pull Uses

func Pull(srcDB, sinkDB Database, sourceRef, sinkHeadRef types.Ref, concurrency int, progressCh chan PullProgress)

Pull objects that descend from sourceRef from srcDB to sinkDB. sinkHeadRef should point to a Commit (in sinkDB) that's an ancestor of sourceRef. This allows the algorithm to figure out which portions of data are already present in sinkDB and skip copying them.

func PullWithFlush Uses

func PullWithFlush(srcDB, sinkDB Database, sourceRef, sinkHeadRef types.Ref, concurrency int, progressCh chan PullProgress)

PullWithFlush calls Pull and then manually flushes data to sinkDB. This is an unfortunate current necessity. The Flush() can't happen at the end of regular Pull() because that breaks tests that try to ensure we're not reading more data from the sinkDB than expected. Flush() triggers validation, which triggers sinkDB reads, which means that the code can no longer tell which reads were caused by Pull() and which by Flush(). TODO: Get rid of this (BUG 2982)

type CommitOptions Uses

type CommitOptions struct {
    // Parents, if provided is the parent commits of the commit we are
    // creating.
    Parents types.Set

    // Meta is a Struct that describes arbitrary metadata about this Commit,
    // e.g. a timestamp or descriptive text.
    Meta types.Struct

    // Policy will be called to attempt to merge this Commit with the current
    // Head, if this is not a fast-forward. If Policy is nil, no merging will
    // be attempted. Note that because Commit() retries in some cases, Policy
    // might also be called multiple times with different values.
    Policy merge.Policy
}

CommitOptions is used to pass options into Commit.

type Database Uses

type Database interface {
    // To implement types.ValueWriter, Database implementations provide
    // WriteValue(). WriteValue() writes v to this Database, though v is not
    // guaranteed to be persistent until after a subsequent Commit(). The
    // return value is the Ref of v.
    types.ValueReadWriter
    io.Closer

    // Datasets returns the root of the database which is a
    // Map<String, Ref<Commit>> where string is a datasetID.
    Datasets() types.Map

    // GetDataset returns a Dataset struct containing the current mapping of
    // datasetID in the above Datasets Map.
    GetDataset(datasetID string) Dataset

    // Commit updates the Commit that ds.ID() in this database points at. All
    // Values that have been written to this Database are guaranteed to be
    // persistent after Commit() returns.
    // The new Commit struct is constructed using v, opts.Parents, and
    // opts.Meta. If opts.Parents is the zero value (types.Set{}) then
    // the current head is used. If opts.Meta is the zero value
    // (types.Struct{}) then a fully initialized empty Struct is passed to
    // NewCommit.
    // The returned Dataset is always the newest snapshot, regardless of
    // success or failure, and Datasets() is updated to match backing storage
    // upon return as well. If the update cannot be performed, e.g., because
    // of a conflict, Commit returns an 'ErrMergeNeeded' error.
    Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

    // CommitValue updates the Commit that ds.ID() in this database points at.
    // All Values that have been written to this Database are guaranteed to be
    // persistent after Commit().
    // The new Commit struct is constructed using `v`, and the current Head of
    // `ds` as the lone Parent.
    // The returned Dataset is always the newest snapshot, regardless of
    // success or failure, and Datasets() is updated to match backing storage
    // upon return as well. If the update cannot be performed, e.g., because
    // of a conflict, Commit returns an 'ErrMergeNeeded' error.
    CommitValue(ds Dataset, v types.Value) (Dataset, error)

    // Delete removes the Dataset named ds.ID() from the map at the root of
    // the Database. The Dataset data is not necessarily cleaned up at this
    // time, but may be garbage collected in the future.
    // The returned Dataset is always the newest snapshot, regardless of
    // success or failure, and Datasets() is updated to match backing storage
    // upon return as well. If the update cannot be performed, e.g., because
    // of a conflict, Delete returns an 'ErrMergeNeeded' error.
    Delete(ds Dataset) (Dataset, error)

    // SetHead ignores any lineage constraints (e.g. the current Head being in
    // commit’s Parent set) and force-sets a mapping from datasetID: commit in
    // this database.
    // All Values that have been written to this Database are guaranteed to be
    // persistent after SetHead(). If the update cannot be performed, e.g.,
    // because another process moved the current Head out from under you,
    // error will be non-nil.
    // The newest snapshot of the Dataset is always returned, so the caller
    // can easily retry using the latest.
    // Regardless, Datasets() is updated to match backing storage upon return.
    SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

    // FastForward takes a types.Ref to a Commit object and makes it the new
    // Head of ds iff it is a descendant of the current Head. Intended to be
    // used e.g. after a call to Pull(). If the update cannot be performed,
    // e.g., because another process moved the current Head out from under
    // you, err will be non-nil.
    // The newest snapshot of the Dataset is always returned, so the caller
    // can easily retry using the latest.
    // Regardless, Datasets() is updated to match backing storage upon return.
    FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)
    // contains filtered or unexported methods
}

Database provides versioned storage for noms values. While Values can be directly read and written from a Database, it is generally more appropriate to read data by inspecting the Head of a Dataset and write new data by updating the Head of a Dataset via Commit() or similar. Particularly, new data is not guaranteed to be persistent until after a Commit (Delete, SetHead, or FastForward) operation completes. The Database API is stateful, meaning that calls to GetDataset() or Datasets() occurring after a call to Commit() (et al) will represent the result of the Commit().

func NewDatabase Uses

func NewDatabase(cs chunks.ChunkStore) Database

type Dataset Uses

type Dataset struct {
    // contains filtered or unexported fields
}

Dataset is a named Commit within a Database.

func (Dataset) Database Uses

func (ds Dataset) Database() Database

Database returns the Database object in which this Dataset is stored. WARNING: This method is under consideration for deprecation.

func (Dataset) HasHead Uses

func (ds Dataset) HasHead() bool

HasHead() returns 'true' if this dataset has a Head Commit, false otherwise.

func (Dataset) Head Uses

func (ds Dataset) Head() types.Struct

Head returns the current head Commit, which contains the current root of the Dataset's value tree.

func (Dataset) HeadRef Uses

func (ds Dataset) HeadRef() types.Ref

HeadRef returns the Ref of the current head Commit, which contains the current root of the Dataset's value tree.

func (Dataset) HeadValue Uses

func (ds Dataset) HeadValue() types.Value

HeadValue returns the Value field of the current head Commit.

func (Dataset) ID Uses

func (ds Dataset) ID() string

ID returns the name of this Dataset.

func (Dataset) MaybeHead Uses

func (ds Dataset) MaybeHead() (types.Struct, bool)

MaybeHead returns the current Head Commit of this Dataset, which contains the current root of the Dataset's value tree, if available. If not, it returns a new Commit and 'false'.

func (Dataset) MaybeHeadRef Uses

func (ds Dataset) MaybeHeadRef() (types.Ref, bool)

MaybeHeadRef returns the Ref of the current Head Commit of this Dataset, which contains the current root of the Dataset's value tree, if available. If not, it returns an empty Ref and 'false'.

func (Dataset) MaybeHeadValue Uses

func (ds Dataset) MaybeHeadValue() (types.Value, bool)

MaybeHeadValue returns the Value field of the current head Commit, if available. If not it returns nil and 'false'.

type Factory Uses

type Factory interface {
    Create(string) (Database, bool)

    // Shutter shuts down the factory. Subsequent calls to Create() will fail.
    Shutter()
}

Factory allows the creation of namespaced Database instances. The details of how namespaces are separated is left up to the particular implementation of Factory and Database.

func NewRemoteStoreFactory Uses

func NewRemoteStoreFactory(host, auth string) Factory

type Handler Uses

type Handler func(w http.ResponseWriter, req *http.Request, ps URLParams, cs chunks.ChunkStore)

type LocalDatabase Uses

type LocalDatabase struct {
    // contains filtered or unexported fields
}

LocalDatabase is an implementation of Database backed by local storage; see the Database documentation above for the semantics of its methods.

func (*LocalDatabase) Close Uses

func (ldb *LocalDatabase) Close() error

func (*LocalDatabase) Commit Uses

func (ldb *LocalDatabase) Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

func (*LocalDatabase) CommitValue Uses

func (ldb *LocalDatabase) CommitValue(ds Dataset, v types.Value) (Dataset, error)

func (*LocalDatabase) Datasets Uses

func (dbc *LocalDatabase) Datasets() types.Map

func (*LocalDatabase) Delete Uses

func (ldb *LocalDatabase) Delete(ds Dataset) (Dataset, error)

func (*LocalDatabase) FastForward Uses

func (ldb *LocalDatabase) FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)

func (*LocalDatabase) GetDataset Uses

func (ldb *LocalDatabase) GetDataset(datasetID string) Dataset

func (*LocalDatabase) SetHead Uses

func (ldb *LocalDatabase) SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

type LocalFactory Uses

type LocalFactory struct {
    // contains filtered or unexported fields
}

func NewLocalFactory Uses

func NewLocalFactory(cf chunks.Factory) *LocalFactory

func (*LocalFactory) Create Uses

func (lf *LocalFactory) Create(ns string) (Database, bool)

func (*LocalFactory) Shutter Uses

func (lf *LocalFactory) Shutter()

type PullProgress Uses

type PullProgress struct {
    DoneCount, KnownCount, ApproxWrittenBytes uint64
}

type RemoteDatabaseClient Uses

type RemoteDatabaseClient struct {
    // contains filtered or unexported fields
}

RemoteDatabaseClient is an implementation of Database that communicates with a remote Noms HTTP server; see the Database documentation above for the semantics of its methods.

func NewRemoteDatabase Uses

func NewRemoteDatabase(baseURL, auth string) *RemoteDatabaseClient

func (*RemoteDatabaseClient) Close Uses

func (dbc *RemoteDatabaseClient) Close() error

func (*RemoteDatabaseClient) Commit Uses

func (rdb *RemoteDatabaseClient) Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

func (*RemoteDatabaseClient) CommitValue Uses

func (rdb *RemoteDatabaseClient) CommitValue(ds Dataset, v types.Value) (Dataset, error)

func (*RemoteDatabaseClient) Datasets Uses

func (dbc *RemoteDatabaseClient) Datasets() types.Map

func (*RemoteDatabaseClient) Delete Uses

func (rdb *RemoteDatabaseClient) Delete(ds Dataset) (Dataset, error)

func (*RemoteDatabaseClient) FastForward Uses

func (rdb *RemoteDatabaseClient) FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)

func (*RemoteDatabaseClient) GetDataset Uses

func (rdb *RemoteDatabaseClient) GetDataset(datasetID string) Dataset

func (*RemoteDatabaseClient) SetHead Uses

func (rdb *RemoteDatabaseClient) SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

type RemoteDatabaseServer Uses

type RemoteDatabaseServer struct {

    // Called just before the server is started.
    Ready func()
    // contains filtered or unexported fields
}

func NewRemoteDatabaseServer Uses

func NewRemoteDatabaseServer(cs chunks.ChunkStore, port int) *RemoteDatabaseServer

func (*RemoteDatabaseServer) Port Uses

func (s *RemoteDatabaseServer) Port() int

Port is the actual port used. This may be different than the port passed in to NewRemoteDatabaseServer.

func (*RemoteDatabaseServer) Run Uses

func (s *RemoteDatabaseServer) Run()

Run blocks while the RemoteDatabaseServer is listening. Running it on a separate goroutine is supported.

func (*RemoteDatabaseServer) Stop Uses

func (s *RemoteDatabaseServer) Stop()

Stop causes the RemoteDatabaseServer to stop listening, allowing an existing call to Run() to return.

type RemoteStoreFactory Uses

type RemoteStoreFactory struct {
    // contains filtered or unexported fields
}

func (RemoteStoreFactory) Create Uses

func (f RemoteStoreFactory) Create(ns string) (Database, bool)

func (RemoteStoreFactory) CreateStore Uses

func (f RemoteStoreFactory) CreateStore(ns string) Database

func (RemoteStoreFactory) Shutter Uses

func (f RemoteStoreFactory) Shutter()

type URLParams Uses

type URLParams interface {
    ByName(string) string
}

Package datas imports 35 packages and is imported by 8 packages. Updated 2017-03-19.