Documentation ¶
Index ¶
- Variables
- func WebInterface(ds DS, opts ...webInterfaceOption) http.Handler
- func WebInterfaceOptionLogger(log *slog.Logger) webInterfaceOption
- func WebOptionCustomizeRequest(f func(*http.Request) error) webConnectorOption
- func WebOptionHttpClient(client *http.Client) webConnectorOption
- type DS
- func FromLocation(location string) (DS, error)
- func FromWeb(baseURL string, options ...webConnectorOption) (DS, error)
- func InFileSystem(path string) (DS, error)
- func InMemory() DS
- func InRawFileSystem(path string) (DS, error)
- func NewMultiSource(main DS, refreshTime time.Duration, additional ...DS) DS
- type WriteCloseCanceller
Constants ¶
This section is empty.
Variables ¶
var (
ErrInvalidMemoryLocation = fmt.Errorf("memory datastore must not use any parameters, use only `%s`", memoryPrefix)
)
var (
	// ErrNotFound will be used when blob with given name was not found in datastore
	ErrNotFound = errors.New("not found")
)
var (
ErrUploadInProgress = errors.New("another upload is already in progress")
)
var (
ErrWebConnectionError = errors.New("connection error")
)
Functions ¶
func WebInterface ¶
WebInterface returns an http.Handler representing a web interface to the given Datastore instance.
func WebInterfaceOptionLogger ¶ added in v0.0.6
func WebOptionHttpClient ¶
Types ¶
type DS ¶
type DS interface {
	// Kind returns string representation of datastore kind (i.e. "Memory")
	Kind() string

	// Address returns string representing datastore address
	Address() string

	// Open returns a read stream for given blob name or an error. In case blob
	// is not found in datastore, returned error must be of ErrNotFound type.
	//
	// The blob may be detected to be invalid (not passing the validation),
	// in that case, either the Open call or the Read method of the returned
	// reader will return ErrInvalidData error.
	//
	// If a non-nil error is returned, the reader will be nil. Otherwise it
	// is necessary to call `Close` on the returned reader once done
	// with the reader.
	Open(ctx context.Context, name *common.BlobName) (io.ReadCloser, error)

	// Update retrieves an update for given blob. The data is read from the given
	// reader until it returns either EOF, ending a successful save, or any other
	// error which will cancel the save - in such case this error will be
	// returned from this function. If the data does not pass validation,
	// ErrInvalidData will be returned.
	Update(ctx context.Context, name *common.BlobName, r io.Reader) error

	// Exists checks whether a blob of the given name exists in the datastore.
	// Partially written blobs are equal to non-existing ones. The boolean value
	// returned indicates whether the blob exists, a non-nil error indicates
	// that there was an error while trying to check the blob's existence.
	Exists(ctx context.Context, name *common.BlobName) (bool, error)

	// Delete tries to remove the blob with the given name from the datastore.
	// If the blob does not exist (which includes partially written blobs),
	// ErrNotFound will be returned. If the blob is open at the moment
	// of removal, all opened references to it must still be able to
	// read the blob data. After the `Delete` call succeeds, trying to read
	// the blob with `Open` should end up with an ErrNotFound error
	// until the blob is updated again with a successful `Update` call.
	Delete(ctx context.Context, name *common.BlobName) error
}
DS interface contains the public interface of any conformant datastore
Stored data is split into small chunks called blobs. Each blob has its unique name. The name is used to perform cryptographic verification of the blob data (e.g. it must match the digest of data for static blobs).
The blob content (in case of blobs other than the static blob) can be updated over time. The rule of forward progress states that if there are two or more valid datasets for a single blob, updating that blob with those datasets must be deterministic and always converge to a single final dataset. The merge result may be the content of one of the source datasets, deterministically selected, or a combined dataset containing information merged from all of them.
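The forward-progress rule can be illustrated with a toy merge function (purely hypothetical, not part of this package): it deterministically selects the lexicographically greatest dataset, so merging the same set of datasets in any order converges to the same final result.

```go
package main

import "fmt"

// mergeDatasets is a toy deterministic merge: it selects the lexicographically
// greatest dataset. Because the choice depends only on the set of inputs and
// not on their order, repeated merges always converge to one final dataset.
func mergeDatasets(datasets ...string) string {
	merged := ""
	for _, d := range datasets {
		if d > merged {
			merged = d
		}
	}
	return merged
}

func main() {
	a := mergeDatasets("rev-1", "rev-3", "rev-2")
	b := mergeDatasets("rev-2", "rev-1", "rev-3") // different order, same set
	fmt.Println(a == b)                           // true: both select "rev-3"
}
```

A real datastore would merge structured blob data rather than compare strings, but the key property is the same: the result is a pure function of the set of inputs.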
On the interface level, there is no distinction between blob types and their internal data. Working with this interface allows treating datasets as completely opaque byte streams, which simplifies transferring data through various mechanisms independently of blob types.
func FromLocation ¶
FromLocation creates new instance of the datastore from location string.
The string may be of the following form:
- file://<path> - create datastore using local filesystem's path (optimized) as the storage, see InFileSystem for more details
- file-raw://<path> - create datastore using local filesystem's path (simplified) as the storage, see InRawFileSystem for more details
- http://<address> or https://<address> - connects to datastore exposed through a http protocol, see FromWeb for more details
- memory:// - creates a local in-process datastore without persistent storage
- <path> - equivalent to file://<path>
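The dispatch rules above can be sketched as follows. Note that this is only an illustration of how the location prefixes map to datastore kinds; the real FromLocation returns a ready DS instance (or an error), not a label, and the helper name here is invented for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// kindFromLocation is a hypothetical sketch of the prefix dispatch that
// FromLocation performs on its location string.
func kindFromLocation(location string) string {
	switch {
	case location == "memory://":
		return "memory"
	case strings.HasPrefix(location, "file-raw://"):
		return "raw filesystem"
	case strings.HasPrefix(location, "file://"):
		return "filesystem"
	case strings.HasPrefix(location, "http://"),
		strings.HasPrefix(location, "https://"):
		return "web"
	default:
		// A bare <path> is equivalent to file://<path>.
		return "filesystem"
	}
}

func main() {
	for _, loc := range []string{
		"memory://", "file:///var/data", "/var/data", "https://example.com/ds",
	} {
		fmt.Printf("%s -> %s\n", loc, kindFromLocation(loc))
	}
}
```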
func InFileSystem ¶
InFileSystem constructs a datastore using filesystem as a storage layer.
Contrary to InRawFileSystem, this datastore is optimized for large datastores and concurrent use.
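One common way filesystem datastores stay efficient at scale is to shard blobs into nested subdirectories keyed by a prefix of the encoded name, keeping any single directory small. The sketch below shows that general technique only; the actual on-disk layout used by InFileSystem may differ.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// shardedPath maps an encoded blob name to a nested path, e.g.
// "2Drjgb5CcD6abcdef" -> "<root>/2D/rj/2Drjgb5CcD6abcdef".
// This is a generic content-addressed layout, not necessarily the
// one this package uses.
func shardedPath(root, encodedName string) string {
	return filepath.Join(root, encodedName[:2], encodedName[2:4], encodedName)
}

func main() {
	fmt.Println(shardedPath("/var/datastore", "2Drjgb5CcD6abcdef"))
}
```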
func InMemory ¶
func InMemory() DS
InMemory constructs an in-memory datastore
The content is lost if the datastore is destroyed (either by garbage collection or by program termination)
func InRawFileSystem ¶
InRawFileSystem constructs a simplified datastore that uses the filesystem as a storage layer.

Datastore files are stored directly under base58-encoded blob names. This datastore should not be used in highly concurrent scenarios or for frequently modified blobs. Its main purpose is to dump files to disk in a form that can later be served by a classic web server as a static web source.
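To give a feel for the resulting file names, here is a minimal base58 encoder using the common Bitcoin alphabet. It only illustrates the flavor of encoding involved; the package's own encoding details (alphabet, leading-zero handling) may differ.

```go
package main

import (
	"fmt"
	"math/big"
)

// base58Alphabet is the widely used Bitcoin base58 alphabet
// (no 0, O, I, or l, to avoid visually ambiguous characters).
const base58Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// base58Encode converts raw bytes to a base58 string by repeated
// division, preserving leading zero bytes as '1' characters.
func base58Encode(data []byte) string {
	n := new(big.Int).SetBytes(data)
	radix := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for n.Sign() > 0 {
		n.DivMod(n, radix, mod)
		out = append([]byte{base58Alphabet[mod.Int64()]}, out...)
	}
	for _, b := range data {
		if b != 0 {
			break
		}
		out = append([]byte{'1'}, out...)
	}
	return string(out)
}

func main() {
	// A blob whose raw name bytes were "hello" would land in a file
	// named after this encoding.
	fmt.Println(base58Encode([]byte("hello"))) // Cn8eVZg
}
```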
type WriteCloseCanceller ¶
type WriteCloseCanceller interface {
	io.WriteCloser
	Cancel()
}