gcsx

package
v0.36.5

Published: Apr 27, 2021 License: Apache-2.0 Imports: 27 Imported by: 0

Documentation

Constants

const MB = 1 << 20

MB is 1 Megabyte. (Silly comment to make the lint warning go away)

const MtimeMetadataKey = "gcsfuse_mtime"

Objects created by Syncer.SyncObject contain a metadata field with this key, holding a UTC mtime in the format defined by time.RFC3339Nano.

Variables

This section is empty.

Functions

func NewContentTypeBucket added in v0.17.0

func NewContentTypeBucket(b gcs.Bucket) gcs.Bucket

NewContentTypeBucket creates a wrapper bucket that guesses MIME types for newly created or composed objects when an explicit type is not already set.

func NewMonitoringBucket added in v0.36.0

func NewMonitoringBucket(b gcs.Bucket) gcs.Bucket

NewMonitoringBucket returns a gcs.Bucket that exports metrics for monitoring.

func NewPrefixBucket added in v0.14.0

func NewPrefixBucket(
	prefix string,
	wrapped gcs.Bucket) (b gcs.Bucket, err error)

NewPrefixBucket creates a view on the wrapped bucket that pretends as if only the objects whose names contain the supplied string as a strict prefix exist, and that strips the prefix from the names of those objects before exposing them.

In order to preserve the invariant that object names are valid UTF-8, prefix must be valid UTF-8.

Types

type BucketConfig added in v0.31.0

type BucketConfig struct {
	BillingProject                     string
	OnlyDir                            string
	EgressBandwidthLimitBytesPerSecond float64
	OpRateLimitHz                      float64
	StatCacheCapacity                  int
	StatCacheTTL                       time.Duration
	EnableMonitoring                   bool

	// Files backed by an object of length at least AppendThreshold that have
	// only been appended to (i.e. none of the object's contents have been
	// dirtied) will be written out by "appending" to the object in GCS with this
	// process:
	//
	// 1. Write out a temporary object containing the appended contents whose
	//    name begins with TmpObjectPrefix.
	//
	// 2. Compose the original object and the temporary object on top of the
	//    original object.
	//
	// 3. Delete the temporary object.
	//
	// Note that if the process fails or is interrupted the temporary object will
	// not be cleaned up, so the user must ensure that TmpObjectPrefix is
	// periodically garbage collected.
	AppendThreshold int64
	TmpObjectPrefix string
}
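The three-step append flow described in the comments above can be simulated with an in-memory stand-in for a bucket. `fakeBucket` and `appendViaCompose` are illustrative names, not part of this package; real temporary object names would be made unique, and a real compose is a server-side GCS operation.

```go
package main

import "fmt"

// fakeBucket is an in-memory stand-in for a GCS bucket, used only to
// illustrate the append-by-compose flow described for AppendThreshold.
type fakeBucket map[string][]byte

// appendViaCompose follows the three documented steps:
//  1. write a temporary object holding the appended bytes,
//  2. compose original + temporary on top of the original object,
//  3. delete the temporary object.
func appendViaCompose(b fakeBucket, name, tmpObjectPrefix string, appended []byte) {
	tmpName := tmpObjectPrefix + "0001" // real names would be unique

	b[tmpName] = appended // step 1

	composed := append([]byte{}, b[name]...) // step 2
	composed = append(composed, b[tmpName]...)
	b[name] = composed

	delete(b, tmpName) // step 3
}

func main() {
	b := fakeBucket{"log": []byte("hello ")}
	appendViaCompose(b, "log", ".gcsfuse_tmp/", []byte("world"))
	fmt.Println(string(b["log"]))
}
```

Step 3 is the part that can be skipped if the process dies, which is why the documentation asks the user to garbage collect TmpObjectPrefix.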

type BucketManager added in v0.31.0

type BucketManager interface {
	// Sets up a gcs bucket by its name
	SetUpBucket(
		ctx context.Context,
		name string) (b SyncerBucket, err error)

	// Lists the names of all the buckets in the project.
	ListBuckets(ctx context.Context) (names []string, err error)

	// Shuts down the bucket manager and its buckets
	ShutDown()
}

BucketManager manages the lifecycle of buckets.

func NewBucketManager added in v0.31.0

func NewBucketManager(config BucketConfig, conn *Connection) BucketManager

type Connection added in v0.36.0

type Connection struct {
	// contains filtered or unexported fields
}

func NewConnection added in v0.36.0

func NewConnection(cfg *gcs.ConnConfig) (c *Connection, err error)

func (*Connection) ListBuckets added in v0.36.0

func (c *Connection) ListBuckets(
	ctx context.Context,
	projectId string) (names []string, err error)

func (*Connection) OpenBucket added in v0.36.0

func (c *Connection) OpenBucket(
	ctx context.Context,
	options *gcs.OpenBucketOptions) (b gcs.Bucket, err error)

type RandomReader

type RandomReader interface {
	// Panic if any internal invariants are violated.
	CheckInvariants()

	// Matches the semantics of io.ReaderAt, with the addition of context
	// support.
	ReadAt(ctx context.Context, p []byte, offset int64) (n int, err error)

	// Return the record for the object to which the reader is bound.
	Object() (o *gcs.Object)

	// Clean up any resources associated with the reader, which must not be used
	// again.
	Destroy()
}

RandomReader is an object that knows how to read ranges within a particular generation of a particular GCS object. Optimised for (large) sequential reads.

Not safe for concurrent access.

func NewRandomReader

func NewRandomReader(
	o *gcs.Object,
	bucket gcs.Bucket) (rr RandomReader, err error)

NewRandomReader creates a random reader for the supplied object record that reads using the given bucket.

type StatResult

type StatResult struct {
	// The current size in bytes of the content.
	Size int64

	// The largest value T such that we are sure that the range of bytes [0, T)
	// is unmodified from the original content with which the temp file was
	// created.
	DirtyThreshold int64

	// The mtime of the temp file is updated according to the temp file's clock
	// with each call to a method that modifies its content, and is also updated
	// when the user explicitly calls SetMtime.
	//
	// If neither of those things has ever happened, it is nil. This implies that
	// DirtyThreshold == Size.
	Mtime *time.Time
}

StatResult stores the result of a stat operation.

type Syncer

type Syncer interface {
	// Given an object record and content that was originally derived from that
	// object's contents (and potentially modified):
	//
	// *   If the temp file has not been modified, return a nil new object.
	//
	// *   Otherwise, write out a new generation in the bucket (failing with
	//     *gcs.PreconditionError if the source generation is no longer current).
	//
	// In the second case, the TempFile is destroyed. Otherwise, including when
	// this function fails, it is guaranteed to still be valid.
	SyncObject(
		ctx context.Context,
		srcObject *gcs.Object,
		content TempFile) (o *gcs.Object, err error)
}

Syncer is safe for concurrent access.

func NewSyncer

func NewSyncer(
	appendThreshold int64,
	tmpObjectPrefix string,
	bucket gcs.Bucket) (os Syncer)

NewSyncer creates a syncer that syncs into the supplied bucket.

When the source object has been changed only by appending, and the source object's size is at least appendThreshold, we will "append" to it by writing out a temporary blob and composing it with the source object.

Temporary blobs have names beginning with tmpObjectPrefix. We make an effort to delete them, but if we are interrupted for some reason we may not be able to do so. Therefore the user should arrange for garbage collection.

type SyncerBucket added in v0.31.0

type SyncerBucket struct {
	gcs.Bucket
	Syncer
}

func NewSyncerBucket added in v0.31.0

func NewSyncerBucket(
	appendThreshold int64,
	tmpObjectPrefix string,
	bucket gcs.Bucket,
) SyncerBucket

NewSyncerBucket creates a SyncerBucket, which can be used either as a gcs.Bucket, or as a Syncer.

type TempFile

type TempFile interface {
	// Panic if any internal invariants are violated.
	CheckInvariants()

	// Semantics matching os.File.
	io.ReadSeeker
	io.ReaderAt
	io.WriterAt
	Truncate(n int64) (err error)

	// Return information about the current state of the content. May invalidate
	// the seek position.
	Stat() (sr StatResult, err error)

	// Explicitly set the mtime that will be returned in stat results. This will
	// until another method that modifies the file is called.
	SetMtime(mtime time.Time)

	// Throw away the resources used by the temporary file. The object must not
	// be used again.
	Destroy()
}

TempFile is a temporary file that keeps track of the lowest offset at which it has been modified.

Not safe for concurrent access.

func NewTempFile

func NewTempFile(
	source io.ReadCloser,
	dir string,
	clock timeutil.Clock) (tf TempFile, err error)

NewTempFile creates a temp file whose initial contents are given by the supplied reader. dir is a directory on whose file system the inode will live, or the system default temporary location if empty.
