fs

package module
v0.0.0-...-8e84a60
Published: Sep 4, 2022 License: MIT Imports: 5 Imported by: 5

README

[gopherfs logo]

A set of io/fs filesystem abstractions and utilities for Go

GoDoc

Please ⭐ this project

Overview

This package provides io/fs interfaces for:

  • Cloud providers
  • Memory storage
  • Wrappers for the "os" package
  • Utilities for merging io/fs filesystems
  • A caching system with support for:
    • Redis
    • GroupCache
    • Disk cache

If you are looking to use a single group of interfaces to access any type of filesystem, look no further. This package extends Go 1.16's io/fs package with new interfaces that allow for writable filesystems.

With this standard set of interfaces, we have expanded the reach of the standard library to cover several common filesystem types. In addition, we provide a caching system that allows a cascade of cache fills to handle your file caching needs.

Below we break down the packages; documentation for each can be found in the GoDoc or in the READMEs of the individual packages.

Packages breakdown

└── fs
    ├── io
    │   ├── cache
    │   │   ├── disk
    │   │   ├── groupcache
    │   │   │   └── peerpicker
    │   │   └── redis
    │   ├── cloud
    │   │   └── azure
    │   │       └── blob
    │   │           ├── auth
    │   │           └── blob.go
    │   ├── mem
    │   │   └── simple
    │   └── os
  • fs: Additional interfaces to allow writable filesystems and filesystem utility functions
  • fs/io/cache: Additional interfaces and helpers for our cache system
    • disk: A disk based cache filesystem
    • groupcache: A groupcache based filesystem
      • peerpicker: A multicast based peerpicker for groupcache (does not work in the cloud)
    • redis: A Redis based filesystem
  • fs/io/cloud: A collection of cloud provider filesystems
    • azure: A collection of Microsoft Azure filesystems
      • blob: A filesystem implementation based on Azure's Blob storage
  • fs/io/mem: A collection of local memory based filesystems
    • simple: A memory filesystem that requires ASCII-based file paths and supports RO (read-only) locking and Pearson hashing
  • fs/io/os: A filesystem wrapper based around the "os" package
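The Pearson hashing mentioned for the simple memory FS is a small, table-driven 8-bit hash. A minimal self-contained sketch of the algorithm (the permutation table below is generated purely for illustration; the actual package chooses its own table):

```go
package main

import "fmt"

// pearsonTable is a permutation of 0..255. A real implementation would use
// a fixed, well-shuffled table; this one is generated from a simple affine
// step for illustration only (167 is odd, so the map is a bijection mod 256).
var pearsonTable = func() [256]byte {
	var t [256]byte
	for i := range t {
		t[i] = byte((i*167 + 13) % 256)
	}
	return t
}()

// pearson computes an 8-bit Pearson hash of s: start at 0 and repeatedly
// index the permutation table with (hash XOR next byte).
func pearson(s string) byte {
	var h byte
	for i := 0; i < len(s); i++ {
		h = pearsonTable[h^s[i]]
	}
	return h
}

func main() {
	fmt.Println(pearson("/js/app.js"))
	fmt.Println(pearson("/js/app.js") == pearson("/js/app.js")) // true: deterministic
}
```

Because the hash is a single byte per input byte with no multiplication, it is cheap for short ASCII paths, which fits the simple FS's ASCII-only path requirement.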

Examples

The most complete examples will be in the GoDoc for individual packages. But here are some excerpts for a few use cases.

Optimize embed.FS when not in debug mode

Choices

embed.FS is great. But what if you want to have readable JS for debug and compact code in production? What if you'd also like to take several embed.FS and merge them into a single tree?

Merge() and our simple memory storage to the rescue:

optimized := simple.New(simple.WithPearson())

err := Merge(
	optimized, 
	somePkg.Embedded,
	"/js/", // Puts the content of the embed fs into a sub-directory
	WithTransform(
		func(name string, content []byte) ([]byte, error){
			// If we are in debug mode, we want unoptimized Javascript
			if debug {
				return content, nil
			}
			switch path.Ext(name) {
			case ".js": // path.Ext includes the leading dot
				return optimizeJS(content)
			case ".go":
				return nil, nil
			}
			return content, nil
		},
	),
)
if err != nil {
	// Do something
}
optimized.RO() // Locks this filesystem for readonly
Access Redis as a filesystem

Just Cause

One of the more popular caching systems around is Redis. Redis of course has many options around it, but a common use case is simply as a filesystem. If that is your use case, you can access Redis using our fs/io/cache/redis implementation.

Here we simply create a connection to our local Redis cache, set a 5-minute expiration time on all files, and then write a file.

redisFS, err := redis.New(
	redis.Args{Addr: "127.0.0.1:6379"},
	// This causes all files to expire in 5 minutes.
	// You can write a complex ruleset to handle different files at
	// different rates.
	redis.WithWriteFileOFOptions(
		regexp.MustCompile(`.*`),
		redis.ExpireFiles(5 * time.Minute),
	),
)
if err != nil {
	// Do something
}

if err := redisFS.WriteFile("gopher.jpg", gopherBytes, 0644); err != nil {
	// Do something
}
Build a Cascading Cache

Need for speed

Here we are going to build a cascading cache system. The goal is to have multiple layers of cache to look at before finally going to the source. This code will:

  • Pull from a groupcache first
  • Try a disk cache second
  • Pull from Azure's Blob Storage as the final resort

Note: This example uses a peerpicker for groupcache that will not work on major cloud providers, as they block broadcast and local multicast packets. You would need to write your own peerpicker for your cloud vendor.

// This sets up a filesystem accessing Azure's Blob storage. This is where
// our permanent location for files will be located.
blobStore, err := blob.NewFS("account", "container", *cred)
if err != nil {
	// Do something
}

// A new peerpicker that broadcasts on port 7586 to find new peers.
picker, err := peerpicker.New(7586)
if err != nil {
	// Do something
}

// A groupcache that our app uses to find cached entries.
gc, err := groupcache.New(picker)
if err != nil {
	// Do something
}

// A disk cache for when the groupcache doesn't have the data.
diskFS, err := disk.New(
	"", 
	disk.WithExpireCheck(1 * time.Minute), 
	disk.WithExpireFiles(30 * time.Minute),
)
if err != nil {
	// Do something
}

// Creates our diskCache that looks at our disk for a file and if it
// cannot find it, pulls from blob storage.
diskCache, err := cache.New(diskFS, blobStore)
if err != nil {
	// Do something
}

// Creates our cascader that will search the groupcache first, then
// search the disk cache and finally will pull from Azure Blob storage.
cascader, err := cache.New(gc, diskCache)
if err != nil {
	// Do something
}

// This reads a file. Since this is our first read of this file, it will
// come from Azure Blob storage and back fill our caches.
b, err := cascader.ReadFile("/path/to/file")
if err != nil {
	// Do something
}
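The cascade-fill behavior can be sketched independently of the library: try each layer in order, and back-fill the faster layers that missed so later reads hit earlier in the chain. The layer interface and memLayer type below are local stand-ins, not the cache package's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// layer is a minimal read/write cache layer for illustration.
type layer interface {
	ReadFile(name string) ([]byte, error)
	WriteFile(name string, data []byte) error
}

// memLayer is a trivial in-memory layer.
type memLayer struct{ m map[string][]byte }

func newMemLayer() *memLayer { return &memLayer{m: map[string][]byte{}} }

func (l *memLayer) ReadFile(name string) ([]byte, error) {
	b, ok := l.m[name]
	if !ok {
		return nil, errors.New("not found")
	}
	return b, nil
}

func (l *memLayer) WriteFile(name string, data []byte) error {
	l.m[name] = data
	return nil
}

// readThrough reads name from the first layer that has it and back-fills
// every faster layer that missed.
func readThrough(layers []layer, name string) ([]byte, error) {
	for i, l := range layers {
		b, err := l.ReadFile(name)
		if err != nil {
			continue
		}
		for j := 0; j < i; j++ { // back-fill the layers that missed
			layers[j].WriteFile(name, b)
		}
		return b, nil
	}
	return nil, errors.New("not found in any layer")
}

func main() {
	fast, slow := newMemLayer(), newMemLayer()
	slow.WriteFile("gopher.jpg", []byte("bytes"))

	b, _ := readThrough([]layer{fast, slow}, "gopher.jpg")
	fmt.Println(string(b)) // bytes

	// The fast layer was back-filled, so it now serves the file directly.
	b, _ = fast.ReadFile("gopher.jpg")
	fmt.Println(string(b)) // bytes
}
```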

Contributions

This project is open to contributions. The best way to contribute:

  1. Open a feature/bug request for the feature
  2. After a brief discussion, fork the repo
  3. Commit your changes
  4. Create a Pull Request

Steps 1 and 2 simply prevent time wasted on changes that might be outside the scope of this project, and help make sure a bug fix is the right solution.

We are looking for contributors to:

  • Support Azure Append and Page Blobs (we already support block blobs)
  • Support GCP Blob storage
  • Support GCP Filestore
  • Support AWS S3
  • Support AWS Elastic storage
  • Support SFTP

Alternatives

I should point out that there is already a great package for filesystem abstractions: Afero. While I've never used it, spf13 is the author of several great packages, and Afero looks like it has great support for several different filesystem types.

So why gopherfs? When I started writing this I was simply interested in trying to take advantage of io/fs. I saw Afero after I had written a couple of filesystems and it did not have io/fs support.

Afero was also geared towards its own method of abstraction that was built long before io/fs was a twinkle in the Go authors' eyes.

Most of my services don't need the complicated file permissions that Afero provides. For my use cases, the service itself is the access control and has full rights to the filesystem.

I find Afero more complicated to use for my use cases and it doesn't have support for cloud provider filesystems (though you could write one).

If you need to support more complicated setups, I would use Afero. I expect I might add wrappers around some of its filesystems at some point in the future.

Documentation

Overview

Package fs contains abstractions not provided by io/fs that are needed to provide services such as writing files, along with utility functions that can be useful.

OpenFiler provides OpenFile() similar to the "os" package when you need to write file data.

Writer provides a WriteFile() similar to the "os" package when you want to write an entire file at once.

OFOption provides a generic Option type for implementations of OpenFile() to use.

This package also introduces a Merge() function to allow merging of filesystem content into another filesystem, with the ability to transform the content in some way (like optimizations).

Using Merge to optimize embed.FS Javascript into a subdirectory "js":

optimized := simple.New(simple.WithPearson())

err := Merge(
	optimized,
	somePkg.Embedded,
	"/js/",
	WithTransform(
		func(name string, content []byte) ([]byte, error){
			// If we are in debug mode, we want unoptimized Javascript
			if debug {
				return content, nil
			}
			switch path.Ext(name) {
			case ".js": // path.Ext includes the leading dot
				return optimizeJS(content)
			case ".go":
				return nil, nil
			}
			return content, nil
		},
	),
)
if err != nil {
	// Do something
}
optimized.RO()

The above code takes embedded JavaScript stored in an embed.FS and, if we are not in debug mode, optimizes the JavaScript with an optimizer. This allows us to keep our embed.FS local to the code that directly uses it and create an overall filesystem for use by all our code, while also optimizing that code for production.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Merge

func Merge(into Writer, from fs.FS, prepend string, options ...MergeOption) error

Merge will merge "from" into "into" by walking "from" starting at the root "/". Each file name will be prepended with "prepend", which must start and end with "/". If "into" does not implement Writer, this will panic. If a file already exists, this will error and leave a partially copied fs.FS.
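The naming rule above can be expressed as a small helper; prependName is a local illustration of the documented constraint, not a function in this package:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// prependName applies the documented rule: prepend must start and end
// with "/", and each file name from the source FS is placed under it.
func prependName(prepend, name string) (string, error) {
	if !strings.HasPrefix(prepend, "/") || !strings.HasSuffix(prepend, "/") {
		return "", errors.New(`prepend must start and end with "/"`)
	}
	return prepend + strings.TrimPrefix(name, "/"), nil
}

func main() {
	p, err := prependName("/js/", "app.js")
	fmt.Println(p, err) // /js/app.js <nil>

	_, err = prependName("js", "app.js")
	fmt.Println(err != nil) // true
}
```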

Types

type DefaultLogger

type DefaultLogger struct{}

DefaultLogger provides a default Logger implementation that uses Go's standard log.Println/Printf calls.

func (DefaultLogger) Printf

func (DefaultLogger) Printf(format string, v ...interface{})

func (DefaultLogger) Println

func (DefaultLogger) Println(v ...interface{})

type FileTransform

type FileTransform func(name string, content []byte) ([]byte, error)

FileTransform receives the base name of a file and the content of the file. It returns the content, which MAY be transformed in some way. If it returns a nil []byte and a nil error, the file is skipped.
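As an example, a FileTransform that skips Go source files and passes everything else through unchanged (mirroring the Merge example in the overview) might look like:

```go
package main

import (
	"fmt"
	"path"
)

// FileTransform matches the signature documented above.
type FileTransform func(name string, content []byte) ([]byte, error)

// skipGo returns nil content and nil error for .go files, which tells
// Merge to skip them; all other files pass through unchanged.
var skipGo FileTransform = func(name string, content []byte) ([]byte, error) {
	if path.Ext(name) == ".go" {
		return nil, nil // nil, nil => file is skipped
	}
	return content, nil
}

func main() {
	b, _ := skipGo("main.go", []byte("package main"))
	fmt.Println(b == nil) // true

	b, _ = skipGo("app.js", []byte("console.log(1)"))
	fmt.Println(string(b)) // console.log(1)
}
```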

type Logger

type Logger interface {
	Println(v ...interface{})
	Printf(format string, v ...interface{})
}

Logger provides the minimum interface for a logging client.

type MergeOption

type MergeOption func(o *mergeOptions)

MergeOption is an optional argument for Merge().

func WithTransform

func WithTransform(ft FileTransform) MergeOption

WithTransform instructs the Merge() to use a FileTransform on the files it reads before writing them to the destination.

type MkdirAllFS

type MkdirAllFS interface {
	OpenFiler

	// MkdirAll creates a directory named path, along with any necessary parents, and returns nil, or else returns an error.
	// The permission bits perm (before umask) are used for all directories that MkdirAll creates.
	// If path is already a directory, MkdirAll does nothing and returns nil.
	MkdirAll(path string, perm fs.FileMode) error
}

MkdirAllFS provides a filesystem that implements MkdirAll(). An FS not implementing this is expected to create the directory structure on a file write.
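Callers can detect this capability with a type assertion. The mkdirAller interface, fakeFS, and ensureDir below are illustrative only, not part of the package:

```go
package main

import (
	"fmt"
	"io/fs"
)

// mkdirAller is the MkdirAll method from the MkdirAllFS interface above.
type mkdirAller interface {
	MkdirAll(path string, perm fs.FileMode) error
}

// fakeFS records MkdirAll calls for demonstration.
type fakeFS struct{ made []string }

func (f *fakeFS) MkdirAll(path string, perm fs.FileMode) error {
	f.made = append(f.made, path)
	return nil
}

// ensureDir makes the directory when the filesystem supports it and
// reports whether it did; filesystems without MkdirAll are expected to
// create directories implicitly on write, so false is not a failure.
func ensureDir(fsys interface{}, path string) (bool, error) {
	m, ok := fsys.(mkdirAller)
	if !ok {
		return false, nil
	}
	return true, m.MkdirAll(path, 0755)
}

func main() {
	f := &fakeFS{}
	did, err := ensureDir(f, "/a/b")
	fmt.Println(did, err, f.made) // true <nil> [/a/b]

	did, _ = ensureDir(struct{}{}, "/a/b")
	fmt.Println(did) // false
}
```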

type OFOption

type OFOption func(o interface{}) error

OFOption is an option for the OpenFiler.OpenFile() call. The passed "o" arg is implementation dependent.

type OpenFiler

type OpenFiler interface {
	fs.FS

	// OpenFile opens the file at name with fs.FileMode. The set of options is implementation
	// dependent. The fs.File that is returned should be type asserted to gain access to additional
	// capabilities. If opening for ReadOnly, generally the standard fs.Open() call is better.
	OpenFile(name string, perm fs.FileMode, options ...OFOption) (fs.File, error)
}

OpenFiler provides a more robust method of opening a file that allows for additional capabilities like writing to files. The fs.File and options are generic and implementation specific. To gain access to additional capabilities usually requires type asserting the fs.File to the implementation specific type.

type Remove

type Remove interface {
	// Remove removes the named file or (empty) directory. If there is an error, it will be of type *PathError.
	Remove(name string) error
	// RemoveAll removes path and any children it contains. It removes
	// everything it can but returns the first error it encounters.
	// If the path does not exist, RemoveAll returns nil (no error).
	// If there is an error, it will be of type *fs.PathError.
	RemoveAll(path string) error
}

Remove provides a filesystem that implements Remove() and RemoveAll().

type Writer

type Writer interface {
	OpenFiler

	// WriteFile writes a file's content to the file system. This implementation may
	// return fs.ErrExist if the file already exists. The FileMode
	// may or may not be honored by the implementation.
	WriteFile(name string, data []byte, perm fs.FileMode) error
}

Writer provides a filesystem implementing OpenFiler with a simple way to write an entire file.

Directories

Path Synopsis
io
cache
Package cache provides helpers for building a caching system based on io/fs.FS.
cache/disk
Package disk provides an FS that wraps the johnsiilver/fs/os package to be used for a disk cache that expires files.
cache/groupcache
Package groupcache is an fs.FS wrapper for caching purposes built around Brad Fitzpatrick's groupcache.
cache/groupcache/peerpicker
Package peerpicker provides a groupcache.PeerPicker that utilizes a LAN peer discovery mechanism and sets up the groupcache to use the HTTPPool for communication between nodes.
cache/redis
Package redis provides an io/fs.FS implementation that can be used in our cache.FS package.
cloud/azure/blob
Package blob is an implementation of the io.FS for Azure blob storage.
cloud/azure/blob/auth/msi
Package msi provides authentication methods using Microsoft Service Identities.
os
Package os provides an io.FS that is implemented using the os package.
