b2

A Go library for the Backblaze B2 Cloud Storage API.

Contents

  1. API Support
  2. Install
  3. Setup
  4. Usage
    1. Authentication
    2. Upload File
    3. Upload Large File
    4. Download File
    5. Delete a File
    6. List Files

API Support

The following API endpoints and functionality are currently supported:

  • Authentication
    • b2_authorize_account
  • Uploading a file
    • b2_get_upload_url
    • b2_upload_file
  • Uploading a large file (multi-part upload)
    • b2_start_large_file
    • b2_get_upload_part_url
    • b2_upload_part
    • b2_finish_large_file
    • b2_cancel_large_file
  • Downloading a file
    • b2_download_file_by_id
  • Deleting a file
    • b2_delete_file_version

The project is being actively developed, and more functionality will likely be added in the near future. Existing functionality is unlikely to change and should be considered stable.

Install

go get github.com/benbusby/b2

Setup

To use this library with B2, create an account on backblaze.com, create a new bucket (or use an existing one), and follow these steps to create an Application Key:

  1. Select Account > Application Keys > Add New Application Key
  2. Name the key and select which bucket the key should have access to
  3. Save the keyID and applicationKey values
Local Storage Only

If you just want to use the library functions to write files to your machine for testing, you can skip creating a Backblaze account and just use one of the "dummy" authentication methods outlined below in Authentication.

Usage

Authentication

You can authenticate with Backblaze B2 using b2.AuthorizeAccount(keyID, key), where keyID is either an application or master key ID, and key is the actual key contents. b2.AuthorizeAccount returns a b2.Service (the receiver for most API calls) along with a b2.AuthV3 struct; b2.AuthorizeAccountV2 instead returns the older b2.AuthV2 struct shown below:

type AuthV2 struct {
	AbsoluteMinimumPartSize int    `json:"absoluteMinimumPartSize"`
	AccountID               string `json:"accountId"`
	Allowed                 struct {
		BucketID     string   `json:"bucketId"`
		BucketName   string   `json:"bucketName"`
		Capabilities []string `json:"capabilities"`
		NamePrefix   any      `json:"namePrefix"`
	} `json:"allowed"`
	APIURL              string `json:"apiUrl"`
	AuthorizationToken  string `json:"authorizationToken"`
	DownloadURL         string `json:"downloadUrl"`
	RecommendedPartSize int    `json:"recommendedPartSize"`
	S3APIURL            string `json:"s3ApiUrl"`
}

Most B2 functions have a receiver type of Service and use its AuthorizationToken field to authenticate requests.


Functions
func AuthorizeAccount(
	b2BucketKeyId string,
	b2BucketKey string,
) (Service, AuthV3, error)

func AuthorizeAccountV2(
	b2BucketKeyId string,
	b2BucketKey string,
) (Service, AuthV2, error)

func AuthorizeDummyAccount(
	path string,
) (Service, error)

func AuthorizeLimitedDummyAccount(
	path string,
	storageLimit int,
) (Service, error)

Example
// Authenticate with B2
b2, _, err := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

// Create dummy authentication
b2, err := b2.AuthorizeDummyAccount("/tmp")

// Create dummy authentication w/ a 1 GB storage limit
b2, err := b2.AuthorizeLimitedDummyAccount("local-bucket", 1024*1024*1024)
Upload File

Uploading a regular (non-chunked) file involves fetching the upload parameters first (a unique upload URL, auth token, etc.), and then sending your file data, a SHA-1 checksum of the data, and the file name.

Note that although the endpoint for retrieving upload data is named b2_get_upload_url, the returned struct contains more than just a URL. Most importantly, it contains the required auth token for actually uploading the file.

After uploading, you'll receive a struct with fields such as FileID that you can use to access or delete the file later.


Functions

Get upload URL:

func (b2Service Service) GetUploadURL(bucketID string) (FileInfo, error)

Upload file:

func (b2Info FileInfo) UploadFile(
	filename string,
	checksum string,
	contents []byte,
) (File, error)

Example
b2, _, _ := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

// bucketID is the ID of the destination bucket
b2Uploader, _ := b2.GetUploadURL(bucketID)

data := []byte("test")

h := sha1.New()
h.Write(data)
checksum := fmt.Sprintf("%x", h.Sum(nil))

file, err := b2Uploader.UploadFile(
	"myfile.txt",
	checksum,
	data)

// save/store `file.FileID` somewhere in order to access it later
Upload Large File

Uploading a large file requires extra steps to "start" and "stop" the upload; the number of chunk uploads in between depends on how much data you're sending. Each chunk of file data needs to be at least 5 MB, except for the final chunk. You cannot use the large file upload process to upload files smaller than 5 MB.

The basic flow of uploading a large file is:

  1. Start the file
  2. Get the upload info for a chunk
  3. Upload the chunks of data until finished
  4. Stop the file

You'll also need to track each checksum as you upload data, since finishing a large file requires an array of past checksums to finalize the upload.

The finalized large file struct, like the normal B2 file struct, contains metadata that you may want in order to access the file later.


Functions

Start large file:

func (b2Service Service) StartLargeFile(filename string, bucketID string) (StartFile, error)

Get upload part URL:

func (b2Service Service) GetUploadPartURL(fileID string) (FilePartInfo, error)

Upload file part:

func (b2PartInfo FilePartInfo) UploadFilePart(
	chunkNum int,
	checksum string,
	contents []byte,
) error

Finish large file:

func (b2Service Service) FinishLargeFile(
	fileID string,
	checksums []string,
) (LargeFile, error)

Cancel large file:

func (b2Service Service) CancelLargeFile(fileID string) (bool, error)

Example
dataSize := 10485760  // 10 MB
chunkSize := 5242880  // 5 MB
numChunks := dataSize / chunkSize

data := make([]byte, dataSize) // empty, only for example
var checksums []string

b2, _, _ := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

// bucketID is the ID of the destination bucket
b2InitFile, _ := b2.StartLargeFile("mybigfile.mp4", bucketID)
b2PartUploader, _ := b2.GetUploadPartURL(b2InitFile.FileID)

for i := 0; i < numChunks; i++ {
	start := i * chunkSize
	stop := start + chunkSize
	chunk := data[start:stop]

	h := sha1.New()
	h.Write(chunk)
	checksum := fmt.Sprintf("%x", h.Sum(nil))
	checksums = append(checksums, checksum)

	// B2 chunk numbering starts at 1, not 0
	err := b2PartUploader.UploadFilePart(i+1, checksum, chunk)

	if err != nil {
		panic(err)
	}
}

_, err := b2.FinishLargeFile(b2PartUploader.FileID, checksums)
if err != nil {
	panic(err)
}
Download File

Downloading a file can either be done in one request (likely only feasible for smaller files) or chunked, similar to how large files are uploaded but in reverse.


Functions

Multi-part download:

func (b2Service Service) PartialDownloadById(
	id string,
	begin int,
	end int,
) ([]byte, error)

Full download:

func (b2Service Service) DownloadById(id string) ([]byte, error)

Example (single request)
b2, _, _ := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

id := getB2FileID() // value from UploadFile or FinishLargeFile

data, err := b2.DownloadById(id)

// do something with the file data
Example (multi request)
b2, _, _ := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

id := getB2FileID() // value from UploadFile or FinishLargeFile
fileSize := getB2FileSize() // same note as above

chunkSize := 5242880
var output []byte

for start := 0; start < fileSize; start += chunkSize {
	stop := start + chunkSize
	if stop > fileSize {
		stop = fileSize
	}
	data, _ := b2.PartialDownloadById(id, start, stop)
	output = append(output, data...)
}

// do something with output (full file data)
Delete a File

Deleting a file requires both the file's ID and the file's name. Both of these are returned in the final struct when uploading a file, and should be stored somewhere if you want to delete the file later on.


Function
func (b2Service Service) DeleteFile(b2ID string, name string) bool

Example
b2, _, _ := b2.AuthorizeAccount(
	os.Getenv("B2_BUCKET_KEY_ID"),
	os.Getenv("B2_BUCKET_KEY"))

id, name := getB2FileInfo()

if b2.DeleteFile(id, name) {
	fmt.Println("File successfully deleted")
} else {
	return errors.New("failed to delete file")
}
List Files

Listing files requires the ID of the bucket you want to query, and can accept a few optional parameters for filtering.


Functions
func (b2Service Service) ListAllFiles(bucketID string) (FileList, error)

func (b2Service Service) ListNFiles(bucketID string, count int) (FileList, error)

func (b2Service Service) ListFiles(
	bucketID string,
	count int,
	startName string,
	startID string,
) (FileList, error)

Example
files, _ := b2.ListAllFiles(bucketID)

for _, file := range files.Files {
    // do something with `file`
}

Documentation


Constants

const APICancelLargeFile = "b2_cancel_large_file"
const APIDeleteFile = "b2_delete_file_version"
const APIDownloadById string = "b2_download_file_by_id"
const APIFinishLargeFile = "b2_finish_large_file"
const APIGetUploadPartURL string = "b2_get_upload_part_url"
const APIGetUploadURL string = "b2_get_upload_url"
const APIListFileVersions = "b2_list_file_versions"
const APIStartLargeFile string = "b2_start_large_file"
const AuthURLV2 string = "https://api.backblazeb2.com/b2api/v2/b2_authorize_account"
const AuthURLV3 string = "https://api.backblazeb2.com/b2api/v3/b2_authorize_account"

Variables

This section is empty.

Functions

func AuthorizeAccount

func AuthorizeAccount(b2BucketKeyId, b2BucketKey string) (Service, AuthV3, error)

func AuthorizeAccountV2 added in v1.3.0

func AuthorizeAccountV2(b2BucketKeyId, b2BucketKey string) (Service, AuthV2, error)

func InitAuthorization added in v1.3.0

func InitAuthorization(b2BucketKeyId, b2BucketKey, authURL string) (io.ReadCloser, error)

func UploadFilePart

func UploadFilePart(
	b2PartInfo FilePartInfo,
	chunkNum int,
	checksum string,
	contents []byte,
) error

UploadFilePart uploads a single chunk of file data to the URL provided by GetUploadPartURL. Each subsequent chunk should increment chunkNum, with the first chunk starting at 1 (not 0). Each chunk should be provided with a SHA1 checksum as well.
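
As a rough sketch (assuming a FilePartInfo named partInfo from GetUploadPartURL, a pre-split [][]byte of chunks, and the "crypto/sha1" and "fmt" imports), the part loop might look like:

var checksums []string
for i, chunk := range chunks {
	// Compute the SHA-1 checksum for this chunk
	checksum := fmt.Sprintf("%x", sha1.Sum(chunk))
	checksums = append(checksums, checksum)

	// Part numbers are 1-based
	if err := UploadFilePart(partInfo, i+1, checksum, chunk); err != nil {
		// handle the failed part (e.g. retry, or cancel the large file)
	}
}

The collected checksums can then be handed to FinishLargeFile once every part has been uploaded.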

Types

type AuthV2 added in v1.3.0

type AuthV2 struct {
	AbsoluteMinimumPartSize int    `json:"absoluteMinimumPartSize"`
	AccountID               string `json:"accountId"`
	Allowed                 struct {
		BucketID     string   `json:"bucketId"`
		BucketName   string   `json:"bucketName"`
		Capabilities []string `json:"capabilities"`
		NamePrefix   any      `json:"namePrefix"`
	} `json:"allowed"`
	APIURL              string `json:"apiUrl"`
	AuthorizationToken  string `json:"authorizationToken"`
	DownloadURL         string `json:"downloadUrl"`
	RecommendedPartSize int    `json:"recommendedPartSize"`
	S3APIURL            string `json:"s3ApiUrl"`
}

type AuthV3 added in v1.3.0

type AuthV3 struct {
	AccountID string `json:"accountId"`
	APIInfo   struct {
		StorageAPI struct {
			AbsoluteMinimumPartSize int      `json:"absoluteMinimumPartSize"`
			APIURL                  string   `json:"apiUrl"`
			BucketID                any      `json:"bucketId"`
			BucketName              any      `json:"bucketName"`
			Capabilities            []string `json:"capabilities"`
			DownloadURL             string   `json:"downloadUrl"`
			InfoType                string   `json:"infoType"`
			NamePrefix              any      `json:"namePrefix"`
			RecommendedPartSize     int      `json:"recommendedPartSize"`
			S3APIURL                string   `json:"s3ApiUrl"`
		} `json:"storageApi"`
		GroupsAPI struct {
			Capabilities []string `json:"capabilities"`
			GroupsAPIURL string   `json:"groupsApiUrl"`
			InfoType     string   `json:"infoType"`
		} `json:"groupsApi"`
	} `json:"apiInfo"`
	ApplicationKeyExpirationTimestamp any    `json:"applicationKeyExpirationTimestamp"`
	AuthorizationToken                string `json:"authorizationToken"`
}

type File

type File struct {
	AccountID     string `json:"accountId"`
	Action        string `json:"action"`
	BucketID      string `json:"bucketId"`
	ContentLength int    `json:"contentLength"`
	ContentMd5    string `json:"contentMd5"`
	ContentSha1   string `json:"contentSha1"`
	ContentType   string `json:"contentType"`
	FileID        string `json:"fileId"`
	FileInfo      struct {
	} `json:"fileInfo"`
	FileName      string `json:"fileName"`
	FileRetention struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    any  `json:"value"`
	} `json:"fileRetention"`
	LegalHold struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    any  `json:"value"`
	} `json:"legalHold"`
	ServerSideEncryption struct {
		Algorithm string `json:"algorithm"`
		Mode      string `json:"mode"`
	} `json:"serverSideEncryption"`
	UploadTimestamp int64 `json:"uploadTimestamp"`
}

File represents the data returned by UploadFile

func UploadFile

func UploadFile(
	b2Info FileInfo,
	filename string,
	checksum string,
	contents []byte,
) (File, error)

UploadFile uploads file byte content to B2 alongside a name for the file and a SHA1 checksum for the byte content. It returns a File object, which contains fields such as FileID and ContentLength which can be stored and used later to download the file.

type FileInfo

type FileInfo struct {
	BucketID           string `json:"bucketId"`
	UploadURL          string `json:"uploadUrl"`
	AuthorizationToken string `json:"authorizationToken"`
	Dummy              bool
	StorageMaximum     int
}

FileInfo represents the data returned by GetUploadURL

type FileList

type FileList struct {
	Files        []FileListItem `json:"files"`
	NextFileName string         `json:"nextFileName"`
	NextFileID   string         `json:"nextFileId"`
}

type FileListItem

type FileListItem struct {
	AccountID     string `json:"accountId"`
	Action        string `json:"action"`
	BucketID      string `json:"bucketId"`
	ContentLength int    `json:"contentLength"`
	ContentSha1   string `json:"contentSha1"`
	ContentMd5    string `json:"contentMd5"`
	ContentType   string `json:"contentType"`
	FileID        string `json:"fileId"`
	FileInfo      struct {
		SrcLastModifiedMillis string `json:"src_last_modified_millis"`
	} `json:"fileInfo"`
	FileName      string `json:"fileName"`
	FileRetention struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    struct {
			Mode                 string `json:"mode"`
			RetainUntilTimestamp string `json:"retainUntilTimestamp"`
		} `json:"value"`
	} `json:"fileRetention"`
	LegalHold struct {
		IsClientAuthorizedToRead bool   `json:"isClientAuthorizedToRead"`
		Value                    string `json:"value"`
	} `json:"legalHold"`
	ReplicationStatus    string `json:"replicationStatus"`
	ServerSideEncryption struct {
		Algorithm string `json:"algorithm"`
		Mode      string `json:"mode"`
	} `json:"serverSideEncryption"`
	UploadTimestamp int `json:"uploadTimestamp"`
}

type FilePartInfo

type FilePartInfo struct {
	FileID             string `json:"fileId"`
	UploadURL          string `json:"uploadUrl"`
	AuthorizationToken string `json:"authorizationToken"`
	Dummy              bool
	StorageMaximum     int
}

FilePartInfo represents the data returned by GetUploadPartURL

type LargeFile

type LargeFile struct {
	AccountID     string `json:"accountId"`
	Action        string `json:"action"`
	BucketID      string `json:"bucketId"`
	ContentLength int    `json:"contentLength"`
	ContentMd5    any    `json:"contentMd5"`
	ContentSha1   string `json:"contentSha1"`
	ContentType   string `json:"contentType"`
	FileID        string `json:"fileId"`
	FileInfo      struct {
	} `json:"fileInfo"`
	FileName      string `json:"fileName"`
	FileRetention struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    any  `json:"value"`
	} `json:"fileRetention"`
	LegalHold struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    any  `json:"value"`
	} `json:"legalHold"`
	ServerSideEncryption struct {
		Algorithm string `json:"algorithm"`
		Mode      string `json:"mode"`
	} `json:"serverSideEncryption"`
	UploadTimestamp int64 `json:"uploadTimestamp"`
}

LargeFile represents the file object created by FinishLargeFile

type Service added in v1.3.0

type Service struct {
	APIURL             string
	AuthorizationToken string
	APIVersion         string
	Dummy              bool
	LocalPath          string
	StorageMaximum     int
}

func AuthorizeDummyAccount

func AuthorizeDummyAccount(path string) (Service, error)

AuthorizeDummyAccount allows using the B2 library as normal, but saves and retrieves all files from a specific folder on the local machine instead of B2.

func AuthorizeLimitedDummyAccount

func AuthorizeLimitedDummyAccount(path string, storageLimit int) (Service, error)

AuthorizeLimitedDummyAccount functions the same as AuthorizeDummyAccount, but imposes an additional limit on the total size of the directory specified by the path argument.

func (Service) CancelLargeFile added in v1.3.0

func (b2Service Service) CancelLargeFile(fileID string) (bool, error)

CancelLargeFile cancels an in-progress large file upload and deletes the partial file from the B2 bucket. Returns true if the file was successfully deleted, otherwise false. Requires the fileID returned from StartLargeFile.
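
For example, a minimal sketch of abandoning an upload after a failed part (the startFile value and the logging are assumptions for illustration):

// `b2` is an authorized Service; `startFile` came from StartLargeFile
cancelled, err := b2.CancelLargeFile(startFile.FileID)
if err != nil || !cancelled {
	// the partial file may still exist in the bucket
	log.Printf("failed to cancel large file %s: %v", startFile.FileID, err)
}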

func (Service) DeleteFile added in v1.3.0

func (b2Service Service) DeleteFile(b2ID string, name string) bool

DeleteFile removes a file from B2 using the file's ID and name. Both fields are required, and are provided when a file finishes uploading.

func (Service) DownloadById added in v1.3.0

func (b2Service Service) DownloadById(id string) ([]byte, error)

DownloadById downloads an entire file (regardless of size) from B2.

func (Service) FinishLargeFile added in v1.3.0

func (b2Service Service) FinishLargeFile(
	fileID string,
	checksums []string,
) (LargeFile, error)

FinishLargeFile completes the chunked upload process. The FileID from calling StartLargeFile should be used here, and all checksums from UploadFilePart should be passed as an array of checksum strings, for example: ["checksum1", "checksum2"].

func (Service) GetUploadPartURL added in v1.3.0

func (b2Service Service) GetUploadPartURL(fileID string) (FilePartInfo, error)

GetUploadPartURL generates a URL and token for uploading individual chunks of a file to B2. It requires the unique file ID for the new file, which is found in the StartFile struct returned by StartLargeFile.

func (Service) GetUploadURL added in v1.3.0

func (b2Service Service) GetUploadURL(bucketID string) (FileInfo, error)

GetUploadURL returns a FileInfo struct containing the URL to use for uploading a file, the ID of the bucket the file will be put in, and a token for authenticating the upload request.

func (Service) ListAllFiles added in v1.3.0

func (b2Service Service) ListAllFiles(bucketID string) (FileList, error)

ListAllFiles is a helper function for simply fetching all available files in the bucket. If more than 100 files exist, the FileList struct will contain NextFileName and NextFileID fields that can be used with ListFiles to fetch the remainder.
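
A sketch of paging through a larger bucket using those fields (the 100-file batch size and the empty-string check for the end of the listing are assumptions for illustration):

// `b2` is an authorized Service; `bucketID` identifies the bucket to list
list, err := b2.ListAllFiles(bucketID)
if err != nil {
	// handle error
}
allFiles := list.Files

// Keep fetching while the previous response points at more files
for list.NextFileName != "" {
	list, err = b2.ListFiles(bucketID, 100, list.NextFileName, list.NextFileID)
	if err != nil {
		break
	}
	allFiles = append(allFiles, list.Files...)
}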

func (Service) ListFiles added in v1.3.0

func (b2Service Service) ListFiles(
	bucketID string,
	count int,
	startName string,
	startID string,
) (FileList, error)

ListFiles lists all files in the specified bucket up to a maximum of `count`, starting with `startName` and, optionally, `startID`. If count is set to an invalid or negative value, the default of 100 files is returned. If startName and startID are not set, files are listed from the beginning of the bucket.

func (Service) ListNFiles added in v1.3.0

func (b2Service Service) ListNFiles(bucketID string, count int) (FileList, error)

ListNFiles is similar to ListAllFiles, but allows explicitly stating how many files you want returned in the response.

func (Service) PartialDownloadById added in v1.3.0

func (b2Service Service) PartialDownloadById(
	id string,
	begin int,
	end int,
) ([]byte, error)

PartialDownloadById downloads a file from B2 with a specified begin and end byte. For example, setting begin to 0 and end to 99 will download only the first 99 bytes of the file.
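
For instance, a sketch that fetches just the beginning of a file to inspect its header (the id value and the 1 MiB range are assumptions for illustration):

// `b2` is an authorized Service; `id` is a known B2 file ID
header, err := b2.PartialDownloadById(id, 0, 1048575)
if err != nil {
	// handle error
}
// `header` now holds roughly the first 1 MiB of the file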

func (Service) StartLargeFile added in v1.3.0

func (b2Service Service) StartLargeFile(
	filename string,
	bucketID string,
) (StartFile, error)

StartLargeFile begins the process for uploading a multi-chunk file to B2. The filename provided cannot change once the large file upload has begun.

type StartFile

type StartFile struct {
	AccountID     string `json:"accountId"`
	Action        string `json:"action"`
	BucketID      string `json:"bucketId"`
	ContentLength int    `json:"contentLength"`
	ContentSha1   string `json:"contentSha1"`
	ContentType   string `json:"contentType"`
	FileID        string `json:"fileId"`
	FileInfo      struct {
	} `json:"fileInfo"`
	FileName      string `json:"fileName"`
	FileRetention struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    struct {
			Mode                 any `json:"mode"`
			RetainUntilTimestamp any `json:"retainUntilTimestamp"`
		} `json:"value"`
	} `json:"fileRetention"`
	LegalHold struct {
		IsClientAuthorizedToRead bool `json:"isClientAuthorizedToRead"`
		Value                    any  `json:"value"`
	} `json:"legalHold"`
	ServerSideEncryption struct {
		Algorithm any `json:"algorithm"`
		Mode      any `json:"mode"`
	} `json:"serverSideEncryption"`
	UploadTimestamp int64 `json:"uploadTimestamp"`
}

StartFile represents the data returned by StartLargeFile
