ipfscluster

README

Elastos Hive Cluster

Introduction

Elastos Hive Cluster is a decentralized file storage service based on IPFS Cluster. Hive Cluster uses IPFS and IPFS Cluster as the base infrastructure to store Elastos data. Like IPFS Cluster, Hive Cluster is a stand-alone application.

A typical IPFS peer is a resource-hungry program. If you install the IPFS daemon on a mobile device, it will take up resources and slow the device down. The Hive project uses an IPFS cluster as the Elastos App storage backend, so that it can be used in low-resource scenarios.

The project derives from IPFS Cluster, but it will diverge from IPFS Cluster in many ways.

Elastos Hive Cluster maintains a big IPFS pinset for sharing. It can serve numerous virtual IPFS peers with only one IPFS peer instance running.

Hive Cluster is not only a pinset manager but also a backend for multiple IPFS clients.

Install

The following requirements apply to the installation from source:

  • Git
  • Go 1.11+
  • IPFS or internet connectivity (to download dependencies).

Install Go

The build process for cluster requires Go 1.11 or higher. Download the appropriate binary archive from golang.org and install it into a directory of your choice:

$ curl -L https://dl.google.com/go/go1.11.4.linux-amd64.tar.gz -o golang.tar.gz
$ tar -xzvf golang.tar.gz -C YOUR-INSTALL-PATH

Then add YOUR-INSTALL-PATH/go/bin to your PATH environment variable:

$ export PATH="YOUR-INSTALL-PATH/go/bin:$PATH"

Besides that, the build environment for Go projects must be set up:

$ export GOPATH="YOUR-GO-PATH"
$ export PATH="$GOPATH/bin:$PATH"

For convenience, add these lines to your $HOME/.profile, then load the profile and verify the installation:

$ . $HOME/.profile
$ go version

You can also run the export command to check that the variables are set.

Note: If you run into trouble, see the official Go install instructions.

Download Source

There are two ways to set up your source code. One is to clone the source code directly under $GOPATH, as below:

$ cd $GOPATH/src/github.com/elastos
$ git clone https://github.com/elastos/Elastos.NET.Hive.Cluster.git

The other way is to download the source to a location of your choice, and then create a symbolic link to that directory under the appropriate $GOPATH location:

$ cd YOUR-PATH
$ git clone https://github.com/elastos/Elastos.NET.Hive.Cluster.git 
$ cd $GOPATH/src/github.com/elastos
$ ln -s YOUR-PATH/Elastos.NET.Hive.Cluster Elastos.NET.Hive.Cluster

Build Cluster

Build Cluster with the following commands:

$ cd $GOPATH/src/github.com/elastos/Elastos.NET.Hive.Cluster
$ make
$ make install

After installation, ipfs-cluster-service and ipfs-cluster-ctl will be generated under the directory $GOPATH/bin.

If you would rather have them built locally, use make build instead. You can run make clean to remove any generated artifacts and rewrite the import paths to their original form.

Note that when the ipfs daemon is running locally on its default ports, the build process will use it to fetch gx, gx-go and all the needed dependencies directly from IPFS.

Building Debian package

For Debian-based Linux systems, this project can generate a deb package for convenient distribution.

Please make sure that the IPFS and IPFS-Cluster projects were built and that the binaries (ipfs, ipfs-cluster-service, ipfs-cluster-ctl) have been generated under the directory $GOPATH/bin.

Generate a deb package with the following commands:

$ cd $GOPATH/src/github.com/elastos/Elastos.NET.Hive.Cluster/extras/LinuxDeb
$ make

After the deb package is generated, a hive-dist-???.deb file will appear in the current directory. You can copy it to Debian systems and install it there, or run make clean to remove the generated artifacts.

# Install the package on Debian-based Linux systems
$ dpkg -i hive-dist-???.deb

Usage

ipfs-cluster-service is a command-line program that starts the cluster daemon, while ipfs-cluster-ctl is the client application used to manage the cluster nodes and perform actions.

Run the following commands with the help option to see more usage information:

$ ipfs-cluster-service help
...
$ ipfs-cluster-ctl help
...

For details about IPFS Cluster, please refer to the docs at https://cluster.ipfs.io.

Hive Cluster uses an HTTP interface to serve numerous clients; refer to the HTTP API documentation for details.
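
As a rough sketch of client usage, assuming the REST API component listens on its default address (127.0.0.1:9094) and exposes an /id endpoint as in stock IPFS Cluster, a client can query the cluster over plain HTTP:

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Assumption: the REST API listens on its default port 9094,
	// as in stock IPFS Cluster. Adjust to your configuration.
	resp, err := http.Get("http://127.0.0.1:9094/id")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // JSON description of this cluster peer
}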

Get Started

The Hive IPFS daemon has to be running before starting Hive Cluster. For how to run the Hive IPFS daemon, please refer to Elastos Hive IPFS.

Initialize config files

To start using Cluster, you must first initialize Cluster's config files on your system, which is done with the command below:

$ ipfs-cluster-service init
$ ls $HOME/.ipfs-cluster
service.json

See ipfs-cluster-service help for more information on the optional arguments it takes.

Run as Daemon

After initialization and configuration, run the ipfs-cluster-service daemon:

$ ipfs-cluster-service daemon &

Then you can use the ipfs-cluster-ctl program to check whether the cluster is running:

$ ipfs-cluster-ctl id
QmUGXgnUcqgZf9Js7GrvFP1uz7G6soq3urWk9Gz237DsQm | guest | Sees 1 other peers
  > Addresses:
    - /ip4/127.0.0.1/tcp/9096/ipfs/QmUGXgnUcqgZf9Js7GrvFP1uz7G6soq3urWk9Gz237DsQm
    - /ip4/222.222.222.222/tcp/9096/ipfs/QmUGXgnUcqgZf9Js7GrvFP1uz7G6soq3urWk9Gz237DsQm
    - /p2p-circuit/ipfs/QmUGXgnUcqgZf9Js7GrvFP1uz7G6soq3urWk9Gz237DsQm
  > IPFS: QmSPDCbxBSq7PYAnemABL7VpGRkpqBX9NT1AovEmzaXkvM
    - /ip4/127.0.0.1/tcp/4001/ipfs/QmSPDCbxBSq7PYAnemABL7VpGRkpqBX9NT1AovEmzaXkvM
    - /ip4/222.222.222.222/tcp/4001/ipfs/QmSPDCbxBSq7PYAnemABL7VpGRkpqBX9NT1AovEmzaXkvM
    - /ip6/::1/tcp/4001/ipfs/QmSPDCbxBSq7PYAnemABL7VpGRkpqBX9NT1AovEmzaXkvM

You can also run ipfs-cluster-service as a node that joins an existing cluster by bootstrapping to one of its peers:

$ ipfs-cluster-service daemon --bootstrap /ip4/222.222.222.222/tcp/9096/ipfs/QmNTD6ZbhdaoDmjQqGJrp8dKEPvtBQGzRxwWHNcmvNYsbK &

Contribution

We welcome contributions to the Elastos Hive Project.

Acknowledgments

A sincere thank you to all teams and projects that we rely on directly or indirectly.

License

This project is licensed under the terms of the MIT license.

Documentation

Overview

Package ipfscluster implements a wrapper for the IPFS daemon which allows orchestrating pinning operations among several IPFS nodes.

IPFS Cluster uses go-libp2p-raft to keep a shared state between the different cluster peers. It also uses LibP2P to enable communication between its different components, which perform different tasks like managing the underlying IPFS daemons, or providing APIs for external control.

Index

Constants

const (
	DefaultConfigCrypto        = crypto.RSA
	DefaultConfigKeyLength     = 2048
	DefaultListenAddr          = "/ip4/0.0.0.0/tcp/9096"
	DefaultStateSyncInterval   = 600 * time.Second
	DefaultIPFSSyncInterval    = 130 * time.Second
	DefaultMonitorPingInterval = 15 * time.Second
	DefaultPeerWatchInterval   = 5 * time.Second
	DefaultReplicationFactor   = -1
	DefaultLeaveOnShutdown     = false
	DefaultDisableRepinning    = false
	DefaultPeerstoreFile       = "peerstore"
)

Configuration defaults

Variables

var LoggingFacilities = map[string]string{
	"cluster":      "INFO",
	"restapi":      "INFO",
	"ipfsproxy":    "INFO",
	"ipfshttp":     "INFO",
	"monitor":      "INFO",
	"mapstate":     "INFO",
	"consensus":    "INFO",
	"pintracker":   "INFO",
	"ascendalloc":  "INFO",
	"diskinfo":     "INFO",
	"apitypes":     "INFO",
	"config":       "INFO",
	"shardingdags": "INFO",
	"localdags":    "INFO",
	"adder":        "INFO",
	"optracker":    "INFO",
}

LoggingFacilities provides a list of logging identifiers used by cluster and their default logging level.

var LoggingFacilitiesExtra = map[string]string{
	"p2p-gorpc":   "CRITICAL",
	"swarm2":      "ERROR",
	"libp2p-raft": "CRITICAL",
	"raft":        "ERROR",
}

LoggingFacilitiesExtra provides logging identifiers used in ipfs-cluster dependencies which may be useful to display, along with their default values.

var ReadyTimeout = 30 * time.Second

ReadyTimeout specifies the time before giving up during startup (waiting for consensus to be ready). It may need adjustment according to timeouts in the consensus layer.

Functions

func DecodeClusterSecret

func DecodeClusterSecret(hexSecret string) ([]byte, error)

DecodeClusterSecret parses a hex-encoded string, checks that it is exactly 32 bytes long and returns its value as a byte slice.

func EncodeProtectorKey

func EncodeProtectorKey(secretBytes []byte) string

EncodeProtectorKey converts a byte slice to its hex string representation.
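
For example, a minimal sketch that round-trips a cluster secret between the hex form stored in service.json and its raw 32-byte form (the hex value and the import path are illustrative, the latter following the clone location used in the README):

package main

import (
	"fmt"
	"log"

	ipfscluster "github.com/elastos/Elastos.NET.Hive.Cluster"
)

func main() {
	// Arbitrary example secret: 64 hex characters, i.e. 32 bytes.
	hexSecret := "6ff3cea0e409d09cc7e2b4a09b5777e4e0583131218b7642eeeea96eb6a11ac0"

	secret, err := ipfscluster.DecodeClusterSecret(hexSecret)
	if err != nil {
		log.Fatal(err) // not valid hex, or not exactly 32 bytes
	}
	fmt.Println(ipfscluster.EncodeProtectorKey(secret) == hexSecret) // true
}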

func GlobalPinInfoSliceToSerial

func GlobalPinInfoSliceToSerial(gpi []api.GlobalPinInfo) []api.GlobalPinInfoSerial

GlobalPinInfoSliceToSerial is a helper function for serializing a slice of api.GlobalPinInfos.

func NewClusterHost

func NewClusterHost(ctx context.Context, cfg *Config) (host.Host, error)

NewClusterHost creates a libp2p Host with the options from the provided cluster configuration.

func PeersFromMultiaddrs

func PeersFromMultiaddrs(addrs []ma.Multiaddr) []peer.ID

PeersFromMultiaddrs returns all the different peers in the given addresses. Each peer will appear only once in the result, even if several multiaddresses for it are provided.

func SetFacilityLogLevel

func SetFacilityLogLevel(f, l string)

SetFacilityLogLevel sets the log level for a given module.
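
For instance, a sketch (assuming the package is imported as ipfscluster) that replays the default levels from LoggingFacilities and then raises a single facility to DEBUG:

// Apply the default level for every known logging facility...
for f, level := range ipfscluster.LoggingFacilities {
	ipfscluster.SetFacilityLogLevel(f, level)
}
// ...then turn on verbose output for the main cluster facility only.
ipfscluster.SetFacilityLogLevel("cluster", "DEBUG")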

Types

type API

type API interface {
	Component
}

API is a component which offers an API for Cluster. This is a base component.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster is the main IPFS cluster component. It provides the go-API for it and orchestrates the components that make up the system.

func NewCluster

func NewCluster(
	host host.Host,
	cfg *Config,
	consensus Consensus,
	apis []API,
	ipfs IPFSConnector,
	st state.State,
	tracker PinTracker,
	monitor PeerMonitor,
	allocator PinAllocator,
	informer Informer,
) (*Cluster, error)

NewCluster builds a new IPFS Cluster peer. It initializes a LibP2P host, creates an RPC server and client, and sets up all components.

The new cluster peer may still be performing initialization tasks when this call returns (consensus may still be bootstrapping). Use Cluster.Ready() if you need to wait until the peer is fully up.
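
The call site typically looks like the sketch below, where the component values (consensus, apis, connector and so on) are placeholders assumed to have been constructed beforehand from the corresponding subpackages (consensus/raft, api/rest, ipfsconn/ipfshttp, ...):

// Sketch: wiring up a cluster peer from pre-built components.
host, err := ipfscluster.NewClusterHost(ctx, cfg)
if err != nil {
	log.Fatal(err)
}
c, err := ipfscluster.NewCluster(
	host, cfg,
	consensus, // Consensus
	apis,      // []API
	connector, // IPFSConnector
	st,        // state.State
	tracker,   // PinTracker
	monitor,   // PeerMonitor
	allocator, // PinAllocator
	informer,  // Informer
)
if err != nil {
	log.Fatal(err)
}
<-c.Ready() // block until consensus has bootstrapped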

func (*Cluster) AddFile

func (c *Cluster) AddFile(reader *multipart.Reader, params *api.AddParams) (cid.Cid, error)

AddFile adds a file to the ipfs daemons of the cluster. The ipfs importer pipeline is used to DAGify the file. Depending on input parameters this DAG can be added locally to the calling cluster peer's ipfs repo, or sharded across the entire cluster.

func (*Cluster) ConnectGraph

func (c *Cluster) ConnectGraph() (api.ConnectGraph, error)

ConnectGraph returns a description of which cluster peers and ipfs daemons are connected to each other.

func (*Cluster) Done

func (c *Cluster) Done() <-chan struct{}

Done provides a way to learn if the Peer has been shut down (for example, because it has been removed from the Cluster).

func (*Cluster) FindKey

func (c *Cluster) FindKey(uid string) (api.UIDKey, error)

FindKey finds a user key in the IPFS keystore.

func (*Cluster) ID

func (c *Cluster) ID() api.ID

ID returns information about the Cluster peer.

func (*Cluster) Join

func (c *Cluster) Join(addr ma.Multiaddr) error

Join adds this peer to an existing cluster by bootstrapping to a given multiaddress. It works by calling PeerAdd on the destination cluster and making sure that the new peer is ready to discover and contact the rest.
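
A sketch of joining, reusing the bootstrap multiaddress from the README example above (ma is the go-multiaddr package and c a *Cluster):

// Sketch: bootstrap this peer into an existing cluster.
addr, err := ma.NewMultiaddr("/ip4/222.222.222.222/tcp/9096/ipfs/QmNTD6ZbhdaoDmjQqGJrp8dKEPvtBQGzRxwWHNcmvNYsbK")
if err != nil {
	log.Fatal(err)
}
if err := c.Join(addr); err != nil {
	log.Fatal(err) // PeerAdd on the destination cluster failed
}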

func (*Cluster) PeerAdd

func (c *Cluster) PeerAdd(pid peer.ID) (api.ID, error)

PeerAdd adds a new peer to this Cluster.

For it to work well, the new peer should be discoverable (part of our peerstore or connected to one of the existing peers) and reachable. Since PeerAdd allows adding peers which are not running or reachable, it is recommended to call Join() from the new peer instead.

The new peer ID will be passed to the consensus component to be added to the peerset.

func (*Cluster) PeerRemove

func (c *Cluster) PeerRemove(pid peer.ID) error

PeerRemove removes a peer from this Cluster.

The peer will be removed from the consensus peerset. This may first trigger repinnings for all content if not disabled.

func (*Cluster) Peers

func (c *Cluster) Peers() []api.ID

Peers returns the IDs of the members of this Cluster.

func (*Cluster) Pin

func (c *Cluster) Pin(pin api.Pin) error

Pin makes the cluster Pin a Cid. This implies adding the Cid to the IPFS Cluster peers' shared state. Depending on the cluster pinning strategy, the PinTracker may then request the IPFS daemon to pin the Cid.

Pin returns an error if the operation could not be persisted to the global state. Pin does not reflect the success or failure of underlying IPFS daemon pinning operations.

If the argument's allocations are non-empty then these peers are pinned with priority over other peers in the cluster. If the max repl factor is less than the size of the specified peerset then peers are chosen from this set in allocation order. If the min repl factor is greater than the size of this set then the remaining peers are allocated in order from the rest of the cluster. Priority allocations are best effort. If any priority peers are unavailable then Pin will simply allocate from the rest of the cluster.
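
A hedged sketch of the priority behaviour described above, assuming the api.PinCid helper and the Allocations and ReplicationFactorMin/Max fields as declared in the api subpackage:

// Sketch: pin h (a cid.Cid) with two priority peers and replication
// between 2 and 3. With a max factor of 3, both priority peers are
// used (best effort) and one more allocation comes from the rest of
// the cluster.
pin := api.PinCid(h)
pin.ReplicationFactorMin = 2
pin.ReplicationFactorMax = 3
pin.Allocations = []peer.ID{peerA, peerB} // priority peers
if err := c.Pin(pin); err != nil {
	log.Fatal(err) // the pin could not be persisted to the global state
}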

func (*Cluster) PinGet

func (c *Cluster) PinGet(h cid.Cid) (api.Pin, error)

PinGet returns information for a single Cid managed by Cluster. The information is obtained from the current global state. The returned api.Pin provides information about the allocations assigned for the requested Cid, but does not indicate if the item is successfully pinned. For that, use Status(). PinGet returns an error if the given Cid is not part of the global state.

func (*Cluster) Pins

func (c *Cluster) Pins() []api.Pin

Pins returns the list of Cids managed by Cluster and which are part of the current global state. This is the source of truth as to which pins are managed and their allocation, but does not indicate if the item is successfully pinned. For that, use StatusAll().

func (*Cluster) Ready

func (c *Cluster) Ready() <-chan struct{}

Ready returns a channel which signals when this peer is fully initialized (including consensus).

func (*Cluster) Recover

func (c *Cluster) Recover(h cid.Cid) (api.GlobalPinInfo, error)

Recover triggers a recover operation for a given Cid in all cluster peers.

func (*Cluster) RecoverAllLocal

func (c *Cluster) RecoverAllLocal() ([]api.PinInfo, error)

RecoverAllLocal triggers a RecoverLocal operation for all Cids tracked by this peer.

func (*Cluster) RecoverLocal

func (c *Cluster) RecoverLocal(h cid.Cid) (pInfo api.PinInfo, err error)

RecoverLocal triggers a recover operation for a given Cid in this peer only. It returns the updated PinInfo, after recovery.

func (*Cluster) Shutdown

func (c *Cluster) Shutdown() error

Shutdown stops the IPFS Cluster components.

func (*Cluster) StateSync

func (c *Cluster) StateSync() error

StateSync syncs the consensus state to the Pin Tracker, ensuring that every Cid in the shared state is tracked and that the Pin Tracker is not tracking more Cids than it should.

func (*Cluster) Status

func (c *Cluster) Status(h cid.Cid) (api.GlobalPinInfo, error)

Status returns the GlobalPinInfo for a given Cid as fetched from all current peers. If an error happens, the GlobalPinInfo should contain as much information as could be fetched from the other peers.

func (*Cluster) StatusAll

func (c *Cluster) StatusAll() ([]api.GlobalPinInfo, error)

StatusAll returns the GlobalPinInfo for all tracked Cids in all peers. If an error happens, the slice will contain as much information as could be fetched from other peers.

func (*Cluster) StatusAllLocal

func (c *Cluster) StatusAllLocal() []api.PinInfo

StatusAllLocal returns the PinInfo for all the tracked Cids in this peer.

func (*Cluster) StatusLocal

func (c *Cluster) StatusLocal(h cid.Cid) api.PinInfo

StatusLocal returns this peer's PinInfo for a given Cid.

func (*Cluster) Sync

func (c *Cluster) Sync(h cid.Cid) (api.GlobalPinInfo, error)

Sync triggers a SyncLocal() operation for a given Cid in all cluster peers.

func (*Cluster) SyncAll

func (c *Cluster) SyncAll() ([]api.GlobalPinInfo, error)

SyncAll triggers SyncAllLocal() operations in all cluster peers, making sure that the state of tracked items matches the state reported by the IPFS daemon and returning the results as GlobalPinInfo. If an error happens, the slice will contain as much information as could be fetched from the peers.

func (*Cluster) SyncAllLocal

func (c *Cluster) SyncAllLocal() ([]api.PinInfo, error)

SyncAllLocal makes sure that the current state for all tracked items in this peer matches the state reported by the IPFS daemon.

SyncAllLocal returns the list of PinInfo that were updated because of the operation, along with those in error states.

func (*Cluster) SyncKey

func (c *Cluster) SyncKey(uid string) error

SyncKey finds the Key of the member of this Cluster.

func (*Cluster) SyncLocal

func (c *Cluster) SyncLocal(h cid.Cid) (pInfo api.PinInfo, err error)

SyncLocal performs a local sync operation for the given Cid. This will tell the tracker to verify the status of the Cid against the IPFS daemon. It returns the updated PinInfo for the Cid.

func (*Cluster) SyncUidRenew

func (c *Cluster) SyncUidRenew(l []string) (api.UIDRenew, error)

SyncUidRenew renames the Key of the member of this Cluster.

func (*Cluster) Unpin

func (c *Cluster) Unpin(h cid.Cid) error

Unpin makes the cluster Unpin a Cid. This implies removing the Cid from the IPFS Cluster peers' shared state.

Unpin returns an error if the operation could not be persisted to the global state. Unpin does not reflect the success or failure of underlying IPFS daemon unpinning operations.

func (*Cluster) Version

func (c *Cluster) Version() string

Version returns the current IPFS Cluster version.

type Component

type Component interface {
	SetClient(*rpc.Client)
	Shutdown() error
}

Component represents a piece of ipfscluster. Cluster components usually run their own goroutines (an HTTP server, for example). They communicate with the main Cluster component and other components (both local and remote) using an instance of rpc.Client.
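
A minimal sketch of a Component implementation (rpc refers to the go-libp2p-gorpc package):

// noopComponent is a do-nothing Component. Real components usually
// run their own goroutines and use the rpc.Client to reach other
// components.
type noopComponent struct {
	rpcClient *rpc.Client
}

// SetClient stores the client used to communicate with other components.
func (n *noopComponent) SetClient(c *rpc.Client) { n.rpcClient = c }

// Shutdown stops the component's goroutines and releases resources.
func (n *noopComponent) Shutdown() error { return nil }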

type Config

type Config struct {
	config.Saver

	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component).
	ID         peer.ID
	PrivateKey crypto.PrivKey

	// User-defined peername for use as human-readable identifier.
	Peername string

	// Cluster secret for private network. Peers will be in the same cluster if and
	// only if they have the same ClusterSecret. The cluster secret must be exactly
	// 64 characters and contain only hexadecimal characters (`[0-9a-f]`).
	Secret []byte

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool

	// Listen parameters for the Cluster libp2p Host. Used by
	// the RPC and Consensus components.
	ListenAddr ma.Multiaddr

	// Time between syncs of the consensus state to the
	// tracker state. Normally states are synced anyway, but this helps
	// when new nodes are joining the cluster. Reduce for faster
	// consistency, increase with larger states.
	StateSyncInterval time.Duration

	// Number of seconds between syncs of the local state and
	// the state of the ipfs daemon. This ensures that cluster
	// provides the right status for tracked items (for example
	// to detect that a pin has been removed). Reduce for faster
	// consistency, increase when the number of pinned items is very
	// large.
	IPFSSyncInterval time.Duration

	// ReplicationFactorMax indicates the target number of nodes
	// that should pin content. For example, a replication_factor of
	// 3 will have cluster allocate each pinned hash to 3 peers if
	// possible.
	// See also ReplicationFactorMin. A ReplicationFactorMax of -1
	// will allocate to every available node.
	ReplicationFactorMax int

	// ReplicationFactorMin indicates the minimum number of healthy
	// nodes pinning content. If the number of nodes available to pin
	// is less than this threshold, an error will be returned.
	// In the case of peer health issues, content pinned will be
	// re-allocated if the threshold is crossed.
	// For example, a ReplicationFactorMin of 2 will allocate at least
	// two peers to hold content, and return an error if this is not
	// possible.
	ReplicationFactorMin int

	// MonitorPingInterval is the frequency with which a cluster peer pings
	// the monitoring component. The ping metric has a TTL set to the double
	// of this value.
	MonitorPingInterval time.Duration

	// PeerWatchInterval is the frequency that we use to watch for changes
	// in the consensus peerset and save new peers to the configuration
	// file. This also affects how soon we realize that we have
	// been removed from a cluster.
	PeerWatchInterval time.Duration

	// If true, DisableRepinning ensures that no repinning happens
	// when a node goes down.
	// This is useful when doing certain types of maintenance, or simply
	// when not wanting to rely on the monitoring system which needs a revamp.
	DisableRepinning bool

	// Peerstore file specifies the file on which we persist the
	// libp2p host peerstore addresses. This file is regularly saved.
	PeerstoreFile string
	// contains filtered or unexported fields
}

Config is the configuration object containing customizable variables to initialize the main ipfs-cluster component. It implements the config.ComponentConfig interface.

func (*Config) ConfigKey

func (cfg *Config) ConfigKey() string

ConfigKey returns a human-readable string to identify a cluster Config.

func (*Config) Default

func (cfg *Config) Default() error

Default fills in all the Config fields with default working values. This means it will generate a valid random ID, PrivateKey, and Secret.
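
A sketch of building a configuration programmatically:

// Generate a default configuration, customize it, and check it.
cfg := &ipfscluster.Config{}
if err := cfg.Default(); err != nil { // random ID, PrivateKey and Secret
	log.Fatal(err)
}
cfg.Peername = "my-peer"
cfg.LeaveOnShutdown = true
if err := cfg.Validate(); err != nil {
	log.Fatal(err)
}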

func (*Config) GetPeerstorePath

func (cfg *Config) GetPeerstorePath() string

GetPeerstorePath returns the full path of the PeerstoreFile, obtained by concatenating that value with BaseDir of the configuration, if set. An empty string is returned when BaseDir is not set.

func (*Config) LoadJSON

func (cfg *Config) LoadJSON(raw []byte) error

LoadJSON receives a raw json-formatted configuration and sets the Config fields from it. Note that it should be JSON as generated by ToJSON().

func (*Config) ToJSON

func (cfg *Config) ToJSON() (raw []byte, err error)

ToJSON generates a human-friendly version of Config.

func (*Config) Validate

func (cfg *Config) Validate() error

Validate will check that the values of this config seem to be working ones.

type Consensus

type Consensus interface {
	Component
	// Returns a channel to signal that the consensus layer is ready
	// allowing the main component to wait for it during start.
	Ready() <-chan struct{}
	// Logs a pin operation
	LogPin(c api.Pin) error
	// Logs an unpin operation
	LogUnpin(c api.Pin) error
	AddPeer(p peer.ID) error
	RmPeer(p peer.ID) error
	State() (state.State, error)
	// Provides a node which is responsible for performing
	// specific tasks which must only run in one cluster peer.
	Leader() (peer.ID, error)
	// Only returns when the consensus state has all log
	// updates applied to it
	WaitForSync() error
	// Clean removes all consensus data
	Clean() error
	// Peers returns the peerset participating in the Consensus
	Peers() ([]peer.ID, error)
}

Consensus is a component which keeps a shared state in IPFS Cluster and triggers actions on updates to that state. Currently, Consensus needs to be able to elect/provide a Cluster Leader, and the implementation is tightly coupled to the Cluster main component.

type IPFSConnector

type IPFSConnector interface {
	Component
	ID() (api.IPFSID, error)
	Pin(context.Context, cid.Cid, int) error
	Unpin(context.Context, cid.Cid) error
	PinLsCid(context.Context, cid.Cid) (api.IPFSPinStatus, error)
	PinLs(ctx context.Context, typeFilter string) (map[string]api.IPFSPinStatus, error)
	// ConnectSwarms makes sure this peer's IPFS daemon is connected to
	// other peers' IPFS daemons.
	ConnectSwarms() error
	// SwarmPeers returns the IPFS daemon's swarm peers.
	SwarmPeers() (api.SwarmPeers, error)
	// ConfigKey returns the value for a configuration key.
	// Subobjects are reached with keypaths as "Parent/Child/GrandChild...".
	ConfigKey(keypath string) (interface{}, error)
	// RepoStat returns the current repository size and max limit as
	// provided by "repo stat".
	RepoStat() (api.IPFSRepoStat, error)
	// BlockPut directly adds a block of data to the IPFS repo.
	BlockPut(api.NodeWithMeta) error
	// BlockGet retrieves the raw data of an IPFS block.
	BlockGet(cid.Cid) ([]byte, error)
	// UidNew registers a uid in the hive cluster.
	UidNew(name string) (api.UIDSecret, error)
	// UidRenew is used to change a uid.
	UidRenew([]string) (api.UIDRenew, error)
	// UidInfo gets the Name and Id for a uid.
	UidInfo(uid string) (api.UIDSecret, error)
	// UidLogin logs into the server and creates the home directory.
	UidLogin([]string) error
	// FileGet downloads a file from the IPFS service.
	FileGet(fg []string) ([]byte, error)
	// FilesCp copies a file.
	FilesCp([]string) error
	// FilesFlush flushes the data of the given paths.
	FilesFlush([]string) error
	// FilesLs lists files.
	FilesLs([]string) (api.FilesLs, error)
	// FilesMkdir creates a directory in the IPFS peer.
	FilesMkdir([]string) error
	// FilesMv moves a file in the IPFS peer.
	FilesMv([]string) error
	// FilesRead reads a file.
	FilesRead([]string) ([]byte, error)
	// FilesRm removes a file or directory from the IPFS peer.
	FilesRm([]string) error
	// FilesStat fetches file statistics.
	FilesStat([]string) (api.FilesStat, error)
	// FilesWrite writes a file.
	FilesWrite(api.FilesWrite) error
	// NamePublish publishes an IPFS path under a uid.
	NamePublish(np []string) (api.NamePublish, error)
}

IPFSConnector is a component which allows cluster to interact with an IPFS daemon. This is a base component.

type Informer

type Informer interface {
	Component
	Name() string
	GetMetric() api.Metric
}

Informer provides Metric information from a peer. The metrics produced by informers are then passed to a PinAllocator which will use them to determine where to pin content. The metric is agnostic to the rest of Cluster.

type PeerMonitor

type PeerMonitor interface {
	Component
	// LogMetric stores a metric. It can be used to manually inject
	// a metric to a monitor.
	LogMetric(api.Metric) error
	// PublishMetric sends a metric to the rest of the peers.
	// How to send it, and to who, is to be decided by the implementation.
	PublishMetric(api.Metric) error
	// LatestMetrics returns the latest metrics of matching name
	// for the current cluster peers.
	LatestMetrics(name string) []api.Metric
	// Alerts delivers alerts generated when this peer monitor detects
	// a problem (i.e. metrics not arriving as expected). Alerts can be used
	// to trigger self-healing measures or re-pinnings of content.
	Alerts() <-chan api.Alert
}

PeerMonitor is a component in charge of publishing a peer's metrics and reading metrics from other peers in the cluster. The PinAllocator will use the metrics provided by the monitor as candidates for Pin allocations.

The PeerMonitor component also provides an Alert channel which is signaled when a metric is no longer received and the monitor identifies it as a problem.
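
A sketch of consuming the alert channel, where mon is any PeerMonitor implementation:

// React to peers whose metrics stopped arriving.
go func() {
	for alert := range mon.Alerts() {
		log.Printf("monitor alert: %+v", alert)
		// Trigger self-healing measures or re-pinning here.
	}
}()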

type Peered

type Peered interface {
	AddPeer(p peer.ID)
	RmPeer(p peer.ID)
}

Peered represents a component which needs to be aware of the peers in the Cluster and of any changes to the peer set.

type PinAllocator

type PinAllocator interface {
	Component
	// Allocate returns the list of peers that should be assigned to
	// Pin content in order of preference (from the most preferred to the
	// least). The "current" map contains valid metrics for peers
	// which are currently pinning the content. The candidates map
	// contains the metrics for all peers which are eligible for pinning
	// the content.
	Allocate(c cid.Cid, current, candidates, priority map[peer.ID]api.Metric) ([]peer.ID, error)
}

PinAllocator decides where to pin certain content. In order to make such decision, it receives the pin arguments, the peers which are currently allocated to the content and metrics available for all peers which could allocate the content.
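
As a sketch, a trivial allocator that ignores metric values and returns peers in arbitrary map-iteration order, embedding the hypothetical noopComponent from the Component example above to satisfy the interface:

// arbitraryAllocator has no ordering policy at all; a real allocator
// would sort candidates by their metric values and avoid duplicates.
type arbitraryAllocator struct{ noopComponent }

func (a *arbitraryAllocator) Allocate(
	c cid.Cid,
	current, candidates, priority map[peer.ID]api.Metric,
) ([]peer.ID, error) {
	var out []peer.ID
	for p := range priority { // priority peers go first
		out = append(out, p)
	}
	for p := range candidates {
		out = append(out, p)
	}
	return out, nil
}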

type PinTracker

type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision.
	// The tracker may decide to perform an IPFS pin.
	Track(api.Pin) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(cid.Cid) error
	// StatusAll returns the list of pins with their local status.
	StatusAll() []api.PinInfo
	// Status returns the local status of a given Cid.
	Status(cid.Cid) api.PinInfo
	// SyncAll makes sure that all tracked Cids reflect the real IPFS status.
	// It returns the list of pins which were updated by the call.
	SyncAll() ([]api.PinInfo, error)
	// Sync makes sure that the Cid status reflects the real IPFS status.
	// It returns the local status of the Cid.
	Sync(cid.Cid) (api.PinInfo, error)
	// RecoverAll calls Recover() for all pins tracked.
	RecoverAll() ([]api.PinInfo, error)
	// Recover retriggers a Pin/Unpin operation on a Cid with error status.
	Recover(cid.Cid) (api.PinInfo, error)
}

PinTracker represents a component which tracks the status of the pins in this cluster and ensures they are in sync with the IPFS daemon. This component should be thread safe.

type RPCAPI

type RPCAPI struct {
	// contains filtered or unexported fields
}

RPCAPI is a go-libp2p-gorpc service which provides the internal ipfs-cluster API, which enables components and cluster peers to communicate and request actions from each other.

The RPC API methods are usually redirects to the actual methods in the different components of ipfs-cluster, with very little added logic. Refer to documentation on those methods for details on their behaviour.
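
A sketch of invoking it from a component, assuming the RPCAPI service is registered under the name "Cluster" and rpcClient is a go-libp2p-gorpc client:

// Ask a peer for its cluster ID via RPC.
var id api.IDSerial
err := rpcClient.Call(
	pid,        // destination peer.ID
	"Cluster",  // service name the RPCAPI is registered under
	"ID",       // method: RPCAPI.ID
	struct{}{}, // no arguments
	&id,        // reply
)
if err != nil {
	log.Fatal(err)
}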

func (*RPCAPI) BlockAllocate

func (rpcapi *RPCAPI) BlockAllocate(ctx context.Context, in api.PinSerial, out *[]string) error

BlockAllocate returns allocations for blocks. This is used in the adders. It's different from pin allocations when ReplicationFactor < 0.

func (*RPCAPI) ConnectGraph

func (rpcapi *RPCAPI) ConnectGraph(ctx context.Context, in struct{}, out *api.ConnectGraphSerial) error

ConnectGraph runs Cluster.ConnectGraph().

func (*RPCAPI) ConsensusAddPeer

func (rpcapi *RPCAPI) ConsensusAddPeer(ctx context.Context, in peer.ID, out *struct{}) error

ConsensusAddPeer runs Consensus.AddPeer().

func (*RPCAPI) ConsensusLogPin

func (rpcapi *RPCAPI) ConsensusLogPin(ctx context.Context, in api.PinSerial, out *struct{}) error

ConsensusLogPin runs Consensus.LogPin().

func (*RPCAPI) ConsensusLogUnpin

func (rpcapi *RPCAPI) ConsensusLogUnpin(ctx context.Context, in api.PinSerial, out *struct{}) error

ConsensusLogUnpin runs Consensus.LogUnpin().

func (*RPCAPI) ConsensusPeers

func (rpcapi *RPCAPI) ConsensusPeers(ctx context.Context, in struct{}, out *[]peer.ID) error

ConsensusPeers runs Consensus.Peers().

func (*RPCAPI) ConsensusRmPeer

func (rpcapi *RPCAPI) ConsensusRmPeer(ctx context.Context, in peer.ID, out *struct{}) error

ConsensusRmPeer runs Consensus.RmPeer().

func (*RPCAPI) FindKey

func (rpcapi *RPCAPI) FindKey(ctx context.Context, in string, out *api.UIDKey) error

FindKey finds a user key in the IPFS keystore.

func (*RPCAPI) ID

func (rpcapi *RPCAPI) ID(ctx context.Context, in struct{}, out *api.IDSerial) error

ID runs Cluster.ID().

func (*RPCAPI) IPFSBlockGet

func (rpcapi *RPCAPI) IPFSBlockGet(ctx context.Context, in api.PinSerial, out *[]byte) error

IPFSBlockGet runs IPFSConnector.BlockGet().

func (*RPCAPI) IPFSBlockPut

func (rpcapi *RPCAPI) IPFSBlockPut(ctx context.Context, in api.NodeWithMeta, out *struct{}) error

IPFSBlockPut runs IPFSConnector.BlockPut().

func (*RPCAPI) IPFSConfigKey

func (rpcapi *RPCAPI) IPFSConfigKey(ctx context.Context, in string, out *interface{}) error

IPFSConfigKey runs IPFSConnector.ConfigKey().

func (*RPCAPI) IPFSConnectSwarms

func (rpcapi *RPCAPI) IPFSConnectSwarms(ctx context.Context, in struct{}, out *struct{}) error

IPFSConnectSwarms runs IPFSConnector.ConnectSwarms().

func (*RPCAPI) IPFSFileGet

func (rpcapi *RPCAPI) IPFSFileGet(ctx context.Context, in []string, out *[]byte) error

IPFSFileGet runs IPFSConnector.FileGet().

func (*RPCAPI) IPFSFilesCp

func (rpcapi *RPCAPI) IPFSFilesCp(ctx context.Context, in []string, out *struct{}) error

IPFSFilesCp runs IPFSConnector.FilesCp().

func (*RPCAPI) IPFSFilesFlush

func (rpcapi *RPCAPI) IPFSFilesFlush(ctx context.Context, in []string, out *struct{}) error

IPFSFilesFlush runs IPFSConnector.FilesFlush().

func (*RPCAPI) IPFSFilesLs

func (rpcapi *RPCAPI) IPFSFilesLs(ctx context.Context, in []string, out *api.FilesLs) error

IPFSFilesLs runs IPFSConnector.FilesLs().

func (*RPCAPI) IPFSFilesMkdir

func (rpcapi *RPCAPI) IPFSFilesMkdir(ctx context.Context, in []string, out *struct{}) error

IPFSFilesMkdir runs IPFSConnector.FilesMkdir().

func (*RPCAPI) IPFSFilesMv

func (rpcapi *RPCAPI) IPFSFilesMv(ctx context.Context, in []string, out *struct{}) error

IPFSFilesMv runs IPFSConnector.FilesMv().

func (*RPCAPI) IPFSFilesRead

func (rpcapi *RPCAPI) IPFSFilesRead(ctx context.Context, in []string, out *[]byte) error

IPFSFilesRead runs IPFSConnector.FilesRead().

func (*RPCAPI) IPFSFilesRm

func (rpcapi *RPCAPI) IPFSFilesRm(ctx context.Context, in []string, out *struct{}) error

IPFSFilesRm runs IPFSConnector.FilesRm().

func (*RPCAPI) IPFSFilesStat

func (rpcapi *RPCAPI) IPFSFilesStat(ctx context.Context, in []string, out *api.FilesStat) error

IPFSFilesStat runs IPFSConnector.FilesStat().

func (*RPCAPI) IPFSFilesWrite

func (rpcapi *RPCAPI) IPFSFilesWrite(ctx context.Context, in api.FilesWrite, out *struct{}) error

IPFSFilesWrite runs IPFSConnector.FilesWrite().

func (*RPCAPI) IPFSNamePublish

func (rpcapi *RPCAPI) IPFSNamePublish(ctx context.Context, in []string, out *api.NamePublish) error

IPFSNamePublish runs IPFSConnector.NamePublish().

func (*RPCAPI) IPFSPin

func (rpcapi *RPCAPI) IPFSPin(ctx context.Context, in api.PinSerial, out *struct{}) error

IPFSPin runs IPFSConnector.Pin().

func (*RPCAPI) IPFSPinLs

func (rpcapi *RPCAPI) IPFSPinLs(ctx context.Context, in string, out *map[string]api.IPFSPinStatus) error

IPFSPinLs runs IPFSConnector.PinLs().

func (*RPCAPI) IPFSPinLsCid

func (rpcapi *RPCAPI) IPFSPinLsCid(ctx context.Context, in api.PinSerial, out *api.IPFSPinStatus) error

IPFSPinLsCid runs IPFSConnector.PinLsCid().

func (*RPCAPI) IPFSRepoStat

func (rpcapi *RPCAPI) IPFSRepoStat(ctx context.Context, in struct{}, out *api.IPFSRepoStat) error

IPFSRepoStat runs IPFSConnector.RepoStat().

func (*RPCAPI) IPFSSwarmPeers

func (rpcapi *RPCAPI) IPFSSwarmPeers(ctx context.Context, in struct{}, out *api.SwarmPeersSerial) error

IPFSSwarmPeers runs IPFSConnector.SwarmPeers().

func (*RPCAPI) IPFSUnpin

func (rpcapi *RPCAPI) IPFSUnpin(ctx context.Context, in api.PinSerial, out *struct{}) error

IPFSUnpin runs IPFSConnector.Unpin().

func (*RPCAPI) Join

func (rpcapi *RPCAPI) Join(ctx context.Context, in api.MultiaddrSerial, out *struct{}) error

Join runs Cluster.Join().

func (*RPCAPI) PeerAdd

func (rpcapi *RPCAPI) PeerAdd(ctx context.Context, in string, out *api.IDSerial) error

PeerAdd runs Cluster.PeerAdd().

func (*RPCAPI) PeerMonitorLatestMetrics

func (rpcapi *RPCAPI) PeerMonitorLatestMetrics(ctx context.Context, in string, out *[]api.Metric) error

PeerMonitorLatestMetrics runs PeerMonitor.LatestMetrics().

func (*RPCAPI) PeerMonitorLogMetric

func (rpcapi *RPCAPI) PeerMonitorLogMetric(ctx context.Context, in api.Metric, out *struct{}) error

PeerMonitorLogMetric runs PeerMonitor.LogMetric().

func (*RPCAPI) PeerRemove

func (rpcapi *RPCAPI) PeerRemove(ctx context.Context, in peer.ID, out *struct{}) error

PeerRemove runs Cluster.PeerRemove().

func (*RPCAPI) Peers

func (rpcapi *RPCAPI) Peers(ctx context.Context, in struct{}, out *[]api.IDSerial) error

Peers runs Cluster.Peers().

func (*RPCAPI) Pin

func (rpcapi *RPCAPI) Pin(ctx context.Context, in api.PinSerial, out *struct{}) error

Pin runs Cluster.Pin().

func (*RPCAPI) PinGet

func (rpcapi *RPCAPI) PinGet(ctx context.Context, in api.PinSerial, out *api.PinSerial) error

PinGet runs Cluster.PinGet().

func (*RPCAPI) Pins

func (rpcapi *RPCAPI) Pins(ctx context.Context, in struct{}, out *[]api.PinSerial) error

Pins runs Cluster.Pins().

func (*RPCAPI) Recover

func (rpcapi *RPCAPI) Recover(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Recover runs Cluster.Recover().

func (*RPCAPI) RecoverAllLocal

func (rpcapi *RPCAPI) RecoverAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

RecoverAllLocal runs Cluster.RecoverAllLocal().

func (*RPCAPI) RecoverLocal

func (rpcapi *RPCAPI) RecoverLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

RecoverLocal runs Cluster.RecoverLocal().

func (*RPCAPI) SendInformerMetric

func (rpcapi *RPCAPI) SendInformerMetric(ctx context.Context, in struct{}, out *api.Metric) error

SendInformerMetric runs Cluster.sendInformerMetric().

func (*RPCAPI) Status

func (rpcapi *RPCAPI) Status(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Status runs Cluster.Status().

func (*RPCAPI) StatusAll

func (rpcapi *RPCAPI) StatusAll(ctx context.Context, in struct{}, out *[]api.GlobalPinInfoSerial) error

StatusAll runs Cluster.StatusAll().

func (*RPCAPI) StatusAllLocal

func (rpcapi *RPCAPI) StatusAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

StatusAllLocal runs Cluster.StatusAllLocal().

func (*RPCAPI) StatusLocal

func (rpcapi *RPCAPI) StatusLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

StatusLocal runs Cluster.StatusLocal().

func (*RPCAPI) Sync

func (rpcapi *RPCAPI) Sync(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Sync runs Cluster.Sync().

func (*RPCAPI) SyncAll

func (rpcapi *RPCAPI) SyncAll(ctx context.Context, in struct{}, out *[]api.GlobalPinInfoSerial) error

SyncAll runs Cluster.SyncAll().

func (*RPCAPI) SyncAllLocal

func (rpcapi *RPCAPI) SyncAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

SyncAllLocal runs Cluster.SyncAllLocal().

func (*RPCAPI) SyncKey

func (rpcapi *RPCAPI) SyncKey(ctx context.Context, in string, out *struct{}) error

SyncKey runs Cluster.SyncKey().

func (*RPCAPI) SyncLocal

func (rpcapi *RPCAPI) SyncLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

SyncLocal runs Cluster.SyncLocal().

func (*RPCAPI) SyncUidRenew

func (rpcapi *RPCAPI) SyncUidRenew(ctx context.Context, in []string, out *api.UIDRenew) error

SyncUidRenew runs Cluster.SyncUidRenew().

func (*RPCAPI) Track

func (rpcapi *RPCAPI) Track(ctx context.Context, in api.PinSerial, out *struct{}) error

Track runs PinTracker.Track().

func (*RPCAPI) TrackerRecover

func (rpcapi *RPCAPI) TrackerRecover(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

TrackerRecover runs PinTracker.Recover().

func (*RPCAPI) TrackerRecoverAll

func (rpcapi *RPCAPI) TrackerRecoverAll(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

TrackerRecoverAll runs PinTracker.RecoverAll().

func (*RPCAPI) TrackerStatus

func (rpcapi *RPCAPI) TrackerStatus(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

TrackerStatus runs PinTracker.Status().

func (*RPCAPI) TrackerStatusAll

func (rpcapi *RPCAPI) TrackerStatusAll(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

TrackerStatusAll runs PinTracker.StatusAll().

func (*RPCAPI) UidInfo

func (rpcapi *RPCAPI) UidInfo(ctx context.Context, in string, out *api.UIDSecret) error

UidInfo runs IPFSConnector.UidInfo().

func (*RPCAPI) UidLogin

func (rpcapi *RPCAPI) UidLogin(ctx context.Context, in []string, out *struct{}) error

UidLogin runs IPFSConnector.UidLogin().

func (*RPCAPI) UidNew

func (rpcapi *RPCAPI) UidNew(ctx context.Context, in string, out *api.UIDSecret) error

UidNew runs IPFSConnector.UidNew().

func (*RPCAPI) UidRenew

func (rpcapi *RPCAPI) UidRenew(ctx context.Context, in []string, out *api.UIDRenew) error

UidRenew runs IPFSConnector.UidRenew().

func (*RPCAPI) Unpin

func (rpcapi *RPCAPI) Unpin(ctx context.Context, in api.PinSerial, out *struct{}) error

Unpin runs Cluster.Unpin().

func (*RPCAPI) Untrack

func (rpcapi *RPCAPI) Untrack(ctx context.Context, in api.PinSerial, out *struct{}) error

Untrack runs PinTracker.Untrack().

func (*RPCAPI) Version

func (rpcapi *RPCAPI) Version(ctx context.Context, in struct{}, out *api.Version) error

Version runs Cluster.Version().

Directories

Path Synopsis

adder
    Package adder implements functionality to add content to IPFS daemons managed by the Cluster.
adder/adderutils
    Package adderutils provides some utilities for adding content to cluster.
adder/ipfsadd
    Package ipfsadd is a simplified copy of go-ipfs/core/coreunix/add.go.
adder/local
    Package local implements a ClusterDAGService that chunks and adds content to a local peer, before pinning it.
adder/sharding
    Package sharding implements a sharding ClusterDAGService that places content in different shards while it's being added, creating a final Cluster DAG and pinning it.
allocator/ascendalloc
    Package ascendalloc implements an ipfscluster.PinAllocator which returns allocations based on sorting the metrics in ascending order.
allocator/descendalloc
    Package descendalloc implements an ipfscluster.PinAllocator which returns allocations based on sorting the metrics in descending order.
allocator/util
    Package util is a utility package used by the allocator implementations.
api
    Package api holds declarations for types used in ipfs-cluster APIs to make them re-usable across different tools.
api/rest
    Package rest implements an IPFS Cluster API component.
api/rest/client
    Package client provides a Go Client for the IPFS Cluster API provided by the "api/rest" component.
cmd/ipfs-cluster-ctl
    The ipfs-cluster-ctl application.
cmd/ipfs-cluster-service
    The ipfs-cluster-service application.
config
    Package config provides interfaces and utilities for different Cluster components to register, read, write and validate configuration sections stored in a central configuration file.
consensus/raft
    Package raft implements a Consensus component for IPFS Cluster which uses Raft (go-libp2p-raft).
informer/disk
    Package disk implements an ipfs-cluster informer which can provide different disk-related metrics from the IPFS daemon as an api.Metric.
informer/numpin
    Package numpin implements an ipfs-cluster informer which determines how many items this peer is pinning and returns it as an api.Metric.
ipfsconn/ipfshttp
    Package ipfshttp implements an IPFS Cluster IPFSConnector component.
monitor/basic
    Package basic implements a basic PeerMonitor component for IPFS Cluster.
monitor/metrics
    Package metrics provides common functionality for working with metrics, particularly useful for monitoring components.
monitor/pubsubmon
    Package pubsubmon implements a PeerMonitor component for IPFS Cluster that uses PubSub to send and receive metrics.
pintracker/maptracker
    Package maptracker implements a PinTracker component for IPFS Cluster.
pintracker/optracker
    Package optracker implements functionality to track the status of pins and pin operations, as needed by implementations of the pintracker component.
pintracker/stateless
    Package stateless implements a PinTracker component for IPFS Cluster, which aims to reduce the memory footprint when handling really large cluster states.
pstoremgr
    Package pstoremgr provides a Manager that simplifies handling addition, listing and removal of cluster peer multiaddresses from the libp2p Host.
rpcutil
    Package rpcutil provides utility methods to perform go-libp2p-gorpc calls, particularly gorpc.MultiCall().
state
    Package state holds the interface that any state implementation for IPFS Cluster must satisfy.
state/mapstate
    Package mapstate implements the State interface for IPFS Cluster by using a map to keep track of the consensus-shared state.
test
    Package test offers testing utilities for all the IPFS Cluster codebase, like IPFS daemon and RPC mocks and pre-defined testing CIDs.
