ipfscluster

README

ipfs-cluster

Collective pinning and composition for IPFS.

WORK IN PROGRESS

DO NOT USE IN PRODUCTION

ipfs-cluster is a tool which groups a number of IPFS nodes together, allowing them to collectively perform operations such as pinning.

To do so, IPFS Cluster nodes use a libp2p-based consensus algorithm (currently Raft) to agree on a log of operations and build a consistent state across the cluster. The state represents which objects should be pinned by which nodes.

Additionally, cluster nodes act as a proxy/wrapper to the IPFS API, so each can be used as a regular IPFS node, with the difference that pin add, pin rm and pin ls requests are handled by the Cluster.

IPFS Cluster provides a cluster-node application (ipfs-cluster-service), a Go API, an HTTP API and a command-line tool (ipfs-cluster-ctl).

Current functionality only allows pinning in all cluster peers, but more strategies (like setting a replication factor for each pin) will be developed.

Background

Since the start of IPFS it was clear that a tool to coordinate a number of different nodes (and the content they are supposed to store) would add great value to the IPFS ecosystem. Naïve approaches are possible, but they present weaknesses, especially when dealing with error handling, recovery and the implementation of advanced pinning strategies.

ipfs-cluster aims to address these issues by providing an IPFS node wrapper which coordinates multiple cluster peers via a consensus algorithm. This ensures that the desired state of the system is always agreed upon and can be easily maintained by the cluster peers. Thus, every cluster node knows which content is tracked, can decide whether to ask IPFS to pin it, and can react to contingencies like node reboots.

Maintainers and Roadmap

This project is captained by @hsanjuan. See the captain's log for a written summary of current status and upcoming features. You can also check out the project's Roadmap for a high level overview of what's coming and the project's Waffle Board to see what issues are being worked on at the moment.

Install

To install the ipfs-cluster-service and ipfs-cluster-ctl tools, download this repository and run make as follows:

$ go get -u -d github.com/ipfs/ipfs-cluster
$ cd $GOPATH/src/github.com/ipfs/ipfs-cluster
$ make install

This will install ipfs-cluster-service and ipfs-cluster-ctl in your $GOPATH/bin folder.

Usage

ipfs-cluster-service

ipfs-cluster-service runs a cluster peer. Usage information can be obtained by running:

$ ipfs-cluster-service -h

Before running ipfs-cluster-service for the first time, initialize a configuration file with:

$ ipfs-cluster-service -init

The configuration will be placed in ~/.ipfs-cluster/service.json by default.

You can add the multiaddresses of the other cluster peers to the bootstrap variable. For example, here is a valid configuration for a single-peer cluster:

{
    "id": "QmXMhZ53zAoes8TYbKGn3rnm5nfWs5Wdu41Fhhfw9XmM5A",
    "private_key": "<redacted>",
    "cluster_peers": [],
    "bootstrap": [],
    "leave_on_shutdown": false,
    "cluster_multiaddress": "/ip4/0.0.0.0/tcp/9096",
    "api_listen_multiaddress": "/ip4/127.0.0.1/tcp/9094",
    "ipfs_proxy_listen_multiaddress": "/ip4/127.0.0.1/tcp/9095",
    "ipfs_node_multiaddress": "/ip4/127.0.0.1/tcp/5001",
    "consensus_data_folder": "/home/hector/go/src/github.com/ipfs/ipfs-cluster/ipfs-cluster-service/d1/data",
    "state_sync_seconds": 60,
    "replication_factor": -1
}

The configuration file should probably be identical among all cluster peers, except for the id and private_key fields. Once every cluster peer has the configuration in place, you can run ipfs-cluster-service to start the cluster.

Clusters using cluster_peers

The cluster_peers configuration variable holds a list of current cluster members. If you know the members of the cluster in advance, or you want to start a cluster fully in parallel, set cluster_peers in all configurations so that every peer knows the rest upon boot. In this case, leave bootstrap empty (it will be ignored anyway).

Clusters using bootstrap

When the cluster_peers variable is empty, the multiaddresses in bootstrap can be used to have a peer join an existing cluster. The peer will contact those addresses (in order) until one of them succeeds in joining it to the cluster. When the peer is shut down, it will save the current cluster peers to the cluster_peers configuration variable for future use.

Bootstrap is a convenient method, but more error-prone than cluster_peers. It can also be used with ipfs-cluster-service --bootstrap <multiaddress>. Note that bootstrapping nodes with an old or diverging state from the one running in the cluster may lead to problems with the consensus, so usually you would want to bootstrap blank nodes.
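
For example, to bootstrap a fresh peer to node0 from the quick start below (the multiaddress is taken from node0's startup log; substitute your own):

$ ipfs-cluster-service --bootstrap /ip4/192.168.1.103/tcp/9096/ipfs/QmQHKLBXfS7hf8o2acj7FGADoJDLat3UazucbHrgxqisim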

Debugging

ipfs-cluster-service offers two debugging options:

  • --debug enables debug logging from the ipfs-cluster, go-libp2p-raft and go-libp2p-rpc layers. This will be a very verbose log output, but at the same time it is the most informative.
  • --loglevel sets the log level ([error, warning, info, debug]) for ipfs-cluster only, giving an overview of what the cluster is doing. The default log level is info (see the example below).
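
For example, to run the service with full debug output, or with debug logging from ipfs-cluster only:

$ ipfs-cluster-service --debug              # very verbose output from all layers
$ ipfs-cluster-service --loglevel debug     # debug output from ipfs-cluster only
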
ipfs-cluster-ctl

ipfs-cluster-ctl is the client application to manage the cluster nodes and perform actions. ipfs-cluster-ctl uses the HTTP API provided by the nodes and it is completely separate from the cluster service. It can talk to any cluster peer (--host) and uses localhost by default.

After installing, you can run ipfs-cluster-ctl --help to display a general description and options, or ipfs-cluster-ctl help [cmd] to display information about supported commands.

In summary, it works as follows:

$ ipfs-cluster-ctl id                                                       # show cluster peer and ipfs daemon information
$ ipfs-cluster-ctl peers ls                                                 # list cluster peers
$ ipfs-cluster-ctl peers add /ip4/1.2.3.4/tcp/1234/<peerid>                 # add a new cluster peer
$ ipfs-cluster-ctl peers rm <peerid>                                        # remove a cluster peer
$ ipfs-cluster-ctl pin add Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58   # pins a CID in the cluster
$ ipfs-cluster-ctl pin rm Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58    # unpins a CID from the cluster
$ ipfs-cluster-ctl status                                                   # display tracked CIDs information
$ ipfs-cluster-ctl sync Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58      # sync information from the IPFS daemon
$ ipfs-cluster-ctl recover Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58   # attempt to re-pin/unpin CIDs in error state
Debugging

ipfs-cluster-ctl provides a --debug flag which allows inspecting request paths and raw response bodies.
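
For example, to inspect the requests made by a peers ls call:

$ ipfs-cluster-ctl --debug peers ls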

Quick start: Building and updating an IPFS Cluster
Step 0: Run your first cluster node

This step creates a single-node IPFS Cluster.

First initialize the configuration:

node0 $ ipfs-cluster-service init
ipfs-cluster-service configuration written to /home/user/.ipfs-cluster/service.json

Then run cluster:

node0> ipfs-cluster-service
13:33:34.044  INFO    cluster: IPFS Cluster v0.0.1 listening on: cluster.go:59
13:33:34.044  INFO    cluster:         /ip4/127.0.0.1/tcp/9096/ipfs/QmQHKLBXfS7hf8o2acj7FGADoJDLat3UazucbHrgxqisim cluster.go:61
13:33:34.044  INFO    cluster:         /ip4/192.168.1.103/tcp/9096/ipfs/QmQHKLBXfS7hf8o2acj7FGADoJDLat3UazucbHrgxqisim cluster.go:61
13:33:34.044  INFO    cluster: starting Consensus and waiting for a leader... consensus.go:163
13:33:34.047  INFO    cluster: PinTracker ready map_pin_tracker.go:71
13:33:34.047  INFO    cluster: waiting for leader raft.go:118
13:33:34.047  INFO    cluster: REST API: /ip4/127.0.0.1/tcp/9094 rest_api.go:309
13:33:34.047  INFO    cluster: IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001 ipfs_http_connector.go:168
13:33:35.420  INFO    cluster: Raft Leader elected: QmQHKLBXfS7hf8o2acj7FGADoJDLat3UazucbHrgxqisim raft.go:145
13:33:35.921  INFO    cluster: Consensus state is up to date consensus.go:214
13:33:35.921  INFO    cluster: IPFS Cluster is ready cluster.go:191
13:33:35.921  INFO    cluster: Cluster Peers (not including ourselves): cluster.go:192
13:33:35.921  INFO    cluster:     - No other peers cluster.go:195
Step 1: Add new members to the cluster

Initialize and run the cluster on a different node (or nodes):

node1> ipfs-cluster-service init
ipfs-cluster-service configuration written to /home/user/.ipfs-cluster/service.json
node1> ipfs-cluster-service
13:36:19.313  INFO    cluster: IPFS Cluster v0.0.1 listening on: cluster.go:59
13:36:19.313  INFO    cluster:         /ip4/127.0.0.1/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85 cluster.go:61
13:36:19.313  INFO    cluster:         /ip4/192.168.1.103/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85 cluster.go:61
13:36:19.313  INFO    cluster: starting Consensus and waiting for a leader... consensus.go:163
13:36:19.316  INFO    cluster: REST API: /ip4/127.0.0.1/tcp/7094 rest_api.go:309
13:36:19.316  INFO    cluster: IPFS Proxy: /ip4/127.0.0.1/tcp/7095 -> /ip4/127.0.0.1/tcp/5001 ipfs_http_connector.go:168
13:36:19.316  INFO    cluster: waiting for leader raft.go:118
13:36:19.316  INFO    cluster: PinTracker ready map_pin_tracker.go:71
13:36:20.834  INFO    cluster: Raft Leader elected: QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85 raft.go:145
13:36:21.334  INFO    cluster: Consensus state is up to date consensus.go:214
13:36:21.334  INFO    cluster: IPFS Cluster is ready cluster.go:191
13:36:21.334  INFO    cluster: Cluster Peers (not including ourselves): cluster.go:192
13:36:21.334  INFO    cluster:     - No other peers cluster.go:195

Add them to the original cluster with ipfs-cluster-ctl peers add <multiaddr>. The command will return the ID information of the newly added member:

node0> ipfs-cluster-ctl peers add /ip4/192.168.1.103/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85
{
  "id": "QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85",
  "public_key": "CAASpgIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDtjpvI+XKVGT5toXTimtWceONYsf/1bbRMxLt/fCSYJoSeJqj0HUtttCD3dcBv1M2rElIMXDhyLUpkET+AN6otr9lQnbgi0ZaKrtzphR0w6g/0EQZZaxI2scxF4NcwkwUfe5ceEmPFwax1+C00nd2BF+YEEp+VHNyWgXhCxncOGO74p0YdXBrvXkyfTiy/567L3PPX9F9x+HiutBL39CWhx9INmtvdPB2HwshodF6QbfeljdAYCekgIrCQC8mXOVeePmlWgTwoge9yQbuViZwPiKwwo1AplANXFmSv8gagfjKL7Kc0YOqcHwxBsoUskbJjfheDZJzl19iDs9EvUGk5AgMBAAE=",
  "addresses": [
    "/ip4/127.0.0.1/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85",
    "/ip4/192.168.1.103/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85"
  ],
  "cluster_peers": [
    "/ip4/192.168.123.103/tcp/7096/ipfs/QmU7JJftGsP1zM8H37XwvfA7dwdepsB7xVnJA6sKpozc85"
  ],
  "version": "0.0.1",
  "commit": "83baa5c859b9b17b2deec4f782d1210590025c80",
  "rpc_protocol_version": "/ipfscluster/0.0.1/rpc",
  "ipfs": {
    "id": "QmXZrtE5jQwXNqCJMfHUTQkvhQ4ZAnqMnmzFMJfLewur2n",
    "addresses": [
      "/ip4/127.0.0.1/tcp/4001/ipfs/QmXZrtE5jQwXNqCJMfHUTQkvhQ4ZAnqMnmzFMJfLewur2n",
      "/ip4/192.168.1.103/tcp/4001/ipfs/QmXZrtE5jQwXNqCJMfHUTQkvhQ4ZAnqMnmzFMJfLewur2n"
    ]
  }
}

You can repeat the process with any other nodes.

Step 2: Remove no longer needed nodes

You can use ipfs-cluster-ctl peers rm <peerid> to remove and disconnect any node from your cluster. The removed node will be automatically shut down. It can be restarted manually and re-added to the Cluster at any time:

node0> ipfs-cluster-ctl peers rm QmbGFbZVTF3UAEPK9pBVdwHGdDAYkHYufQwSh4k1i8bbbb
Request succeeded

node1 is then disconnected and shuts down:

13:42:50.828 WARNI    cluster: this peer has been removed from the Cluster and will shutdown itself in 5 seconds peer_manager.go:48
13:42:51.828  INFO    cluster: stopping Consensus component consensus.go:257
13:42:55.836  INFO    cluster: shutting down IPFS Cluster cluster.go:235
13:42:55.836  INFO    cluster: Saving configuration config.go:283
13:42:55.837  INFO    cluster: stopping Cluster API rest_api.go:327
13:42:55.837  INFO    cluster: stopping IPFS Proxy ipfs_http_connector.go:332
13:42:55.837  INFO    cluster: stopping MapPinTracker map_pin_tracker.go:87
Go

IPFS Cluster nodes can be launched directly from Go. The Cluster object provides methods to interact with the cluster and perform actions.

Documentation and examples on how to use IPFS Cluster from Go can be found in godoc.org/github.com/ipfs/ipfs-cluster.
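
For orientation, here is a minimal, hedged sketch of wiring a peer together using only the constructors documented below. The State, PinAllocator and Informer values are left as placeholders because their concrete implementations live in subpackages (state, allocator/numpinalloc, informer/numpin) and must be supplied by the caller:

package main

import (
	"log"

	ipfscluster "github.com/ipfs/ipfs-cluster"
)

func main() {
	// Generate a configuration with a random identity and default addresses.
	cfg, err := ipfscluster.NewDefaultConfig()
	if err != nil {
		log.Fatal(err)
	}

	// Components built from the constructors in this package.
	restapi, err := ipfscluster.NewRESTAPI(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ipfs, err := ipfscluster.NewIPFSHTTPConnector(cfg)
	if err != nil {
		log.Fatal(err)
	}
	tracker := ipfscluster.NewMapPinTracker(cfg)
	monitor := ipfscluster.NewStdPeerMonitor(10) // window capacity is illustrative

	// Placeholders: supply concrete implementations of these interfaces
	// (see the state, allocator/numpinalloc and informer/numpin subpackages).
	var state ipfscluster.State
	var allocator ipfscluster.PinAllocator
	var informer ipfscluster.Informer

	cluster, err := ipfscluster.NewCluster(cfg, restapi, ipfs, state, tracker, monitor, allocator, informer)
	if err != nil {
		log.Fatal(err)
	}
	defer cluster.Shutdown()

	<-cluster.Ready() // wait until consensus has bootstrapped
	log.Println("cluster peer is ready")
}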

API

TODO: Swagger

This is a quick summary of the API endpoints offered by the REST API component (these may change before 1.0):

Method Endpoint Comment
GET /id Cluster peer information
GET /version Cluster version
GET /peers Cluster peers
POST /peers Add new peer
DELETE /peers/{peerID} Remove a peer
GET /pinlist List of pins in the consensus state
GET /pins Status of all tracked CIDs
POST /pins/sync Sync all
GET /pins/{cid} Status of single CID
POST /pins/{cid} Pin CID
DELETE /pins/{cid} Unpin CID
POST /pins/{cid}/sync Sync CID
POST /pins/{cid}/recover Recover CID
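
For example, with the default API listen address (/ip4/127.0.0.1/tcp/9094), the endpoints can be exercised with curl, reusing the CID from the examples above:

$ curl http://127.0.0.1:9094/id                                                              # cluster peer information
$ curl http://127.0.0.1:9094/pins                                                            # status of all tracked CIDs
$ curl -X POST http://127.0.0.1:9094/pins/Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58     # pin a CID
$ curl -X DELETE http://127.0.0.1:9094/pins/Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58   # unpin it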

Architecture

The best place to get an overview of how the cluster works and what components exist is the architecture.md doc.

Contribute

PRs accepted.

Small note: If editing the README, please conform to the standard-readme specification.

License

MIT © Protocol Labs, Inc.

Documentation

Overview

Package ipfscluster implements a wrapper for the IPFS daemon which allows orchestrating pinning operations among several IPFS nodes.

IPFS Cluster uses go-libp2p-raft to keep a shared state between the different cluster peers. It also uses LibP2P to enable communication between its different components, which perform different tasks like managing the underlying IPFS daemons, or providing APIs for external control.

Constants

const (
	DefaultConfigCrypto     = crypto.RSA
	DefaultConfigKeyLength  = 2048
	DefaultAPIAddr          = "/ip4/127.0.0.1/tcp/9094"
	DefaultIPFSProxyAddr    = "/ip4/127.0.0.1/tcp/9095"
	DefaultIPFSNodeAddr     = "/ip4/127.0.0.1/tcp/5001"
	DefaultClusterAddr      = "/ip4/0.0.0.0/tcp/9096"
	DefaultStateSyncSeconds = 60
)

Default parameters for the configuration

const (
	LogOpPin = iota + 1
	LogOpUnpin
	LogOpAddPeer
	LogOpRmPeer
)

Type of consensus operation

Variables

var (
	// maximum duration before timing out read of the request
	IPFSProxyServerReadTimeout = 5 * time.Second
	// maximum duration before timing out write of the response
	IPFSProxyServerWriteTimeout = 10 * time.Second
	// server-side the amount of time a Keep-Alive connection will be
	// kept idle before being reused
	IPFSProxyServerIdleTimeout = 60 * time.Second
)

IPFS Proxy settings

var (
	PinningTimeout   = 15 * time.Minute
	UnpinningTimeout = 10 * time.Second
)

A Pin or Unpin operation will be considered failed if the Cid has stayed in Pinning or Unpinning state for longer than these values.
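
These are package-level variables, so a program embedding Cluster can tune them before the peer is created. A small sketch with illustrative values:

import (
	"time"

	ipfscluster "github.com/ipfs/ipfs-cluster"
)

func init() {
	// Illustrative values only: treat a pin as failed after 30 minutes
	// and an unpin as failed after one minute.
	ipfscluster.PinningTimeout = 30 * time.Minute
	ipfscluster.UnpinningTimeout = time.Minute
}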

var (
	// maximum duration before timing out read of the request
	RESTAPIServerReadTimeout = 5 * time.Second
	// maximum duration before timing out write of the response
	RESTAPIServerWriteTimeout = 10 * time.Second
	// server-side the amount of time a Keep-Alive connection will be
	// kept idle before being reused
	RESTAPIServerIdleTimeout = 60 * time.Second
)

Server settings

var AlertChannelCap = 256

AlertChannelCap specifies how much buffer the alerts channel has.

var Commit string

Commit is the current build commit of cluster. See Makefile

var CommitRetries = 2

CommitRetries specifies how many times we retry a failed commit until we give up

var DefaultRaftConfig = hashiraft.DefaultConfig()

DefaultRaftConfig allows tweaking the Raft configuration used by Cluster from the outside.

var LeaderTimeout = 15 * time.Second

LeaderTimeout specifies how long to wait before failing an operation because there is no leader

var PinQueueSize = 1024

PinQueueSize specifies the maximum amount of pin operations waiting to be performed. If the queue is full, pins/unpins will be set to pinError/unpinError.

var RPCProtocol = protocol.ID("/ipfscluster/" + Version + "/rpc")

RPCProtocol is used to send libp2p messages between cluster peers

var RaftMaxSnapshots = 5

RaftMaxSnapshots indicates how many snapshots to keep in the consensus data folder.

var Version = "0.0.1"

Version is the current cluster version. Version alignment between components, apis and tools ensures compatibility among them.

Functions

func SetFacilityLogLevel

func SetFacilityLogLevel(f, l string)

SetFacilityLogLevel sets the log level for a given module
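
For example, assuming the "cluster" facility name that appears in the log output above:

// Raise ipfs-cluster logging to debug; levels match the --loglevel
// flag: error, warning, info, debug.
ipfscluster.SetFacilityLogLevel("cluster", "debug")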

Types

type API

type API interface {
	Component
}

API is a component which offers an API for Cluster. This is a base component.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster is the main IPFS cluster component. It provides the go-API for it and orchestrates the components that make up the system.

func NewCluster

func NewCluster(
	cfg *Config,
	api API,
	ipfs IPFSConnector,
	state State,
	tracker PinTracker,
	monitor PeerMonitor,
	allocator PinAllocator,
	informer Informer) (*Cluster, error)

NewCluster builds a new IPFS Cluster peer. It initializes a LibP2P host, creates an RPC Server and client and sets up all components.

The new cluster peer may still be performing initialization tasks when this call returns (consensus may still be bootstrapping). Use Cluster.Ready() if you need to wait until the peer is fully up.

func (*Cluster) Done

func (c *Cluster) Done() <-chan struct{}

Done provides a way to learn if the Peer has been shutdown (for example, because it has been removed from the Cluster)

func (*Cluster) ID

func (c *Cluster) ID() api.ID

ID returns information about the Cluster peer

func (*Cluster) Join

func (c *Cluster) Join(addr ma.Multiaddr) error

Join adds this peer to an existing cluster. The calling peer should be a single-peer cluster node. This is almost equivalent to calling PeerAdd on the destination cluster.

func (*Cluster) PeerAdd

func (c *Cluster) PeerAdd(addr ma.Multiaddr) (api.ID, error)

PeerAdd adds a new peer to this Cluster.

The new peer must be reachable. It will be added to the consensus and will receive the shared state (including the list of peers). The new peer should be a single-peer cluster, preferably without any relevant state.

func (*Cluster) PeerRemove

func (c *Cluster) PeerRemove(pid peer.ID) error

PeerRemove removes a peer from this Cluster.

The peer will be removed from the consensus peer set and will be shut down after this happens.

func (*Cluster) Peers

func (c *Cluster) Peers() []api.ID

Peers returns the IDs of the members of this Cluster

func (*Cluster) Pin

func (c *Cluster) Pin(h *cid.Cid) error

Pin makes the cluster Pin a Cid. This implies adding the Cid to the IPFS Cluster peers shared-state. Depending on the cluster pinning strategy, the PinTracker may then request the IPFS daemon to pin the Cid.

Pin returns an error if the operation could not be persisted to the global state. Pin does not reflect the success or failure of underlying IPFS daemon pinning operations.

func (*Cluster) Pins

func (c *Cluster) Pins() []api.CidArg

Pins returns the list of Cids managed by Cluster and which are part of the current global state. This is the source of truth as to which pins are managed, but does not indicate if the item is successfully pinned.

func (*Cluster) Ready

func (c *Cluster) Ready() <-chan struct{}

Ready returns a channel which signals when this peer is fully initialized (including consensus).

func (*Cluster) Recover

func (c *Cluster) Recover(h *cid.Cid) (api.GlobalPinInfo, error)

Recover triggers a recover operation for a given Cid in all cluster peers.

func (*Cluster) RecoverLocal

func (c *Cluster) RecoverLocal(h *cid.Cid) (api.PinInfo, error)

RecoverLocal triggers a recover operation for a given Cid

func (*Cluster) Shutdown

func (c *Cluster) Shutdown() error

Shutdown stops the IPFS cluster components

func (*Cluster) StateSync

func (c *Cluster) StateSync() ([]api.PinInfo, error)

StateSync syncs the consensus state to the Pin Tracker, ensuring that every Cid that should be tracked is tracked. It returns PinInfo for Cids which were added or deleted.

func (*Cluster) Status

func (c *Cluster) Status(h *cid.Cid) (api.GlobalPinInfo, error)

Status returns the GlobalPinInfo for a given Cid. If an error happens, the GlobalPinInfo should contain as much information as could be fetched.

func (*Cluster) StatusAll

func (c *Cluster) StatusAll() ([]api.GlobalPinInfo, error)

StatusAll returns the GlobalPinInfo for all tracked Cids. If an error happens, the slice will contain as much information as could be fetched.

func (*Cluster) Sync

func (c *Cluster) Sync(h *cid.Cid) (api.GlobalPinInfo, error)

Sync triggers a LocalSyncCid() operation for a given Cid in all cluster peers.

func (*Cluster) SyncAll

func (c *Cluster) SyncAll() ([]api.GlobalPinInfo, error)

SyncAll triggers LocalSync() operations in all cluster peers.

func (*Cluster) SyncAllLocal

func (c *Cluster) SyncAllLocal() ([]api.PinInfo, error)

SyncAllLocal makes sure that the current state for all tracked items matches the state reported by the IPFS daemon.

SyncAllLocal returns the list of PinInfo that were updated because of the operation, along with those in error states.

func (*Cluster) SyncLocal

func (c *Cluster) SyncLocal(h *cid.Cid) (api.PinInfo, error)

SyncLocal performs a local sync operation for the given Cid. This will tell the tracker to verify the status of the Cid against the IPFS daemon. It returns the updated PinInfo for the Cid.

func (*Cluster) Unpin

func (c *Cluster) Unpin(h *cid.Cid) error

Unpin makes the cluster Unpin a Cid. This implies removing the Cid from the IPFS Cluster peers' shared-state.

Unpin returns an error if the operation could not be persisted to the global state. Unpin does not reflect the success or failure of underlying IPFS daemon unpinning operations.

func (*Cluster) Version

func (c *Cluster) Version() string

Version returns the current IPFS Cluster version

type Component

type Component interface {
	SetClient(*rpc.Client)
	Shutdown() error
}

Component represents a piece of ipfscluster. Cluster components usually run their own goroutines (a http server for example). They communicate with the main Cluster component and other components (both local and remote), using an instance of rpc.Client.
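
As an illustration only, a minimal no-op type satisfying this interface could look like the following sketch (the go-libp2p-gorpc import path is an assumption and may differ for this version):

import rpc "github.com/libp2p/go-libp2p-gorpc"

// noopComponent satisfies Component but performs no work.
type noopComponent struct {
	rpcClient *rpc.Client
}

// SetClient stores the RPC client used to communicate with other components.
func (n *noopComponent) SetClient(c *rpc.Client) { n.rpcClient = c }

// Shutdown releases resources held by the component (none in this sketch).
func (n *noopComponent) Shutdown() error { return nil }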

type Config

type Config struct {
	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component).
	ID         peer.ID
	PrivateKey crypto.PrivKey

	// ClusterPeers is the list of peers in the Cluster. They are used
	// as the initial peers in the consensus. When bootstrapping a peer,
	// ClusterPeers will be filled in automatically for the next run upon
	// shutdown.
	ClusterPeers []ma.Multiaddr

	// Bootstrap peers multiaddresses. This peer will attempt to
	// join the clusters of the peers in this list after booting.
	// Leave empty for a single-peer-cluster.
	Bootstrap []ma.Multiaddr

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool

	// Listen parameters for the Cluster libp2p Host. Used by
	// the RPC and Consensus components.
	ClusterAddr ma.Multiaddr

	// Listen parameters for the Cluster HTTP API component.
	APIAddr ma.Multiaddr

	// Listen parameters for the IPFS Proxy. Used by the IPFS
	// connector component.
	IPFSProxyAddr ma.Multiaddr

	// Host/Port for the IPFS daemon.
	IPFSNodeAddr ma.Multiaddr

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string

	// Number of seconds between StateSync() operations
	StateSyncSeconds int

	// ReplicationFactor is the number of copies we keep for each pin
	ReplicationFactor int
	// contains filtered or unexported fields
}

Config represents an ipfs-cluster configuration. It is used by Cluster components. An initialized version of it can be obtained with NewDefaultConfig().

func LoadConfig

func LoadConfig(path string) (*Config, error)

LoadConfig reads a JSON configuration file from the given path, parses it and returns a new Config object.

func NewDefaultConfig

func NewDefaultConfig() (*Config, error)

NewDefaultConfig returns a default configuration object with a randomly generated ID and private key.

func (*Config) Save

func (cfg *Config) Save(path string) error

Save stores a configuration as a JSON file in the given path. If no path is provided, it uses the path the configuration was loaded from.
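
Put together, a typical configuration lifecycle might look like this sketch (the path is illustrative; it assumes the ipfscluster import used elsewhere in this document):

func setupConfig() (*ipfscluster.Config, error) {
	// Generate a configuration with a random identity and private key...
	cfg, err := ipfscluster.NewDefaultConfig()
	if err != nil {
		return nil, err
	}

	// ...adjust any fields...
	cfg.LeaveOnShutdown = true
	cfg.StateSyncSeconds = 120

	// ...save it to disk, and load it back on later runs.
	if err := cfg.Save("/home/user/.ipfs-cluster/service.json"); err != nil {
		return nil, err
	}
	return ipfscluster.LoadConfig("/home/user/.ipfs-cluster/service.json")
}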

func (*Config) ToJSONConfig

func (cfg *Config) ToJSONConfig() (j *JSONConfig, err error)

ToJSONConfig converts a Config object to its JSON representation which is focused on user presentation and easy understanding.

type Consensus

type Consensus struct {
	// contains filtered or unexported fields
}

Consensus handles the work of keeping a shared-state between the peers of an IPFS Cluster, as well as modifying that state and applying any updates in a thread-safe manner.

func NewConsensus

func NewConsensus(clusterPeers []peer.ID, host host.Host, dataFolder string, state State) (*Consensus, error)

NewConsensus builds a new ClusterConsensus component. The state is used to initialize the Consensus system, so any information in it is discarded.

func (*Consensus) Leader

func (cc *Consensus) Leader() (peer.ID, error)

Leader returns the peerID of the Leader of the cluster. It returns an error when there is no leader.

func (*Consensus) LogAddPeer

func (cc *Consensus) LogAddPeer(addr ma.Multiaddr) error

LogAddPeer submits a new peer to the shared state of the cluster. It will forward the operation to the leader if this is not it.

func (*Consensus) LogPin

func (cc *Consensus) LogPin(c api.CidArg) error

LogPin submits a Cid to the shared state of the cluster. It will forward the operation to the leader if this is not it.

func (*Consensus) LogRmPeer

func (cc *Consensus) LogRmPeer(pid peer.ID) error

LogRmPeer removes a peer from the shared state of the cluster. It will forward the operation to the leader if this is not it.

func (*Consensus) LogUnpin

func (cc *Consensus) LogUnpin(c api.CidArg) error

LogUnpin removes a Cid from the shared state of the cluster.

func (*Consensus) Ready

func (cc *Consensus) Ready() <-chan struct{}

Ready returns a channel which is signaled when the Consensus algorithm has finished bootstrapping and is ready to use

func (*Consensus) Rollback

func (cc *Consensus) Rollback(state State) error

Rollback replaces the current agreed-upon state with the state provided. Only the consensus leader can perform this operation.

func (*Consensus) SetClient

func (cc *Consensus) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*Consensus) Shutdown

func (cc *Consensus) Shutdown() error

Shutdown stops the component so it will not process any more updates. The underlying consensus is permanently shutdown, along with the libp2p transport.

func (*Consensus) State

func (cc *Consensus) State() (State, error)

State retrieves the current consensus State. It may error if no State has been agreed upon or the state is not consistent. The returned State is the last agreed-upon State known by this node.

func (*Consensus) WaitForSync

func (cc *Consensus) WaitForSync() error

WaitForSync waits for a leader and for the state to be up to date, then returns.

type IPFSConnector

type IPFSConnector interface {
	Component
	ID() (api.IPFSID, error)
	Pin(*cid.Cid) error
	Unpin(*cid.Cid) error
	PinLsCid(*cid.Cid) (api.IPFSPinStatus, error)
	PinLs(typeFilter string) (map[string]api.IPFSPinStatus, error)
}

IPFSConnector is a component which allows cluster to interact with an IPFS daemon. This is a base component.

type IPFSHTTPConnector

type IPFSHTTPConnector struct {
	// contains filtered or unexported fields
}

IPFSHTTPConnector implements the IPFSConnector interface and provides a component which does two tasks:

On one side, it proxies HTTP requests to the configured IPFS daemon. It is able to intercept these requests though, and perform extra operations on them.

On the other side, it is used to perform on-demand requests against the configured IPFS daemon (such as a pin request).

func NewIPFSHTTPConnector

func NewIPFSHTTPConnector(cfg *Config) (*IPFSHTTPConnector, error)

NewIPFSHTTPConnector creates the component and leaves it ready to be started

func (*IPFSHTTPConnector) ID

func (ipfs *IPFSHTTPConnector) ID() (api.IPFSID, error)

ID performs an ID request against the configured IPFS daemon. It returns the fetched information. If the request fails, or the parsing fails, it returns an error and an empty IPFSID which also contains the error message.

func (*IPFSHTTPConnector) Pin

func (ipfs *IPFSHTTPConnector) Pin(hash *cid.Cid) error

Pin performs a pin request against the configured IPFS daemon.

func (*IPFSHTTPConnector) PinLs

func (ipfs *IPFSHTTPConnector) PinLs(typeFilter string) (map[string]api.IPFSPinStatus, error)

PinLs performs a "pin ls --type typeFilter" request against the configured IPFS daemon and returns a map of cid strings and their status.

func (*IPFSHTTPConnector) PinLsCid

func (ipfs *IPFSHTTPConnector) PinLsCid(hash *cid.Cid) (api.IPFSPinStatus, error)

PinLsCid performs a "pin ls <hash>" request and returns the IPFSPinStatus for that hash.

func (*IPFSHTTPConnector) SetClient

func (ipfs *IPFSHTTPConnector) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*IPFSHTTPConnector) Shutdown

func (ipfs *IPFSHTTPConnector) Shutdown() error

Shutdown stops any listeners and stops the component from taking any requests.

func (*IPFSHTTPConnector) Unpin

func (ipfs *IPFSHTTPConnector) Unpin(hash *cid.Cid) error

Unpin performs an unpin request against the configured IPFS daemon.

type Informer

type Informer interface {
	Component
	Name() string
	GetMetric() api.Metric
}

Informer provides Metric information from a peer. The metrics produced by informers are then passed to a PinAllocator which will use them to determine where to pin content. The metric is agnostic to the rest of Cluster.

type JSONConfig

type JSONConfig struct {
	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component.
	ID         string `json:"id"`
	PrivateKey string `json:"private_key"`

	// ClusterPeers is the list of peers' multiaddresses in the Cluster.
	// They are used as the initial peers in the consensus. When
	// bootstrapping a peer, ClusterPeers will be filled in automatically.
	ClusterPeers []string `json:"cluster_peers"`

	// Bootstrap peers multiaddresses. This peer will attempt to
	// join the clusters of the peers in the list. ONLY when ClusterPeers
	// is empty. Otherwise it is ignored. Leave empty for a single-peer
	// cluster.
	Bootstrap []string `json:"bootstrap"`

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool `json:"leave_on_shutdown"`

	// Listen address for the Cluster libp2p host. This is used for
	// internal RPC and Consensus communications between cluster peers.
	ClusterListenMultiaddress string `json:"cluster_multiaddress"`

	// Listen address for the Cluster HTTP API component.
	// Tools like ipfs-cluster-ctl will connect to this endpoint to
	// manage the cluster.
	APIListenMultiaddress string `json:"api_listen_multiaddress"`

	// Listen address for the IPFS Proxy, which forwards requests to
	// an IPFS daemon.
	IPFSProxyListenMultiaddress string `json:"ipfs_proxy_listen_multiaddress"`

	// API address for the IPFS daemon.
	IPFSNodeMultiaddress string `json:"ipfs_node_multiaddress"`

	// Storage folder for snapshots, log store etc. Used by
	// the Consensus component.
	ConsensusDataFolder string `json:"consensus_data_folder"`

	// Number of seconds between syncs of the consensus state to the
	// tracker state. Normally states are synced anyway, but this helps
	// when new nodes are joining the cluster
	StateSyncSeconds int `json:"state_sync_seconds"`

	// ReplicationFactor indicates the number of nodes that must pin content.
	// For example, a replication_factor of 2 will prompt cluster to choose
	// two nodes for each pinned hash. A replication_factor of -1 will
	// use every available node for each pin.
	ReplicationFactor int `json:"replication_factor"`
}

JSONConfig represents a Cluster configuration as it will look when it is saved using JSON. Most configuration keys are converted into simple types like strings, and key names aim to be self-explanatory for the user.

func (*JSONConfig) ToConfig

func (jcfg *JSONConfig) ToConfig() (c *Config, err error)

ToConfig converts a JSONConfig to its internal Config representation, where options are parsed into their native types.

type LogOp

type LogOp struct {
	Cid  api.CidArgSerial
	Peer api.MultiaddrSerial
	Type LogOpType
	// contains filtered or unexported fields
}

LogOp represents an operation for the OpLogConsensus system. It implements the consensus.Op interface and it is used by the Consensus component.

func (*LogOp) ApplyTo

func (op *LogOp) ApplyTo(cstate consensus.State) (consensus.State, error)

ApplyTo applies the operation to the State

type LogOpType

type LogOpType int

LogOpType expresses the type of a consensus Operation

type MapPinTracker

type MapPinTracker struct {
	// contains filtered or unexported fields
}

MapPinTracker is a PinTracker implementation which uses a Go map to store the status of the tracked Cids. This component is thread-safe.

func NewMapPinTracker

func NewMapPinTracker(cfg *Config) *MapPinTracker

NewMapPinTracker returns a new object which has been correctly initialized with the given configuration.

func (*MapPinTracker) Recover

func (mpt *MapPinTracker) Recover(c *cid.Cid) (api.PinInfo, error)

Recover will re-track or re-untrack a Cid in error state, possibly retriggering an IPFS pinning operation and returning only when it is done. The pinning/unpinning operation happens synchronously, jumping the queues.

func (*MapPinTracker) SetClient

func (mpt *MapPinTracker) SetClient(c *rpc.Client)

SetClient makes the MapPinTracker ready to perform RPC requests to other components.

func (*MapPinTracker) Shutdown

func (mpt *MapPinTracker) Shutdown() error

Shutdown finishes the services provided by the MapPinTracker and cancels any active context.

func (*MapPinTracker) Status

func (mpt *MapPinTracker) Status(c *cid.Cid) api.PinInfo

Status returns information for a Cid tracked by this MapPinTracker.

func (*MapPinTracker) StatusAll

func (mpt *MapPinTracker) StatusAll() []api.PinInfo

StatusAll returns information for all Cids tracked by this MapPinTracker.

func (*MapPinTracker) Sync

func (mpt *MapPinTracker) Sync(c *cid.Cid) (api.PinInfo, error)

Sync verifies that the status of a Cid matches that of the IPFS daemon. If not, it will be transitioned to PinError or UnpinError.

Sync returns the updated local status for the given Cid. Pins in error states can be recovered with Recover(). An error is returned if we are unable to contact the IPFS daemon.

func (*MapPinTracker) SyncAll

func (mpt *MapPinTracker) SyncAll() ([]api.PinInfo, error)

SyncAll verifies that the statuses of all tracked Cids match the one reported by the IPFS daemon. If not, they will be transitioned to PinError or UnpinError.

SyncAll returns the list of local status for all tracked Cids which were updated or have errors. Cids in error states can be recovered with Recover(). An error is returned if we are unable to contact the IPFS daemon.

func (*MapPinTracker) Track

func (mpt *MapPinTracker) Track(c api.CidArg) error

Track tells the MapPinTracker to start managing a Cid, possibly triggering Pin operations on the IPFS daemon.

func (*MapPinTracker) Untrack

func (mpt *MapPinTracker) Untrack(c *cid.Cid) error

Untrack tells the MapPinTracker to stop managing a Cid. If the Cid is pinned locally, it will be unpinned.

type PeerMonitor

type PeerMonitor interface {
	Component
	// LogMetric stores a metric. Metrics are pushed regularly from each peer
	// to the active PeerMonitor.
	LogMetric(api.Metric)
	// LastMetrics returns a map with the latest metrics of matching name
	// for the current cluster peers.
	LastMetrics(name string) []api.Metric
	// Alerts delivers alerts generated when this peer monitor detects
	// a problem (i.e. metrics not arriving as expected). Alerts are used to
	// trigger rebalancing operations.
	Alerts() <-chan api.Alert
}

PeerMonitor is a component in charge of monitoring the peers in the cluster and providing candidates to the PinAllocator when a pin request arrives.

type Peered

type Peered interface {
	AddPeer(p peer.ID)
	RmPeer(p peer.ID)
}

Peered represents a component which needs to be aware of the peers in the Cluster and of any changes to the peer set.

type PinAllocator

type PinAllocator interface {
	Component
	// Allocate returns the list of peers that should be assigned to
	// Pin content in order of preference (from the most preferred to the
	// least). The "current" map contains valid metrics for peers
	// which are currently pinning the content. The candidates map
	// contains the metrics for all peers which are eligible for pinning
	// the content.
	Allocate(c *cid.Cid, current, candidates map[peer.ID]api.Metric) ([]peer.ID, error)
}

PinAllocator decides where to pin certain content. In order to make such decision, it receives the pin arguments, the peers which are currently allocated to the content and metrics available for all peers which could allocate the content.
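
As an illustration, a trivial allocator that ignores metrics and proposes every candidate peer could look like this sketch (the exact import paths are assumptions and may differ for this vendored version):

import (
	cid "github.com/ipfs/go-cid"
	"github.com/ipfs/ipfs-cluster/api"
	rpc "github.com/libp2p/go-libp2p-gorpc"
	peer "github.com/libp2p/go-libp2p-peer"
)

// anyAllocator proposes all candidate peers without ranking them.
type anyAllocator struct{}

func (a *anyAllocator) SetClient(c *rpc.Client) {}
func (a *anyAllocator) Shutdown() error         { return nil }

// Allocate returns the candidate peers in arbitrary map order. A real
// allocator would rank them using the metrics in "candidates".
func (a *anyAllocator) Allocate(c *cid.Cid, current, candidates map[peer.ID]api.Metric) ([]peer.ID, error) {
	peers := make([]peer.ID, 0, len(candidates))
	for p := range candidates {
		peers = append(peers, p)
	}
	return peers, nil
}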

type PinTracker

type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision
	// The tracker may decide to perform an IPFS pin.
	Track(api.CidArg) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(*cid.Cid) error
	// StatusAll returns the list of pins with their local status.
	StatusAll() []api.PinInfo
	// Status returns the local status of a given Cid.
	Status(*cid.Cid) api.PinInfo
	// SyncAll makes sure that all tracked Cids reflect the real IPFS status.
	// It returns the list of pins which were updated by the call.
	SyncAll() ([]api.PinInfo, error)
	// Sync makes sure that the Cid status reflect the real IPFS status.
	// It returns the local status of the Cid.
	Sync(*cid.Cid) (api.PinInfo, error)
	// Recover retriggers a Pin/Unpin operation in Cids with error status.
	Recover(*cid.Cid) (api.PinInfo, error)
}

PinTracker represents a component which tracks the status of the pins in this cluster and ensures they are in sync with the IPFS daemon. This component should be thread safe.

type RESTAPI

type RESTAPI struct {
	// contains filtered or unexported fields
}

RESTAPI implements an API and aims to provide a RESTful HTTP API for Cluster.

func NewRESTAPI

func NewRESTAPI(cfg *Config) (*RESTAPI, error)

NewRESTAPI creates a new object which is ready to be started.

func (*RESTAPI) SetClient

func (rest *RESTAPI) SetClient(c *rpc.Client)

SetClient makes the component ready to perform RPC requests.

func (*RESTAPI) Shutdown

func (rest *RESTAPI) Shutdown() error

Shutdown stops any API listeners.

type RPCAPI

type RPCAPI struct {
	// contains filtered or unexported fields
}

RPCAPI is a go-libp2p-gorpc service which provides the internal ipfs-cluster API, which enables components and cluster peers to communicate and request actions from each other.

The RPC API methods are usually redirects to the actual methods in the different components of ipfs-cluster, with very little added logic. Refer to documentation on those methods for details on their behaviour.

func (*RPCAPI) ConsensusLogAddPeer

func (rpcapi *RPCAPI) ConsensusLogAddPeer(in api.MultiaddrSerial, out *struct{}) error

ConsensusLogAddPeer runs Consensus.LogAddPeer().

func (*RPCAPI) ConsensusLogPin

func (rpcapi *RPCAPI) ConsensusLogPin(in api.CidArgSerial, out *struct{}) error

ConsensusLogPin runs Consensus.LogPin().

func (*RPCAPI) ConsensusLogRmPeer

func (rpcapi *RPCAPI) ConsensusLogRmPeer(in peer.ID, out *struct{}) error

ConsensusLogRmPeer runs Consensus.LogRmPeer().

func (*RPCAPI) ConsensusLogUnpin

func (rpcapi *RPCAPI) ConsensusLogUnpin(in api.CidArgSerial, out *struct{}) error

ConsensusLogUnpin runs Consensus.LogUnpin().

func (*RPCAPI) ID

func (rpcapi *RPCAPI) ID(in struct{}, out *api.IDSerial) error

ID runs Cluster.ID()

func (*RPCAPI) IPFSPin

func (rpcapi *RPCAPI) IPFSPin(in api.CidArgSerial, out *struct{}) error

IPFSPin runs IPFSConnector.Pin().

func (*RPCAPI) IPFSPinLs

func (rpcapi *RPCAPI) IPFSPinLs(in string, out *map[string]api.IPFSPinStatus) error

IPFSPinLs runs IPFSConnector.PinLs().

func (*RPCAPI) IPFSPinLsCid

func (rpcapi *RPCAPI) IPFSPinLsCid(in api.CidArgSerial, out *api.IPFSPinStatus) error

IPFSPinLsCid runs IPFSConnector.PinLsCid().

func (*RPCAPI) IPFSUnpin

func (rpcapi *RPCAPI) IPFSUnpin(in api.CidArgSerial, out *struct{}) error

IPFSUnpin runs IPFSConnector.Unpin().

func (*RPCAPI) Join

func (rpcapi *RPCAPI) Join(in api.MultiaddrSerial, out *struct{}) error

Join runs Cluster.Join().

func (*RPCAPI) PeerAdd

func (rpcapi *RPCAPI) PeerAdd(in api.MultiaddrSerial, out *api.IDSerial) error

PeerAdd runs Cluster.PeerAdd().

func (*RPCAPI) PeerManagerAddFromMultiaddrs

func (rpcapi *RPCAPI) PeerManagerAddFromMultiaddrs(in api.MultiaddrsSerial, out *struct{}) error

PeerManagerAddFromMultiaddrs runs peerManager.addFromMultiaddrs().

func (*RPCAPI) PeerManagerAddPeer

func (rpcapi *RPCAPI) PeerManagerAddPeer(in api.MultiaddrSerial, out *struct{}) error

PeerManagerAddPeer runs peerManager.addPeer().

func (*RPCAPI) PeerManagerPeers

func (rpcapi *RPCAPI) PeerManagerPeers(in struct{}, out *[]peer.ID) error

PeerManagerPeers runs peerManager.peers().

func (*RPCAPI) PeerManagerRmPeer

func (rpcapi *RPCAPI) PeerManagerRmPeer(in peer.ID, out *struct{}) error

PeerManagerRmPeer runs peerManager.rmPeer().

func (*RPCAPI) PeerManagerRmPeerShutdown

func (rpcapi *RPCAPI) PeerManagerRmPeerShutdown(in peer.ID, out *struct{}) error

PeerManagerRmPeerShutdown runs peerManager.rmPeer().

func (*RPCAPI) PeerMonitorLastMetrics

func (rpcapi *RPCAPI) PeerMonitorLastMetrics(in string, out *[]api.Metric) error

PeerMonitorLastMetrics runs PeerMonitor.LastMetrics().

func (*RPCAPI) PeerMonitorLogMetric

func (rpcapi *RPCAPI) PeerMonitorLogMetric(in api.Metric, out *struct{}) error

PeerMonitorLogMetric runs PeerMonitor.LogMetric().

func (*RPCAPI) PeerRemove

func (rpcapi *RPCAPI) PeerRemove(in peer.ID, out *struct{}) error

PeerRemove runs Cluster.PeerRemove().

func (*RPCAPI) Peers

func (rpcapi *RPCAPI) Peers(in struct{}, out *[]api.IDSerial) error

Peers runs Cluster.Peers().

func (*RPCAPI) Pin

func (rpcapi *RPCAPI) Pin(in api.CidArgSerial, out *struct{}) error

Pin runs Cluster.Pin().

func (*RPCAPI) PinList

func (rpcapi *RPCAPI) PinList(in struct{}, out *[]api.CidArgSerial) error

PinList runs Cluster.Pins().

func (*RPCAPI) Recover

func (rpcapi *RPCAPI) Recover(in api.CidArgSerial, out *api.GlobalPinInfoSerial) error

Recover runs Cluster.Recover().

func (*RPCAPI) RemoteMultiaddrForPeer

func (rpcapi *RPCAPI) RemoteMultiaddrForPeer(in peer.ID, out *api.MultiaddrSerial) error

RemoteMultiaddrForPeer returns the multiaddr of a peer as seen by this peer. This is necessary for a peer to figure out which of its multiaddresses the peers are seeing (also when crossing NATs). It should be called from the peer the IN parameter indicates.

func (*RPCAPI) StateSync

func (rpcapi *RPCAPI) StateSync(in struct{}, out *[]api.PinInfoSerial) error

StateSync runs Cluster.StateSync().

func (*RPCAPI) Status

func (rpcapi *RPCAPI) Status(in api.CidArgSerial, out *api.GlobalPinInfoSerial) error

Status runs Cluster.Status().

func (*RPCAPI) StatusAll

func (rpcapi *RPCAPI) StatusAll(in struct{}, out *[]api.GlobalPinInfoSerial) error

StatusAll runs Cluster.StatusAll().

func (*RPCAPI) Sync

func (rpcapi *RPCAPI) Sync(in api.CidArgSerial, out *api.GlobalPinInfoSerial) error

Sync runs Cluster.Sync().

func (*RPCAPI) SyncAll

func (rpcapi *RPCAPI) SyncAll(in struct{}, out *[]api.GlobalPinInfoSerial) error

SyncAll runs Cluster.SyncAll().

func (*RPCAPI) SyncAllLocal

func (rpcapi *RPCAPI) SyncAllLocal(in struct{}, out *[]api.PinInfoSerial) error

SyncAllLocal runs Cluster.SyncAllLocal().

func (*RPCAPI) SyncLocal

func (rpcapi *RPCAPI) SyncLocal(in api.CidArgSerial, out *api.PinInfoSerial) error

SyncLocal runs Cluster.SyncLocal().

func (*RPCAPI) Track

func (rpcapi *RPCAPI) Track(in api.CidArgSerial, out *struct{}) error

Track runs PinTracker.Track().

func (*RPCAPI) TrackerRecover

func (rpcapi *RPCAPI) TrackerRecover(in api.CidArgSerial, out *api.PinInfoSerial) error

TrackerRecover runs PinTracker.Recover().

func (*RPCAPI) TrackerStatus

func (rpcapi *RPCAPI) TrackerStatus(in api.CidArgSerial, out *api.PinInfoSerial) error

TrackerStatus runs PinTracker.Status().

func (*RPCAPI) TrackerStatusAll

func (rpcapi *RPCAPI) TrackerStatusAll(in struct{}, out *[]api.PinInfoSerial) error

TrackerStatusAll runs PinTracker.StatusAll().

func (*RPCAPI) Unpin

func (rpcapi *RPCAPI) Unpin(in api.CidArgSerial, out *struct{}) error

Unpin runs Cluster.Unpin().

func (*RPCAPI) Untrack

func (rpcapi *RPCAPI) Untrack(in api.CidArgSerial, out *struct{}) error

Untrack runs PinTracker.Untrack().

func (*RPCAPI) Version

func (rpcapi *RPCAPI) Version(in struct{}, out *api.Version) error

Version runs Cluster.Version().

type Raft

type Raft struct {
	// contains filtered or unexported fields
}

Raft performs all Raft-specific operations which are needed by Cluster but are not fulfilled by the consensus interface. It should contain most of the Raft-related stuff so it can be easily replaced in the future, if need be.

func NewRaft

func NewRaft(peers []peer.ID, host host.Host, dataFolder string, fsm hashiraft.FSM) (*Raft, error)

NewRaft launches a go-libp2p-raft consensus peer.

func (*Raft) AddPeer

func (r *Raft) AddPeer(peer string) error

AddPeer adds a peer to Raft

func (*Raft) Leader

func (r *Raft) Leader() string

Leader returns Raft's leader. It may be an empty string if there is no leader or it is unknown.

func (*Raft) RemovePeer

func (r *Raft) RemovePeer(peer string) error

RemovePeer removes a peer from Raft

func (*Raft) Shutdown

func (r *Raft) Shutdown() error

Shutdown shuts down Raft and closes the BoltDB.

func (*Raft) Snapshot

func (r *Raft) Snapshot() error

Snapshot tells Raft to take a snapshot.

func (*Raft) WaitForLeader

func (r *Raft) WaitForLeader(ctx context.Context) error

WaitForLeader holds until Raft says we have a leader. Returns an error if we don't.

func (*Raft) WaitForUpdates

func (r *Raft) WaitForUpdates(ctx context.Context) error

WaitForUpdates holds until Raft has synced to the last index in the log

type State

type State interface {
	// Add adds a pin to the State
	Add(api.CidArg) error
	// Rm removes a pin from the State
	Rm(*cid.Cid) error
	// List lists all the pins in the state
	List() []api.CidArg
	// Has returns true if the state is holding information for a Cid
	Has(*cid.Cid) bool
	// Get returns the information attached to this pin
	Get(*cid.Cid) api.CidArg
}

State represents the shared state of the cluster and it is used by the Consensus component to keep track of which objects are pinned. This component should be thread safe.

type StdPeerMonitor

type StdPeerMonitor struct {
	// contains filtered or unexported fields
}

StdPeerMonitor is a component in charge of monitoring peers, logging metrics and detecting failures

func NewStdPeerMonitor

func NewStdPeerMonitor(windowCap int) *StdPeerMonitor

NewStdPeerMonitor creates a new monitor.

func (*StdPeerMonitor) Alerts

func (mon *StdPeerMonitor) Alerts() <-chan api.Alert

Alerts returns a channel on which alerts are sent when the monitor detects a failure.

func (*StdPeerMonitor) LastMetrics

func (mon *StdPeerMonitor) LastMetrics(name string) []api.Metric

LastMetrics returns last known VALID metrics of a given type

func (*StdPeerMonitor) LogMetric

func (mon *StdPeerMonitor) LogMetric(m api.Metric)

LogMetric stores a metric so it can later be retrieved.

func (*StdPeerMonitor) SetClient

func (mon *StdPeerMonitor) SetClient(c *rpc.Client)

SetClient saves the given rpc.Client for later use

func (*StdPeerMonitor) Shutdown

func (mon *StdPeerMonitor) Shutdown() error

Shutdown stops the peer monitor. In particular, it will not deliver any alerts.

Directories

Path              Synopsis
allocator
  numpinalloc     Package numpinalloc implements an ipfscluster.Allocator based on the "numpin" Informer.
api               Package api holds declarations for types used in ipfs-cluster APIs to make them re-usable across different tools.
informer
  numpin          Package numpin implements an ipfs-cluster informer which determines how many items this peer is pinning and returns it as api.Metric.
state
test              Package test offers testing utilities to ipfs-cluster like mocks.
