ipfscluster

package module
v0.4.0 · Published: May 30, 2018 · License: MIT

README

IPFS Cluster


Pinset orchestration for IPFS.


IPFS Cluster allocates, replicates and tracks pins across a cluster of IPFS daemons.

It provides:

  • A cluster peer application: ipfs-cluster-service, to be run along with go-ipfs.
  • A client CLI application: ipfs-cluster-ctl, which makes it easy to interact with the peer's HTTP API.


Documentation

Please visit https://cluster.ipfs.io/documentation/ to access user documentation, guides and any other resources, including detailed download and usage instructions.

News & Roadmap

We regularly post project updates to https://cluster.ipfs.io/news/ .

The most up-to-date Roadmap is available at https://cluster.ipfs.io/roadmap/ .

Install

Instructions for different installation methods (including from source) are available at https://cluster.ipfs.io/documentation/download .

Usage

Extensive usage information is provided at https://cluster.ipfs.io/documentation/ .

Contribute

PRs accepted. As part of the IPFS project, we have some contribution guidelines.

Small note: If editing the README, please conform to the standard-readme specification.

License

MIT © Protocol Labs, Inc.

Documentation

Overview

Package ipfscluster implements a wrapper for the IPFS daemon which makes it possible to orchestrate pinning operations among several IPFS nodes.

IPFS Cluster uses go-libp2p-raft to keep a shared state between the different cluster peers. It also uses libp2p to enable communication between its different components, which perform tasks such as managing the underlying IPFS daemons or providing APIs for external control.

Index

Constants

const (
	DefaultConfigCrypto        = crypto.RSA
	DefaultConfigKeyLength     = 2048
	DefaultListenAddr          = "/ip4/0.0.0.0/tcp/9096"
	DefaultStateSyncInterval   = 60 * time.Second
	DefaultIPFSSyncInterval    = 130 * time.Second
	DefaultMonitorPingInterval = 15 * time.Second
	DefaultPeerWatchInterval   = 5 * time.Second
	DefaultReplicationFactor   = -1
	DefaultLeaveOnShutdown     = false
	DefaultDisableRepinning    = false
	DefaultPeerstoreFile       = "peerstore"
)

Configuration defaults

const Version = "0.4.0"

Version is the current cluster version. Version alignment between components, apis and tools ensures compatibility among them.

Variables

var Commit = "00000000" // actual commit set during builds.

Commit is the current build commit of cluster. See Makefile.

var LoggingFacilities = map[string]string{
	"cluster":     "INFO",
	"restapi":     "INFO",
	"ipfshttp":    "INFO",
	"monitor":     "INFO",
	"mapstate":    "INFO",
	"consensus":   "INFO",
	"pintracker":  "INFO",
	"ascendalloc": "INFO",
	"diskinfo":    "INFO",
	"apitypes":    "INFO",
	"config":      "INFO",
}

LoggingFacilities provides a list of logging identifiers used by cluster and their default logging level.

var LoggingFacilitiesExtra = map[string]string{
	"p2p-gorpc":   "CRITICAL",
	"swarm2":      "ERROR",
	"libp2p-raft": "CRITICAL",
	"raft":        "ERROR",
}

LoggingFacilitiesExtra provides logging identifiers used in ipfs-cluster dependencies which may be useful to display, along with their default levels.

var RPCProtocol = protocol.ID("/ipfscluster/" + Version + "/rpc")

RPCProtocol is used to send libp2p messages between cluster peers

var ReadyTimeout = 30 * time.Second

ReadyTimeout specifies the time before giving up during startup (waiting for consensus to be ready). It may need adjustment according to timeouts in the consensus layer.

Functions

func DecodeClusterSecret added in v0.1.0

func DecodeClusterSecret(hexSecret string) ([]byte, error)

DecodeClusterSecret parses a hex-encoded string, checks that it is exactly 32 bytes long and returns its value as a byte slice.

func EncodeProtectorKey added in v0.3.5

func EncodeProtectorKey(secretBytes []byte) string

EncodeProtectorKey converts a byte slice to its hex string representation.
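
For illustration, a minimal sketch of generating a fresh 32-byte secret with crypto/rand and round-tripping it through these two helpers (error handling is abbreviated; the ipfscluster import alias is assumed):

// Generate a random 32-byte secret, encode it for the JSON configuration,
// and decode it back into raw bytes.
secret := make([]byte, 32)
if _, err := rand.Read(secret); err != nil { // crypto/rand
	panic(err)
}
hexSecret := ipfscluster.EncodeProtectorKey(secret)     // 64 hex characters
raw, err := ipfscluster.DecodeClusterSecret(hexSecret)  // 32 bytes again
if err != nil {
	panic(err)
}
_ = raw // would normally become Config.Secret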

func GlobalPinInfoSliceToSerial added in v0.4.0

func GlobalPinInfoSliceToSerial(gpi []api.GlobalPinInfo) []api.GlobalPinInfoSerial

GlobalPinInfoSliceToSerial is a helper function for serializing a slice of api.GlobalPinInfos.

func NewClusterHost added in v0.3.5

func NewClusterHost(ctx context.Context, cfg *Config) (host.Host, error)

NewClusterHost creates a libp2p Host with the options from the provided cluster configuration.

func PeersFromMultiaddrs added in v0.3.2

func PeersFromMultiaddrs(addrs []ma.Multiaddr) []peer.ID

PeersFromMultiaddrs returns all the different peers in the given addresses. Each peer appears only once in the result, even if several multiaddresses for it are provided.

func SetFacilityLogLevel added in v0.0.3

func SetFacilityLogLevel(f, l string)

SetFacilityLogLevel sets the log level for a given module
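
As a sketch, the default facilities can be iterated to change verbosity across the board, and a single facility can then be overridden:

// Quiet down every core facility, then turn consensus logs up to DEBUG.
for facility := range ipfscluster.LoggingFacilities {
	ipfscluster.SetFacilityLogLevel(facility, "ERROR")
}
ipfscluster.SetFacilityLogLevel("consensus", "DEBUG")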

Types

type API

type API interface {
	Component
}

API is a component which offers an API for Cluster. This is a base component.

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster is the main IPFS cluster component. It provides the go-API for it and orchestrates the components that make up the system.

func NewCluster

func NewCluster(
	host host.Host,
	cfg *Config,
	consensus Consensus,
	api API,
	ipfs IPFSConnector,
	st state.State,
	tracker PinTracker,
	monitor PeerMonitor,
	allocator PinAllocator,
	informer Informer,
) (*Cluster, error)

NewCluster builds a new IPFS Cluster peer. It initializes a libp2p host, creates an RPC server and client and sets up all components.

The new cluster peer may still be performing initialization tasks when this call returns (consensus may still be bootstrapping). Use Cluster.Ready() if you need to wait until the peer is fully up.
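
The sketch below shows how the pieces might be wired together and how Ready() can be combined with ReadyTimeout. It is a hypothetical helper, not part of this package: the component values are assumed to come from their respective subpackages (not shown), and the import paths are assumptions for this release.

import (
	"context"
	"errors"
	"time"

	ipfscluster "github.com/ipfs/ipfs-cluster"       // assumed import path
	"github.com/ipfs/ipfs-cluster/state"             // assumed import path
)

// startPeer creates the libp2p host from the configuration, builds the
// Cluster peer and waits for it to become ready (or gives up).
func startPeer(
	cfg *ipfscluster.Config,
	consensus ipfscluster.Consensus,
	clusterAPI ipfscluster.API,
	ipfs ipfscluster.IPFSConnector,
	st state.State,
	tracker ipfscluster.PinTracker,
	monitor ipfscluster.PeerMonitor,
	allocator ipfscluster.PinAllocator,
	informer ipfscluster.Informer,
) (*ipfscluster.Cluster, error) {
	host, err := ipfscluster.NewClusterHost(context.Background(), cfg)
	if err != nil {
		return nil, err
	}
	c, err := ipfscluster.NewCluster(host, cfg, consensus, clusterAPI, ipfs, st, tracker, monitor, allocator, informer)
	if err != nil {
		return nil, err
	}
	select {
	case <-c.Ready(): // consensus bootstrapped, peer fully initialized
		return c, nil
	case <-time.After(ipfscluster.ReadyTimeout):
		c.Shutdown() // best-effort cleanup; error ignored in this sketch
		return nil, errors.New("cluster peer did not become ready in time")
	}
}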

func (*Cluster) ConnectGraph added in v0.3.3

func (c *Cluster) ConnectGraph() (api.ConnectGraph, error)

ConnectGraph returns a description of which cluster peers and ipfs daemons are connected to each other

func (*Cluster) Done added in v0.0.3

func (c *Cluster) Done() <-chan struct{}

Done provides a way to learn if the Peer has been shutdown (for example, because it has been removed from the Cluster)

func (*Cluster) ID

func (c *Cluster) ID() api.ID

ID returns information about the Cluster peer

func (*Cluster) Join added in v0.0.3

func (c *Cluster) Join(addr ma.Multiaddr) error

Join adds this peer to an existing cluster. The calling peer should be a single-peer cluster node. This is almost equivalent to calling PeerAdd on the destination cluster.

func (*Cluster) PeerAdd added in v0.0.3

func (c *Cluster) PeerAdd(addr ma.Multiaddr) (api.ID, error)

PeerAdd adds a new peer to this Cluster.

The new peer must be reachable. It will be added to the consensus and will receive the shared state (including the list of peers). The new peer should be a single-peer cluster, preferably without any relevant state.

func (*Cluster) PeerRemove added in v0.0.3

func (c *Cluster) PeerRemove(pid peer.ID) error

PeerRemove removes a peer from this Cluster.

The peer will be removed from the consensus peerset, all its content will be re-pinned and the peer will shut itself down.

func (*Cluster) Peers added in v0.0.3

func (c *Cluster) Peers() []api.ID

Peers returns the IDs of the members of this Cluster.

func (*Cluster) Pin

func (c *Cluster) Pin(pin api.Pin) error

Pin makes the cluster Pin a Cid. This implies adding the Cid to the IPFS Cluster peers' shared state. Depending on the cluster pinning strategy, the PinTracker may then request the IPFS daemon to pin the Cid.

Pin returns an error if the operation could not be persisted to the global state. Pin does not reflect the success or failure of underlying IPFS daemon pinning operations.

If the argument's allocations are non-empty then these peers are pinned with priority over other peers in the cluster. If the max repl factor is less than the size of the specified peerset then peers are chosen from this set in allocation order. If the min repl factor is greater than the size of this set then the remaining peers are allocated in order from the rest of the cluster. Priority allocations are best effort. If any priority peers are unavailable then Pin will simply allocate from the rest of the cluster.
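
For illustration, a hedged sketch of pinning a Cid with explicit replication factors and then inspecting its global status. It assumes imports of fmt, go-cid and the api subpackage; the api.PinCid helper and the PeerMap/Status field names are assumptions about the api package of this release.

// pinWithReplication asks the cluster to pin h on 2-3 peers and prints
// the per-peer status afterwards.
func pinWithReplication(c *ipfscluster.Cluster, h *cid.Cid) error {
	pin := api.PinCid(h) // assumed helper building an api.Pin for h
	pin.ReplicationFactorMin = 2
	pin.ReplicationFactorMax = 3
	if err := c.Pin(pin); err != nil {
		return err // the pin could not be committed to the shared state
	}
	gpi, err := c.Status(h)
	if err != nil {
		return err
	}
	for p, info := range gpi.PeerMap { // per-peer PinInfo (assumed field)
		fmt.Printf("%s: %v\n", p.Pretty(), info.Status)
	}
	return nil
}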

func (*Cluster) PinGet added in v0.1.0

func (c *Cluster) PinGet(h *cid.Cid) (api.Pin, error)

PinGet returns information for a single Cid managed by Cluster. The information is obtained from the current global state. The returned api.Pin provides information about the allocations assigned for the requested Cid, but does not indicate if the item is successfully pinned. For that, use Status(). PinGet returns an error if the given Cid is not part of the global state.

func (*Cluster) Pins

func (c *Cluster) Pins() []api.Pin

Pins returns the list of Cids managed by Cluster and which are part of the current global state. This is the source of truth as to which pins are managed and their allocation, but does not indicate if the item is successfully pinned. For that, use StatusAll().

func (*Cluster) Ready added in v0.0.3

func (c *Cluster) Ready() <-chan struct{}

Ready returns a channel which signals when this peer is fully initialized (including consensus).

func (*Cluster) Recover added in v0.0.3

func (c *Cluster) Recover(h *cid.Cid) (api.GlobalPinInfo, error)

Recover triggers a recover operation for a given Cid in all cluster peers.

func (*Cluster) RecoverAllLocal added in v0.3.1

func (c *Cluster) RecoverAllLocal() ([]api.PinInfo, error)

RecoverAllLocal triggers a RecoverLocal operation for all Cids tracked by this peer.

func (*Cluster) RecoverLocal added in v0.0.3

func (c *Cluster) RecoverLocal(h *cid.Cid) (api.PinInfo, error)

RecoverLocal triggers a recover operation for a given Cid in this peer only. It returns the updated PinInfo, after recovery.

func (*Cluster) Shutdown

func (c *Cluster) Shutdown() error

Shutdown stops the IPFS cluster components

func (*Cluster) StateSync

func (c *Cluster) StateSync() error

StateSync syncs the consensus state to the Pin Tracker, ensuring that every Cid in the shared state is tracked and that the Pin Tracker is not tracking more Cids than it should.

func (*Cluster) Status

func (c *Cluster) Status(h *cid.Cid) (api.GlobalPinInfo, error)

Status returns the GlobalPinInfo for a given Cid as fetched from all current peers. If an error happens, the GlobalPinInfo should contain as much information as could be fetched from the other peers.

func (*Cluster) StatusAll added in v0.0.3

func (c *Cluster) StatusAll() ([]api.GlobalPinInfo, error)

StatusAll returns the GlobalPinInfo for all tracked Cids in all peers. If an error happens, the slice will contain as much information as could be fetched from other peers.

func (*Cluster) StatusAllLocal added in v0.3.1

func (c *Cluster) StatusAllLocal() []api.PinInfo

StatusAllLocal returns the PinInfo for all the tracked Cids in this peer.

func (*Cluster) StatusLocal added in v0.3.1

func (c *Cluster) StatusLocal(h *cid.Cid) api.PinInfo

StatusLocal returns this peer's PinInfo for a given Cid.

func (*Cluster) Sync added in v0.0.3

func (c *Cluster) Sync(h *cid.Cid) (api.GlobalPinInfo, error)

Sync triggers a SyncLocal() operation for a given Cid in all cluster peers.

func (*Cluster) SyncAll added in v0.0.3

func (c *Cluster) SyncAll() ([]api.GlobalPinInfo, error)

SyncAll triggers SyncAllLocal() operations in all cluster peers, making sure that the state of tracked items matches the state reported by the IPFS daemon and returning the results as GlobalPinInfo. If an error happens, the slice will contain as much information as could be fetched from the peers.

func (*Cluster) SyncAllLocal added in v0.0.3

func (c *Cluster) SyncAllLocal() ([]api.PinInfo, error)

SyncAllLocal makes sure that the current state for all tracked items in this peer matches the state reported by the IPFS daemon.

SyncAllLocal returns the list of PinInfo that were updated because of the operation, along with those in error states.

func (*Cluster) SyncLocal added in v0.0.3

func (c *Cluster) SyncLocal(h *cid.Cid) (api.PinInfo, error)

SyncLocal performs a local sync operation for the given Cid. This will tell the tracker to verify the status of the Cid against the IPFS daemon. It returns the updated PinInfo for the Cid.

func (*Cluster) Unpin

func (c *Cluster) Unpin(h *cid.Cid) error

Unpin makes the cluster Unpin a Cid. This implies removing the Cid from the IPFS Cluster peers' shared state.

Unpin returns an error if the operation could not be persisted to the global state. Unpin does not reflect the success or failure of underlying IPFS daemon unpinning operations.

func (*Cluster) Version

func (c *Cluster) Version() string

Version returns the current IPFS Cluster version.

type Component

type Component interface {
	SetClient(*rpc.Client)
	Shutdown() error
}

Component represents a piece of ipfscluster. Cluster components usually run their own goroutines (a http server for example). They communicate with the main Cluster component and other components (both local and remote), using an instance of rpc.Client.
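
A minimal sketch of a no-op Component, for illustration only (rpc is the go-libp2p-gorpc package used throughout this documentation):

// noopComponent satisfies Component: it stores the rpc.Client it is handed
// and has nothing to release on Shutdown.
type noopComponent struct {
	rpcClient *rpc.Client
}

func (c *noopComponent) SetClient(cl *rpc.Client) { c.rpcClient = cl }

func (c *noopComponent) Shutdown() error { return nil }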

type Config

type Config struct {
	config.Saver

	// Libp2p ID and private key for Cluster communication (including
	// the Consensus component).
	ID         peer.ID
	PrivateKey crypto.PrivKey

	// User-defined peername for use as human-readable identifier.
	Peername string

	// Cluster secret for private network. Peers will be in the same cluster if and
	// only if they have the same ClusterSecret. The cluster secret must be exactly
	// 64 characters and contain only hexadecimal characters (`[0-9a-f]`).
	Secret []byte

	// Leave Cluster on shutdown. Politely informs other peers
	// of the departure and removes itself from the consensus
	// peer set. The Cluster size will be reduced by one.
	LeaveOnShutdown bool

	// Listen parameters for the Cluster libp2p Host. Used by
	// the RPC and Consensus components.
	ListenAddr ma.Multiaddr

	// Time between syncs of the consensus state to the
	// tracker state. Normally states are synced anyway, but this helps
	// when new nodes are joining the cluster. Reduce for faster
	// consistency, increase with larger states.
	StateSyncInterval time.Duration

	// Number of seconds between syncs of the local state and
	// the state of the ipfs daemon. This ensures that cluster
	// provides the right status for tracked items (for example,
	// to detect that a pin has been removed). Reduce for faster
	// consistency, increase when the number of pinned items is very
	// large.
	IPFSSyncInterval time.Duration

	// ReplicationFactorMax indicates the target number of nodes
	// that should pin content. For example, a replication_factor of
	// 3 will have cluster allocate each pinned hash to 3 peers if
	// possible.
	// See also ReplicationFactorMin. A ReplicationFactorMax of -1
	// will allocate to every available node.
	ReplicationFactorMax int

	// ReplicationFactorMin indicates the minimum number of healthy
	// nodes pinning content. If the number of nodes available to pin
	// is less than this threshold, an error will be returned.
	// In the case of peer health issues, content pinned will be
	// re-allocated if the threshold is crossed.
	// For example, a ReplicationFactorMin of 2 will allocate at least
	// two peers to hold content, and return an error if this is not
	// possible.
	ReplicationFactorMin int

	// MonitorPingInterval is the frequency with which a cluster peer pings
	// the monitoring component. The ping metric has a TTL set to the double
	// of this value.
	MonitorPingInterval time.Duration

	// PeerWatchInterval is the frequency that we use to watch for changes
	// in the consensus peerset and save new peers to the configuration
	// file. This also affects how soon we realize that we have
	// been removed from a cluster.
	PeerWatchInterval time.Duration

	// DisableRepinning, when true, ensures that no repinning happens
	// when a node goes down.
	// This is useful when doing certain types of maintenance, or simply
	// when not wanting to rely on the monitoring system which needs a revamp.
	DisableRepinning bool

	// Peerstore file specifies the file on which we persist the
	// libp2p host peerstore addresses. This file is regularly saved.
	PeerstoreFile string
	// contains filtered or unexported fields
}

Config is the configuration object containing customizable variables to initialize the main ipfs-cluster component. It implements the config.ComponentConfig interface.

func (*Config) ConfigKey added in v0.2.0

func (cfg *Config) ConfigKey() string

ConfigKey returns a human-readable string to identify a cluster Config.

func (*Config) Default added in v0.2.0

func (cfg *Config) Default() error

Default fills in all the Config fields with default working values. This means it will generate a valid random ID, PrivateKey and Secret.

func (*Config) GetPeerstorePath added in v0.4.0

func (cfg *Config) GetPeerstorePath() string

GetPeerstorePath returns the full path of the PeerstoreFile, obtained by concatenating that value with BaseDir of the configuration, if set. An empty string is returned when BaseDir is not set.

func (*Config) LoadJSON added in v0.2.0

func (cfg *Config) LoadJSON(raw []byte) error

LoadJSON receives a raw json-formatted configuration and sets the Config fields from it. Note that it should be JSON as generated by ToJSON().

func (*Config) ToJSON added in v0.2.0

func (cfg *Config) ToJSON() (raw []byte, err error)

ToJSON generates a human-friendly version of Config.

func (*Config) Validate added in v0.2.0

func (cfg *Config) Validate() error

Validate will check that the values of this config seem to be working ones.
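
Tying the methods above together, a small sketch of building, tweaking, validating and serializing a configuration (error handling abbreviated with panic for brevity):

cfg := &ipfscluster.Config{}
if err := cfg.Default(); err != nil { // random ID, PrivateKey and Secret
	panic(err)
}
cfg.Peername = "peer-1"
cfg.ReplicationFactorMin = 2
cfg.ReplicationFactorMax = 3
if err := cfg.Validate(); err != nil {
	panic(err)
}
raw, err := cfg.ToJSON() // human-friendly JSON, readable back by LoadJSON()
if err != nil {
	panic(err)
}
var cfg2 ipfscluster.Config
if err := cfg2.LoadJSON(raw); err != nil {
	panic(err)
}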

type Consensus

type Consensus interface {
	Component
	// Returns a channel to signal that the consensus layer is ready
	// allowing the main component to wait for it during start.
	Ready() <-chan struct{}
	// Logs a pin operation
	LogPin(c api.Pin) error
	// Logs an unpin operation
	LogUnpin(c api.Pin) error
	AddPeer(p peer.ID) error
	RmPeer(p peer.ID) error
	State() (state.State, error)
	// Provide a node which is responsible for performing
	// specific tasks which must run in only one cluster peer
	Leader() (peer.ID, error)
	// Only returns when the consensus state has all log
	// updates applied to it
	WaitForSync() error
	// Clean removes all consensus data
	Clean() error
	// Peers returns the peerset participating in the Consensus
	Peers() ([]peer.ID, error)
}

Consensus is a component which keeps a shared state in IPFS Cluster and triggers actions on updates to that state. Currently, Consensus needs to be able to elect/provide a Cluster Leader, and the implementation is tightly coupled to the main Cluster component.

type IPFSConnector

type IPFSConnector interface {
	Component
	ID() (api.IPFSID, error)
	Pin(context.Context, *cid.Cid, bool) error
	Unpin(context.Context, *cid.Cid) error
	PinLsCid(context.Context, *cid.Cid) (api.IPFSPinStatus, error)
	PinLs(ctx context.Context, typeFilter string) (map[string]api.IPFSPinStatus, error)
	// ConnectSwarms makes sure this peer's IPFS daemon is connected to
	// other peers' IPFS daemons.
	ConnectSwarms() error
	// SwarmPeers returns the IPFS daemon's swarm peers
	SwarmPeers() (api.SwarmPeers, error)
	// ConfigKey returns the value for a configuration key.
	// Subobjects are reached with keypaths as "Parent/Child/GrandChild...".
	ConfigKey(keypath string) (interface{}, error)
	// FreeSpace returns the amount of remaining space on the repo,
	// calculated from "repo stat".
	FreeSpace() (uint64, error)
	// RepoSize returns the current repository size as expressed
	// by "repo stat".
	RepoSize() (uint64, error)
}

IPFSConnector is a component which allows cluster to interact with an IPFS daemon. This is a base component.

type Informer added in v0.0.3

type Informer interface {
	Component
	Name() string
	GetMetric() api.Metric
}

Informer provides Metric information from a peer. The metrics produced by informers are then passed to a PinAllocator which will use them to determine where to pin content. The metric is agnostic to the rest of Cluster.

type PeerMonitor added in v0.0.3

type PeerMonitor interface {
	Component
	// LogMetric stores a metric. It can be used to manually inject
	// a metric to a monitor.
	LogMetric(api.Metric) error
	// PublishMetric sends a metric to the rest of the peers.
	// How to send it, and to who, is to be decided by the implementation.
	PublishMetric(api.Metric) error
	// LatestMetrics returns a map with the latest metrics of matching name
	// for the current cluster peers.
	LatestMetrics(name string) []api.Metric
	// Alerts delivers alerts generated when this peer monitor detects
	// a problem (i.e. metrics not arriving as expected). Alerts can be used
	// to trigger self-healing measures or re-pinnings of content.
	Alerts() <-chan api.Alert
}

PeerMonitor is a component in charge of publishing a peer's metrics and reading metrics from other peers in the cluster. The PinAllocator will use the metrics provided by the monitor as candidates for Pin allocations.

The PeerMonitor component also provides an Alert channel which is signaled when a metric is no longer received and the monitor identifies it as a problem.
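
As a sketch, alerts can be drained in a separate goroutine. The monitor variable is assumed to be an initialized PeerMonitor, and the Peer and MetricName field names of api.Alert are assumptions:

// Log every alert produced by the monitor; a real consumer might instead
// trigger re-pinning of the affected peer's content.
go func(mon ipfscluster.PeerMonitor) {
	for alert := range mon.Alerts() {
		log.Printf("metric %q missing for peer %s", alert.MetricName, alert.Peer.Pretty()) // assumed fields
	}
}(monitor)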

type Peered

type Peered interface {
	AddPeer(p peer.ID)
	RmPeer(p peer.ID)
}

Peered represents a component which needs to be aware of the peers in the Cluster and of any changes to the peer set.

type PinAllocator added in v0.0.3

type PinAllocator interface {
	Component
	// Allocate returns the list of peers that should be assigned to
	// Pin content in order of preference (from the most preferred to the
	// least). The "current" map contains valid metrics for peers
	// which are currently pinning the content. The candidates map
	// contains the metrics for all peers which are eligible for pinning
	// the content.
	Allocate(c *cid.Cid, current, candidates, priority map[peer.ID]api.Metric) ([]peer.ID, error)
}

PinAllocator decides where to pin certain content. In order to make such decision, it receives the pin arguments, the peers which are currently allocated to the content and metrics available for all peers which could allocate the content.
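
For illustration, a toy PinAllocator that ignores the metric values entirely and simply returns priority peers followed by the remaining candidates; a real allocator (such as ascendalloc) sorts candidates by their metrics instead:

// naiveAlloc satisfies PinAllocator (and Component) but performs no
// ordering of candidates at all.
type naiveAlloc struct{}

func (a *naiveAlloc) SetClient(*rpc.Client) {}
func (a *naiveAlloc) Shutdown() error       { return nil }

// Allocate returns priority peers first, then every other candidate, in
// map-iteration (effectively random) order.
func (a *naiveAlloc) Allocate(c *cid.Cid, current, candidates, priority map[peer.ID]api.Metric) ([]peer.ID, error) {
	allocs := make([]peer.ID, 0, len(priority)+len(candidates))
	for p := range priority {
		allocs = append(allocs, p)
	}
	for p := range candidates {
		allocs = append(allocs, p)
	}
	return allocs, nil
}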

type PinTracker

type PinTracker interface {
	Component
	// Track tells the tracker that a Cid is now under its supervision
	// The tracker may decide to perform an IPFS pin.
	Track(api.Pin) error
	// Untrack tells the tracker that a Cid is to be forgotten. The tracker
	// may perform an IPFS unpin operation.
	Untrack(*cid.Cid) error
	// StatusAll returns the list of pins with their local status.
	StatusAll() []api.PinInfo
	// Status returns the local status of a given Cid.
	Status(*cid.Cid) api.PinInfo
	// SyncAll makes sure that all tracked Cids reflect the real IPFS status.
	// It returns the list of pins which were updated by the call.
	SyncAll() ([]api.PinInfo, error)
	// Sync makes sure that the Cid status reflect the real IPFS status.
	// It returns the local status of the Cid.
	Sync(*cid.Cid) (api.PinInfo, error)
	// RecoverAll calls Recover() for all pins tracked.
	RecoverAll() ([]api.PinInfo, error)
	// Recover retriggers a Pin/Unpin operation for Cids in an error status.
	Recover(*cid.Cid) (api.PinInfo, error)
}

PinTracker represents a component which tracks the status of the pins in this cluster and ensures they are in sync with the IPFS daemon. This component should be thread safe.

type RPCAPI

type RPCAPI struct {
	// contains filtered or unexported fields
}

RPCAPI is a go-libp2p-gorpc service which provides the internal ipfs-cluster API, which enables components and cluster peers to communicate and request actions from each other.

The RPC API methods are usually redirects to the actual methods in the different components of ipfs-cluster, with very little added logic. Refer to documentation on those methods for details on their behaviour.
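
As a sketch of how other components and peers use it, a gorpc client can invoke these methods remotely. The Call signature shown is go-libp2p-gorpc's, and rpcClient/remotePeer are assumed to exist:

// Ask a remote cluster peer for its ID through the internal RPC API.
var resp api.IDSerial
err := rpcClient.Call(remotePeer, "Cluster", "ID", struct{}{}, &resp)
if err != nil {
	// handle the RPC failure
}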

func (*RPCAPI) ConnectGraph added in v0.3.3

func (rpcapi *RPCAPI) ConnectGraph(ctx context.Context, in struct{}, out *api.ConnectGraphSerial) error

ConnectGraph runs Cluster.ConnectGraph().

func (*RPCAPI) ConsensusAddPeer added in v0.3.0

func (rpcapi *RPCAPI) ConsensusAddPeer(ctx context.Context, in peer.ID, out *struct{}) error

ConsensusAddPeer runs Consensus.AddPeer().

func (*RPCAPI) ConsensusLogPin

func (rpcapi *RPCAPI) ConsensusLogPin(ctx context.Context, in api.PinSerial, out *struct{}) error

ConsensusLogPin runs Consensus.LogPin().

func (*RPCAPI) ConsensusLogUnpin

func (rpcapi *RPCAPI) ConsensusLogUnpin(ctx context.Context, in api.PinSerial, out *struct{}) error

ConsensusLogUnpin runs Consensus.LogUnpin().

func (*RPCAPI) ConsensusPeers added in v0.3.0

func (rpcapi *RPCAPI) ConsensusPeers(ctx context.Context, in struct{}, out *[]peer.ID) error

ConsensusPeers runs Consensus.Peers().

func (*RPCAPI) ConsensusRmPeer added in v0.3.0

func (rpcapi *RPCAPI) ConsensusRmPeer(ctx context.Context, in peer.ID, out *struct{}) error

ConsensusRmPeer runs Consensus.RmPeer().

func (*RPCAPI) ID

func (rpcapi *RPCAPI) ID(ctx context.Context, in struct{}, out *api.IDSerial) error

ID runs Cluster.ID()

func (*RPCAPI) IPFSConfigKey added in v0.0.11

func (rpcapi *RPCAPI) IPFSConfigKey(ctx context.Context, in string, out *interface{}) error

IPFSConfigKey runs IPFSConnector.ConfigKey().

func (*RPCAPI) IPFSConnectSwarms added in v0.0.11

func (rpcapi *RPCAPI) IPFSConnectSwarms(ctx context.Context, in struct{}, out *struct{}) error

IPFSConnectSwarms runs IPFSConnector.ConnectSwarms().

func (*RPCAPI) IPFSFreeSpace added in v0.2.0

func (rpcapi *RPCAPI) IPFSFreeSpace(ctx context.Context, in struct{}, out *uint64) error

IPFSFreeSpace runs IPFSConnector.FreeSpace().

func (*RPCAPI) IPFSPin

func (rpcapi *RPCAPI) IPFSPin(ctx context.Context, in api.PinSerial, out *struct{}) error

IPFSPin runs IPFSConnector.Pin().

func (*RPCAPI) IPFSPinLs added in v0.0.3

func (rpcapi *RPCAPI) IPFSPinLs(ctx context.Context, in string, out *map[string]api.IPFSPinStatus) error

IPFSPinLs runs IPFSConnector.PinLs().

func (*RPCAPI) IPFSPinLsCid added in v0.0.3

func (rpcapi *RPCAPI) IPFSPinLsCid(ctx context.Context, in api.PinSerial, out *api.IPFSPinStatus) error

IPFSPinLsCid runs IPFSConnector.PinLsCid().

func (*RPCAPI) IPFSRepoSize added in v0.0.11

func (rpcapi *RPCAPI) IPFSRepoSize(ctx context.Context, in struct{}, out *uint64) error

IPFSRepoSize runs IPFSConnector.RepoSize().

func (*RPCAPI) IPFSSwarmPeers added in v0.3.3

func (rpcapi *RPCAPI) IPFSSwarmPeers(ctx context.Context, in struct{}, out *api.SwarmPeersSerial) error

IPFSSwarmPeers runs IPFSConnector.SwarmPeers().

func (*RPCAPI) IPFSUnpin

func (rpcapi *RPCAPI) IPFSUnpin(ctx context.Context, in api.PinSerial, out *struct{}) error

IPFSUnpin runs IPFSConnector.Unpin().

func (*RPCAPI) Join added in v0.0.3

func (rpcapi *RPCAPI) Join(ctx context.Context, in api.MultiaddrSerial, out *struct{}) error

Join runs Cluster.Join().

func (*RPCAPI) PeerAdd added in v0.0.3

func (rpcapi *RPCAPI) PeerAdd(ctx context.Context, in api.MultiaddrSerial, out *api.IDSerial) error

PeerAdd runs Cluster.PeerAdd().

func (*RPCAPI) PeerManagerAddPeer added in v0.0.3

func (rpcapi *RPCAPI) PeerManagerAddPeer(ctx context.Context, in api.MultiaddrSerial, out *struct{}) error

PeerManagerAddPeer runs peerManager.addPeer().

func (*RPCAPI) PeerManagerImportAddresses added in v0.3.0

func (rpcapi *RPCAPI) PeerManagerImportAddresses(ctx context.Context, in api.MultiaddrsSerial, out *struct{}) error

PeerManagerImportAddresses runs peerManager.importAddresses().

func (*RPCAPI) PeerMonitorLatestMetrics added in v0.4.0

func (rpcapi *RPCAPI) PeerMonitorLatestMetrics(ctx context.Context, in string, out *[]api.Metric) error

PeerMonitorLatestMetrics runs PeerMonitor.LatestMetrics().

func (*RPCAPI) PeerMonitorLogMetric added in v0.0.3

func (rpcapi *RPCAPI) PeerMonitorLogMetric(ctx context.Context, in api.Metric, out *struct{}) error

PeerMonitorLogMetric runs PeerMonitor.LogMetric().

func (*RPCAPI) PeerRemove added in v0.0.3

func (rpcapi *RPCAPI) PeerRemove(ctx context.Context, in peer.ID, out *struct{}) error

PeerRemove runs Cluster.PeerRemove().

func (*RPCAPI) Peers added in v0.0.3

func (rpcapi *RPCAPI) Peers(ctx context.Context, in struct{}, out *[]api.IDSerial) error

Peers runs Cluster.Peers().

func (*RPCAPI) Pin

func (rpcapi *RPCAPI) Pin(ctx context.Context, in api.PinSerial, out *struct{}) error

Pin runs Cluster.Pin().

func (*RPCAPI) PinGet added in v0.1.0

func (rpcapi *RPCAPI) PinGet(ctx context.Context, in api.PinSerial, out *api.PinSerial) error

PinGet runs Cluster.PinGet().

func (*RPCAPI) Pins added in v0.1.0

func (rpcapi *RPCAPI) Pins(ctx context.Context, in struct{}, out *[]api.PinSerial) error

Pins runs Cluster.Pins().

func (*RPCAPI) Recover added in v0.0.3

func (rpcapi *RPCAPI) Recover(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Recover runs Cluster.Recover().

func (*RPCAPI) RecoverAllLocal added in v0.3.1

func (rpcapi *RPCAPI) RecoverAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

RecoverAllLocal runs Cluster.RecoverAllLocal().

func (*RPCAPI) RecoverLocal added in v0.3.1

func (rpcapi *RPCAPI) RecoverLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

RecoverLocal runs Cluster.RecoverLocal().

func (*RPCAPI) RemoteMultiaddrForPeer added in v0.0.3

func (rpcapi *RPCAPI) RemoteMultiaddrForPeer(ctx context.Context, in peer.ID, out *api.MultiaddrSerial) error

RemoteMultiaddrForPeer returns the multiaddr of a peer as seen by this peer. This is necessary for a peer to figure out which of its multiaddresses the peers are seeing (also when crossing NATs). It should be called on the peer indicated by the in parameter.

func (*RPCAPI) Status

func (rpcapi *RPCAPI) Status(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Status runs Cluster.Status().

func (*RPCAPI) StatusAll added in v0.0.3

func (rpcapi *RPCAPI) StatusAll(ctx context.Context, in struct{}, out *[]api.GlobalPinInfoSerial) error

StatusAll runs Cluster.StatusAll().

func (*RPCAPI) StatusAllLocal added in v0.3.1

func (rpcapi *RPCAPI) StatusAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

StatusAllLocal runs Cluster.StatusAllLocal().

func (*RPCAPI) StatusLocal added in v0.3.1

func (rpcapi *RPCAPI) StatusLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

StatusLocal runs Cluster.StatusLocal().

func (*RPCAPI) Sync added in v0.0.3

func (rpcapi *RPCAPI) Sync(ctx context.Context, in api.PinSerial, out *api.GlobalPinInfoSerial) error

Sync runs Cluster.Sync().

func (*RPCAPI) SyncAll added in v0.0.3

func (rpcapi *RPCAPI) SyncAll(ctx context.Context, in struct{}, out *[]api.GlobalPinInfoSerial) error

SyncAll runs Cluster.SyncAll().

func (*RPCAPI) SyncAllLocal added in v0.0.3

func (rpcapi *RPCAPI) SyncAllLocal(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

SyncAllLocal runs Cluster.SyncAllLocal().

func (*RPCAPI) SyncLocal added in v0.0.3

func (rpcapi *RPCAPI) SyncLocal(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

SyncLocal runs Cluster.SyncLocal().

func (*RPCAPI) Track

func (rpcapi *RPCAPI) Track(ctx context.Context, in api.PinSerial, out *struct{}) error

Track runs PinTracker.Track().

func (*RPCAPI) TrackerRecover added in v0.0.3

func (rpcapi *RPCAPI) TrackerRecover(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

TrackerRecover runs PinTracker.Recover().

func (*RPCAPI) TrackerRecoverAll added in v0.3.1

func (rpcapi *RPCAPI) TrackerRecoverAll(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

TrackerRecoverAll runs PinTracker.RecoverAll().

func (*RPCAPI) TrackerStatus

func (rpcapi *RPCAPI) TrackerStatus(ctx context.Context, in api.PinSerial, out *api.PinInfoSerial) error

TrackerStatus runs PinTracker.Status().

func (*RPCAPI) TrackerStatusAll added in v0.0.3

func (rpcapi *RPCAPI) TrackerStatusAll(ctx context.Context, in struct{}, out *[]api.PinInfoSerial) error

TrackerStatusAll runs PinTracker.StatusAll().

func (*RPCAPI) Unpin

func (rpcapi *RPCAPI) Unpin(ctx context.Context, in api.PinSerial, out *struct{}) error

Unpin runs Cluster.Unpin().

func (*RPCAPI) Untrack

func (rpcapi *RPCAPI) Untrack(ctx context.Context, in api.PinSerial, out *struct{}) error

Untrack runs PinTracker.Untrack().

func (*RPCAPI) Version

func (rpcapi *RPCAPI) Version(ctx context.Context, in struct{}, out *api.Version) error

Version runs Cluster.Version().

Directories

Path Synopsis
allocator
ascendalloc
Package ascendalloc implements an ipfscluster.PinAllocator, which returns allocations based on sorting the metrics in ascending order.
descendalloc
Package descendalloc implements an ipfscluster.PinAllocator, which returns allocations based on sorting the metrics in descending order.
util
Package util is a utility package used by the allocator implementations.
api
Package api holds declarations for types used in ipfs-cluster APIs to make them re-usable across different tools.
rest
Package rest implements an IPFS Cluster API component.
consensus
raft
Package raft implements a Consensus component for IPFS Cluster which uses Raft (go-libp2p-raft).
informer
disk
Package disk implements an ipfs-cluster informer which can provide different disk-related metrics from the IPFS daemon as an api.Metric.
numpin
Package numpin implements an ipfs-cluster informer which determines how many items this peer is pinning and returns it as an api.Metric.
lock logic heavily inspired by go-ipfs/repo/fsrepo/lock/lock.go
ipfsconn
ipfshttp
Package ipfshttp implements an IPFS Cluster IPFSConnector component.
monitor
basic
Package basic implements a basic PeerMonitor component for IPFS Cluster.
metrics
Package metrics provides common functionality for working with metrics, particularly useful for monitoring components.
pubsubmon
Package pubsubmon implements a PeerMonitor component for IPFS Cluster that uses PubSub to send and receive metrics.
pintracker
maptracker
Package maptracker implements a PinTracker component for IPFS Cluster.
optracker
Package optracker implements functionality to track the status of pin and unpin operations as needed by implementations of the pintracker component.
pstoremgr
Package pstoremgr provides a Manager that simplifies handling addition, listing and removal of cluster peer multiaddresses from the libp2p Host.
rpcutil
Package rpcutil provides utility methods to perform go-libp2p-gorpc calls, particularly gorpc.MultiCall().
state
Package state holds the interface that any state implementation for IPFS Cluster must satisfy.
mapstate
Package mapstate implements the State interface for IPFS Cluster by using a map to keep track of the consensus-shared state.
test
Package test offers testing utilities to ipfs-cluster, like mocks.
