Documentation ¶
Overview ¶
Copyright 2022 The CubeFS Authors.
Note:
- Do not use the buffer after releasing it.
- You must ensure the buffer is safe to release before doing so.
- Be aware that the buffer's pointer changes after Resize.
- Manage the DataBuf manually in Ranged mode; do not Split it.
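The Resize caution above follows from how Go slices behave when they grow past capacity. A self-contained sketch (independent of this package) of why a reference taken before a resize goes stale:

```go
package main

import "fmt"

func main() {
	// A Go slice that grows past its capacity gets a new backing array;
	// any alias taken beforehand still points at the old memory. This is
	// why a reference held across Buffer.Resize goes stale: re-read
	// DataBuf after resizing instead of keeping an old slice.
	buf := make([]byte, 4, 4)
	old := buf                               // alias of the original backing array
	buf = append(buf, make([]byte, 1024)...) // forces reallocation (cap exceeded)
	buf[0] = 1                               // writes to the NEW array only
	fmt.Println(old[0], buf[0])              // old is unchanged: prints "0 1"
}
```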
MinShardSize is the minimum size per shard. Data is filled into shards 0..N continuously, aligned with zero bytes when the data size is less than MinShardSize*N.
If the real data length is less than MinShardSize*N, ShardSize = MinShardSize:
	| data | align bytes |     parity      | local |
	|  0  |  1  | .... |  N  | N+1 | ... | N+M | N+M+L |
If the data length is more than MinShardSize*N, ShardSize = ceil(len(data)/N):
	| data | padding |     parity      | local |
	|  0  |  1  | .... |  N  | N+1 | ... | N+M | N+M+L |
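The shard-size rule above can be sketched as a small Go function. The parameters n and minShardSize here are illustrative, not this package's configuration:

```go
package main

import "fmt"

// shardSize mirrors the rule above: with n data shards, data shorter than
// minShardSize*n is zero-padded so every shard is minShardSize; longer data
// uses ceil(len(data)/n).
func shardSize(dataLen, n, minShardSize int) int {
	if dataLen <= minShardSize*n {
		return minShardSize
	}
	return (dataLen + n - 1) / n // integer ceil(dataLen / n)
}

func main() {
	fmt.Println(shardSize(1000, 6, 2048))  // short data: prints 2048
	fmt.Println(shardSize(20000, 6, 2048)) // ceil(20000/6): prints 3334
}
```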
Example:
import (
	"github.com/cubefs/cubefs/blobstore/common/codemode"
	"github.com/cubefs/cubefs/blobstore/common/resourcepool"
)

func xxx() {
	pool := resourcepool.NewMemPool(nil)
	codeModeInfo := codemode.GetTactic(codemode.EC15p12)
	buffer, err := NewBuffer(1024, codeModeInfo, pool)
	if err != nil {
		// ...
	}

	// release
	defer buffer.Release()
	// release it in an anonymous func if you will resize
	defer func() {
		buffer.Release()
	}()

	io.Copy(buffer.DataBuf, io.Reader) // pseudocode: copy source data into DataBuf
	shards := ec.Split(buffer.ECDataBuf)
	// ...
	buffer.Resize(10240)
	// ...
}
Index ¶
Constants ¶
This section is empty.
Variables ¶
var (
	ErrShortData       = errors.New("short data")
	ErrInvalidCodeMode = errors.New("invalid code mode")
	ErrVerify          = errors.New("shards verify failed")
	ErrInvalidShards   = errors.New("invalid shards")
)
errors defined by this package
Functions ¶
This section is empty.
Types ¶
type Buffer ¶
type Buffer struct {
	// DataBuf real data buffer
	DataBuf []byte
	// ECDataBuf ec data buffer to Split,
	// is nil if Buffer is in Ranged mode
	ECDataBuf []byte
	// BufferSizes all sizes
	BufferSizes
	// contains filtered or unexported fields
}
Buffer is one EC blob's reused buffer. Manage the DataBuf manually in Ranged mode; do not Split it.
	| data | align bytes | parity | local |
	| DataBuf       |
	|<-- DataSize ->|
	| ECDataBuf           |
	|<--  ECDataSize   -->|
	| ECBuf                           |
	|<---         ECSize          --->|
func NewRangeBuffer ¶
func NewRangeBuffer(dataSize, from, to int, tactic codemode.Tactic, pool *resourcepool.MemPool) (*Buffer, error)
NewRangeBuffer returns a ranged buffer with the least required data size.
type BufferSizes ¶
type BufferSizes struct {
	ShardSize  int // per shard size
	DataSize   int // real data size
	ECDataSize int // ec data size only with data
	ECSize     int // ec buffer size with parity and local shards
	From       int // ranged from
	To         int // ranged to
}
BufferSizes holds all sizes. ECSize is the sum of all EC shard sizes; it is not equal to the capacity of DataBuf in Ranged mode.
func GetBufferSizes ¶
func GetBufferSizes(dataSize int, tactic codemode.Tactic) (BufferSizes, error)
GetBufferSizes calculates the EC buffer sizes.
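A sketch of the calculation GetBufferSizes performs, assuming a tactic with n data, m parity, and l local shards plus a minimum shard size; the real codemode.Tactic supplies these numbers, and the field names mirror BufferSizes (From/To are omitted since they only matter in Ranged mode):

```go
package main

import "fmt"

// bufferSizes mirrors the exported BufferSizes fields that depend on
// the computed shard size.
type bufferSizes struct {
	ShardSize  int
	DataSize   int
	ECDataSize int
	ECSize     int
}

// getBufferSizes is an illustrative sketch, not the package's implementation.
func getBufferSizes(dataSize, n, m, l, minShardSize int) bufferSizes {
	shardSize := minShardSize
	if dataSize > minShardSize*n {
		shardSize = (dataSize + n - 1) / n // ceil(dataSize / n)
	}
	return bufferSizes{
		ShardSize:  shardSize,
		DataSize:   dataSize,
		ECDataSize: shardSize * n,           // data shards only
		ECSize:     shardSize * (n + m + l), // every shard, incl. parity and local
	}
}

func main() {
	// prints {ShardSize:2048 DataSize:1024 ECDataSize:12288 ECSize:30720}
	fmt.Printf("%+v\n", getBufferSizes(1024, 6, 6, 3, 2048))
}
```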
type Encoder ¶
type Encoder interface {
	// Encode encodes source data into shards, whether normal EC or LRC
	Encode(shards [][]byte) error
	// Reconstruct reconstructs all missing shards; you should assign the missing or bad indexes in shards
	Reconstruct(shards [][]byte, badIdx []int) error
	// ReconstructData reconstructs only the data shards; you should assign the missing or bad indexes in shards
	ReconstructData(shards [][]byte, badIdx []int) error
	// Split splits source data into shards of the adapted size
	Split(data []byte) ([][]byte, error)
	// GetDataShards returns the data shards (no copy)
	GetDataShards(shards [][]byte) [][]byte
	// GetParityShards returns the parity shards (no copy)
	GetParityShards(shards [][]byte) [][]byte
	// GetLocalShards returns the local shards (LRC mode, no copy)
	GetLocalShards(shards [][]byte) [][]byte
	// GetShardsInIdc returns the shards in an IDC
	GetShardsInIdc(shards [][]byte, idx int) [][]byte
	// Join writes the source data into dst (io.Writer)
	Join(dst io.Writer, shards [][]byte, outSize int) error
	// Verify verifies the parity shards against the data shards
	Verify(shards [][]byte) (bool, error)
}
Encoder is a normal EC encoder that implements all of these functions.
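As an illustration of the Split contract (sub-slices of the source buffer, no copy), a minimal sketch not tied to this package's implementation; n and shardSize are illustrative, where the real Encoder derives them from its code mode:

```go
package main

import (
	"errors"
	"fmt"
)

// split cuts a contiguous buffer into n shards of shardSize bytes each,
// as sub-slices sharing the backing array (no copy is made).
func split(data []byte, n, shardSize int) ([][]byte, error) {
	if len(data) < n*shardSize {
		return nil, errors.New("short data")
	}
	shards := make([][]byte, n)
	for i := range shards {
		shards[i] = data[i*shardSize : (i+1)*shardSize]
	}
	return shards, nil
}

func main() {
	buf := make([]byte, 12)
	shards, err := split(buf, 3, 4)
	fmt.Println(len(shards), len(shards[0]), err) // prints "3 4 <nil>"

	// Writing through a shard mutates the source buffer: no copy was made.
	shards[1][0] = 7
	fmt.Println(buf[4]) // prints "7"
}
```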
func NewEncoder ¶
NewEncoder returns an encoder that supports normal EC or LRC.