throughputbuffer

package module v1.0.0
Published: Feb 27, 2024 License: BSD-2-Clause Imports: 6 Imported by: 0

README


Package throughputbuffer provides a high-throughput, indefinitely growing io.ReadWriter, like bytes.Buffer, but optimized for minimal copies. It performs the minimum number of copies (one per read plus one per write, or fewer via ReadFrom and WriteTo) and never has to move bytes within the buffer.

Byte slices are shared in a BufferPool. All byte slices are of the same length: the size given to New().

Buffers automatically shrink as data is read from them (they return their data to the pool).

package main

import (
	"crypto/rand"
	"io"

	"github.com/Jille/throughputbuffer"
)

func main() {
	pool := throughputbuffer.New(64 * 1024 * 1024)

	buf1 := pool.Get()
	io.CopyN(buf1, rand.Reader, 1024*1024*1024)
	buf2 := pool.Get()
	io.Copy(buf2, buf1) // As data is read from buf1, the byte slices are freed and reused by buf2.
}

Documentation

Overview

Package throughputbuffer provides a high-throughput, indefinitely growing io.ReadWriter. It performs the minimum number of copies (one per read plus one per write) and never has to move bytes in the buffer.

Memory is freed once read.

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Buffer

type Buffer struct {
	// contains filtered or unexported fields
}

Buffer is an io.ReadWriter that can grow indefinitely, performs the minimum number of copies (one per read plus one per write), and never has to move bytes.

func (*Buffer) Bytes

func (b *Buffer) Bytes() []byte

Bytes consumes all the data and returns it as a byte slice.

func (*Buffer) Clone added in v0.3.0

func (b *Buffer) Clone() *Buffer

Clone returns a copy of this Buffer. Existing data is shared between the two, but they can be used completely independently. Reads and writes to any of the clones won't affect the other. Clones (and the original) can be cloned again. Clone can only be called concurrently with itself after this Buffer has been cloned at least once. Clone can not be called concurrently with any other methods on *this* Buffer (but is immune to calls on clones of this Buffer).

func (*Buffer) Len

func (b *Buffer) Len() int

Len returns the number of bytes in the buffer.

func (*Buffer) Read

func (b *Buffer) Read(p []byte) (int, error)

Read consumes up to len(p) bytes from the buffer (fewer if the buffer holds less). The only error it can return is io.EOF.

func (*Buffer) ReadFrom

func (b *Buffer) ReadFrom(r io.Reader) (int64, error)

ReadFrom reads all data from r and returns the number of bytes read and the error from the reader.

func (*Buffer) Reset added in v0.2.0

func (b *Buffer) Reset()

Reset discards the contents back to the pool. The Buffer can be reused after this.

func (*Buffer) Write

func (b *Buffer) Write(p []byte) (int, error)

Write writes the data into the buffer and returns len(p), nil. The returned error is always nil.

func (*Buffer) WriteString added in v1.0.0

func (b *Buffer) WriteString(p string) (int, error)

WriteString is like Write, but accepts a string to potentially save you a copy.

func (*Buffer) WriteTo

func (b *Buffer) WriteTo(w io.Writer) (int64, error)

WriteTo calls w.Write() repeatedly with all the data in the buffer. Any returned error is straight from w.Write(). If an error is returned, the Buffer will have consumed those bytes but is otherwise still usable. If no error is returned, the Buffer will be empty after this. If the given Writer implements `syscall.Conn`, WriteTo will try to use `unix.Writev()`.

type BufferPool

type BufferPool struct {
	// contains filtered or unexported fields
}

BufferPool holds a sync.Pool of byte slices and can be used to create new Buffers.

func New

func New(blocksize int) *BufferPool

New creates a new BufferPool. The blocksize is the size of the internal []byte slices. It should be within a few orders of magnitude of the expected size of your buffers: a larger blocksize results in more memory being held but unused, while a smaller blocksize costs a few more CPU cycles.

func NewFromPool added in v0.4.0

func NewFromPool(pool *sync.Pool) *BufferPool

NewFromPool creates a new BufferPool. The pool must contain []byte or *[]byte as buffers. Used buffers are not zeroed before being returned to the pool. The pool's New function must be set.

func (*BufferPool) Get

func (p *BufferPool) Get() *Buffer

Get creates a new Buffer using byte slices from this pool.
