xz

package
v0.0.0-...-2d055ce

This package is not in the latest version of its module.
Published: Feb 2, 2015 License: BSD-3-Clause Imports: 12 Imported by: 0

Documentation

Overview

Package xz implements reading and writing files in the XZ file format.

Index

Constants

const (
	Extreme            = lib.PRESET_EXTREME
	BestSpeed          = lib.PRESET_LEVEL0
	BestCompression    = lib.PRESET_LEVEL9
	DefaultCompression = lib.PRESET_DEFAULT
)

The available compression mode constants.

These constants are copied from the lib package, so that code that imports "goxz/xz" does not also have to import "goxz/lib". The Extreme flag can be ORed with any of the other presets to try to improve the compression ratio at the cost of extra CPU cycles.
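For example, a preset can be combined with Extreme when constructing a writer. A minimal sketch against NewWriterCustom's documented signature (the out writer is illustrative, and error handling is elided):

```go
// Trade extra CPU time for a better ratio by ORing Extreme into the preset.
w, err := xz.NewWriterCustom(out, xz.BestCompression|xz.Extreme,
	xz.CheckDefault, xz.ChunkDefault, xz.WorkersDefault)
if err != nil {
	// handle err
}
defer w.Close()
```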

const (
	CheckNone    = lib.CHECK_NONE
	CheckCRC32   = lib.CHECK_CRC32
	CheckCRC64   = lib.CHECK_CRC64
	CheckSHA256  = lib.CHECK_SHA256
	CheckDefault = CheckCRC64
)

The available checksum mode constants.

These constants are copied from the lib package, so that code that imports "goxz/xz" does not also have to import "goxz/lib". The CheckNone flag disables the placement of checksums; its use is heavily discouraged.

const (
	ChunkStream  = -1            // Disables chunking, single chunk per stream
	ChunkLowest  = 64 * kiloByte // Minimum chunking size
	ChunkHighest = 1 * gigaByte  // Maximum chunking size
	ChunkDefault = 8 * megaByte  // Defaults to 8 MiB chunks
)

Chunk size encoding configuration constants.

These control the size of the individually compressed chunks. If the ChunkStream constant is used, a single stream with a single block of arbitrary size is output, and the encoder is forced to be non-parallelized. Any positive chunk size, on the other hand, lets the encoder speed up compression by operating on each block in parallel. The ChunkDefault value was chosen as a compromise between fair memory usage, high compression ratio, and good random access properties.
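The effect of the chunk size on parallelism is simple arithmetic: every positive chunk size splits the input into independently compressible blocks. A self-contained sketch (the constants below mirror the package's values and are restated only for illustration):

```go
package main

import "fmt"

// These mirror the package's chunk constants and are restated here only so
// the sketch is self-contained.
const (
	megaByte     = 1 << 20
	chunkStream  = -1
	chunkDefault = 8 * megaByte
)

// numChunks reports how many independently compressed chunks an input of n
// bytes yields at the given chunk size (the last chunk may be short).
func numChunks(n, chunkSize int64) int64 {
	if chunkSize <= 0 {
		return 1 // ChunkStream: a single block per stream
	}
	return (n + chunkSize - 1) / chunkSize // ceiling division
}

func main() {
	fmt.Println(numChunks(100*megaByte, chunkDefault)) // → 13
	fmt.Println(numChunks(100*megaByte, chunkStream))  // → 1
}
```

Each of those 13 chunks can be handed to a separate worker, which is why larger inputs see near-linear speedups until the worker limit is reached.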

const (
	BufferMin     = 64 * kiloByte
	BufferMax     = 1 * gigaByte
	BufferDefault = 8 * ChunkDefault
)

Maximum per-routine decoding buffer constants.

These control the maximum amount of buffer space that each decompression routine may use. In general, this buffer should be large enough to hold both the compressed and uncompressed data for a chunk. The value of BufferDefault was chosen to be able to handle larger chunk sizes. If a chunk is too large to handle during decoding, the decoder falls back to single-routine streaming mode. This loses the benefits of parallelism, but ensures that progress is made.

Note that this only controls the input/output buffers used before invoking the liblzma codec. The library itself may use orders of magnitude more memory than the allocated buffer size; the latest liblzma implementation estimates that the codec can use up to 10x more memory than the dictionary size. For files generated by this package, the dictionary size is usually the size of each chunk.

Furthermore, if a SeekReader object is created, a single cache with a size equivalent to this buffer is allocated to help absorb the short seek-and-read operations typical of random-access patterns. To keep the cache logic simple, the buffer size is rounded to the closest power of two.
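The rounding can be sketched as follows; the documentation says "closest power of two", and rounding up, as done here, is an assumption for illustration:

```go
package main

import (
	"fmt"
	"math/bits"
)

// roundPow2 rounds n up to the next power of two, leaving exact powers of
// two unchanged. bits.Len64(n-1) is the number of bits needed to represent
// n-1, so shifting 1 by that count gives the smallest power of two >= n.
func roundPow2(n uint64) uint64 {
	if n <= 1 {
		return 1
	}
	return 1 << bits.Len64(n-1)
}

func main() {
	fmt.Println(roundPow2(64 << 20))  // already a power of two → 67108864
	fmt.Println(roundPow2(100 << 20)) // 100 MiB rounds up to 128 MiB → 134217728
}
```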

const (
	WorkersSync    = 0  // Read and write operations are synchronous
	WorkersMin     = 1  // Use only one routine
	WorkersMax     = -1 // Use as many routines as needed
	WorkersDefault = 8  // Dynamically scales up to this many workers
)

Maximum number of worker routines constants.

The logic in this library tries to dynamically allocate the right number of workers. There is no point in parallelizing the work further if the calling program or underlying writer does not consume the output fast enough.
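The capping side of this can be sketched with a semaphore-bounded pool. This illustrates bounded concurrency only; the package's actual scheduler, which also scales down when the consumer is slow, is internal:

```go
package main

import (
	"fmt"
	"sync"
)

// compressChunks runs a stand-in "codec" over each chunk with at most max
// goroutines in flight. A sketch of bounded concurrency, not the package's
// real scheduler.
func compressChunks(chunks []string, max int) []string {
	out := make([]string, len(chunks))
	sem := make(chan struct{}, max) // counting semaphore: at most max workers
	var wg sync.WaitGroup
	for i, c := range chunks {
		sem <- struct{}{} // blocks once max workers are busy
		wg.Add(1)
		go func(i int, c string) {
			defer wg.Done()
			defer func() { <-sem }()
			out[i] = "compressed:" + c // stand-in for the real codec
		}(i, c)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(compressChunks([]string{"a", "b", "c"}, 8))
	// → [compressed:a compressed:b compressed:c]
}
```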

Variables

var (
	WarnNoCheck      = errors.New("XZ: No checksum in file")
	WarnUnknownCheck = errors.New("XZ: Unsupported checksum in file")
)

Error conditions due to unsupported check types.

These warnings may be returned when first creating the decoding object or during decoding of the stream itself. They are not fatal, and decompression may continue; however, the user should be aware that the decoder provides no guarantee that the data is valid.

Functions

This section is empty.

Types

type Reader

type Reader struct {
	// contains filtered or unexported fields
}

func NewReader

func NewReader(rd io.Reader) (*Reader, error)

func NewReaderCustom

func NewReaderCustom(rd io.Reader, maxBuffer, maxWorkers int) (_ *Reader, err error)

func (*Reader) Close

func (r *Reader) Close() error

func (*Reader) Read

func (r *Reader) Read(data []byte) (cnt int, err error)

type SeekReader

type SeekReader struct {
	Index *lib.Index // The raw index structure. Only call accessor methods.
	// contains filtered or unexported fields
}

func NewSeekReader

func NewSeekReader(rd io.ReaderAt) (*SeekReader, error)

If the ReaderAt provided also satisfies the ReadSeeker interface, then the native Read and Seek methods will be used.

func NewSeekReaderCustom

func NewSeekReaderCustom(rd io.ReaderAt, maxBuffer int, maxWorkers int) (_ *SeekReader, err error)

If the ReaderAt provided also satisfies the ReadSeeker interface, then the native Read and Seek methods will be used.

func (*SeekReader) Close

func (r *SeekReader) Close() error

func (*SeekReader) Read

func (r *SeekReader) Read(data []byte) (n int, err error)

func (*SeekReader) ReadAt

func (r *SeekReader) ReadAt(data []byte, off int64) (n int, err error)

func (*SeekReader) Seek

func (r *SeekReader) Seek(offset int64, whence int) (int64, error)

func (*SeekReader) SeekPoints

func (r *SeekReader) SeekPoints() []int64
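A hedged sketch of random access through a SeekReader, using only the constructor and methods documented above (the file name and offset are illustrative, and error handling is elided):

```go
f, _ := os.Open("data.xz") // any io.ReaderAt works
r, _ := xz.NewSeekReader(f)
defer r.Close()

// Decompress only the chunks covering this byte range.
buf := make([]byte, 4096)
n, _ := r.ReadAt(buf, 1<<20)
_ = buf[:n]

// SeekPoints lists offsets where decoding can restart cheaply.
fmt.Println(r.SeekPoints())
```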

type Writer

type Writer struct {
	// contains filtered or unexported fields
}

func NewWriter

func NewWriter(wr io.Writer) (*Writer, error)

func NewWriterCustom

func NewWriterCustom(wr io.Writer, level, checkType, chunkSize, maxWorkers int) (_ *Writer, err error)

func NewWriterLevel

func NewWriterLevel(wr io.Writer, level int) (*Writer, error)

func (*Writer) Close

func (w *Writer) Close() (err error)

func (*Writer) Write

func (w *Writer) Write(data []byte) (cnt int, err error)
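The Writer and Reader compose into a simple round trip. A hedged sketch using only the constructors documented above, with error handling elided:

```go
var compressed bytes.Buffer

w, _ := xz.NewWriter(&compressed)
w.Write([]byte("hello, xz"))
w.Close() // Close flushes the final chunk; skipping it truncates the stream

r, _ := xz.NewReader(&compressed)
defer r.Close()
plain, _ := io.ReadAll(r) // recover the original bytes
_ = plain
```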
