Documentation ¶
Overview ¶
Package xz implements reading and writing files in the XZ file format.
Index ¶
Constants ¶
const (
	Extreme            = lib.PRESET_EXTREME
	BestSpeed          = lib.PRESET_LEVEL0
	BestCompression    = lib.PRESET_LEVEL9
	DefaultCompression = lib.PRESET_DEFAULT
)
The available compression mode constants.
These constants are copied from the lib package, so that code that imports "goxz/xz" does not also have to import "goxz/lib". The Extreme flag can be ORed with any of the other options to try to improve the compression ratio at the cost of CPU cycles.
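As a sketch of how the Extreme flag combines with a level, the stand-in values below mirror liblzma's preset encoding (a 0-9 level plus a high-bit extreme flag); they are illustrative assumptions, not the actual values exported by goxz/lib.

```go
package main

import "fmt"

// Illustrative stand-ins for the preset constants. The real values are
// defined by goxz/lib (mirroring liblzma, where the extreme preset is a
// high-bit flag ORed onto the 0-9 compression level).
const (
	BestSpeed       uint32 = 0
	BestCompression uint32 = 9
	Extreme         uint32 = 1 << 31 // hypothetical flag bit, as in liblzma's LZMA_PRESET_EXTREME
)

func main() {
	// Combine a level with Extreme to trade CPU time for compression ratio.
	preset := BestCompression | Extreme
	fmt.Printf("level=%d extreme=%t\n", preset&0x1f, preset&Extreme != 0)
	// prints: level=9 extreme=true
}
```

Because the flag lives in a bit that no level uses, ORing it never changes the level itself.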
const (
	CheckNone    = lib.CHECK_NONE
	CheckCRC32   = lib.CHECK_CRC32
	CheckCRC64   = lib.CHECK_CRC64
	CheckSHA256  = lib.CHECK_SHA256
	CheckDefault = CheckCRC64
)
The available checksum mode constants.
These constants are copied from the lib package, so that code that imports "goxz/xz" does not also have to import "goxz/lib". The CheckNone flag disables the placement of checksums; its use is heavily discouraged.
const (
	ChunkStream  = -1            // Disables chunking, single chunk per stream
	ChunkLowest  = 64 * kiloByte // Minimum chunking size
	ChunkHighest = 1 * gigaByte  // Maximum chunking size
	ChunkDefault = 8 * megaByte  // Defaults to 8 MiB chunks
)
Chunk-size encoding configuration constants.
This controls the size of the individually compressed chunks. If the ChunkStream constant is used, then a single stream with a single block of arbitrary size will be output. Additionally, using ChunkStream forces the encoder to be non-parallelized. On the other hand, any positive chunk size allows the encoder to speed up compression by operating on each block in parallel. The ChunkDefault value was chosen as a compromise between fair memory usage, high compression ratio, and good random access properties.
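Assuming the usual binary size units for the unexported kiloByte/megaByte/gigaByte helpers (a guess at the obvious values; the package does not export them), the chunk arithmetic works out as follows:

```go
package main

import "fmt"

// Assumed binary size units; the package keeps these unexported, so the
// definitions here are a guess at the conventional values.
const (
	kiloByte = 1 << 10
	megaByte = 1 << 20
	gigaByte = 1 << 30
)

const (
	ChunkLowest  = 64 * kiloByte // 64 KiB minimum
	ChunkHighest = 1 * gigaByte  // 1 GiB maximum
	ChunkDefault = 8 * megaByte  // 8 MiB default
)

func main() {
	// A 1 GiB input at the default chunk size yields 128 independently
	// compressed blocks, each one a potential parallel work unit and a
	// random-access seek point.
	input := int64(1 * gigaByte)
	chunks := (input + ChunkDefault - 1) / ChunkDefault
	fmt.Println(chunks) // prints: 128
}
```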
const (
	BufferMin     = 64 * kiloByte
	BufferMax     = 1 * gigaByte
	BufferDefault = 8 * ChunkDefault
)
Maximum decoding buffer per routine constants.
This controls the maximum amount of buffer space that may be used by each decompression routine. In general, this buffer should be large enough to hold both the compressed data and the uncompressed data for a chunk. The value for BufferDefault was chosen to be able to handle larger chunk sizes. If a chunk is too large to handle during decoding, then the decoder will fall back to single-routine streaming mode. This loses the benefits of parallelism, but ensures that progress is made.
Note that this only controls the input/output buffers used before invoking the liblzma codec. The library itself may use orders of magnitude more memory than the allocated buffer size. The latest liblzma implementation estimates that the codec may use up to 10x more memory than the dictionary size. For files generated by this package, the dictionary size is usually the size of each chunk.
Furthermore, if a SeekReader object is created, a single cache with a size equivalent to this buffer will be allocated to help absorb the short seek-read operations typical of random access patterns. To keep the cache logic simple, the buffer size is rounded to the nearest power of two.
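The rounding described above can be sketched as follows; roundPow2 is a hypothetical helper (the package's actual cache logic is unexported), rounding to the nearest power of two and upward on ties.

```go
package main

import (
	"fmt"
	"math/bits"
)

// roundPow2 snaps n to the nearest power of two, rounding up on ties.
// This is an illustrative sketch of the cache-size rounding the package
// describes, not its actual implementation.
func roundPow2(n uint64) uint64 {
	if n == 0 {
		return 1
	}
	floor := uint64(1) << (bits.Len64(n) - 1) // largest power of two <= n
	ceil := floor << 1                        // smallest power of two > n
	if n-floor < ceil-n {
		return floor
	}
	return ceil
}

func main() {
	fmt.Println(roundPow2(64 << 20))  // 64 MiB is already a power of two: 67108864
	fmt.Println(roundPow2(100 << 20)) // 100 MiB rounds up to 128 MiB: 134217728
}
```

Note that BufferDefault (8 * ChunkDefault = 64 MiB under the assumed units) is already a power of two, so the default configuration is unaffected by the rounding.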
const (
	WorkersSync    = 0  // Read and write operations are synchronous
	WorkersMin     = 1  // Use only one routine
	WorkersMax     = -1 // Use as many routines as needed
	WorkersDefault = 8  // Dynamically scales up to this many workers
)
Maximum number of worker routines constants.
The logic in this library tries to dynamically allocate the right number of workers. There is no point in parallelizing the work further if the calling program or underlying writer does not consume the output fast enough.
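A minimal sketch of demand-driven scaling, under the assumption of a channel-based scheduler (the package's real logic is unexported and certainly more involved): a new goroutine is spawned only when no idle worker accepts the next job, up to the WorkersDefault cap, so a slow consumer naturally keeps the worker count low.

```go
package main

import (
	"fmt"
	"sync"
)

const WorkersDefault = 8 // cap on dynamically spawned workers

// compressAll is a hypothetical scheduler sketch: each job is offered to
// an idle worker first; only when none is ready (and the cap is not yet
// reached) is a new worker spawned. Returns how many workers were used.
func compressAll(jobs []int) int {
	var wg sync.WaitGroup
	ch := make(chan int)
	spawned := 0
	for _, j := range jobs {
		select {
		case ch <- j: // an idle worker took the job immediately
		default:
			if spawned < WorkersDefault {
				spawned++
				wg.Add(1)
				go func() {
					defer wg.Done()
					for range ch {
						// a real worker would compress one chunk here
					}
				}()
			}
			ch <- j // block until some worker frees up
		}
	}
	close(ch)
	wg.Wait()
	return spawned
}

func main() {
	fmt.Println("workers spawned <=", WorkersDefault, ":", compressAll(make([]int, 100)))
}
```

The exact number of workers spawned depends on scheduling, but it never exceeds the cap.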
Variables ¶
var (
	WarnNoCheck      = errors.New("XZ: No checksum in file")
	WarnUnknownCheck = errors.New("XZ: Unsupported checksum in file")
)
Error conditions due to unsupported check types.
These warnings may be returned when first creating the decoding object, or during decoding of the stream itself. These errors are not fatal, and decompression may continue. However, a user should be aware that the decoder provides no guarantees that the data is valid.
Functions ¶
This section is empty.
Types ¶
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
func NewReaderCustom ¶
type SeekReader ¶
type SeekReader struct {
	Index *lib.Index // The raw index structure. Only call accessor methods.
	// contains filtered or unexported fields
}
func NewSeekReader ¶
func NewSeekReader(rd io.ReaderAt) (*SeekReader, error)
If the ReaderAt provided also satisfies the io.ReadSeeker interface, then the native Read and Seek methods will be used.
func NewSeekReaderCustom ¶
If the ReaderAt provided also satisfies the io.ReadSeeker interface, then the native Read and Seek methods will be used.
func (*SeekReader) Close ¶
func (r *SeekReader) Close() error
func (*SeekReader) SeekPoints ¶
func (r *SeekReader) SeekPoints() []int64
type Writer ¶
type Writer struct {
// contains filtered or unexported fields
}