hdfs: github.com/colinmarc/hdfs

package hdfs

import "github.com/colinmarc/hdfs"

Package hdfs provides a native, idiomatic interface to HDFS. Where possible, it mimics the functionality and signatures of the standard `os` package.

Example:

client, _ := hdfs.New("namenode:8020")

file, _ := client.Open("/mobydick.txt")

buf := make([]byte, 59)
file.ReadAt(buf, 48847)

fmt.Println(string(buf))
// => Abominable are the tumblers into which he pours his poison.

Package Files

client.go content_summary.go error.go file_reader.go file_writer.go hdfs.go mkdir.go perms.go readdir.go remove.go rename.go stat.go stat_fs.go walk.go

type Client

type Client struct {
    // contains filtered or unexported fields
}

A Client represents a connection to an HDFS cluster.

func New

func New(address string) (*Client, error)

New returns a Client connected to the namenode(s) specified by address, or an error if it can't connect. Multiple namenodes can be specified by separating them with commas, for example "nn1:9000,nn2:9000".

The user will be the current system user. Any other relevant options (including the address(es) of the namenode(s), if an empty string is passed) will be loaded from the Hadoop configuration present at HADOOP_CONF_DIR or HADOOP_HOME, as specified by hadoopconf.LoadFromEnvironment and ClientOptionsFromConf.

Note, however, that New will not attempt any Kerberos authentication; use NewClient if you need that.

func NewClient

func NewClient(options ClientOptions) (*Client, error)

NewClient returns a connected Client for the given options, or an error if the client could not be created.
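As a minimal sketch (the namenode address and user below are placeholders, and error handling is collapsed to log.Fatal), creating a client with explicit options might look like:

```go
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	// Act as a specific HDFS user instead of the current system user.
	client, err := hdfs.NewClient(hdfs.ClientOptions{
		Addresses: []string{"nn1:9000"}, // placeholder namenode address
		User:      "hdfs",               // placeholder user; required unless Kerberos is used
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```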

func (*Client) Append

func (c *Client) Append(name string) (*FileWriter, error)

Append opens an existing file in HDFS and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.
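A sketch of appending to an existing file (the address and path are placeholders; note that Close must be checked, since it waits for the datanodes to acknowledge the buffered writes):

```go
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	client, err := hdfs.New("namenode:8020") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Append requires that the file already exists.
	file, err := client.Append("/logs/app.log") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	if _, err := file.Write([]byte("another line\n")); err != nil {
		log.Fatal(err)
	}

	// Close flushes the buffer and waits for datanode acknowledgements.
	if err := file.Close(); err != nil {
		log.Fatal(err)
	}
}
```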

func (*Client) Chmod

func (c *Client) Chmod(name string, perm os.FileMode) error

Chmod changes the mode of the named file to mode.

func (*Client) Chown

func (c *Client) Chown(name string, user, group string) error

Chown changes the user and group of the file. Unlike os.Chown, this takes a string username and group (since that's what HDFS uses).

If an empty string is passed for user or group, that field will not be changed remotely.

func (*Client) Chtimes

func (c *Client) Chtimes(name string, atime time.Time, mtime time.Time) error

Chtimes changes the access and modification times of the named file.

func (*Client) Close

func (c *Client) Close() error

Close terminates all underlying socket connections to the remote servers.

func (*Client) CopyToLocal

func (c *Client) CopyToLocal(src string, dst string) error

CopyToLocal copies the HDFS file specified by src to the local file at dst. If dst already exists, it will be overwritten.

func (*Client) CopyToRemote

func (c *Client) CopyToRemote(src string, dst string) error

CopyToRemote copies the local file specified by src to the HDFS file at dst.

func (*Client) Create

func (c *Client) Create(name string) (*FileWriter, error)

Create opens a new file in HDFS with the default replication, block size, and permissions (0644), and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.
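A sketch of writing a new file (placeholder address and path), showing the required Close at the end:

```go
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
)

func main() {
	client, err := hdfs.New("namenode:8020") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Create uses the default replication, block size, and 0644 permissions.
	file, err := client.Create("/tmp/hello.txt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	if _, err := file.Write([]byte("hello, hdfs\n")); err != nil {
		log.Fatal(err)
	}

	// Because writes are buffered and acknowledged asynchronously, Close
	// must be called (and its error checked) once all data is written.
	if err := file.Close(); err != nil {
		log.Fatal(err)
	}
}
```

For explicit control over replication and block size, CreateFile takes those as parameters instead.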

func (*Client) CreateEmptyFile

func (c *Client) CreateEmptyFile(name string) error

CreateEmptyFile creates an empty file at the given name, with the permissions 0644.

func (*Client) CreateFile

func (c *Client) CreateFile(name string, replication int, blockSize int64, perm os.FileMode) (*FileWriter, error)

CreateFile opens a new file in HDFS with the given replication, block size, and permissions, and returns an io.WriteCloser for writing to it. Because of the way that HDFS writes are buffered and acknowledged asynchronously, it is very important that Close is called after all data has been written.

func (*Client) GetContentSummary

func (c *Client) GetContentSummary(name string) (*ContentSummary, error)

GetContentSummary returns a ContentSummary representing the named file or directory. The summary contains information about the entire tree rooted in the named file; for instance, it can return the total size of all the files it contains.

func (*Client) Mkdir

func (c *Client) Mkdir(dirname string, perm os.FileMode) error

Mkdir creates a new directory with the specified name and permission bits.

func (*Client) MkdirAll

func (c *Client) MkdirAll(dirname string, perm os.FileMode) error

MkdirAll creates a directory for dirname, along with any necessary parents, and returns nil, or else returns an error. The permission bits perm are used for all directories that MkdirAll creates. If dirname is already a directory, MkdirAll does nothing and returns nil.

func (*Client) Open

func (c *Client) Open(name string) (*FileReader, error)

Open returns a FileReader which can be used for reading.

func (*Client) ReadDir

func (c *Client) ReadDir(dirname string) ([]os.FileInfo, error)

ReadDir reads the directory named by dirname and returns a list of sorted directory entries.

The os.FileInfo values returned will not have block locations attached to the struct returned by Sys().

func (*Client) ReadFile

func (c *Client) ReadFile(filename string) ([]byte, error)

ReadFile reads the file named by filename and returns the contents.

func (*Client) Remove

func (c *Client) Remove(name string) error

Remove removes the named file or (empty) directory.

func (*Client) RemoveAll

func (c *Client) RemoveAll(name string) error

RemoveAll removes path and any children it contains. It removes everything it can but returns the first error it encounters. If the path does not exist, RemoveAll returns nil (no error).

func (*Client) Rename

func (c *Client) Rename(oldpath, newpath string) error

Rename renames (moves) a file.

func (*Client) Stat

func (c *Client) Stat(name string) (os.FileInfo, error)

Stat returns an os.FileInfo describing the named file or directory.

func (*Client) StatFs

func (c *Client) StatFs() (FsInfo, error)

StatFs returns an FsInfo containing global filesystem statistics, as reported by the namenode.

func (*Client) User

func (c *Client) User() string

User returns the user that the Client is acting under. This is either the current system user or the kerberos principal.

func (*Client) Walk

func (c *Client) Walk(root string, walkFn filepath.WalkFunc) error

Walk walks the file tree rooted at root, calling walkFn for each file or directory in the tree, including root. All errors that arise visiting files and directories are filtered by walkFn. The files are walked in lexical order, which makes the output deterministic but means that for very large directories Walk can be inefficient. Walk does not follow symbolic links.
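A sketch of walking a tree (placeholder address and root path); the callback has the standard filepath.WalkFunc signature:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/colinmarc/hdfs"
)

func main() {
	client, err := hdfs.New("namenode:8020") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Print every path under /data along with its size.
	err = client.Walk("/data", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err // stop on the first error encountered
		}
		fmt.Println(path, info.Size())
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```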

type ClientOptions

type ClientOptions struct {
    // Addresses specifies the namenode(s) to connect to.
    Addresses []string
    // User specifies which HDFS user the client will act as. It is required
    // unless kerberos authentication is enabled, in which case it will be
    // determined from the provided credentials if empty.
    User string
    // UseDatanodeHostname specifies whether the client should connect to the
    // datanodes via hostname (which is useful in multi-homed setups) or IP
    // address, which may be required if DNS isn't available.
    UseDatanodeHostname bool
    // NamenodeDialFunc is used to connect to the namenodes. If nil, then
    // (&net.Dialer{}).DialContext is used.
    NamenodeDialFunc func(ctx context.Context, network, addr string) (net.Conn, error)
    // DatanodeDialFunc is used to connect to the datanodes. If nil, then
    // (&net.Dialer{}).DialContext is used.
    DatanodeDialFunc func(ctx context.Context, network, addr string) (net.Conn, error)
    // KerberosClient is used to connect to kerberized HDFS clusters. If provided,
    // the client will always mutually authenticate when connecting to the
    // namenode(s).
    KerberosClient *krb.Client
    // KerberosServicePrincipleName specifies the Service Principal Name
    // (<SERVICE>/<FQDN>) for the namenode(s). Like in the
    // dfs.namenode.kerberos.principal property of core-site.xml, the special
    // string '_HOST' can be substituted for the address of the namenode in a
    // multi-namenode setup (for example: 'nn/_HOST'). It is required if
    // KerberosClient is provided.
    KerberosServicePrincipleName string
}

ClientOptions represents the configurable options for a client. The NamenodeDialFunc and DatanodeDialFunc options can be used to set connection timeouts:

dialFunc := (&net.Dialer{
    Timeout:   30 * time.Second,
    KeepAlive: 30 * time.Second,
    DualStack: true,
}).DialContext

options := ClientOptions{
    Addresses: []string{"nn1:9000"},
    NamenodeDialFunc: dialFunc,
    DatanodeDialFunc: dialFunc,
}

func ClientOptionsFromConf

func ClientOptionsFromConf(conf hadoopconf.HadoopConf) ClientOptions

ClientOptionsFromConf attempts to load any relevant configuration options from the given Hadoop configuration and create a ClientOptions struct suitable for creating a Client. Currently this sets the following fields on the resulting ClientOptions:

// Determined by fs.defaultFS (or the deprecated fs.default.name), or
// fields beginning with dfs.namenode.rpc-address.
Addresses []string

// Determined by dfs.client.use.datanode.hostname.
UseDatanodeHostname bool

// Set to a non-nil but empty client (without credentials) if the value of
// hadoop.security.authentication is 'kerberos'. It must then be replaced
// with a credentialed Kerberos client.
KerberosClient *krb.Client

// Determined by dfs.namenode.kerberos.principal, with the realm
// (everything after the first '@') chopped off.
KerberosServicePrincipleName string

Because of the way Kerberos can be forced by the Hadoop configuration but not actually configured, you should check for whether KerberosClient is set in the resulting ClientOptions before proceeding:

options := ClientOptionsFromConf(conf)
if options.KerberosClient != nil {
   // Replace with a valid credentialed client.
   options.KerberosClient = getKerberosClient()
}
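Putting the pieces together, a sketch of the full configuration-driven flow (the user is a placeholder, and the Kerberos branch is simply rejected here rather than populated with real credentials):

```go
package main

import (
	"log"

	"github.com/colinmarc/hdfs"
	"github.com/colinmarc/hdfs/hadoopconf"
)

func main() {
	// Load the Hadoop configuration from HADOOP_CONF_DIR or HADOOP_HOME.
	conf, err := hadoopconf.LoadFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}

	options := hdfs.ClientOptionsFromConf(conf)
	if options.KerberosClient != nil {
		// The configuration demands Kerberos; a credentialed client would
		// need to be supplied here.
		log.Fatal("kerberos required but not configured")
	}
	options.User = "hdfs" // placeholder user

	client, err := hdfs.NewClient(options)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```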

type ContentSummary

type ContentSummary struct {
    // contains filtered or unexported fields
}

ContentSummary represents a set of information about a file or directory in HDFS. It's provided directly by the namenode, and has no unix filesystem analogue.

func (*ContentSummary) DirectoryCount

func (cs *ContentSummary) DirectoryCount() int

DirectoryCount returns the number of directories under the named one, including any subdirectories, and including the root directory itself. If the named path is a file, this returns 0.

func (*ContentSummary) FileCount

func (cs *ContentSummary) FileCount() int

FileCount returns the number of files under the named path, including any subdirectories. If the named path is a file, FileCount returns 1.

func (*ContentSummary) NameQuota

func (cs *ContentSummary) NameQuota() int

NameQuota returns the HDFS configured "name quota" for the named path. The name quota is a hard limit on the number of directories and files inside a directory; see http://goo.gl/sOSJmJ for more information.

func (*ContentSummary) Size

func (cs *ContentSummary) Size() int64

Size returns the total size of the named path, including any subdirectories.

func (*ContentSummary) SizeAfterReplication

func (cs *ContentSummary) SizeAfterReplication() int64

SizeAfterReplication returns the total size of the named path, including any subdirectories. Unlike Size, it counts the total replicated size of each file, and represents the total on-disk footprint for a tree in HDFS.

func (*ContentSummary) SpaceQuota

func (cs *ContentSummary) SpaceQuota() int64

SpaceQuota returns the HDFS configured "space quota" for the named path. The space quota is a hard limit on the number of bytes used by all files inside a directory; see http://goo.gl/sOSJmJ for more information.

type Error

type Error interface {
    // Method returns the RPC method that encountered an error.
    Method() string
    // Desc returns the long form of the error code (for example ERROR_CHECKSUM).
    Desc() string
    // Exception returns the java exception class name (for example
    // java.io.FileNotFoundException).
    Exception() string
    // Message returns the full error message, complete with java exception
    // traceback.
    Message() string
}

Error represents a remote java exception from an HDFS namenode or datanode.

type FileInfo

type FileInfo struct {
    // contains filtered or unexported fields
}

FileInfo implements os.FileInfo, and provides information about a file or directory in HDFS.

func (*FileInfo) AccessTime

func (fi *FileInfo) AccessTime() time.Time

AccessTime returns the last time the file was accessed. It's not part of the os.FileInfo interface.

func (*FileInfo) IsDir

func (fi *FileInfo) IsDir() bool

func (*FileInfo) ModTime

func (fi *FileInfo) ModTime() time.Time

func (*FileInfo) Mode

func (fi *FileInfo) Mode() os.FileMode

func (*FileInfo) Name

func (fi *FileInfo) Name() string

func (*FileInfo) Owner

func (fi *FileInfo) Owner() string

Owner returns the name of the user that owns the file or directory. It's not part of the os.FileInfo interface.

func (*FileInfo) OwnerGroup

func (fi *FileInfo) OwnerGroup() string

OwnerGroup returns the name of the group that owns the file or directory. It's not part of the os.FileInfo interface.

func (*FileInfo) Size

func (fi *FileInfo) Size() int64

func (*FileInfo) Sys

func (fi *FileInfo) Sys() interface{}

Sys returns the raw *hadoop_hdfs.HdfsFileStatusProto message from the namenode.

type FileReader

type FileReader struct {
    // contains filtered or unexported fields
}

A FileReader represents an existing file or directory in HDFS. It implements io.Reader, io.ReaderAt, io.Seeker, and io.Closer, and can only be used for reads. For writes, see FileWriter and Client.Create.

func (*FileReader) Checksum

func (f *FileReader) Checksum() ([]byte, error)

Checksum returns HDFS's internal "MD5MD5CRC32C" checksum for a given file.

Internally to HDFS, it works by calculating the MD5 of all the CRCs (which are stored alongside the data) for each block, and then calculating the MD5 of all of those.

func (*FileReader) Close

func (f *FileReader) Close() error

Close implements io.Closer.

func (*FileReader) Name

func (f *FileReader) Name() string

Name returns the name of the file.

func (*FileReader) Read

func (f *FileReader) Read(b []byte) (int, error)

Read implements io.Reader.

func (*FileReader) ReadAt

func (f *FileReader) ReadAt(b []byte, off int64) (int, error)

ReadAt implements io.ReaderAt.

func (*FileReader) Readdir

func (f *FileReader) Readdir(n int) ([]os.FileInfo, error)

Readdir reads the contents of the directory associated with file and returns a slice of up to n os.FileInfo values, as would be returned by Stat, in directory order. Subsequent calls on the same file will yield further os.FileInfos.

If n > 0, Readdir returns at most n os.FileInfo values. In this case, if Readdir returns an empty slice, it will return a non-nil error explaining why. At the end of a directory, the error is io.EOF.

If n <= 0, Readdir returns all the os.FileInfo from the directory in a single slice. In this case, if Readdir succeeds (reads all the way to the end of the directory), it returns the slice and a nil error. If it encounters an error before the end of the directory, Readdir returns the os.FileInfo read until that point and a non-nil error.

The os.FileInfo values returned will not have block locations attached to the struct returned by Sys(). To fetch that information, make a separate call to Stat.

Note that making multiple calls to Readdir with a smallish n (as you might do with the os version) is slower than just requesting everything at once. That's because HDFS has no mechanism for limiting the number of entries returned; whatever extra entries it returns are simply thrown away.

func (*FileReader) Readdirnames

func (f *FileReader) Readdirnames(n int) ([]string, error)

Readdirnames reads and returns a slice of names from the directory f.

If n > 0, Readdirnames returns at most n names. In this case, if Readdirnames returns an empty slice, it will return a non-nil error explaining why. At the end of a directory, the error is io.EOF.

If n <= 0, Readdirnames returns all the names from the directory in a single slice. In this case, if Readdirnames succeeds (reads all the way to the end of the directory), it returns the slice and a nil error. If it encounters an error before the end of the directory, Readdirnames returns the names read until that point and a non-nil error.

func (*FileReader) Seek

func (f *FileReader) Seek(offset int64, whence int) (int64, error)

Seek implements io.Seeker.

The seek is virtual - it starts a new block read at the new position.

func (*FileReader) SetDeadline

func (f *FileReader) SetDeadline(t time.Time) error

SetDeadline sets the deadline for future Read, ReadAt, and Checksum calls. A zero value for t means those calls will not time out.

func (*FileReader) Stat

func (f *FileReader) Stat() os.FileInfo

Stat returns the FileInfo structure describing file.

type FileWriter

type FileWriter struct {
    // contains filtered or unexported fields
}

A FileWriter represents a writer for an open file in HDFS. It implements Writer and Closer, and can only be used for writes. For reads, see FileReader and Client.Open.

func (*FileWriter) Close

func (f *FileWriter) Close() error

Close closes the file, writing any remaining data out to disk and waiting for acknowledgements from the datanodes. It is important that Close is called after all data has been written.

func (*FileWriter) Flush

func (f *FileWriter) Flush() error

Flush flushes any buffered data out to the datanodes. Even immediately after a call to Flush, it is still necessary to call Close once all data has been written.

func (*FileWriter) SetDeadline

func (f *FileWriter) SetDeadline(t time.Time) error

SetDeadline sets the deadline for future Write, Flush, and Close calls. A zero value for t means those calls will not time out.

Note that because of buffering, Write calls that do not result in a blocking network call may still succeed after the deadline.
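A sketch of bounding a write with a deadline (placeholder address and path; the 30-second window is an arbitrary choice):

```go
package main

import (
	"log"
	"time"

	"github.com/colinmarc/hdfs"
)

func main() {
	client, err := hdfs.New("namenode:8020") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	file, err := client.Create("/tmp/out.txt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	// Bound subsequent Write, Flush, and Close calls by an absolute deadline.
	if err := file.SetDeadline(time.Now().Add(30 * time.Second)); err != nil {
		log.Fatal(err)
	}

	if _, err := file.Write([]byte("data\n")); err != nil {
		log.Fatal(err)
	}
	if err := file.Close(); err != nil {
		log.Fatal(err)
	}
}
```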

func (*FileWriter) Write

func (f *FileWriter) Write(b []byte) (int, error)

Write implements io.Writer for writing to a file in HDFS. Internally, it writes data to an internal buffer first, and then later out to HDFS. Because of this, it is important that Close is called after all data has been written.

type FsInfo

type FsInfo struct {
    Capacity              uint64
    Used                  uint64
    Remaining             uint64
    UnderReplicated       uint64
    CorruptBlocks         uint64
    MissingBlocks         uint64
    MissingReplOneBlocks  uint64
    BlocksInFuture        uint64
    PendingDeletionBlocks uint64
}

FsInfo provides information about HDFS.

Directories

Path          Synopsis
cmd/hdfs
hadoopconf    Package hadoopconf provides utilities for reading and parsing Hadoop's xml configuration files.

Package hdfs imports 20 packages and is imported by 39 packages. Updated 2018-12-08.