csvmap

package module
v0.0.0-...-bdfca34
Published: Mar 31, 2017 License: MIT Imports: 4 Imported by: 0

README


CSV Map

CSV Map is a wrapper for the csv package in Go's standard library, designed to provide easy map-based access to csv files with header rows.

Installation

This package can be installed with the go get command

go get github.com/andrewcharlton/csvmap

Documentation

API documentation can be found on GoDoc. The API is designed to stay as close as possible to that of the original csv package, except that maps are returned instead of slices.

Example Usage

Reading
func ExampleReader() {

	in := `name,alias,superpower
Logan,Wolverine,"Super healing and adamantium claws"
Charles Xavier,Professor X,Telepathy
`

	r := csvmap.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println("Name:", record["name"])
		fmt.Println("Alias:", record["alias"])
		fmt.Println("Superpower:", record["superpower"])
		fmt.Println("")
	}

	// Output:
	// Name: Logan
	// Alias: Wolverine
	// Superpower: Super healing and adamantium claws
	//
	// Name: Charles Xavier
	// Alias: Professor X
	// Superpower: Telepathy
	//
}
Writing
func ExampleWriter() {

	headers := []string{"Name", "Alias", "Superpower"}
	data := []map[string]string{
		{"Name": "Logan", "Alias": "Wolverine", "Superpower": "Super healing"},
		{"Name": "Charles Xavier", "Alias": "Professor X", "Superpower": "Telepathy"},
	}

	out := &bytes.Buffer{}
	w := csvmap.NewWriter(out, headers)

	err := w.WriteAll(data)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(out.String())

	// Output:
	// Name,Alias,Superpower
	// Logan,Wolverine,Super healing
	// Charles Xavier,Professor X,Telepathy
	//

}

Documentation

Overview

Package csvmap wraps the standard library's encoding/csv package to support reading maps from, and writing maps to, csv files.

Because this package only wraps encoding/csv, it only supports the csv file format specified in RFC 4180. Please see the documentation for encoding/csv for more details.

This package assumes that the first row of the csv file contains header data, which provides the keys for subsequent map access. Reading:

Header1,Header2,Header3
Field1,Field2,Field3

results in:

{"Header1":"Field1", "Header2":"Field2", "Header3":"Field3"}

For files with lines of additional header information, the Discard function is provided to remove these before reading the header row.

A Writer is also provided for writing mapped data to file.

w := NewWriter(..., []string{"Header1", "Header2", "Header3"})
w.Write(map[string]string{"Header1":"Field1", "Header2":"Field2", "Header3":"Field3"})
w.Flush()

results in:

Header1,Header2,Header3
Field1,Field2,Field3


Constants

This section is empty.

Variables

var (
	// ErrDuplicateHeaders is returned when there are duplicated items in the
	// header row.
	ErrDuplicateHeaders = errors.New("duplicate headers found")

	// ErrHeaderSet is returned when trying to discard lines after the headers
	// have already been read - either through Headers() or Read/ReadAll.
	ErrHeaderSet = errors.New("headers set, can't discard lines")
)

Functions

This section is empty.

Types

type Reader

type Reader struct {
	// Comma is the field delimiter.
	// It is set to comma (',') by NewReader.
	Comma rune
	// Comment, if not 0, is the comment character. Lines beginning with the
	// Comment character without preceding whitespace are ignored.
	// With leading whitespace the Comment character becomes part of the
	// field, even if TrimLeadingSpace is true.
	Comment rune
	// If LazyQuotes is true, a quote may appear in an unquoted field and a
	// non-doubled quote may appear in a quoted field.
	LazyQuotes bool
	// If TrimLeadingSpace is true, leading white space in a field is ignored.
	// This is done even if the field delimiter, Comma, is white space.
	TrimLeadingSpace bool
	// contains filtered or unexported fields
}

A Reader returns records (a map of values) from a csv-encoded file.

As returned by NewReader, a Reader expects input conforming to RFC 4180. The exported fields can be changed to customize the details before the first call to Headers/Read/ReadAll.

The header row will be read on the first call to Headers/Read/ReadAll. If there are duplicated keys in the header, an ErrDuplicateHeaders error will be returned at this point.

Example
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/andrewcharlton/csvmap"
)

func main() {

	in := `name,alias,superpower
Logan,Wolverine,"Super healing and adamantium claws"
Charles Xavier,Professor X,Telepathy
`

	r := csvmap.NewReader(strings.NewReader(in))

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}

		fmt.Println("Name:", record["name"])
		fmt.Println("Alias:", record["alias"])
		fmt.Println("Superpower:", record["superpower"])
		fmt.Println("")
	}

}
Output:

Name: Logan
Alias: Wolverine
Superpower: Super healing and adamantium claws

Name: Charles Xavier
Alias: Professor X
Superpower: Telepathy

func NewReader

func NewReader(r io.Reader) *Reader

NewReader returns a reader that will read from r.

func (*Reader) Discard

func (r *Reader) Discard(n int) error

Discard ignores the first n lines of the input reader before reading the headers. It should be called before the first call to Headers/Read/ReadAll; otherwise it will return an ErrHeaderSet error.

If there are insufficient lines to discard, it will return an io.EOF error.

func (*Reader) Headers

func (r *Reader) Headers() ([]string, error)

Headers returns the column headers.

func (*Reader) Read

func (r *Reader) Read() (map[string]string, error)

Read reads one record (a map of header keys to field values). If the record has an unexpected number of fields, Read returns a map of the values present, along with a csv.ErrFieldCount error. Except for that case, Read always returns either a non-nil record or a non-nil error, but not both. If there is no data left to be read, Read returns nil, io.EOF.

On the first call to Read/ReadAll, if the headers have not already been set by a call to Headers, this will be done automatically.

func (*Reader) ReadAll

func (r *Reader) ReadAll() ([]map[string]string, error)

ReadAll reads all the remaining records from r. Each record is a map of header keys to field values. A successful call returns err == nil, not err == io.EOF. Because ReadAll is defined to read until EOF, it does not treat end of file as an error to be reported.

type Writer

type Writer struct {
	Comma   rune // Field delimiter (set to ',' by NewWriter)
	UseCRLF bool // True to use \r\n as the line terminator
	// contains filtered or unexported fields
}

A Writer writes records to a csv-encoded file.

As returned by NewWriter, a Writer writes records terminated by a newline and uses ',' as the field delimiter. The exported fields can be changed to customize the details before the first write.

Comma is the field delimiter.

If UseCRLF is true, the Writer ends each record with \r\n instead of \n.

Example
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/andrewcharlton/csvmap"
)

func main() {

	headers := []string{"Name", "Alias", "Superpower"}
	data := []map[string]string{
		{"Name": "Logan", "Alias": "Wolverine", "Superpower": "Super healing"},
		{"Name": "Charles Xavier", "Alias": "Professor X", "Superpower": "Telepathy"},
	}

	out := &bytes.Buffer{}
	w := csvmap.NewWriter(out, headers)

	err := w.WriteAll(data)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(out.String())

}
Output:

Name,Alias,Superpower
Logan,Wolverine,Super healing
Charles Xavier,Professor X,Telepathy

func NewWriter

func NewWriter(w io.Writer, headers []string) *Writer

NewWriter returns a new writer that writes to w.

The file headers must be specified to provide the order in which record fields will be written to the writer. If headers are provided which don't match the fields in the records, these columns will be left blank.

func (*Writer) Error

func (w *Writer) Error() error

Error reports any error that has occurred during a previous Write or Flush.

func (*Writer) Flush

func (w *Writer) Flush()

Flush writes any buffered data to the underlying io.Writer. To check if an error occurred during the Flush, call Error.

func (*Writer) Write

func (w *Writer) Write(record map[string]string) error

Write writes a single record to w along with any necessary quoting.

Write will only write fields whose keys are in the writer's headers. If a record has keys missing, an empty column will be written.

func (*Writer) WriteAll

func (w *Writer) WriteAll(records []map[string]string) error

WriteAll writes multiple records to w using Write, and then calls Flush.
