tablib

package module
v0.0.0-...-4930582
Published: Mar 10, 2016 License: MIT Imports: 18 Imported by: 33

README

go-tablib: format-agnostic tabular dataset library


Go-Tablib is a format-agnostic tabular dataset library written in Go. It is a port of the famous Python tablib library by Kenneth Reitz, with some new features.

Export formats supported:

  • JSON (Sets + Books)
  • YAML (Sets + Books)
  • XLSX (Sets + Books)
  • XML (Sets + Books)
  • TSV (Sets)
  • CSV (Sets)
  • ASCII + Markdown (Sets)
  • MySQL (Sets)
  • Postgres (Sets)

Loading formats supported:

  • JSON (Sets + Books)
  • YAML (Sets + Books)
  • XML (Sets)
  • CSV (Sets)
  • TSV (Sets)

Overview

tablib.Dataset

A Dataset is a table of tabular data. It must have a header row. Datasets can be exported to JSON, YAML, CSV, TSV, and XML. They can be filtered, sorted, and validated against constraints on their columns.

tablib.Databook

A Databook is a set of Datasets. The most common form of a Databook is an Excel file with multiple spreadsheets. Databooks can be exported to JSON, YAML and XML.

tablib.Exportable

An Exportable is a struct that holds a buffer representing the Databook or Dataset after it has been formatted in one of the supported export formats. At this point the Databook or Dataset can no longer be modified, but it can be returned as a string or a []byte, or written to an io.Writer or a file.

Usage

Create a dataset and populate it:

ds := NewDataset([]string{"firstName", "lastName"})

Add new rows:

ds.Append([]interface{}{"John", "Adams"})
ds.AppendValues("George", "Washington")

Add new columns:

ds.AppendColumn("age", []interface{}{90, 67})
ds.AppendColumnValues("sex", "male", "male")

Add a dynamic column by passing a function that has access to the current row and returns a computed value:

func lastNameLen(row []interface{}) interface{} {
	return len(row[1].(string))
}
ds.AppendDynamicColumn("lastName length", lastNameLen)
ds.CSV()
// >>
// firstName, lastName, age, sex, lastName length
// John, Adams, 90, male, 5
// George, Washington, 67, male, 10

Delete rows:

ds.DeleteRow(1) // starts at 0

Delete columns:

ds.DeleteColumn("sex")

Get a row or multiple rows:

row, _ := ds.Row(0)
fmt.Println(row["firstName"]) // George

rows, _ := ds.Rows(0, 1)
fmt.Println(rows[0]["firstName"]) // George
fmt.Println(rows[1]["firstName"]) // Thomas

Slice a Dataset:

newDs, _ := ds.Slice(1, 5) // returns a fresh Dataset with rows 1 through 4 (upper bound exclusive)

Filtering

You can add tags to rows using dedicated Dataset methods. This lets you filter the Dataset later, which is useful for separating rows by arbitrary criteria (e.g. origin) that you don’t want to store in the Dataset itself.

ds := NewDataset([]string{"Maker", "Model"})
ds.AppendTagged([]interface{}{"Porsche", "911"}, "fast", "luxury")
ds.AppendTagged([]interface{}{"Skoda", "Octavia"}, "family")
ds.AppendTagged([]interface{}{"Ferrari", "458"}, "fast", "luxury")
ds.AppendValues("Citroen", "Picasso")
ds.AppendValues("Bentley", "Continental")
ds.Tag(4, "luxury") // Bentley
ds.AppendValuesTagged("Aston Martin", "DB9", /* these are tags */ "fast", "luxury")

Filtering the Dataset is possible by calling Filter(tags ...string):

luxuryCars, err := ds.Filter("luxury").CSV()
fmt.Println(luxuryCars)
// >>>
// Maker,Model
// Porsche,911
// Ferrari,458
// Bentley,Continental
// Aston Martin,DB9
fastCars, err := ds.Filter("fast").CSV()
fmt.Println(fastCars)
// >>>
// Maker,Model
// Porsche,911
// Ferrari,458
// Aston Martin,DB9

Tags at a specific row can be retrieved by calling Dataset.Tags(index int).
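Under the hood, tag-based filtering can be pictured as a set of tags carried alongside each row. The following stdlib-only sketch illustrates the mechanism (a simplified model, not the library's actual internals — `taggedRow` and `filterByTag` are hypothetical names):

```go
package main

import "fmt"

// taggedRow pairs a data row with its tags, mirroring how a Dataset
// can associate tags with individual rows.
type taggedRow struct {
	row  []interface{}
	tags []string
}

// filterByTag returns the rows carrying the given tag.
func filterByTag(rows []taggedRow, tag string) [][]interface{} {
	var out [][]interface{}
	for _, r := range rows {
		for _, t := range r.tags {
			if t == tag {
				out = append(out, r.row)
				break
			}
		}
	}
	return out
}

func main() {
	rows := []taggedRow{
		{row: []interface{}{"Porsche", "911"}, tags: []string{"fast", "luxury"}},
		{row: []interface{}{"Skoda", "Octavia"}, tags: []string{"family"}},
		{row: []interface{}{"Ferrari", "458"}, tags: []string{"fast", "luxury"}},
	}
	for _, r := range filterByTag(rows, "luxury") {
		fmt.Println(r[0], r[1]) // Porsche 911, then Ferrari 458
	}
}
```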

Sorting

Datasets can be sorted by a specific column.

ds := NewDataset([]string{"Maker", "Model", "Year"})
ds.AppendValues("Porsche", "991", 2012)
ds.AppendValues("Skoda", "Octavia", 2011)
ds.AppendValues("Ferrari", "458", 2009)
ds.AppendValues("Citroen", "Picasso II", 2013)
ds.AppendValues("Bentley", "Continental GT", 2003)

sorted, err := ds.Sort("Year").CSV()
fmt.Println(sorted)
// >>
// Maker, Model, Year
// Bentley, Continental GT, 2003
// Ferrari, 458, 2009
// Skoda, Octavia, 2011
// Porsche, 991, 2012
// Citroen, Picasso II, 2013

Constraining

Dataset columns can be constrained by functions, and the Dataset can then be checked for validity.

ds := NewDataset([]string{"Maker", "Model", "Year"})
ds.AppendValues("Porsche", "991", 2012)
ds.AppendValues("Skoda", "Octavia", 2011)
ds.AppendValues("Ferrari", "458", 2009)
ds.AppendValues("Citroen", "Picasso II", 2013)
ds.AppendValues("Bentley", "Continental GT", 2003)

ds.ConstrainColumn("Year", func(val interface{}) bool { return val.(int) > 2008 })
ds.ValidFailFast() // false
if !ds.Valid() { // validate the whole dataset, errors are retrieved in Dataset.ValidationErrors
	ds.ValidationErrors[0] // Row: 4, Column: 2
}

A Dataset with constrained columns can be filtered to keep only the rows satisfying the constraints.

valid := ds.ValidSubset().Tabular("simple") // Cars after 2008
fmt.Println(valid)

Will output:

------------  ---------------  ---------
      Maker            Model       Year
------------  ---------------  ---------
    Porsche              991       2012

      Skoda          Octavia       2011

    Ferrari              458       2009

    Citroen       Picasso II       2013
------------  ---------------  ---------
invalid := ds.InvalidSubset().Tabular("simple") // Cars before 2008
fmt.Println(invalid)

Will output:

------------  -------------------  ---------
      Maker                Model       Year
------------  -------------------  ---------
    Bentley       Continental GT       2003
------------  -------------------  ---------

Loading

JSON
ds, _ := LoadJSON([]byte(`[
  {"age":90,"firstName":"John","lastName":"Adams"},
  {"age":67,"firstName":"George","lastName":"Washington"},
  {"age":83,"firstName":"Henry","lastName":"Ford"}
]`))
YAML
ds, _ := LoadYAML([]byte(`- age: 90
  firstName: John
  lastName: Adams
- age: 67
  firstName: George
  lastName: Washington
- age: 83
  firstName: Henry
  lastName: Ford`))

Exports

Exportable

Each of the following export formats returns an *Exportable, which means you can use:

  • Bytes() to get the content as a byte array
  • String() to get the content as a string
  • WriteTo(io.Writer) to write the content to an io.Writer
  • WriteFile(filename string, perm os.FileMode) to write to a file

This avoids unnecessary conversions between string and []byte when outputting or writing. Thanks to @figlief for the suggestion.

JSON
json, _ := ds.JSON()
fmt.Println(json)

Will output:

[{"age":90,"firstName":"John","lastName":"Adams"},{"age":67,"firstName":"George","lastName":"Washington"},{"age":83,"firstName":"Henry","lastName":"Ford"}]
XML
xml, _ := ds.XML()
fmt.Println(xml)

Will output:

<dataset>
 <row>
   <age>90</age>
   <firstName>John</firstName>
   <lastName>Adams</lastName>
 </row>  <row>
   <age>67</age>
   <firstName>George</firstName>
   <lastName>Washington</lastName>
 </row>  <row>
   <age>83</age>
   <firstName>Henry</firstName>
   <lastName>Ford</lastName>
 </row>
</dataset>
CSV
csv, _ := ds.CSV()
fmt.Println(csv)

Will output:

firstName,lastName,age
John,Adams,90
George,Washington,67
Henry,Ford,83
TSV
tsv, _ := ds.TSV()
fmt.Println(tsv)

Will output:

firstName lastName  age
John  Adams  90
George  Washington  67
Henry Ford 83
YAML
yaml, _ := ds.YAML()
fmt.Println(yaml)

Will output:

- age: 90
  firstName: John
  lastName: Adams
- age: 67
  firstName: George
  lastName: Washington
- age: 83
  firstName: Henry
  lastName: Ford
HTML
html := ds.HTML()
fmt.Println(html)

Will output:

<table class="table table-striped">
	<thead>
		<tr>
			<th>firstName</th>
			<th>lastName</th>
			<th>age</th>
		</tr>
	</thead>
	<tbody>
		<tr>
			<td>George</td>
			<td>Washington</td>
			<td>90</td>
		</tr>
		<tr>
			<td>Henry</td>
			<td>Ford</td>
			<td>67</td>
		</tr>
		<tr>
			<td>Foo</td>
			<td>Bar</td>
			<td>83</td>
		</tr>
	</tbody>
</table>
XLSX
xlsx, _ := ds.XLSX()
fmt.Println(xlsx)
// >>>
// binary content
xlsx.WriteTo(...)
ASCII
Grid format
ascii := ds.Tabular("grid" /* tablib.TabularGrid */)
fmt.Println(ascii)

Will output:

+--------------+---------------+--------+
|    firstName |      lastName |    age |
+==============+===============+========+
|       George |    Washington |     90 |
+--------------+---------------+--------+
|        Henry |          Ford |     67 |
+--------------+---------------+--------+
|          Foo |           Bar |     83 |
+--------------+---------------+--------+
Simple format
ascii := ds.Tabular("simple" /* tablib.TabularSimple */)
fmt.Println(ascii)

Will output:

--------------  ---------------  --------
    firstName         lastName       age
--------------  ---------------  --------
       George       Washington        90

        Henry             Ford        67

          Foo              Bar        83
--------------  ---------------  --------
Condensed format
ascii := ds.Tabular("condensed" /* tablib.TabularCondensed */)
fmt.Println(ascii)

Similar to simple, but with fewer line feeds:

--------------  ---------------  --------
    firstName         lastName       age
--------------  ---------------  --------
       George       Washington        90
        Henry             Ford        67
          Foo              Bar        83
--------------  ---------------  --------
Markdown

Markdown tables are similar to the Tabular condensed format, except that they have pipe characters separating columns.

mkd := ds.Markdown() // or
mkd := ds.Tabular("markdown" /* tablib.TabularMarkdown */)
fmt.Println(mkd)

Will output:

|     firstName   |       lastName    |    gpa  |
| --------------  | ---------------   | ------- |
|          John   |          Adams    |     90  |
|        George   |     Washington    |     67  |
|        Thomas   |      Jefferson    |     50  |

Which is equivalent to the following when rendered as HTML:

firstName lastName gpa
John Adams 90
George Washington 67
Thomas Jefferson 50
MySQL
sql := ds.MySQL("presidents")
fmt.Println(sql)

Will output:

CREATE TABLE IF NOT EXISTS presidents
(
	id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
	firstName VARCHAR(9),
	lastName VARCHAR(8),
	gpa DOUBLE
);

INSERT INTO presidents VALUES(1, 'Jacques', 'Chirac', 88);
INSERT INTO presidents VALUES(2, 'Nicolas', 'Sarkozy', 98);
INSERT INTO presidents VALUES(3, 'François', 'Hollande', 34);

COMMIT;

Numeric types (uint, int, float, ...) are stored as DOUBLE, strings as VARCHAR with a width equal to the length of the longest string in the column, and time.Time values as TIMESTAMP.

Postgres
sql := ds.Postgres("presidents")
fmt.Println(sql)

Will output:

CREATE TABLE IF NOT EXISTS presidents
(
	id SERIAL PRIMARY KEY,
	firstName TEXT,
	lastName TEXT,
	gpa NUMERIC
);

INSERT INTO presidents VALUES(1, 'Jacques', 'Chirac', 88);
INSERT INTO presidents VALUES(2, 'Nicolas', 'Sarkozy', 98);
INSERT INTO presidents VALUES(3, 'François', 'Hollande', 34);

COMMIT;

Numeric types (uint, int, float, ...) are stored as NUMERIC, strings as TEXT, and time.Time values as TIMESTAMP.

Databooks

This is an example of how to use Databooks.

db := NewDatabook()
// or loading a JSON content
db, err := LoadDatabookJSON([]byte(`...`))
// or a YAML content
db, err := LoadDatabookYAML([]byte(`...`))

// a dataset of presidents
presidents, _ := LoadJSON([]byte(`[
  {"Age":90,"First name":"John","Last name":"Adams"},
  {"Age":67,"First name":"George","Last name":"Washington"},
  {"Age":83,"First name":"Henry","Last name":"Ford"}
]`))

// a dataset of cars
cars := NewDataset([]string{"Maker", "Model", "Year"})
cars.AppendValues("Porsche", "991", 2012)
cars.AppendValues("Skoda", "Octavia", 2011)
cars.AppendValues("Ferrari", "458", 2009)
cars.AppendValues("Citroen", "Picasso II", 2013)
cars.AppendValues("Bentley", "Continental GT", 2003)

// add the sheets to the Databook
db.AddSheet("Cars", cars.Sort("Year"))
db.AddSheet("Presidents", presidents.SortReverse("Age"))

json, _ := db.JSON()
fmt.Println(json)

Will output the following JSON representation of the Databook:

[
  {
    "title": "Cars",
    "data": [
      {"Maker":"Bentley","Model":"Continental GT","Year":2003},
      {"Maker":"Ferrari","Model":"458","Year":2009},
      {"Maker":"Skoda","Model":"Octavia","Year":2011},
      {"Maker":"Porsche","Model":"991","Year":2012},
      {"Maker":"Citroen","Model":"Picasso II","Year":2013}
    ]
  },
  {
    "title": "Presidents",
    "data": [
      {"Age":90,"First name":"John","Last name":"Adams"},
      {"Age":83,"First name":"Henry","Last name":"Ford"},
      {"Age":67,"First name":"George","Last name":"Washington"}
    ]
  }
]

Installation

go get github.com/agrison/go-tablib

For those wanting the v1 version, where export methods returned a string and not an Exportable:

go get gopkg.in/agrison/go-tablib.v1

TODO

  • Loading in more formats
  • Support more formats: DBF, XLS, LATEX, ...

Contribute

It is a work in progress, so some bugs and edge cases may not yet be covered by the test suite.

But we're on GitHub and this is open source, so pull requests are more than welcome. Come and have some fun :)

Acknowledgements

Thanks to kennethreitz for the original implementation in Python, and to github.com/bndr/gotabulate, github.com/clbanning/mxj, github.com/tealeg/xlsx and gopkg.in/yaml.v2.

Documentation

Overview

Package tablib is a format-agnostic tabular dataset library, written in Go. It allows you to import, export, and manipulate tabular data sets. Advanced features include dynamic columns, tags & filtering, and seamless format import & export.

Index

Constants

This section is empty.

Variables

View Source
var (
	// ErrInvalidDimensions is returned when trying to append/insert too much
	// or not enough values to a row or column
	ErrInvalidDimensions = errors.New("tablib: Invalid dimension")
	// ErrInvalidColumnIndex is returned when trying to insert a column at an
	// invalid index
	ErrInvalidColumnIndex = errors.New("tablib: Invalid column index")
	// ErrInvalidRowIndex is returned when trying to insert a row at an
	// invalid index
	ErrInvalidRowIndex = errors.New("tablib: Invalid row index")
	// ErrInvalidDataset is returned when trying to validate a Dataset against
	// the constraints that have been set on its columns.
	ErrInvalidDataset = errors.New("tablib: Invalid dataset")
	// ErrInvalidTag is returned when trying to add a tag which is not a string.
	ErrInvalidTag = errors.New("tablib: A tag must be a string")
)
View Source
var (
	// TabularGrid is the value to be passed to gotabulate to render the table
	// as ASCII table with grid format
	TabularGrid = "grid"
	// TabularSimple is the value to be passed to gotabulate to render the table
	// as ASCII table with simple format
	TabularSimple = "simple"
	// TabularCondensed is the value to be passed to gotabulate to render the table
	// as ASCII table with condensed format
	TabularCondensed = "condensed"
	// TabularMarkdown is the value to be passed to gotabulate to render the table
	// as ASCII table with Markdown format
	TabularMarkdown = "markdown"
)

Functions

This section is empty.

Types

type ColumnConstraint

type ColumnConstraint func(interface{}) bool

ColumnConstraint represents a function that is bound as a constraint to a column so that its values can be validated.
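In practice a ColumnConstraint is just a predicate closure over a cell value. A standalone sketch (the type is copied from the definition above; the surrounding program is illustrative):

```go
package main

import "fmt"

// ColumnConstraint mirrors tablib's type: a predicate over a cell value.
type ColumnConstraint func(interface{}) bool

func main() {
	// A constraint accepting only int years strictly after 2008,
	// matching the ConstrainColumn example in the README.
	after2008 := ColumnConstraint(func(val interface{}) bool {
		y, ok := val.(int)
		return ok && y > 2008
	})
	fmt.Println(after2008(2012)) // true
	fmt.Println(after2008(2003)) // false
}
```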

type Databook

type Databook struct {
	// contains filtered or unexported fields
}

Databook represents a Databook which is an array of sheets.

func LoadDatabookJSON

func LoadDatabookJSON(jsonContent []byte) (*Databook, error)

LoadDatabookJSON loads a Databook from a JSON source.

func LoadDatabookYAML

func LoadDatabookYAML(yamlContent []byte) (*Databook, error)

LoadDatabookYAML loads a Databook from a YAML source.

func NewDatabook

func NewDatabook() *Databook

NewDatabook constructs a new Databook.

func (*Databook) AddSheet

func (d *Databook) AddSheet(title string, dataset *Dataset)

AddSheet adds a sheet to the Databook.

func (*Databook) HTML

func (d *Databook) HTML() *Exportable

HTML returns an HTML representation of the Databook as an Exportable.

func (*Databook) JSON

func (d *Databook) JSON() (*Exportable, error)

JSON returns a JSON representation of the Databook as an Exportable.

func (*Databook) Sheet

func (d *Databook) Sheet(title string) Sheet

Sheet returns the sheet with a specific title.

func (*Databook) Sheets

func (d *Databook) Sheets() map[string]Sheet

Sheets returns the sheets in the Databook.

func (*Databook) Size

func (d *Databook) Size() int

Size returns the number of sheets in the Databook.

func (*Databook) Wipe

func (d *Databook) Wipe()

Wipe removes all Dataset objects from the Databook.

func (*Databook) XLSX

func (d *Databook) XLSX() (*Exportable, error)

XLSX returns an XLSX representation of the Databook as an Exportable.

func (*Databook) XML

func (d *Databook) XML() (*Exportable, error)

XML returns an XML representation of the Databook as an Exportable.

func (*Databook) YAML

func (d *Databook) YAML() (*Exportable, error)

YAML returns a YAML representation of the Databook as an Exportable.

type Dataset

type Dataset struct {
	// EmptyValue represents the string value to be output if a field cannot be
	// formatted as a string during output of certain formats.
	EmptyValue string

	ValidationErrors []ValidationError
	// contains filtered or unexported fields
}

Dataset represents a set of data, which is a list of data and header for each column.

func LoadCSV

func LoadCSV(input []byte) (*Dataset, error)

LoadCSV loads a Dataset by its CSV representation.

func LoadJSON

func LoadJSON(jsonContent []byte) (*Dataset, error)

LoadJSON loads a Dataset from a JSON source.

func LoadTSV

func LoadTSV(input []byte) (*Dataset, error)

LoadTSV loads a Dataset by its TSV representation.

func LoadXML

func LoadXML(input []byte) (*Dataset, error)

LoadXML loads a Dataset from an XML source.

func LoadYAML

func LoadYAML(yamlContent []byte) (*Dataset, error)

LoadYAML loads a Dataset from a YAML source.

func NewDataset

func NewDataset(headers []string) *Dataset

NewDataset creates a new Dataset.

func NewDatasetWithData

func NewDatasetWithData(headers []string, data [][]interface{}) *Dataset

NewDatasetWithData creates a new Dataset prepopulated with the given data.

func (*Dataset) Append

func (d *Dataset) Append(row []interface{}) error

Append appends a row of values to the Dataset.

func (*Dataset) AppendColumn

func (d *Dataset) AppendColumn(header string, cols []interface{}) error

AppendColumn appends a new column with values to the Dataset.

func (*Dataset) AppendColumnValues

func (d *Dataset) AppendColumnValues(header string, cols ...interface{}) error

AppendColumnValues appends a new column with values to the Dataset.

func (*Dataset) AppendConstrainedColumn

func (d *Dataset) AppendConstrainedColumn(header string, constraint ColumnConstraint, cols []interface{}) error

AppendConstrainedColumn appends a constrained column to the Dataset.

func (*Dataset) AppendDynamicColumn

func (d *Dataset) AppendDynamicColumn(header string, fn DynamicColumn)

AppendDynamicColumn appends a dynamic column to the Dataset.

func (*Dataset) AppendTagged

func (d *Dataset) AppendTagged(row []interface{}, tags ...string) error

AppendTagged appends a row of values to the Dataset with one or multiple tags for filtering purposes.

func (*Dataset) AppendValues

func (d *Dataset) AppendValues(row ...interface{}) error

AppendValues appends a row of values to the Dataset.

func (*Dataset) AppendValuesTagged

func (d *Dataset) AppendValuesTagged(row ...interface{}) error

AppendValuesTagged appends a row of values to the Dataset with one or multiple tags for filtering purposes.

func (*Dataset) CSV

func (d *Dataset) CSV() (*Exportable, error)

CSV returns a CSV representation of the Dataset as an Exportable.

func (*Dataset) Column

func (d *Dataset) Column(header string) []interface{}

Column returns all the values for a specific column; returns nil if the column is not found.

func (*Dataset) ConstrainColumn

func (d *Dataset) ConstrainColumn(header string, constraint ColumnConstraint)

ConstrainColumn adds a constraint to a column in the Dataset.

func (*Dataset) DeleteColumn

func (d *Dataset) DeleteColumn(header string) error

DeleteColumn deletes a column from the Dataset.

func (*Dataset) DeleteRow

func (d *Dataset) DeleteRow(row int) error

DeleteRow deletes a row at a specific index.

func (*Dataset) Dict

func (d *Dataset) Dict() []interface{}

Dict returns the Dataset as a slice of maps where each key is a column header.

func (*Dataset) Filter

func (d *Dataset) Filter(tags ...string) *Dataset

Filter returns a fresh Dataset including only the rows previously tagged with one of the given tags.

func (*Dataset) HTML

func (d *Dataset) HTML() *Exportable

HTML returns the HTML representation of the Dataset as an Exportable.

func (*Dataset) HasAnyConstraint

func (d *Dataset) HasAnyConstraint() bool

HasAnyConstraint returns whether the Dataset has any constraint set.

func (*Dataset) Headers

func (d *Dataset) Headers() []string

Headers returns the headers of the Dataset.

func (*Dataset) Height

func (d *Dataset) Height() int

Height returns the number of rows in the Dataset.

func (*Dataset) Insert

func (d *Dataset) Insert(index int, row []interface{}) error

Insert inserts a row at a given index.

func (*Dataset) InsertColumn

func (d *Dataset) InsertColumn(index int, header string, cols []interface{}) error

InsertColumn inserts a new column at a given index.

func (*Dataset) InsertConstrainedColumn

func (d *Dataset) InsertConstrainedColumn(index int, header string,
	constraint ColumnConstraint, cols []interface{}) error

InsertConstrainedColumn inserts a new constrained column at a given index.

func (*Dataset) InsertDynamicColumn

func (d *Dataset) InsertDynamicColumn(index int, header string, fn DynamicColumn) error

InsertDynamicColumn inserts a new dynamic column at a given index.

func (*Dataset) InsertTagged

func (d *Dataset) InsertTagged(index int, row []interface{}, tags ...string) error

InsertTagged inserts a row at a given index with specific tags.

func (*Dataset) InsertValues

func (d *Dataset) InsertValues(index int, values ...interface{}) error

InsertValues inserts a row of values at a given index.

func (*Dataset) InvalidSubset

func (d *Dataset) InvalidSubset() *Dataset

InvalidSubset returns a new Dataset containing only the rows that fail to validate their constraints. If no constraints are set, it returns the same instance. Note: the returned Dataset is free of any constraints; tags are preserved.

func (*Dataset) JSON

func (d *Dataset) JSON() (*Exportable, error)

JSON returns a JSON representation of the Dataset as an Exportable.

func (*Dataset) Markdown

func (d *Dataset) Markdown() *Exportable

Markdown returns a Markdown table Exportable representation of the Dataset.

func (*Dataset) MySQL

func (d *Dataset) MySQL(table string) *Exportable

MySQL returns a string representing a suite of MySQL commands recreating the Dataset into a table.

func (*Dataset) Postgres

func (d *Dataset) Postgres(table string) *Exportable

Postgres returns a string representing a suite of Postgres commands recreating the Dataset into a table.

func (*Dataset) Records

func (d *Dataset) Records() [][]string

Records returns the Dataset as a two-dimensional array of strings. The first row of the returned array holds the column headers of the Dataset.

func (*Dataset) Row

func (d *Dataset) Row(index int) (map[string]interface{}, error)

Row returns a map representing a specific row of the Dataset. Returns tablib.ErrInvalidRowIndex if the row cannot be found.

func (*Dataset) Rows

func (d *Dataset) Rows(index ...int) ([]map[string]interface{}, error)

Rows returns an array of maps representing a set of specific rows of the Dataset. Returns tablib.ErrInvalidRowIndex if a row cannot be found.

func (*Dataset) Slice

func (d *Dataset) Slice(lower, upperNonInclusive int) (*Dataset, error)

Slice returns a new Dataset representing a slice of the original Dataset, like a slice of an array. Returns tablib.ErrInvalidRowIndex if the lower or upper bound is out of range.

func (*Dataset) Sort

func (d *Dataset) Sort(column string) *Dataset

Sort sorts the Dataset by a specific column. Returns a new Dataset.

func (*Dataset) SortReverse

func (d *Dataset) SortReverse(column string) *Dataset

SortReverse sorts the Dataset by a specific column in reverse order. Returns a new Dataset.

func (*Dataset) Stack

func (d *Dataset) Stack(other *Dataset) (*Dataset, error)

Stack stacks two Datasets by joining them at the row level, and returns a new combined Dataset.

func (*Dataset) StackColumn

func (d *Dataset) StackColumn(other *Dataset) (*Dataset, error)

StackColumn stacks two Datasets by joining them at the column level, and returns a new combined Dataset.

func (*Dataset) TSV

func (d *Dataset) TSV() (*Exportable, error)

TSV returns a TSV representation of the Dataset as an Exportable.

func (*Dataset) Tabular

func (d *Dataset) Tabular(format string) *Exportable

Tabular returns a tabular Exportable representation of the Dataset. format is one of grid, simple, condensed or markdown.

func (*Dataset) Tag

func (d *Dataset) Tag(index int, tags ...string) error

Tag tags a row at a given index with specific tags. Returns ErrInvalidRowIndex if the row does not exist.

func (*Dataset) Tags

func (d *Dataset) Tags(index int) ([]string, error)

Tags returns the tags of a row at a given index. Returns ErrInvalidRowIndex if the row does not exist.

func (*Dataset) Transpose

func (d *Dataset) Transpose() *Dataset

Transpose transposes a Dataset, turning rows into columns and vice versa, returning a new Dataset instance. The first row of the original instance becomes the new header row. Tags, constraints and dynamic columns are lost in the returned Dataset. TODO

func (*Dataset) Valid

func (d *Dataset) Valid() bool

Valid returns whether the Dataset is valid with regard to the constraints previously set on its columns. Unlike ValidFailFast, it validates the whole Dataset, and all validation errors are then available via Dataset.ValidationErrors.

func (*Dataset) ValidFailFast

func (d *Dataset) ValidFailFast() bool

ValidFailFast returns whether the Dataset is valid with regard to the constraints previously set on its columns, stopping at the first validation error.

func (*Dataset) ValidSubset

func (d *Dataset) ValidSubset() *Dataset

ValidSubset returns a new Dataset containing only the rows that validate their constraints. This is similar to what Filter() does with tags, but with constraints. If no constraints are set, it returns the same instance. Note: the returned Dataset is free of any constraints; tags are preserved.

func (*Dataset) Width

func (d *Dataset) Width() int

Width returns the number of columns in the Dataset.

func (*Dataset) XLSX

func (d *Dataset) XLSX() (*Exportable, error)

XLSX exports the Dataset as a byte array representing the .xlsx format.

func (*Dataset) XML

func (d *Dataset) XML() (*Exportable, error)

XML returns an XML representation of the Dataset as an Exportable.

func (*Dataset) XMLWithTagNamePrefixIndent

func (d *Dataset) XMLWithTagNamePrefixIndent(tagName, prefix, indent string) (*Exportable, error)

XMLWithTagNamePrefixIndent returns an XML representation with a custom tag name, prefix and indent.

func (*Dataset) YAML

func (d *Dataset) YAML() (*Exportable, error)

YAML returns a YAML representation of the Dataset as an Exportable.

type DynamicColumn

type DynamicColumn func([]interface{}) interface{}

DynamicColumn represents a function that can be evaluated dynamically when exporting to a predefined format.

type Exportable

type Exportable struct {
	// contains filtered or unexported fields
}

Exportable represents an exported dataset; it cannot be manipulated at this point and can only be converted to a string or []byte, or written to an io.Writer. The Exportable struct simply holds a bytes.Buffer that the tablib library writes export content into; the real work is delegated to bytes.Buffer.

func (*Exportable) Bytes

func (e *Exportable) Bytes() []byte

Bytes returns the contents of the exported dataset as a byte array.

func (*Exportable) String

func (e *Exportable) String() string

String returns the contents of the exported dataset as a string.

func (*Exportable) WriteFile

func (e *Exportable) WriteFile(filename string, perm os.FileMode) error

WriteFile writes the databook or dataset content to a file named by filename. If the file does not exist, WriteFile creates it with permissions perm; otherwise WriteFile truncates it before writing.

func (*Exportable) WriteTo

func (e *Exportable) WriteTo(w io.Writer) (int64, error)

WriteTo writes the exported dataset to w.

type Sheet

type Sheet struct {
	// contains filtered or unexported fields
}

Sheet represents a sheet in a Databook, holding a title (if any) and a dataset.

func (Sheet) Dataset

func (s Sheet) Dataset() *Dataset

Dataset returns the dataset of the sheet.

func (Sheet) Title

func (s Sheet) Title() string

Title returns the title of the sheet.

type ValidationError

type ValidationError struct {
	Row    int
	Column int
}

ValidationError holds the position of a value in the Dataset that has failed to validate a constraint.
