README

trdsql


trdsql is a CLI tool that can execute SQL queries on CSV, LTSV, JSON, YAML and TBLN files.

This tool is similar to others such as q and textql, with a key distinction: it allows the use of PostgreSQL or MySQL syntax.

For usage as a library, please refer to the godoc and the provided examples.


1. INSTALL

1.1. go get
go get -d github.com/noborus/trdsql
cd $GOPATH/src/github.com/noborus/trdsql
make
make install
1.1.1. Requirements

go 1.19 or higher.

1.2. Download binary

Download the binary from the releases page (Linux/Windows/macOS).

1.3. Homebrew
brew install noborus/tap/trdsql
1.4. MacPorts
sudo port selfupdate
sudo port install trdsql
1.5. FreeBSD

freshports

pkg install trdsql
1.6. Cgo free

Typically, go-sqlite3 is used for building. However, if you're building with CGO_ENABLED=0, consider using sqlite instead.

Building without CGO (CGO Free) can reduce issues related to cross-compiling, but it may result in slower execution times.
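
For example, a minimal sketch (it assumes the Makefile simply invokes go build and therefore inherits CGO_ENABLED from the environment; the CGO-free driver is then selected at run time with -driver sqlite, as listed in the global options):

CGO_ENABLED=0 make
trdsql -driver sqlite "SELECT count(*) FROM test.csv"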

2. Docker

2.1. Docker pull

Pull the latest image from Docker Hub.

docker pull noborus/trdsql
2.2. image build

Or build it yourself.

docker build -t trdsql .
2.3. Docker Run

Docker run.

docker run --rm -it -v $(pwd):/tmp trdsql [options and commands]

3. Usage

To use trdsql, you can either specify an SQL query or simply provide a file for conversion.

trdsql [options] SQL

For file conversion, the following is equivalent to executing 'trdsql -o[output format] "SELECT * FROM filename"':

trdsql -o[output format] -t [input filename]
3.1. Global options
  • -a filename analyze the file and suggest SQL.
  • -A filename analyze the file but only suggest SQL.
  • -config filename configuration file location.
  • -db db name specify db name of the setting.
  • -dblist display the db list from the configuration.
  • -driver string database driver. [ mysql | postgres | sqlite3 | sqlite(CGO Free) ] (default "sqlite3")
  • -dsn string database driver specific data source name.
  • -debug debug print.
  • -help display usage information.
  • -q filename read query from the specified file.
  • -t filename read table name from the specified file.
  • -version display version information.
3.2. Input formats
  • -ig guess format from extension. (default)
  • -icsv CSV format for input.
  • -ijson JSON format for input.
  • -iltsv LTSV format for input.
  • -iyaml YAML format for input.
  • -itbln TBLN format for input.
  • -iwidth width specification format for input.
3.2.1. Input options
  • -ih the first line is interpreted as column names(CSV only).
  • -id character field delimiter for input(default ",")(CSV only).
  • -ijq string jq expression string for input(JSON/JSONL only).
  • -ilr int limited number of rows to read.
  • -inull string value(string) to convert to null on input.
  • -ir int number of rows to preread. (default 1)
  • -is int skip header row.
3.3. Output formats
  • -ocsv CSV format for output. (default)
  • -ojson JSON format for output.
  • -ojsonl JSONL(JSON Lines) format for output.
  • -oltsv LTSV format for output.
  • -oat ASCII Table format for output.
  • -omd Markdown format for output.
  • -oraw Raw format for output.
  • -ovf Vertical format for output.
  • -oyaml YAML format for output.
  • -otbln TBLN format for output.

Or, guess the output format by file name.

3.3.1. Output options
  • -out filename output file name.
  • -out-without-guess output without guessing (when using -out).
  • -oh output column name as header.
  • -od character field delimiter for output. (default ",")(CSV and RAW only).
  • -oq character quote character for output. (default """)(CSV only).
  • -oaq enclose all fields in quotes for output(CSV only).
  • -ocrlf use CRLF for output. End each output line with '\r\n' instead of '\n' (CSV only).
  • -onowrap do not wrap long columns(AT and MD only).
  • -onull value(string) to convert from null on output.
  • -oz string compression format for output. [ gzip | bz2 | zstd | lz4 | xz ]
3.4. Handling of NULL

Most text formats have no defined representation of NULL. JSON null is considered the same as SQL NULL. For formats other than JSON, you must specify a string that is to be treated as NULL. In most cases you will want to specify an empty string ("").

If -inull "" is specified, an empty string will be treated as SQL NULL.

On output, SQL NULL becomes an empty string by default. Specify the -onull "(NULL)" option if you want a different string.

$ echo "1,,v" | trdsql -inull "" -onull "(NULL)" "SELECT * FROM -"
1,(NULL),v

In the case of JSON, null is treated as NULL as is, and the string specified with -inull is additionally converted to NULL.

$ echo '[1,null,""]' | trdsql -inull "" -ojson -ijson "SELECT * FROM -"
[
  {
    "c1": "1"
  },
  {
    "c1": null
  },
  {
    "c1": null
  }
]
3.5. Multiple queries (v0.20.0 or later)

Multiple queries can be executed by separating them with a semicolon. Update queries must be followed by a SELECT statement.

$ trdsql "UPDATE test.csv SET c2='banana' WHERE c1='3';SELECT * FROM test.csv"
1,Orange
2,Melon
3,banana

You can perform multiple SELECTs, but the output will be in one format.

$ trdsql -oh "SELECT c1,c2 FROM test.csv;SELECT c2,c1 FROM test.csv"
c1,c2
1,Orange
2,Melon
3,Apple
c2,c1
Orange,1
Melon,2
Apple,3

4. Example

test.csv file.

1,Orange
2,Melon
3,Apple

Write the file name as if it were a table name.

trdsql "SELECT * FROM test.csv"

-q filename can execute SQL from a file.

trdsql -q test.sql
4.1. STDIN input

Use "-" or "stdin" as the table name to read from standard input instead of a file.

cat test.csv | trdsql "SELECT * FROM -"

or

cat test.csv | trdsql "SELECT * FROM stdin"
4.2. Multiple files

Multiple matching files can be queried as one table.

$ trdsql -ih "SELECT * FROM tt*.csv"
1,test1
2,test2
3,test3

[!NOTE] It is not possible to mix different formats (ex: CSV and LTSV).

4.3. Compressed files

If the file is compressed with gzip, bz2, zstd, lz4, xz, it will be automatically uncompressed.

trdsql "SELECT * FROM testdata/test.csv.gz"
trdsql "SELECT * FROM testdata/test.csv.zst"

It is possible to mix uncompressed and compressed files using wildcards.

trdsql "SELECT * FROM testdata/test.csv*"
4.4. Output file

The -out filename option outputs the result to a file.

trdsql -out result.csv "SELECT * FROM testdata/test.csv ORDER BY c1"
4.5. Output compression

The -oz compression type option compresses the output.

trdsql -oz gz "SELECT * FROM testdata/test.csv ORDER BY c1" > result.csv.gz
4.6. Guess by output file name

The file name given to the -out filename option determines the output format (csv, ltsv, json, tbln, raw, md, at, vf, jsonl) and the compression format (gzip, bz2, zstd, lz4, xz) by guessing from the extension.

The extension is interpreted as output format + compression (e.g. .csv.gz, .ltsv.lz4, .md.zst ...).

The following outputs an LTSV file compressed with zstd.

trdsql -out result.ltsv.zst "SELECT * FROM testdata/test.csv"
4.7. Columns are not constant

If the number of columns is not constant, read multiple rows ahead (-ir) to determine the columns.

$ trdsql -ir 3 -iltsv "SELECT * FROM test_indefinite.ltsv"
1,Orange,50,,
2,Melon,500,ibaraki,
3,Apple,100,aomori,red
4.8. TSV (Tab Separated Value)

-id "\t" is input from TSV (Tab Separated Values).

1	Orange
2	Melon
3	Apple
trdsql -id "\t" "SELECT * FROM test-tab.csv"

-od "\t" is TSV (Tab Separated Values) output.

$ trdsql -od "\t" "SELECT * FROM test.csv"
1	Orange
2	Melon
3	Apple
4.9. LTSV (Labeled Tab-separated Values)

-iltsv is input from LTSV(Labeled Tab-separated Values).

sample.ltsv

id:1	name:Orange	price:50
id:2	name:Melon	price:500
id:3	name:Apple	price:100
trdsql -iltsv "SELECT * FROM sample.ltsv"
1,Orange,50
2,Melon,500
3,Apple,100

[!NOTE] Only the columns in the first row are targeted.

-oltsv is LTSV(Labeled Tab-separated Values) output.

$ trdsql -iltsv -oltsv "SELECT * FROM sample.ltsv"
id:1	name:Orange	price:50
id:2	name:Melon	price:500
id:3	name:Apple	price:100
4.10. JSON

-ijson is input from JSON.

sample.json

[
  {
    "id": "1",
    "name": "Orange",
    "price": "50"
  },
  {
    "id": "2",
    "name": "Melon",
    "price": "500"
  },
  {
    "id": "3",
    "name": "Apple",
    "price": "100"
  }
]
$ trdsql -ijson "SELECT * FROM sample.json"
1,Orange,50
2,Melon,500
3,Apple,100

JSON can contain structured types, but trdsql stores them as JSON strings as they are.

sample2.json

[
    {
      "id": 1,
      "name": "Drolet",
      "attribute": { "country": "Maldives", "color": "burlywood" }
    },
    {
      "id": 2,
      "name": "Shelly",
      "attribute": { "country": "Yemen", "color": "plum" }
    },
    {
      "id": 3,
      "name": "Tuck",
      "attribute": { "country": "Mayotte", "color": "antiquewhite" }
    }
]
$ trdsql -ijson "SELECT * FROM sample2.json"
1,Drolet,"{""color"":""burlywood"",""country"":""Maldives""}"
2,Shelly,"{""color"":""plum"",""country"":""Yemen""}"
3,Tuck,"{""color"":""antiquewhite"",""country"":""Mayotte""}"

Use SQL functions to access the nested values.

$ trdsql -ijson "SELECT id, name, JSON_EXTRACT(attribute,'$.country'), JSON_EXTRACT(attribute,'$.color') FROM sample2.json"
1,Drolet,Maldives,burlywood
2,Shelly,Yemen,plum
3,Tuck,Mayotte,antiquewhite
4.10.1. jq expression

If the JSON has a hierarchy, you can filter it with a jq expression.

The jq expression is implemented using gojq.

menu.json

{
	"menu": {
		"id": "file",
		"value": "File",
		"popup": {
			"menuitem": [
				{
					"value": "New",
					"onclick": "CreateDoc()"
				},
				{
					"value": "Open",
					"onclick": "OpenDoc()"
				},
				{
					"value": "Save",
					"onclick": "SaveDoc()"
				}
			]
		}
	}
}

You can write a jq expression by appending :: and the expression after the JSON file name. Enclose the jq expression in quotes if needed.

trdsql -oat 'SELECT value, onclick FROM menu.json::".menu.popup.menuitem"'

Or specify with the -ijq option.

$ trdsql -oat -ijq ".menu.popup.menuitem" "SELECT * FROM menu.json"
+-------+-------------+
| value |   onclick   |
+-------+-------------+
| New   | CreateDoc() |
| Open  | OpenDoc()   |
| Save  | SaveDoc()   |
+-------+-------------+

Examples of using jq expressions with trdsql instead of invoking gojq.

$ echo '{"foo": 128}' | trdsql -ijson "SELECT * FROM -::'.foo'"
128
$ echo '{"a": {"b": 42}}' | trdsql -ijson "SELECT * FROM -::'.a.b'"
42
$ echo '{"id": "sample", "10": {"b": 42}}' | trdsql -ijson "SELECT * FROM -::'{(.id): .[\"10\"].b}'"
42
$ echo '[{"id":1},{"id":2},{"id":3}]' | trdsql -ijson "SELECT * FROM -::'.[] | .id'"
1
2
3
$ echo '{"a":1,"b":2}' | trdsql -ijson "SELECT * FROM -::'.a += 1 | .b *= 2'"
2,4
$ echo '{"a":1} [2] 3' | trdsql -ijson "SELECT * FROM -::'. as {\$a} ?// [\$a] ?// \$a | \$a'"
1
2
3
4.11. JSONL(NDJSON)

JSONL (JSON Lines) is another JSON format. It is also called NDJSON.

sample2.json

{"id": "1","name": "Orange","price": "50"}
{"id": "2","name": "Melon","price": "500"}
{"id": "3","name": "Apple","price": "100"}

-ojson is JSON Output.

$ trdsql -ojson "SELECT * FROM test.csv"
[
  {
    "c1": "1",
    "c2": "Orange"
  },
  {
    "c1": "2",
    "c2": "Melon"
  },
  {
    "c1": "3",
    "c2": "Apple"
  }
]

To output in JSONL, specify -ojsonl.

$ trdsql -ojsonl "SELECT * FROM test.csv"
{"c1":"1","c2":"Orange"}
{"c1":"2","c2":"Melon"}
{"c1":"3","c2":"Apple"}
4.12. YAML

-iyaml is input from YAML (or, if the extension is yaml or yml, it is treated as a YAML file).

sample.yaml

- id: 1
  name: Orange
  price: 50
- id: 2
  name: Melon
  price: 500
- id: 3
  name: Apple
  price: 100
$ trdsql -iyaml -ocsv "SELECT * FROM sample.yaml"
1,Orange,50
2,Melon,500
3,Apple,100

Since YAML is internally converted to JSON, it can also be output as JSON.

sample2.yaml

a: true
b:
  c: 2
  d: [3, 4, 5]
  e:
    - name: fred
      value: 3
    - name: sam
      value: 4%
$ trdsql -ojson "SELECT * FROM sample2.yaml"
[
  {
    "a": "true",
    "b": {
      "c": 2,
      "d": [
        3,
        4,
        5
      ],
      "e": [
        {
          "name": "fred",
          "value": 3
        },
        {
          "name": "sam",
          "value": "4%"
        }
      ]
    }
  }
]

In addition, you can also use jq syntax.

$ trdsql  -ojson "SELECT * FROM sample2.yaml::.b.e"
[
  {
    "name": "fred",
    "value": "3"
  },
  {
    "name": "sam",
    "value": "4%"
  }
]

JSON can also be converted to YAML.

$ trdsql -oyaml "SELECT * FROM sample2.yaml::.b.e"
- name: fred
  value: 3
- name: sam
  value: 4%
4.13. TBLN

-itbln is input from TBLN.

sample.tbln

; name: | id | name |
; type: | int | text |
| 1 | Bob |
| 2 | Alice |
$ trdsql -itbln "SELECT * FROM sample.tbln"
1,Bob
2,Alice

The name and type extras of the TBLN file are reflected.

-otbln is TBLN Output.

$ trdsql -otbln "SELECT c1::int as id, c2::text as name FROM test.csv"
; created_at: 2019-03-22T13:20:31+09:00
; name: | id | name |
; type: | int | text |
| 1 | Orange |
| 2 | Melon |
| 3 | Apple |

TBLN can contain column names and type definitions. Please refer to https://tbln.dev/ for details of TBLN.

4.14. WIDTH

-iwidth reads input in a format where columns are separated by width. It is used when the header column widths represent the body column widths.

$ ps | trdsql -oh -iwidth "SELECT * FROM -"
PID,TTY,TIME,CMD
302965,pts/3,00:00:12,zsh
733211,pts/3,00:00:00,ps
733212,pts/3,00:00:00,tee
733213,pts/3,00:00:00,guesswidth

Using -id " " with CSV also works in many cases, but -id " " does not handle spaces within columns very well.

-iwidth recognizes column widths and space separators.

4.15. Raw output

-oraw is raw output. It is used when the escape processing done for CSV output is unnecessary (for example, when outputting JSON from the database).

$ trdsql -oraw "SELECT row_to_json(t,TRUE) FROM test.csv AS t"
{"c1":"1",
 "c2":"Orange"}
{"c1":"2",
 "c2":"Melon"}
{"c1":"3",
 "c2":"Apple"}

Multiple characters can be used as the delimiter for raw output.

$ trdsql -oraw -od "\t|\t" -db pdb "SELECT * FROM test.csv"
1	|	Orange
2	|	Melon
3	|	Apple
4.16. ASCII Table & MarkDown output

-oat is ASCII table output.

$ trdsql -oat "SELECT * FROM test.csv"
+----+--------+
| C1 |   C2   |
+----+--------+
|  1 | Orange |
|  2 | Melon  |
|  3 | Apple  |
+----+--------+

-omd is Markdown output.

$ trdsql -omd "SELECT * FROM test.csv"
| C1 |   C2   |
|----|--------|
|  1 | Orange |
|  2 | Melon  |
|  3 | Apple  |

The -onowrap option does not wrap long columns in at or md output.
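
For example, to keep long columns on one line in ASCII table output:

trdsql -oat -onowrap "SELECT * FROM test.csv"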

4.17. Vertical format output

-ovf is Vertical format output("column name | value" vertically).

$ trdsql -ovf "SELECT * FROM test.csv"
---[ 1]--------------------------------------------------------
  c1 | 1
  c2 | Orange
---[ 2]--------------------------------------------------------
  c1 | 2
  c2 | Melon
---[ 3]--------------------------------------------------------
  c1 | 3
  c2 | Apple

5. SQL

5.1. SQL function
$ trdsql "SELECT count(*) FROM test.csv"
3

The default column names are c1, c2,...

$ trdsql "SELECT c2,c1 FROM test.csv"
Orange,1
Melon,2
Apple,3

[!NOTE] The available functions and their syntax depend on the driver you have chosen (mysql, postgres, or sqlite). The default is sqlite.

5.2. JOIN

The SQL JOIN can be used.

user.csv

1,userA
2,userB

hist.csv

1,2017-7-10
2,2017-7-10
2,2017-7-11
$ trdsql "SELECT u.c1,u.c2,h.c2 FROM user.csv as u LEFT JOIN hist.csv as h ON(u.c1=h.c1)"
1,userA,2017-7-10
2,userB,2017-7-10
2,userB,2017-7-11
5.3. PostgreSQL

When using PostgreSQL, specify postgres as the driver and a driver-specific data source name as the dsn.

trdsql -driver postgres -dsn "dbname=test" "SELECT count(*) FROM test.csv "
5.3.1. Function

The PostgreSQL driver can use window functions.

$ trdsql -driver postgres -dsn "dbname=test" "SELECT row_number() OVER (ORDER BY c2),c1,c2 FROM test.csv"
1,3,Apple
2,2,Melon
3,1,Orange

For example, the generate_series function can be used.

$ trdsql -driver postgres -dsn "dbname=test" "SELECT generate_series(1,3);"
1
2
3
5.3.2. Joining a table and a CSV file is possible

The test database has a colors table.

$ psql test -c "SELECT * FROM colors"
 id |  name  
----+--------
  1 | orange
  2 | green
  3 | red
(3 rows)

Join table and CSV file.

$ trdsql -driver postgres -dsn "dbname=test" "SELECT t.c1,t.c2,c.name FROM test.csv AS t LEFT JOIN colors AS c ON (t.c1::int = c.id)"
1,Orange,orange
2,Melon,green
3,Apple,red

To create a table from a file, use "CREATE TABLE ... AS SELECT...".

trdsql -driver postgres -dsn "dbname=test" "CREATE TABLE fruits (id, name) AS SELECT c1::int, c2 FROM fruits.csv "
$ psql -c "SELECT * FROM fruits;"
 id |  name  
----+--------
  1 | Orange
  2 | Melon
  3 | Apple
(3 rows)
5.4. MySQL

When using MySQL, specify mysql as the driver and connection information as the dsn.

$ trdsql -driver mysql -dsn "user:password@/test" "SELECT GROUP_CONCAT(c2 ORDER BY c2 DESC) FROM testdata/test.csv"
"g,d,a"
$ trdsql -driver mysql -dsn "user:password@/test" "SELECT c1, SHA2(c2,224) FROM test.csv"
1,a063876767f00792bac16d0dac57457fc88863709361a1bb33f13dfb
2,2e7906d37e9523efeefb6fd2bc3be6b3f2991678427bedc296f9ddb6
3,d0b8d1d417a45c7c58202f55cbb617865f1ef72c606f9bce54322802

Like PostgreSQL, MySQL can join tables and CSV files.

5.5. Analyze

The -a filename option parses the file and outputs table information and SQL examples.

$ trdsql -a testdata/header.csv
The table name is testdata/header.csv.
The file type is CSV.

Data types:
+-------------+------+
| column name | type |
+-------------+------+
| id          | text |
| `name`      | text |
+-------------+------+

Data samples:
+----+--------+
| id | `name` |
+----+--------+
|  1 | Orange |
+----+--------+

Examples:
trdsql -db sdb -ih "SELECT id, \`name\` FROM testdata/header.csv"
trdsql -db sdb -ih "SELECT id, \`name\` FROM testdata/header.csv WHERE id = '1'"
trdsql -db sdb -ih "SELECT id, count(id) FROM testdata/header.csv GROUP BY id"
trdsql -db sdb -ih "SELECT id, \`name\` FROM testdata/header.csv ORDER BY id LIMIT 10"

Other options (-id, -ih, -ir, -is, -icsv, -iltsv, -ijson, -itbln, ...) are available.

trdsql -ih -a testdata/header.csv

Similarly, with the -A filename option, only the Examples (SQL) are output.

$ trdsql -ih -A testdata/header.csv
trdsql -ih "SELECT id, \`name\` FROM testdata/header.csv"
trdsql -ih "SELECT id, \`name\` FROM testdata/header.csv WHERE id = '1'"
trdsql -ih "SELECT id, count(id) FROM testdata/header.csv GROUP BY id"
trdsql -ih "SELECT id, \`name\` FROM testdata/header.csv ORDER BY id LIMIT 10"
5.6. Configuration

You can specify driver and dsn in the configuration file.

Unix-like systems.

${HOME}/.config/trdsql/config.json

Windows (example).

C:\Users\{"User"}\AppData\Roaming\trdsql\config.json

Or use the -config file option.

$ trdsql -config config.json "SELECT * FROM test.csv"

sample: config.json

{
  "db": "pdb",
  "database": {
    "sdb": {
      "driver": "sqlite3",
      "dsn": ""
    },
    "pdb": {
      "driver": "postgres",
      "dsn": "user=test dbname=test"
    },
    "mdb": {
      "driver": "mysql",
      "dsn": "user:password@/dbname"
    }
  }
}

The default database is the entry specified by "db".

If you add a setting, you can specify its name with -db.

$ trdsql -debug -db pdb "SELECT * FROM test.csv"
2017/07/18 02:27:47 driver: postgres, dsn: user=test dbname=test
2017/07/18 02:27:47 CREATE TEMPORARY TABLE "test.csv" ( c1 text,c2 text );
2017/07/18 02:27:47 INSERT INTO "test.csv" (c1,c2) VALUES ($1,$2);
2017/07/18 02:27:47 SELECT * FROM "test.csv"
1,Orange
2,Melon
3,Apple

6. Library

Example of use as a library.

package main

import (
        "log"

        "github.com/noborus/trdsql"
)

func main() {
        trd := trdsql.NewTRDSQL(
                trdsql.NewImporter(trdsql.InDelimiter(":")),
                trdsql.NewExporter(trdsql.NewWriter()),
        )
        if err := trd.Exec("SELECT c1 FROM /etc/passwd"); err != nil {
                log.Fatal(err)
        }
}

Please refer to godoc and _example for usage as a library.

7. See also

  • psutilsql - A tool for querying system status in SQL.
  • mdtsql - A tool for querying markdown tables in SQL.
  • xlsxsql - A tool for querying Excel files in SQL.

8. Learn More

9. License

MIT

Please check each license of SQL driver.

Documentation

Overview

Package trdsql implements executing SQL queries on tabular data.

trdsql imports tabular data into a database, executes SQL queries, and exports the results.

Example
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/noborus/trdsql"
)

func main() {
	in := []byte(`"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`)
	tmpfile, err := os.CreateTemp(os.TempDir(), "xxx")
	if err != nil {
		log.Print(err)
		return
	}
	defer func() {
		defer os.Remove(tmpfile.Name())
	}()

	if _, err := tmpfile.Write(in); err != nil {
		log.Print(err)
		return
	}
	trd := trdsql.NewTRDSQL(
		trdsql.NewImporter(),
		trdsql.NewExporter(trdsql.NewWriter()),
	)
	// #nosec G201
	query := fmt.Sprintf("SELECT c1 FROM %s ORDER BY c1", tmpfile.Name())
	if err := trd.Exec(query); err != nil {
		log.Print(err)
		return
	}
}
Output:

Ken
Rob
Robert
Example (Options)
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/noborus/trdsql"
)

func main() {
	in := []byte(`first_name,last_name,username
"Rob","Pike",rob
Ken,Thompson,ken
"Robert","Griesemer","gri"
`)
	tmpfile, err := os.CreateTemp(os.TempDir(), "xxx")
	if err != nil {
		log.Print(err)
		return
	}
	defer func() {
		defer os.Remove(tmpfile.Name())
	}()
	_, err = tmpfile.Write(in)
	if err != nil {
		log.Print(err)
		return
	}

	// NewImporter
	importer := trdsql.NewImporter(
		trdsql.InFormat(trdsql.CSV),
		trdsql.InHeader(true),
	)

	// NewWriter & NewExporter
	writer := trdsql.NewWriter(
		trdsql.OutFormat(trdsql.JSON),
	)
	exporter := trdsql.NewExporter(writer)

	trd := trdsql.NewTRDSQL(importer, exporter)
	// #nosec G201
	query := fmt.Sprintf("SELECT * FROM %s ORDER BY username", tmpfile.Name())
	err = trd.Exec(query)
	if err != nil {
		log.Print(err)
		return
	}
}
Output:

[
  {
    "first_name": "Robert",
    "last_name": "Griesemer",
    "username": "gri"
  },
  {
    "first_name": "Ken",
    "last_name": "Thompson",
    "username": "ken"
  },
  {
    "first_name": "Rob",
    "last_name": "Pike",
    "username": "rob"
  }
]

Constants

const DefaultDBType = "text"

DefaultDBType is the default type.

Variables

var (
	// ErrNoTransaction is returned if SQL is executed when a transaction has not started.
	// SQL must be executed within a transaction.
	ErrNoTransaction = errors.New("transaction has not been started")
	// ErrNilReader is returned by Set if the reader of the specified file is nil.
	ErrNilReader = errors.New("nil reader")
	// ErrInvalidNames is returned by Set if the names are invalid (the number of columns is 0).
	ErrInvalidNames = errors.New("invalid names")
	// ErrInvalidTypes is returned by Set if the column types are invalid (they do not match the number of column names).
	ErrInvalidTypes = errors.New("invalid types")
	// ErrNoStatement is returned when there is no SQL statement.
	ErrNoStatement = errors.New("no SQL statement")
)
var (
	// ErrInvalidColumn is returned if the column is invalid.
	ErrInvalidColumn = errors.New("invalid column")
	// ErrNoReader is returned when there is no reader.
	ErrNoReader = errors.New("no reader")
	// ErrUnknownFormat is returned if the format is unknown.
	ErrUnknownFormat = errors.New("unknown format")
	// ErrNoRows is returned when there are no rows.
	ErrNoRows = errors.New("no rows")
	// ErrUnableConvert is returned if it cannot be converted to a table.
	ErrUnableConvert = errors.New("unable to convert")
	// ErrNoMatchFound is returned if no match is found.
	ErrNoMatchFound = errors.New("no match found")
	// ErrNonDefinition is returned when there is no definition.
	ErrNonDefinition = errors.New("no definition")
	// ErrInvalidJSON is returned when the JSON is invalid.
	ErrInvalidJSON = errors.New("invalid JSON")
	// ErrInvalidYAML is returned when the YAML is invalid.
	ErrInvalidYAML = errors.New("invalid YAML")
)
var AppName = "trdsql"

AppName is used for command names.

var DefaultDriver = "sqlite3"
var Version = `devel`

Version is trdsql version.

Functions

func Analyze added in v0.7.2

func Analyze(fileName string, opts *AnalyzeOpts, readOpts *ReadOpts) error

Analyze analyzes the file and outputs the table information. In addition, SQL execution examples are output.
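
A minimal sketch of calling Analyze from a program (test.csv is an assumed input file; OutStream is set explicitly to make the destination obvious):

package main

import (
	"log"
	"os"

	"github.com/noborus/trdsql"
)

func main() {
	opts := trdsql.NewAnalyzeOpts()
	opts.OutStream = os.Stdout // print the table information and suggested SQL to stdout
	readOpts := trdsql.NewReadOpts(trdsql.InFormat(trdsql.GUESS))
	if err := trdsql.Analyze("test.csv", opts, readOpts); err != nil {
		log.Fatal(err)
	}
}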

func ConvertTypes added in v0.6.0

func ConvertTypes(dbTypes []string) []string

ConvertTypes converts database types to common types.

func EnableDebug added in v0.6.0

func EnableDebug()

EnableDebug enables verbose output for debugging.

func EnableMultipleQueries added in v0.20.0

func EnableMultipleQueries()

EnableMultipleQueries enables multiple queries.

func ImportFile added in v0.6.0

func ImportFile(db *DB, fileName string, readOpts *ReadOpts) (string, error)

ImportFile imports a file and returns the quoted table name and an error. If the file is not found, nothing is imported and no error is returned. Wildcards can be passed as fileName.

func ImportFileContext added in v0.7.9

func ImportFileContext(ctx context.Context, db *DB, fileName string, readOpts *ReadOpts) (string, error)

ImportFileContext imports a file and returns the quoted table name and an error. If the file is not found, nothing is imported and no error is returned. Wildcards can be passed as fileName.

func RegisterReaderFunc added in v0.13.0

func RegisterReaderFunc(ext string, readerFunc ReaderFunc)

func SQLFields added in v0.7.0

func SQLFields(query string) []string

SQLFields returns an array of string fields (interpreting quotes) from the argument query.

func TableNames added in v0.6.3

func TableNames(parsedQuery []string) (map[string]string, []int)

TableNames returns a map of table names that may be tables by a simple SQL parser from the query string of the argument, along with the locations within the parsed query where those table names were found.
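
A small sketch combining SQLFields and TableNames (assuming, as the parameter name parsedQuery suggests, that TableNames takes the token list produced by SQLFields):

package main

import (
	"fmt"

	"github.com/noborus/trdsql"
)

func main() {
	query := "SELECT * FROM test.csv"
	tokens := trdsql.SQLFields(query)         // split the query into fields, respecting quotes
	tables, idxs := trdsql.TableNames(tokens) // candidate table names and their token positions
	fmt.Println(tables, idxs)
}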

func ValString added in v0.6.0

func ValString(v any) string

ValString converts database value to string.

Types

type AnalyzeOpts added in v0.7.2

type AnalyzeOpts struct {
	// Command is string of the execution command.
	Command string
	// Quote is the quote character(s) that varies depending on the sql driver.
	Quote string
	// Detail outputs detailed information.
	Detail bool
	// OutStream is the output destination.
	OutStream io.Writer
}

AnalyzeOpts represents the options for the operation of Analyze.

func NewAnalyzeOpts added in v0.7.2

func NewAnalyzeOpts() *AnalyzeOpts

NewAnalyzeOpts returns AnalyzeOpts.

type BufferImporter added in v0.6.0

type BufferImporter struct {
	Reader
	// contains filtered or unexported fields
}

BufferImporter is a structure that includes tableName and Reader.

Example
package main

import (
	"bytes"
	"log"

	"github.com/noborus/trdsql"
)

func main() {
	jsonString := `
	[
		{
		  "name": "Sarah Carpenter",
		  "gender": "female",
		  "company": "ACCUSAGE",
		  "tags": [
			"veniam",
			"exercitation",
			"nulla",
			"anim",
			"ea",
			"ullamco",
			"ut"
		  ],
		  "greeting": "Hello, Sarah Carpenter! You have 1 unread messages."
		},
		{
		  "name": "Perez Atkinson",
		  "gender": "male",
		  "company": "JOVIOLD",
		  "tags": [
			"minim",
			"adipisicing",
			"ad",
			"occaecat",
			"incididunt",
			"eu",
			"esse"
		  ],
		  "greeting": "Hello, Perez Atkinson! You have 10 unread messages."
		},
		{
		  "name": "Valeria Potts",
		  "gender": "female",
		  "company": "EXOZENT",
		  "tags": [
			"esse",
			"pariatur",
			"nisi",
			"commodo",
			"adipisicing",
			"ut",
			"consectetur"
		  ],
		  "greeting": "Hello, Valeria Potts! You have 8 unread messages."
		}
	  ]
`
	r := bytes.NewBufferString(jsonString)
	importer, err := trdsql.NewBufferImporter("test", r, trdsql.InFormat(trdsql.JSON))
	if err != nil {
		log.Print(err)
		return
	}
	writer := trdsql.NewWriter(
		trdsql.OutFormat(trdsql.CSV),
		trdsql.OutDelimiter("\t"),
	)
	trd := trdsql.NewTRDSQL(importer, trdsql.NewExporter(writer))
	err = trd.Exec("SELECT name,gender,company FROM test")
	if err != nil {
		log.Print(err)
		return
	}
}
Output:

Sarah Carpenter	female	ACCUSAGE
Perez Atkinson	male	JOVIOLD
Valeria Potts	female	EXOZENT

func NewBufferImporter added in v0.6.0

func NewBufferImporter(tableName string, r io.Reader, options ...ReadOpt) (*BufferImporter, error)

NewBufferImporter returns trdsql BufferImporter.

func (*BufferImporter) Import added in v0.6.0

func (i *BufferImporter) Import(db *DB, query string) (string, error)

Import is a method to import from Reader in BufferImporter.

func (*BufferImporter) ImportContext added in v0.7.9

func (i *BufferImporter) ImportContext(ctx context.Context, db *DB, query string) (string, error)

ImportContext is a method to import from Reader in BufferImporter.

type CSVReader added in v0.6.0

type CSVReader struct {
	// contains filtered or unexported fields
}

CSVReader provides methods of the Reader interface.

func NewCSVReader added in v0.6.0

func NewCSVReader(reader io.Reader, opts *ReadOpts) (*CSVReader, error)

NewCSVReader returns CSVReader and error.

func NewPSVReader added in v0.13.0

func NewPSVReader(reader io.Reader, opts *ReadOpts) (*CSVReader, error)

func NewTSVReader added in v0.13.0

func NewTSVReader(reader io.Reader, opts *ReadOpts) (*CSVReader, error)

func (*CSVReader) Names added in v0.6.0

func (r *CSVReader) Names() ([]string, error)

Names returns column names.

func (*CSVReader) PreReadRow added in v0.6.0

func (r *CSVReader) PreReadRow() [][]any

PreReadRow returns only the columns that store preread rows.

func (*CSVReader) ReadRow added in v0.6.0

func (r *CSVReader) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows.

func (*CSVReader) Types added in v0.6.0

func (r *CSVReader) Types() ([]string, error)

Types returns column types. All CSV types return the DefaultDBType.

type CSVWriter added in v0.6.0

type CSVWriter struct {
	// contains filtered or unexported fields
}

CSVWriter provides methods of the Writer interface.

func NewCSVWriter added in v0.6.0

func NewCSVWriter(writeOpts *WriteOpts) *CSVWriter

NewCSVWriter returns CSVWriter.

func (*CSVWriter) PostWrite added in v0.6.0

func (w *CSVWriter) PostWrite() error

PostWrite flushes the output.

func (*CSVWriter) PreWrite added in v0.6.0

func (w *CSVWriter) PreWrite(columns []string, types []string) error

PreWrite outputs the header and prepares for writing.

func (*CSVWriter) WriteRow added in v0.6.0

func (w *CSVWriter) WriteRow(values []any, _ []string) error

WriteRow writes a row.

type DB added in v0.6.0

type DB struct {

	// *sql.DB represents the database connection.
	*sql.DB
	// Tx represents a database transaction.
	Tx *sql.Tx
	// contains filtered or unexported fields
}

DB represents database information.

func Connect

func Connect(driver, dsn string) (*DB, error)

Connect connects to the database. Currently supported drivers are sqlite3, mysql, postgres. Set quote character and maxBulk depending on the driver type.
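
A minimal sketch of the connection API (the configuration example in the README also uses an empty dsn for its sqlite3 entry):

package main

import (
	"fmt"
	"log"

	"github.com/noborus/trdsql"
)

func main() {
	db, err := trdsql.Connect("sqlite3", "")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Disconnect()

	// QuotedName quotes a table (file) name with the driver's quote character.
	fmt.Println(db.QuotedName("test.csv"))
}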

func (*DB) CreateTable added in v0.6.0

func (db *DB) CreateTable(tableName string, columnNames []string, columnTypes []string, isTemporary bool) error

CreateTable creates a (temporary) table in the database. The arguments are the table name, column names, column types, and a temporary flag.

func (*DB) CreateTableContext added in v0.7.9

func (db *DB) CreateTableContext(ctx context.Context, tableName string, columnNames []string, columnTypes []string, isTemporary bool) error

CreateTableContext creates a (temporary) table in the database. The arguments are the table name, column names, column types, and a temporary flag.

func (*DB) Disconnect added in v0.6.0

func (db *DB) Disconnect() error

Disconnect disconnects from the database.

func (*DB) Import added in v0.6.0

func (db *DB) Import(tableName string, columnNames []string, reader Reader) error

Import imports data into a table.

func (*DB) ImportContext added in v0.7.9

func (db *DB) ImportContext(ctx context.Context, tableName string, columnNames []string, reader Reader) error

ImportContext imports data into a table.

func (*DB) OtherExecContext added in v0.20.0

func (db *DB) OtherExecContext(ctx context.Context, query string) error

func (*DB) QuotedName added in v0.7.0

func (db *DB) QuotedName(orgName string) string

QuotedName returns the table name quoted. Returns as is, if already quoted.

func (*DB) Select added in v0.6.0

func (db *DB) Select(query string) (*sql.Rows, error)

Select executes SQL SELECT statements.

func (*DB) SelectContext added in v0.7.9

func (db *DB) SelectContext(ctx context.Context, query string) (*sql.Rows, error)

SelectContext executes SQL SELECT statements with context. SelectContext is a wrapper for QueryContext.

type Exporter added in v0.6.0

type Exporter interface {
	Export(db *DB, sql string) error
	ExportContext(ctx context.Context, db *DB, sql string) error
}

Exporter is the interface for processing query results. Exporter executes SQL and outputs to Writer.

type Format added in v0.6.0

type Format int

Format represents the import/export format.

const (
	// import (guesses for import format).
	GUESS Format = iota

	// import/export
	// Format using go standard CSV library.
	CSV

	// import/export
	// Labeled Tab-separated Values.
	LTSV

	// import/export
	// Format using go standard JSON library.
	JSON

	// import/export
	// TBLN format(https://tbln.dev).
	TBLN

	// import
	// Format using guesswidth library.
	WIDTH

	// export
	// Output as it is.
	// Multiple characters can be selected as delimiter.
	RAW

	// export
	// MarkDown format.
	MD

	// export
	// ASCII Table format.
	AT

	// export
	// Vertical format.
	VF

	// export
	// JSON Lines format(http://jsonlines.org/).
	JSONL

	// import/export
	// YAML format.
	YAML

	// import
	// Tab-Separated Values format. Format using go standard CSV library.
	TSV

	// import
	// Pipe-Separated Values format. Format using go standard CSV library.
	PSV
)

Represents Format.

func OutputFormat added in v0.13.0

func OutputFormat(ext string) Format

OutputFormat returns the format from the extension.

func (Format) String added in v0.6.0

func (f Format) String() string

String returns the string representation of the Format.

type GWReader added in v0.11.0

type GWReader struct {
	// contains filtered or unexported fields
}

GWReader provides methods of the Reader interface.

func NewGWReader added in v0.11.0

func NewGWReader(reader io.Reader, opts *ReadOpts) (*GWReader, error)

NewGWReader returns GWReader and error.

func (*GWReader) Names added in v0.11.0

func (r *GWReader) Names() ([]string, error)

Names returns column names.

func (*GWReader) PreReadRow added in v0.11.0

func (r *GWReader) PreReadRow() [][]any

PreReadRow returns only the columns that store preread rows.

func (*GWReader) ReadRow added in v0.11.0

func (r *GWReader) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows.

func (*GWReader) Types added in v0.11.0

func (r *GWReader) Types() ([]string, error)

Types returns column types. All GW types return the DefaultDBType.

type Importer added in v0.6.0

type Importer interface {
	Import(db *DB, query string) (string, error)
	ImportContext(ctx context.Context, db *DB, query string) (string, error)
}

Importer is the interface that imports data into the database. Importer parses the SQL query to decide which file to import. Therefore, the reader does not receive it directly.

type JSONLWriter added in v0.7.3

type JSONLWriter struct {
	// contains filtered or unexported fields
}

JSONLWriter provides methods of the Writer interface.

func NewJSONLWriter added in v0.7.3

func NewJSONLWriter(writeOpts *WriteOpts) *JSONLWriter

NewJSONLWriter returns JSONLWriter.

func (*JSONLWriter) PostWrite added in v0.7.3

func (w *JSONLWriter) PostWrite() error

PostWrite does nothing.

func (*JSONLWriter) PreWrite added in v0.7.3

func (w *JSONLWriter) PreWrite(columns []string, types []string) error

PreWrite does nothing.

func (*JSONLWriter) WriteRow added in v0.7.3

func (w *JSONLWriter) WriteRow(values []any, columns []string) error

WriteRow writes one JSON Line.

type JSONReader added in v0.6.0

type JSONReader struct {
	// contains filtered or unexported fields
}

JSONReader provides methods of the Reader interface.

func NewJSONReader added in v0.6.0

func NewJSONReader(reader io.Reader, opts *ReadOpts) (*JSONReader, error)

NewJSONReader returns JSONReader and error.

func (*JSONReader) Names added in v0.6.0

func (r *JSONReader) Names() ([]string, error)

Names returns column names.

func (*JSONReader) PreReadRow added in v0.6.0

func (r *JSONReader) PreReadRow() [][]any

PreReadRow returns only the columns that store preRead rows. A single JSON document (not JSONL) returns all rows with preRead.

func (*JSONReader) ReadRow added in v0.6.0

func (r *JSONReader) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows. Only JSONL requires ReadRow in JSON.

func (*JSONReader) Types added in v0.6.0

func (r *JSONReader) Types() ([]string, error)

Types returns column types. All JSON types return the DefaultDBType.

type JSONWriter added in v0.6.0

type JSONWriter struct {
	// contains filtered or unexported fields
}

JSONWriter provides methods of the Writer interface.

func NewJSONWriter added in v0.6.0

func NewJSONWriter(writeOpts *WriteOpts) *JSONWriter

NewJSONWriter returns JSONWriter.

func (*JSONWriter) PostWrite added in v0.6.0

func (w *JSONWriter) PostWrite() error

PostWrite performs the actual output.

func (*JSONWriter) PreWrite added in v0.6.0

func (w *JSONWriter) PreWrite(columns []string, types []string) error

PreWrite prepares the output area.

func (*JSONWriter) WriteRow added in v0.6.0

func (w *JSONWriter) WriteRow(values []any, columns []string) error

WriteRow adds a row to the array.

type LTSVReader added in v0.6.0

type LTSVReader struct {
	// contains filtered or unexported fields
}

LTSVReader provides methods of the Reader interface.

func NewLTSVReader added in v0.6.0

func NewLTSVReader(reader io.Reader, opts *ReadOpts) (*LTSVReader, error)

NewLTSVReader returns LTSVReader and error.

func (*LTSVReader) Names added in v0.6.0

func (r *LTSVReader) Names() ([]string, error)

Names returns column names.

func (*LTSVReader) PreReadRow added in v0.6.0

func (r *LTSVReader) PreReadRow() [][]any

PreReadRow returns only the columns that store preread rows.

func (*LTSVReader) ReadRow added in v0.6.0

func (r *LTSVReader) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows.

func (*LTSVReader) Types added in v0.6.0

func (r *LTSVReader) Types() ([]string, error)

Types returns column types. All LTSV types return the DefaultDBType.

type LTSVWriter added in v0.6.0

type LTSVWriter struct {
	// contains filtered or unexported fields
}

LTSVWriter provides methods of the Writer interface.

func NewLTSVWriter added in v0.6.0

func NewLTSVWriter(writeOpts *WriteOpts) *LTSVWriter

NewLTSVWriter returns LTSVWriter.

func (*LTSVWriter) PostWrite added in v0.6.0

func (w *LTSVWriter) PostWrite() error

PostWrite flushes the output.

func (*LTSVWriter) PreWrite added in v0.6.0

func (w *LTSVWriter) PreWrite(columns []string, types []string) error

PreWrite prepares the output area.

func (*LTSVWriter) WriteRow added in v0.6.0

func (w *LTSVWriter) WriteRow(values []any, labels []string) error

WriteRow writes a row.

type RAWWriter added in v0.6.0

type RAWWriter struct {
	// contains filtered or unexported fields
}

RAWWriter provides methods of the Writer interface.

func NewRAWWriter added in v0.6.0

func NewRAWWriter(writeOpts *WriteOpts) *RAWWriter

NewRAWWriter returns RAWWriter.

func (*RAWWriter) PostWrite added in v0.6.0

func (w *RAWWriter) PostWrite() error

PostWrite flushes the output.

func (*RAWWriter) PreWrite added in v0.6.0

func (w *RAWWriter) PreWrite(columns []string, types []string) error

PreWrite outputs the header and prepares for writing.

func (*RAWWriter) WriteRow added in v0.6.0

func (w *RAWWriter) WriteRow(values []any, _ []string) error

WriteRow writes a row.

type ReadFormat added in v0.6.0

type ReadFormat struct {
	*ReadOpts
}

ReadFormat represents a structure that satisfies the Importer.

func NewImporter added in v0.6.0

func NewImporter(options ...ReadOpt) *ReadFormat

NewImporter returns trdsql default Importer. The argument is an option of Functional Option Pattern.

usage:

trdsql.NewImporter(
	trdsql.InFormat(trdsql.CSV),
	trdsql.InHeader(true),
	trdsql.InDelimiter(";"),
)

func (*ReadFormat) Import added in v0.6.0

func (i *ReadFormat) Import(db *DB, query string) (string, error)

Import parses the SQL statement and imports one or more tables. Import is called from Exec. It returns the rewritten SQL and an error. No error is returned if there is no table to import.

func (*ReadFormat) ImportContext added in v0.7.9

func (i *ReadFormat) ImportContext(ctx context.Context, db *DB, query string) (string, error)

ImportContext parses the SQL statement and imports one or more tables. ImportContext is called from ExecContext. It returns the rewritten SQL and an error. No error is returned if there is no table to import.

type ReadOpt added in v0.6.0

type ReadOpt func(*ReadOpts)

ReadOpt returns a *ReadOpts structure. Used when calling NewImporter.

func InDelimiter added in v0.6.0

func InDelimiter(d string) ReadOpt

InDelimiter is the field delimiter.

func InFormat added in v0.6.0

func InFormat(f Format) ReadOpt

InFormat is read format.

func InHeader added in v0.6.0

func InHeader(h bool) ReadOpt

InHeader is true if there is a header.

func InJQ added in v0.9.0

func InJQ(p string) ReadOpt

InJQ is jq expression.

func InLimitRead added in v0.9.1

func InLimitRead(p bool) ReadOpt

func InNULL added in v0.10.0

func InNULL(s string) ReadOpt

InNULL is a string to replace with NULL.

func InNeedNULL added in v0.10.0

func InNeedNULL(n bool) ReadOpt

InNeedNULL sets a flag as to whether it should be replaced with NULL.

func InPreRead added in v0.6.0

func InPreRead(p int) ReadOpt

InPreRead is number of lines to read ahead.

func InSkip added in v0.6.0

func InSkip(s int) ReadOpt

InSkip is number of lines to skip.

func IsTemporary added in v0.6.0

func IsTemporary(t bool) ReadOpt

IsTemporary is a flag whether to make temporary table.

type ReadOpts added in v0.6.0

type ReadOpts struct {
	// InDelimiter is the field delimiter.
	// default is ','
	InDelimiter string

	// InNULL is a string to replace with NULL.
	InNULL string

	// InJQuery is a jq expression.
	InJQuery string

	// InFormat is read format.
	// The supported format is CSV/LTSV/JSON/TBLN.
	InFormat Format

	// InPreRead is number of rows to read ahead.
	// CSV/LTSV reads the specified number of rows to
	// determine the number of columns.
	InPreRead int

	// InSkip is number of rows to skip.
	// Skip reading specified number of lines.
	InSkip int

	// InLimitRead is limit read.
	InLimitRead bool

	// InHeader is true if there is a header.
	// It is used as a column name.
	InHeader bool
	// InNeedNULL is true, replace InNULL with NULL.
	InNeedNULL bool

	// IsTemporary is a flag whether to make temporary table.
	// default is true.
	IsTemporary bool
	// contains filtered or unexported fields
}

ReadOpts represents options that determine the behavior of the reader.

func GuessOpts added in v0.10.0

func GuessOpts(readOpts *ReadOpts, fileName string) (*ReadOpts, string)

GuessOpts guesses ReadOpts from the file name and sets it.

func NewReadOpts added in v0.6.0

func NewReadOpts(options ...ReadOpt) *ReadOpts

NewReadOpts returns ReadOpts.

type Reader added in v0.6.0

type Reader interface {
	// Names returns column names.
	Names() ([]string, error)
	// Types returns column types.
	Types() ([]string, error)
	// PreReadRow is returns only columns that store preRead rows.
	PreReadRow() [][]any
	// ReadRow is read the rest of the row.
	ReadRow(row []any) ([]any, error)
}

Reader wraps a reader. Reader reads from tabular files.

func NewReader added in v0.6.0

func NewReader(reader io.Reader, readOpts *ReadOpts) (Reader, error)

NewReader returns a Reader interface depending on the file to be imported.

type ReaderFunc added in v0.13.0

type ReaderFunc func(io.Reader, *ReadOpts) (Reader, error)

ReaderFunc is a function that creates a new Reader.
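
A sketch of registering a reader constructor for a hypothetical "TSV2" extension by reusing the CSV reader with a tab delimiter (the extension name and sample.tsv2 are made up; the registered extension is expected to be picked up when the input format is guessed from the file name):

package main

import (
	"io"
	"log"

	"github.com/noborus/trdsql"
)

func main() {
	trdsql.RegisterReaderFunc("TSV2", func(r io.Reader, opts *trdsql.ReadOpts) (trdsql.Reader, error) {
		opts.InDelimiter = "\t" // read the file as tab-separated CSV
		return trdsql.NewCSVReader(r, opts)
	})

	trd := trdsql.NewTRDSQL(trdsql.NewImporter(), trdsql.NewExporter(trdsql.NewWriter()))
	if err := trd.Exec("SELECT * FROM sample.tsv2"); err != nil {
		log.Print(err)
	}
}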

type SliceImporter added in v0.6.0

type SliceImporter struct {
	*SliceReader
}

SliceImporter is a structure that includes SliceReader. SliceImporter can be used as a library from another program. It is not used from the command line. SliceImporter is an importer that reads slice data.

Example
package main

import (
	"log"

	"github.com/noborus/trdsql"
)

func main() {
	data := []struct {
		id   int
		name string
	}{
		{id: 1, name: "Bod"},
		{id: 2, name: "Alice"},
		{id: 3, name: "Henry"},
	}
	tableName := "slice"
	importer := trdsql.NewSliceImporter(tableName, data)
	trd := trdsql.NewTRDSQL(importer, trdsql.NewExporter(trdsql.NewWriter()))

	err := trd.Exec("SELECT name,id FROM slice ORDER BY id DESC")
	if err != nil {
		log.Print(err)
		return
	}
}
Output:

Henry,3
Alice,2
Bod,1

func NewSliceImporter added in v0.6.0

func NewSliceImporter(tableName string, data any) *SliceImporter

NewSliceImporter returns trdsql SliceImporter.

func (*SliceImporter) Import added in v0.6.0

func (i *SliceImporter) Import(db *DB, query string) (string, error)

Import is a method to import from SliceReader in SliceImporter.

func (*SliceImporter) ImportContext added in v0.7.9

func (i *SliceImporter) ImportContext(ctx context.Context, db *DB, query string) (string, error)

ImportContext is a method to import from SliceReader in SliceImporter.

type SliceReader added in v0.6.0

type SliceReader struct {
	// contains filtered or unexported fields
}

SliceReader is a structure for reading tabular data in memory. It can be used as the trdsql reader interface.

func NewSliceReader added in v0.6.0

func NewSliceReader(tableName string, args any) *SliceReader

NewSliceReader takes a tableName and tabular data in memory and returns SliceReader. The tabular data that can be received is a one-dimensional array, a two-dimensional array, a map, and an array of structures.

func (*SliceReader) Names added in v0.6.0

func (r *SliceReader) Names() ([]string, error)

Names returns column names.

func (*SliceReader) PreReadRow added in v0.6.0

func (r *SliceReader) PreReadRow() [][]any

PreReadRow returns the entire data.

func (*SliceReader) ReadRow added in v0.6.0

func (r *SliceReader) ReadRow(row []any) ([]any, error)

ReadRow only returns EOF.

func (*SliceReader) TableName added in v0.6.3

func (r *SliceReader) TableName() (string, error)

TableName returns Table name.

func (*SliceReader) Types added in v0.6.0

func (r *SliceReader) Types() ([]string, error)

Types returns column types.

type SliceWriter added in v0.6.0

type SliceWriter struct {
	Table [][]any
}

SliceWriter is a structure to receive the result in slice.

Example
package main

import (
	"fmt"
	"log"

	"github.com/noborus/trdsql"
)

func main() {
	data := []struct {
		id   int
		name string
	}{
		{id: 1, name: "Bod"},
		{id: 2, name: "Alice"},
		{id: 3, name: "Henry"},
	}
	tableName := "slice"
	importer := trdsql.NewSliceImporter(tableName, data)
	writer := trdsql.NewSliceWriter()
	trd := trdsql.NewTRDSQL(importer, trdsql.NewExporter(writer))

	err := trd.Exec("SELECT name,id FROM slice ORDER BY id DESC")
	if err != nil {
		log.Print(err)
		return
	}
	table := writer.Table
	fmt.Print(table)
}
Output:

[[Henry 3] [Alice 2] [Bod 1]]

func NewSliceWriter added in v0.6.0

func NewSliceWriter() *SliceWriter

NewSliceWriter returns SliceWriter.

func (*SliceWriter) PostWrite added in v0.6.0

func (w *SliceWriter) PostWrite() error

PostWrite does nothing.

func (*SliceWriter) PreWrite added in v0.6.0

func (w *SliceWriter) PreWrite(columns []string, types []string) error

PreWrite prepares the area.

func (*SliceWriter) WriteRow added in v0.6.0

func (w *SliceWriter) WriteRow(values []any, columns []string) error

WriteRow stores the result in Table.

type TBLNRead added in v0.6.0

type TBLNRead struct {
	// contains filtered or unexported fields
}

TBLNRead provides methods of the Reader interface.

func NewTBLNReader added in v0.6.0

func NewTBLNReader(reader io.Reader, opts *ReadOpts) (*TBLNRead, error)

NewTBLNReader returns TBLNRead and error.

func (*TBLNRead) Names added in v0.6.0

func (r *TBLNRead) Names() ([]string, error)

Names returns column names.

func (*TBLNRead) PreReadRow added in v0.6.0

func (r *TBLNRead) PreReadRow() [][]any

PreReadRow returns only the columns that store preread rows.

func (*TBLNRead) ReadRow added in v0.6.0

func (r *TBLNRead) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows.

func (*TBLNRead) Types added in v0.6.0

func (r *TBLNRead) Types() ([]string, error)

Types returns column types.

type TBLNWriter added in v0.6.0

type TBLNWriter struct {
	// contains filtered or unexported fields
}

TBLNWriter provides methods of the Writer interface.

func NewTBLNWriter added in v0.6.0

func NewTBLNWriter(writeOpts *WriteOpts) *TBLNWriter

NewTBLNWriter returns TBLNWriter.

func (*TBLNWriter) PostWrite added in v0.6.0

func (w *TBLNWriter) PostWrite() error

PostWrite does nothing.

func (*TBLNWriter) PreWrite added in v0.6.0

func (w *TBLNWriter) PreWrite(columns []string, types []string) error

PreWrite prepares the TBLN definition body.

func (*TBLNWriter) WriteRow added in v0.6.0

func (w *TBLNWriter) WriteRow(values []any, columns []string) error

WriteRow writes a row.

type TRDSQL

type TRDSQL struct {
	// Importer is interface of processing to
	// import(create/insert) data.
	Importer Importer
	// Exporter is interface export to the process of
	// export(select) from the database.
	Exporter Exporter

	// Driver is database driver name(sqlite3/sqlite/mysql/postgres).
	Driver string
	// Dsn is data source name.
	Dsn string
}

TRDSQL represents DB definition and Importer/Exporter interface.
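
Since Driver and Dsn are exported, they can be set directly to run against another database, a sketch (dbname=test is an assumed PostgreSQL data source name):

trd := trdsql.NewTRDSQL(
	trdsql.NewImporter(),
	trdsql.NewExporter(trdsql.NewWriter()),
)
trd.Driver = "postgres"
trd.Dsn = "dbname=test"
if err := trd.Exec("SELECT count(*) FROM test.csv"); err != nil {
	log.Print(err)
}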

func NewTRDSQL added in v0.6.0

func NewTRDSQL(im Importer, ex Exporter) *TRDSQL

NewTRDSQL returns a new TRDSQL structure.

func (*TRDSQL) Exec added in v0.6.0

func (trd *TRDSQL) Exec(sql string) error

Exec actually executes the SQL.

func (*TRDSQL) ExecContext added in v0.7.9

func (trd *TRDSQL) ExecContext(ctx context.Context, sqlQuery string) error

ExecContext actually executes the SQL with context.

type TWWriter added in v0.6.0

type TWWriter struct {
	// contains filtered or unexported fields
}

TWWriter provides methods of the Writer interface.

func NewTWWriter added in v0.6.0

func NewTWWriter(writeOpts *WriteOpts, markdown bool) *TWWriter

NewTWWriter returns TWWriter.

func (*TWWriter) PostWrite added in v0.6.0

func (w *TWWriter) PostWrite() error

PostWrite performs the actual output.

func (*TWWriter) PreWrite added in v0.6.0

func (w *TWWriter) PreWrite(columns []string, types []string) error

PreWrite is preparation.

func (*TWWriter) WriteRow added in v0.6.0

func (w *TWWriter) WriteRow(values []any, columns []string) error

WriteRow adds a row to the array.

type VFWriter added in v0.6.0

type VFWriter struct {
	// contains filtered or unexported fields
}

VFWriter is Vertical Format output.

func NewVFWriter added in v0.6.0

func NewVFWriter(writeOpts *WriteOpts) *VFWriter

NewVFWriter returns VFWriter.

func (*VFWriter) PostWrite added in v0.6.0

func (w *VFWriter) PostWrite() error

PostWrite flushes the output.

func (*VFWriter) PreWrite added in v0.6.0

func (w *VFWriter) PreWrite(columns []string, types []string) error

PreWrite is preparation.

func (*VFWriter) WriteRow added in v0.6.0

func (w *VFWriter) WriteRow(values []any, columns []string) error

WriteRow performs the actual output.

type WriteFormat added in v0.6.0

type WriteFormat struct {
	Writer
	// contains filtered or unexported fields
}

WriteFormat represents a structure that satisfies Exporter.

func NewExporter added in v0.6.0

func NewExporter(writer Writer) *WriteFormat

NewExporter returns trdsql default Exporter.

func (*WriteFormat) Export added in v0.6.0

func (e *WriteFormat) Export(db *DB, sql string) error

Export executes SQL (SELECT) and the result is written out by the writer. Export is called from Exec.

func (*WriteFormat) ExportContext added in v0.7.9

func (e *WriteFormat) ExportContext(ctx context.Context, db *DB, sqlQuery string) error

ExportContext executes SQL (SELECT) and the result is written out by the writer. ExportContext is called from ExecContext.

type WriteOpt added in v0.6.0

type WriteOpt func(*WriteOpts)

WriteOpt is a function to set WriteOpts.

func ErrStream added in v0.6.0

func ErrStream(w io.Writer) WriteOpt

ErrStream sets the error output destination.

func OutAllQuotes added in v0.7.4

func OutAllQuotes(a bool) WriteOpt

OutAllQuotes sets all quotes.

func OutDelimiter added in v0.6.0

func OutDelimiter(d string) WriteOpt

OutDelimiter sets delimiter.

func OutFormat added in v0.6.0

func OutFormat(f Format) WriteOpt

OutFormat sets Format.

func OutHeader added in v0.6.0

func OutHeader(h bool) WriteOpt

OutHeader sets flag to output header.

func OutNULL added in v0.10.0

func OutNULL(s string) WriteOpt

OutNULL sets the output NULL string.

func OutNeedNULL added in v0.10.0

func OutNeedNULL(n bool) WriteOpt

OutNeedNULL sets a flag to replace NULL.

func OutNoWrap added in v0.7.6

func OutNoWrap(w bool) WriteOpt

OutNoWrap sets a flag to not wrap long columns in the output.

func OutQuote added in v0.7.4

func OutQuote(q string) WriteOpt

OutQuote sets quote.

func OutStream added in v0.6.0

func OutStream(w io.Writer) WriteOpt

OutStream sets the output destination.

func OutUseCRLF added in v0.7.4

func OutUseCRLF(c bool) WriteOpt

OutUseCRLF sets use CRLF.

type WriteOpts added in v0.6.0

type WriteOpts struct {
	// OutStream is the output destination.
	OutStream io.Writer
	// ErrStream is the error output destination.
	ErrStream io.Writer

	// OutDelimiter is the output delimiter (Use only CSV and Raw).
	OutDelimiter string
	// OutQuote is the output quote character (Use only CSV).
	OutQuote string
	// OutNULL is the string to replace NULL with on output.
	OutNULL string
	// OutFormat is the writing format.
	OutFormat Format
	// OutAllQuotes is true to enclose all fields in quotes (Use only CSV).
	OutAllQuotes bool
	// True to use \r\n as the line terminator (Use only CSV).
	OutUseCRLF bool
	// OutHeader is true if it outputs a header(Use only CSV and Raw).
	OutHeader bool
	// OutNoWrap is true, do not wrap long columns(Use only AT and MD).
	OutNoWrap bool
	// OutNeedNULL is true, replace NULL with OutNULL.
	OutNeedNULL bool
	// OutJSONToYAML is true, convert JSON to YAML(Use only YAML).
	OutJSONToYAML bool
}

WriteOpts represents options that determine the behavior of the writer.

type Writer added in v0.6.0

type Writer interface {
	// PreWrite is called first to write.
	// The arguments are a list of column names and a list of type names.
	PreWrite(columns []string, types []string) error
	// WriteRow is row write.
	WriteRow(row []any, columns []string) error
	// PostWrite is called last in the write.
	PostWrite() error
}

Writer is an interface that wraps the Write method that writes from the database to a file. Writer is a group of methods called from Export.
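
A sketch of a custom Writer (the type name rowWriter and its "column=value" output format are made up for illustration); any such implementation can be passed to NewExporter in place of the writer returned by NewWriter:

package main

import (
	"fmt"
	"log"

	"github.com/noborus/trdsql"
)

// rowWriter prints each row as space-separated column=value pairs.
type rowWriter struct {
	columns []string
}

func (w *rowWriter) PreWrite(columns []string, types []string) error {
	w.columns = columns // remember the column names for WriteRow
	return nil
}

func (w *rowWriter) WriteRow(row []any, columns []string) error {
	for i, v := range row {
		fmt.Printf("%s=%s ", w.columns[i], trdsql.ValString(v))
	}
	fmt.Println()
	return nil
}

func (w *rowWriter) PostWrite() error { return nil }

func main() {
	trd := trdsql.NewTRDSQL(
		trdsql.NewImporter(),
		trdsql.NewExporter(&rowWriter{}),
	)
	if err := trd.Exec("SELECT c1, c2 FROM test.csv"); err != nil {
		log.Print(err)
	}
}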

func NewWriter added in v0.6.0

func NewWriter(options ...WriteOpt) Writer

NewWriter returns a Writer interface. The argument is an option of Functional Option Pattern.

usage:

NewWriter(
	trdsql.OutFormat(trdsql.CSV),
	trdsql.OutHeader(true),
	trdsql.OutDelimiter(";"),
)

type YAMLReader added in v0.12.0

type YAMLReader struct {
	// contains filtered or unexported fields
}

YAMLReader provides methods of the Reader interface.

func NewYAMLReader added in v0.12.0

func NewYAMLReader(reader io.Reader, opts *ReadOpts) (*YAMLReader, error)

NewYAMLReader returns YAMLReader and error.

func (*YAMLReader) Names added in v0.12.0

func (r *YAMLReader) Names() ([]string, error)

Names returns column names.

func (*YAMLReader) PreReadRow added in v0.12.0

func (r *YAMLReader) PreReadRow() [][]any

PreReadRow returns only the columns that store preRead rows. A single YAML document (not YAML Lines) returns all rows with preRead.

func (*YAMLReader) ReadRow added in v0.12.0

func (r *YAMLReader) ReadRow(row []any) ([]any, error)

ReadRow reads the rest of the rows. Only YAML Lines requires ReadRow in YAML.

func (*YAMLReader) Types added in v0.12.0

func (r *YAMLReader) Types() ([]string, error)

Types returns column types. All YAML types return the DefaultDBType.

type YAMLWriter added in v0.12.0

type YAMLWriter struct {
	// contains filtered or unexported fields
}

YAMLWriter provides methods of the Writer interface.

func NewYAMLWriter added in v0.12.0

func NewYAMLWriter(writeOpts *WriteOpts) *YAMLWriter

NewYAMLWriter returns YAMLWriter.

func (*YAMLWriter) PostWrite added in v0.12.0

func (w *YAMLWriter) PostWrite() error

PostWrite performs the actual output.

func (*YAMLWriter) PreWrite added in v0.12.0

func (w *YAMLWriter) PreWrite(columns []string, types []string) error

PreWrite prepares the output area.

func (*YAMLWriter) WriteRow added in v0.12.0

func (w *YAMLWriter) WriteRow(values []any, columns []string) error

WriteRow adds a row to the array.

Directories

Path      Synopsis
_example
  buffer    buffer is an example using NewBufferImporter.
  import    import is an example of using a customized import.
  quotecsv  quotecsv is an example of outputting a column quoted in double quotes.
  simple    simple is an example of using trdsql as a library.
  slice     slice is to import data using NewSliceImporter.
  writer    writer is an example of using a customized writer.
cmd
