netquery

package
v1.6.5
Published: Jan 11, 2024 License: AGPL-3.0 Imports: 36 Imported by: 0

Documentation


Constants

const (
	ConnTypeDNS = "dns"
	ConnTypeIP  = "ip"
)

Available connection types as their string representation.

const (
	LiveDatabase    = DatabaseName("main")
	HistoryDatabase = DatabaseName("history")
)

Databases.

const InMemory = "file:inmem.db?mode=memory"

InMemory is the "file path" to open a new in-memory database.

Variables

var ConnectionTypeToString = map[network.ConnectionType]string{…}

ConnectionTypeToString is a lookup map to get the string representation of a network.ConnectionType as used by this package.

var DefaultModule *module

DefaultModule is the default netquery module.

Functions

func VacuumHistory added in v1.4.0

func VacuumHistory(ctx context.Context) (err error)

VacuumHistory rewrites the history database in order to purge deleted records.

Types

type ActiveChartHandler added in v1.5.0

type ActiveChartHandler struct {
	Database *Database
}

ActiveChartHandler handles requests for connection charts.

func (*ActiveChartHandler) ServeHTTP added in v1.5.0

func (ch *ActiveChartHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)

type BandwidthChartHandler added in v1.5.0

type BandwidthChartHandler struct {
	Database *Database
}

BandwidthChartHandler handles requests for bandwidth charts.

func (*BandwidthChartHandler) ServeHTTP added in v1.5.0

func (ch *BandwidthChartHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)

type BandwidthChartRequest added in v1.5.0

type BandwidthChartRequest struct {
	Interval int      `json:"interval"`
	Query    Query    `json:"query"`
	GroupBy  []string `json:"groupBy"`
}

BandwidthChartRequest holds a request for a bandwidth chart.

type BatchExecute added in v1.4.6

type BatchExecute struct {
	ID     string
	SQL    string
	Params map[string]any
	Result *[]map[string]any
}

BatchExecute executes multiple queries in one transaction.
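
A sketch of how a batch is assembled and handed to (*Database).ExecuteBatch; the package path, the table name "connections", and the SQL text are assumptions for illustration (the ended IS NULL / IS NOT NULL distinction follows the comment on the Conn.Active field):

package netqueryexample

import (
	"context"

	"github.com/safing/portmaster/netquery" // module path assumed
)

// runBatch executes two read-only queries in one batch; the results are
// keyed by the ID field of each BatchExecute entry.
func runBatch(ctx context.Context, db *netquery.Database) ([]map[string]any, []map[string]any, error) {
	var active, ended []map[string]any

	batches := []netquery.BatchExecute{
		{
			ID:     "active",
			SQL:    "SELECT COUNT(*) AS total FROM connections WHERE ended IS NULL", // illustrative SQL
			Result: &active,
		},
		{
			ID:     "ended",
			SQL:    "SELECT COUNT(*) AS total FROM connections WHERE ended IS NOT NULL", // illustrative SQL
			Result: &ended,
		},
	}

	if err := db.ExecuteBatch(ctx, batches); err != nil {
		return nil, nil, err
	}
	return active, ended, nil
}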

type BatchQueryHandler added in v1.4.6

type BatchQueryHandler struct {
	IsDevMode func() bool
	Database  *Database
}

BatchQueryHandler implements http.Handler and allows performing SQL queries and aggregate functions on Database in batches.

func (*BatchQueryHandler) ServeHTTP added in v1.4.6

func (batch *BatchQueryHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)

type BatchQueryRequestPayload added in v1.4.6

type BatchQueryRequestPayload map[string]QueryRequestPayload

BatchQueryRequestPayload describes the payload of a batch netquery query. The map key is used in the response to identify the results for each query of the batch request.
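
A sketch of such a payload built with the types from this package; the map keys, the selected fields, and the verdict value are illustrative, while the queried column names follow the Conn sqlite tags:

package netqueryexample

import (
	"encoding/json"

	"github.com/safing/portmaster/netquery" // module path assumed
)

// buildBatch assembles two named queries; the keys "active" and
// "blockedDomains" re-appear in the response to identify each result set.
func buildBatch() ([]byte, error) {
	batch := netquery.BatchQueryRequestPayload{
		"active": {
			Query: netquery.Query{
				"active": []netquery.Matcher{{Equal: true}},
			},
		},
		"blockedDomains": {
			Select: netquery.Selects{{Field: "domain"}},
			Query: netquery.Query{
				"verdict": []netquery.Matcher{{Equal: 3}}, // illustrative verdict value
			},
			GroupBy: []string{"domain"},
		},
	}
	return json.Marshal(batch)
}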

type Conn

type Conn struct {
	// ID is a device-unique identifier for the connection. It is built
	// from network.Connection by hashing the connection ID and the start
	// time. We cannot just use the network.Connection.ID because it is only unique
	// as long as the connection is still active and might be, although unlikely,
	// reused afterwards.
	ID              string            `sqlite:"id,primary"`
	ProfileID       string            `sqlite:"profile"`
	Path            string            `sqlite:"path"`
	Type            string            `sqlite:"type,varchar(8)"`
	External        bool              `sqlite:"external"`
	IPVersion       packet.IPVersion  `sqlite:"ip_version"`
	IPProtocol      packet.IPProtocol `sqlite:"ip_protocol"`
	LocalIP         string            `sqlite:"local_ip"`
	LocalPort       uint16            `sqlite:"local_port"`
	RemoteIP        string            `sqlite:"remote_ip"`
	RemotePort      uint16            `sqlite:"remote_port"`
	Domain          string            `sqlite:"domain"`
	Country         string            `sqlite:"country,varchar(2)"`
	ASN             uint              `sqlite:"asn"`
	ASOwner         string            `sqlite:"as_owner"`
	Latitude        float64           `sqlite:"latitude"`
	Longitude       float64           `sqlite:"longitude"`
	Scope           netutils.IPScope  `sqlite:"scope"`
	WorstVerdict    network.Verdict   `sqlite:"worst_verdict"`
	ActiveVerdict   network.Verdict   `sqlite:"verdict"`
	FirewallVerdict network.Verdict   `sqlite:"firewall_verdict"`
	Started         time.Time         `sqlite:"started,text,time"`
	Ended           *time.Time        `sqlite:"ended,text,time"`
	Tunneled        bool              `sqlite:"tunneled"`
	Encrypted       bool              `sqlite:"encrypted"`
	Internal        bool              `sqlite:"internal"`
	Direction       string            `sqlite:"direction"`
	ExtraData       json.RawMessage   `sqlite:"extra_data"`
	Allowed         *bool             `sqlite:"allowed"`
	ProfileRevision int               `sqlite:"profile_revision"`
	ExitNode        *string           `sqlite:"exit_node"`
	BytesReceived   uint64            `sqlite:"bytes_received,default=0"`
	BytesSent       uint64            `sqlite:"bytes_sent,default=0"`

	// TODO(ppacher): support "NOT" in search query to get rid of the following helper fields
	Active bool `sqlite:"active"` // could use "ended IS NOT NULL" or "ended IS NULL"

	// TODO(ppacher): we need to profile here for "suggestion" support. It would be better to keep a table of profiles in sqlite and use joins here
	ProfileName string `sqlite:"profile_name"`
}

Conn is a network connection that is stored in a SQLite database and accepted by the *Database type of this package. This also defines, using the ./orm package, the table schema and the model that is exposed via the runtime database as well as the query API.

Use ConvertConnection from this package to convert a network.Connection to this representation.

type ConnectionStore

type ConnectionStore interface {
	// Save is called to persist the new or updated connection. It's up to
	// the implementation to figure out whether the operation is an insert
	// or an update.
	// The ID of Conn is unique and can be trusted to never collide with other
	// connections of the same device.
	Save(context.Context, Conn, bool) error

	// MarkAllHistoryConnectionsEnded marks all active connections in the history
	// database as ended NOW.
	MarkAllHistoryConnectionsEnded(context.Context) error

	// RemoveAllHistoryData removes all connections from the history database.
	RemoveAllHistoryData(context.Context) error

	// RemoveHistoryForProfile removes all connections for a given
	// profile ID (source/id) from the history database.
	RemoveHistoryForProfile(context.Context, string) error

	// UpdateBandwidth updates bandwidth data for the connection and optionally also writes
	// the bandwidth data to the history database.
	UpdateBandwidth(ctx context.Context, enableHistory bool, profileKey string, processKey string, connID string, bytesReceived uint64, bytesSent uint64) error

	// CleanupHistory deletes data outside of the retention time frame from the history database.
	CleanupHistory(ctx context.Context) error

	// Close closes the connection store. It must not be used afterwards.
	Close() error
}

ConnectionStore describes the interface that is used by Manager to save new or updated connection objects. It is implemented by the *Database type of this package.
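
Because *Database is documented to implement this interface, the usual compile-time assertion (shown as a sketch, assuming the netquery import) is:

// Compile-time check that *Database satisfies ConnectionStore.
var _ netquery.ConnectionStore = (*netquery.Database)(nil)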

type Count

type Count struct {
	As       string `json:"as"`
	Field    string `json:"field"`
	Distinct bool   `json:"distinct"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type Database

type Database struct {
	Schema *orm.TableSchema
	// contains filtered or unexported fields
}

Database represents a SQLite3 backed connection database. Its use is tailored for persistence and querying of network.Connection. Access to the underlying SQLite database is synchronized.

func New

func New(dbPath string) (*Database, error)

New opens a new in-memory database named path and attaches a persistent history database.

The returned Database uses connection pooling for read-only connections (see Execute). To perform database writes use either Save() or ExecuteWrite(). Note that write connections are serialized by the Database object before being handed over to SQLite.

func NewInMemory

func NewInMemory() (*Database, error)

NewInMemory is like New but creates a new in-memory database and automatically applies the connection table schema.
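
A minimal end-to-end sketch of the read/write split described above; the module path, the connection ID, and the history flag are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/safing/portmaster/netquery" // module path assumed
)

func main() {
	ctx := context.Background()

	db, err := netquery.NewInMemory()
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Writes go through Save (or ExecuteWrite); both use the serialized
	// write connection rather than the read-only pool.
	conn := netquery.Conn{
		ID:      "example-conn-1", // hypothetical connection ID
		Type:    netquery.ConnTypeIP,
		Started: time.Now(),
	}
	if err := db.Save(ctx, conn, false /* enableHistory */); err != nil {
		panic(err)
	}

	// Reads go through the read-only connection pool.
	count, err := db.CountRows(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("rows:", count)
}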

func (*Database) ApplyMigrations

func (db *Database) ApplyMigrations() error

ApplyMigrations applies any table and data migrations that are needed to bring db up-to-date with the built-in schema. TODO(ppacher): right now this only applies the current schema and ignores any data-migrations. Once the history module is implemented this should become/use a full migration system -- use zombiezen.com/go/sqlite/sqlitemigration.

func (*Database) Cleanup

func (db *Database) Cleanup(ctx context.Context, threshold time.Time) (int, error)

Cleanup removes all connections that have ended before threshold from the live database.

NOTE(ppacher): there is no easy way to get the number of removed rows other than counting them in a first step. Though, that's probably not worth the cycles...
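
For example, a periodic task could purge connections that ended more than an hour ago; a sketch, with an arbitrary retention window and the usual context, log, time, and netquery imports assumed:

// purgeEnded removes live-database connections that ended before the
// (arbitrarily chosen) one-hour threshold.
func purgeEnded(ctx context.Context, db *netquery.Database) error {
	removed, err := db.Cleanup(ctx, time.Now().Add(-time.Hour))
	if err != nil {
		return err
	}
	log.Printf("netquery: removed %d ended connections", removed)
	return nil
}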

func (*Database) CleanupHistory added in v1.4.0

func (db *Database) CleanupHistory(ctx context.Context) error

CleanupHistory deletes history data outside of the (per-app) retention time frame.

func (*Database) Close

func (db *Database) Close() error

Close closes the database, including pools and connections.

func (*Database) CountRows

func (db *Database) CountRows(ctx context.Context) (int, error)

CountRows returns the number of rows stored in the database.

func (*Database) Execute

func (db *Database) Execute(ctx context.Context, sql string, args ...orm.QueryOption) error

Execute executes a custom SQL query using a read-only connection against the SQLite database used by db. It uses orm.RunQuery() under the hood so please refer to the orm package for more information about available options.

func (*Database) ExecuteBatch added in v1.4.6

func (db *Database) ExecuteBatch(ctx context.Context, batches []BatchExecute) error

ExecuteBatch executes multiple custom SQL queries using a read-only connection against the SQLite database used by db.

func (*Database) ExecuteWrite added in v0.9.1

func (db *Database) ExecuteWrite(ctx context.Context, sql string, args ...orm.QueryOption) error

ExecuteWrite executes a custom SQL query using a writable connection against the SQLite database used by db. It uses orm.RunQuery() under the hood so please refer to the orm package for more information about available options.

func (*Database) MarkAllHistoryConnectionsEnded added in v1.3.0

func (db *Database) MarkAllHistoryConnectionsEnded(ctx context.Context) error

MarkAllHistoryConnectionsEnded marks all connections in the history database as ended.

func (*Database) MigrateProfileID added in v1.6.0

func (db *Database) MigrateProfileID(ctx context.Context, from string, to string) error

MigrateProfileID migrates the given profile IDs in the history database. This needs to be done when profiles are deleted and replaced by a different profile.

func (*Database) RemoveAllHistoryData added in v1.3.0

func (db *Database) RemoveAllHistoryData(ctx context.Context) error

RemoveAllHistoryData removes all connections from the history database.

func (*Database) RemoveHistoryForProfile added in v1.3.0

func (db *Database) RemoveHistoryForProfile(ctx context.Context, profileID string) error

RemoveHistoryForProfile removes all connections from the history database for a given profile ID (source/id).

func (*Database) Save

func (db *Database) Save(ctx context.Context, conn Conn, enableHistory bool) error

Save inserts the connection conn into the SQLite database. If conn already exists, the table row is updated instead.

Save uses the database write connection instead of relying on the connection pool.

func (*Database) UpdateBandwidth added in v1.3.0

func (db *Database) UpdateBandwidth(ctx context.Context, enableHistory bool, profileKey string, processKey string, connID string, bytesReceived uint64, bytesSent uint64) error

UpdateBandwidth updates bandwidth data for the connection and optionally also writes the bandwidth data to the history database.

type DatabaseName added in v1.3.0

type DatabaseName string

DatabaseName is a database name constant.

type Equal

type Equal interface{}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type FieldSelect added in v1.5.0

type FieldSelect struct {
	Field string `json:"field"`
	As    string `json:"as"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type Manager

type Manager struct {
	// contains filtered or unexported fields
}

Manager handles feeds of new and updated network.Connections and persists them in a connection store. Manager also registers itself as a runtime database and pushes updates to connections using the local format. Users should use this update feed rather than the deprecated "network:" database.

func NewManager

func NewManager(store ConnectionStore, pushPrefix string, reg *runtime.Registry) (*Manager, error)

NewManager returns a new connection manager that persists all newly created or updated connections at store.

func (*Manager) HandleFeed

func (mng *Manager) HandleFeed(ctx context.Context, feed <-chan *network.Connection)

HandleFeed starts reading new and updated connections from feed and persists them in the configured ConnectionStore. HandleFeed blocks until either ctx is cancelled or feed is closed. Any errors encountered when processing new or updated connections are logged but otherwise ignored. HandleFeed handles and persists updates one after the other! Depending on the system load, the user might want to use a buffered channel for feed.
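
A wiring sketch showing how a caller might connect a Database to a Manager and feed it connections; the module paths, push prefix, and channel buffer size are assumptions, and obtaining the runtime registry and the producer of connection updates is left out:

package netqueryexample

import (
	"context"

	"github.com/safing/portbase/runtime"    // module paths assumed
	"github.com/safing/portmaster/netquery"
	"github.com/safing/portmaster/network"
)

// runFeed persists connection updates from feed until ctx is cancelled.
func runFeed(ctx context.Context, reg *runtime.Registry) error {
	db, err := netquery.NewInMemory()
	if err != nil {
		return err
	}
	defer db.Close()

	// "netquery/updates/" is a hypothetical push prefix.
	mng, err := netquery.NewManager(db, "netquery/updates/", reg)
	if err != nil {
		return err
	}

	// Buffered so that slow persistence does not immediately block producers.
	feed := make(chan *network.Connection, 1024)

	// HandleFeed blocks until ctx is cancelled or feed is closed, so run it
	// on its own goroutine and hand feed to whatever produces connection
	// updates (not shown here).
	go mng.HandleFeed(ctx, feed)

	<-ctx.Done()
	close(feed)
	return nil
}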

type MatchType

type MatchType interface {
	Operator() string
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type Matcher

type Matcher struct {
	Equal          interface{}   `json:"$eq,omitempty"`
	NotEqual       interface{}   `json:"$ne,omitempty"`
	In             []interface{} `json:"$in,omitempty"`
	NotIn          []interface{} `json:"$notIn,omitempty"`
	Like           string        `json:"$like,omitempty"`
	Greater        *float64      `json:"$gt,omitempty"`
	GreaterOrEqual *float64      `json:"$ge,omitempty"`
	Less           *float64      `json:"$lt,omitempty"`
	LessOrEqual    *float64      `json:"$le,omitempty"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.
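
A sketch of how these matchers compose into a Query and what they encode to on the wire; the column names come from the Conn sqlite tags, while the concrete values are illustrative:

package netqueryexample

import (
	"encoding/json"

	"github.com/safing/portmaster/netquery" // module path assumed
)

// buildQuery combines an exact match, a LIKE pattern, and a numeric
// comparison into a single Query.
func buildQuery() ([]byte, error) {
	gt := float64(1024)

	q := netquery.Query{
		// exact match on the two-letter country code
		"country": []netquery.Matcher{{Equal: "AT"}},
		// wildcard match on the domain column
		"domain": []netquery.Matcher{{Like: "%.example.com"}},
		// numeric comparison on received bytes
		"bytes_received": []netquery.Matcher{{Greater: &gt}},
	}

	return json.Marshal(q)
	// Encodes roughly to:
	// {"bytes_received":[{"$gt":1024}],"country":[{"$eq":"AT"}],"domain":[{"$like":"%.example.com"}]}
}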

func (Matcher) Validate

func (match Matcher) Validate() error

Validate validates the matcher.

type Min added in v1.3.0

type Min struct {
	Condition *Query `json:"condition,omitempty"`
	Field     string `json:"field"`
	As        string `json:"as"`
	Distinct  bool   `json:"distinct"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type OrderBy

type OrderBy struct {
	Field string `json:"field"`
	Desc  bool   `json:"desc"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

func (*OrderBy) UnmarshalJSON

func (orderBy *OrderBy) UnmarshalJSON(blob []byte) error

UnmarshalJSON unmarshals an OrderBy from json.

type OrderBys

type OrderBys []OrderBy

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

func (*OrderBys) UnmarshalJSON

func (orderBys *OrderBys) UnmarshalJSON(blob []byte) error

UnmarshalJSON unmarshals an OrderBys from json.

type Pagination

type Pagination struct {
	PageSize int `json:"pageSize"`
	Page     int `json:"page"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type Query

type Query map[string][]Matcher

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

func (*Query) UnmarshalJSON

func (query *Query) UnmarshalJSON(blob []byte) error

UnmarshalJSON unmarshals a Query from json.

type QueryActiveConnectionChartPayload

type QueryActiveConnectionChartPayload struct {
	Query      Query       `json:"query"`
	TextSearch *TextSearch `json:"textSearch"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type QueryHandler

type QueryHandler struct {
	IsDevMode func() bool
	Database  *Database
}

QueryHandler implements http.Handler and allows performing SQL queries and aggregate functions on Database.

func (*QueryHandler) ServeHTTP

func (qh *QueryHandler) ServeHTTP(resp http.ResponseWriter, req *http.Request)

type QueryRequestPayload

type QueryRequestPayload struct {
	Select     Selects     `json:"select"`
	Query      Query       `json:"query"`
	OrderBy    OrderBys    `json:"orderBy"`
	GroupBy    []string    `json:"groupBy"`
	TextSearch *TextSearch `json:"textSearch"`
	// A list of databases to query. If left empty,
	// both the LiveDatabase and the HistoryDatabase are queried.
	Databases []DatabaseName `json:"databases"`

	Pagination
	// contains filtered or unexported fields
}

QueryRequestPayload describes the payload of a netquery query.
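
Putting the pieces together, a full query payload might be constructed as sketched below; the selected fields, grouping, ordering, and page size are arbitrary illustrative choices:

package netqueryexample

import (
	"encoding/json"

	"github.com/safing/portmaster/netquery" // module path assumed
)

// buildRequest groups blocked connections by domain, counts them, and
// orders the result by that count.
func buildRequest() ([]byte, error) {
	payload := netquery.QueryRequestPayload{
		Select: netquery.Selects{
			{Field: "domain"},
			{Count: &netquery.Count{As: "total", Field: "*"}},
		},
		Query: netquery.Query{
			"allowed": []netquery.Matcher{{Equal: false}},
		},
		GroupBy: []string{"domain"},
		OrderBy: netquery.OrderBys{
			{Field: "total", Desc: true},
		},
		// Query only the live database; leaving this empty would query
		// both the live and the history database.
		Databases: []netquery.DatabaseName{netquery.LiveDatabase},
		Pagination: netquery.Pagination{
			Page:     0,
			PageSize: 25,
		},
	}

	return json.Marshal(payload)
}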

type RuntimeQueryRunner

type RuntimeQueryRunner struct {
	// contains filtered or unexported fields
}

RuntimeQueryRunner provides a simple interface for the runtime database that allows direct SQL queries to be performed against db. Each resulting row of that query is marshaled as map[string]interface{} and returned as a single record to the caller.

Using portbase/database#Query is not possible because portbase/database will complain about the SQL query being invalid. To work around that issue, RuntimeQueryRunner uses a 'GET key' request where the SQL query is embedded into the record key.

func NewRuntimeQueryRunner

func NewRuntimeQueryRunner(db *Database, prefix string, reg *runtime.Registry) (*RuntimeQueryRunner, error)

NewRuntimeQueryRunner returns a new runtime SQL query runner that parses and serves SQL queries from GET <prefix>/<plain sql query> requests.

type Select

type Select struct {
	Field       string       `json:"field"`
	FieldSelect *FieldSelect `json:"$field"`
	Count       *Count       `json:"$count,omitempty"`
	Sum         *Sum         `json:"$sum,omitempty"`
	Min         *Min         `json:"$min,omitempty"`
	Distinct    *string      `json:"$distinct,omitempty"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

func (*Select) UnmarshalJSON

func (sel *Select) UnmarshalJSON(blob []byte) error

UnmarshalJSON unmarshals a Select from json.

type Selects

type Selects []Select

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

func (*Selects) UnmarshalJSON

func (sel *Selects) UnmarshalJSON(blob []byte) error

UnmarshalJSON unmarshals a Selects from json.

type Sum

type Sum struct {
	Condition Query  `json:"condition"`
	Field     string `json:"field"`
	As        string `json:"as"`
	Distinct  bool   `json:"distinct"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

type TextSearch

type TextSearch struct {
	Fields []string `json:"fields"`
	Value  string   `json:"value"`
}

Collection of Query and Matcher types. NOTE: whenever adding support for new operators make sure to update UnmarshalJSON as well.

