odam

Published: Aug 27, 2021 License: Apache-2.0 Imports: 21 Imported by: 2

README

ODaM - Object Detection and Monitoring


v0.8.3

ODaM is a project aimed at monitoring tasks such as: pedestrian detection and counting, vehicle detection and counting, speed estimation of objects, and sending detected objects to a gRPC server for detailed analysis.

It's written in Go with a lot of CGO.

  • YOLOv4 + Kalman filter for tracking
  • YOLOv4 + simple centroid tracking
  • YOLOv4 Tiny + Kalman filter for tracking
  • YOLOv4 Tiny + simple centroid tracking

Work in progress

We are working on this.

Not too fast, but it is what it is.

About

ODaM is a tool for monitoring via Darknet's neural network YOLOv4 (paper: https://arxiv.org/abs/2004.10934).

It's built on top of go-darknet, which uses AlexeyAB's fork of Darknet. For computer vision operations and video reading, GoCV is used.

Q&A section

Who are you and what do you do?

There is info about me here: https://github.com/LdDl

You can chat with me via Telegram/Gmail

Is this a library, software, or even a framework?

I think about it as software with library capabilities.

What is it capable of?

Not that much currently:

  • Object detection via darknet: both YOLOv3 and YOLOv4 (thanks to Go bindings for it)
  • Object tracking via two possible techniques: Kalman tracking (filtering) or Centroid tracking;
  • Sending data to dedicated gRPC server;
  • MJPEG / imshow optional visual output;
  • Speed estimation based on GIS calculations (via matching pixels to WGS84).
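The centroid-tracking option above can be illustrated with a minimal sketch (illustrative types and names, not the actual gocv-blob API): each detection is matched to the nearest existing track centroid within a pixel threshold, otherwise a new track ID is issued.

```go
package main

import (
	"fmt"
	"math"
)

// Pt is an illustrative 2D point (the real project uses image.Point / gocv types).
type Pt struct{ X, Y float64 }

func dist(a, b Pt) float64 {
	return math.Hypot(a.X-b.X, a.Y-b.Y)
}

// CentroidTracker assigns each detection to the nearest tracked centroid.
type CentroidTracker struct {
	nextID  int
	Objects map[int]Pt
	MaxDist float64 // matching threshold in pixels
}

func NewCentroidTracker(maxDist float64) *CentroidTracker {
	return &CentroidTracker{Objects: map[int]Pt{}, MaxDist: maxDist}
}

// Update matches detections to existing objects and returns the assigned IDs.
func (t *CentroidTracker) Update(detections []Pt) []int {
	ids := make([]int, len(detections))
	for i, d := range detections {
		bestID, bestDist := -1, t.MaxDist
		for id, c := range t.Objects {
			if dd := dist(d, c); dd < bestDist {
				bestID, bestDist = id, dd
			}
		}
		if bestID < 0 { // no centroid close enough: start a new track
			bestID = t.nextID
			t.nextID++
		}
		t.Objects[bestID] = d
		ids[i] = bestID
	}
	return ids
}

func main() {
	tr := NewCentroidTracker(50)
	fmt.Println(tr.Update([]Pt{{100, 100}})) // [0]
	fmt.Println(tr.Update([]Pt{{105, 102}})) // [0] - same object, moved slightly
	fmt.Println(tr.Update([]Pt{{400, 300}})) // [1] - too far away: new track
}
```

Kalman tracking adds motion prediction on top of this nearest-neighbour matching, which is why it handles occlusions and crossings better.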

Why Go?

Well, C++ is a killer in the computer vision field, and Python has great batteries-included bindings for C++ code.

But I do not think that I'm ready to build gRPC/REST or any other web components of this software in C++ or Python (C++ is not that easy and Python... I just don't like Python syntax). That's why I prefer to stick with Go.

Why did you pick JSON for configuration purposes instead of TOML/YAML/INI or any other well-suited formats?

  1. Compared to TOML, JSON is not that 'human friendly', but still readable.
  2. It is in Go's standard library.
  3. Well, it is in Go's standard library.
  4. You got the idea.

Why bindings to Darknet instead of Opencv included stuff?

Sometimes you just do not need a full OpenCV installation for object detection. I have such an ANPR project here: https://github.com/LdDl/license_plate_recognition. I guess when I'm done with a stable core I might switch from Go's Darknet bindings to OpenCV's (since the ODaM project obviously requires an OpenCV installation anyway).

What are your plans?

There is ROADMAP.md, but overall I am planning to extend capabilities of software:

  • Improve performance
  • Implement some cool tracking techniques (e.g. SORT)
  • Build a gRPC-accepting microservice enabling the software to receive information from external devices/systems/microservices. E.g. you want to send the message 'there is a red light on the traffic light' to an instance of the software; it would look like grpcServer.Send('there is a red light on the traffic light'). After that, any captured object will carry the message above in its state, so you can catch traffic offenders.
  • Introduce convex polygon based calculations (same as virtual lines but for polygons)

How to help you?

If you are here, then you have already helped a lot, since you noticed my existence :harold_face:

If you want to make a PR for some undone features (algorithms mainly), I'll be glad to take a look.

Installation

notice: targeted at Linux users (no Windows/OSX instructions currently)

You need to enable CUDA (GPU) support in every installation step where it's possible.

  1. Install CUDA (we recommend version 10.2)

    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
    sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
    wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
    sudo dpkg -i cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
    sudo apt-key add /var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
    sudo apt-get update
    sudo apt-get -y install cuda
    echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
    source ~/.bashrc
    
  2. Install cuDNN (we recommend version v7.6.5 (November 18th, 2019), for CUDA 10.2) Go to NVIDIA's site and download *.deb package. After downloading *.deb package install it:

    sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
    sudo dpkg -i libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
    sudo dpkg -i libcudnn7-doc_7.6.5.32-1+cuda10.2_amd64.deb
    

    Do not forget to check if cuDNN installed properly:

    cp -r /usr/src/cudnn_samples_v7/ $HOME
    cd  $HOME/cudnn_samples_v7/mnistCUDNN
    make clean && make
    ./mnistCUDNN
    cd -
    
  3. Install AlexeyAb's fork of Darknet

    git clone https://github.com/AlexeyAB/darknet
    cd ./darknet
    # Checkout to last battle-tested commit
    git checkout f056fc3b6a11528fa0522a468eca1e909b7004b7
    # Enable GPU acceleration (edit the Makefile in place)
    sed -i 's/GPU=0/GPU=1/' ./Makefile
    # Enable cuDNN
    sed -i 's/CUDNN=0/CUDNN=1/' ./Makefile
    # Prepare *.so
    sed -i 's/LIBSO=0/LIBSO=1/' ./Makefile
    make
    # Copy *.so to /usr/lib + /usr/include (or /usr/local/lib + /usr/local/include)
    sudo cp libdarknet.so /usr/lib/libdarknet.so && sudo cp include/darknet.h /usr/include/darknet.h
    # sudo cp libdarknet.so /usr/local/lib/libdarknet.so && sudo cp include/darknet.h /usr/local/include/darknet.h
    

    Alternatively you can use Makefile from go-darknet repository: https://github.com/LdDl/go-darknet/blob/master/Makefile

  4. Go bindings for Darknet - instructions link

  5. GoCV - instructions link.

  6. Blob tracking library - instructions link

  7. If you want to use gRPC client-server model: gRPC - instructions link

    You need to implement your gRPC server as following proto-file: https://github.com/LdDl/odam/blob/master/yolo_grpc.proto.

    If you need to rebuild the *.pb.go file, run this from the project root folder:

    protoc -I . yolo_grpc.proto --go_out=plugins=grpc:.
    

    In my case I need to detect license plates on vehicles and do OCR on the server side: take a look at https://github.com/LdDl/license_plate_recognition for a gRPC server example

After the steps above are done:

go install github.com/LdDl/odam/cmd/odam

Check that the executable is available

odam -h

and you will see something like this:

Usage of ./odam:
-settings string
        Path to application's settings (default "conf.json")

Usage

notice: targeted at Linux users (no Windows/OSX instructions currently)
  • Prepare neural network stuff
    • Download YOLO's weights, configuration file and *.names file. Your way may vary, but here is our script: download_data.sh
      ./download_data_v4.sh
      
    • Make sure there is link to *.names file in YOLO's configuration file:
      [yolo]
      mask = 0,1,2
      anchors = 10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
      classes=80
      num=9
      jitter=.3
      ignore_thresh = .7
      truth_thresh = 1
      random=1
      names = coco.names # <<========= here is the link to 'coco.names' file
      
  • Prepare configuration file for application. Example of file: conf.json. Description of fields:
{
    "video_settings": { # Video input settings
        "source": "rtsp://127.0.0.1:554/h264", # Link to RTSP stream
        "width": 1920, # Width of image in video source
        "height": 1080, # Height of image in video source
        "reduced_width": 640, # Desired width of image (for imshow and MJPEG streaming, also reduces inference time (processing > accuracy) for neural network)
        "reduced_height": 360, # Desired height of image (for imshow and MJPEG streaming, also reduces inference time (processing > accuracy) for neural network)
        "camera_id": "f2abe45e-aad8-40a2-a3b7-0c610c0f3dda" # Unique ID for video source (useful for 'client-server' model)
    },
    "neural_network_settings": { # YOLO neural network settings
        "darknet_cfg": "yolov3.cfg", # Path to configuration file
        "darknet_weights": "yolov3.weights", # Path to weights file
        "darknet_classes": "coco.names", # Path to *.names file (labels of objects)
        "conf_threshold": 0.2, # Confidence threshold
        "nms_threshold": 0.4, # NMS threshold (postprocessing)
        "target_classes": ["car", "motorbike", "bus", "train", "truck"] # What classes you want to detect (if you want to use public dataset, but ignore some classes)
    },
    "cuda_settings":{ # CUDA settings, currently useless
        "enable": true # CUDA settings, currently useless
    },
    "mjpeg_settings":{ # MJPEG streaming settings
        "imshow_enable": false, # Do you want to enable imshow() feature (useful for testing purposes)
        "enable": true, # Do you want to enable this feature?
        "port": 35678 # Listening port for connections
    },
    "grpc_settings": { # gRPC 'client-server' model settings
        "enable": true, # Do you want to enable this feature?
        "server_ip": "localhost", # gRPC server's IP
        "server_port": 50051 # gRPC server's listening port
    },
    "classes_settings": [ # classes settings (according to 'target_classes' in 'neural_network_settings')
        {
            "class_name": "car", # Corresponding class label
            "drawing_settings": {
                "bbox_settings": { # Setting for bounding boxes (detected objects)
                    "rgba": [255, 255, 0, 0], # Color of bounding box border
                    "thickness": 2 # Thickness as is
                },
                "centroid_settings": { # Setting for centroid of bounding boxes
                    "rgba": [255, 0, 0, 0], # Color of circle
                    "radius": 4, # Radius of circle
                    "thickness": 2 # Thickness as is
                },
                "text_settings": { # Setting for text above bounding boxes
                    "rgba": [0, 255, 0, 0], # Text color
                    "scale": 0.5, # Size of text
                    "thickness": 1, # Thickness as is
                    "font": "hershey_simplex" # Text font
                },
                "display_object_id": true # If you want to display object identifier
            }
        },
        {
            "class_name": "motorbike", # see "car" ref.
            "drawing_settings": {} # if property is empty, then default values are used
        },
        {
            "class_name": "bus", # see "car" ref.
            "drawing_settings": {} # if property is empty, then default values are used
        },
        {
            "class_name": "train", # see "car" ref.
            "drawing_settings": {} # if property is empty, then default values are used
        },
        {
            "class_name": "truck", # see "car" ref.
            "drawing_settings": {} # if property is empty, then default values are used
        }
    ],
    "tracker_settings": { # Tracker settings
        "tracker_type": "simple/kalman", # Use one of the supported trackers. The simple tracker fits really simple scenes, while Kalman should be used for complicated scenes.
        "max_points_in_track": 150, # Restriction for maximum points in single track (>=1). Default value 10 (in case of value less than 1)
        "lines_settings":[
            {
                "line_id": 1, # Unique line ID (useful for 'client-server' model)
                "begin": [150, 800], # [X1,Y1], start point of line (usually, left side)
                "end": [1600, 800], # [X2,Y2], end point of line (usually, right side)
                "direction": "to_detector", # Direction of line (possible values: 'to_detector' and 'from_detector')
                "detect_classes": ["car", "motorbike", "bus", "train", "truck"], # What classes must be cropped (as detected objects) that were captured by detection line.
                "rgba": [255, 0, 0, 0], # Color of detection line
                "crop_mode": "crop" # When 'grpc_settings' field 'enable' is set to TRUE this option will be used for sending either cropped detected object (bbox==crop) or full image with bbox info to gRPC server-side application. Default is 'crop'
            }
        ],
        "speed_estimation_settings": { # Settings for speed estimation based on GIS conversion between different spatial reference systems
            "enabled": false, # Enable this feature or not
            "mapper": [ # Map pixel coordinate to EPSG4326 coordinates
                # You should provide coordinates in correct order.
                # E.g. right bottom -> left bottom -> left top -> right top
                # Coordinates should match the reduced_width and reduced_height attributes.
                {"image_coordinates": [640, 360], "epsg4326": [37.61891380882616, 54.20564268115055]},
                {"image_coordinates": [640, 0], "epsg4326": [37.61875545294513, 54.20546281228973]},
                {"image_coordinates": [0, 0], "epsg4326": [37.61903085447736, 54.20543126804313]},
                {"image_coordinates": [0, 360], "epsg4326": [37.61906183714973, 54.20562590237201]}
            ]
        }
    },
    "matpprof_settings": { # pprof for GoCV. Useful for debugging
        "enable": true # Do you want to enable this feature?
    }
}
  • Run
    odam --settings=conf.json
    

Screenshots

  • gocv.Imshow() output:

  • MJPEG streaming output:

Support

If you have troubles or questions, please open an issue. Feel free to make PRs (we do not have contributing guidelines currently, but we will someday)

Roadmap

Please see ROADMAP.md

Dependencies

  • Bindings to OpenCV - GoCV. License is Apache-2.0
  • MJPEG streaming via GoCV - mjpeg. No license currently
  • Darknet (AlexeyAB's fork) - darknet. License is YOLO LICENSE
  • Golang binding to darknet - go-darknet. License is Apache-2.0
  • Tracking objects - gocv-blob. No license currently
  • gRPC for doing "'client-server'" application - grpc. License is Apache-2.0

License

You can check it here

Developers

LdDl https://github.com/LdDl

Pavel7824 https://github.com/Pavel7824

Former one: cpllbstr https://github.com/cpllbstr

Documentation

Index

Constants

View Source
const (
	TRACKER_SIMPLE = TRACKER_TYPE(1)
	TRACKER_KALMAN = TRACKER_TYPE(2)
)
View Source
const (
	// HORIZONTAL_LINE Represents the line with Y{1} of (X{1}Y{1}) = Y{2} of (X{2}Y{2})
	HORIZONTAL_LINE = VIRTUAL_LINE_TYPE(iota + 1)
	// OBLIQUE_LINE Represents the line with Y{1} of (X{1}Y{1}) <> Y{2} of (X{2}Y{2}) (so it has some angle)
	OBLIQUE_LINE
)

Variables

View Source
var File_yolo_grpc_proto protoreflect.FileDescriptor

Functions

func EstimateSpeed added in v0.8.0

func EstimateSpeed(firstPoint, lastPoint gocv.Point2f, start, end time.Time, perspectiveTransformer func(gocv.Point2f) gocv.Point2f) float32

EstimateSpeed Estimates speed approximately

func FixRectForOpenCV

func FixRectForOpenCV(r *image.Rectangle, maxCols, maxRows int)

FixRectForOpenCV Corrects a rectangle's bounds for the provided max-width and max-height. Helps to avoid BBox assertion errors

func GetPerspectiveTransformer added in v0.8.0

func GetPerspectiveTransformer(srcPoints, dstPoints []gocv.Point2f) func(gocv.Point2f) gocv.Point2f

GetPerspectiveTransformer Initializes a perspective transformation function (gocv.Point2f to gocv.Point2f) for GIS conversion purposes

func RegisterServiceYOLOServer added in v0.4.1

func RegisterServiceYOLOServer(s *grpc.Server, srv ServiceYOLOServer)

func Round

func Round(v float64) int

Round Rounds float64 to int

func STDPointToGoCVPoint2F added in v0.8.0

func STDPointToGoCVPoint2F(p image.Point) gocv.Point2f

STDPointToGoCVPoint2F Converts image.Point to gocv.Point2f

Types

type AppSettings

type AppSettings struct {
	VideoSettings         VideoSettings         `json:"video_settings"`
	NeuralNetworkSettings NeuralNetworkSettings `json:"neural_network_settings"`
	CudaSettings          CudaSettings          `json:"cuda_settings"`
	MjpegSettings         MjpegSettings         `json:"mjpeg_settings"`
	GrpcSettings          GrpcSettings          `json:"grpc_settings"`
	ClassesSettings       []*ClassesSettings    `json:"classes_settings"`
	TrackerSettings       TrackerSettings       `json:"tracker_settings"`
	MatPPROFSettings      MatPPROFSettings      `json:"matpprof_settings"`

	sync.RWMutex
	// Exported, but not from JSON
	ClassesDrawOptions map[string]*DrawOptions `json:"-"`
}

AppSettings Settings for application

func NewSettings

func NewSettings(fname string) (*AppSettings, error)

NewSettings Create new AppSettings from content of configuration file

func (*AppSettings) GetDrawOptions added in v0.8.2

func (settings *AppSettings) GetDrawOptions(className string) *DrawOptions

type BBoxSettings

type BBoxSettings struct {
	RGBA      [4]uint8 `json:"rgba"`
	Thickness int      `json:"thickness"`
}

BBoxSettings Options for detection rectangle

type CentroidSettings

type CentroidSettings struct {
	RGBA      [4]uint8 `json:"rgba"`
	Radius    int      `json:"radius"`
	Thickness int      `json:"thickness"`
}

CentroidSettings Options for center of detection rectangle

type ClassInfo added in v0.4.1

type ClassInfo struct {
	ClassId   int32  `protobuf:"varint,1,opt,name=class_id,json=classId,proto3" json:"class_id,omitempty"`
	ClassName string `protobuf:"bytes,2,opt,name=class_name,json=className,proto3" json:"class_name,omitempty"`
	// contains filtered or unexported fields
}

Reference information about object class

func ClassInfoGRPC added in v0.8.3

func ClassInfoGRPC(b blob.Blobie) *ClassInfo

ClassInfoGRPC Prepares gRPC message 'ClassInfo'. A Blob object should be provided

func (*ClassInfo) Descriptor deprecated added in v0.4.1

func (*ClassInfo) Descriptor() ([]byte, []int)

Deprecated: Use ClassInfo.ProtoReflect.Descriptor instead.

func (*ClassInfo) GetClassId added in v0.4.1

func (x *ClassInfo) GetClassId() int32

func (*ClassInfo) GetClassName added in v0.4.1

func (x *ClassInfo) GetClassName() string

func (*ClassInfo) ProtoMessage added in v0.4.1

func (*ClassInfo) ProtoMessage()

func (*ClassInfo) ProtoReflect added in v0.4.1

func (x *ClassInfo) ProtoReflect() protoreflect.Message

func (*ClassInfo) Reset added in v0.4.1

func (x *ClassInfo) Reset()

func (*ClassInfo) String added in v0.4.1

func (x *ClassInfo) String() string

type ClassesSettings added in v0.8.2

type ClassesSettings struct {
	// Classname basically
	ClassName string `json:"class_name"`
	// Options for visual output (useful when either imshow or mjpeg output is used)
	DrawingSettings *ObjectDrawingSettings `json:"drawing_settings"`
}

ClassesSettings Settings for each possible class

func (*ClassesSettings) PrepareDrawingOptions added in v0.8.2

func (classInfo *ClassesSettings) PrepareDrawingOptions() *DrawOptions

PrepareDrawingOptions Prepares drawing options for blob library

type CudaSettings

type CudaSettings struct {
	Enable bool `json:"enable"`
}

CudaSettings CUDA settings

type DetectedObject

type DetectedObject struct {
	Rect       image.Rectangle
	ClassID    int
	ClassName  string
	Confidence float32
}

DetectedObject Store detected object info

type DetectedObjects

type DetectedObjects []*DetectedObject

DetectedObjects Just alias to slice of DetectedObject

type Detection

type Detection struct {
	XLeft  int32 `protobuf:"varint,1,opt,name=x_left,json=xLeft,proto3" json:"x_left,omitempty"`
	YTop   int32 `protobuf:"varint,2,opt,name=y_top,json=yTop,proto3" json:"y_top,omitempty"`
	Height int32 `protobuf:"varint,3,opt,name=height,proto3" json:"height,omitempty"`
	Width  int32 `protobuf:"varint,4,opt,name=width,proto3" json:"width,omitempty"`
	// contains filtered or unexported fields
}

Reference information about detection rectangle

func DetectionInfoGRPC added in v0.8.3

func DetectionInfoGRPC(xmin, ymin, width, height int32) *Detection

DetectionInfoGRPC Prepares gRPC message 'Detection'. BBox information (x-left-top, y-left-top, width and height of the bounding box) should be provided

func (*Detection) Descriptor deprecated

func (*Detection) Descriptor() ([]byte, []int)

Deprecated: Use Detection.ProtoReflect.Descriptor instead.

func (*Detection) GetHeight

func (x *Detection) GetHeight() int32

func (*Detection) GetWidth

func (x *Detection) GetWidth() int32

func (*Detection) GetXLeft

func (x *Detection) GetXLeft() int32

func (*Detection) GetYTop

func (x *Detection) GetYTop() int32

func (*Detection) ProtoMessage

func (*Detection) ProtoMessage()

func (*Detection) ProtoReflect added in v0.4.1

func (x *Detection) ProtoReflect() protoreflect.Message

func (*Detection) Reset

func (x *Detection) Reset()

func (*Detection) String

func (x *Detection) String() string

type DrawOptions added in v0.8.2

type DrawOptions struct {
	*blob.DrawOptions
	DisplayObjectID bool
}

DrawOptions Wraps blob.DrawOptions

func PrepareDrawingOptionsDefault added in v0.8.2

func PrepareDrawingOptionsDefault() *DrawOptions

type EuclideanPoint added in v0.8.0

type EuclideanPoint struct {
	X float32 `protobuf:"fixed32,1,opt,name=x,proto3" json:"x,omitempty"`
	Y float32 `protobuf:"fixed32,2,opt,name=y,proto3" json:"y,omitempty"`
	// contains filtered or unexported fields
}

Representation of a point in Euclidean space

func (*EuclideanPoint) Descriptor deprecated added in v0.8.0

func (*EuclideanPoint) Descriptor() ([]byte, []int)

Deprecated: Use EuclideanPoint.ProtoReflect.Descriptor instead.

func (*EuclideanPoint) GetX added in v0.8.0

func (x *EuclideanPoint) GetX() float32

func (*EuclideanPoint) GetY added in v0.8.0

func (x *EuclideanPoint) GetY() float32

func (*EuclideanPoint) ProtoMessage added in v0.8.0

func (*EuclideanPoint) ProtoMessage()

func (*EuclideanPoint) ProtoReflect added in v0.8.0

func (x *EuclideanPoint) ProtoReflect() protoreflect.Message

func (*EuclideanPoint) Reset added in v0.8.0

func (x *EuclideanPoint) Reset()

func (*EuclideanPoint) String added in v0.8.0

func (x *EuclideanPoint) String() string

type FrameData

type FrameData struct {
	ImgSource gocv.Mat //  Source image
	ImgScaled gocv.Mat // Scaled image
	ImgSTD    image.Image
}

FrameData Wrapper around gocv.Mat

func NewFrameData

func NewFrameData() *FrameData

NewFrameData Simplifies creation of FrameData

func (*FrameData) Close

func (fd *FrameData) Close()

Close Simplify memory management for each gocv.Mat of FrameData

func (*FrameData) Preprocess

func (fd *FrameData) Preprocess(width, height int) error

Preprocess Scales image to given width and height

type GISMapper added in v0.8.0

type GISMapper struct {
	ImageCoordinates [2]float32 `json:"image_coordinates"`
	EPSG4326         [2]float32 `json:"epsg4326"`
}

GISMapper Map image coordinates to GIS coordinates

type GrpcSettings

type GrpcSettings struct {
	Enable     bool   `json:"enable"`
	ServerIP   string `json:"server_ip"`
	ServerPort int    `json:"server_port"`
}

GrpcSettings gRPC-server address

type LinesSetting

type LinesSetting struct {
	LineID        int64    `json:"line_id"`
	Begin         [2]int   `json:"begin"`
	End           [2]int   `json:"end"`
	Direction     string   `json:"direction"`
	DetectClasses []string `json:"detect_classes"`
	RGBA          [4]uint8 `json:"rgba"`
	CropMode      string   `json:"crop_mode"`
	// Exported, but not from JSON
	VLine *VirtualLine `json:"-"`
}

LinesSetting Virtual lines
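The geometric core of a virtual-line counter is a segment-intersection test between the configured line and the object's last track step. A minimal sketch (illustrative function names; direction handling via 'to_detector'/'from_detector' is omitted, though it follows from the sign of the same cross products):

```go
package main

import "fmt"

// side returns the cross product telling on which side of segment (a->b)
// point p lies (positive, negative, or zero if collinear).
func side(ax, ay, bx, by, px, py float64) float64 {
	return (bx-ax)*(py-ay) - (by-ay)*(px-ax)
}

// crossed reports whether the movement p1->p2 intersects the virtual
// line segment (begin->end): the two endpoints of each segment must lie
// on opposite sides of the other segment.
func crossed(bx, by, ex, ey, p1x, p1y, p2x, p2y float64) bool {
	d1 := side(bx, by, ex, ey, p1x, p1y)
	d2 := side(bx, by, ex, ey, p2x, p2y)
	d3 := side(p1x, p1y, p2x, p2y, bx, by)
	d4 := side(p1x, p1y, p2x, p2y, ex, ey)
	return d1*d2 < 0 && d3*d4 < 0
}

func main() {
	// Virtual line from (150, 800) to (1600, 800), as in the config example.
	fmt.Println(crossed(150, 800, 1600, 800, 900, 750, 900, 850)) // true: object crossed the line
	fmt.Println(crossed(150, 800, 1600, 800, 100, 100, 110, 120)) // false: movement far from the line
}
```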

type MatPPROFSettings

type MatPPROFSettings struct {
	Enable bool `json:"enable"`
}

MatPPROFSettings pprof settings of gocv.Mat

type MjpegSettings

type MjpegSettings struct {
	ImshowEnable bool `json:"imshow_enable"`
	Enable       bool `json:"enable"`
	Port         int  `json:"port"`
}

MjpegSettings settings for output

type NeuralNetworkSettings

type NeuralNetworkSettings struct {
	DarknetCFG     string `json:"darknet_cfg"`
	DarknetWeights string `json:"darknet_weights"`
	// DarknetClasses string   `json:"darknet_classes"`
	ConfThreshold float64  `json:"conf_threshold"`
	NmsThreshold  float64  `json:"nms_threshold"`
	TargetClasses []string `json:"target_classes"`
}

NeuralNetworkSettings Neural network

type ObjectDrawingSettings added in v0.8.2

type ObjectDrawingSettings struct {
	// Drawing options for detection rectangle
	BBoxSettings BBoxSettings `json:"bbox_settings"`
	// Drawing options for center of detection rectangle
	CentroidSettings CentroidSettings `json:"centroid_settings"`
	// Drawing options for text in top left corner of detection rectangle
	TextSettings TextSettings `json:"text_settings"`
	// Do you want to display ID of object (uuid)
	DisplayObjectID bool `json:"display_object_id"`
}

ObjectDrawingSettings Drawing settings for MJPEG/imshow

type ObjectInformation added in v0.4.1

type ObjectInformation struct {

	// Camera identifier
	CamId string `protobuf:"bytes,1,opt,name=cam_id,json=camId,proto3" json:"cam_id,omitempty"`
	// Timestamp in Unix UTC
	Timestamp int64 `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
	// Bytes representation of image (PNG)
	Image []byte `protobuf:"bytes,3,opt,name=image,proto3" json:"image,omitempty"`
	// Reference information about detection rectangle
	Detection *Detection `protobuf:"bytes,4,opt,name=detection,proto3" json:"detection,omitempty"`
	// Reference information about object class
	Class *ClassInfo `protobuf:"bytes,5,opt,name=class,proto3" json:"class,omitempty"`
	// Reference information about virtual line (detection line)
	VirtualLine *VirtualLineInfo `protobuf:"bytes,6,opt,name=virtual_line,json=virtualLine,proto3" json:"virtual_line,omitempty"`
	// Reference information about tracking parameters of object (speed + track points)
	TrackInformation *TrackInfo `protobuf:"bytes,7,opt,name=track_information,json=trackInformation,proto3" json:"track_information,omitempty"`
	// contains filtered or unexported fields
}

Reference info about detection, camera, timestamp, etc.

func (*ObjectInformation) Descriptor deprecated added in v0.4.1

func (*ObjectInformation) Descriptor() ([]byte, []int)

Deprecated: Use ObjectInformation.ProtoReflect.Descriptor instead.

func (*ObjectInformation) GetCamId added in v0.4.1

func (x *ObjectInformation) GetCamId() string

func (*ObjectInformation) GetClass added in v0.4.1

func (x *ObjectInformation) GetClass() *ClassInfo

func (*ObjectInformation) GetDetection added in v0.4.1

func (x *ObjectInformation) GetDetection() *Detection

func (*ObjectInformation) GetImage added in v0.4.1

func (x *ObjectInformation) GetImage() []byte

func (*ObjectInformation) GetTimestamp added in v0.4.1

func (x *ObjectInformation) GetTimestamp() int64

func (*ObjectInformation) GetTrackInformation added in v0.8.0

func (x *ObjectInformation) GetTrackInformation() *TrackInfo

func (*ObjectInformation) GetVirtualLine added in v0.4.1

func (x *ObjectInformation) GetVirtualLine() *VirtualLineInfo

func (*ObjectInformation) ProtoMessage added in v0.4.1

func (*ObjectInformation) ProtoMessage()

func (*ObjectInformation) ProtoReflect added in v0.4.1

func (x *ObjectInformation) ProtoReflect() protoreflect.Message

func (*ObjectInformation) Reset added in v0.4.1

func (x *ObjectInformation) Reset()

func (*ObjectInformation) String added in v0.4.1

func (x *ObjectInformation) String() string

type Point added in v0.8.0

type Point struct {
	EuclideanPoint *EuclideanPoint `protobuf:"bytes,1,opt,name=euclidean_point,json=euclideanPoint,proto3" json:"euclidean_point,omitempty"`
	Wgs84Point     *WGS84Point     `protobuf:"bytes,2,opt,name=wgs84_point,json=wgs84Point,proto3" json:"wgs84_point,omitempty"`
	// contains filtered or unexported fields
}

Union of EuclideanPoint and WGS84Point structures

func (*Point) Descriptor deprecated added in v0.8.0

func (*Point) Descriptor() ([]byte, []int)

Deprecated: Use Point.ProtoReflect.Descriptor instead.

func (*Point) GetEuclideanPoint added in v0.8.0

func (x *Point) GetEuclideanPoint() *EuclideanPoint

func (*Point) GetWgs84Point added in v0.8.0

func (x *Point) GetWgs84Point() *WGS84Point

func (*Point) ProtoMessage added in v0.8.0

func (*Point) ProtoMessage()

func (*Point) ProtoReflect added in v0.8.0

func (x *Point) ProtoReflect() protoreflect.Message

func (*Point) Reset added in v0.8.0

func (x *Point) Reset()

func (*Point) String added in v0.8.0

func (x *Point) String() string

type Response

type Response struct {
	Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
	Warning string `protobuf:"bytes,2,opt,name=warning,proto3" json:"warning,omitempty"`
	Error   string `protobuf:"bytes,3,opt,name=error,proto3" json:"error,omitempty"`
	// contains filtered or unexported fields
}

Response from server-side application

func (*Response) Descriptor deprecated

func (*Response) Descriptor() ([]byte, []int)

Deprecated: Use Response.ProtoReflect.Descriptor instead.

func (*Response) GetError

func (x *Response) GetError() string

func (*Response) GetMessage

func (x *Response) GetMessage() string

func (*Response) GetWarning

func (x *Response) GetWarning() string

func (*Response) ProtoMessage

func (*Response) ProtoMessage()

func (*Response) ProtoReflect added in v0.4.1

func (x *Response) ProtoReflect() protoreflect.Message

func (*Response) Reset

func (x *Response) Reset()

func (*Response) String

func (x *Response) String() string

type ServiceYOLOClient added in v0.4.1

type ServiceYOLOClient interface {
	SendDetection(ctx context.Context, in *ObjectInformation, opts ...grpc.CallOption) (*Response, error)
}

ServiceYOLOClient is the client API for ServiceYOLO service.

For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.

func NewServiceYOLOClient added in v0.4.1

func NewServiceYOLOClient(cc grpc.ClientConnInterface) ServiceYOLOClient

type ServiceYOLOServer added in v0.4.1

type ServiceYOLOServer interface {
	SendDetection(context.Context, *ObjectInformation) (*Response, error)
}

ServiceYOLOServer is the server API for ServiceYOLO service.

type SpeedEstimationSettings added in v0.8.0

type SpeedEstimationSettings struct {
	// Is this feature enabled?
	Enabled bool `json:"enabled"`
	// Is gRPC sending needed? If yes make sure that 'grpc_settings.enable' is set to 'true' also
	SendGRPC bool `json:"send_grpc"`
	// Map image coordinates to GIS coordinates. EPSG 4326 is handled only currently
	Mapper []GISMapper `json:"mapper"`
}

SpeedEstimationSettings Settings for speed estimation

type TRACKER_TYPE added in v0.7.0

type TRACKER_TYPE int

type TextSettings

type TextSettings struct {
	RGBA      [4]uint8 `json:"rgba"`
	Scale     float64  `json:"scale"`
	Thickness int      `json:"thickness"`
	Font      string   `json:"font"` // Possible values are: hershey_simplex, hershey_plain, hershey_duplex, hershey_complex, hershey_triplex, hershey_complex_small, hershey_script_simplex, hershey_script_complex, italic
}

TextSettings Options for text in top left corner of detection rectangle

type TrackInfo added in v0.8.0

type TrackInfo struct {
	EstimatedSpeed float32  `protobuf:"fixed32,1,opt,name=estimated_speed,json=estimatedSpeed,proto3" json:"estimated_speed,omitempty"`
	Points         []*Point `protobuf:"bytes,2,rep,name=points,proto3" json:"points,omitempty"`
	// contains filtered or unexported fields
}

Information about estimated speed and track itself

func TrackInfoInfoGRPC added in v0.8.3

func TrackInfoInfoGRPC(b blob.Blobie, speedKey string, scalex, scaley float32, gisConverter func(gocv.Point2f) gocv.Point2f) *TrackInfo

TrackInfoInfoGRPC Prepares gRPC message 'TrackInfo'. The following data should be provided: a Blob object for track extraction, a key for extracting speed information, width/height scales for EuclideanPoint correction to actual coordinates, and a converter function (from pixel to WGS84)

func (*TrackInfo) Descriptor deprecated added in v0.8.0

func (*TrackInfo) Descriptor() ([]byte, []int)

Deprecated: Use TrackInfo.ProtoReflect.Descriptor instead.

func (*TrackInfo) GetEstimatedSpeed added in v0.8.0

func (x *TrackInfo) GetEstimatedSpeed() float32

func (*TrackInfo) GetPoints added in v0.8.0

func (x *TrackInfo) GetPoints() []*Point

func (*TrackInfo) ProtoMessage added in v0.8.0

func (*TrackInfo) ProtoMessage()

func (*TrackInfo) ProtoReflect added in v0.8.0

func (x *TrackInfo) ProtoReflect() protoreflect.Message

func (*TrackInfo) Reset added in v0.8.0

func (x *TrackInfo) Reset()

func (*TrackInfo) String added in v0.8.0

func (x *TrackInfo) String() string

type TrackerSettings

type TrackerSettings struct {
	TrackerType string `json:"tracker_type"`

	// Maximum number of points in a single track
	MaxPointsInTrack        int                     `json:"max_points_in_track"`
	LinesSettings           []LinesSetting          `json:"lines_settings"`
	SpeedEstimationSettings SpeedEstimationSettings `json:"speed_estimation_settings"`
	// contains filtered or unexported fields
}

TrackerSettings Object tracker settings
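Putting the JSON tags together, a tracker configuration fragment could look like this sketch ("kalman" is an illustrative tracker_type value, not a confirmed one; see GetTrackerType for how the string is mapped to TRACKER_TYPE, and LinesSetting for the shape of lines_settings entries):

```json
{
  "tracker_type": "kalman",
  "max_points_in_track": 100,
  "lines_settings": [],
  "speed_estimation_settings": {
    "enabled": false,
    "send_grpc": false,
    "mapper": []
  }
}
```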

func (*TrackerSettings) GetTrackerType added in v0.7.0

func (trs *TrackerSettings) GetTrackerType() TRACKER_TYPE

GetTrackerType Returns the enum value for the tracker type option

type UnimplementedServiceYOLOServer added in v0.4.1

type UnimplementedServiceYOLOServer struct {
}

UnimplementedServiceYOLOServer can be embedded to have forward compatible implementations.

func (*UnimplementedServiceYOLOServer) SendDetection added in v0.4.1

type VIRTUAL_LINE_TYPE added in v0.8.0

type VIRTUAL_LINE_TYPE int

VIRTUAL_LINE_TYPE Alias to int

type VideoSettings

type VideoSettings struct {
	Source        string `json:"source"`
	Width         int    `json:"width"`
	Height        int    `json:"height"`
	ReducedWidth  int    `json:"reduced_width"`
	ReducedHeight int    `json:"reduced_height"`
	CameraID      string `json:"camera_id"`

	// Exported, but not from JSON
	ScaleX float64 `json:"-"`
	ScaleY float64 `json:"-"`
}

VideoSettings Settings for video
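A sketch of the corresponding JSON (the RTSP URL and camera ID are placeholders; ScaleX and ScaleY are excluded via `json:"-"` and computed at runtime):

```json
{
  "source": "rtsp://127.0.0.1:554/stream",
  "width": 1920,
  "height": 1080,
  "reduced_width": 960,
  "reduced_height": 540,
  "camera_id": "cam-1"
}
```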

type VirtualLine

type VirtualLine struct {
	// Point on the left [scaled]
	LeftPT image.Point `json:"-"`
	// Point on the right [scaled]
	RightPT image.Point `json:"-"`
	// Color of line
	Color color.RGBA `json:"-"`
	// Direction of traffic flow
	Direction bool `json:"-"`
	// Should the crossing object be cropped for further processing?
	CropObject bool `json:"-"`
	// Point on the left [non-scaled]
	SourceLeftPT image.Point `json:"-"`
	// Point on the right [non-scaled]
	SourceRightPT image.Point `json:"-"`
	// Type of virtual line: could be horizontal or oblique
	LineType VIRTUAL_LINE_TYPE `json:"-"`
}

VirtualLine Detection line attributes

func NewVirtualLine added in v0.8.3

func NewVirtualLine(x1, y1, x2, y2 int) *VirtualLine

NewVirtualLine Constructor for VirtualLine: (x1, y1) is the left point, (x2, y2) is the right point

func (*VirtualLine) Draw

func (vline *VirtualLine) Draw(img *gocv.Mat)

Draw Draws the virtual line on an image

func (*VirtualLine) IsBlobCrossedLine added in v0.8.3

func (vline *VirtualLine) IsBlobCrossedLine(b blob.Blobie) bool

IsBlobCrossedLine Wrapper around b.IsCrossedTheLine(y2,x1,y1,direction) and b.IsCrossedTheObliqueLine(x2,y2,x1,y1,direction). See ref. https://github.com/LdDl/gocv-blob/blob/master/v2/blob/line_cross.go

func (*VirtualLine) Scale added in v0.8.3

func (vline *VirtualLine) Scale(scaleX, scaleY float64)

Scale Scales down the virtual line (so a scale factor can be > 1.0). (scaleX, scaleY) define how to scale the source (x1,y1) and (x2,y2) coordinates. Important notes: 1. Source coordinates won't be modified. 2. Source coordinates are always used as the basis for scaling, so scaling does not accumulate across multiple calls.

type VirtualLineInfo added in v0.2.0

type VirtualLineInfo struct {
	Id     int64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
	LeftX  int32 `protobuf:"varint,2,opt,name=left_x,json=leftX,proto3" json:"left_x,omitempty"`
	LeftY  int32 `protobuf:"varint,3,opt,name=left_y,json=leftY,proto3" json:"left_y,omitempty"`
	RightX int32 `protobuf:"varint,4,opt,name=right_x,json=rightX,proto3" json:"right_x,omitempty"`
	RightY int32 `protobuf:"varint,5,opt,name=right_y,json=rightY,proto3" json:"right_y,omitempty"`
	// contains filtered or unexported fields
}

Reference information about virtual line (detection line)

func VirtualLineInfoGRPC added in v0.8.3

func VirtualLineInfoGRPC(lineID int64, virtualLine *VirtualLine) *VirtualLineInfo

VirtualLineInfoGRPC Prepares the gRPC message 'VirtualLineInfo'. The line identifier (int64) and its parameters (x0,y0 and x1,y1) should be provided

func (*VirtualLineInfo) Descriptor deprecated added in v0.2.0

func (*VirtualLineInfo) Descriptor() ([]byte, []int)

Deprecated: Use VirtualLineInfo.ProtoReflect.Descriptor instead.

func (*VirtualLineInfo) GetId added in v0.2.0

func (x *VirtualLineInfo) GetId() int64

func (*VirtualLineInfo) GetLeftX added in v0.2.0

func (x *VirtualLineInfo) GetLeftX() int32

func (*VirtualLineInfo) GetLeftY added in v0.2.0

func (x *VirtualLineInfo) GetLeftY() int32

func (*VirtualLineInfo) GetRightX added in v0.2.0

func (x *VirtualLineInfo) GetRightX() int32

func (*VirtualLineInfo) GetRightY added in v0.2.0

func (x *VirtualLineInfo) GetRightY() int32

func (*VirtualLineInfo) ProtoMessage added in v0.2.0

func (*VirtualLineInfo) ProtoMessage()

func (*VirtualLineInfo) ProtoReflect added in v0.4.1

func (x *VirtualLineInfo) ProtoReflect() protoreflect.Message

func (*VirtualLineInfo) Reset added in v0.2.0

func (x *VirtualLineInfo) Reset()

func (*VirtualLineInfo) String added in v0.2.0

func (x *VirtualLineInfo) String() string

type WGS84Point added in v0.8.0

type WGS84Point struct {
	Longitude float32 `protobuf:"fixed32,1,opt,name=longitude,proto3" json:"longitude,omitempty"`
	Latitude  float32 `protobuf:"fixed32,2,opt,name=latitude,proto3" json:"latitude,omitempty"`
	// contains filtered or unexported fields
}

Representation of a point in the WGS84 spatial reference system. See ref. https://en.wikipedia.org/wiki/World_Geodetic_System#WGS84

func (*WGS84Point) Descriptor deprecated added in v0.8.0

func (*WGS84Point) Descriptor() ([]byte, []int)

Deprecated: Use WGS84Point.ProtoReflect.Descriptor instead.

func (*WGS84Point) GetLatitude added in v0.8.0

func (x *WGS84Point) GetLatitude() float32

func (*WGS84Point) GetLongitude added in v0.8.0

func (x *WGS84Point) GetLongitude() float32

func (*WGS84Point) ProtoMessage added in v0.8.0

func (*WGS84Point) ProtoMessage()

func (*WGS84Point) ProtoReflect added in v0.8.0

func (x *WGS84Point) ProtoReflect() protoreflect.Message

func (*WGS84Point) Reset added in v0.8.0

func (x *WGS84Point) Reset()

func (*WGS84Point) String added in v0.8.0

func (x *WGS84Point) String() string

Directories

Path Synopsis
cmd
