gstvideo

package

Published: Apr 7, 2023 License: MPL-2.0

Documentation

Constants

const BUFFER_POOL_OPTION_VIDEO_AFFINE_TRANSFORMATION_META = "GstBufferPoolOptionVideoAffineTransformation"

const BUFFER_POOL_OPTION_VIDEO_ALIGNMENT = "GstBufferPoolOptionVideoAlignment"

BUFFER_POOL_OPTION_VIDEO_ALIGNMENT: bufferpool option to enable extra padding. When a bufferpool supports this option, gst_buffer_pool_config_set_video_alignment() can be called.

When this option is enabled on the bufferpool, GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.

const BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META = "GstBufferPoolOptionVideoGLTextureUploadMeta"

BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META: option that can be activated on a bufferpool to request gl texture upload meta on buffers from the pool.

When this option is enabled on the bufferpool, GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.

const BUFFER_POOL_OPTION_VIDEO_META = "GstBufferPoolOptionVideoMeta"

BUFFER_POOL_OPTION_VIDEO_META: option that can be activated on bufferpool to request video metadata on buffers from the pool.

const CAPS_FEATURE_FORMAT_INTERLACED = "format:Interlaced"

CAPS_FEATURE_FORMAT_INTERLACED: name of the caps feature indicating that the stream is interlaced.

Currently it is only used for video with 'interlace-mode=alternate' to ensure backwards compatibility for this new mode. In this mode each buffer carries a single field of interlaced video. GST_VIDEO_BUFFER_FLAG_TOP_FIELD and GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD indicate whether the buffer carries a top or bottom field. The order of buffers/fields in the stream and the timestamps on the buffers indicate the temporal order of the fields. Top and bottom fields are expected to alternate in this mode. The frame rate in the caps still signals the frame rate, so the notional field rate will be twice the frame rate from the caps (see GST_VIDEO_INFO_FIELD_RATE_N).

const CAPS_FEATURE_META_GST_VIDEO_AFFINE_TRANSFORMATION_META = "meta:GstVideoAffineTransformation"
const CAPS_FEATURE_META_GST_VIDEO_GL_TEXTURE_UPLOAD_META = "meta:GstVideoGLTextureUploadMeta"
const CAPS_FEATURE_META_GST_VIDEO_META = "meta:GstVideoMeta"
const CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION = "meta:GstVideoOverlayComposition"

const META_TAG_VIDEO_COLORSPACE_STR = "colorspace"

META_TAG_VIDEO_COLORSPACE_STR: this metadata stays relevant as long as video colorspace is unchanged.

const META_TAG_VIDEO_ORIENTATION_STR = "orientation"

META_TAG_VIDEO_ORIENTATION_STR: this metadata stays relevant as long as video orientation is unchanged.

const META_TAG_VIDEO_SIZE_STR = "size"

META_TAG_VIDEO_SIZE_STR: this metadata stays relevant as long as video size is unchanged.

const META_TAG_VIDEO_STR = "video"

META_TAG_VIDEO_STR: this metadata is relevant for video streams.

const VIDEO_COLORIMETRY_BT2020 = "bt2020"
const VIDEO_COLORIMETRY_BT2020_10 = "bt2020-10"
const VIDEO_COLORIMETRY_BT2100_HLG = "bt2100-hlg"
const VIDEO_COLORIMETRY_BT2100_PQ = "bt2100-pq"
const VIDEO_COLORIMETRY_BT601 = "bt601"
const VIDEO_COLORIMETRY_BT709 = "bt709"
const VIDEO_COLORIMETRY_SMPTE240M = "smpte240m"
const VIDEO_COLORIMETRY_SRGB = "sRGB"
const VIDEO_COMP_A = 3
const VIDEO_COMP_B = 2
const VIDEO_COMP_G = 1
const VIDEO_COMP_INDEX = 0
const VIDEO_COMP_PALETTE = 1
const VIDEO_COMP_R = 0
const VIDEO_COMP_U = 1
const VIDEO_COMP_V = 2
const VIDEO_COMP_Y = 0

const VIDEO_CONVERTER_OPT_ALPHA_MODE = "GstVideoConverter.alpha-mode"

VIDEO_CONVERTER_OPT_ALPHA_MODE the alpha mode to use. Default is GST_VIDEO_ALPHA_MODE_COPY.

const VIDEO_CONVERTER_OPT_ALPHA_VALUE = "GstVideoConverter.alpha-value"

VIDEO_CONVERTER_OPT_ALPHA_VALUE the alpha color value to use. Defaults to 1.0.

const VIDEO_CONVERTER_OPT_ASYNC_TASKS = "GstVideoConverter.async-tasks"

VIDEO_CONVERTER_OPT_ASYNC_TASKS whether gst_video_converter_frame() will return immediately without waiting for the conversion to complete. A subsequent gst_video_converter_frame_finish() must be performed to ensure completion of the conversion before subsequent use. Default FALSE.

const VIDEO_CONVERTER_OPT_BORDER_ARGB = "GstVideoConverter.border-argb"

VIDEO_CONVERTER_OPT_BORDER_ARGB the border color to use if GST_VIDEO_CONVERTER_OPT_FILL_BORDER is set to TRUE. The color is in ARGB format. Default 0xff000000.

const VIDEO_CONVERTER_OPT_CHROMA_MODE = "GstVideoConverter.chroma-mode"

VIDEO_CONVERTER_OPT_CHROMA_MODE set the chroma resample mode for subsampled formats. Default is GST_VIDEO_CHROMA_MODE_FULL.

const VIDEO_CONVERTER_OPT_CHROMA_RESAMPLER_METHOD = "GstVideoConverter.chroma-resampler-method"

VIDEO_CONVERTER_OPT_CHROMA_RESAMPLER_METHOD The resampler method to use for chroma resampling. Other options for the resampler can be used, see the VideoResampler. Default is GST_VIDEO_RESAMPLER_METHOD_LINEAR.

const VIDEO_CONVERTER_OPT_DEST_HEIGHT = "GstVideoConverter.dest-height"

VIDEO_CONVERTER_OPT_DEST_HEIGHT height in the destination frame, default destination height.

const VIDEO_CONVERTER_OPT_DEST_WIDTH = "GstVideoConverter.dest-width"

VIDEO_CONVERTER_OPT_DEST_WIDTH width in the destination frame, default destination width.

const VIDEO_CONVERTER_OPT_DEST_X = "GstVideoConverter.dest-x"

VIDEO_CONVERTER_OPT_DEST_X x position in the destination frame, default 0.

const VIDEO_CONVERTER_OPT_DEST_Y = "GstVideoConverter.dest-y"

VIDEO_CONVERTER_OPT_DEST_Y y position in the destination frame, default 0.

const VIDEO_CONVERTER_OPT_DITHER_METHOD = "GstVideoConverter.dither-method"

VIDEO_CONVERTER_OPT_DITHER_METHOD The dither method to use when changing bit depth. Default is GST_VIDEO_DITHER_BAYER.

const VIDEO_CONVERTER_OPT_DITHER_QUANTIZATION = "GstVideoConverter.dither-quantization"

VIDEO_CONVERTER_OPT_DITHER_QUANTIZATION The quantization amount to dither to. Components will be quantized to multiples of this value. Default is 1.

const VIDEO_CONVERTER_OPT_FILL_BORDER = "GstVideoConverter.fill-border"

VIDEO_CONVERTER_OPT_FILL_BORDER if the destination rectangle does not fill the complete destination image, render a border with GST_VIDEO_CONVERTER_OPT_BORDER_ARGB. Otherwise the unused pixels in the destination are untouched. Default TRUE.

const VIDEO_CONVERTER_OPT_GAMMA_MODE = "GstVideoConverter.gamma-mode"

VIDEO_CONVERTER_OPT_GAMMA_MODE set the gamma mode. Default is GST_VIDEO_GAMMA_MODE_NONE.

const VIDEO_CONVERTER_OPT_MATRIX_MODE = "GstVideoConverter.matrix-mode"

VIDEO_CONVERTER_OPT_MATRIX_MODE set the color matrix conversion mode for converting between Y'PbPr and non-linear RGB (R'G'B'). Default is GST_VIDEO_MATRIX_MODE_FULL.

const VIDEO_CONVERTER_OPT_PRIMARIES_MODE = "GstVideoConverter.primaries-mode"

VIDEO_CONVERTER_OPT_PRIMARIES_MODE set the primaries conversion mode. Default is GST_VIDEO_PRIMARIES_MODE_NONE.

const VIDEO_CONVERTER_OPT_RESAMPLER_METHOD = "GstVideoConverter.resampler-method"

VIDEO_CONVERTER_OPT_RESAMPLER_METHOD The resampler method to use for resampling. Other options for the resampler can be used, see the VideoResampler. Default is GST_VIDEO_RESAMPLER_METHOD_CUBIC.

const VIDEO_CONVERTER_OPT_RESAMPLER_TAPS = "GstVideoConverter.resampler-taps"

VIDEO_CONVERTER_OPT_RESAMPLER_TAPS The number of taps for the resampler. Default is 0: let the resampler choose a good value.

const VIDEO_CONVERTER_OPT_SRC_HEIGHT = "GstVideoConverter.src-height"

VIDEO_CONVERTER_OPT_SRC_HEIGHT source height to convert, default source height.

const VIDEO_CONVERTER_OPT_SRC_WIDTH = "GstVideoConverter.src-width"

VIDEO_CONVERTER_OPT_SRC_WIDTH source width to convert, default source width.

const VIDEO_CONVERTER_OPT_SRC_X = "GstVideoConverter.src-x"

VIDEO_CONVERTER_OPT_SRC_X source x position to start conversion, default 0.

const VIDEO_CONVERTER_OPT_SRC_Y = "GstVideoConverter.src-y"

VIDEO_CONVERTER_OPT_SRC_Y source y position to start conversion, default 0.

const VIDEO_CONVERTER_OPT_THREADS = "GstVideoConverter.threads"

VIDEO_CONVERTER_OPT_THREADS maximum number of threads to use. Default is 1; a value of 0 uses one thread per CPU core.

const VIDEO_DECODER_MAX_ERRORS = 10

VIDEO_DECODER_MAX_ERRORS: default maximum number of errors tolerated before signaling error.

const VIDEO_DECODER_SINK_NAME = "sink"

VIDEO_DECODER_SINK_NAME: name of the templates for the sink pad.

const VIDEO_DECODER_SRC_NAME = "src"

VIDEO_DECODER_SRC_NAME: name of the templates for the source pad.

const VIDEO_ENCODER_SINK_NAME = "sink"

VIDEO_ENCODER_SINK_NAME: name of the templates for the sink pad.

const VIDEO_ENCODER_SRC_NAME = "src"

VIDEO_ENCODER_SRC_NAME: name of the templates for the source pad.

const VIDEO_FORMATS_ALL = "" /* 933-byte string literal not displayed */

VIDEO_FORMATS_ALL: list of all video formats, for use in template caps strings.

Formats are sorted by decreasing "quality", using these criteria by priority:

  • number of components
  • depth
  • subsampling factor of the width
  • subsampling factor of the height
  • number of planes
  • native endianness preferred
  • pixel stride
  • poffset
  • prefer non-complex formats
  • prefer YUV formats over RGB ones
  • prefer I420 over YV12
  • format name

const VIDEO_FPS_RANGE = "(fraction) [ 0, max ]"
const VIDEO_MAX_COMPONENTS = 4
const VIDEO_MAX_PLANES = 4

const VIDEO_RESAMPLER_OPT_CUBIC_B = "GstVideoResampler.cubic-b"

VIDEO_RESAMPLER_OPT_CUBIC_B: G_TYPE_DOUBLE, B parameter of the cubic filter. The B parameter controls the blurriness. Values between 0.0 and 2.0 are accepted. 1/3 is the default.

Below are the B and C values of some popular filters:

                    B       C
    Hermite         0.0     0.0
    Spline          1.0     0.0
    Catmull-Rom     0.0     1/2
    Mitchell        1/3     1/3
    Robidoux        0.3782  0.3109
    Robidoux Sharp  0.2620  0.3690
    Robidoux Soft   0.6796  0.1602

const VIDEO_RESAMPLER_OPT_CUBIC_C = "GstVideoResampler.cubic-c"

VIDEO_RESAMPLER_OPT_CUBIC_C: G_TYPE_DOUBLE, C parameter of the cubic filter. The C parameter controls the Keys alpha value. Values between 0.0 and 2.0 are accepted. 1/3 is the default.

See GST_VIDEO_RESAMPLER_OPT_CUBIC_B for some more common values.

const VIDEO_RESAMPLER_OPT_ENVELOPE = "GstVideoResampler.envelope"

VIDEO_RESAMPLER_OPT_ENVELOPE: G_TYPE_DOUBLE, specifies the size of the filter envelope for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 1.0 and 5.0. 2.0 is the default.

const VIDEO_RESAMPLER_OPT_MAX_TAPS = "GstVideoResampler.max-taps"

VIDEO_RESAMPLER_OPT_MAX_TAPS: G_TYPE_INT, limits the maximum number of taps to use. 16 is the default.

const VIDEO_RESAMPLER_OPT_SHARPEN = "GstVideoResampler.sharpen"

VIDEO_RESAMPLER_OPT_SHARPEN: G_TYPE_DOUBLE, specifies sharpening of the filter for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.0 and 1.0. 0.0 is the default.

const VIDEO_RESAMPLER_OPT_SHARPNESS = "GstVideoResampler.sharpness"

VIDEO_RESAMPLER_OPT_SHARPNESS: G_TYPE_DOUBLE, specifies sharpness of the filter for GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between 0.5 and 1.5. 1.0 is the default.

const VIDEO_SCALER_OPT_DITHER_METHOD = "GstVideoScaler.dither-method"

VIDEO_SCALER_OPT_DITHER_METHOD The dither method to use for propagating quantization errors.

const VIDEO_SIZE_RANGE = "(int) [ 1, max ]"
const VIDEO_TILE_TYPE_MASK = 65535
const VIDEO_TILE_TYPE_SHIFT = 16
const VIDEO_TILE_X_TILES_MASK = 65535
const VIDEO_TILE_Y_TILES_SHIFT = 16

Variables

var (
	GTypeColorBalanceType = coreglib.Type(C.gst_color_balance_type_get_type())
	GTypeColorBalance     = coreglib.Type(C.gst_color_balance_get_type())
)

GType values.

var (
	GTypeVideoAncillaryDID = coreglib.Type(C.gst_video_ancillary_did_get_type())
	GTypeVideoGammaMode    = coreglib.Type(C.gst_video_gamma_mode_get_type())
)

GType values.

var (
	GTypeVideoAggregator           = coreglib.Type(C.gst_video_aggregator_get_type())
	GTypeVideoAggregatorConvertPad = coreglib.Type(C.gst_video_aggregator_convert_pad_get_type())
	GTypeVideoAggregatorPad        = coreglib.Type(C.gst_video_aggregator_pad_get_type())
)

GType values.

var (
	GTypeVideoGLTextureOrientation = coreglib.Type(C.gst_video_gl_texture_orientation_get_type())
	GTypeVideoGLTextureType        = coreglib.Type(C.gst_video_gl_texture_type_get_type())
)

GType values.

var (
	GTypeVideoTimeCodeFlags = coreglib.Type(C.gst_video_time_code_flags_get_type())
	GTypeVideoTimeCode      = coreglib.Type(C.gst_video_time_code_get_type())
)

GType values.

var (
	GTypeVideoCodecFrameFlags = coreglib.Type(C.gst_video_codec_frame_flags_get_type())
	GTypeVideoCodecFrame      = coreglib.Type(C.gst_video_codec_frame_get_type())
	GTypeVideoCodecState      = coreglib.Type(C.gst_video_codec_state_get_type())
)

GType values.

var (
	GTypeNavigationCommand     = coreglib.Type(C.gst_navigation_command_get_type())
	GTypeNavigationEventType   = coreglib.Type(C.gst_navigation_event_type_get_type())
	GTypeNavigationMessageType = coreglib.Type(C.gst_navigation_message_type_get_type())
	GTypeNavigationQueryType   = coreglib.Type(C.gst_navigation_query_type_get_type())
	GTypeNavigation            = coreglib.Type(C.gst_navigation_get_type())
)

GType values.

var (
	GTypeVideoAncillaryDID16  = coreglib.Type(C.gst_video_ancillary_di_d16_get_type())
	GTypeVideoCaptionType     = coreglib.Type(C.gst_video_caption_type_get_type())
	GTypeVideoVBIParserResult = coreglib.Type(C.gst_video_vbi_parser_result_get_type())
	GTypeVideoVBIEncoder      = coreglib.Type(C.gst_video_vbi_encoder_get_type())
	GTypeVideoVBIParser       = coreglib.Type(C.gst_video_vbi_parser_get_type())
)

GType values.

var (
	GTypeVideoAFDSpec  = coreglib.Type(C.gst_video_afd_spec_get_type())
	GTypeVideoAFDValue = coreglib.Type(C.gst_video_afd_value_get_type())
)

GType values.

var (
	GTypeVideoChromaMethod = coreglib.Type(C.gst_video_chroma_method_get_type())
	GTypeVideoChromaFlags  = coreglib.Type(C.gst_video_chroma_flags_get_type())
	GTypeVideoChromaSite   = coreglib.Type(C.gst_video_chroma_site_get_type())
)

GType values.

var (
	GTypeVideoColorMatrix      = coreglib.Type(C.gst_video_color_matrix_get_type())
	GTypeVideoColorPrimaries   = coreglib.Type(C.gst_video_color_primaries_get_type())
	GTypeVideoColorRange       = coreglib.Type(C.gst_video_color_range_get_type())
	GTypeVideoTransferFunction = coreglib.Type(C.gst_video_transfer_function_get_type())
)

GType values.

var (
	GTypeVideoAlphaMode     = coreglib.Type(C.gst_video_alpha_mode_get_type())
	GTypeVideoChromaMode    = coreglib.Type(C.gst_video_chroma_mode_get_type())
	GTypeVideoMatrixMode    = coreglib.Type(C.gst_video_matrix_mode_get_type())
	GTypeVideoPrimariesMode = coreglib.Type(C.gst_video_primaries_mode_get_type())
)

GType values.

var (
	GTypeVideoDitherMethod = coreglib.Type(C.gst_video_dither_method_get_type())
	GTypeVideoDitherFlags  = coreglib.Type(C.gst_video_dither_flags_get_type())
)

GType values.

var (
	GTypeVideoFormat      = coreglib.Type(C.gst_video_format_get_type())
	GTypeVideoFormatFlags = coreglib.Type(C.gst_video_format_flags_get_type())
	GTypeVideoPackFlags   = coreglib.Type(C.gst_video_pack_flags_get_type())
)

GType values.

var (
	GTypeVideoBufferFlags = coreglib.Type(C.gst_video_buffer_flags_get_type())
	GTypeVideoFrameFlags  = coreglib.Type(C.gst_video_frame_flags_get_type())
)

GType values.

var (
	GTypeVideoInterlaceMode         = coreglib.Type(C.gst_video_interlace_mode_get_type())
	GTypeVideoMultiviewFramePacking = coreglib.Type(C.gst_video_multiview_frame_packing_get_type())
	GTypeVideoMultiviewMode         = coreglib.Type(C.gst_video_multiview_mode_get_type())
	GTypeVideoFlags                 = coreglib.Type(C.gst_video_flags_get_type())
	GTypeVideoMultiviewFlags        = coreglib.Type(C.gst_video_multiview_flags_get_type())
	GTypeVideoInfo                  = coreglib.Type(C.gst_video_info_get_type())
)

GType values.

var (
	GTypeVideoOverlayFormatFlags = coreglib.Type(C.gst_video_overlay_format_flags_get_type())
	GTypeVideoOverlayComposition = coreglib.Type(C.gst_video_overlay_composition_get_type())
	GTypeVideoOverlayRectangle   = coreglib.Type(C.gst_video_overlay_rectangle_get_type())
)

GType values.

var (
	GTypeVideoResamplerMethod = coreglib.Type(C.gst_video_resampler_method_get_type())
	GTypeVideoResamplerFlags  = coreglib.Type(C.gst_video_resampler_flags_get_type())
)

GType values.

var (
	GTypeVideoTileMode = coreglib.Type(C.gst_video_tile_mode_get_type())
	GTypeVideoTileType = coreglib.Type(C.gst_video_tile_type_get_type())
)

GType values.

var (
	GTypeColorBalanceChannel = coreglib.Type(C.gst_color_balance_channel_get_type())
)

GType values.

var (
	GTypeVideoAggregatorParallelConvertPad = coreglib.Type(C.gst_video_aggregator_parallel_convert_pad_get_type())
)

GType values.

var (
	GTypeVideoBufferPool = coreglib.Type(C.gst_video_buffer_pool_get_type())
)

GType values.

var (
	GTypeVideoDecoder = coreglib.Type(C.gst_video_decoder_get_type())
)

GType values.

var (
	GTypeVideoDecoderRequestSyncPointFlags = coreglib.Type(C.gst_video_decoder_request_sync_point_flags_get_type())
)

GType values.

var (
	GTypeVideoDirection = coreglib.Type(C.gst_video_direction_get_type())
)

GType values.

var (
	GTypeVideoEncoder = coreglib.Type(C.gst_video_encoder_get_type())
)

GType values.

var (
	GTypeVideoFieldOrder = coreglib.Type(C.gst_video_field_order_get_type())
)

GType values.

var (
	GTypeVideoFilter = coreglib.Type(C.gst_video_filter_get_type())
)

GType values.

var (
	GTypeVideoFrameMapFlags = coreglib.Type(C.gst_video_frame_map_flags_get_type())
)

GType values.

var (
	GTypeVideoMultiviewFlagsSet = coreglib.Type(C.gst_video_multiview_flagset_get_type())
)

GType values.

var (
	GTypeVideoOrientation = coreglib.Type(C.gst_video_orientation_get_type())
)

GType values.

var (
	GTypeVideoOrientationMethod = coreglib.Type(C.gst_video_orientation_method_get_type())
)

GType values.

var (
	GTypeVideoOverlay = coreglib.Type(C.gst_video_overlay_get_type())
)

GType values.

var (
	GTypeVideoScalerFlags = coreglib.Type(C.gst_video_scaler_flags_get_type())
)

GType values.

var (
	GTypeVideoSink = coreglib.Type(C.gst_video_sink_get_type())
)

GType values.

var (
	GTypeVideoTimeCodeInterval = coreglib.Type(C.gst_video_time_code_interval_get_type())
)

GType values.

Functions

func BufferPoolConfigGetVideoAlignment

func BufferPoolConfigGetVideoAlignment(config *gst.Structure, align *VideoAlignment) bool

BufferPoolConfigGetVideoAlignment: get the video alignment from the bufferpool configuration config into align.

The function takes the following parameters:

  • config: Structure.
  • align: VideoAlignment.

The function returns the following values:

  • ok: TRUE if config could be parsed correctly.

func BufferPoolConfigSetVideoAlignment

func BufferPoolConfigSetVideoAlignment(config *gst.Structure, align *VideoAlignment)

BufferPoolConfigSetVideoAlignment: set the video alignment in align to the bufferpool configuration config.

The function takes the following parameters:

  • config: Structure.
  • align: VideoAlignment.
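
For illustration, here is a minimal sketch pairing the two helpers above. The import paths are placeholders (not this module's verified paths), and the pool configuration is assumed to have been fetched from a buffer pool beforehand:

package example

import (
	"fmt"

	"example.com/bindings/gst"      // placeholder: substitute this module's gst import path
	"example.com/bindings/gstvideo" // placeholder: substitute this module's gstvideo import path
)

// configureAlignment stores a VideoAlignment in a buffer pool configuration,
// then reads it back to confirm the configuration parses correctly.
func configureAlignment(config *gst.Structure) {
	var align gstvideo.VideoAlignment // zero value: no padding, no stride alignment

	gstvideo.BufferPoolConfigSetVideoAlignment(config, &align)

	if gstvideo.BufferPoolConfigGetVideoAlignment(config, &align) {
		fmt.Println("video alignment set on the pool config")
	}
}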

func IsVideoOverlayPrepareWindowHandleMessage

func IsVideoOverlayPrepareWindowHandleMessage(msg *gst.Message) bool

IsVideoOverlayPrepareWindowHandleMessage: convenience function to check if the given message is a "prepare-window-handle" message from a VideoOverlay.

The function takes the following parameters:

  • msg: Message.

The function returns the following values:

  • ok: whether msg is a "prepare-window-handle" message.

func NavigationEventParseKeyEvent

func NavigationEventParseKeyEvent(event *gst.Event) (string, bool)

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • key (optional): pointer to a location to receive the string identifying the key press. The returned string is owned by the event, and valid only until the event is unreffed.
  • ok

func NavigationEventParseMouseButtonEvent

func NavigationEventParseMouseButtonEvent(event *gst.Event) (button int, x, y float64, ok bool)

NavigationEventParseMouseButtonEvent: retrieve the details of either a Navigation mouse button press event or a mouse button release event. Determine which type the event is using gst_navigation_event_get_type() to retrieve the NavigationEventType.

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • button (optional): pointer to a gint that will receive the button number associated with the event.
  • x (optional): pointer to a gdouble to receive the x coordinate of the mouse button event.
  • y (optional): pointer to a gdouble to receive the y coordinate of the mouse button event.
  • ok: TRUE if the button number and both coordinates could be extracted, otherwise FALSE.

func NavigationEventParseMouseMoveEvent

func NavigationEventParseMouseMoveEvent(event *gst.Event) (x, y float64, ok bool)

NavigationEventParseMouseMoveEvent: inspect a Navigation mouse movement event and extract the coordinates of the event.

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • x (optional): pointer to a gdouble to receive the x coordinate of the mouse movement.
  • y (optional): pointer to a gdouble to receive the y coordinate of the mouse movement.
  • ok: TRUE if both coordinates could be extracted, otherwise FALSE.

func NavigationEventParseMouseScrollEvent

func NavigationEventParseMouseScrollEvent(event *gst.Event) (x, y, deltaX, deltaY float64, ok bool)

NavigationEventParseMouseScrollEvent: inspect a Navigation mouse scroll event and extract the coordinates of the event.

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • x (optional): pointer to a gdouble to receive the x coordinate of the mouse movement.
  • y (optional): pointer to a gdouble to receive the y coordinate of the mouse movement.
  • deltaX (optional): pointer to a gdouble to receive the delta_x coordinate of the mouse movement.
  • deltaY (optional): pointer to a gdouble to receive the delta_y coordinate of the mouse movement.
  • ok: TRUE if all coordinates could be extracted, otherwise FALSE.
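
The parsers above can be combined to classify an incoming navigation event, as in this sketch (assuming the same placeholder imports as the sketch under BufferPoolConfigSetVideoAlignment):

func handleNavigationEvent(ev *gst.Event) {
	// Each parser reports ok == false when ev is not of the matching type,
	// so the parsers can simply be tried in turn.
	if x, y, ok := gstvideo.NavigationEventParseMouseMoveEvent(ev); ok {
		fmt.Printf("pointer moved to %.0f,%.0f\n", x, y)
		return
	}
	if button, x, y, ok := gstvideo.NavigationEventParseMouseButtonEvent(ev); ok {
		fmt.Printf("button %d event at %.0f,%.0f\n", button, x, y)
	}
}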

func NavigationMessageNewAnglesChanged

func NavigationMessageNewAnglesChanged(src gst.GstObjector, curAngle, nAngles uint) *gst.Message

NavigationMessageNewAnglesChanged creates a new Navigation message with type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application that the current angle, or current number of angles available in a multiangle video has changed.

The function takes the following parameters:

  • src to set as source of the new message.
  • curAngle: currently selected angle.
  • nAngles: number of viewing angles now available.

The function returns the following values:

  • message: new Message.

func NavigationMessageNewCommandsChanged

func NavigationMessageNewCommandsChanged(src gst.GstObjector) *gst.Message

NavigationMessageNewCommandsChanged creates a new Navigation message with type GST_NAVIGATION_MESSAGE_COMMANDS_CHANGED.

The function takes the following parameters:

  • src to set as source of the new message.

The function returns the following values:

  • message: new Message.

func NavigationMessageNewEvent

func NavigationMessageNewEvent(src gst.GstObjector, event *gst.Event) *gst.Message

NavigationMessageNewEvent creates a new Navigation message with type GST_NAVIGATION_MESSAGE_EVENT.

The function takes the following parameters:

  • src to set as source of the new message.
  • event: navigation Event.

The function returns the following values:

  • message: new Message.

func NavigationMessageNewMouseOver

func NavigationMessageNewMouseOver(src gst.GstObjector, active bool) *gst.Message

NavigationMessageNewMouseOver creates a new Navigation message with type GST_NAVIGATION_MESSAGE_MOUSE_OVER.

The function takes the following parameters:

  • src to set as source of the new message.
  • active: TRUE if the mouse has entered a clickable area of the display. FALSE if it is over a non-clickable area.

The function returns the following values:

  • message: new Message.

func NavigationMessageParseAnglesChanged

func NavigationMessageParseAnglesChanged(message *gst.Message) (curAngle, nAngles uint, ok bool)

NavigationMessageParseAnglesChanged: parse a Navigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED and extract the cur_angle and n_angles parameters.

The function takes the following parameters:

  • message to inspect.

The function returns the following values:

  • curAngle (optional): pointer to a #guint to receive the new current angle number, or NULL.
  • nAngles (optional): pointer to a #guint to receive the new angle count, or NULL.
  • ok: TRUE if the message could be successfully parsed. FALSE if not.

func NavigationMessageParseEvent

func NavigationMessageParseEvent(message *gst.Message) (*gst.Event, bool)

NavigationMessageParseEvent: parse a Navigation message of type GST_NAVIGATION_MESSAGE_EVENT and extract the contained Event. The caller must unref the event when done with it.

The function takes the following parameters:

  • message to inspect.

The function returns the following values:

  • event (optional): pointer to a Event to receive the contained navigation event.
  • ok: TRUE if the message could be successfully parsed. FALSE if not.

func NavigationMessageParseMouseOver

func NavigationMessageParseMouseOver(message *gst.Message) (active, ok bool)

NavigationMessageParseMouseOver: parse a Navigation message of type GST_NAVIGATION_MESSAGE_MOUSE_OVER and extract the active/inactive flag. If the mouse over event is marked active, it indicates that the mouse is over a clickable area.

The function takes the following parameters:

  • message to inspect.

The function returns the following values:

  • active (optional): pointer to a gboolean to receive the active/inactive state, or NULL.
  • ok: TRUE if the message could be successfully parsed. FALSE if not.

func NavigationQueryNewAngles

func NavigationQueryNewAngles() *gst.Query

NavigationQueryNewAngles: create a new Navigation angles query. When executed, it will query the pipeline for the set of currently available angles, which may be greater than one in a multiangle video.

The function returns the following values:

  • query: new query.

func NavigationQueryNewCommands

func NavigationQueryNewCommands() *gst.Query

NavigationQueryNewCommands: create a new Navigation commands query. When executed, it will query the pipeline for the set of currently available commands.

The function returns the following values:

  • query: new query.

func NavigationQueryParseAngles

func NavigationQueryParseAngles(query *gst.Query) (curAngle, nAngles uint, ok bool)

NavigationQueryParseAngles: parse the current angle number in the Navigation angles query into the #guint pointed to by the cur_angle variable, and the number of available angles into the #guint pointed to by the n_angles variable.

The function takes the following parameters:

  • query: Query.

The function returns the following values:

  • curAngle (optional): pointer to a #guint into which to store the currently selected angle value from the query, or NULL.
  • nAngles (optional): pointer to a #guint into which to store the number of angles value from the query, or NULL.
  • ok: TRUE if the query could be successfully parsed. FALSE if not.

func NavigationQueryParseCommandsLength

func NavigationQueryParseCommandsLength(query *gst.Query) (uint, bool)

NavigationQueryParseCommandsLength: parse the number of commands in the Navigation commands query.

The function takes the following parameters:

  • query: Query.

The function returns the following values:

  • nCmds (optional): number of commands in this query.
  • ok: TRUE if the query could be successfully parsed. FALSE if not.

func NavigationQuerySetAngles

func NavigationQuerySetAngles(query *gst.Query, curAngle, nAngles uint)

NavigationQuerySetAngles: set the Navigation angles query result field in query.

The function takes the following parameters:

  • query: Query.
  • curAngle: current viewing angle to set.
  • nAngles: number of viewing angles to set.
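
The query helpers documented here compose into a short round trip, sketched below with the same placeholder imports as earlier:

func anglesQueryDemo() {
	q := gstvideo.NavigationQueryNewAngles()

	// An element answering the query would fill in the result field.
	gstvideo.NavigationQuerySetAngles(q, 1, 3) // currently angle 1 of 3

	if cur, n, ok := gstvideo.NavigationQueryParseAngles(q); ok {
		fmt.Printf("angle %d of %d\n", cur, n)
	}
}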

func NavigationQuerySetCommandsv

func NavigationQuerySetCommandsv(query *gst.Query, cmds []NavigationCommand)

NavigationQuerySetCommandsv: set the Navigation command query result fields in query. The number of commands passed must be equal to n_commands.

The function takes the following parameters:

  • query: Query.
  • cmds: array containing n_cmds GstNavigationCommand values.

func VideoAFDMetaGetInfo

func VideoAFDMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoAfdMetaApiGetType

func VideoAfdMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoAffineTransformationMetaApiGetType

func VideoAffineTransformationMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoAffineTransformationMetaGetInfo

func VideoAffineTransformationMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoBarMetaApiGetType

func VideoBarMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoBarMetaGetInfo

func VideoBarMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoBlend

func VideoBlend(dest, src *VideoFrame, x, y int, globalAlpha float32) bool

VideoBlend lets you blend the src image into the dest image.

The function takes the following parameters:

  • dest: frame to blend src into.
  • src: frame that we want to blend into dest.
  • x: offset in pixels where the src image should be blended.
  • y: offset in pixels where the src image should be blended.
  • globalAlpha: global alpha value each per-pixel alpha value is multiplied with.

The function returns the following values:

func VideoCalculateDisplayRatio

func VideoCalculateDisplayRatio(videoWidth, videoHeight, videoParN, videoParD, displayParN, displayParD uint) (darN, darD uint, ok bool)

VideoCalculateDisplayRatio: given the Pixel Aspect Ratio and size of an input video frame, and the pixel aspect ratio of the intended display device, calculates the actual display ratio the video will be rendered with.

The function takes the following parameters:

  • videoWidth: width of the video frame in pixels.
  • videoHeight: height of the video frame in pixels.
  • videoParN: numerator of the pixel aspect ratio of the input video.
  • videoParD: denominator of the pixel aspect ratio of the input video.
  • displayParN: numerator of the pixel aspect ratio of the display device.
  • displayParD: denominator of the pixel aspect ratio of the display device.

The function returns the following values:

  • darN: numerator of the calculated display_ratio.
  • darD: denominator of the calculated display_ratio.
  • ok: boolean indicating success and a calculated Display Ratio in the dar_n and dar_d parameters. The return value is FALSE in the case of integer overflow or other error.
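
A worked example under the same placeholder imports: a 720×576 PAL frame with a 16:15 pixel aspect ratio, shown on a square-pixel display, yields a 4:3 display aspect ratio.

func displayRatioDemo() {
	if darN, darD, ok := gstvideo.VideoCalculateDisplayRatio(720, 576, 16, 15, 1, 1); ok {
		// 720*16 : 576*15 = 11520 : 8640, which reduces to 4:3.
		fmt.Printf("display aspect ratio: %d:%d\n", darN, darD)
	}
}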

func VideoCaptionMetaApiGetType

func VideoCaptionMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoCaptionMetaGetInfo

func VideoCaptionMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoCaptionTypeToCaps

func VideoCaptionTypeToCaps(typ VideoCaptionType) *gst.Caps

VideoCaptionTypeToCaps creates new caps corresponding to typ.

The function takes the following parameters:

  • typ: VideoCaptionType.

The function returns the following values:

  • caps: new Caps.

func VideoChromaSiteToString

func VideoChromaSiteToString(site VideoChromaSite) string

VideoChromaSiteToString converts site to its string representation.

The function takes the following parameters:

  • site: VideoChromaSite.

The function returns the following values:

  • utf8 (optional): string representation of site or NULL if site contains undefined value or is equal to GST_VIDEO_CHROMA_SITE_UNKNOWN.

func VideoChromaToString deprecated

func VideoChromaToString(site VideoChromaSite) string

VideoChromaToString converts site to its string representation.

Deprecated: Use gst_video_chroma_site_to_string() instead.

The function takes the following parameters:

  • site: VideoChromaSite.

The function returns the following values:

  • utf8: string describing site.

func VideoCodecAlphaMetaApiGetType

func VideoCodecAlphaMetaApiGetType() coreglib.Type

The function returns the following values:

  • gType for the VideoCodecAlphaMeta structure.

func VideoCodecAlphaMetaGetInfo

func VideoCodecAlphaMetaGetInfo() *gst.MetaInfo

The function returns the following values:

  • metaInfo pointer that describes VideoCodecAlphaMeta.

func VideoColorMatrixGetKrKb

func VideoColorMatrixGetKrKb(matrix VideoColorMatrix) (Kr, Kb float64, ok bool)

VideoColorMatrixGetKrKb: get the coefficients used to convert between Y'PbPr and R'G'B' using matrix.

When:

0.0 <= [Y', R', G', B'] <= 1.0
-0.5 <= [Pb, Pr] <= 0.5

the general conversion is given by:

Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))

and the other way around:

R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb).

The function takes the following parameters:

  • matrix: VideoColorMatrix.

The function returns the following values:

  • Kr: result red channel coefficient.
  • Kb: result blue channel coefficient.
  • ok: TRUE if matrix was a YUV color format and Kr and Kb contain valid values.
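
A small sketch printing the coefficients; the matrix is taken as a parameter rather than guessing this binding's enum value names:

func printKrKb(matrix gstvideo.VideoColorMatrix) {
	if kr, kb, ok := gstvideo.VideoColorMatrixGetKrKb(matrix); ok {
		// For the BT.709 matrix this prints Kr=0.2126 Kb=0.0722.
		fmt.Printf("Kr=%.4f Kb=%.4f\n", kr, kb)
	}
}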

func VideoColorMatrixToISO

func VideoColorMatrixToISO(matrix VideoColorMatrix) uint

VideoColorMatrixToISO converts VideoColorMatrix to the "matrix coefficients" (MatrixCoefficients) value defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" share the identical values.

The function takes the following parameters:

  • matrix: VideoColorMatrix.

The function returns the following values:

  • guint: value of ISO/IEC 23001-8 matrix coefficients.

func VideoColorPrimariesToISO

func VideoColorPrimariesToISO(primaries VideoColorPrimaries) uint

VideoColorPrimariesToISO converts VideoColorPrimaries to the "colour primaries" (ColourPrimaries) value defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" share the identical values.

The function takes the following parameters:

  • primaries: VideoColorPrimaries.

The function returns the following values:

  • guint: value of ISO/IEC 23001-8 colour primaries.

func VideoColorRangeOffsets

func VideoColorRangeOffsets(_range VideoColorRange, info *VideoFormatInfo) (offset, scale [4]int)

VideoColorRangeOffsets: compute the offset and scale values for each component of info. For each component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the range [0.0 .. 1.0].

The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert the component values in range [0.0 .. 1.0] back to their representation in info and range.

The function takes the following parameters:

  • range: VideoColorRange.
  • info: VideoFormatInfo.

The function returns the following values:

  • offset: output offsets.
  • scale: output scale.

func VideoColorTransferDecode deprecated

func VideoColorTransferDecode(fn VideoTransferFunction, val float64) float64

VideoColorTransferDecode is deprecated: use gst_video_transfer_function_decode() instead.

The function takes the following parameters:

  • fn: VideoTransferFunction.
  • val: value.

The function returns the following values:

func VideoColorTransferEncode deprecated

func VideoColorTransferEncode(fn VideoTransferFunction, val float64) float64

VideoColorTransferEncode is deprecated: use gst_video_transfer_function_encode() instead.

The function takes the following parameters:

  • fn: VideoTransferFunction.
  • val: value.

The function returns the following values:

func VideoConvertSample

func VideoConvertSample(sample *gst.Sample, toCaps *gst.Caps, timeout gst.ClockTime) (*gst.Sample, error)

VideoConvertSample converts a raw video buffer into the specified output caps.

The output caps can be any raw video formats or any image formats (jpeg, png, ...).

The width, height and pixel-aspect-ratio can also be specified in the output caps.

The function takes the following parameters:

  • sample: Sample.
  • toCaps to convert to.
  • timeout: maximum amount of time allowed for the processing.

The function returns the following values:

  • ret: converted Sample, or NULL if an error happened (in which case err will point to the #GError).
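
A hedged snapshot helper built on this function; the caps for the target image format are taken as a parameter because caps construction is outside this package:

func snapshotSample(sample *gst.Sample, imageCaps *gst.Caps) (*gst.Sample, error) {
	// Allow at most one second (in nanoseconds) for the conversion.
	return gstvideo.VideoConvertSample(sample, imageCaps, gst.ClockTime(1_000_000_000))
}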

func VideoConvertSampleAsync

func VideoConvertSampleAsync(sample *gst.Sample, toCaps *gst.Caps, timeout gst.ClockTime, callback VideoConvertSampleCallback)

VideoConvertSampleAsync converts a raw video buffer into the specified output caps.

The output caps can be any raw video formats or any image formats (jpeg, png, ...).

The width, height and pixel-aspect-ratio can also be specified in the output caps.

callback will be called after conversion, when an error occurs, or if conversion didn't finish after timeout. callback will always be called from the thread-default GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used, this will always be the global default main context.

destroy_notify will be called after the callback was called and user_data is not needed anymore.

The function takes the following parameters:

  • sample: Sample.
  • toCaps to convert to.
  • timeout: maximum amount of time allowed for the processing.
  • callback: GstVideoConvertSampleCallback that will be called after conversion.

func VideoCropMetaApiGetType

func VideoCropMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoCropMetaGetInfo

func VideoCropMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoEventIsForceKeyUnit

func VideoEventIsForceKeyUnit(event *gst.Event) bool

VideoEventIsForceKeyUnit checks if an event is a force key unit event. Returns TRUE for both upstream and downstream force key unit events.

The function takes the following parameters:

  • event to check.

The function returns the following values:

  • ok: TRUE if the event is a valid force key unit event.

func VideoEventNewDownstreamForceKeyUnit

func VideoEventNewDownstreamForceKeyUnit(timestamp, streamTime, runningTime gst.ClockTime, allHeaders bool, count uint) *gst.Event

VideoEventNewDownstreamForceKeyUnit creates a new downstream force key unit event. A downstream force key unit event can be sent down the pipeline to request downstream elements to produce a key unit. A downstream force key unit event must also be sent when handling an upstream force key unit event to notify downstream that the latter has been handled.

To parse an event created by gst_video_event_new_downstream_force_key_unit() use gst_video_event_parse_downstream_force_key_unit().

The function takes the following parameters:

  • timestamp of the buffer that starts a new key unit.
  • streamTime: stream_time of the buffer that starts a new key unit.
  • runningTime: running_time of the buffer that starts a new key unit.
  • allHeaders: TRUE to produce headers when starting a new key unit.
  • count: integer that can be used to number key units.

The function returns the following values:

  • event: new GstEvent.

func VideoEventNewStillFrame

func VideoEventNewStillFrame(inStill bool) *gst.Event

VideoEventNewStillFrame creates a new Still Frame event. If in_still is TRUE, then the event represents the start of a still frame sequence. If it is FALSE, then the event ends a still frame sequence.

To parse an event created by gst_video_event_new_still_frame() use gst_video_event_parse_still_frame().

The function takes the following parameters:

  • inStill: boolean value for the still-frame state of the event.

The function returns the following values:

  • event: new GstEvent.

func VideoEventNewUpstreamForceKeyUnit

func VideoEventNewUpstreamForceKeyUnit(runningTime gst.ClockTime, allHeaders bool, count uint) *gst.Event

VideoEventNewUpstreamForceKeyUnit creates a new upstream force key unit event. An upstream force key unit event can be sent to request upstream elements to produce a key unit.

running_time can be set to request a new key unit at a specific running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a new key unit as soon as possible.

To parse an event created by gst_video_event_new_upstream_force_key_unit() use gst_video_event_parse_upstream_force_key_unit().

The function takes the following parameters:

  • runningTime: running_time at which a new key unit should be produced.
  • allHeaders: TRUE to produce headers when starting a new key unit.
  • count: integer that can be used to number key units.

The function returns the following values:

  • event: new GstEvent.

func VideoEventParseDownstreamForceKeyUnit

func VideoEventParseDownstreamForceKeyUnit(event *gst.Event) (timestamp, streamTime, runningTime gst.ClockTime, allHeaders bool, count uint, ok bool)

VideoEventParseDownstreamForceKeyUnit: get timestamp, stream-time, running-time, all-headers and count in the force key unit event. See gst_video_event_new_downstream_force_key_unit() for a full description of the downstream force key unit event.

running_time will be adjusted for any pad offsets of pads it was passing through.

The function takes the following parameters:

  • event to parse.

The function returns the following values:

  • timestamp: pointer to the timestamp in the event.
  • streamTime: pointer to the stream-time in the event.
  • runningTime: pointer to the running-time in the event.
  • allHeaders: pointer to the all_headers flag in the event.
  • count: pointer to the count field of the event.
  • ok: TRUE if the event is a valid downstream force key unit event.
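
Creation and parsing pair up as in this sketch (same placeholder imports as earlier):

func forceKeyUnitDemo() {
	ts := gst.ClockTime(0)
	ev := gstvideo.VideoEventNewDownstreamForceKeyUnit(ts, ts, ts, true, 1)

	if _, _, _, allHeaders, count, ok := gstvideo.VideoEventParseDownstreamForceKeyUnit(ev); ok {
		fmt.Println("all-headers:", allHeaders, "count:", count) // all-headers: true count: 1
	}
}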

func VideoEventParseStillFrame

func VideoEventParseStillFrame(event *gst.Event) (inStill, ok bool)

VideoEventParseStillFrame: parse a Event, identify if it is a Still Frame event, and return the still-frame state from the event if it is. If the event represents the start of a still frame, the in_still variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for the in_still variable in order to just check whether the event is a valid still-frame event.

Create a still frame event using gst_video_event_new_still_frame().

The function takes the following parameters:

  • event to parse.

The function returns the following values:

  • inStill: A boolean to receive the still-frame status from the event, or NULL.
  • ok: TRUE if the event is a valid still-frame event. FALSE if not.
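
A still-frame round trip using only the constructor and parser documented here (same placeholder imports as earlier):

func stillFrameDemo() {
	ev := gstvideo.VideoEventNewStillFrame(true) // start of a still-frame sequence

	if inStill, ok := gstvideo.VideoEventParseStillFrame(ev); ok {
		fmt.Println("still frame:", inStill) // still frame: true
	}
}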

func VideoEventParseUpstreamForceKeyUnit

func VideoEventParseUpstreamForceKeyUnit(event *gst.Event) (runningTime gst.ClockTime, allHeaders bool, count uint, ok bool)

VideoEventParseUpstreamForceKeyUnit: get running-time, all-headers and count in the force key unit event. See gst_video_event_new_upstream_force_key_unit() for a full description of the upstream force key unit event.

Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()

running_time will be adjusted for any pad offsets of pads it was passing through.

The function takes the following parameters:

  • event to parse.

The function returns the following values:

  • runningTime: pointer to the running_time in the event.
  • allHeaders: pointer to the all_headers flag in the event.
  • count: pointer to the count field in the event.
  • ok: TRUE if the event is a valid upstream force-key-unit event. FALSE if not.

func VideoFieldOrderToString

func VideoFieldOrderToString(order VideoFieldOrder) string

VideoFieldOrderToString: convert order to its string representation.

The function takes the following parameters:

  • order: VideoFieldOrder.

The function returns the following values:

  • utf8: order as a string or NULL if order is invalid.

func VideoFormatGetPalette

func VideoFormatGetPalette(format VideoFormat) (uint, unsafe.Pointer)

VideoFormatGetPalette: get the default palette of format. This is the palette used in the pack function for paletted formats.

The function takes the following parameters:

  • format: VideoFormat.

The function returns the following values:

  • size of the palette in bytes.
  • gpointer (optional): default palette of format or NULL when format does not have a palette.

func VideoFormatToFourcc

func VideoFormatToFourcc(format VideoFormat) uint32

VideoFormatToFourcc converts a VideoFormat value into the corresponding FOURCC. Only a few YUV formats have corresponding FOURCC values. If format has no corresponding FOURCC value, 0 is returned.

The function takes the following parameters:

  • format video format.

The function returns the following values:

  • guint32: FOURCC corresponding to format.
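
Since a FOURCC packs its first character in the least significant byte, it can be unpacked byte by byte. The format is a parameter here to avoid guessing enum value names; VideoFormatToString is documented next.

func printFourcc(format gstvideo.VideoFormat) {
	fcc := gstvideo.VideoFormatToFourcc(format)
	if fcc == 0 {
		return // format has no corresponding FOURCC
	}
	fmt.Printf("%s -> %c%c%c%c\n", gstvideo.VideoFormatToString(format),
		byte(fcc), byte(fcc>>8), byte(fcc>>16), byte(fcc>>24))
}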

func VideoFormatToString

func VideoFormatToString(format VideoFormat) string

VideoFormatToString returns a string containing a descriptive name for the VideoFormat if there is one, or NULL otherwise.

The function takes the following parameters:

  • format video format.

The function returns the following values:

  • utf8: name corresponding to format.

func VideoGLTextureUploadMetaApiGetType

func VideoGLTextureUploadMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoGLTextureUploadMetaGetInfo

func VideoGLTextureUploadMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoGuessFramerate

func VideoGuessFramerate(duration gst.ClockTime) (destN, destD int, ok bool)

VideoGuessFramerate: given the nominal duration of one video frame, this function will check some standard framerates for a close match (within 0.1%) and return one if possible.

It will calculate an arbitrary framerate if no close match was found, and return FALSE.

It returns FALSE if a duration of 0 is passed.

The function takes the following parameters:

  • duration: nominal duration of one frame.

The function returns the following values:

  • destN (optional): numerator of the calculated framerate.
  • destD (optional): denominator of the calculated framerate.
  • ok: TRUE if a close "standard" framerate was recognised, and FALSE otherwise.
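
For example, a nominal frame duration of 33366667 ns (one frame of ~29.97 fps video) should be recognised as the standard 30000/1001 rate:

func guessFramerateDemo() {
	if n, d, ok := gstvideo.VideoGuessFramerate(gst.ClockTime(33366667)); ok {
		fmt.Printf("standard framerate recognised: %d/%d\n", n, d) // 30000/1001
	}
}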

func VideoInterlaceModeToString

func VideoInterlaceModeToString(mode VideoInterlaceMode) string

VideoInterlaceModeToString: convert mode to its string representation.

The function takes the following parameters:

  • mode: VideoInterlaceMode.

The function returns the following values:

  • utf8: mode as a string or NULL if mode is invalid.

func VideoMakeRawCaps

func VideoMakeRawCaps(formats []VideoFormat) *gst.Caps

VideoMakeRawCaps: return a generic raw video caps for formats defined in formats. If formats is NULL returns a caps for all the supported raw video formats, see gst_video_formats_raw().

The function takes the following parameters:

  • formats (optional): array of raw VideoFormat, or NULL.

The function returns the following values:

  • caps: video GstCaps.
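
The nil case is worth spelling out, as in this small wrapper:

func rawCapsFor(formats []gstvideo.VideoFormat) *gst.Caps {
	// A nil slice yields caps covering all supported raw formats;
	// a non-nil slice restricts the caps to exactly those formats.
	return gstvideo.VideoMakeRawCaps(formats)
}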

func VideoMakeRawCapsWithFeatures

func VideoMakeRawCapsWithFeatures(formats []VideoFormat, features *gst.CapsFeatures) *gst.Caps

VideoMakeRawCapsWithFeatures: return a generic raw video caps for formats defined in formats with features features. If formats is NULL returns a caps for all the supported video formats, see gst_video_formats_raw().

The function takes the following parameters:

  • formats (optional): array of raw VideoFormat, or NULL.
  • features (optional) to set on the caps.

The function returns the following values:

  • caps: video GstCaps.

func VideoMetaApiGetType

func VideoMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoMetaGetInfo

func VideoMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoMetaTransformScaleGetQuark

func VideoMetaTransformScaleGetQuark() glib.Quark

VideoMetaTransformScaleGetQuark: get the #GQuark for the "gst-video-scale" metadata transform operation.

The function returns the following values:

  • quark: #GQuark.

func VideoMultiviewGetDoubledHeightModes

func VideoMultiviewGetDoubledHeightModes() *coreglib.Value

The function returns the following values:

  • value: const #GValue containing a list of stereo video modes

    Utility function that returns a #GValue with a GstList of packed stereo video modes with double the height of a single view for use in caps negotiations. Currently this is top-bottom and row-interleaved.

func VideoMultiviewGetDoubledSizeModes

func VideoMultiviewGetDoubledSizeModes() *coreglib.Value

The function returns the following values:

  • value: const #GValue containing a list of stereo video modes

    Utility function that returns a #GValue with a GstList of packed stereo video modes that have double the width/height of a single view for use in caps negotiation. Currently this is just 'checkerboard' layout.

func VideoMultiviewGetDoubledWidthModes

func VideoMultiviewGetDoubledWidthModes() *coreglib.Value

The function returns the following values:

  • value: const #GValue containing a list of stereo video modes

    Utility function that returns a #GValue with a GstList of packed stereo video modes with double the width of a single view for use in caps negotiations. Currently this is side-by-side, side-by-side-quincunx and column-interleaved.

func VideoMultiviewGetMonoModes

func VideoMultiviewGetMonoModes() *coreglib.Value

The function returns the following values:

  • value: const #GValue containing a list of mono video modes

    Utility function that returns a #GValue with a GstList of mono video modes (mono/left/right) for use in caps negotiations.

func VideoMultiviewGetUnpackedModes

func VideoMultiviewGetUnpackedModes() *coreglib.Value

The function returns the following values:

  • value: const #GValue containing a list of 'unpacked' stereo video modes

    Utility function that returns a #GValue with a GstList of unpacked stereo video modes (separated/frame-by-frame/frame-by-frame-multiview) for use in caps negotiations.

func VideoMultiviewGuessHalfAspect

func VideoMultiviewGuessHalfAspect(mvMode VideoMultiviewMode, width, height, parN, parD uint) bool

The function takes the following parameters:

  • mvMode: VideoMultiviewMode.
  • width: video frame width in pixels.
  • height: video frame height in pixels.
  • parN: numerator of the video pixel-aspect-ratio.
  • parD: denominator of the video pixel-aspect-ratio.

The function returns the following values:

  • ok: boolean indicating whether the GST_VIDEO_MULTIVIEW_FLAGS_HALF_ASPECT flag should be set.

    Utility function that heuristically guesses whether a frame-packed stereoscopic video contains half width/height encoded views, or full-frame views, by looking at the overall display aspect ratio.

func VideoMultiviewModeToCapsString

func VideoMultiviewModeToCapsString(mviewMode VideoMultiviewMode) string

The function takes the following parameters:

  • mviewMode: VideoMultiviewMode value.

The function returns the following values:

  • utf8 caps string representation of the mode, or NULL if invalid.

    Given a VideoMultiviewMode returns the multiview-mode caps string for insertion into a caps structure.

func VideoMultiviewVideoInfoChangeMode

func VideoMultiviewVideoInfoChangeMode(info *VideoInfo, outMviewMode VideoMultiviewMode, outMviewFlags VideoMultiviewFlags)

VideoMultiviewVideoInfoChangeMode: utility function that transforms the width/height/PAR and multiview mode and flags of a VideoInfo into the requested mode.

The function takes the following parameters:

  • info structure to operate on.
  • outMviewMode: VideoMultiviewMode value.
  • outMviewFlags: set of VideoMultiviewFlags.

func VideoOverlayCompositionMetaApiGetType

func VideoOverlayCompositionMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoOverlayCompositionMetaGetInfo

func VideoOverlayCompositionMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoOverlaySetProperty

func VideoOverlaySetProperty(object *coreglib.Object, lastPropId int, propertyId uint, value *coreglib.Value) bool

VideoOverlaySetProperty: this helper shall be used by classes implementing the VideoOverlay interface that want the render rectangle to be controllable using properties. This helper will parse and set the render rectangle calling gst_video_overlay_set_render_rectangle().

The function takes the following parameters:

  • object: instance on which the property is set.
  • lastPropId: highest property ID.
  • propertyId: property ID.
  • value to be set.

The function returns the following values:

  • ok: TRUE if the property_id matches the GstVideoOverlay property.

func VideoRegionOfInterestMetaApiGetType

func VideoRegionOfInterestMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoRegionOfInterestMetaGetInfo

func VideoRegionOfInterestMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoTileGetIndex

func VideoTileGetIndex(mode VideoTileMode, x, y, xTiles, yTiles int) uint

VideoTileGetIndex: get the tile index of the tile at coordinates x and y in the tiled image of x_tiles by y_tiles.

Use this method when mode is of type GST_VIDEO_TILE_TYPE_INDEXED.

The function takes the following parameters:

  • mode: VideoTileMode.
  • x coordinate.
  • y coordinate.
  • xTiles: number of horizontal tiles.
  • yTiles: number of vertical tiles.

The function returns the following values:

  • guint: index of the tile at x and y in the tiled image of x_tiles by y_tiles.
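
A sketch with the tile mode taken as a parameter (enum value names are not guessed here; placeholder imports as earlier):

func tileIndexDemo(mode gstvideo.VideoTileMode) {
	// Index of the tile at column 2, row 1 in a 4×3 grid of tiles.
	idx := gstvideo.VideoTileGetIndex(mode, 2, 1, 4, 3)
	fmt.Println("tile index:", idx)
}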

func VideoTimeCodeMetaApiGetType

func VideoTimeCodeMetaApiGetType() coreglib.Type

The function returns the following values:

func VideoTimeCodeMetaGetInfo

func VideoTimeCodeMetaGetInfo() *gst.MetaInfo

The function returns the following values:

func VideoTransferFunctionDecode

func VideoTransferFunctionDecode(fn VideoTransferFunction, val float64) float64

VideoTransferFunctionDecode: convert val to its gamma decoded value. This is the inverse operation of gst_video_color_transfer_encode().

For a non-linear value L' in the range [0..1], conversion to the linear L is in general performed with a power function like:

L = L' ^ gamma

Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

The function takes the following parameters:

  • fn: VideoTransferFunction.
  • val: value.

The function returns the following values:

  • gdouble: gamma decoded value of val.

func VideoTransferFunctionEncode

func VideoTransferFunctionEncode(fn VideoTransferFunction, val float64) float64

VideoTransferFunctionEncode: convert val to its gamma encoded value.

For a linear value L in the range [0..1], conversion to the non-linear (gamma encoded) L' is in general performed with a power function like:

L' = L ^ (1 / gamma)

Depending on func, different formulas might be applied. Some formulas encode a linear segment in the lower range.

The function takes the following parameters:

  • fn: VideoTransferFunction.
  • val: value.

The function returns the following values:

  • gdouble: gamma encoded value of val.
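
Encode and decode invert each other, so a round trip should approximately return the input:

func transferRoundTrip(fn gstvideo.VideoTransferFunction) {
	l := 0.18 // linear mid-grey
	encoded := gstvideo.VideoTransferFunctionEncode(fn, l)
	decoded := gstvideo.VideoTransferFunctionDecode(fn, encoded)
	fmt.Printf("L=%.4f -> L'=%.4f -> L=%.4f\n", l, encoded, decoded)
}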

func VideoTransferFunctionIsEquivalent

func VideoTransferFunctionIsEquivalent(fromFunc VideoTransferFunction, fromBpp uint, toFunc VideoTransferFunction, toBpp uint) bool

VideoTransferFunctionIsEquivalent returns whether from_func and to_func are equivalent. There are cases (e.g. BT601, BT709, and BT2020_10) where several functions are functionally identical. In these cases, when doing conversion, we should consider them as equivalent. Also, BT2020_12 is the same as the aforementioned three for less than 12 bits per pixel.

The function takes the following parameters:

  • fromFunc to convert from.
  • fromBpp bits per pixel to convert from.
  • toFunc to convert into.
  • toBpp bits per pixel to convert into.

The function returns the following values:

  • ok: TRUE if from_func and to_func can be considered equivalent.

func VideoTransferFunctionToISO

func VideoTransferFunctionToISO(fn VideoTransferFunction) uint

VideoTransferFunctionToISO converts VideoTransferFunction to the "transfer characteristics" (TransferCharacteristics) value defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.

The function takes the following parameters:

  • fn: VideoTransferFunction.

The function returns the following values:

  • guint: value of ISO/IEC 23001-8 transfer characteristics.

Types

type ColorBalance

type ColorBalance struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

ColorBalance: this interface is implemented by elements which can perform some color balance operation on video frames they process. For example, modifying the brightness, contrast, hue or saturation.

Example elements are 'xvimagesink' and 'colorbalance'.

ColorBalance wraps an interface. This means the user can get the underlying type by calling Cast().

func (*ColorBalance) BalanceType

func (balance *ColorBalance) BalanceType() ColorBalanceType

BalanceType: get the ColorBalanceType of this implementation.

The function returns the following values:

  • colorBalanceType: the ColorBalanceType.

func (*ColorBalance) ConnectValueChanged

func (balance *ColorBalance) ConnectValueChanged(f func(channel *ColorBalanceChannel, value int)) coreglib.SignalHandle

ConnectValueChanged: fired when the value of the indicated channel has changed.

func (*ColorBalance) ListChannels

func (balance *ColorBalance) ListChannels() []*ColorBalanceChannel

ListChannels: retrieve a list of the available channels.

The function returns the following values:

  • list: a GList containing pointers to ColorBalanceChannel objects. The list is owned by the ColorBalance instance and must not be freed.

func (*ColorBalance) SetValue

func (balance *ColorBalance) SetValue(channel *ColorBalanceChannel, value int)

SetValue sets the current value of the channel to the passed value, which must be between min_value and max_value.

See Also: The ColorBalanceChannel.min_value and ColorBalanceChannel.max_value members of the ColorBalanceChannel object.

The function takes the following parameters:

  • channel: ColorBalanceChannel instance.
  • value: new value for the channel.

func (*ColorBalance) Value

func (balance *ColorBalance) Value(channel *ColorBalanceChannel) int

Value: retrieve the current value of the indicated channel, between min_value and max_value.

See Also: The ColorBalanceChannel.min_value and ColorBalanceChannel.max_value members of the ColorBalanceChannel object.

The function takes the following parameters:

  • channel: ColorBalanceChannel instance.

The function returns the following values:

  • gint: current value of the channel.

func (*ColorBalance) ValueChanged

func (balance *ColorBalance) ValueChanged(channel *ColorBalanceChannel, value int)

ValueChanged: helper function called by implementations of the GstColorBalance interface. It fires the ColorBalance::value-changed signal on the instance, and the ColorBalanceChannel::value-changed signal on the channel object.

The function takes the following parameters:

  • channel whose value has changed.
  • value: new value of the channel.
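
A hedged sketch of typical use of the interface methods above (helper name is illustrative; keeping the result within each channel's min_value/max_value is left to the caller):

	// nudgeChannels shifts every available channel by delta. Keeping the
	// result within the channel's min_value/max_value is the caller's job.
	func nudgeChannels(balance *gstvideo.ColorBalance, delta int) {
		for _, ch := range balance.ListChannels() {
			balance.SetValue(ch, balance.Value(ch)+delta)
		}
	}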

type ColorBalanceChannel

type ColorBalanceChannel struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

ColorBalanceChannel object represents a parameter for modifying the color balance implemented by an element providing the ColorBalance interface. For example, Hue or Saturation.

func (*ColorBalanceChannel) ConnectValueChanged

func (v *ColorBalanceChannel) ConnectValueChanged(f func(value int)) coreglib.SignalHandle

ConnectValueChanged: fired when the value of the indicated channel has changed.

type ColorBalanceChannelClass

type ColorBalanceChannelClass struct {
	// contains filtered or unexported fields
}

ColorBalanceChannelClass: color-balance channel class.

An instance of this type is always passed by reference.

type ColorBalanceChannelOverrides

type ColorBalanceChannelOverrides struct {
	// The function takes the following parameters:
	//
	ValueChanged func(value int)
}

ColorBalanceChannelOverrides contains methods that are overridable.

type ColorBalanceInterface

type ColorBalanceInterface struct {
	// contains filtered or unexported fields
}

ColorBalanceInterface: color-balance interface.

An instance of this type is always passed by reference.

type ColorBalanceType

type ColorBalanceType C.gint

ColorBalanceType: enumeration indicating whether an element implements color balancing operations in software or in dedicated hardware. In general, dedicated hardware implementations (such as those provided by xvimagesink) are preferred.

const (
	// ColorBalanceHardware: color balance is implemented with dedicated
	// hardware.
	ColorBalanceHardware ColorBalanceType = iota
	// ColorBalanceSoftware: color balance is implemented via software
	// processing.
	ColorBalanceSoftware
)

func (ColorBalanceType) String

func (c ColorBalanceType) String() string

String returns the name in string for ColorBalanceType.

type ColorBalancer

type ColorBalancer interface {
	coreglib.Objector

	// BalanceType: get the ColorBalanceType of this implementation.
	BalanceType() ColorBalanceType
	// Value: retrieve the current value of the indicated channel, between
	// min_value and max_value.
	Value(channel *ColorBalanceChannel) int
	// ListChannels: retrieve a list of the available channels.
	ListChannels() []*ColorBalanceChannel
	// SetValue sets the current value of the channel to the passed value, which
	// must be between min_value and max_value.
	SetValue(channel *ColorBalanceChannel, value int)
	// ValueChanged: helper function called by implementations of the
	// GstColorBalance interface.
	ValueChanged(channel *ColorBalanceChannel, value int)

	// Value-changed: fired when the value of the indicated channel has changed.
	ConnectValueChanged(func(channel *ColorBalanceChannel, value int)) coreglib.SignalHandle
}

ColorBalancer describes ColorBalance's interface methods.

type Navigation

type Navigation struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

Navigation interface is used for creating and injecting navigation related events such as mouse button presses, cursor motion and key presses. The associated library also provides methods for parsing received events, and for sending and receiving navigation related bus events. One main use case is DVD menu navigation.

The main parts of the API are:

  • The GstNavigation interface, implemented by elements which provide an application with the ability to create and inject navigation events into the pipeline.

  • GstNavigation event handling API. GstNavigation events are created in response to calls on a GstNavigation interface implementation, and sent in the pipeline. Upstream elements can use the navigation event API functions to parse the contents of received events.

  • GstNavigation message handling API. GstNavigation messages may be sent on the message bus to inform applications of navigation related changes in the pipeline, such as the mouse moving over a clickable region, or the set of available angles changing.

The GstNavigation message functions provide functions for creating and parsing custom bus messages for signaling GstNavigation changes.

Navigation wraps an interface. This means the user can get the underlying type by calling Cast().

func (*Navigation) SendCommand

func (navigation *Navigation) SendCommand(command NavigationCommand)

SendCommand sends the indicated command to the navigation interface.

The function takes the following parameters:

  • command to issue.

func (*Navigation) SendEvent

func (navigation *Navigation) SendEvent(structure *gst.Structure)

The function takes the following parameters:

  • structure describing the navigation event.

func (*Navigation) SendKeyEvent

func (navigation *Navigation) SendKeyEvent(event, key string)

The function takes the following parameters:

  • event: type of the key event. Recognised values are "key-press" and "key-release".
  • key: character representation of the key. This is typically as produced by XKeysymToString.

func (*Navigation) SendMouseEvent

func (navigation *Navigation) SendMouseEvent(event string, button int, x, y float64)

SendMouseEvent sends a mouse event to the navigation interface. Mouse event coordinates are sent relative to the display space of the related output area. This is usually the size in pixels of the window associated with the element implementing the Navigation interface.

The function takes the following parameters:

  • event: type of mouse event, as a text string. Recognised values are "mouse-button-press", "mouse-button-release" and "mouse-move".
  • button number of the button being pressed or released. Pass 0 for mouse-move events.
  • x coordinate of the mouse event.
  • y coordinate of the mouse event.

func (*Navigation) SendMouseScrollEvent

func (navigation *Navigation) SendMouseScrollEvent(x, y, deltaX, deltaY float64)

SendMouseScrollEvent sends a mouse scroll event to the navigation interface. Mouse event coordinates are sent relative to the display space of the related output area. This is usually the size in pixels of the window associated with the element implementing the Navigation interface.

The function takes the following parameters:

  • x coordinate of the mouse event.
  • y coordinate of the mouse event.
  • deltaX: delta_x coordinate of the mouse event.
  • deltaY: delta_y coordinate of the mouse event.
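
An illustrative sketch combining the mouse event calls above (helper name is not part of this package):

	// clickAt synthesizes a left-button (button 1) click at display
	// coordinates (x, y).
	func clickAt(nav *gstvideo.Navigation, x, y float64) {
		nav.SendMouseEvent("mouse-button-press", 1, x, y)
		nav.SendMouseEvent("mouse-button-release", 1, x, y)
	}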

type NavigationCommand

type NavigationCommand C.gint

NavigationCommand: set of commands that may be issued to an element providing the Navigation interface. The available commands can be queried via the gst_navigation_query_new_commands() query.

For convenience in handling DVD navigation, the MENU commands are aliased as:

  • GST_NAVIGATION_COMMAND_DVD_MENU = GST_NAVIGATION_COMMAND_MENU1
  • GST_NAVIGATION_COMMAND_DVD_TITLE_MENU = GST_NAVIGATION_COMMAND_MENU2
  • GST_NAVIGATION_COMMAND_DVD_ROOT_MENU = GST_NAVIGATION_COMMAND_MENU3
  • GST_NAVIGATION_COMMAND_DVD_SUBPICTURE_MENU = GST_NAVIGATION_COMMAND_MENU4
  • GST_NAVIGATION_COMMAND_DVD_AUDIO_MENU = GST_NAVIGATION_COMMAND_MENU5
  • GST_NAVIGATION_COMMAND_DVD_ANGLE_MENU = GST_NAVIGATION_COMMAND_MENU6
  • GST_NAVIGATION_COMMAND_DVD_CHAPTER_MENU = GST_NAVIGATION_COMMAND_MENU7

const (
	// NavigationCommandInvalid: invalid command entry.
	NavigationCommandInvalid NavigationCommand = 0
	// NavigationCommandMenu1: execute navigation menu command 1. For DVD, this
	// enters the DVD root menu, or exits back to the title from the menu.
	NavigationCommandMenu1 NavigationCommand = 1
	// NavigationCommandMenu2: execute navigation menu command 2. For DVD, this
	// jumps to the DVD title menu.
	NavigationCommandMenu2 NavigationCommand = 2
	// NavigationCommandMenu3: execute navigation menu command 3. For DVD, this
	// jumps into the DVD root menu.
	NavigationCommandMenu3 NavigationCommand = 3
	// NavigationCommandMenu4: execute navigation menu command 4. For DVD, this
	// jumps to the Subpicture menu.
	NavigationCommandMenu4 NavigationCommand = 4
	// NavigationCommandMenu5: execute navigation menu command 5. For DVD, this
	// jumps to the audio menu.
	NavigationCommandMenu5 NavigationCommand = 5
	// NavigationCommandMenu6: execute navigation menu command 6. For DVD, this
	// jumps to the angles menu.
	NavigationCommandMenu6 NavigationCommand = 6
	// NavigationCommandMenu7: execute navigation menu command 7. For DVD, this
	// jumps to the chapter menu.
	NavigationCommandMenu7 NavigationCommand = 7
	// NavigationCommandLeft: select the next button to the left in a menu, if
	// such a button exists.
	NavigationCommandLeft NavigationCommand = 20
	// NavigationCommandRight: select the next button to the right in a menu, if
	// such a button exists.
	NavigationCommandRight NavigationCommand = 21
	// NavigationCommandUp: select the button above the current one in a menu,
	// if such a button exists.
	NavigationCommandUp NavigationCommand = 22
	// NavigationCommandDown: select the button below the current one in a menu,
	// if such a button exists.
	NavigationCommandDown NavigationCommand = 23
	// NavigationCommandActivate: activate (click) the currently selected button
	// in a menu, if such a button exists.
	NavigationCommandActivate NavigationCommand = 24
	// NavigationCommandPrevAngle: switch to the previous angle in a multiangle
	// feature.
	NavigationCommandPrevAngle NavigationCommand = 30
	// NavigationCommandNextAngle: switch to the next angle in a multiangle
	// feature.
	NavigationCommandNextAngle NavigationCommand = 31
)

func NavigationEventParseCommand

func NavigationEventParseCommand(event *gst.Event) (NavigationCommand, bool)

NavigationEventParseCommand: inspect a Navigation command event and retrieve the enum value of the associated command.

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • command (optional): pointer to GstNavigationCommand to receive the type of the navigation event.
  • ok: TRUE if the navigation command could be extracted, otherwise FALSE.

func NavigationQueryParseCommandsNth

func NavigationQueryParseCommandsNth(query *gst.Query, nth uint) (NavigationCommand, bool)

NavigationQueryParseCommandsNth: parse the Navigation command query and retrieve the nth command from it into cmd. If the list contains fewer elements than nth, cmd will be set to GST_NAVIGATION_COMMAND_INVALID.

The function takes the following parameters:

  • query: Query.
  • nth command to retrieve.

The function returns the following values:

  • cmd (optional): pointer to store the nth command into.
  • ok: TRUE if the query could be successfully parsed. FALSE if not.
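
A sketch of collecting every command advertised in a commands query (helper name is illustrative; it assumes iteration can stop at the first invalid command, per the description above):

	// availableCommands collects the commands advertised in a parsed
	// navigation commands query, stopping at the first invalid entry.
	func availableCommands(q *gst.Query) []gstvideo.NavigationCommand {
		var cmds []gstvideo.NavigationCommand
		for nth := uint(0); ; nth++ {
			cmd, ok := gstvideo.NavigationQueryParseCommandsNth(q, nth)
			if !ok || cmd == gstvideo.NavigationCommandInvalid {
				break
			}
			cmds = append(cmds, cmd)
		}
		return cmds
	}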

func (NavigationCommand) String

func (n NavigationCommand) String() string

String returns the name in string for NavigationCommand.

type NavigationEventType

type NavigationEventType C.gint

NavigationEventType: enum values for the various events that an element implementing the GstNavigation interface might send up the pipeline.

const (
	// NavigationEventInvalid: returned from gst_navigation_event_get_type()
	// when the passed event is not a navigation event.
	NavigationEventInvalid NavigationEventType = iota
	// NavigationEventKeyPress: key press event. Use
	// gst_navigation_event_parse_key_event() to extract the details from the
	// event.
	NavigationEventKeyPress
	// NavigationEventKeyRelease: key release event. Use
	// gst_navigation_event_parse_key_event() to extract the details from the
	// event.
	NavigationEventKeyRelease
	// NavigationEventMouseButtonPress: mouse button press event. Use
	// gst_navigation_event_parse_mouse_button_event() to extract the details
	// from the event.
	NavigationEventMouseButtonPress
	// NavigationEventMouseButtonRelease: mouse button release event. Use
	// gst_navigation_event_parse_mouse_button_event() to extract the details
	// from the event.
	NavigationEventMouseButtonRelease
	// NavigationEventMouseMove: mouse movement event. Use
	// gst_navigation_event_parse_mouse_move_event() to extract the details from
	// the event.
	NavigationEventMouseMove
	// NavigationEventCommand: navigation command event. Use
	// gst_navigation_event_parse_command() to extract the details from the
	// event.
	NavigationEventCommand
	// NavigationEventMouseScroll: mouse scroll event. Use
	// gst_navigation_event_parse_mouse_scroll_event() to extract the details
	// from the event.
	NavigationEventMouseScroll
)

func NavigationEventGetType

func NavigationEventGetType(event *gst.Event) NavigationEventType

NavigationEventGetType: inspect an Event and return the NavigationEventType of the event, or GST_NAVIGATION_EVENT_INVALID if the event is not a Navigation event.

The function takes the following parameters:

  • event to inspect.

The function returns the following values:

  • navigationEventType of the event, or GST_NAVIGATION_EVENT_INVALID if the event is not a Navigation event.

func (NavigationEventType) String

func (n NavigationEventType) String() string

String returns the name in string for NavigationEventType.
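
An illustrative sketch dispatching on the event type, for example from a pad probe (helper name is not part of this package):

	// describe classifies an event seen, for example, in a pad probe.
	func describe(event *gst.Event) string {
		switch gstvideo.NavigationEventGetType(event) {
		case gstvideo.NavigationEventKeyPress, gstvideo.NavigationEventKeyRelease:
			return "keyboard"
		case gstvideo.NavigationEventMouseButtonPress, gstvideo.NavigationEventMouseButtonRelease:
			return "mouse button"
		case gstvideo.NavigationEventMouseMove:
			return "mouse move"
		default:
			return "other or not a navigation event"
		}
	}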

type NavigationInterface

type NavigationInterface struct {
	// contains filtered or unexported fields
}

NavigationInterface: navigation interface.

An instance of this type is always passed by reference.

type NavigationMessageType

type NavigationMessageType C.gint

NavigationMessageType: set of notifications that may be received on the bus when navigation related status changes.

const (
	// NavigationMessageInvalid: returned from gst_navigation_message_get_type()
	// when the passed message is not a navigation message.
	NavigationMessageInvalid NavigationMessageType = iota
	// NavigationMessageMouseOver: sent when the mouse moves over or leaves a
	// clickable region of the output, such as a DVD menu button.
	NavigationMessageMouseOver
	// NavigationMessageCommandsChanged: sent when the set of available commands
	// changes and should be re-queried by interested applications.
	NavigationMessageCommandsChanged
	// NavigationMessageAnglesChanged: sent when display angles in a multi-angle
	// feature (such as a multiangle DVD) change - either angles have appeared
	// or disappeared.
	NavigationMessageAnglesChanged
	// NavigationMessageEvent: sent when a navigation event was not handled by
	// any element in the pipeline (Since: 1.6).
	NavigationMessageEvent
)

func NavigationMessageGetType

func NavigationMessageGetType(message *gst.Message) NavigationMessageType

NavigationMessageGetType: check a bus message to see if it is a Navigation event, and return the NavigationMessageType identifying the type of the message if so.

The function takes the following parameters:

  • message to inspect.

The function returns the following values:

  • navigationMessageType: type of the Message, or GST_NAVIGATION_MESSAGE_INVALID if the message is not a Navigation notification.

func (NavigationMessageType) String

func (n NavigationMessageType) String() string

String returns the name in string for NavigationMessageType.

type NavigationQueryType

type NavigationQueryType C.gint

NavigationQueryType types of navigation interface queries.

const (
	// NavigationQueryInvalid: invalid query.
	NavigationQueryInvalid NavigationQueryType = iota
	// NavigationQueryCommands: command query.
	NavigationQueryCommands
	// NavigationQueryAngles: viewing angle query.
	NavigationQueryAngles
)

func NavigationQueryGetType

func NavigationQueryGetType(query *gst.Query) NavigationQueryType

NavigationQueryGetType: inspect a Query and return the NavigationQueryType associated with it if it is a Navigation query.

The function takes the following parameters:

  • query to inspect.

The function returns the following values:

  • navigationQueryType of the query, or GST_NAVIGATION_QUERY_INVALID.

func (NavigationQueryType) String

func (n NavigationQueryType) String() string

String returns the name in string for NavigationQueryType.

type Navigationer

type Navigationer interface {
	coreglib.Objector

	// SendCommand sends the indicated command to the navigation interface.
	SendCommand(command NavigationCommand)
	SendEvent(structure *gst.Structure)
	SendKeyEvent(event, key string)
	// SendMouseEvent sends a mouse event to the navigation interface.
	SendMouseEvent(event string, button int, x, y float64)
	// SendMouseScrollEvent sends a mouse scroll event to the navigation
	// interface.
	SendMouseScrollEvent(x, y, deltaX, deltaY float64)
}

Navigationer describes Navigation's interface methods.

type VideoAFDMeta

type VideoAFDMeta struct {
	// contains filtered or unexported fields
}

VideoAFDMeta: active Format Description (AFD)

For details, see Table 6.14 Active Format in:

ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and Active Format Description in Complete list of AFD codes

https://en.wikipedia.org/wiki/Active_Format_Description#AFD_codes

and SMPTE ST2016-1

An instance of this type is always passed by reference.

func BufferAddVideoAfdMeta

func BufferAddVideoAfdMeta(buffer *gst.Buffer, field byte, spec VideoAFDSpec, afd VideoAFDValue) *VideoAFDMeta

BufferAddVideoAfdMeta attaches VideoAFDMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • field: 0 for progressive or field 1 and 1 for field 2.
  • spec that applies to AFD value.
  • afd AFD enumeration.

The function returns the following values:

  • videoAFDMeta on buffer.
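
An illustrative sketch (helper name and the chosen AFD code are examples only):

	// tagFullFrame marks a progressive buffer as full-frame content
	// (AFD code 8) under the ATSC A/53 spec.
	func tagFullFrame(buf *gst.Buffer) *gstvideo.VideoAFDMeta {
		return gstvideo.BufferAddVideoAfdMeta(buf, 0,
			gstvideo.VideoAfdSpecAtscA53, gstvideo.VideoAfd43_Full169_Full)
	}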

func (*VideoAFDMeta) Afd

func (v *VideoAFDMeta) Afd() VideoAFDValue

Afd AFD value.

func (*VideoAFDMeta) Field

func (v *VideoAFDMeta) Field() byte

Field: 0 for progressive or field 1 and 1 for field 2.

func (*VideoAFDMeta) Meta

func (v *VideoAFDMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoAFDMeta) SetField

func (v *VideoAFDMeta) SetField(field byte)

Field: 0 for progressive or field 1 and 1 for field 2.

func (*VideoAFDMeta) Spec

func (v *VideoAFDMeta) Spec() VideoAFDSpec

Spec that applies to afd.

type VideoAFDSpec

type VideoAFDSpec C.gint

VideoAFDSpec: enumeration of the different standards that may apply to AFD data:

0) ETSI/DVB: https://www.etsi.org/deliver/etsi_ts/101100_101199/101154/02.01.01_60/ts_101154v020101p.pdf

1) ATSC A/53: https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

2) SMPTE ST2016-1.

const (
	// VideoAfdSpecDvbEtsi: AFD value is from DVB/ETSI standard.
	VideoAfdSpecDvbEtsi VideoAFDSpec = iota
	// VideoAfdSpecAtscA53: AFD value is from ATSC A/53 standard.
	VideoAfdSpecAtscA53
	// VideoAfdSpecSmpteSt20161: AFD value is from SMPTE ST2016-1 standard.
	VideoAfdSpecSmpteSt20161
)

func (VideoAFDSpec) String

func (v VideoAFDSpec) String() string

String returns the name in string for VideoAFDSpec.

type VideoAFDValue

type VideoAFDValue C.gint

VideoAFDValue: enumeration of the various values for Active Format Description (AFD)

AFD should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame. AFD data may also be included in user data when the rectangular picture area containing useful information extends to the full height and width of the coded frame.

For details, see Table 6.14 Active Format in:

ATSC Digital Television Standard: Part 4 – MPEG-2 Video System Characteristics

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and Active Format Description in Complete list of AFD codes

https://en.wikipedia.org/wiki/Active_Format_Description#AFD_codes

and SMPTE ST2016-1

Notes:

1) AFD 0 is undefined for ATSC and SMPTE ST2016-1, indicating that AFD data is not available: if Bar Data is not present, AFD '0000' indicates that exact information is not available and the active image should be assumed to be the same as the coded frame. AFD '0000' accompanied by Bar Data signals that the active image's aspect ratio is narrower than 16:9, but is not 4:3 or 14:9. As the exact aspect ratio cannot be conveyed by AFD alone, wherever possible, AFD '0000' should be accompanied by Bar Data to define the exact vertical or horizontal extent of the active image.

2) AFD 0 is reserved for DVB/ETSI.

3) Values 1, 5, 6, 7, and 12 are reserved for both ATSC and DVB/ETSI.

4) Values 2 and 3 are not recommended for ATSC, but are valid for DVB/ETSI.

const (
	// VideoAfdUnavailable: unavailable (see note 1 below).
	VideoAfdUnavailable VideoAFDValue = 0
	// VideoAfd169_TopAligned: for 4:3 coded frame, letterbox 16:9 image, at top
	// of the coded frame. For 16:9 coded frame, full frame 16:9 image, the same
	// as the coded frame.
	VideoAfd169_TopAligned VideoAFDValue = 2
	// VideoAfd149_TopAligned: for 4:3 coded frame, letterbox 14:9 image, at top
	// of the coded frame. For 16:9 coded frame, pillarbox 14:9 image,
	// horizontally centered in the coded frame.
	VideoAfd149_TopAligned VideoAFDValue = 3
	// VideoAfdGreaterThan169: for 4:3 coded frame, letterbox image with an
	// aspect ratio greater than 16:9, vertically centered in the coded frame.
	// For 16:9 coded frame, letterbox image with an aspect ratio greater than
	// 16:9.
	VideoAfdGreaterThan169 VideoAFDValue = 4
	// VideoAfd43_Full169_Full: for 4:3 coded frame, full frame 4:3 image, the
	// same as the coded frame. For 16:9 coded frame, full frame 16:9 image, the
	// same as the coded frame.
	VideoAfd43_Full169_Full VideoAFDValue = 8
	// VideoAfd43_Full43_Pillar: for 4:3 coded frame, full frame 4:3 image, the
	// same as the coded frame. For 16:9 coded frame, pillarbox 4:3 image,
	// horizontally centered in the coded frame.
	VideoAfd43_Full43_Pillar VideoAFDValue = 9
	// VideoAfd169_Letter169_Full: for 4:3 coded frame, letterbox 16:9 image,
	// vertically centered in the coded frame with all image areas protected.
	// For 16:9 coded frame, full frame 16:9 image, with all image areas
	// protected.
	VideoAfd169_Letter169_Full VideoAFDValue = 10
	// VideoAfd149_Letter149_Pillar: for 4:3 coded frame, letterbox 14:9 image,
	// vertically centered in the coded frame. For 16:9 coded frame, pillarbox
	// 14:9 image, horizontally centered in the coded frame.
	VideoAfd149_Letter149_Pillar VideoAFDValue = 11
	// VideoAfd43_Full149_Center: for 4:3 coded frame, full frame 4:3 image,
	// with alternative 14:9 center. For 16:9 coded frame, pillarbox 4:3 image,
	// with alternative 14:9 center.
	VideoAfd43_Full149_Center VideoAFDValue = 13
	// VideoAfd169_Letter149_Center: for 4:3 coded frame, letterbox 16:9 image,
	// with alternative 14:9 center. For 16:9 coded frame, full frame 16:9
	// image, with alternative 14:9 center.
	VideoAfd169_Letter149_Center VideoAFDValue = 14
	// VideoAfd169_Letter43_Center: for 4:3 coded frame, letterbox 16:9 image,
	// with alternative 4:3 center. For 16:9 coded frame, full frame 16:9 image,
	// with alternative 4:3 center.
	VideoAfd169_Letter43_Center VideoAFDValue = 15
)

func (VideoAFDValue) String

func (v VideoAFDValue) String() string

String returns the name in string for VideoAFDValue.

type VideoAffineTransformationMeta

type VideoAffineTransformationMeta struct {
	// contains filtered or unexported fields
}

VideoAffineTransformationMeta: extra buffer metadata for performing an affine transformation using a 4x4 matrix. The transformation matrix can be composed with gst_video_affine_transformation_meta_apply_matrix().

The vertices operated on are all in the range 0 to 1, not in Normalized Device Coordinates (-1 to +1). Points transformed in this space are assumed to have an origin at (0.5, 0.5, 0.5) in a left-handed coordinate system with the x-axis moving horizontally (positive values to the right), the y-axis moving vertically (positive values up the screen) and the z-axis perpendicular to the screen (positive values into the screen).

An instance of this type is always passed by reference.

func BufferAddVideoAffineTransformationMeta

func BufferAddVideoAffineTransformationMeta(buffer *gst.Buffer) *VideoAffineTransformationMeta

BufferAddVideoAffineTransformationMeta attaches GstVideoAffineTransformationMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.

The function returns the following values:

  • videoAffineTransformationMeta on buffer.

func (*VideoAffineTransformationMeta) ApplyMatrix

func (meta *VideoAffineTransformationMeta) ApplyMatrix(matrix [16]float32)

ApplyMatrix: apply a transformation using the given 4x4 transformation matrix. Performs the multiplication meta->matrix × matrix.

The function takes the following parameters:

  • matrix: 4x4 transformation matrix to be applied.
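
A sketch composing a horizontal flip; it assumes, per the coordinate description above, that the (0.5, 0.5, 0.5) origin makes an extra translation unnecessary (names are illustrative):

	// flipX mirrors the x axis (column-major order, as the meta expects).
	var flipX = [16]float32{
		-1, 0, 0, 0,
		0, 1, 0, 0,
		0, 0, 1, 0,
		0, 0, 0, 1,
	}

	// flipHorizontally composes the flip into the buffer's transform.
	func flipHorizontally(meta *gstvideo.VideoAffineTransformationMeta) {
		meta.ApplyMatrix(flipX)
	}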

func (*VideoAffineTransformationMeta) Matrix

func (v *VideoAffineTransformationMeta) Matrix() [16]float32

Matrix: column-major 4x4 transformation matrix.

func (*VideoAffineTransformationMeta) Meta

func (v *VideoAffineTransformationMeta) Meta() *gst.Meta

Meta: parent Meta.

type VideoAggregator

type VideoAggregator struct {
	gstbase.Aggregator
	// contains filtered or unexported fields
}

VideoAggregator can accept AYUV, ARGB and BGRA video streams. For each of the requested sink pads it will compare the incoming geometry and framerate to define the output parameters. Output video frames will have the geometry of the biggest incoming video stream and the framerate of the fastest incoming one.

VideoAggregator will do colorspace conversion.

Zorder for each input stream can be configured on the VideoAggregatorPad.

func BaseVideoAggregator

func BaseVideoAggregator(obj VideoAggregatorrer) *VideoAggregator

BaseVideoAggregator returns the underlying base object.

func (*VideoAggregator) ExecutionTaskPool

func (vagg *VideoAggregator) ExecutionTaskPool() *gst.TaskPool

ExecutionTaskPool: returned TaskPool is used internally for performing parallel video format conversions/scaling/etc during the VideoAggregatorPadClass::prepare_frame_start() process. Subclasses can add their own operation to perform using the returned TaskPool during VideoAggregatorClass::aggregate_frames().

The function returns the following values:

  • taskPool that can be used by subclasses for performing concurrent operations.

type VideoAggregatorClass

type VideoAggregatorClass struct {
	// contains filtered or unexported fields
}

VideoAggregatorClass: instance of this type is always passed by reference.

type VideoAggregatorConvertPad

type VideoAggregatorConvertPad struct {
	VideoAggregatorPad
	// contains filtered or unexported fields
}

VideoAggregatorConvertPad: implementation of GstPad that can be used with VideoAggregator.

See VideoAggregator for more details.

func (*VideoAggregatorConvertPad) UpdateConversionInfo

func (pad *VideoAggregatorConvertPad) UpdateConversionInfo()

UpdateConversionInfo requests the pad to check and update its converter before the next usage, so that any changes that have happened are taken into account.

type VideoAggregatorConvertPadClass

type VideoAggregatorConvertPadClass struct {
	// contains filtered or unexported fields
}

VideoAggregatorConvertPadClass: instance of this type is always passed by reference.

func (*VideoAggregatorConvertPadClass) ParentClass

type VideoAggregatorConvertPadOverrides

type VideoAggregatorConvertPadOverrides struct {
	// The function takes the following parameters:
	//
	//    - agg
	//    - conversionInfo
	//
	CreateConversionInfo func(agg VideoAggregatorrer, conversionInfo *VideoInfo)
}

VideoAggregatorConvertPadOverrides contains methods that are overridable.

type VideoAggregatorOverrides

type VideoAggregatorOverrides struct {
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	AggregateFrames func(outbuffer *gst.Buffer) gst.FlowReturn
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	UpdateCaps func(caps *gst.Caps) *gst.Caps
}

VideoAggregatorOverrides contains methods that are overridable.

type VideoAggregatorPad

type VideoAggregatorPad struct {
	gstbase.AggregatorPad
	// contains filtered or unexported fields
}

func (*VideoAggregatorPad) CurrentBuffer

func (pad *VideoAggregatorPad) CurrentBuffer() *gst.Buffer

CurrentBuffer returns the currently queued buffer that is going to be used for the current output frame.

This must only be called from the VideoAggregatorClass::aggregate_frames virtual method, or from the VideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.

The return value is only valid until VideoAggregatorClass::aggregate_frames or VideoAggregatorPadClass::prepare_frame returns.

The function returns the following values:

  • buffer: currently queued buffer.

func (*VideoAggregatorPad) HasCurrentBuffer

func (pad *VideoAggregatorPad) HasCurrentBuffer() bool

HasCurrentBuffer checks if the pad currently has a buffer queued that is going to be used for the current output frame.

This must only be called from the VideoAggregatorClass::aggregate_frames virtual method, or from the VideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.

The function returns the following values:

  • ok: TRUE if the pad has currently a buffer queued.

func (*VideoAggregatorPad) PreparedFrame

func (pad *VideoAggregatorPad) PreparedFrame() *VideoFrame

PreparedFrame returns the currently prepared video frame that has to be aggregated into the current output frame.

This must only be called from the VideoAggregatorClass::aggregate_frames virtual method, or from the VideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.

The return value is only valid until VideoAggregatorClass::aggregate_frames or VideoAggregatorPadClass::prepare_frame returns.

The function returns the following values:

  • videoFrame: currently prepared video frame.

func (*VideoAggregatorPad) SetNeedsAlpha

func (pad *VideoAggregatorPad) SetNeedsAlpha(needsAlpha bool)

SetNeedsAlpha allows selecting that this pad requires an output format with alpha.

The function takes the following parameters:

  • needsAlpha: TRUE if this pad requires alpha output.

type VideoAggregatorPadClass

type VideoAggregatorPadClass struct {
	// contains filtered or unexported fields
}

VideoAggregatorPadClass: instance of this type is always passed by reference.

func (*VideoAggregatorPadClass) GstReserved

func (v *VideoAggregatorPadClass) GstReserved() [18]unsafe.Pointer

func (*VideoAggregatorPadClass) ParentClass

type VideoAggregatorPadOverrides

type VideoAggregatorPadOverrides struct {
	// The function takes the following parameters:
	//
	//    - videoaggregator
	//    - preparedFrame
	//
	CleanFrame func(videoaggregator VideoAggregatorrer, preparedFrame *VideoFrame)
	// The function takes the following parameters:
	//
	//    - videoaggregator
	//    - buffer
	//    - preparedFrame
	//
	// The function returns the following values:
	//
	PrepareFrame func(videoaggregator VideoAggregatorrer, buffer *gst.Buffer, preparedFrame *VideoFrame) bool
	// PrepareFrameFinish: finish preparing prepared_frame.
	//
	// If overridden, prepare_frame_start must also be overridden.
	//
	// The function takes the following parameters:
	//
	//    - videoaggregator: parent VideoAggregator.
	//    - preparedFrame to prepare into.
	//
	PrepareFrameFinish func(videoaggregator VideoAggregatorrer, preparedFrame *VideoFrame)
	// PrepareFrameStart: begins preparing the frame from the pad buffer and
	// sets it to prepared_frame.
	//
	// If overridden, prepare_frame_finish must also be overridden.
	//
	// The function takes the following parameters:
	//
	//    - videoaggregator: parent VideoAggregator.
	//    - buffer: input Buffer to prepare.
	//    - preparedFrame to prepare into.
	//
	PrepareFrameStart    func(videoaggregator VideoAggregatorrer, buffer *gst.Buffer, preparedFrame *VideoFrame)
	UpdateConversionInfo func()
}

VideoAggregatorPadOverrides contains methods that are overridable.

type VideoAggregatorParallelConvertPad

type VideoAggregatorParallelConvertPad struct {
	VideoAggregatorConvertPad
	// contains filtered or unexported fields
}

VideoAggregatorParallelConvertPad: implementation of GstPad that can be used with VideoAggregator.

See VideoAggregator for more details.

type VideoAggregatorParallelConvertPadClass

type VideoAggregatorParallelConvertPadClass struct {
	// contains filtered or unexported fields
}

VideoAggregatorParallelConvertPadClass: instance of this type is always passed by reference.

func (*VideoAggregatorParallelConvertPadClass) ParentClass

type VideoAggregatorParallelConvertPadOverrides

type VideoAggregatorParallelConvertPadOverrides struct {
}

VideoAggregatorParallelConvertPadOverrides contains methods that are overridable.

type VideoAggregatorrer

type VideoAggregatorrer interface {
	coreglib.Objector
	// contains filtered or unexported methods
}

VideoAggregatorrer describes types inherited from class VideoAggregator.

To get the original type, the caller must assert this to an interface or another type.

type VideoAlignment

type VideoAlignment struct {
	// contains filtered or unexported fields
}

VideoAlignment: extra alignment parameters for the memory of video buffers. This structure is usually used to configure the bufferpool if it supports the GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT option.

An instance of this type is always passed by reference.

func (*VideoAlignment) PaddingBottom

func (v *VideoAlignment) PaddingBottom() uint

PaddingBottom: extra pixels on the bottom.

func (*VideoAlignment) PaddingLeft

func (v *VideoAlignment) PaddingLeft() uint

PaddingLeft: extra pixels on the left side.

func (*VideoAlignment) PaddingRight

func (v *VideoAlignment) PaddingRight() uint

PaddingRight: extra pixels on the right side.

func (*VideoAlignment) PaddingTop

func (v *VideoAlignment) PaddingTop() uint

PaddingTop: extra pixels on the top.

func (*VideoAlignment) Reset

func (align *VideoAlignment) Reset()

Reset: set align to its default values with no padding and no alignment.

func (*VideoAlignment) SetPaddingBottom

func (v *VideoAlignment) SetPaddingBottom(paddingBottom uint)

PaddingBottom: extra pixels on the bottom.

func (*VideoAlignment) SetPaddingLeft

func (v *VideoAlignment) SetPaddingLeft(paddingLeft uint)

PaddingLeft: extra pixels on the left side.

func (*VideoAlignment) SetPaddingRight

func (v *VideoAlignment) SetPaddingRight(paddingRight uint)

PaddingRight: extra pixels on the right side.

func (*VideoAlignment) SetPaddingTop

func (v *VideoAlignment) SetPaddingTop(paddingTop uint)

PaddingTop: extra pixels on the top.

func (*VideoAlignment) StrideAlign

func (v *VideoAlignment) StrideAlign() [4]uint

StrideAlign: array with extra alignment requirements for the strides.
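
An illustrative sketch of configuring padding (helper name and the 16-pixel value are examples only):

	// padBottomRight resets align to its defaults and then requests 16
	// extra pixels of padding on the right and bottom edges.
	func padBottomRight(align *gstvideo.VideoAlignment) {
		align.Reset()
		align.SetPaddingRight(16)
		align.SetPaddingBottom(16)
	}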

type VideoAlphaMode

type VideoAlphaMode C.gint

VideoAlphaMode: different alpha modes.

const (
	// VideoAlphaModeCopy: when input and output have alpha, it will be copied.
	// When the input has no alpha, alpha will be set to
	// GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE.
	VideoAlphaModeCopy VideoAlphaMode = iota
	// VideoAlphaModeSet: set all alpha to GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE.
	VideoAlphaModeSet
	// VideoAlphaModeMult: multiply all alpha with
	// GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE. When the input format has no alpha
	// but the output format has, the alpha value will be set to
	// GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE.
	VideoAlphaModeMult
)

func (VideoAlphaMode) String

func (v VideoAlphaMode) String() string

String returns the name in string for VideoAlphaMode.

type VideoAncillary

type VideoAncillary struct {
	// contains filtered or unexported fields
}

VideoAncillary: video Ancillary data, according to SMPTE-291M specification.

Note that the contents of the data are always stored as 8-bit data (i.e. they do not contain the parity check bits).

An instance of this type is always passed by reference.

type VideoAncillaryDID

type VideoAncillaryDID C.gint

const (
	VideoAncillaryDidUndefined               VideoAncillaryDID = 0
	VideoAncillaryDidDeletion                VideoAncillaryDID = 128
	VideoAncillaryDidHanc3GAudioDataFirst    VideoAncillaryDID = 160
	VideoAncillaryDidHanc3GAudioDataLast     VideoAncillaryDID = 167
	VideoAncillaryDidHancHdtvAudioDataFirst  VideoAncillaryDID = 224
	VideoAncillaryDidHancHdtvAudioDataLast   VideoAncillaryDID = 231
	VideoAncillaryDidHancSdtvAudioData1First VideoAncillaryDID = 236
	VideoAncillaryDidHancSdtvAudioData1Last  VideoAncillaryDID = 239
	VideoAncillaryDidCameraPosition          VideoAncillaryDID = 240
	VideoAncillaryDidHancErrorDetection      VideoAncillaryDID = 244
	VideoAncillaryDidHancSdtvAudioData2First VideoAncillaryDID = 248
	VideoAncillaryDidHancSdtvAudioData2Last  VideoAncillaryDID = 255
)

func (VideoAncillaryDID) String

func (v VideoAncillaryDID) String() string

String returns the name in string for VideoAncillaryDID.

type VideoAncillaryDID16

type VideoAncillaryDID16 C.gint

VideoAncillaryDID16: some known types of Ancillary Data identifiers.

const (
	// VideoAncillaryDid16S334Eia708: CEA 708 Ancillary data according to SMPTE
	// 334.
	VideoAncillaryDid16S334Eia708 VideoAncillaryDID16 = 24833
	// VideoAncillaryDid16S334Eia608: CEA 608 Ancillary data according to SMPTE
	// 334.
	VideoAncillaryDid16S334Eia608 VideoAncillaryDID16 = 24834
	// VideoAncillaryDid16S20163AfdBar: AFD/Bar Ancillary data according to
	// SMPTE 2016-3 (Since: 1.18).
	VideoAncillaryDid16S20163AfdBar VideoAncillaryDID16 = 16645
)

func (VideoAncillaryDID16) String

func (v VideoAncillaryDID16) String() string

String returns the name in string for VideoAncillaryDID16.

type VideoBarMeta

type VideoBarMeta struct {
	// contains filtered or unexported fields
}

VideoBarMeta: bar data should be included in video user data whenever the rectangular picture area containing useful information does not extend to the full height or width of the coded frame and AFD alone is insufficient to describe the extent of the image.

Note: either vertical or horizontal bars are specified, but not both.

For more details, see:

https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf

and SMPTE ST2016-1

An instance of this type is always passed by reference.

func BufferAddVideoBarMeta

func BufferAddVideoBarMeta(buffer *gst.Buffer, field byte, isLetterbox bool, barData1, barData2 uint) *VideoBarMeta

BufferAddVideoBarMeta attaches VideoBarMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • field: 0 for progressive or field 1 and 1 for field 2.
  • isLetterbox: if true then bar data specifies letterbox, otherwise pillarbox.
  • barData1: if is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame.
  • barData2: if is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.

The function returns the following values:

  • videoBarMeta on buffer.

func (*VideoBarMeta) BarData1

func (v *VideoBarMeta) BarData1() uint

BarData1: if is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame.

func (*VideoBarMeta) BarData2

func (v *VideoBarMeta) BarData2() uint

BarData2: if is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.

func (*VideoBarMeta) Field

func (v *VideoBarMeta) Field() byte

Field: 0 for progressive or field 1 and 1 for field 2.

func (*VideoBarMeta) IsLetterbox

func (v *VideoBarMeta) IsLetterbox() bool

IsLetterbox: if true then bar data specifies letterbox, otherwise pillarbox.

func (*VideoBarMeta) Meta

func (v *VideoBarMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoBarMeta) SetBarData1

func (v *VideoBarMeta) SetBarData1(barData1 uint)

BarData1: if is_letterbox is true, then the value specifies the last line of a horizontal letterbox bar area at top of reconstructed frame. Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox bar area at the left side of the reconstructed frame.

func (*VideoBarMeta) SetBarData2

func (v *VideoBarMeta) SetBarData2(barData2 uint)

BarData2: if is_letterbox is true, then the value specifies the first line of a horizontal letterbox bar area at bottom of reconstructed frame. Otherwise, it specifies the first horizontal luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.

func (*VideoBarMeta) SetField

func (v *VideoBarMeta) SetField(field byte)

Field: 0 for progressive or field 1 and 1 for field 2.

func (*VideoBarMeta) SetIsLetterbox

func (v *VideoBarMeta) SetIsLetterbox(isLetterbox bool)

IsLetterbox: if true then bar data specifies letterbox, otherwise pillarbox.

type VideoBufferFlags

type VideoBufferFlags C.guint

VideoBufferFlags: additional video buffer flags. These flags can potentially be used on any buffers carrying closed caption data, or video data - even encoded data.

Note that these are only valid for Caps of type: video/... and caption/... They can conflict with other extended buffer flags.

const (
	// VideoBufferFlagInterlaced: if the Buffer is interlaced. In mixed
	// interlace-mode, this flag specifies whether the frame is interlaced or
	// progressive.
	VideoBufferFlagInterlaced VideoBufferFlags = 0b100000000000000000000
	// VideoBufferFlagTff: if the Buffer is interlaced, then the first field in
	// the video frame is the top field. If unset, the bottom field is first.
	VideoBufferFlagTff VideoBufferFlags = 0b1000000000000000000000
	// VideoBufferFlagRff: if the Buffer is interlaced, then the first field (as
	// defined by the GST_VIDEO_BUFFER_FLAG_TFF flag setting) is repeated.
	VideoBufferFlagRff VideoBufferFlags = 0b10000000000000000000000
	// VideoBufferFlagOnefield: if the Buffer is interlaced, then only the first
	// field (as defined by the GST_VIDEO_BUFFER_FLAG_TFF flag setting) is to be
	// displayed (Since: 1.16).
	VideoBufferFlagOnefield VideoBufferFlags = 0b100000000000000000000000
	// VideoBufferFlagMultipleView contains one or more specific views, such as
	// left or right eye view. This flag is set on any buffer that contains
	// non-mono content - even for streams that contain only a single viewpoint.
	// In mixed mono / non-mono streams, the absence of the flag marks mono
	// buffers.
	VideoBufferFlagMultipleView VideoBufferFlags = 0b1000000000000000000000000
	// VideoBufferFlagFirstInBundle: when conveying stereo/multiview content
	// with frame-by-frame methods, this flag marks the first buffer in a bundle
	// of frames that belong together.
	VideoBufferFlagFirstInBundle VideoBufferFlags = 0b10000000000000000000000000
	// VideoBufferFlagTopField: video frame has the top field only. This is the
	// same as GST_VIDEO_BUFFER_FLAG_TFF | GST_VIDEO_BUFFER_FLAG_ONEFIELD
	// (Since: 1.16). Use GST_VIDEO_BUFFER_IS_TOP_FIELD() to check for this
	// flag.
	VideoBufferFlagTopField VideoBufferFlags = 0b101000000000000000000000
	// VideoBufferFlagBottomField: video frame has the bottom field only. This
	// is the same as GST_VIDEO_BUFFER_FLAG_ONEFIELD (GST_VIDEO_BUFFER_FLAG_TFF
	// flag unset) (Since: 1.16). Use GST_VIDEO_BUFFER_IS_BOTTOM_FIELD() to
	// check for this flag.
	VideoBufferFlagBottomField VideoBufferFlags = 0b100000000000000000000000
	// VideoBufferFlagMarker contains the end of a video field or frame boundary
	// such as the last subframe or packet (Since: 1.18).
	VideoBufferFlagMarker VideoBufferFlags = 0b1000000000
	// VideoBufferFlagLast: offset to define more flags.
	VideoBufferFlagLast VideoBufferFlags = 0b10000000000000000000000000000
)

func (VideoBufferFlags) Has

func (v VideoBufferFlags) Has(other VideoBufferFlags) bool

Has returns true if v contains other.

func (VideoBufferFlags) String

func (v VideoBufferFlags) String() string

String returns the names in string for VideoBufferFlags.
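
An illustrative sketch (helper name is not part of this package; it assumes the buffer's flags have already been extracted as VideoBufferFlags):

	// topFieldFirst reports whether an interlaced frame carries its top
	// field first, following the TFF semantics described above.
	func topFieldFirst(flags gstvideo.VideoBufferFlags) bool {
		return flags.Has(gstvideo.VideoBufferFlagInterlaced) &&
			flags.Has(gstvideo.VideoBufferFlagTff)
	}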

type VideoBufferPool

type VideoBufferPool struct {
	gst.BufferPool
	// contains filtered or unexported fields
}

func NewVideoBufferPool

func NewVideoBufferPool() *VideoBufferPool

NewVideoBufferPool: create a new bufferpool that can allocate video frames. This bufferpool supports all the video bufferpool options.

The function returns the following values:

  • videoBufferPool: new BufferPool to allocate video frames.

type VideoBufferPoolClass

type VideoBufferPoolClass struct {
	// contains filtered or unexported fields
}

VideoBufferPoolClass: instance of this type is always passed by reference.

func (*VideoBufferPoolClass) ParentClass

func (v *VideoBufferPoolClass) ParentClass() *gst.BufferPoolClass

type VideoBufferPoolOverrides

type VideoBufferPoolOverrides struct {
}

VideoBufferPoolOverrides contains methods that are overridable.

type VideoCaptionMeta

type VideoCaptionMeta struct {
	// contains filtered or unexported fields
}

VideoCaptionMeta: extra buffer metadata providing Closed Caption.

An instance of this type is always passed by reference.

func BufferAddVideoCaptionMeta

func BufferAddVideoCaptionMeta(buffer *gst.Buffer, captionType VideoCaptionType, data []byte) *VideoCaptionMeta

BufferAddVideoCaptionMeta attaches VideoCaptionMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • captionType: type of Closed Caption to add.
  • data: closed Caption data.

The function returns the following values:

  • videoCaptionMeta on buffer.
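
An illustrative sketch (helper name is not part of this package; cdp is assumed to hold a complete, valid packet):

	// attachCDP attaches a complete Caption Distribution Packet
	// (starting with 0x9669) to a video buffer.
	func attachCDP(buf *gst.Buffer, cdp []byte) *gstvideo.VideoCaptionMeta {
		return gstvideo.BufferAddVideoCaptionMeta(buf,
			gstvideo.VideoCaptionTypeCea708Cdp, cdp)
	}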

type VideoCaptionType

type VideoCaptionType C.gint

VideoCaptionType various known types of Closed Caption (CC).

const (
	// VideoCaptionTypeUnknown: unknown type of CC.
	VideoCaptionTypeUnknown VideoCaptionType = iota
	// VideoCaptionTypeCea608Raw: CEA-608 as byte pairs. Note that this format
	// is not recommended since it does not specify which field the caption
	// comes from and therefore assumes it comes from the first field (and that
	// there is no information on the second field). Use
	// GST_VIDEO_CAPTION_TYPE_CEA708_RAW if you wish to store CEA-608 from two
	// fields and prefix each byte pair with 0xFC for the first field and 0xFD
	// for the second field.
	VideoCaptionTypeCea608Raw
	// VideoCaptionTypeCea608S3341A: CEA-608 as byte triplets as defined in
	// SMPTE S334-1 Annex A. The second and third byte of the byte triplet is
	// the raw CEA608 data, the first byte is a bitfield: The top/7th bit is 0
	// for the second field, 1 for the first field, bit 6 and 5 are 0 and bits 4
	// to 0 are a 5 bit unsigned integer that represents the line offset
	// relative to the base-line of the original image format (line 9 for
	// 525-line field 1, line 272 for 525-line field 2, line 5 for 625-line
	// field 1 and line 318 for 625-line field 2).
	VideoCaptionTypeCea608S3341A
	// VideoCaptionTypeCea708Raw: CEA-708 as cc_data byte triplets. They can
	// also contain 608-in-708 and the first byte of each triplet has to be
	// inspected for detecting the type.
	VideoCaptionTypeCea708Raw
	// VideoCaptionTypeCea708Cdp: CEA-708 (and optionally CEA-608) in a CDP
	// (Caption Distribution Packet) defined by SMPTE S-334-2. Contains the
	// whole CDP (starting with 0x9669).
	VideoCaptionTypeCea708Cdp
)

func VideoCaptionTypeFromCaps

func VideoCaptionTypeFromCaps(caps *gst.Caps) VideoCaptionType

VideoCaptionTypeFromCaps parses fixed Closed Caption Caps and returns the corresponding caption type, or GST_VIDEO_CAPTION_TYPE_UNKNOWN.

The function takes the following parameters:

  • caps: fixed Caps to parse.

The function returns the following values:

  • videoCaptionType: VideoCaptionType.

func (VideoCaptionType) String

func (v VideoCaptionType) String() string

String returns the name in string for VideoCaptionType.

type VideoChromaFlags

type VideoChromaFlags C.guint

VideoChromaFlags: extra flags that influence the result from gst_video_chroma_resample_new().

const (
	// VideoChromaFlagNone: no flags.
	VideoChromaFlagNone VideoChromaFlags = 0b0
	// VideoChromaFlagInterlaced: input is interlaced.
	VideoChromaFlagInterlaced VideoChromaFlags = 0b1
)

func (VideoChromaFlags) Has

func (v VideoChromaFlags) Has(other VideoChromaFlags) bool

Has returns true if v contains other.

func (VideoChromaFlags) String

func (v VideoChromaFlags) String() string

String returns the names in string for VideoChromaFlags.

type VideoChromaMethod

type VideoChromaMethod C.gint

VideoChromaMethod: different subsampling and upsampling methods.

const (
	// VideoChromaMethodNearest duplicates the chroma samples when upsampling
	// and drops when subsampling.
	VideoChromaMethodNearest VideoChromaMethod = iota
	// VideoChromaMethodLinear uses linear interpolation to reconstruct missing
	// chroma and averaging to subsample.
	VideoChromaMethodLinear
)

func (VideoChromaMethod) String

func (v VideoChromaMethod) String() string

String returns the name in string for VideoChromaMethod.

type VideoChromaMode

type VideoChromaMode C.gint

VideoChromaMode: different chroma downsampling and upsampling modes.

const (
	// VideoChromaModeFull: do full chroma up and down sampling.
	VideoChromaModeFull VideoChromaMode = iota
	// VideoChromaModeUpsampleOnly: only perform chroma upsampling.
	VideoChromaModeUpsampleOnly
	// VideoChromaModeDownsampleOnly: only perform chroma downsampling.
	VideoChromaModeDownsampleOnly
	// VideoChromaModeNone: disable chroma resampling.
	VideoChromaModeNone
)

func (VideoChromaMode) String

func (v VideoChromaMode) String() string

String returns the name in string for VideoChromaMode.

type VideoChromaSite

type VideoChromaSite C.guint

VideoChromaSite various Chroma sitings.

const (
	// VideoChromaSiteUnknown: unknown cositing.
	VideoChromaSiteUnknown VideoChromaSite = 0b0
	// VideoChromaSiteNone: no cositing.
	VideoChromaSiteNone VideoChromaSite = 0b1
	// VideoChromaSiteHCosited: chroma is horizontally cosited.
	VideoChromaSiteHCosited VideoChromaSite = 0b10
	// VideoChromaSiteVCosited: chroma is vertically cosited.
	VideoChromaSiteVCosited VideoChromaSite = 0b100
	// VideoChromaSiteAltLine: chroma samples are sited on alternate lines.
	VideoChromaSiteAltLine VideoChromaSite = 0b1000
	// VideoChromaSiteCosited: chroma samples cosited with luma samples.
	VideoChromaSiteCosited VideoChromaSite = 0b110
	// VideoChromaSiteJPEG: jpeg style cositing, also for mpeg1 and mjpeg.
	VideoChromaSiteJPEG VideoChromaSite = 0b1
	// VideoChromaSiteMpeg2: mpeg2 style cositing.
	VideoChromaSiteMpeg2 VideoChromaSite = 0b10
	// VideoChromaSiteDv: DV style cositing.
	VideoChromaSiteDv VideoChromaSite = 0b1110
)

func VideoChromaFromString deprecated

func VideoChromaFromString(s string) VideoChromaSite

VideoChromaFromString: convert s to a VideoChromaSite.

Deprecated: Use gst_video_chroma_site_from_string() instead.

The function takes the following parameters:

  • s: chromasite string.

The function returns the following values:

  • videoChromaSite or GST_VIDEO_CHROMA_SITE_UNKNOWN when s does not contain a valid chroma description.

func VideoChromaSiteFromString

func VideoChromaSiteFromString(s string) VideoChromaSite

VideoChromaSiteFromString: convert s to a VideoChromaSite.

The function takes the following parameters:

  • s: chromasite string.

The function returns the following values:

  • videoChromaSite or GST_VIDEO_CHROMA_SITE_UNKNOWN when s does not contain a valid chroma-site description.
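
An illustrative sketch wrapping the conversion with an explicit ok flag (helper name is not part of this package):

	// parseSite parses a chroma-site description, reporting failure
	// instead of returning the unknown sentinel.
	func parseSite(desc string) (gstvideo.VideoChromaSite, bool) {
		site := gstvideo.VideoChromaSiteFromString(desc)
		return site, site != gstvideo.VideoChromaSiteUnknown
	}

For example, parseSite("jpeg") yields VideoChromaSiteJPEG with ok set to true.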

func (VideoChromaSite) Has

func (v VideoChromaSite) Has(other VideoChromaSite) bool

Has returns true if v contains other.

func (VideoChromaSite) String

func (v VideoChromaSite) String() string

String returns the names in string for VideoChromaSite.

type VideoCodecAlphaMeta

type VideoCodecAlphaMeta struct {
	// contains filtered or unexported fields
}

VideoCodecAlphaMeta: this meta is primarily for internal use in GStreamer elements to support VP8/VP9 transparent video stored into WebM or Matroska containers, or transparent static AV1 images. Nothing prevents you from using this meta for custom purposes, but it generally can't easily be used to add support for alpha channels to CODECs or formats that don't support it out of the box.

An instance of this type is always passed by reference.

func BufferAddVideoCodecAlphaMeta

func BufferAddVideoCodecAlphaMeta(buffer, alphaBuffer *gst.Buffer) *VideoCodecAlphaMeta

BufferAddVideoCodecAlphaMeta attaches a VideoCodecAlphaMeta metadata to buffer with the given alpha buffer.

The function takes the following parameters:

  • buffer: Buffer.
  • alphaBuffer: Buffer.

The function returns the following values:

  • videoCodecAlphaMeta on buffer.

func (*VideoCodecAlphaMeta) Buffer

func (v *VideoCodecAlphaMeta) Buffer() *gst.Buffer

Buffer: encoded alpha frame.

func (*VideoCodecAlphaMeta) Meta

func (v *VideoCodecAlphaMeta) Meta() *gst.Meta

Meta: parent Meta.

type VideoCodecFrame

type VideoCodecFrame struct {
	// contains filtered or unexported fields
}

VideoCodecFrame represents a video frame both in raw and encoded form.

An instance of this type is always passed by reference.

func (*VideoCodecFrame) Deadline

func (v *VideoCodecFrame) Deadline() gst.ClockTime

Deadline: running time when the frame will be used.

func (*VideoCodecFrame) DistanceFromSync

func (v *VideoCodecFrame) DistanceFromSync() int

DistanceFromSync: distance in frames from the last synchronization point.

func (*VideoCodecFrame) Dts

func (v *VideoCodecFrame) Dts() gst.ClockTime

Dts: decoding timestamp.

func (*VideoCodecFrame) Duration

func (v *VideoCodecFrame) Duration() gst.ClockTime

Duration of the frame.

func (*VideoCodecFrame) InputBuffer

func (v *VideoCodecFrame) InputBuffer() *gst.Buffer

InputBuffer: input Buffer that created this frame. The buffer is owned by the frame and references to the frame instead of the buffer should be kept.

func (*VideoCodecFrame) OutputBuffer

func (v *VideoCodecFrame) OutputBuffer() *gst.Buffer

OutputBuffer: output Buffer. Implementations should set this either directly, or by using the gst_video_decoder_allocate_output_frame() or gst_video_decoder_allocate_output_buffer() methods. The buffer is owned by the frame and references to the frame instead of the buffer should be kept.

func (*VideoCodecFrame) Pts

func (v *VideoCodecFrame) Pts() gst.ClockTime

Pts: presentation timestamp.

func (*VideoCodecFrame) SetDistanceFromSync

func (v *VideoCodecFrame) SetDistanceFromSync(distanceFromSync int)

DistanceFromSync: distance in frames from the last synchronization point.

func (*VideoCodecFrame) SetSystemFrameNumber

func (v *VideoCodecFrame) SetSystemFrameNumber(systemFrameNumber uint32)

SystemFrameNumber: unique identifier for the frame. Use this if you need to get hold of the frame later (like when data is being decoded). Typical usage in decoders is to set this on the opaque value provided to the library and get back the frame using gst_video_decoder_get_frame().

func (*VideoCodecFrame) SystemFrameNumber

func (v *VideoCodecFrame) SystemFrameNumber() uint32

SystemFrameNumber: unique identifier for the frame. Use this if you need to get hold of the frame later (like when data is being decoded). Typical usage in decoders is to set this on the opaque value provided to the library and get back the frame using gst_video_decoder_get_frame().

func (*VideoCodecFrame) UserData

func (frame *VideoCodecFrame) UserData() unsafe.Pointer

UserData gets private data set on the frame by the subclass via gst_video_codec_frame_set_user_data() previously.

The function returns the following values:

  • gpointer (optional): previously set user_data.
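Taken together, the accessors above let a decoder subclass log a frame's timing metadata; a sketch, assuming the same hypothetical imports as the earlier examples:

// logFrameTiming prints the timing fields of a frame as received in a
// handle_frame callback.
func logFrameTiming(frame *gstvideo.VideoCodecFrame) {
	fmt.Printf("frame #%d pts=%v dts=%v duration=%v sync-distance=%d\n",
		frame.SystemFrameNumber(),
		frame.Pts(), frame.Dts(), frame.Duration(),
		frame.DistanceFromSync())
}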

type VideoCodecFrameFlags

type VideoCodecFrameFlags C.guint

VideoCodecFrameFlags flags for VideoCodecFrame.

const (
	// VideoCodecFrameFlagDecodeOnly is the frame only meant to be decoded.
	VideoCodecFrameFlagDecodeOnly VideoCodecFrameFlags = 0b1
	// VideoCodecFrameFlagSyncPoint is the frame a synchronization point
	// (keyframe).
	VideoCodecFrameFlagSyncPoint VideoCodecFrameFlags = 0b10
	// VideoCodecFrameFlagForceKeyframe: should the output frame be made a
	// keyframe.
	VideoCodecFrameFlagForceKeyframe VideoCodecFrameFlags = 0b100
	// VideoCodecFrameFlagForceKeyframeHeaders: should the encoder output stream
	// headers.
	VideoCodecFrameFlagForceKeyframeHeaders VideoCodecFrameFlags = 0b1000
	// VideoCodecFrameFlagCorrupted: buffer data is corrupted.
	VideoCodecFrameFlagCorrupted VideoCodecFrameFlags = 0b10000
)

func (VideoCodecFrameFlags) Has

func (v VideoCodecFrameFlags) Has(other VideoCodecFrameFlags) bool

Has returns true if v contains other.

func (VideoCodecFrameFlags) String

func (v VideoCodecFrameFlags) String() string

String returns the names in string for VideoCodecFrameFlags.
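Since these are bit flags, Has is the idiomatic way to test them; a small sketch (hypothetical imports as before):

// isKeyframe reports whether the given flags mark a synchronization
// point (keyframe).
func isKeyframe(flags gstvideo.VideoCodecFrameFlags) bool {
	return flags.Has(gstvideo.VideoCodecFrameFlagSyncPoint)
}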

type VideoCodecState

type VideoCodecState struct {
	// contains filtered or unexported fields
}

VideoCodecState: structure representing the state of an incoming or outgoing video stream for encoders and decoders.

Decoders and encoders will receive such a state through their respective set_format vmethods.

Decoders and encoders can set the downstream state, by using the gst_video_decoder_set_output_state() or gst_video_encoder_set_output_state() methods.

An instance of this type is always passed by reference.

func (*VideoCodecState) AllocationCaps

func (v *VideoCodecState) AllocationCaps() *gst.Caps

AllocationCaps for allocation query and pool negotiation. Since: 1.10.

func (*VideoCodecState) Caps

func (v *VideoCodecState) Caps() *gst.Caps

Caps used in the caps negotiation of the pad.

func (*VideoCodecState) CodecData

func (v *VideoCodecState) CodecData() *gst.Buffer

CodecData corresponding to the 'codec_data' field of a stream, or NULL.

func (*VideoCodecState) ContentLightLevel

func (v *VideoCodecState) ContentLightLevel() *VideoContentLightLevel

ContentLightLevel: content light level information for the stream.

func (*VideoCodecState) Info

func (v *VideoCodecState) Info() *VideoInfo

Info describing the stream.

func (*VideoCodecState) MasteringDisplayInfo

func (v *VideoCodecState) MasteringDisplayInfo() *VideoMasteringDisplayInfo

MasteringDisplayInfo: mastering display color volume information (HDR metadata) for the stream.

type VideoColorMatrix

type VideoColorMatrix C.gint

VideoColorMatrix: color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B').

const (
	// VideoColorMatrixUnknown: unknown matrix.
	VideoColorMatrixUnknown VideoColorMatrix = iota
	// VideoColorMatrixRGB: identity matrix. Order of coefficients is actually
	// GBR, also IEC 61966-2-1 (sRGB).
	VideoColorMatrixRGB
	// VideoColorMatrixFcc: FCC Title 47 Code of Federal Regulations 73.682
	// (a)(20).
	VideoColorMatrixFcc
	// VideoColorMatrixBt709: ITU-R BT.709 color matrix, also ITU-R BT1361 / IEC
	// 61966-2-4 xvYCC709 / SMPTE RP177 Annex B.
	VideoColorMatrixBt709
	// VideoColorMatrixBt601: ITU-R BT.601 color matrix, also SMPTE170M / ITU-R
	// BT1358 525 / ITU-R BT1700 NTSC.
	VideoColorMatrixBt601
	// VideoColorMatrixSmpte240M: SMPTE 240M color matrix.
	VideoColorMatrixSmpte240M
	// VideoColorMatrixBt2020: ITU-R BT.2020 color matrix. Since: 1.6.
	VideoColorMatrixBt2020
)

func VideoColorMatrixFromISO

func VideoColorMatrixFromISO(value uint) VideoColorMatrix

VideoColorMatrixFromISO converts the value to the VideoColorMatrix. The matrix coefficients (MatrixCoefficients) value is defined by "ISO/IEC 23001-8 Section 7.3 Table 4" and "ITU-T H.273 Table 4". "H.264 Table E-5" and "H.265 Table E.5" define the same values.

The function takes the following parameters:

  • value: ITU-T H.273 matrix coefficients value.

The function returns the following values:

  • videoColorMatrix: matched VideoColorMatrix.
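For example, ITU-T H.273 assigns MatrixCoefficients value 1 to BT.709, so the following sketch (hypothetical imports) should yield the BT.709 matrix:

// matrixFromH273 maps an H.273 MatrixCoefficients code to the
// corresponding VideoColorMatrix.
func matrixFromH273(code uint) gstvideo.VideoColorMatrix {
	m := gstvideo.VideoColorMatrixFromISO(code)
	fmt.Println("matrix:", m.String()) // code 1 is expected to map to BT.709
	return m
}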

func (VideoColorMatrix) String

func (v VideoColorMatrix) String() string

String returns the name in string for VideoColorMatrix.

type VideoColorPrimaries

type VideoColorPrimaries C.gint

VideoColorPrimaries: color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.

const (
	// VideoColorPrimariesUnknown: unknown color primaries.
	VideoColorPrimariesUnknown VideoColorPrimaries = iota
	// VideoColorPrimariesBt709: BT709 primaries, also ITU-R BT1361 / IEC
	// 61966-2-4 / SMPTE RP177 Annex B.
	VideoColorPrimariesBt709
	// VideoColorPrimariesBt470M: BT470M primaries, also FCC Title 47 Code of
	// Federal Regulations 73.682 (a)(20).
	VideoColorPrimariesBt470M
	// VideoColorPrimariesBt470Bg: BT470BG primaries, also ITU-R BT601-6 625 /
	// ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM.
	VideoColorPrimariesBt470Bg
	// VideoColorPrimariesSmpte170M: SMPTE170M primaries, also ITU-R BT601-6 525
	// / ITU-R BT1358 525 / ITU-R BT1700 NTSC.
	VideoColorPrimariesSmpte170M
	// VideoColorPrimariesSmpte240M: SMPTE240M primaries.
	VideoColorPrimariesSmpte240M
	// VideoColorPrimariesFilm: generic film (colour filters using Illuminant
	// C).
	VideoColorPrimariesFilm
	// VideoColorPrimariesBt2020: ITU-R BT2020 primaries. Since: 1.6.
	VideoColorPrimariesBt2020
	// VideoColorPrimariesAdobergb: adobe RGB primaries. Since: 1.8.
	VideoColorPrimariesAdobergb
	// VideoColorPrimariesSmptest428: SMPTE ST 428 primaries (CIE 1931 XYZ).
	// Since: 1.16.
	VideoColorPrimariesSmptest428
	// VideoColorPrimariesSmpterp431: SMPTE RP 431 primaries (ST 431-2 (2011) /
	// DCI P3). Since: 1.16.
	VideoColorPrimariesSmpterp431
	// VideoColorPrimariesSmpteeg432: SMPTE EG 432 primaries (ST 432-1 (2010) /
	// P3 D65). Since: 1.16.
	VideoColorPrimariesSmpteeg432
	// VideoColorPrimariesEbu3213: EBU 3213 primaries (JEDEC P22 phosphors).
	// Since: 1.16.
	VideoColorPrimariesEbu3213
)

func VideoColorPrimariesFromISO

func VideoColorPrimariesFromISO(value uint) VideoColorPrimaries

VideoColorPrimariesFromISO converts the value to the VideoColorPrimaries. The colour primaries (ColourPrimaries) value is defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2". "H.264 Table E-3" and "H.265 Table E.3" define the same values.

The function takes the following parameters:

  • value: ITU-T H.273 colour primaries value.

The function returns the following values:

  • videoColorPrimaries: matched VideoColorPrimaries.

func (VideoColorPrimaries) String

func (v VideoColorPrimaries) String() string

String returns the name in string for VideoColorPrimaries.

type VideoColorPrimariesInfo

type VideoColorPrimariesInfo struct {
	// contains filtered or unexported fields
}

VideoColorPrimariesInfo: structure describing the chromaticity coordinates of an RGB system. These values can be used to construct a matrix to transform RGB to and from the XYZ colorspace.

An instance of this type is always passed by reference.

func VideoColorPrimariesGetInfo

func VideoColorPrimariesGetInfo(primaries VideoColorPrimaries) *VideoColorPrimariesInfo

VideoColorPrimariesGetInfo: get information about the chromaticity coordinates of primaries.

The function takes the following parameters:

  • primaries: VideoColorPrimaries.

The function returns the following values:

  • videoColorPrimariesInfo for primaries.
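A brief sketch using the info to inspect a system's reference white point (hypothetical imports; the float64 getter signatures are assumed from the corresponding setters):

// printWhitePoint prints the CIE xy chromaticity of the reference
// white for the given primaries.
func printWhitePoint(p gstvideo.VideoColorPrimaries) {
	info := gstvideo.VideoColorPrimariesGetInfo(p)
	fmt.Printf("white point: (%.4f, %.4f)\n", info.Wx(), info.Wy())
}

For BT.709 primaries this is expected to print the D65 white point, approximately (0.3127, 0.3290).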

func (*VideoColorPrimariesInfo) Bx

func (v *VideoColorPrimariesInfo) Bx() float64

Bx: blue x coordinate.

func (*VideoColorPrimariesInfo) By

func (v *VideoColorPrimariesInfo) By() float64

By: blue y coordinate.

func (*VideoColorPrimariesInfo) Gx

func (v *VideoColorPrimariesInfo) Gx() float64

Gx: green x coordinate.

func (*VideoColorPrimariesInfo) Gy

func (v *VideoColorPrimariesInfo) Gy() float64

Gy: green y coordinate.

func (*VideoColorPrimariesInfo) Primaries

func (v *VideoColorPrimariesInfo) Primaries() VideoColorPrimaries

Primaries: VideoColorPrimaries.

func (*VideoColorPrimariesInfo) Rx

func (v *VideoColorPrimariesInfo) Rx() float64

Rx: red x coordinate.

func (*VideoColorPrimariesInfo) Ry

func (v *VideoColorPrimariesInfo) Ry() float64

Ry: red y coordinate.

func (*VideoColorPrimariesInfo) SetBx

func (v *VideoColorPrimariesInfo) SetBx(Bx float64)

Bx: blue x coordinate.

func (*VideoColorPrimariesInfo) SetBy

func (v *VideoColorPrimariesInfo) SetBy(By float64)

By: blue y coordinate.

func (*VideoColorPrimariesInfo) SetGx

func (v *VideoColorPrimariesInfo) SetGx(Gx float64)

Gx: green x coordinate.

func (*VideoColorPrimariesInfo) SetGy

func (v *VideoColorPrimariesInfo) SetGy(Gy float64)

Gy: green y coordinate.

func (*VideoColorPrimariesInfo) SetRx

func (v *VideoColorPrimariesInfo) SetRx(Rx float64)

Rx: red x coordinate.

func (*VideoColorPrimariesInfo) SetRy

func (v *VideoColorPrimariesInfo) SetRy(Ry float64)

Ry: red y coordinate.

func (*VideoColorPrimariesInfo) SetWx

func (v *VideoColorPrimariesInfo) SetWx(Wx float64)

Wx: reference white x coordinate.

func (*VideoColorPrimariesInfo) SetWy

func (v *VideoColorPrimariesInfo) SetWy(Wy float64)

Wy: reference white y coordinate.

func (*VideoColorPrimariesInfo) Wx

func (v *VideoColorPrimariesInfo) Wx() float64

Wx: reference white x coordinate.

func (*VideoColorPrimariesInfo) Wy

func (v *VideoColorPrimariesInfo) Wy() float64

Wy: reference white y coordinate.

type VideoColorRange

type VideoColorRange C.gint

VideoColorRange: possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.

const (
	// VideoColorRangeUnknown: unknown range.
	VideoColorRangeUnknown VideoColorRange = iota
	// VideoColorRange0255: [0..255] for 8 bit components.
	VideoColorRange0255
	// VideoColorRange16235: [16..235] for 8 bit components. Chroma has
	// [16..240] range.
	VideoColorRange16235
)

func (VideoColorRange) String

func (v VideoColorRange) String() string

String returns the name in string for VideoColorRange.

type VideoColorimetry

type VideoColorimetry struct {
	// contains filtered or unexported fields
}

VideoColorimetry: structure describing the color info.

An instance of this type is always passed by reference.

func (*VideoColorimetry) FromString

func (cinfo *VideoColorimetry) FromString(color string) bool

FromString: parse the colorimetry string and update cinfo with the parsed values.

The function takes the following parameters:

  • color: colorimetry string.

The function returns the following values:

  • ok: TRUE if color points to valid colorimetry info.

func (*VideoColorimetry) IsEqual

func (cinfo *VideoColorimetry) IsEqual(other *VideoColorimetry) bool

IsEqual: compare the 2 colorimetry sets for equality.

The function takes the following parameters:

  • other: another VideoColorimetry.

The function returns the following values:

  • ok: TRUE if cinfo and other are equal.

func (*VideoColorimetry) Matches

func (cinfo *VideoColorimetry) Matches(color string) bool

Matches: check if the colorimetry information in cinfo matches that of the string color.

The function takes the following parameters:

  • color: colorimetry string.

The function returns the following values:

  • ok: TRUE if color conveys the same colorimetry info as the color information in cinfo.

func (*VideoColorimetry) Matrix

func (v *VideoColorimetry) Matrix() VideoColorMatrix

Matrix: color matrix. Used to convert between Y'PbPr and non-linear RGB (R'G'B').

func (*VideoColorimetry) Primaries

func (v *VideoColorimetry) Primaries() VideoColorPrimaries

Primaries: color primaries. Used to convert between R'G'B' and CIE XYZ.

func (*VideoColorimetry) Range

func (v *VideoColorimetry) Range() VideoColorRange

Range: color range. This is the valid range for the samples. It is used to convert the samples to Y'PbPr values.

func (*VideoColorimetry) String

func (cinfo *VideoColorimetry) String() string

String: make a string representation of cinfo.

The function returns the following values:

  • utf8 (optional): string representation of cinfo or NULL if all the entries of cinfo are unknown values.

func (*VideoColorimetry) Transfer

func (v *VideoColorimetry) Transfer() VideoTransferFunction

Transfer: transfer function. Used to convert between R'G'B' and RGB.
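A hedged sketch tying these together: checking whether a negotiated colorimetry matches BT.709 using the package's colorimetry string constant (hypothetical imports; cinfo must wrap a valid struct, e.g. one taken from a negotiated VideoInfo):

// isBT709 reports whether cinfo conveys BT.709 colorimetry.
func isBT709(cinfo *gstvideo.VideoColorimetry) bool {
	return cinfo.Matches(gstvideo.VIDEO_COLORIMETRY_BT709)
}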

type VideoContentLightLevel

type VideoContentLightLevel struct {
	// contains filtered or unexported fields
}

VideoContentLightLevel: content light level information specified in CEA-861.3, Appendix A.

An instance of this type is always passed by reference.

func (*VideoContentLightLevel) AddToCaps

func (linfo *VideoContentLightLevel) AddToCaps(caps *gst.Caps) bool

AddToCaps: add the content light level information from linfo to caps.

The function takes the following parameters:

  • caps: Caps.

The function returns the following values:

  • ok: TRUE if linfo was successfully set on caps.

func (*VideoContentLightLevel) FromCaps

func (linfo *VideoContentLightLevel) FromCaps(caps *gst.Caps) bool

FromCaps: parse caps and update linfo.

The function takes the following parameters:

  • caps: Caps.

The function returns the following values:

  • ok: if caps has VideoContentLightLevel and could be parsed.

func (*VideoContentLightLevel) FromString

func (linfo *VideoContentLightLevel) FromString(level string) bool

FromString: parse the value of the content-light-level caps field and update linfo with the parsed values.

The function takes the following parameters:

  • level string from caps.

The function returns the following values:

  • ok: TRUE if linfo points to valid VideoContentLightLevel.

func (*VideoContentLightLevel) Init

func (linfo *VideoContentLightLevel) Init()

Init: initialize linfo.

func (*VideoContentLightLevel) IsEqual

func (linfo *VideoContentLightLevel) IsEqual(other *VideoContentLightLevel) bool

IsEqual checks equality between linfo and other.

The function takes the following parameters:

  • other: VideoContentLightLevel.

The function returns the following values:

  • ok: TRUE if linfo and other are equal.

func (*VideoContentLightLevel) MaxContentLightLevel

func (v *VideoContentLightLevel) MaxContentLightLevel() uint16

MaxContentLightLevel: maximum content light level (abbreviated to MaxCLL) in candelas per square meter (cd/m^2, i.e. nits).

func (*VideoContentLightLevel) MaxFrameAverageLightLevel

func (v *VideoContentLightLevel) MaxFrameAverageLightLevel() uint16

MaxFrameAverageLightLevel: maximum frame average light level (abbreviated to MaxFLL) in candelas per square meter (cd/m^2, i.e. nits).

func (*VideoContentLightLevel) SetMaxContentLightLevel

func (v *VideoContentLightLevel) SetMaxContentLightLevel(maxContentLightLevel uint16)

MaxContentLightLevel: maximum content light level (abbreviated to MaxCLL) in candelas per square meter (cd/m^2, i.e. nits).

func (*VideoContentLightLevel) SetMaxFrameAverageLightLevel

func (v *VideoContentLightLevel) SetMaxFrameAverageLightLevel(maxFrameAverageLightLevel uint16)

MaxFrameAverageLightLevel: maximum frame average light level (abbreviated to MaxFLL) in candelas per square meter (cd/m^2, i.e. nits).

func (*VideoContentLightLevel) String

func (linfo *VideoContentLightLevel) String() string

String: convert linfo to its string representation.

The function returns the following values:

  • utf8: string representation of linfo.
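A sketch of parsing and printing a content-light-level value; the "MaxCLL:MaxFLL" string form (e.g. "1000:400") is assumed from the caps field convention, and imports are hypothetical:

// printCLL parses a content-light-level caps field value and prints
// the decoded levels. linfo must wrap a valid C struct.
func printCLL(linfo *gstvideo.VideoContentLightLevel, s string) {
	linfo.Init()
	if !linfo.FromString(s) {
		fmt.Println("invalid content-light-level:", s)
		return
	}
	fmt.Printf("MaxCLL=%d cd/m^2 MaxFLL=%d cd/m^2\n",
		linfo.MaxContentLightLevel(), linfo.MaxFrameAverageLightLevel())
}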

type VideoConvertSampleCallback

type VideoConvertSampleCallback func(sample *gst.Sample, err error)

type VideoCropMeta

type VideoCropMeta struct {
	// contains filtered or unexported fields
}

VideoCropMeta: extra buffer metadata describing image cropping.

An instance of this type is always passed by reference.

func (*VideoCropMeta) Height

func (v *VideoCropMeta) Height() uint

Height: cropped height.

func (*VideoCropMeta) Meta

func (v *VideoCropMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoCropMeta) SetHeight

func (v *VideoCropMeta) SetHeight(height uint)

Height: cropped height.

func (*VideoCropMeta) SetWidth

func (v *VideoCropMeta) SetWidth(width uint)

Width: cropped width.

func (*VideoCropMeta) SetX

func (v *VideoCropMeta) SetX(x uint)

X: horizontal offset.

func (*VideoCropMeta) SetY

func (v *VideoCropMeta) SetY(y uint)

Y: vertical offset.

func (*VideoCropMeta) Width

func (v *VideoCropMeta) Width() uint

Width: cropped width.

func (*VideoCropMeta) X

func (v *VideoCropMeta) X() uint

X: horizontal offset.

func (*VideoCropMeta) Y

func (v *VideoCropMeta) Y() uint

Y: vertical offset.
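A minimal sketch of recording a crop rectangle on an existing meta; obtaining the meta from a buffer is element-specific and omitted here (hypothetical imports):

// applyCrop records an (x, y, width, height) crop rectangle on a
// VideoCropMeta already attached to a buffer.
func applyCrop(m *gstvideo.VideoCropMeta, x, y, width, height uint) {
	m.SetX(x)
	m.SetY(y)
	m.SetWidth(width)
	m.SetHeight(height)
}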

type VideoDecoder

type VideoDecoder struct {
	gst.Element
	// contains filtered or unexported fields
}

VideoDecoder: this base class is for video decoders turning encoded data into raw video frames.

The GstVideoDecoder base class and derived subclasses should cooperate as follows:

Configuration

  • Initially, GstVideoDecoder calls start when the decoder element is activated, which allows the subclass to perform any global setup.

  • GstVideoDecoder calls set_format to inform the subclass of caps describing input video data that it is about to receive, including possibly configuration data. While unlikely, it might be called more than once, if changing input parameters require reconfiguration.

  • Incoming data buffers are processed as needed, described in Data Processing below.

  • GstVideoDecoder calls stop at end of all processing.

Data processing

  • The base class gathers input data, and optionally allows subclass to parse this into subsequently manageable chunks, typically corresponding to and referred to as 'frames'.

  • Each input frame is provided in turn to the subclass' handle_frame callback.

  • When the subclass enables the subframe mode with gst_video_decoder_set_subframe_mode, the base class will provide to the subclass the same input frame with different input buffers to the subclass handle_frame callback. During this call, the subclass needs to take ownership of the input_buffer as GstVideoCodecFrame.input_buffer will have been changed before the next subframe buffer is received. The subclass will call gst_video_decoder_have_last_subframe when a new input frame can be created by the base class. Every subframe will share the same GstVideoCodecFrame.output_buffer to write the decoding result. The subclass is responsible to protect its access.

  • If codec processing results in decoded data, the subclass should call gst_video_decoder_finish_frame to have decoded data pushed downstream. In subframe mode the subclass should call gst_video_decoder_finish_subframe until the last subframe where it should call gst_video_decoder_finish_frame. The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER on buffers or using its own logic to collect the subframes. In case of decoding failure, the subclass must call gst_video_decoder_drop_frame or gst_video_decoder_drop_subframe, to allow the base class to do timestamp and offset tracking, and possibly to requeue the frame for a later attempt in the case of reverse playback.

Shutdown phase

  • The GstVideoDecoder class calls stop to inform the subclass that data parsing will be stopped.

Additional Notes

  • Seeking/Flushing

  • When the pipeline is seeked or otherwise flushed, the subclass is informed via a call to its reset callback, with the hard parameter set to true. This indicates the subclass should drop any internal data queues and timestamps and prepare for a fresh set of buffers to arrive for parsing and decoding.

  • End Of Stream

  • At end-of-stream, the subclass parse function may be called some final times with the at_eos parameter set to true, indicating that the element should not expect any more data to arrive; it should parse any remaining frames and call gst_video_decoder_have_frame() if possible.

The subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It also needs to provide information about the output caps when they are known. This may be when the base class calls the subclass' set_format function, though it might be during decoding, before calling gst_video_decoder_finish_frame. This is done via gst_video_decoder_set_output_state.

The subclass is also responsible for providing (presentation) timestamps (likely based on corresponding input ones). If that is not applicable or possible, the base class provides limited framerate based interpolation.

Similarly, the base class provides some limited (legacy) seeking support if specifically requested by the subclass, as full-fledged support should rather be left to upstream demuxer, parser or alike. This simple approach caters for seeking and duration reporting using estimated input bitrates. To enable it, a subclass should call gst_video_decoder_set_estimate_rate to enable handling of incoming byte-streams.

The base class provides some support for reverse playback, in particular in case incoming data is not packetized or upstream does not provide fragments on keyframe boundaries. However, the subclass should then be prepared for the parsing and frame processing stage to occur separately (in normal forward processing, the latter immediately follows the former). The subclass also needs to ensure the parsing stage properly marks keyframes, unless it knows the upstream elements will do so properly for incoming data.

The bare minimum that a functional subclass needs to implement is:

  • Provide pad templates

  • Inform the base class of output caps via gst_video_decoder_set_output_state

  • Parse input data, if it is not considered packetized from upstream. Data will be provided to parse, which should invoke gst_video_decoder_add_to_frame and gst_video_decoder_have_frame to separate the data belonging to each video frame.

  • Accept data in handle_frame and provide decoded results to gst_video_decoder_finish_frame, or call gst_video_decoder_drop_frame (a minimal sketch of this flow follows).
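A hedged sketch of that flow for a packetized decoder, assuming hypothetical imports and that gst.FlowOK mirrors GST_FLOW_OK:

// decodeOne outlines a handle_frame implementation: allocate an output
// buffer for the configured output state, decode into it, and hand the
// frame back to the base class; on failure, drop the frame instead.
func decodeOne(dec *gstvideo.VideoDecoder, frame *gstvideo.VideoCodecFrame) gst.FlowReturn {
	if ret := dec.AllocateOutputFrame(frame); ret != gst.FlowOK { // assumed constant name
		return dec.DropFrame(frame)
	}
	// ... decode frame.InputBuffer() into frame.OutputBuffer() here ...
	return dec.FinishFrame(frame)
}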

func BaseVideoDecoder

func BaseVideoDecoder(obj VideoDecoderer) *VideoDecoder

BaseVideoDecoder returns the underlying base object.

func (*VideoDecoder) AddToFrame

func (decoder *VideoDecoder) AddToFrame(nBytes int)

AddToFrame removes the next nBytes of input data and adds them to the currently parsed frame.

The function takes the following parameters:

  • nBytes: number of bytes to add.

func (*VideoDecoder) AllocateOutputBuffer

func (decoder *VideoDecoder) AllocateOutputBuffer() *gst.Buffer

AllocateOutputBuffer: helper function that allocates a buffer to hold a video frame for decoder's current VideoCodecState.

You should use gst_video_decoder_allocate_output_frame() instead of this function, if possible at all.

The function returns the following values:

  • buffer: allocated buffer, or NULL if no buffer could be allocated (e.g. when downstream is flushing or shutting down).

func (*VideoDecoder) AllocateOutputFrame

func (decoder *VideoDecoder) AllocateOutputFrame(frame *VideoCodecFrame) gst.FlowReturn

AllocateOutputFrame: helper function that allocates a buffer to hold a video frame for decoder's current VideoCodecState. Subclass should already have configured video state and set src pad caps.

The buffer allocated here is owned by the frame and you should only keep references to the frame, not the buffer.

The function takes the following parameters:

  • frame: VideoCodecFrame.

The function returns the following values:

  • flowReturn: GST_FLOW_OK if an output buffer could be allocated.

func (*VideoDecoder) AllocateOutputFrameWithParams

func (decoder *VideoDecoder) AllocateOutputFrameWithParams(frame *VideoCodecFrame, params *gst.BufferPoolAcquireParams) gst.FlowReturn

AllocateOutputFrameWithParams: same as #gst_video_decoder_allocate_output_frame except it allows passing BufferPoolAcquireParams to the underlying gst_buffer_pool_acquire_buffer call.

The function takes the following parameters:

  • frame: VideoCodecFrame.
  • params: BufferPoolAcquireParams.

The function returns the following values:

  • flowReturn: GST_FLOW_OK if an output buffer could be allocated.

func (*VideoDecoder) Allocator

func (decoder *VideoDecoder) Allocator() (gst.Allocatorrer, *gst.AllocationParams)

Allocator lets VideoDecoder subclasses know the memory allocator used by the base class and its params.

Unref the allocator after use.

The function returns the following values:

  • allocator (optional): Allocator used.
  • params (optional): AllocationParams of allocator.

func (*VideoDecoder) BufferPool

func (decoder *VideoDecoder) BufferPool() *gst.BufferPool

The function returns the following values:

  • bufferPool: instance of the BufferPool used by the decoder; unref it after use.

func (*VideoDecoder) DropFrame

func (dec *VideoDecoder) DropFrame(frame *VideoCodecFrame) gst.FlowReturn

DropFrame: similar to gst_video_decoder_finish_frame(), but drops frame in any case and posts a QoS message with the frame's details on the bus. In any case, the frame is considered finished and released.

The function takes the following parameters:

  • frame to drop.

The function returns the following values:

  • flowReturn usually GST_FLOW_OK.

func (*VideoDecoder) DropSubframe

func (dec *VideoDecoder) DropSubframe(frame *VideoCodecFrame) gst.FlowReturn

DropSubframe drops input data. The frame is not considered finished until the whole frame is finished or dropped by the subclass.

The function takes the following parameters:

  • frame: VideoCodecFrame.

The function returns the following values:

  • flowReturn usually GST_FLOW_OK.

func (*VideoDecoder) EstimateRate

func (dec *VideoDecoder) EstimateRate() int

The function returns the following values:

  • gint: currently configured byte to time conversion setting.

func (*VideoDecoder) FinishFrame

func (decoder *VideoDecoder) FinishFrame(frame *VideoCodecFrame) gst.FlowReturn

FinishFrame: frame should have a valid decoded data buffer, whose metadata fields are then appropriately set according to frame data and pushed downstream. If no output data is provided, frame is considered skipped. In any case, the frame is considered finished and released.

After calling this function the output buffer of the frame is to be considered read-only. This function will also change the metadata of the buffer.

The function takes the following parameters:

  • frame: decoded VideoCodecFrame.

The function returns the following values:

  • flowReturn resulting from sending data downstream.

func (*VideoDecoder) FinishSubframe

func (decoder *VideoDecoder) FinishSubframe(frame *VideoCodecFrame) gst.FlowReturn

FinishSubframe indicates that a subframe has finished being decoded by the subclass. This method should be called for all subframes except the last one, for which gst_video_decoder_finish_frame should be called instead.

The function takes the following parameters:

  • frame: VideoCodecFrame.

The function returns the following values:

  • flowReturn usually GST_FLOW_OK.

func (*VideoDecoder) Frame

func (decoder *VideoDecoder) Frame(frameNumber int) *VideoCodecFrame

Frame: get a pending unfinished VideoCodecFrame.

The function takes the following parameters:

  • frameNumber: system_frame_number of a frame.

The function returns the following values:

  • videoCodecFrame: pending unfinished VideoCodecFrame identified by frame_number.

func (*VideoDecoder) Frames

func (decoder *VideoDecoder) Frames() []*VideoCodecFrame

Frames: get all pending unfinished VideoCodecFrame.

The function returns the following values:

  • list: pending unfinished VideoCodecFrame.

func (*VideoDecoder) HaveFrame

func (decoder *VideoDecoder) HaveFrame() gst.FlowReturn

HaveFrame gathers all data collected for the currently parsed frame and the corresponding metadata, and passes it along for further processing, i.e. handle_frame.

The function returns the following values:

  • flowReturn: FlowReturn.

func (*VideoDecoder) HaveLastSubframe

func (decoder *VideoDecoder) HaveLastSubframe(frame *VideoCodecFrame) gst.FlowReturn

HaveLastSubframe indicates that the last subframe of frame has been processed by the decoder. This releases the current frame in the video decoder, allowing new frames to be received from upstream elements. This method must be called in the subclass' handle_frame callback.

The function takes the following parameters:

  • frame to update.

The function returns the following values:

  • flowReturn usually GST_FLOW_OK.

func (*VideoDecoder) InputSubframeIndex

func (decoder *VideoDecoder) InputSubframeIndex(frame *VideoCodecFrame) uint

InputSubframeIndex queries the number of the last subframe received by the decoder baseclass in the frame.

The function takes the following parameters:

  • frame to update.

The function returns the following values:

  • guint: current subframe index received in subframe mode, 1 otherwise.

func (*VideoDecoder) Latency

func (decoder *VideoDecoder) Latency() (minLatency, maxLatency gst.ClockTime)

Latency: query the configured decoder latency. Results will be returned via min_latency and max_latency.

The function returns the following values:

  • minLatency (optional): address of variable in which to store the configured minimum latency, or NULL.
  • maxLatency (optional): address of variable in which to store the configured maximum latency, or NULL.

func (*VideoDecoder) MaxDecodeTime

func (decoder *VideoDecoder) MaxDecodeTime(frame *VideoCodecFrame) gst.ClockTimeDiff

MaxDecodeTime determines the maximum possible decoding time for frame that will allow it to decode and arrive in time (as determined by QoS events). In particular, a negative result means decoding in time is no longer possible, and the frame should be decoded as soon as possible or skipped.

The function takes the following parameters:

  • frame: VideoCodecFrame.

The function returns the following values:

  • clockTimeDiff: max decoding time.
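A one-line QoS check built on this, assuming gst.ClockTimeDiff is a signed integer type as in C (hypothetical imports):

// tooLate reports whether frame has already missed its decoding
// deadline according to QoS feedback.
func tooLate(dec *gstvideo.VideoDecoder, frame *gstvideo.VideoCodecFrame) bool {
	return dec.MaxDecodeTime(frame) < 0
}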

func (*VideoDecoder) MaxErrors

func (dec *VideoDecoder) MaxErrors() int

The function returns the following values:

  • gint: currently configured decoder tolerated error count.

func (*VideoDecoder) NeedsFormat

func (dec *VideoDecoder) NeedsFormat() bool

NeedsFormat queries decoder required format handling.

The function returns the following values:

  • ok: TRUE if required format handling is enabled.

func (*VideoDecoder) NeedsSyncPoint

func (dec *VideoDecoder) NeedsSyncPoint() bool

NeedsSyncPoint queries if the decoder requires a sync point before it starts outputting data in the beginning.

The function returns the following values:

  • ok: TRUE if a sync point is required in the beginning.

func (*VideoDecoder) Negotiate

func (decoder *VideoDecoder) Negotiate() bool

Negotiate negotiates the currently configured VideoCodecState with downstream elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked again if negotiation fails.

The function returns the following values:

  • ok: TRUE if the negotiation succeeded, else FALSE.

func (*VideoDecoder) OldestFrame

func (decoder *VideoDecoder) OldestFrame() *VideoCodecFrame

OldestFrame: get the oldest pending unfinished VideoCodecFrame.

The function returns the following values:

  • videoCodecFrame: oldest pending unfinished VideoCodecFrame.

func (*VideoDecoder) OutputState

func (decoder *VideoDecoder) OutputState() *VideoCodecState

OutputState: get the VideoCodecState currently describing the output stream.

The function returns the following values:

  • videoCodecState describing format of video data.

func (*VideoDecoder) Packetized

func (decoder *VideoDecoder) Packetized() bool

Packetized queries whether input data is considered packetized or not by the base class.

The function returns the following values:

  • ok: TRUE if input data is considered packetized.

func (*VideoDecoder) PendingFrameSize

func (decoder *VideoDecoder) PendingFrameSize() uint

PendingFrameSize returns the number of bytes previously added to the current frame by calling gst_video_decoder_add_to_frame().

The function returns the following values:

  • gsize: number of bytes pending for the current frame.

func (*VideoDecoder) ProcessedSubframeIndex

func (decoder *VideoDecoder) ProcessedSubframeIndex(frame *VideoCodecFrame) uint

ProcessedSubframeIndex queries the number of subframes in the frame processed by the decoder baseclass.

The function takes the following parameters:

  • frame to update.

The function returns the following values:

  • guint: number of subframes processed so far in subframe mode.

func (*VideoDecoder) ProxyGetcaps

func (decoder *VideoDecoder) ProxyGetcaps(caps, filter *gst.Caps) *gst.Caps

ProxyGetcaps returns caps that express caps (or sink template caps if caps == NULL) restricted to resolution/format/... combinations supported by downstream elements.

The function takes the following parameters:

  • caps (optional): initial caps.
  • filter (optional) caps.

The function returns the following values:

  • ret owned by caller.

func (*VideoDecoder) QosProportion

func (decoder *VideoDecoder) QosProportion() float64

The function returns the following values:

  • gdouble: current QoS proportion.

func (*VideoDecoder) ReleaseFrame

func (dec *VideoDecoder) ReleaseFrame(frame *VideoCodecFrame)

ReleaseFrame: similar to gst_video_decoder_drop_frame(), but simply releases frame without any processing other than removing it from list of pending frames, after which it is considered finished and released.

The function takes the following parameters:

  • frame to release.

func (*VideoDecoder) RequestSyncPoint

func (dec *VideoDecoder) RequestSyncPoint(frame *VideoCodecFrame, flags VideoDecoderRequestSyncPointFlags)

RequestSyncPoint allows the VideoDecoder subclass to request from the base class that a new sync point should be requested from upstream, and that frame was the frame when the subclass noticed that a new sync point is required. A reason for the subclass to do this could be missing reference frames, for example.

The base class will then request a new sync point from upstream as long as the time that passed since the last one is exceeding VideoDecoder:min-force-key-unit-interval.

The subclass can signal via flags how the frames until the next sync point should be handled:

  • If GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT is selected then all following input frames until the next sync point are discarded. This can be useful if the lack of a sync point will prevent all further decoding and the decoder implementation is not very robust in handling missing reference frames.
  • If GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT is selected then all output frames following frame are marked as corrupted via GST_BUFFER_FLAG_CORRUPTED. Corrupted frames can be automatically dropped by the base class, see VideoDecoder:discard-corrupted-frames. Subclasses can manually mark frames as corrupted via GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED before calling gst_video_decoder_finish_frame().

The function takes the following parameters:

  • frame: VideoCodecFrame.
  • flags: VideoDecoderRequestSyncPointFlags.
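A sketch of a subclass reacting to a missing reference frame by requesting a sync point and discarding input until it arrives (hypothetical imports):

// onMissingReference asks upstream for a new sync point and discards
// all input frames until one arrives.
func onMissingReference(dec *gstvideo.VideoDecoder, frame *gstvideo.VideoCodecFrame) {
	dec.RequestSyncPoint(frame, gstvideo.VideoDecoderRequestSyncPointDiscardInput)
}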

func (*VideoDecoder) SetEstimateRate

func (dec *VideoDecoder) SetEstimateRate(enabled bool)

SetEstimateRate allows the base class to perform estimated byte-to-time conversion.

The function takes the following parameters:

  • enabled: whether to enable byte to time conversion.

func (*VideoDecoder) SetInterlacedOutputState

func (decoder *VideoDecoder) SetInterlacedOutputState(fmt VideoFormat, interlaceMode VideoInterlaceMode, width, height uint, reference *VideoCodecState) *VideoCodecState

SetInterlacedOutputState: same as #gst_video_decoder_set_output_state() but also allows you to set the interlacing mode.

The function takes the following parameters:

  • fmt: VideoFormat.
  • interlaceMode: VideoInterlaceMode.
  • width in pixels.
  • height in pixels.
  • reference (optional): optional reference VideoCodecState.

The function returns the following values:

  • videoCodecState: newly configured output state.

func (*VideoDecoder) SetLatency

func (decoder *VideoDecoder) SetLatency(minLatency, maxLatency gst.ClockTime)

SetLatency lets VideoDecoder sub-classes tell the baseclass what the decoder latency is. Will also post a LATENCY message on the bus so the pipeline can reconfigure its global latency.

The function takes the following parameters:

  • minLatency: minimum latency.
  • maxLatency: maximum latency.

func (*VideoDecoder) SetMaxErrors

func (dec *VideoDecoder) SetMaxErrors(num int)

SetMaxErrors sets the number of tolerated decoder errors. Errors within the tolerance are only warned about; exceeding it leads to a fatal error. You can set -1 to never return fatal errors. The default is GST_VIDEO_DECODER_MAX_ERRORS.

The '-1' option was added in 1.4.

The function takes the following parameters:

  • num: max tolerated errors.

func (*VideoDecoder) SetNeedsFormat

func (dec *VideoDecoder) SetNeedsFormat(enabled bool)

SetNeedsFormat configures decoder format needs. If enabled, the subclass needs to be negotiated with format caps before it can process any data. It will then never be handed any data before it has been configured. Otherwise, it might be handed data without having been configured and is then expected to be able to do so either by default or based on the input data.

The function takes the following parameters:

  • enabled: new state.

func (*VideoDecoder) SetNeedsSyncPoint

func (dec *VideoDecoder) SetNeedsSyncPoint(enabled bool)

SetNeedsSyncPoint configures whether the decoder requires a sync point before it starts outputting data in the beginning. If enabled, the base class will discard all non-sync point frames in the beginning and after a flush, and will not pass them to the subclass.

If the first frame is not a sync point, the base class will request a sync point via the force-key-unit event.

The function takes the following parameters:

  • enabled: new state.

func (*VideoDecoder) SetOutputState

func (decoder *VideoDecoder) SetOutputState(fmt VideoFormat, width, height uint, reference *VideoCodecState) *VideoCodecState

SetOutputState creates a new VideoCodecState with the specified fmt, width and height as the output state for the decoder. Any previously set output state on decoder will be replaced by the newly created one.

If the subclass wishes to copy over existing fields (like pixel aspect ratio, or framerate) from an existing VideoCodecState, it can be provided as a reference.

If the subclass wishes to override some fields from the output state (like pixel-aspect-ratio or framerate) it can do so on the returned VideoCodecState.

The new output state will only take effect (set on pads and buffers) starting from the next call to #gst_video_decoder_finish_frame().

The function takes the following parameters:

  • fmt: VideoFormat.
  • width in pixels.
  • height in pixels.
  • reference (optional): optional reference VideoCodecState.

The function returns the following values:

  • videoCodecState: newly configured output state.
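A sketch of the typical set_format stage: derive an output state from the negotiated input state, then renegotiate downstream. The I420/1920x1080 values are illustrative only, and VideoFormatI420 is assumed to mirror GST_VIDEO_FORMAT_I420 (hypothetical imports):

// configureOutput configures a fixed raw output format for the decoder,
// copying shared fields (framerate, pixel-aspect-ratio, ...) from in.
func configureOutput(dec *gstvideo.VideoDecoder, in *gstvideo.VideoCodecState) bool {
	out := dec.SetOutputState(gstvideo.VideoFormatI420, 1920, 1080, in)
	_ = out // fields like pixel-aspect-ratio could be overridden on out here
	return dec.Negotiate()
}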

func (*VideoDecoder) SetPacketized

func (decoder *VideoDecoder) SetPacketized(packetized bool)

SetPacketized allows baseclass to consider input data as packetized or not. If the input is packetized, then the parse method will not be called.

The function takes the following parameters:

  • packetized: whether the input data should be considered as packetized.

func (*VideoDecoder) SetSubframeMode

func (decoder *VideoDecoder) SetSubframeMode(subframeMode bool)

SetSubframeMode: if this is set to TRUE, it informs the base class that the subclass can receive the data at a granularity lower than one frame.

Note that in this mode, the subclass has two options. It can either require the presence of a GST_VIDEO_BUFFER_FLAG_MARKER to mark the end of a frame. Or it can operate in such a way that it will decode a single frame at a time. In this second case, every buffer that arrives to the element is considered part of the same frame until gst_video_decoder_finish_frame() is called.

In either case, the same VideoCodecFrame will be passed to the GstVideoDecoderClass:handle_frame vmethod repeatedly with a different GstVideoCodecFrame:input_buffer every time until the end of the frame has been signaled using either method. This method must be called during the decoder subclass set_format call.

The function takes the following parameters:

  • subframeMode: whether the input data should be considered as subframes.

func (*VideoDecoder) SetUseDefaultPadAcceptcaps

func (decoder *VideoDecoder) SetUseDefaultPadAcceptcaps(use bool)

SetUseDefaultPadAcceptcaps lets VideoDecoder sub-classes decide if they want the sink pad to use the default pad query handler to reply to accept-caps queries.

By setting this to true it is possible to further customize the default handler with GST_PAD_SET_ACCEPT_INTERSECT and GST_PAD_SET_ACCEPT_TEMPLATE.

The function takes the following parameters:

  • use: if the default pad accept-caps query handling should be used.

func (*VideoDecoder) SubframeMode

func (decoder *VideoDecoder) SubframeMode() bool

SubframeMode queries whether input data is considered as subframes or not by the base class. If FALSE, each input buffer will be considered as a full frame.

The function returns the following values:

  • ok: TRUE if input data is considered as subframes.

type VideoDecoderClass

type VideoDecoderClass struct {
	// contains filtered or unexported fields
}

VideoDecoderClass: subclasses can override any of the available virtual methods or not, as needed. At minimum, handle_frame needs to be overridden, and likely set_format as well. If non-packetized input is supported or expected, parse needs to be overridden as well.

An instance of this type is always passed by reference.

type VideoDecoderOverrides

type VideoDecoderOverrides struct {
	// The function returns the following values:
	//
	Close func() bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	DecideAllocation func(query *gst.Query) bool
	// The function returns the following values:
	//
	Drain func() gst.FlowReturn
	// The function returns the following values:
	//
	Finish func() gst.FlowReturn
	// The function returns the following values:
	//
	Flush func() bool

	// The function takes the following parameters:
	//
	//    - frame to handle.
	//
	// The function returns the following values:
	//
	HandleFrame func(frame *VideoCodecFrame) gst.FlowReturn
	// The function takes the following parameters:
	//
	//    - timestamp: timestamp of the missing data.
	//    - duration: duration of the missing data.
	//
	// The function returns the following values:
	//
	//    - ok: TRUE if the decoder should be drained afterwards.
	//
	HandleMissingData func(timestamp, duration gst.ClockTime) bool
	// Negotiate negotiates the currently configured VideoCodecState with
	// downstream elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any
	// case, but marked again if negotiation fails.
	//
	// The function returns the following values:
	//
	//    - ok: TRUE if the negotiation succeeded, else FALSE.
	//
	Negotiate func() bool
	// The function returns the following values:
	//
	Open func() bool
	// The function takes the following parameters:
	//
	//    - frame
	//    - adapter
	//    - atEos
	//
	// The function returns the following values:
	//
	Parse func(frame *VideoCodecFrame, adapter *gstbase.Adapter, atEos bool) gst.FlowReturn
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	ProposeAllocation func(query *gst.Query) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	Reset func(hard bool) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SetFormat func(state *VideoCodecState) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SinkEvent func(event *gst.Event) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SinkQuery func(query *gst.Query) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SrcEvent func(event *gst.Event) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SrcQuery func(query *gst.Query) bool
	// The function returns the following values:
	//
	Start func() bool
	// The function returns the following values:
	//
	Stop func() bool
	// The function takes the following parameters:
	//
	//    - frame
	//    - meta
	//
	// The function returns the following values:
	//
	TransformMeta func(frame *VideoCodecFrame, meta *gst.Meta) bool
	// contains filtered or unexported fields
}

VideoDecoderOverrides contains methods that are overridable.

type VideoDecoderRequestSyncPointFlags

type VideoDecoderRequestSyncPointFlags C.guint

VideoDecoderRequestSyncPointFlags flags to be used in combination with gst_video_decoder_request_sync_point(). See the function documentation for more details.

const (
	// VideoDecoderRequestSyncPointDiscardInput: discard all following input
	// until the next sync point.
	VideoDecoderRequestSyncPointDiscardInput VideoDecoderRequestSyncPointFlags = 0b1
	// VideoDecoderRequestSyncPointCorruptOutput: discard all following output
	// until the next sync point.
	VideoDecoderRequestSyncPointCorruptOutput VideoDecoderRequestSyncPointFlags = 0b10
)

func (VideoDecoderRequestSyncPointFlags) Has

func (v VideoDecoderRequestSyncPointFlags) Has(other VideoDecoderRequestSyncPointFlags) bool

Has returns true if v contains other.

func (VideoDecoderRequestSyncPointFlags) String

func (v VideoDecoderRequestSyncPointFlags) String() string

String returns the names in string for VideoDecoderRequestSyncPointFlags.

type VideoDecoderer

type VideoDecoderer interface {
	coreglib.Objector
	// contains filtered or unexported methods
}

VideoDecoderer describes types inherited from class VideoDecoder.

To get the original type, the caller must assert this to an interface or another type.

type VideoDirection

type VideoDirection struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

VideoDirection: interface that allows unified access to control the flipping and rotation operations of video sources or operators.

VideoDirection wraps an interface. This means the user can get the underlying type by calling Cast().

func BaseVideoDirection

func BaseVideoDirection(obj VideoDirectioner) *VideoDirection

BaseVideoDirection returns the underlying base object.

type VideoDirectionInterface

type VideoDirectionInterface struct {
	// contains filtered or unexported fields
}

VideoDirectionInterface interface.

An instance of this type is always passed by reference.

type VideoDirectionOverrider

type VideoDirectionOverrider interface {
}

VideoDirectionOverrider contains methods that are overridable.

type VideoDirectioner

type VideoDirectioner interface {
	coreglib.Objector
	// contains filtered or unexported methods
}

VideoDirectioner describes VideoDirection's interface methods.

type VideoDitherFlags

type VideoDitherFlags C.guint

VideoDitherFlags: extra flags that influence the result from gst_video_dither_new().

const (
	// VideoDitherFlagNone: no flags.
	VideoDitherFlagNone VideoDitherFlags = 0b0
	// VideoDitherFlagInterlaced: input is interlaced.
	VideoDitherFlagInterlaced VideoDitherFlags = 0b1
	// VideoDitherFlagQuantize: quantize values in addition to adding dither.
	VideoDitherFlagQuantize VideoDitherFlags = 0b10
)

func (VideoDitherFlags) Has

func (v VideoDitherFlags) Has(other VideoDitherFlags) bool

Has returns true if v contains other.

func (VideoDitherFlags) String

func (v VideoDitherFlags) String() string

String returns the names in string for VideoDitherFlags.

type VideoDitherMethod

type VideoDitherMethod C.gint

VideoDitherMethod: different dithering methods to use.

const (
	// VideoDitherNone: no dithering.
	VideoDitherNone VideoDitherMethod = iota
	// VideoDitherVerterr: propagate rounding errors downwards.
	VideoDitherVerterr
	// VideoDitherFloydSteinberg: dither with floyd-steinberg error diffusion.
	VideoDitherFloydSteinberg
	// VideoDitherSierraLite: dither with Sierra Lite error diffusion.
	VideoDitherSierraLite
	// VideoDitherBayer: ordered dither using a bayer pattern.
	VideoDitherBayer
)

func (VideoDitherMethod) String

func (v VideoDitherMethod) String() string

String returns the name in string for VideoDitherMethod.

type VideoEncoder

type VideoEncoder struct {
	gst.Element

	gst.Preset
	// contains filtered or unexported fields
}

VideoEncoder: this base class is for video encoders turning raw video into encoded video data.

GstVideoEncoder and subclass should cooperate as follows.

Configuration

  • Initially, GstVideoEncoder calls start when the encoder element is activated, which allows subclass to perform any global setup.
  • GstVideoEncoder calls set_format to inform subclass of the format of input video data that it is about to receive. Subclass should set up for encoding and configure base class as appropriate (e.g. latency). While unlikely, it might be called more than once, if changing input parameters require reconfiguration. The base class will ensure that processing of the current configuration is finished.
  • GstVideoEncoder calls stop at end of all processing.

Data processing

  • Base class collects input data and metadata into a frame and hands this to subclass' handle_frame.

  • If codec processing results in encoded data, subclass should call gst_video_encoder_finish_frame to have encoded data pushed downstream.

  • If implemented, baseclass calls subclass pre_push just prior to pushing to allow subclasses to modify some metadata on the buffer. If it returns GST_FLOW_OK, the buffer is pushed downstream.

  • GstVideoEncoderClass will handle both srcpad and sinkpad events. Sink events will be passed to subclass if event callback has been provided.

Shutdown phase

  • GstVideoEncoder class calls stop to inform the subclass that data parsing will be stopped.

Subclass is responsible for providing pad template caps for source and sink pads. The pads need to be named "sink" and "src". It should also be able to provide fixed src pad caps in getcaps by the time it calls gst_video_encoder_finish_frame.

Things that subclass need to take care of:

  • Provide pad templates

  • Provide source pad caps before pushing the first buffer

  • Accept data in handle_frame and provide encoded results to gst_video_encoder_finish_frame (see the sketch after this section).

    The VideoEncoder:qos property will enable the Quality-of-Service features of the encoder which gather statistics about the real-time performance of the downstream elements. If enabled, subclasses can use gst_video_encoder_get_max_encode_time() to check if input frames are already late and drop them right away to give a chance to the pipeline to catch up.
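A hedged sketch of the encoder's handle_frame flow, mirroring the decoder sketch earlier (hypothetical imports; gst.FlowOK assumed to mirror GST_FLOW_OK):

// encodeOne outlines a handle_frame implementation: allocate an output
// buffer sized for the encoded data, fill it, and finish the frame.
func encodeOne(enc *gstvideo.VideoEncoder, frame *gstvideo.VideoCodecFrame, encodedSize uint) gst.FlowReturn {
	if ret := enc.AllocateOutputFrame(frame, encodedSize); ret != gst.FlowOK {
		return ret
	}
	// ... write the encoded bytes into frame.OutputBuffer() here ...
	return enc.FinishFrame(frame)
}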

func BaseVideoEncoder

func BaseVideoEncoder(obj VideoEncoderer) *VideoEncoder

BaseVideoEncoder returns the underlying base object.

func (*VideoEncoder) AllocateOutputBuffer

func (encoder *VideoEncoder) AllocateOutputBuffer(size uint) *gst.Buffer

AllocateOutputBuffer: helper function that allocates a buffer to hold an encoded video frame for encoder's current VideoCodecState.

The function takes the following parameters:

  • size of the buffer.

The function returns the following values:

  • buffer: allocated buffer.

func (*VideoEncoder) AllocateOutputFrame

func (encoder *VideoEncoder) AllocateOutputFrame(frame *VideoCodecFrame, size uint) gst.FlowReturn

AllocateOutputFrame: helper function that allocates a buffer to hold an encoded video frame for encoder's current VideoCodecState. Subclass should already have configured video state and set src pad caps.

The buffer allocated here is owned by the frame and you should only keep references to the frame, not the buffer.

The function takes the following parameters:

  • frame: VideoCodecFrame.
  • size of the buffer.

The function returns the following values:

  • flowReturn: GST_FLOW_OK if an output buffer could be allocated.

func (*VideoEncoder) Allocator

func (encoder *VideoEncoder) Allocator() (gst.Allocatorrer, *gst.AllocationParams)

Allocator lets VideoEncoder subclasses know the memory allocator used by the base class and its params.

Unref the allocator after use.

The function returns the following values:

  • allocator (optional): Allocator used.
  • params (optional): AllocationParams of allocator.

func (*VideoEncoder) FinishFrame

func (encoder *VideoEncoder) FinishFrame(frame *VideoCodecFrame) gst.FlowReturn

FinishFrame: frame must have a valid encoded data buffer, whose metadata fields are then appropriately set according to frame data or no buffer at all if the frame should be dropped. It is subsequently pushed downstream or provided to pre_push. In any case, the frame is considered finished and released.

After calling this function the output buffer of the frame is to be considered read-only. This function will also change the metadata of the buffer.

The function takes the following parameters:

  • frame: encoded VideoCodecFrame.

The function returns the following values:

  • flowReturn resulting from sending data downstream.

func (*VideoEncoder) FinishSubframe

func (encoder *VideoEncoder) FinishSubframe(frame *VideoCodecFrame) gst.FlowReturn

FinishSubframe: if multiple subframes are produced for one input frame then use this method for each subframe, except for the last one. Before calling this function, you need to fill frame->output_buffer with the encoded buffer to push.

You must call #gst_video_encoder_finish_frame() for the last sub-frame to tell the encoder that the frame has been fully encoded.

This function will change the metadata of frame and frame->output_buffer will be pushed downstream.

The function takes the following parameters:

  • frame being encoded.

The function returns the following values:

  • flowReturn resulting from pushing the buffer downstream.

func (*VideoEncoder) Frame

func (encoder *VideoEncoder) Frame(frameNumber int) *VideoCodecFrame

Frame: get a pending unfinished VideoCodecFrame.

The function takes the following parameters:

  • frameNumber: system_frame_number of a frame.

The function returns the following values:

  • videoCodecFrame: pending unfinished VideoCodecFrame identified by frame_number.

func (*VideoEncoder) Frames

func (encoder *VideoEncoder) Frames() []*VideoCodecFrame

Frames: get all pending unfinished VideoCodecFrame.

The function returns the following values:

  • list: pending unfinished VideoCodecFrame.

func (*VideoEncoder) IsQosEnabled

func (encoder *VideoEncoder) IsQosEnabled() bool

IsQosEnabled checks if encoder is currently configured to handle Quality-of-Service events from downstream.

The function returns the following values:

  • ok: TRUE if the encoder is configured to perform Quality-of-Service.

func (*VideoEncoder) Latency

func (encoder *VideoEncoder) Latency() (minLatency, maxLatency gst.ClockTime)

Latency: query the configured encoding latency. Results will be returned via min_latency and max_latency.

The function returns the following values:

  • minLatency (optional) address of variable in which to store the configured minimum latency, or NULL.
  • maxLatency (optional) address of variable in which to store the configured maximum latency, or NULL.

func (*VideoEncoder) MaxEncodeTime

func (encoder *VideoEncoder) MaxEncodeTime(frame *VideoCodecFrame) gst.ClockTimeDiff

MaxEncodeTime determines the maximum possible encoding time for frame that will allow it to encode and arrive in time (as determined by QoS events). In particular, a negative result means encoding in time is no longer possible, and the frame should be encoded as soon as possible or skipped.

If no QoS events have been received from downstream, or if VideoEncoder:qos is disabled, this function returns MAXINT64.

The function takes the following parameters:

  • frame: VideoCodecFrame.

The function returns the following values:

  • clockTimeDiff: max encoding time.

func (*VideoEncoder) MinForceKeyUnitInterval

func (encoder *VideoEncoder) MinForceKeyUnitInterval() gst.ClockTime

MinForceKeyUnitInterval returns the minimum force-keyunit interval, see gst_video_encoder_set_min_force_key_unit_interval() for more details.

The function returns the following values:

  • clockTime: minimum force-keyunit interval.

func (*VideoEncoder) Negotiate

func (encoder *VideoEncoder) Negotiate() bool

Negotiate with downstream elements the currently configured VideoCodecState. Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if negotiation fails.

The function returns the following values:

  • ok: TRUE if the negotiation succeeded, else FALSE.

func (*VideoEncoder) OldestFrame

func (encoder *VideoEncoder) OldestFrame() *VideoCodecFrame

OldestFrame: get the oldest unfinished pending VideoCodecFrame.

The function returns the following values:

  • videoCodecFrame: oldest unfinished pending VideoCodecFrame.

func (*VideoEncoder) OutputState

func (encoder *VideoEncoder) OutputState() *VideoCodecState

OutputState: get the current VideoCodecState.

The function returns the following values:

  • videoCodecState describing format of video data.

func (*VideoEncoder) ProxyGetcaps

func (enc *VideoEncoder) ProxyGetcaps(caps, filter *gst.Caps) *gst.Caps

ProxyGetcaps returns caps that express caps (or sink template caps if caps == NULL) restricted to resolution/format/... combinations supported by downstream elements (e.g. muxers).

The function takes the following parameters:

  • caps (optional): initial caps.
  • filter (optional) caps.

The function returns the following values:

  • ret: caps owned by the caller.

func (*VideoEncoder) SetHeaders

func (encoder *VideoEncoder) SetHeaders(headers []*gst.Buffer)

SetHeaders: set the codec headers to be sent downstream whenever requested.

The function takes the following parameters:

  • headers: list of Buffer containing the codec header.

func (*VideoEncoder) SetLatency

func (encoder *VideoEncoder) SetLatency(minLatency, maxLatency gst.ClockTime)

SetLatency informs baseclass of encoding latency.

The function takes the following parameters:

  • minLatency: minimum latency.
  • maxLatency: maximum latency.

func (*VideoEncoder) SetMinForceKeyUnitInterval

func (encoder *VideoEncoder) SetMinForceKeyUnitInterval(interval gst.ClockTime)

SetMinForceKeyUnitInterval sets the minimum interval for requesting keyframes based on force-keyunit events. Setting this to 0 allows handling every event; setting it to GST_CLOCK_TIME_NONE causes force-keyunit events to be ignored.

The function takes the following parameters:

  • interval: minimum interval.

func (*VideoEncoder) SetMinPts

func (encoder *VideoEncoder) SetMinPts(minPts gst.ClockTime)

SetMinPts: request minimal value for PTS passed to handle_frame.

For streams with reordered frames this can be used to ensure that there is enough time to accommodate first DTS, which may be less than first PTS.

The function takes the following parameters:

  • minPts: minimal PTS that will be passed to handle_frame.

func (*VideoEncoder) SetOutputState

func (encoder *VideoEncoder) SetOutputState(caps *gst.Caps, reference *VideoCodecState) *VideoCodecState

SetOutputState creates a new VideoCodecState with the specified caps as the output state for the encoder. Any previously set output state on encoder will be replaced by the newly created one.

The specified caps should not contain any resolution, pixel-aspect-ratio, framerate, codec-data, .... Those should be specified instead in the returned VideoCodecState.

If the subclass wishes to copy over existing fields (like pixel aspect ratio, or framerate) from an existing VideoCodecState, it can be provided as a reference.

If the subclass wishes to override some fields from the output state (like pixel-aspect-ratio or framerate) it can do so on the returned VideoCodecState.

The new output state will only take effect (set on pads and buffers) starting from the next call to #gst_video_encoder_finish_frame().

The function takes the following parameters:

  • caps to use for the output.
  • reference (optional): optional reference GstVideoCodecState.

The function returns the following values:

  • videoCodecState: newly configured output state.
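
As a sketch of the typical negotiation flow, assuming outCaps holds the codec's output media type (built elsewhere) and inState is the input VideoCodecState the subclass received:

func configureOutput(enc *gstvideo.VideoEncoder, outCaps *gst.Caps, inState *gstvideo.VideoCodecState) bool {
	// Passing inState as the reference copies resolution, framerate and
	// pixel-aspect-ratio over to the new output state.
	state := enc.SetOutputState(outCaps, inState)
	_ = state // individual fields may be overridden here before negotiating
	return enc.Negotiate()
}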

func (*VideoEncoder) SetQosEnabled

func (encoder *VideoEncoder) SetQosEnabled(enabled bool)

SetQosEnabled configures encoder to handle Quality-of-Service events from downstream.

The function takes the following parameters:

  • enabled: new qos value.

type VideoEncoderClass

type VideoEncoderClass struct {
	// contains filtered or unexported fields
}

VideoEncoderClass subclasses can override any of the available virtual methods or not, as needed. At minimum handle_frame needs to be overridden, and set_format and get_caps are likely needed as well.

An instance of this type is always passed by reference.

type VideoEncoderOverrides

type VideoEncoderOverrides struct {
	// The function returns the following values:
	//
	Close func() bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	DecideAllocation func(query *gst.Query) bool
	// The function returns the following values:
	//
	Finish func() gst.FlowReturn
	// The function returns the following values:
	//
	Flush func() bool

	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	HandleFrame func(frame *VideoCodecFrame) gst.FlowReturn
	// Negotiate with downstream elements the currently configured
	// VideoCodecState. Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case,
	// but marks it again if negotiation fails.
	//
	// The function returns the following values:
	//
	//    - ok: TRUE if the negotiation succeeded, else FALSE.
	//
	Negotiate func() bool
	// The function returns the following values:
	//
	Open func() bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	PrePush func(frame *VideoCodecFrame) gst.FlowReturn
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	ProposeAllocation func(query *gst.Query) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	Reset func(hard bool) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SetFormat func(state *VideoCodecState) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SinkEvent func(event *gst.Event) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SinkQuery func(query *gst.Query) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SrcEvent func(event *gst.Event) bool
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	SrcQuery func(query *gst.Query) bool
	// The function returns the following values:
	//
	Start func() bool
	// The function returns the following values:
	//
	Stop func() bool
	// The function takes the following parameters:
	//
	//    - frame
	//    - meta
	//
	// The function returns the following values:
	//
	TransformMeta func(frame *VideoCodecFrame, meta *gst.Meta) bool
	// contains filtered or unexported fields
}

VideoEncoderOverrides contains methods that are overridable.

type VideoEncoderer

type VideoEncoderer interface {
	coreglib.Objector
	// contains filtered or unexported methods
}

VideoEncoderer describes types inherited from class VideoEncoder.

To get the original type, the caller must assert this to an interface or another type.

type VideoFieldOrder

type VideoFieldOrder C.gint

VideoFieldOrder: field order of interlaced content. This is only valid for interlace-mode=interleaved and not interlace-mode=mixed. In the case of mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via buffer flags.

const (
	// VideoFieldOrderUnknown: unknown field order for interlaced content. The
	// actual field order is signalled via buffer flags.
	VideoFieldOrderUnknown VideoFieldOrder = iota
	// VideoFieldOrderTopFieldFirst: top field is first.
	VideoFieldOrderTopFieldFirst
	// VideoFieldOrderBottomFieldFirst: bottom field is first.
	VideoFieldOrderBottomFieldFirst
)

func VideoFieldOrderFromString

func VideoFieldOrderFromString(order string) VideoFieldOrder

VideoFieldOrderFromString: convert order to a VideoFieldOrder.

The function takes the following parameters:

  • order: field order.

The function returns the following values:

  • videoFieldOrder of order or GST_VIDEO_FIELD_ORDER_UNKNOWN when order is not a valid string representation for a VideoFieldOrder.

func (VideoFieldOrder) String

func (v VideoFieldOrder) String() string

String returns the name in string for VideoFieldOrder.

type VideoFilter

type VideoFilter struct {
	gstbase.BaseTransform
	// contains filtered or unexported fields
}

VideoFilter provides useful functions and a base class for video filters.

The videofilter will by default enable QoS on the parent GstBaseTransform to implement frame dropping.

func BaseVideoFilter

func BaseVideoFilter(obj VideoFilterer) *VideoFilter

BaseVideoFilter returns the underlying base object.

type VideoFilterClass

type VideoFilterClass struct {
	// contains filtered or unexported fields
}

VideoFilterClass: video filter class structure.

An instance of this type is always passed by reference.

func (*VideoFilterClass) ParentClass

func (v *VideoFilterClass) ParentClass() *gstbase.BaseTransformClass

ParentClass: parent class structure.

type VideoFilterOverrides

type VideoFilterOverrides struct {
	// The function takes the following parameters:
	//
	//    - incaps
	//    - inInfo
	//    - outcaps
	//    - outInfo
	//
	// The function returns the following values:
	//
	SetInfo func(incaps *gst.Caps, inInfo *VideoInfo, outcaps *gst.Caps, outInfo *VideoInfo) bool
	// The function takes the following parameters:
	//
	//    - inframe
	//    - outframe
	//
	// The function returns the following values:
	//
	TransformFrame func(inframe, outframe *VideoFrame) gst.FlowReturn
	// The function takes the following parameters:
	//
	// The function returns the following values:
	//
	TransformFrameIP func(frame *VideoFrame) gst.FlowReturn
}

VideoFilterOverrides contains methods that are overridable.

type VideoFilterer

type VideoFilterer interface {
	coreglib.Objector
	// contains filtered or unexported methods
}

VideoFilterer describes types inherited from class VideoFilter.

To get the original type, the caller must assert this to an interface or another type.

type VideoFlags

type VideoFlags C.guint

VideoFlags: extra video flags.

const (
	// VideoFlagNone: no flags.
	VideoFlagNone VideoFlags = 0b0
	// VideoFlagVariableFPS: variable fps is selected, fps_n and fps_d denote
	// the maximum fps of the video.
	VideoFlagVariableFPS VideoFlags = 0b1
	// VideoFlagPremultipliedAlpha: each color has been scaled by the alpha
	// value.
	VideoFlagPremultipliedAlpha VideoFlags = 0b10
)

func (VideoFlags) Has

func (v VideoFlags) Has(other VideoFlags) bool

Has returns true if v contains other.

func (VideoFlags) String

func (v VideoFlags) String() string

String returns the names in string for VideoFlags.

type VideoFormat

type VideoFormat C.gint

VideoFormat: enum value describing the most common video formats.

See the GStreamer raw video format design document (https://gstreamer.freedesktop.org/documentation/additional/design/mediatype-video-raw.html#formats) for details about the layout and packing of these formats in memory.

const (
	// VideoFormatUnknown: unknown or unset video format id.
	VideoFormatUnknown VideoFormat = iota
	// VideoFormatEncoded: encoded video format. Only ever use that in caps for
	// special video formats in combination with non-system memory
	// GstCapsFeatures where it does not make sense to specify a real video
	// format.
	VideoFormatEncoded
	// VideoFormatI420: planar 4:2:0 YUV.
	VideoFormatI420
	// VideoFormatYV12: planar 4:2:0 YVU (like I420 but UV planes swapped).
	VideoFormatYV12
	// VideoFormatYuy2: packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...).
	VideoFormatYuy2
	// VideoFormatUyvy: packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...).
	VideoFormatUyvy
	// VideoFormatAyuv: packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...).
	VideoFormatAyuv
	// VideoFormatRgbx: sparse rgb packed into 32 bit, space last.
	VideoFormatRgbx
	// VideoFormatBgrx: sparse reverse rgb packed into 32 bit, space last.
	VideoFormatBgrx
	// VideoFormatXrgb: sparse rgb packed into 32 bit, space first.
	VideoFormatXrgb
	// VideoFormatXbgr: sparse reverse rgb packed into 32 bit, space first.
	VideoFormatXbgr
	// VideoFormatRGBA: rgb with alpha channel last.
	VideoFormatRGBA
	// VideoFormatBgra: reverse rgb with alpha channel last.
	VideoFormatBgra
	// VideoFormatARGB: rgb with alpha channel first.
	VideoFormatARGB
	// VideoFormatAbgr: reverse rgb with alpha channel first.
	VideoFormatAbgr
	// VideoFormatRGB: RGB packed into 24 bits without padding (R-G-B-R-G-B).
	VideoFormatRGB
	// VideoFormatBGR: reverse RGB packed into 24 bits without padding
	// (B-G-R-B-G-R).
	VideoFormatBGR
	// VideoFormatY41B: planar 4:1:1 YUV.
	VideoFormatY41B
	// VideoFormatY42B: planar 4:2:2 YUV.
	VideoFormatY42B
	// VideoFormatYvyu: packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...).
	VideoFormatYvyu
	// VideoFormatY444: planar 4:4:4 YUV.
	VideoFormatY444
	// VideoFormatV210: packed 4:2:2 10-bit YUV, complex format.
	VideoFormatV210
	// VideoFormatV216: packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order.
	VideoFormatV216
	// VideoFormatNv12: planar 4:2:0 YUV with interleaved UV plane.
	VideoFormatNv12
	// VideoFormatNv21: planar 4:2:0 YUV with interleaved VU plane.
	VideoFormatNv21
	// VideoFormatGray8: 8-bit grayscale.
	VideoFormatGray8
	// VideoFormatGray16Be: 16-bit grayscale, most significant byte first.
	VideoFormatGray16Be
	// VideoFormatGray16LE: 16-bit grayscale, least significant byte first.
	VideoFormatGray16LE
	// VideoFormatV308: packed 4:4:4 YUV (Y-U-V ...).
	VideoFormatV308
	// VideoFormatRGB16: rgb 5-6-5 bits per component.
	VideoFormatRGB16
	// VideoFormatBGR16: reverse rgb 5-6-5 bits per component.
	VideoFormatBGR16
	// VideoFormatRGB15: rgb 5-5-5 bits per component.
	VideoFormatRGB15
	// VideoFormatBGR15: reverse rgb 5-5-5 bits per component.
	VideoFormatBGR15
	// VideoFormatUyvp: packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4
	// ...).
	VideoFormatUyvp
	// VideoFormatA420: planar 4:4:2:0 AYUV.
	VideoFormatA420
	// VideoFormatRGB8P: 8-bit paletted RGB.
	VideoFormatRGB8P
	// VideoFormatYuv9: planar 4:1:0 YUV.
	VideoFormatYuv9
	// VideoFormatYvu9: planar 4:1:0 YUV (like YUV9 but UV planes swapped).
	VideoFormatYvu9
	// VideoFormatIyu1: packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...).
	VideoFormatIyu1
	// VideoFormatARGB64: rgb with alpha channel first, 16 bits (native
	// endianness) per channel.
	VideoFormatARGB64
	// VideoFormatAyuv64: packed 4:4:4 YUV with alpha channel, 16 bits (native
	// endianness) per channel (A0-Y0-U0-V0 ...).
	VideoFormatAyuv64
	// VideoFormatR210: packed 4:4:4 RGB, 10 bits per channel.
	VideoFormatR210
	// VideoFormatI42010Be: planar 4:2:0 YUV, 10 bits per channel.
	VideoFormatI42010Be
	// VideoFormatI42010LE: planar 4:2:0 YUV, 10 bits per channel.
	VideoFormatI42010LE
	// VideoFormatI42210Be: planar 4:2:2 YUV, 10 bits per channel.
	VideoFormatI42210Be
	// VideoFormatI42210LE: planar 4:2:2 YUV, 10 bits per channel.
	VideoFormatI42210LE
	// VideoFormatY44410Be: planar 4:4:4 YUV, 10 bits per channel (Since: 1.2).
	VideoFormatY44410Be
	// VideoFormatY44410LE: planar 4:4:4 YUV, 10 bits per channel (Since: 1.2).
	VideoFormatY44410LE
	// VideoFormatGbr: planar 4:4:4 RGB, 8 bits per channel (Since: 1.2).
	VideoFormatGbr
	// VideoFormatGbr10Be: planar 4:4:4 RGB, 10 bits per channel (Since: 1.2).
	VideoFormatGbr10Be
	// VideoFormatGbr10LE: planar 4:4:4 RGB, 10 bits per channel (Since: 1.2).
	VideoFormatGbr10LE
	// VideoFormatNv16: planar 4:2:2 YUV with interleaved UV plane (Since: 1.2).
	VideoFormatNv16
	// VideoFormatNv24: planar 4:4:4 YUV with interleaved UV plane (Since: 1.2).
	VideoFormatNv24
	// VideoFormatNv1264Z32: NV12 with 64x32 tiling in zigzag pattern (Since:
	// 1.4).
	VideoFormatNv1264Z32
	// VideoFormatA42010Be: planar 4:4:2:0 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA42010Be
	// VideoFormatA42010LE: planar 4:4:2:0 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA42010LE
	// VideoFormatA42210Be: planar 4:4:2:2 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA42210Be
	// VideoFormatA42210LE: planar 4:4:2:2 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA42210LE
	// VideoFormatA44410Be: planar 4:4:4:4 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA44410Be
	// VideoFormatA44410LE: planar 4:4:4:4 YUV, 10 bits per channel (Since:
	// 1.6).
	VideoFormatA44410LE
	// VideoFormatNv61: planar 4:2:2 YUV with interleaved VU plane (Since: 1.6).
	VideoFormatNv61
	// VideoFormatP01010Be: planar 4:2:0 YUV with interleaved UV plane, 10 bits
	// per channel (Since: 1.10).
	VideoFormatP01010Be
	// VideoFormatP01010LE: planar 4:2:0 YUV with interleaved UV plane, 10 bits
	// per channel (Since: 1.10).
	VideoFormatP01010LE
	// VideoFormatIyu2: packed 4:4:4 YUV (U-Y-V ...) (Since: 1.10).
	VideoFormatIyu2
	// VideoFormatVyuy: packed 4:2:2 YUV (V0-Y0-U0-Y1 V2-Y2-U2-Y3 V4 ...).
	VideoFormatVyuy
	// VideoFormatGbra: planar 4:4:4:4 ARGB, 8 bits per channel (Since: 1.12).
	VideoFormatGbra
	// VideoFormatGbra10Be: planar 4:4:4:4 ARGB, 10 bits per channel (Since:
	// 1.12).
	VideoFormatGbra10Be
	// VideoFormatGbra10LE: planar 4:4:4:4 ARGB, 10 bits per channel (Since:
	// 1.12).
	VideoFormatGbra10LE
	// VideoFormatGbr12Be: planar 4:4:4 RGB, 12 bits per channel (Since: 1.12).
	VideoFormatGbr12Be
	// VideoFormatGbr12LE: planar 4:4:4 RGB, 12 bits per channel (Since: 1.12).
	VideoFormatGbr12LE
	// VideoFormatGbra12Be: planar 4:4:4:4 ARGB, 12 bits per channel (Since:
	// 1.12).
	VideoFormatGbra12Be
	// VideoFormatGbra12LE: planar 4:4:4:4 ARGB, 12 bits per channel (Since:
	// 1.12).
	VideoFormatGbra12LE
	// VideoFormatI42012Be: planar 4:2:0 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatI42012Be
	// VideoFormatI42012LE: planar 4:2:0 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatI42012LE
	// VideoFormatI42212Be: planar 4:2:2 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatI42212Be
	// VideoFormatI42212LE: planar 4:2:2 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatI42212LE
	// VideoFormatY44412Be: planar 4:4:4 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatY44412Be
	// VideoFormatY44412LE: planar 4:4:4 YUV, 12 bits per channel (Since: 1.12).
	VideoFormatY44412LE
	// VideoFormatGray10LE32: 10-bit grayscale, packed into 32bit words (2 bits
	// padding) (Since: 1.14).
	VideoFormatGray10LE32
	// VideoFormatNv1210LE32: 10-bit variant of GST_VIDEO_FORMAT_NV12, packed
	// into 32bit words (MSB 2 bits padding) (Since: 1.14).
	VideoFormatNv1210LE32
	// VideoFormatNv1610LE32: 10-bit variant of GST_VIDEO_FORMAT_NV16, packed
	// into 32bit words (MSB 2 bits padding) (Since: 1.14).
	VideoFormatNv1610LE32
	// VideoFormatNv1210LE40: fully packed variant of NV12_10LE32 (Since: 1.16).
	VideoFormatNv1210LE40
	// VideoFormatY210: packed 4:2:2 YUV, 10 bits per channel (Since: 1.16).
	VideoFormatY210
	// VideoFormatY410: packed 4:4:4 YUV, 10 bits per channel(A-V-Y-U...)
	// (Since: 1.16).
	VideoFormatY410
	// VideoFormatVuya: packed 4:4:4 YUV with alpha channel (V0-U0-Y0-A0...)
	// (Since: 1.16).
	VideoFormatVuya
	// VideoFormatBGR10A2LE: packed 4:4:4 RGB with alpha channel(B-G-R-A), 10
	// bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.16).
	VideoFormatBGR10A2LE
	// VideoFormatRGB10A2LE: packed 4:4:4 RGB with alpha channel(R-G-B-A), 10
	// bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.18).
	VideoFormatRGB10A2LE
	// VideoFormatY44416Be: planar 4:4:4 YUV, 16 bits per channel (Since: 1.18).
	VideoFormatY44416Be
	// VideoFormatY44416LE: planar 4:4:4 YUV, 16 bits per channel (Since: 1.18).
	VideoFormatY44416LE
	// VideoFormatP016Be: planar 4:2:0 YUV with interleaved UV plane, 16 bits
	// per channel (Since: 1.18).
	VideoFormatP016Be
	// VideoFormatP016LE: planar 4:2:0 YUV with interleaved UV plane, 16 bits
	// per channel (Since: 1.18).
	VideoFormatP016LE
	// VideoFormatP012Be: planar 4:2:0 YUV with interleaved UV plane, 12 bits
	// per channel (Since: 1.18).
	VideoFormatP012Be
	// VideoFormatP012LE: planar 4:2:0 YUV with interleaved UV plane, 12 bits
	// per channel (Since: 1.18).
	VideoFormatP012LE
	// VideoFormatY212Be: packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V)
	// (Since: 1.18).
	VideoFormatY212Be
	// VideoFormatY212LE: packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V)
	// (Since: 1.18).
	VideoFormatY212LE
	// VideoFormatY412Be: packed 4:4:4:4 YUV, 12 bits per channel(U-Y-V-A...)
	// (Since: 1.18).
	VideoFormatY412Be
	// VideoFormatY412LE: packed 4:4:4:4 YUV, 12 bits per channel(U-Y-V-A...)
	// (Since: 1.18).
	VideoFormatY412LE
	// VideoFormatNv124L4: NV12 with 4x4 tiles in linear order.
	VideoFormatNv124L4
	// VideoFormatNv1232L32: NV12 with 32x32 tiles in linear order.
	VideoFormatNv1232L32
	// VideoFormatRgbp: planar 4:4:4 RGB, R-G-B order.
	VideoFormatRgbp
	// VideoFormatBgrp: planar 4:4:4 RGB, B-G-R order.
	VideoFormatBgrp
	// VideoFormatAv12: planar 4:2:0 YUV with interleaved UV plane with alpha as
	// 3rd plane.
	VideoFormatAv12
	// VideoFormatARGB64LE: RGB with alpha channel first, 16 bits (little
	// endian) per channel.
	VideoFormatARGB64LE
	// VideoFormatARGB64Be: RGB with alpha channel first, 16 bits (big endian)
	// per channel.
	VideoFormatARGB64Be
	// VideoFormatRGBA64LE: RGB with alpha channel last, 16 bits (little endian)
	// per channel.
	VideoFormatRGBA64LE
	// VideoFormatRGBA64Be: RGB with alpha channel last, 16 bits (big endian)
	// per channel.
	VideoFormatRGBA64Be
	// VideoFormatBgra64LE: reverse RGB with alpha channel last, 16 bits (little
	// endian) per channel.
	VideoFormatBgra64LE
	// VideoFormatBgra64Be: reverse RGB with alpha channel last, 16 bits (big
	// endian) per channel.
	VideoFormatBgra64Be
	// VideoFormatAbgr64LE: reverse RGB with alpha channel first, 16 bits
	// (little endian) per channel.
	VideoFormatAbgr64LE
	// VideoFormatAbgr64Be: reverse RGB with alpha channel first, 16 bits (big
	// endian) per channel.
	VideoFormatAbgr64Be
)

func VideoFormatFromFourcc

func VideoFormatFromFourcc(fourcc uint32) VideoFormat

VideoFormatFromFourcc converts a FOURCC value into the corresponding VideoFormat. If the FOURCC cannot be represented by VideoFormat, GST_VIDEO_FORMAT_UNKNOWN is returned.

The function takes the following parameters:

  • fourcc: FOURCC value representing raw YUV video.

The function returns the following values:

  • videoFormat describing the FOURCC value.

func VideoFormatFromMasks

func VideoFormatFromMasks(depth, bpp, endianness int, redMask, greenMask, blueMask, alphaMask uint) VideoFormat

VideoFormatFromMasks: find the VideoFormat for the given parameters.

The function takes the following parameters:

  • depth: amount of bits used for a pixel.
  • bpp: amount of bits used to store a pixel. This value is bigger than depth.
  • endianness of the masks, LITTLE_ENDIAN or BIG_ENDIAN.
  • redMask: red mask.
  • greenMask: green mask.
  • blueMask: blue mask.
  • alphaMask: alpha mask, or 0 if no alpha mask.

The function returns the following values:

  • videoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do not specify a known format.
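
As an illustration only, a classic 24-bit-in-32-bit RGB layout with red in the most significant byte might be looked up as below. The endianness argument uses the GLib integer values (G_BIG_ENDIAN = 4321, G_LITTLE_ENDIAN = 1234), and which VideoFormat these exact masks resolve to depends on the library's format tables:

// Sketch: 24 bits of color stored in 32-bit words, no alpha.
var f = gstvideo.VideoFormatFromMasks(
	24,         // depth: bits carrying color information
	32,         // bpp: bits stored per pixel
	4321,       // endianness: G_BIG_ENDIAN
	0xff000000, // red mask
	0x00ff0000, // green mask
	0x0000ff00, // blue mask
	0,          // alpha mask: none
)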

func VideoFormatFromString

func VideoFormatFromString(format string) VideoFormat

VideoFormatFromString: convert the format string to its VideoFormat.

The function takes the following parameters:

  • format string.

The function returns the following values:

  • videoFormat for format or GST_VIDEO_FORMAT_UNKNOWN when the string is not a known format.
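
For example, parsing the canonical caps format string for planar 4:2:0 YUV:

// Sketch: "I420" is the standard caps string for VideoFormatI420.
func parseI420() bool {
	return gstvideo.VideoFormatFromString("I420") != gstvideo.VideoFormatUnknown
}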

func VideoFormatsRaw

func VideoFormatsRaw() []VideoFormat

VideoFormatsRaw: return all the raw video formats supported by GStreamer.

The function returns the following values:

  • videoFormats: array of VideoFormat.
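
A small sketch that enumerates them, assuming the standard library fmt package is imported:

// Sketch: print the caps string of every raw format GStreamer supports.
func listRawFormats() {
	for _, f := range gstvideo.VideoFormatsRaw() {
		fmt.Println(f.String())
	}
}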

func (VideoFormat) Palette

func (v VideoFormat) Palette() color.Palette

func (VideoFormat) String

func (v VideoFormat) String() string

String returns the name in string for VideoFormat.

type VideoFormatFlags

type VideoFormatFlags C.guint

VideoFormatFlags: different video flags that a format info can have.

const (
	// VideoFormatFlagYuv: video format is YUV, components are numbered 0=Y,
	// 1=U, 2=V.
	VideoFormatFlagYuv VideoFormatFlags = 0b1
	// VideoFormatFlagRGB: video format is RGB, components are numbered 0=R,
	// 1=G, 2=B.
	VideoFormatFlagRGB VideoFormatFlags = 0b10
	// VideoFormatFlagGray: video is gray, there is one gray component with
	// index 0.
	VideoFormatFlagGray VideoFormatFlags = 0b100
	// VideoFormatFlagAlpha: video format has an alpha components with the
	// number 3.
	VideoFormatFlagAlpha VideoFormatFlags = 0b1000
	// VideoFormatFlagLE: video format has data stored in little endianness.
	VideoFormatFlagLE VideoFormatFlags = 0b10000
	// VideoFormatFlagPalette: video format has a palette. The palette is stored
	// in the second plane and indexes are stored in the first plane.
	VideoFormatFlagPalette VideoFormatFlags = 0b100000
	// VideoFormatFlagComplex: video format has a complex layout that can't be
	// described with the usual information in the VideoFormatInfo.
	VideoFormatFlagComplex VideoFormatFlags = 0b1000000
	// VideoFormatFlagUnpack: this format can be used in a VideoFormatUnpack and
	// VideoFormatPack function.
	VideoFormatFlagUnpack VideoFormatFlags = 0b10000000
	// VideoFormatFlagTiled: format is tiled, there is tiling information in the
	// last plane.
	VideoFormatFlagTiled VideoFormatFlags = 0b100000000
)

func (VideoFormatFlags) Has

func (v VideoFormatFlags) Has(other VideoFormatFlags) bool

Has returns true if v contains other.

func (VideoFormatFlags) String

func (v VideoFormatFlags) String() string

String returns the names in string for VideoFormatFlags.

type VideoFormatInfo

type VideoFormatInfo struct {
	// contains filtered or unexported fields
}

VideoFormatInfo: information for a video format.

An instance of this type is always passed by reference.

func VideoFormatGetInfo

func VideoFormatGetInfo(format VideoFormat) *VideoFormatInfo

VideoFormatGetInfo: get the VideoFormatInfo for format.

The function takes the following parameters:

  • format: VideoFormat.

The function returns the following values:

  • videoFormatInfo for format.

func (*VideoFormatInfo) Component

func (info *VideoFormatInfo) Component(plane uint) int

Component: fill components with the number of all the components packed in the given plane for the format info. A value of -1 in components indicates that no more components are packed in the plane.

The function takes the following parameters:

  • plane number.

The function returns the following values:

  • components: array used to store component numbers.

type VideoFrame

type VideoFrame struct {
	// contains filtered or unexported fields
}

VideoFrame: video frame obtained from gst_video_frame_map()

An instance of this type is always passed by reference.

func VideoFrameMap

func VideoFrameMap(info *VideoInfo, buffer *gst.Buffer, flags gst.MapFlags) (*VideoFrame, bool)

VideoFrameMap: use info and buffer to fill in the values of frame. In C, frame is usually allocated on the stack, and you pass its address to gst_video_frame_map(), which fills in the structure with the various video-specific information you need to access the pixels of the video buffer. You can then use accessor macros such as GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(), GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc. to get to the pixels.

GstVideoFrame vframe;
...
// set RGB pixels to black one at a time
if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
  guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
  guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
  guint width = GST_VIDEO_FRAME_WIDTH (&vframe);
  guint height = GST_VIDEO_FRAME_HEIGHT (&vframe);
  guint h, w;

  for (h = 0; h < height; ++h) {
    for (w = 0; w < width; ++w) {
      guint8 *pixel = pixels + h * stride + w * pixel_stride;

      memset (pixel, 0, pixel_stride);
    }
  }

  gst_video_frame_unmap (&vframe);
}
...

All video planes of buffer will be mapped and the pointers will be set in frame->data.

The purpose of this function is to make it easy for you to get to the video pixels in a generic way, without you having to worry too much about details such as whether the video data is allocated in one contiguous memory chunk or multiple memory chunks (e.g. one for each plane); or if custom strides and custom plane offsets are used or not (as signalled by GstVideoMeta on each buffer). This function will just fill the VideoFrame structure with the right values and if you use the accessor macros everything will just work and you can access the data easily. It also maps the underlying memory chunks for you.

The function takes the following parameters:

  • info: VideoInfo.
  • buffer to map.
  • flags: MapFlags.

The function returns the following values:

  • frame: pointer to VideoFrame.
  • ok: TRUE on success.
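
A rough Go equivalent of the C example above, for a single-plane packed format, using only the VideoFrame accessors documented below. videoInfo and videoBuffer come from the surrounding element, the unsafe package is assumed to be imported, and the exact name of the write map flag (gst.MapFlagsWrite here) is an assumption about the gst binding:

// Sketch: zero every row of plane 0, which yields black for packed RGB data.
func blackOutPlane0(videoInfo *gstvideo.VideoInfo, videoBuffer *gst.Buffer) {
	frame, ok := gstvideo.VideoFrameMap(videoInfo, videoBuffer, gst.MapFlagsWrite)
	if !ok {
		return
	}
	defer frame.Unmap()
	stride := frame.Info().Stride()[0] // bytes from one row to the next
	base := frame.Data()[0]            // first byte of plane 0
	for h := 0; h < frame.Info().Height(); h++ {
		row := unsafe.Slice((*byte)(unsafe.Add(base, h*stride)), stride)
		for i := range row {
			row[i] = 0
		}
	}
}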

func VideoFrameMapID

func VideoFrameMapID(info *VideoInfo, buffer *gst.Buffer, id int, flags gst.MapFlags) (*VideoFrame, bool)

VideoFrameMapID: use info and buffer to fill in the values of frame with the video frame information of frame id.

When id is -1, the default frame is mapped. When id != -1, this function will return FALSE when there is no GstVideoMeta with that id.

All video planes of buffer will be mapped and the pointers will be set in frame->data.

The function takes the following parameters:

  • info: VideoInfo.
  • buffer to map.
  • id: frame id to map.
  • flags: MapFlags.

The function returns the following values:

  • frame: pointer to VideoFrame.
  • ok: TRUE on success.

func (*VideoFrame) Buffer

func (v *VideoFrame) Buffer() *gst.Buffer

Buffer: mapped buffer.

func (*VideoFrame) Copy

func (dest *VideoFrame) Copy(src *VideoFrame) bool

Copy the contents from src to dest.

Note: Since: 1.18, dest dimensions are allowed to be smaller than src dimensions.

The function takes the following parameters:

  • src: VideoFrame.

The function returns the following values:

  • ok: TRUE if the contents could be copied.

func (*VideoFrame) CopyPlane

func (dest *VideoFrame) CopyPlane(src *VideoFrame, plane uint) bool

CopyPlane: copy the plane with index plane from src to dest.

Note: Since: 1.18, dest dimensions are allowed to be smaller than src dimensions.

The function takes the following parameters:

  • src: VideoFrame.
  • plane: plane.

The function returns the following values:

  • ok: TRUE if the contents could be copied.

func (*VideoFrame) Data

func (v *VideoFrame) Data() [4]unsafe.Pointer

Data pointers to the plane data.

func (*VideoFrame) Flags

func (v *VideoFrame) Flags() VideoFrameFlags

Flags for the frame.

func (*VideoFrame) ID

func (v *VideoFrame) ID() int

ID: id of the mapped frame. The id can, for example, be used to identify the frame in case of multiview video.

func (*VideoFrame) Info

func (v *VideoFrame) Info() *VideoInfo

Info: VideoInfo.

func (*VideoFrame) Map

func (v *VideoFrame) Map() [4]gst.MapInfo

Map mappings of the planes.

func (*VideoFrame) Meta

func (v *VideoFrame) Meta() unsafe.Pointer

Meta: pointer to metadata if any.

func (*VideoFrame) SetID

func (v *VideoFrame) SetID(id int)

ID: id of the mapped frame. The id can, for example, be used to identify the frame in case of multiview video.

func (*VideoFrame) Unmap

func (frame *VideoFrame) Unmap()

Unmap the memory previously mapped with gst_video_frame_map.

type VideoFrameFlags

type VideoFrameFlags C.guint

VideoFrameFlags: extra video frame flags.

const (
	// VideoFrameFlagNone: no flags.
	VideoFrameFlagNone VideoFrameFlags = 0b0
	// VideoFrameFlagInterlaced: video frame is interlaced. In mixed
	// interlace-mode, this flag specifies if the frame is interlaced or
	// progressive.
	VideoFrameFlagInterlaced VideoFrameFlags = 0b1
	// VideoFrameFlagTff: video frame has the top field first.
	VideoFrameFlagTff VideoFrameFlags = 0b10
	// VideoFrameFlagRff: video frame has the repeat flag.
	VideoFrameFlagRff VideoFrameFlags = 0b100
	// VideoFrameFlagOnefield: video frame has one field.
	VideoFrameFlagOnefield VideoFrameFlags = 0b1000
	// VideoFrameFlagMultipleView: video contains one or more non-mono views.
	VideoFrameFlagMultipleView VideoFrameFlags = 0b10000
	// VideoFrameFlagFirstInBundle: video frame is the first in a set of
	// corresponding views provided as sequential frames.
	VideoFrameFlagFirstInBundle VideoFrameFlags = 0b100000
	// VideoFrameFlagTopField: video frame has the top field only. This is the
	// same as GST_VIDEO_FRAME_FLAG_TFF | GST_VIDEO_FRAME_FLAG_ONEFIELD (Since:
	// 1.16).
	VideoFrameFlagTopField VideoFrameFlags = 0b1010
	// VideoFrameFlagBottomField: video frame has the bottom field only. This is
	// the same as GST_VIDEO_FRAME_FLAG_ONEFIELD (GST_VIDEO_FRAME_FLAG_TFF flag
	// unset) (Since: 1.16).
	VideoFrameFlagBottomField VideoFrameFlags = 0b1000
)

func (VideoFrameFlags) Has

func (v VideoFrameFlags) Has(other VideoFrameFlags) bool

Has returns true if v contains other.

func (VideoFrameFlags) String

func (v VideoFrameFlags) String() string

String returns the names in string for VideoFrameFlags.

type VideoFrameMapFlags

type VideoFrameMapFlags C.guint

VideoFrameMapFlags: additional mapping flags for gst_video_frame_map().

const (
	// VideoFrameMapFlagNoRef: don't take another reference of the buffer and
	// store it in the GstVideoFrame. This makes sure that the buffer stays
	// writable while the frame is mapped, but requires that the buffer
	// reference stays valid until the frame is unmapped again.
	VideoFrameMapFlagNoRef VideoFrameMapFlags = 0b10000000000000000
	// VideoFrameMapFlagLast: offset to define more flags.
	VideoFrameMapFlagLast VideoFrameMapFlags = 0b1000000000000000000000000
)

func (VideoFrameMapFlags) Has

func (v VideoFrameMapFlags) Has(other VideoFrameMapFlags) bool

Has returns true if v contains other.

func (VideoFrameMapFlags) String

func (v VideoFrameMapFlags) String() string

String returns the names in string for VideoFrameMapFlags.

type VideoGLTextureOrientation

type VideoGLTextureOrientation C.gint

VideoGLTextureOrientation: orientation of the GL texture.

const (
	// VideoGLTextureOrientationXNormalYNormal: top line first in memory, left
	// row first.
	VideoGLTextureOrientationXNormalYNormal VideoGLTextureOrientation = iota
	// VideoGLTextureOrientationXNormalYFlip: bottom line first in memory, left
	// row first.
	VideoGLTextureOrientationXNormalYFlip
	// VideoGLTextureOrientationXFlipYNormal: top line first in memory, right
	// row first.
	VideoGLTextureOrientationXFlipYNormal
	// VideoGLTextureOrientationXFlipYFlip: bottom line first in memory, right
	// row first.
	VideoGLTextureOrientationXFlipYFlip
)

func (VideoGLTextureOrientation) String

func (v VideoGLTextureOrientation) String() string

String returns the name in string for VideoGLTextureOrientation.

type VideoGLTextureType

type VideoGLTextureType C.gint

VideoGLTextureType: GL texture type.

const (
	// VideoGLTextureTypeLuminance: luminance texture, GL_LUMINANCE.
	VideoGLTextureTypeLuminance VideoGLTextureType = iota
	// VideoGLTextureTypeLuminanceAlpha: luminance-alpha texture,
	// GL_LUMINANCE_ALPHA.
	VideoGLTextureTypeLuminanceAlpha
	// VideoGLTextureTypeRGB16: RGB 565 texture, GL_RGB.
	VideoGLTextureTypeRGB16
	// VideoGLTextureTypeRGB: RGB texture, GL_RGB.
	VideoGLTextureTypeRGB
	// VideoGLTextureTypeRGBA: RGBA texture, GL_RGBA.
	VideoGLTextureTypeRGBA
	// VideoGLTextureTypeR: r texture, GL_RED_EXT.
	VideoGLTextureTypeR
	// VideoGLTextureTypeRg: RG texture, GL_RG_EXT.
	VideoGLTextureTypeRg
)

func (VideoGLTextureType) String

func (v VideoGLTextureType) String() string

String returns the name in string for VideoGLTextureType.

type VideoGLTextureUploadMeta

type VideoGLTextureUploadMeta struct {
	// contains filtered or unexported fields
}

VideoGLTextureUploadMeta: extra buffer metadata for uploading a buffer to an OpenGL texture ID. The caller of gst_video_gl_texture_upload_meta_upload() must have OpenGL set up and call this from a thread where it is valid to upload something to an OpenGL texture.

An instance of this type is always passed by reference.

func (*VideoGLTextureUploadMeta) Meta

func (v *VideoGLTextureUploadMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoGLTextureUploadMeta) NTextures

func (v *VideoGLTextureUploadMeta) NTextures() uint

NTextures: number of textures that are generated.

func (*VideoGLTextureUploadMeta) SetNTextures

func (v *VideoGLTextureUploadMeta) SetNTextures(nTextures uint)

NTextures: number of textures that are generated.

func (*VideoGLTextureUploadMeta) TextureOrientation

func (v *VideoGLTextureUploadMeta) TextureOrientation() VideoGLTextureOrientation

TextureOrientation: orientation of the textures.

func (*VideoGLTextureUploadMeta) TextureType

func (v *VideoGLTextureUploadMeta) TextureType() [4]VideoGLTextureType

TextureType: type of each texture.

func (*VideoGLTextureUploadMeta) Upload

func (meta *VideoGLTextureUploadMeta) Upload(textureId *uint) bool

Upload uploads the buffer which owns the meta to a specific texture ID.

The function takes the following parameters:

  • textureId: texture IDs to upload to.

The function returns the following values:

  • ok: TRUE if uploading succeeded, FALSE otherwise.

type VideoGammaMode

type VideoGammaMode C.gint

VideoGammaMode: different gamma conversion modes.

const (
	// VideoGammaModeNone: disable gamma handling.
	VideoGammaModeNone VideoGammaMode = iota
	// VideoGammaModeRemap: convert between input and output gamma.
	VideoGammaModeRemap
)

func (VideoGammaMode) String

func (v VideoGammaMode) String() string

String returns the name in string for VideoGammaMode.

type VideoInfo

type VideoInfo struct {
	// contains filtered or unexported fields
}

VideoInfo: information describing image properties. This information can be filled in from GstCaps with gst_video_info_from_caps(). The information is also used to store the specific video info when mapping a video frame with gst_video_frame_map().

Use the provided macros to access the info in this structure.

An instance of this type is always passed by reference.

func NewVideoInfo

func NewVideoInfo() *VideoInfo

NewVideoInfo constructs a struct VideoInfo.

func NewVideoInfoFromCaps

func NewVideoInfoFromCaps(caps *gst.Caps) *VideoInfo

NewVideoInfoFromCaps constructs a struct VideoInfo.

func VideoBlendScaleLinearRGBA

func VideoBlendScaleLinearRGBA(src *VideoInfo, srcBuffer *gst.Buffer, destHeight, destWidth int) (*VideoInfo, *gst.Buffer)

VideoBlendScaleLinearRGBA scales a buffer containing RGBA (or AYUV) video. This is an internal helper function which is used to scale subtitle overlays, and may be deprecated in the near future. Use VideoScaler to scale video buffers instead.

The function takes the following parameters:

  • src describing the video data in src_buffer.
  • srcBuffer: source buffer containing video pixels to scale.
  • destHeight: height in pixels to scale the video data in src_buffer to.
  • destWidth: width in pixels to scale the video data in src_buffer to.

The function returns the following values:

  • dest: pointer to a VideoInfo structure that will be filled in with the details for dest_buffer.
  • destBuffer: pointer to a Buffer variable, which will be set to a newly-allocated buffer containing the scaled pixels.

func VideoInfoFromCaps

func VideoInfoFromCaps(caps *gst.Caps) (*VideoInfo, bool)

VideoInfoFromCaps: parse caps and update info.

The function takes the following parameters:

  • caps: Caps.

The function returns the following values:

  • info: VideoInfo.
  • ok: TRUE if caps could be parsed.

func VideoInfoInit

func VideoInfoInit() *VideoInfo

VideoInfoInit: initialize info with default values.

The function returns the following values:

  • info: VideoInfo.

func (*VideoInfo) Align

func (info *VideoInfo) Align(align *VideoAlignment) bool

Align: adjust the offset and stride fields in info so that the padding and stride alignment in align is respected.

Extra padding will be added to the right side when stride alignment padding is required and align will be updated with the new padding values.

The function takes the following parameters:

  • align: alignment parameters.

The function returns the following values:

  • ok: FALSE if alignment could not be applied, e.g. because the size of a frame can't be represented as a 32 bit integer (Since: 1.12).

func (*VideoInfo) AlignFull

func (info *VideoInfo) AlignFull(align *VideoAlignment) (uint, bool)

AlignFull: extra padding will be added to the right side when stride alignment padding is required and align will be updated with the new padding values.

This variant of gst_video_info_align() provides the updated size, in bytes, of each video plane after the alignment, including all horizontal and vertical paddings.

In case of GST_VIDEO_INTERLACE_MODE_ALTERNATE info, the returned sizes are the ones used to hold a single field, not the full frame.

The function takes the following parameters:

  • align: alignment parameters.

The function returns the following values:

  • planeSize (optional): array used to store the plane sizes.
  • ok: FALSE if alignment could not be applied, e.g. because the size of a frame can't be represented as a 32 bit integer.

func (*VideoInfo) ChromaSite

func (v *VideoInfo) ChromaSite() VideoChromaSite

ChromaSite: VideoChromaSite.

func (*VideoInfo) Colorimetry

func (v *VideoInfo) Colorimetry() *VideoColorimetry

Colorimetry: colorimetry info.

func (*VideoInfo) Convert

func (info *VideoInfo) Convert(srcFormat gst.Format, srcValue int64, destFormat gst.Format) (int64, bool)

Convert converts among various Format types. This function handles GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For raw video, GST_FORMAT_DEFAULT corresponds to video frames. This function can be used to handle pad queries of the type GST_QUERY_CONVERT.

The function takes the following parameters:

  • srcFormat of the src_value.
  • srcValue: value to convert.
  • destFormat of the dest_value.

The function returns the following values:

  • destValue: pointer to destination value.
  • ok: TRUE if the conversion was successful.
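
For example, a pad query handler might convert a byte offset into a timestamp like this; the gst.FormatBytes and gst.FormatTime constant names are assumptions about the gst binding:

// Sketch: bytes-to-time conversion for raw video described by info.
func bytesToTime(info *gstvideo.VideoInfo, byteOffset int64) (int64, bool) {
	return info.Convert(gst.FormatBytes, byteOffset, gst.FormatTime)
}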

func (*VideoInfo) Copy

func (info *VideoInfo) Copy() *VideoInfo

Copy a GstVideoInfo structure.

The function returns the following values:

  • videoInfo: new VideoInfo. free with gst_video_info_free.

func (*VideoInfo) FPSD

func (v *VideoInfo) FPSD() int

FPSD: framerate denominator.

func (*VideoInfo) FPSN

func (v *VideoInfo) FPSN() int

FPSN: framerate numerator.

func (*VideoInfo) Finfo

func (v *VideoInfo) Finfo() *VideoFormatInfo

Finfo: format info of the video.

func (*VideoInfo) Flags

func (v *VideoInfo) Flags() VideoFlags

Flags: additional video flags.

func (*VideoInfo) Height

func (v *VideoInfo) Height() int

Height: height of the video.

func (*VideoInfo) InterlaceMode

func (v *VideoInfo) InterlaceMode() VideoInterlaceMode

InterlaceMode: interlace mode.

func (*VideoInfo) IsEqual

func (info *VideoInfo) IsEqual(other *VideoInfo) bool

IsEqual compares two VideoInfo and returns whether they are equal or not.

The function takes the following parameters:

  • other: VideoInfo.

The function returns the following values:

  • ok: TRUE if info and other are equal, else FALSE.

func (*VideoInfo) Offset

func (v *VideoInfo) Offset() [4]uint

Offset offsets of the planes.

func (*VideoInfo) ParD

func (v *VideoInfo) ParD() int

ParD: pixel-aspect-ratio denominator.

func (*VideoInfo) ParN

func (v *VideoInfo) ParN() int

ParN: pixel-aspect-ratio numerator.

func (*VideoInfo) SetFPSD

func (v *VideoInfo) SetFPSD(fpsD int)

FPSD: framerate denominator.

func (*VideoInfo) SetFPSN

func (v *VideoInfo) SetFPSN(fpsN int)

FPSN: framerate numerator.

func (*VideoInfo) SetFormat

func (info *VideoInfo) SetFormat(format VideoFormat, width uint, height uint) bool

SetFormat: set the default info for a video frame of format and width and height.

Note: This initializes info first, no values are preserved. This function does not set the offsets correctly for interlaced vertically subsampled formats.

The function takes the following parameters:

  • format: format.
  • width: width.
  • height: height.

The function returns the following values:

  • ok: FALSE if the returned video info is invalid, e.g. because the size of a frame can't be represented as a 32 bit integer (Since: 1.12).
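
A minimal sketch tying this together with ToCaps:

// Sketch: describe 1280x720 I420 video and derive caps from it.
func i420Caps() *gst.Caps {
	info := gstvideo.NewVideoInfo()
	if !info.SetFormat(gstvideo.VideoFormatI420, 1280, 720) {
		return nil // frame size not representable
	}
	return info.ToCaps()
}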

func (*VideoInfo) SetHeight

func (v *VideoInfo) SetHeight(height int)

Height: height of the video.

func (*VideoInfo) SetInterlacedFormat

func (info *VideoInfo) SetInterlacedFormat(format VideoFormat, mode VideoInterlaceMode, width uint, height uint) bool

SetInterlacedFormat: same as #gst_video_info_set_format but also allows setting the interlace mode.

The function takes the following parameters:

  • format: format.
  • mode: VideoInterlaceMode.
  • width: width.
  • height: height.

The function returns the following values:

  • ok: FALSE if the returned video info is invalid, e.g. because the size of a frame can't be represented as a 32 bit integer.

func (*VideoInfo) SetParD

func (v *VideoInfo) SetParD(parD int)

ParD: pixel-aspect-ratio denominator.

func (*VideoInfo) SetParN

func (v *VideoInfo) SetParN(parN int)

ParN: pixel-aspect-ratio numerator.

func (*VideoInfo) SetSize

func (v *VideoInfo) SetSize(size uint)

Size: default size of one frame.

func (*VideoInfo) SetViews

func (v *VideoInfo) SetViews(views int)

Views: number of views for multiview video.

func (*VideoInfo) SetWidth

func (v *VideoInfo) SetWidth(width int)

Width: width of the video.

func (*VideoInfo) Size

func (v *VideoInfo) Size() uint

Size: default size of one frame.

func (*VideoInfo) Stride

func (v *VideoInfo) Stride() [4]int

Stride strides of the planes.

func (*VideoInfo) ToCaps

func (info *VideoInfo) ToCaps() *gst.Caps

ToCaps: convert the values of info into a Caps.

The function returns the following values:

  • caps: new Caps containing the info of info.

func (*VideoInfo) Views

func (v *VideoInfo) Views() int

Views: number of views for multiview video.

func (*VideoInfo) Width

func (v *VideoInfo) Width() int

Width: width of the video.

type VideoInterlaceMode

type VideoInterlaceMode C.gint

VideoInterlaceMode: possible values of the VideoInterlaceMode describing the interlace mode of the stream.

const (
	// VideoInterlaceModeProgressive: all frames are progressive.
	VideoInterlaceModeProgressive VideoInterlaceMode = iota
	// VideoInterlaceModeInterleaved: 2 fields are interleaved in one video
	// frame. Extra buffer flags describe the field order.
	VideoInterlaceModeInterleaved
	// VideoInterlaceModeMixed frames contains both interlaced and progressive
	// video, the buffer flags describe the frame and fields.
	VideoInterlaceModeMixed
	// VideoInterlaceModeFields: 2 fields are stored in one buffer, use the
	// frame ID to get access to the required field. For multiview (the 'views'
	// property > 1) the fields of view N can be found at frame ID (N * 2) and
	// (N * 2) + 1. Each field has only half the amount of lines as noted in the
	// height property. This mode requires multiple GstVideoMeta metadata to
	// describe the fields.
	VideoInterlaceModeFields
	// VideoInterlaceModeAlternate: 1 field is stored in one buffer,
	// GST_VIDEO_BUFFER_FLAG_TF or GST_VIDEO_BUFFER_FLAG_BF indicates if the
	// buffer is carrying the top or bottom field, respectively. With this
	// mode, the top and bottom buffers must alternate in the pipeline
	// (Since: 1.16).
	VideoInterlaceModeAlternate
)

func VideoInterlaceModeFromString

func VideoInterlaceModeFromString(mode string) VideoInterlaceMode

VideoInterlaceModeFromString: convert mode to a VideoInterlaceMode.

The function takes the following parameters:

  • mode: mode.

The function returns the following values:

  • videoInterlaceMode of mode or GST_VIDEO_INTERLACE_MODE_PROGRESSIVE when mode is not a valid string representation for a VideoInterlaceMode.

func (VideoInterlaceMode) String

func (v VideoInterlaceMode) String() string

String returns the name in string for VideoInterlaceMode.

type VideoMasteringDisplayInfo

type VideoMasteringDisplayInfo struct {
	// contains filtered or unexported fields
}

VideoMasteringDisplayInfo: mastering display color volume information defined by SMPTE ST 2086 (a.k.a static HDR metadata).

An instance of this type is always passed by reference.

func VideoMasteringDisplayInfoFromString

func VideoMasteringDisplayInfoFromString(mastering string) (*VideoMasteringDisplayInfo, bool)

VideoMasteringDisplayInfoFromString: extract VideoMasteringDisplayInfo from mastering.

The function takes the following parameters:

  • mastering representing VideoMasteringDisplayInfo.

The function returns the following values:

  • minfo: VideoMasteringDisplayInfo.
  • ok: TRUE if minfo was filled with mastering.

func (*VideoMasteringDisplayInfo) AddToCaps

func (minfo *VideoMasteringDisplayInfo) AddToCaps(caps *gst.Caps) bool

AddToCaps: set string representation of minfo to caps.

The function takes the following parameters:

  • caps: Caps.

The function returns the following values:

  • ok: TRUE if minfo was successfully set to caps.

func (*VideoMasteringDisplayInfo) DisplayPrimaries

DisplayPrimaries: xy coordinates of the primaries in the CIE 1931 color space. Index 0 contains red, 1 green, and 2 blue. Each value is normalized to 50000 (i.e. expressed in units of 0.00002).

func (*VideoMasteringDisplayInfo) FromCaps

func (minfo *VideoMasteringDisplayInfo) FromCaps(caps *gst.Caps) bool

FromCaps: parse caps and update minfo.

The function takes the following parameters:

  • caps: Caps.

The function returns the following values:

  • ok: TRUE if caps has VideoMasteringDisplayInfo and could be parsed.

func (*VideoMasteringDisplayInfo) Init

func (minfo *VideoMasteringDisplayInfo) Init()

Init: initialize minfo.

func (*VideoMasteringDisplayInfo) IsEqual

func (minfo *VideoMasteringDisplayInfo) IsEqual(other *VideoMasteringDisplayInfo) bool

IsEqual checks equality between minfo and other.

The function takes the following parameters:

  • other: VideoMasteringDisplayInfo.

The function returns the following values:

  • ok: TRUE if minfo and other are equal.

func (*VideoMasteringDisplayInfo) MaxDisplayMasteringLuminance

func (v *VideoMasteringDisplayInfo) MaxDisplayMasteringLuminance() uint32

MaxDisplayMasteringLuminance: maximum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2, also known as nit).

func (*VideoMasteringDisplayInfo) MinDisplayMasteringLuminance

func (v *VideoMasteringDisplayInfo) MinDisplayMasteringLuminance() uint32

MinDisplayMasteringLuminance: minimum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2, also known as nit).

func (*VideoMasteringDisplayInfo) SetMaxDisplayMasteringLuminance

func (v *VideoMasteringDisplayInfo) SetMaxDisplayMasteringLuminance(maxDisplayMasteringLuminance uint32)

MaxDisplayMasteringLuminance: maximum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2, also known as nit).

func (*VideoMasteringDisplayInfo) SetMinDisplayMasteringLuminance

func (v *VideoMasteringDisplayInfo) SetMinDisplayMasteringLuminance(minDisplayMasteringLuminance uint32)

MinDisplayMasteringLuminance: minimum value of display luminance in units of 0.0001 candelas per square metre (cd/m^2, also known as nit).

func (*VideoMasteringDisplayInfo) String

func (minfo *VideoMasteringDisplayInfo) String() string

String: convert minfo to its string representation.

The function returns the following values:

  • utf8: string representation of minfo.

func (*VideoMasteringDisplayInfo) WhitePoint

WhitePoint: xy coordinates of the white point in the CIE 1931 color space. Each value is normalized to 50000 (i.e. expressed in units of 0.00002).

type VideoMasteringDisplayInfoCoordinates

type VideoMasteringDisplayInfoCoordinates struct {
	// contains filtered or unexported fields
}

VideoMasteringDisplayInfoCoordinates: used to represent the display_primaries and white_point fields of the VideoMasteringDisplayInfo struct. See VideoMasteringDisplayInfo.

An instance of this type is always passed by reference.

func NewVideoMasteringDisplayInfoCoordinates

func NewVideoMasteringDisplayInfoCoordinates(x, y uint16) VideoMasteringDisplayInfoCoordinates

NewVideoMasteringDisplayInfoCoordinates creates a new VideoMasteringDisplayInfoCoordinates instance from the given fields. Beware that this function allocates on the Go heap; be careful when using it!

func (*VideoMasteringDisplayInfoCoordinates) SetX

func (v *VideoMasteringDisplayInfoCoordinates) SetX(x uint16)

X: x coordinate of CIE 1931 color space in units of 0.00002.

func (*VideoMasteringDisplayInfoCoordinates) SetY

func (v *VideoMasteringDisplayInfoCoordinates) SetY(y uint16)

Y: y coordinate of CIE 1931 color space in units of 0.00002.

func (*VideoMasteringDisplayInfoCoordinates) X

func (v *VideoMasteringDisplayInfoCoordinates) X() uint16

X: x coordinate of CIE 1931 color space in units of 0.00002.

func (*VideoMasteringDisplayInfoCoordinates) Y

func (v *VideoMasteringDisplayInfoCoordinates) Y() uint16

Y: y coordinate of CIE 1931 color space in units of 0.00002.

type VideoMatrixMode

type VideoMatrixMode C.gint

VideoMatrixMode: different color matrix conversion modes.

const (
	// VideoMatrixModeFull: do conversion between color matrices.
	VideoMatrixModeFull VideoMatrixMode = iota
	// VideoMatrixModeInputOnly: use the input color matrix to convert to and
	// from R'G'B.
	VideoMatrixModeInputOnly
	// VideoMatrixModeOutputOnly: use the output color matrix to convert to and
	// from R'G'B.
	VideoMatrixModeOutputOnly
	// VideoMatrixModeNone: disable color matrix conversion.
	VideoMatrixModeNone
)

func (VideoMatrixMode) String

func (v VideoMatrixMode) String() string

String returns the name in string for VideoMatrixMode.

type VideoMeta

type VideoMeta struct {
	// contains filtered or unexported fields
}

VideoMeta: extra buffer metadata describing image properties

This meta can also be used by downstream elements to specify their buffer layout requirements for upstream. Upstream should try to fit those requirements, if possible, in order to prevent buffer copies.

This is done by passing a custom Structure to gst_query_add_allocation_meta() when handling the ALLOCATION query. This structure should be named 'video-meta' and can have the following fields:

- padding-top (uint): extra pixels on the top

- padding-bottom (uint): extra pixels on the bottom

- padding-left (uint): extra pixels on the left side

- padding-right (uint): extra pixels on the right side

The padding fields have the same semantics as VideoMeta.alignment and so represent the paddings requested on produced video buffers.

An instance of this type is always passed by reference.

func BufferAddVideoMeta

func BufferAddVideoMeta(buffer *gst.Buffer, flags VideoFrameFlags, format VideoFormat, width, height uint) *VideoMeta

BufferAddVideoMeta attaches GstVideoMeta metadata to buffer with the given parameters and the default offsets and strides for format and width x height.

This function calculates the default offsets and strides and then calls gst_buffer_add_video_meta_full() with them.

The function takes the following parameters:

  • buffer: Buffer.
  • flags: VideoFrameFlags.
  • format: VideoFormat.
  • width: width.
  • height: height.

The function returns the following values:

  • videoMeta on buffer.
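
A small sketch, attaching default NV12 layout metadata to a caller-owned buffer:

// Sketch: tag buf as carrying 1920x1080 NV12 with default offsets/strides.
func tagNV12(buf *gst.Buffer) *gstvideo.VideoMeta {
	return gstvideo.BufferAddVideoMeta(buf,
		gstvideo.VideoFrameFlagNone, gstvideo.VideoFormatNv12, 1920, 1080)
}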

func BufferAddVideoMetaFull

func BufferAddVideoMetaFull(buffer *gst.Buffer, flags VideoFrameFlags, format VideoFormat, width, height, nPlanes uint, offset [4]uint, stride [4]int) *VideoMeta

BufferAddVideoMetaFull attaches GstVideoMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • flags: VideoFrameFlags.
  • format: VideoFormat.
  • width: width.
  • height: height.
  • nPlanes: number of planes.
  • offset of each plane.
  • stride of each plane.

The function returns the following values:

  • videoMeta on buffer.

func BufferGetVideoMeta

func BufferGetVideoMeta(buffer *gst.Buffer) *VideoMeta

BufferGetVideoMeta: find the VideoMeta on buffer with the lowest id.

Buffers can contain multiple VideoMeta metadata items when dealing with multiview buffers.

The function takes the following parameters:

  • buffer: Buffer.

The function returns the following values:

  • videoMeta with lowest id (usually 0) or NULL when there is no such metadata on buffer.

func BufferGetVideoMetaID

func BufferGetVideoMetaID(buffer *gst.Buffer, id int) *VideoMeta

BufferGetVideoMetaID: find the VideoMeta on buffer with the given id.

Buffers can contain multiple VideoMeta metadata items when dealing with multiview buffers.

The function takes the following parameters:

  • buffer: Buffer.
  • id: metadata id.

The function returns the following values:

  • videoMeta with id or NULL when there is no such metadata on buffer.

func (*VideoMeta) Alignment

func (v *VideoMeta) Alignment() *VideoAlignment

Alignment: paddings and alignment constraints of the video buffer. It is up to the caller of gst_buffer_add_video_meta_full() to set it using gst_video_meta_set_alignment(); if they do not, it defaults to no padding and no alignment. Since: 1.18.

func (*VideoMeta) Buffer

func (v *VideoMeta) Buffer() *gst.Buffer

Buffer: buffer this metadata belongs to.

func (*VideoMeta) Flags

func (v *VideoMeta) Flags() VideoFrameFlags

Flags: additional video flags.

func (*VideoMeta) Format

func (v *VideoMeta) Format() VideoFormat

Format: video format.

func (*VideoMeta) Height

func (v *VideoMeta) Height() uint

Height: video height.

func (*VideoMeta) ID

func (v *VideoMeta) ID() int

ID: identifier of the frame.

func (*VideoMeta) Map

func (meta *VideoMeta) Map(plane uint, info *gst.MapInfo, flags gst.MapFlags) (unsafe.Pointer, int, bool)

Map the video plane with index plane in meta and return a pointer to the first byte of the plane and the stride of the plane.

The function takes the following parameters:

  • plane: plane.
  • info: MapInfo.
  • flags: GstMapFlags.

The function returns the following values:

  • data (optional) of plane.
  • stride of plane.
  • ok: TRUE if the map operation was successful.
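A hedged sketch of mapping one plane for reading; gst.MapRead is assumed to be the core binding's read flag.

func readPlane(meta *gstvideo.VideoMeta, plane uint) {
	var info gst.MapInfo
	data, stride, ok := meta.Map(plane, &info, gst.MapRead)
	if !ok {
		return
	}
	defer meta.Unmap(plane, &info) // release the mapping when done
	_ = data                       // unsafe.Pointer to the first byte of the plane
	_ = stride                     // bytes per row in this plane
}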

func (*VideoMeta) Meta

func (v *VideoMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoMeta) NPlanes

func (v *VideoMeta) NPlanes() uint

NPlanes: number of planes in the image.

func (*VideoMeta) Offset

func (v *VideoMeta) Offset() [4]uint

Offset: array of offsets for the planes. This field might not always be valid, it is used by the default implementation of map.

func (*VideoMeta) PlaneHeight

func (meta *VideoMeta) PlaneHeight() ([4]uint, bool)

PlaneHeight: compute the padded height of each plane from meta (padded size divided by stride).

It is not valid to call this function with a meta associated to a TILED video format.

The function returns the following values:

  • planeHeight: array used to store the plane height.
  • ok: TRUE if meta's alignment is valid and plane_height has been updated, FALSE otherwise.

func (*VideoMeta) PlaneSize

func (meta *VideoMeta) PlaneSize() ([4]uint, bool)

PlaneSize: compute the size, in bytes, of each video plane described in meta including any padding and alignment constraint defined in meta->alignment.

The function returns the following values:

  • planeSize: array used to store the plane sizes.
  • ok: TRUE if meta's alignment is valid and plane_size has been updated, FALSE otherwise.

func (*VideoMeta) SetAlignment

func (meta *VideoMeta) SetAlignment(alignment *VideoAlignment) bool

SetAlignment: set the alignment of meta to alignment. This function checks that the paddings defined in alignment are compatible with the strides defined in meta and will fail to update if they are not.

The function takes the following parameters:

  • alignment: VideoAlignment.

The function returns the following values:

  • ok: TRUE if meta's alignment has been updated, FALSE if not.

func (*VideoMeta) SetHeight

func (v *VideoMeta) SetHeight(height uint)

Height: video height.

func (*VideoMeta) SetID

func (v *VideoMeta) SetID(id int)

ID: identifier of the frame.

func (*VideoMeta) SetNPlanes

func (v *VideoMeta) SetNPlanes(nPlanes uint)

NPlanes: number of planes in the image.

func (*VideoMeta) SetWidth

func (v *VideoMeta) SetWidth(width uint)

Width: video width.

func (*VideoMeta) Stride

func (v *VideoMeta) Stride() [4]int

Stride: array of strides for the planes. This field might not always be valid, it is used by the default implementation of map.

func (*VideoMeta) Unmap

func (meta *VideoMeta) Unmap(plane uint, info *gst.MapInfo) bool

Unmap a previously mapped plane with gst_video_meta_map().

The function takes the following parameters:

  • plane: plane.
  • info: MapInfo.

The function returns the following values:

  • ok: TRUE if the memory was successfully unmapped.

func (*VideoMeta) Width

func (v *VideoMeta) Width() uint

Width: video width.

type VideoMetaTransform

type VideoMetaTransform struct {
	// contains filtered or unexported fields
}

VideoMetaTransform: extra data passed to a video transform MetaTransformFunction such as "gst-video-scale".

An instance of this type is always passed by reference.

func (*VideoMetaTransform) InInfo

func (v *VideoMetaTransform) InInfo() *VideoInfo

InInfo: input VideoInfo.

func (*VideoMetaTransform) OutInfo

func (v *VideoMetaTransform) OutInfo() *VideoInfo

OutInfo: output VideoInfo.

type VideoMultiviewFlags

type VideoMultiviewFlags C.guint

VideoMultiviewFlags are used to indicate extra properties of a stereo/multiview stream beyond the frame layout and buffer mapping that is conveyed in the VideoMultiviewMode.

const (
	// VideoMultiviewFlagsNone: no flags.
	VideoMultiviewFlagsNone VideoMultiviewFlags = 0b0
	// VideoMultiviewFlagsRightViewFirst: for stereo streams, the normal
	// arrangement of left and right views is reversed.
	VideoMultiviewFlagsRightViewFirst VideoMultiviewFlags = 0b1
	// VideoMultiviewFlagsLeftFlipped: left view is vertically mirrored.
	VideoMultiviewFlagsLeftFlipped VideoMultiviewFlags = 0b10
	// VideoMultiviewFlagsLeftFlopped: left view is horizontally mirrored.
	VideoMultiviewFlagsLeftFlopped VideoMultiviewFlags = 0b100
	// VideoMultiviewFlagsRightFlipped: right view is vertically mirrored.
	VideoMultiviewFlagsRightFlipped VideoMultiviewFlags = 0b1000
	// VideoMultiviewFlagsRightFlopped: right view is horizontally mirrored.
	VideoMultiviewFlagsRightFlopped VideoMultiviewFlags = 0b10000
	// VideoMultiviewFlagsHalfAspect: for frame-packed multiview modes,
	// indicates that the individual views have been encoded with half the true
	// width or height and should be scaled back up for display. This flag is
	// used for overriding input layout interpretation by adjusting
	// pixel-aspect-ratio. For side-by-side, column interleaved or checkerboard
	// packings, the pixel width will be doubled. For row interleaved and
	// top-bottom encodings, pixel height will be doubled.
	VideoMultiviewFlagsHalfAspect VideoMultiviewFlags = 0b100000000000000
	// VideoMultiviewFlagsMixedMono: video stream contains both mono and
	// multiview portions, signalled on each buffer by the absence or presence
	// of the GST_VIDEO_BUFFER_FLAG_MULTIPLE_VIEW buffer flag.
	VideoMultiviewFlagsMixedMono VideoMultiviewFlags = 0b1000000000000000
)

func (VideoMultiviewFlags) Has

func (v VideoMultiviewFlags) Has(other VideoMultiviewFlags) bool

Has returns true if v contains other.

func (VideoMultiviewFlags) String

func (v VideoMultiviewFlags) String() string

String returns the names in string for VideoMultiviewFlags.

type VideoMultiviewFlagsSet

type VideoMultiviewFlagsSet struct {
	gst.FlagSet
	// contains filtered or unexported fields
}

VideoMultiviewFlagsSet: see VideoMultiviewFlags.

type VideoMultiviewFramePacking

type VideoMultiviewFramePacking C.gint

VideoMultiviewFramePacking represents the subset of VideoMultiviewMode values that can be applied to any video frame without needing extra metadata. It can be used by elements that provide a property to override the multiview interpretation of a video stream when the video doesn't contain any markers.

This enum is used (for example) on playbin, to re-interpret a played video stream as a stereoscopic video. The individual enum values are equivalent to and have the same value as the matching VideoMultiviewMode.

const (
	// VideoMultiviewFramePackingNone: special value indicating no frame packing
	// info.
	VideoMultiviewFramePackingNone VideoMultiviewFramePacking = -1
	// VideoMultiviewFramePackingMono: all frames are monoscopic.
	VideoMultiviewFramePackingMono VideoMultiviewFramePacking = 0
	// VideoMultiviewFramePackingLeft: all frames represent a left-eye view.
	VideoMultiviewFramePackingLeft VideoMultiviewFramePacking = 1
	// VideoMultiviewFramePackingRight: all frames represent a right-eye view.
	VideoMultiviewFramePackingRight VideoMultiviewFramePacking = 2
	// VideoMultiviewFramePackingSideBySide: left and right eye views are
	// provided in the left and right half of the frame respectively.
	VideoMultiviewFramePackingSideBySide VideoMultiviewFramePacking = 3
	// VideoMultiviewFramePackingSideBySideQuincunx: left and right eye views
	// are provided in the left and right half of the frame, but have been
	// sampled using quincunx method, with half-pixel offset between the 2
	// views.
	VideoMultiviewFramePackingSideBySideQuincunx VideoMultiviewFramePacking = 4
	// VideoMultiviewFramePackingColumnInterleaved: alternating vertical columns
	// of pixels represent the left and right eye view respectively.
	VideoMultiviewFramePackingColumnInterleaved VideoMultiviewFramePacking = 5
	// VideoMultiviewFramePackingRowInterleaved: alternating horizontal rows of
	// pixels represent the left and right eye view respectively.
	VideoMultiviewFramePackingRowInterleaved VideoMultiviewFramePacking = 6
	// VideoMultiviewFramePackingTopBottom: top half of the frame contains the
	// left eye, and the bottom half the right eye.
	VideoMultiviewFramePackingTopBottom VideoMultiviewFramePacking = 7
	// VideoMultiviewFramePackingCheckerboard pixels are arranged with
	// alternating pixels representing left and right eye views in a
	// checkerboard fashion.
	VideoMultiviewFramePackingCheckerboard VideoMultiviewFramePacking = 8
)

func (VideoMultiviewFramePacking) String

func (v VideoMultiviewFramePacking) String() string

String returns the name in string for VideoMultiviewFramePacking.

type VideoMultiviewMode

type VideoMultiviewMode C.gint

VideoMultiviewMode: all possible stereoscopic 3D and multiview representations. In conjunction with VideoMultiviewFlags, describes how multiview content is being transported in the stream.

const (
	// VideoMultiviewModeNone: special value indicating no multiview
	// information. Used in GstVideoInfo and other places to indicate that no
	// specific multiview handling has been requested or provided. This value is
	// never carried on caps.
	VideoMultiviewModeNone VideoMultiviewMode = -1
	// VideoMultiviewModeMono: all frames are monoscopic.
	VideoMultiviewModeMono VideoMultiviewMode = 0
	// VideoMultiviewModeLeft: all frames represent a left-eye view.
	VideoMultiviewModeLeft VideoMultiviewMode = 1
	// VideoMultiviewModeRight: all frames represent a right-eye view.
	VideoMultiviewModeRight VideoMultiviewMode = 2
	// VideoMultiviewModeSideBySide: left and right eye views are provided in
	// the left and right half of the frame respectively.
	VideoMultiviewModeSideBySide VideoMultiviewMode = 3
	// VideoMultiviewModeSideBySideQuincunx: left and right eye views are
	// provided in the left and right half of the frame, but have been sampled
	// using quincunx method, with half-pixel offset between the 2 views.
	VideoMultiviewModeSideBySideQuincunx VideoMultiviewMode = 4
	// VideoMultiviewModeColumnInterleaved: alternating vertical columns of
	// pixels represent the left and right eye view respectively.
	VideoMultiviewModeColumnInterleaved VideoMultiviewMode = 5
	// VideoMultiviewModeRowInterleaved: alternating horizontal rows of pixels
	// represent the left and right eye view respectively.
	VideoMultiviewModeRowInterleaved VideoMultiviewMode = 6
	// VideoMultiviewModeTopBottom: top half of the frame contains the left eye,
	// and the bottom half the right eye.
	VideoMultiviewModeTopBottom VideoMultiviewMode = 7
	// VideoMultiviewModeCheckerboard pixels are arranged with alternating
	// pixels representing left and right eye views in a checkerboard fashion.
	VideoMultiviewModeCheckerboard VideoMultiviewMode = 8
	// VideoMultiviewModeFrameByFrame: left and right eye views are provided in
	// separate frames alternately.
	VideoMultiviewModeFrameByFrame VideoMultiviewMode = 32
	// VideoMultiviewModeMultiviewFrameByFrame: multiple independent views are
	// provided in separate frames in sequence. This method only applies to raw
	// video buffers at the moment. Specific view identification is via the
	// GstVideoMultiviewMeta and VideoMeta(s) on raw video buffers.
	VideoMultiviewModeMultiviewFrameByFrame VideoMultiviewMode = 33
	// VideoMultiviewModeSeparated: multiple views are provided as separate
	// Memory framebuffers attached to each Buffer, described by the
	// GstVideoMultiviewMeta and VideoMeta(s).
	VideoMultiviewModeSeparated VideoMultiviewMode = 34
)

func VideoMultiviewModeFromCapsString

func VideoMultiviewModeFromCapsString(capsMviewMode string) VideoMultiviewMode

VideoMultiviewModeFromCapsString: given a string from a caps multiview-mode field, output the corresponding VideoMultiviewMode or T_VIDEO_MULTIVIEW_MODE_NONE.

The function takes the following parameters:

  • capsMviewMode: multiview-mode field string from caps.

The function returns the following values:

  • videoMultiviewMode: VideoMultiviewMode value.
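A minimal sketch; "side-by-side" is the canonical caps string for VideoMultiviewModeSideBySide, and unrecognized strings map to VideoMultiviewModeNone.

func parseMultiviewMode(capsField string) gstvideo.VideoMultiviewMode {
	// e.g. parseMultiviewMode("side-by-side") yields VideoMultiviewModeSideBySide;
	// an unknown string yields VideoMultiviewModeNone.
	return gstvideo.VideoMultiviewModeFromCapsString(capsField)
}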

func (VideoMultiviewMode) String

func (v VideoMultiviewMode) String() string

String returns the name in string for VideoMultiviewMode.

type VideoOrientation

type VideoOrientation struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

VideoOrientation: interface that allows unified access to control the flipping and autocenter operations of video sources or operators.

VideoOrientation wraps an interface. This means the user can get the underlying type by calling Cast().

func (*VideoOrientation) Hcenter

func (videoOrientation *VideoOrientation) Hcenter() (int, bool)

Hcenter: get the horizontal centering offset from the given object.

The function returns the following values:

  • center: return location for the result.
  • ok: TRUE in case the element supports centering.

func (*VideoOrientation) Hflip

func (videoOrientation *VideoOrientation) Hflip() (flip, ok bool)

Hflip: get the horizontal flipping state (TRUE for flipped) from the given object.

The function returns the following values:

  • flip: return location for the result.
  • ok: TRUE in case the element supports flipping.

func (*VideoOrientation) SetHcenter

func (videoOrientation *VideoOrientation) SetHcenter(center int) bool

SetHcenter: set the horizontal centering offset for the given object.

The function takes the following parameters:

  • center: centering offset.

The function returns the following values:

  • ok: TRUE in case the element supports centering.

func (*VideoOrientation) SetHflip

func (videoOrientation *VideoOrientation) SetHflip(flip bool) bool

SetHflip: set the horizontal flipping state (TRUE for flipped) for the given object.

The function takes the following parameters:

  • flip: use flipping.

The function returns the following values:

  • ok: TRUE in case the element supports flipping.

func (*VideoOrientation) SetVcenter

func (videoOrientation *VideoOrientation) SetVcenter(center int) bool

SetVcenter: set the vertical centering offset for the given object.

The function takes the following parameters:

  • center: centering offset.

The function returns the following values:

  • ok: TRUE in case the element supports centering.

func (*VideoOrientation) SetVflip

func (videoOrientation *VideoOrientation) SetVflip(flip bool) bool

SetVflip: set the vertical flipping state (TRUE for flipped) for the given object.

The function takes the following parameters:

  • flip: use flipping.

The function returns the following values:

  • ok: TRUE in case the element supports flipping.

func (*VideoOrientation) Vcenter

func (videoOrientation *VideoOrientation) Vcenter() (int, bool)

Vcenter: get the vertical centering offset from the given object.

The function returns the following values:

  • center: return location for the result.
  • ok: TRUE in case the element supports centering.

func (*VideoOrientation) Vflip

func (videoOrientation *VideoOrientation) Vflip() (flip, ok bool)

Vflip: get the vertical flipping state (TRUE for flipped) from the given object.

The function returns the following values:

  • flip: return location for the result.
  • ok: TRUE in case the element supports flipping.

type VideoOrientationInterface

type VideoOrientationInterface struct {
	// contains filtered or unexported fields
}

VideoOrientationInterface interface.

An instance of this type is always passed by reference.

type VideoOrientationMethod

type VideoOrientationMethod C.gint

VideoOrientationMethod: different video orientation methods.

const (
	// VideoOrientationIdentity: identity (no rotation).
	VideoOrientationIdentity VideoOrientationMethod = iota
	// VideoOrientation90R: rotate clockwise 90 degrees.
	VideoOrientation90R
	// VideoOrientation180: rotate 180 degrees.
	VideoOrientation180
	// VideoOrientation90L: rotate counter-clockwise 90 degrees.
	VideoOrientation90L
	// VideoOrientationHoriz: flip horizontally.
	VideoOrientationHoriz
	// VideoOrientationVert: flip vertically.
	VideoOrientationVert
	// VideoOrientationUlLr: flip across upper left/lower right diagonal.
	VideoOrientationUlLr
	// VideoOrientationUrLl: flip across upper right/lower left diagonal.
	VideoOrientationUrLl
	// VideoOrientationAuto: select flip method based on image-orientation tag.
	VideoOrientationAuto
	// VideoOrientationCustom: current status depends on plugin internal setup.
	VideoOrientationCustom
)

func (VideoOrientationMethod) String

func (v VideoOrientationMethod) String() string

String returns the name in string for VideoOrientationMethod.

type VideoOrientationer

type VideoOrientationer interface {
	coreglib.Objector

	// Hcenter: get the horizontal centering offset from the given object.
	Hcenter() (int, bool)
	// Hflip: get the horizontal flipping state (TRUE for flipped) from the
	// given object.
	Hflip() (flip, ok bool)
	// Vcenter: get the vertical centering offset from the given object.
	Vcenter() (int, bool)
	// Vflip: get the vertical flipping state (TRUE for flipped) from the given
	// object.
	Vflip() (flip, ok bool)
	// SetHcenter: set the horizontal centering offset for the given object.
	SetHcenter(center int) bool
	// SetHflip: set the horizontal flipping state (TRUE for flipped) for the
	// given object.
	SetHflip(flip bool) bool
	// SetVcenter: set the vertical centering offset for the given object.
	SetVcenter(center int) bool
	// SetVflip: set the vertical flipping state (TRUE for flipped) for the
	// given object.
	SetVflip(flip bool) bool
}

VideoOrientationer describes VideoOrientation's interface methods.

type VideoOverlay

type VideoOverlay struct {
	*coreglib.Object
	// contains filtered or unexported fields
}

VideoOverlay interface is used for two main purposes:

* To get a grab on the Window where the video sink element is going to render. This is achieved by either being informed about the Window identifier that the video sink element generated, or by forcing the video sink element to use a specific Window identifier for rendering.

* To force a redrawing of the latest video frame the video sink element displayed on the Window. Indeed, if the Pipeline is in T_STATE_PAUSED state, moving the Window around will damage its content. Application developers will want to handle the Expose events themselves and force the video sink element to refresh the Window's content.

Using the Window created by the video sink is probably the simplest scenario; in some cases, though, it might not be flexible enough for application developers if they need to catch events such as mouse moves and button clicks.

Setting a specific Window identifier on the video sink element is the most flexible solution but it has some issues. Indeed the application needs to set its Window identifier at the right time to avoid internal Window creation from the video sink element. To solve this issue a Message is posted on the bus to inform the application that it should set the Window identifier immediately. Here is an example of how to do that correctly:

static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
 Window win;

 // ignore anything but 'prepare-window-handle' element messages
 if (!gst_is_video_overlay_prepare_window_handle_message (message))
   return GST_BUS_PASS;

 // 'disp' and 'root' are assumed to be the application's X Display and
 // root window, set up elsewhere
 win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);

 XSetWindowBackgroundPixmap (disp, win, None);

 XMapRaised (disp, win);

 XSync (disp, FALSE);

 gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
     win);

 gst_message_unref (message);

 return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
...
 bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
 gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
        NULL);
...
}

Two basic usage scenarios

There are two basic usage scenarios: in the simplest case, the application uses #playbin or #playsink or knows exactly what particular element is used for video output, which is usually the case when the application creates the videosink to use (e.g. #xvimagesink, #ximagesink, etc.) itself; in this case, the application can just create the videosink element, create and realize the window to render the video on and then call gst_video_overlay_set_window_handle() directly with the XID or native window handle, before starting up the pipeline. As #playbin and #playsink implement the video overlay interface and proxy it transparently to the actual video sink even if it is created later, this case also applies when using these elements.

In the other and more common case, the application does not know in advance what GStreamer video sink element will be used for video output. This is usually the case when an element such as #autovideosink is used. In this case, the video sink element itself is created asynchronously from a GStreamer streaming thread some time after the pipeline has been started up. When that happens, however, the video sink will need to know right then whether to render onto an already existing application window or whether to create its own window. This is when it posts a prepare-window-handle message, and that is also why this message needs to be handled in a sync bus handler which will be called from the streaming thread directly (because the video sink will need an answer right then).

As response to the prepare-window-handle element message in the bus sync handler, the application may use gst_video_overlay_set_window_handle() to tell the video sink to render onto an existing window surface. At this point the application should already have obtained the window handle / XID, so it just needs to set it. It is generally not advisable to call any GUI toolkit functions or window system functions from the streaming thread in which the prepare-window-handle message is handled, because most GUI toolkits and windowing systems are not thread-safe at all and a lot of care would be required to co-ordinate the toolkit and window system calls of the different threads (Gtk+ users please note: prior to Gtk+ 2.18 GDK_WINDOW_XID was just a simple structure access, so generally fine to do within the bus sync handler; this macro was changed to a function call in Gtk+ 2.18 and later, which is likely to cause problems when called from a sync handler; see below for a better approach without GDK_WINDOW_XID used in the callback).

GstVideoOverlay and Gtk+

#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>  // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h>  // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
 // ignore anything but 'prepare-window-handle' element messages
 if (!gst_is_video_overlay_prepare_window_handle_message (message))
   return GST_BUS_PASS;

 if (video_window_handle != 0) {
   GstVideoOverlay *overlay;

   // GST_MESSAGE_SRC (message) will be the video sink element
   overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
   gst_video_overlay_set_window_handle (overlay, video_window_handle);
 } else {
   g_warning ("Should have obtained video_window_handle by now!");
 }

 gst_message_unref (message);
 return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
  // Tell Gtk+/Gdk to create a native window for this widget instead of
  // drawing onto the parent widget.
  // This is here just for pedagogical purposes, GDK_WINDOW_XID will call
  // it as well in newer Gtk versions
  if (!gdk_window_ensure_native (widget->window))
    g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif

#ifdef GDK_WINDOWING_X11
  {
    gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (widget));
    video_window_handle = xid;
  }
#endif
#ifdef GDK_WINDOWING_WIN32
  {
    HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (widget));
    video_window_handle = (guintptr) wnd;
  }
#endif
}
...
int
main (int argc, char **argv)
{
  GtkWidget *video_window;
  GtkWidget *app_window;
  ...
  app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  ...
  video_window = gtk_drawing_area_new ();
  g_signal_connect (video_window, "realize",
      G_CALLBACK (video_widget_realize_cb), NULL);
  gtk_widget_set_double_buffered (video_window, FALSE);
  ...
  // usually the video_window will not be directly embedded into the
  // application window like this, but there will be many other widgets
  // and the video window will be embedded in one of them instead
  gtk_container_add (GTK_CONTAINER (app_window), video_window);
  ...
  // show the GUI
  gtk_widget_show_all (app_window);

  // realize window now so that the video window gets created and we can
  // obtain its XID/HWND before the pipeline is started up and the videosink
  // asks for the XID/HWND of the window to render onto
  gtk_widget_realize (video_window);

  // we should have the XID/HWND now
  g_assert (video_window_handle != 0);
  ...
  // set up sync handler for setting the xid once the pipeline is started
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
      NULL);
  gst_object_unref (bus);
  ...
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  ...
}

GstVideoOverlay and Qt

#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>

#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[])
{
  if (!g_thread_supported ())
    g_thread_init (NULL);

  gst_init (&argc, &argv);
  QApplication app(argc, argv);
  app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit ()));

  // prepare the pipeline

  GstElement *pipeline = gst_pipeline_new ("xvoverlay");
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  // prepare the ui

  QWidget window;
  window.resize(320, 240);
  window.show();

  WId xwinid = window.winId();
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);

  // run the pipeline

  GstStateChangeReturn sret = gst_element_set_state (pipeline,
      GST_STATE_PLAYING);
  if (sret == GST_STATE_CHANGE_FAILURE) {
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    // Exit application
    QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
  }

  int ret = app.exec();

  window.hide();
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return ret;
}

VideoOverlay wraps an interface. This means the user can get the underlying type by calling Cast().

func (*VideoOverlay) Expose

func (overlay *VideoOverlay) Expose()

Expose: tell an overlay that it has been exposed. This will redraw the current frame in the drawable even if the pipeline is PAUSED.

func (*VideoOverlay) GotWindowHandle

func (overlay *VideoOverlay) GotWindowHandle(handle uintptr)

GotWindowHandle: this will post a "have-window-handle" element message on the bus.

This function should only be used by video overlay plugin developers.

The function takes the following parameters:

  • handle: platform-specific handle referencing the window.

func (*VideoOverlay) HandleEvents

func (overlay *VideoOverlay) HandleEvents(handleEvents bool)

HandleEvents: tell an overlay that it should handle events from the window system. These events are forwarded upstream as navigation events. In some window systems, events are not propagated in the window hierarchy if a client is listening for them. This method allows you to disable event handling completely from the VideoOverlay.

The function takes the following parameters:

  • handleEvents indicating if events should be handled or not.

func (*VideoOverlay) PrepareWindowHandle

func (overlay *VideoOverlay) PrepareWindowHandle()

PrepareWindowHandle: this will post a "prepare-window-handle" element message on the bus to give applications an opportunity to call gst_video_overlay_set_window_handle() before a plugin creates its own window.

This function should only be used by video overlay plugin developers.

func (*VideoOverlay) SetRenderRectangle

func (overlay *VideoOverlay) SetRenderRectangle(x, y, width, height int) bool

SetRenderRectangle: configure a subregion as a video target within the window set by gst_video_overlay_set_window_handle(). If this is not used or not supported, the video will fill the entire area of the window set as the overlay. By specifying the rectangle, the video can be overlaid onto a specific region of that window only. After setting the new rectangle, call gst_video_overlay_expose() to force a redraw. To unset the region, pass -1 for the width and height parameters.

This method is needed for non-fullscreen video overlay in UI toolkits that do not support subwindows.

The function takes the following parameters:

  • x: horizontal offset of the render area inside the window.
  • y: vertical offset of the render area inside the window.
  • width of the render area inside the window.
  • height of the render area inside the window.

The function returns the following values:

  • ok: FALSE if not supported by the sink.
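Both calls below are documented in this section; a minimal sketch:

func renderIntoSubregion(overlay *gstvideo.VideoOverlay) {
	// Confine the video to a 640x360 region at the window's top-left corner.
	if overlay.SetRenderRectangle(0, 0, 640, 360) {
		overlay.Expose() // force a redraw inside the new rectangle
	}
	// To unset the region again: overlay.SetRenderRectangle(0, 0, -1, -1).
}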

func (*VideoOverlay) SetWindowHandle

func (overlay *VideoOverlay) SetWindowHandle(handle uintptr)

SetWindowHandle: this will call the video overlay's set_window_handle method. You should use this method to tell an overlay to display video output to a specific window (e.g. an XWindow on X11). Passing 0 as the handle will tell the overlay to stop using that window and create an internal one.

The function takes the following parameters:

  • handle referencing the window.

type VideoOverlayComposition

type VideoOverlayComposition struct {
	// contains filtered or unexported fields
}

VideoOverlayComposition functions to create and handle overlay compositions on video buffers.

An overlay composition describes one or more overlay rectangles to be blended on top of a video buffer.

This API serves two main purposes:

* it can be used to attach overlay information (subtitles or logos) to non-raw video buffers such as GL/VAAPI/VDPAU surfaces. The actual blending of the overlay can then be done by e.g. the video sink that processes these non-raw buffers.

* it can also be used to blend overlay rectangles on top of raw video buffers, thus consolidating blending functionality for raw video in one place.

Together, this allows existing overlay elements to easily handle raw and non-raw video as input without major changes (once the overlays have been put into a VideoOverlayComposition object anyway) - for raw video the overlay can just use the blending function to blend the data on top of the video, and for surface buffers it can just attach them to the buffer and let the sink render the overlays.

An instance of this type is always passed by reference.
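A minimal sketch tying the pieces of this section together, assuming pixels already holds 160x90 ARGB overlay data with a VideoMeta describing it:

func attachOverlay(videoBuf, pixels *gst.Buffer) {
	// Place the 160x90 overlay at (10,10) on the video.
	rect := gstvideo.NewVideoOverlayRectangleRaw(pixels, 10, 10, 160, 90,
		gstvideo.VideoOverlayFormatFlagNone)
	comp := gstvideo.NewVideoOverlayComposition(rect)
	// The buffer takes its own reference to comp.
	gstvideo.BufferAddVideoOverlayCompositionMeta(videoBuf, comp)
}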

func NewVideoOverlayComposition

func NewVideoOverlayComposition(rectangle *VideoOverlayRectangle) *VideoOverlayComposition

NewVideoOverlayComposition constructs a struct VideoOverlayComposition.

func (*VideoOverlayComposition) AddRectangle

func (comp *VideoOverlayComposition) AddRectangle(rectangle *VideoOverlayRectangle)

AddRectangle adds an overlay rectangle to an existing overlay composition object. This must be done right after creating the overlay composition.

The function takes the following parameters:

  • rectangle to add to the composition.

func (*VideoOverlayComposition) Blend

func (comp *VideoOverlayComposition) Blend(videoBuf *VideoFrame) bool

Blend blends the overlay rectangles in comp on top of the raw video data contained in video_buf. The data in video_buf must be writable and mapped appropriately.

Since video_buf data is read and will be modified, it ought to be mapped with flag GST_MAP_READWRITE.

The function takes the following parameters:

  • videoBuf containing raw video data in a supported format. It should be mapped using GST_MAP_READWRITE.

The function returns the following values:

  • ok: TRUE if blending was successful.
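A hedged sketch; how a *VideoFrame is obtained and mapped READWRITE depends on this package's VideoFrame helpers, which are not shown in this section:

func blendOnto(comp *gstvideo.VideoOverlayComposition, frame *gstvideo.VideoFrame) bool {
	// frame must wrap writable raw video data mapped with GST_MAP_READWRITE.
	return comp.Blend(frame)
}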

func (*VideoOverlayComposition) Copy

func (comp *VideoOverlayComposition) Copy() *VideoOverlayComposition

Copy makes a copy of comp and all contained rectangles, so that it is possible to modify the composition and contained rectangles (e.g. add additional rectangles or change the render co-ordinates or render dimension). The actual overlay pixel data buffers contained in the rectangles are not copied.

The function returns the following values:

  • videoOverlayComposition: new VideoOverlayComposition equivalent to comp.

func (*VideoOverlayComposition) MakeWritable

func (comp *VideoOverlayComposition) MakeWritable() *VideoOverlayComposition

MakeWritable takes ownership of comp and returns a version of comp that is writable (i.e. can be modified). Will either return comp right away, or create a new writable copy of comp and unref comp itself. All the contained rectangles will also be copied, but the actual overlay pixel data buffers contained in the rectangles are not copied.

The function returns the following values:

  • videoOverlayComposition: writable VideoOverlayComposition equivalent to comp.

func (*VideoOverlayComposition) NRectangles

func (comp *VideoOverlayComposition) NRectangles() uint

NRectangles returns the number of VideoOverlayRectangles contained in comp.

The function returns the following values:

  • guint: number of rectangles.

func (*VideoOverlayComposition) Rectangle

func (comp *VideoOverlayComposition) Rectangle(n uint) *VideoOverlayRectangle

Rectangle returns the n-th VideoOverlayRectangle contained in comp.

The function takes the following parameters:

  • n: number of the rectangle to get.

The function returns the following values:

  • videoOverlayRectangle: n-th rectangle, or NULL if n is out of bounds. Will not return a new reference, the caller will need to obtain her own reference using gst_video_overlay_rectangle_ref() if needed.

func (*VideoOverlayComposition) Seqnum

func (comp *VideoOverlayComposition) Seqnum() uint

Seqnum returns the sequence number of this composition. Sequence numbers are monotonically increasing and unique for overlay compositions and rectangles (meaning there will never be a rectangle with the same sequence number as a composition).

The function returns the following values:

  • guint: sequence number of comp.

type VideoOverlayCompositionMeta

type VideoOverlayCompositionMeta struct {
	// contains filtered or unexported fields
}

VideoOverlayCompositionMeta: extra buffer metadata describing image overlay data.

An instance of this type is always passed by reference.

func BufferAddVideoOverlayCompositionMeta

func BufferAddVideoOverlayCompositionMeta(buf *gst.Buffer, comp *VideoOverlayComposition) *VideoOverlayCompositionMeta

BufferAddVideoOverlayCompositionMeta sets an overlay composition on a buffer. The buffer will obtain its own reference to the composition, meaning this function does not take ownership of comp.

The function takes the following parameters:

  • buf: Buffer.
  • comp (optional): VideoOverlayComposition.

The function returns the following values:

  • videoOverlayCompositionMeta: VideoOverlayCompositionMeta.

func (*VideoOverlayCompositionMeta) Meta

func (v *VideoOverlayCompositionMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoOverlayCompositionMeta) Overlay

func (v *VideoOverlayCompositionMeta) Overlay() *VideoOverlayComposition

Overlay: attached VideoOverlayComposition.

type VideoOverlayFormatFlags

type VideoOverlayFormatFlags C.guint

VideoOverlayFormatFlags: overlay format flags.

const (
	// VideoOverlayFormatFlagNone: no flags.
	VideoOverlayFormatFlagNone VideoOverlayFormatFlags = 0b0
	// VideoOverlayFormatFlagPremultipliedAlpha: RGB are premultiplied by A/255.
	VideoOverlayFormatFlagPremultipliedAlpha VideoOverlayFormatFlags = 0b1
	// VideoOverlayFormatFlagGlobalAlpha: global-alpha value != 1 is set.
	VideoOverlayFormatFlagGlobalAlpha VideoOverlayFormatFlags = 0b10
)

func (VideoOverlayFormatFlags) Has

func (v VideoOverlayFormatFlags) Has(other VideoOverlayFormatFlags) bool

Has returns true if v contains other.

func (VideoOverlayFormatFlags) String

func (v VideoOverlayFormatFlags) String() string

String returns the names in string for VideoOverlayFormatFlags.

type VideoOverlayInterface

type VideoOverlayInterface struct {
	// contains filtered or unexported fields
}

VideoOverlayInterface interface.

An instance of this type is always passed by reference.

type VideoOverlayRectangle

type VideoOverlayRectangle struct {
	// contains filtered or unexported fields
}

VideoOverlayRectangle: opaque video overlay rectangle object. A rectangle contains a single overlay rectangle which can be added to a composition.

An instance of this type is always passed by reference.

func NewVideoOverlayRectangleRaw

func NewVideoOverlayRectangleRaw(pixels *gst.Buffer, renderX int, renderY int, renderWidth uint, renderHeight uint, flags VideoOverlayFormatFlags) *VideoOverlayRectangle

NewVideoOverlayRectangleRaw constructs a struct VideoOverlayRectangle.

func (*VideoOverlayRectangle) Copy

func (rectangle *VideoOverlayRectangle) Copy() *VideoOverlayRectangle

Copy makes a copy of rectangle, so that it is possible to modify it (e.g. to change the render co-ordinates or render dimension). The actual overlay pixel data buffers contained in the rectangle are not copied.

The function returns the following values:

  • videoOverlayRectangle: new VideoOverlayRectangle equivalent to rectangle.

func (*VideoOverlayRectangle) Flags

func (rectangle *VideoOverlayRectangle) Flags() VideoOverlayFormatFlags

Flags retrieves the flags associated with a VideoOverlayRectangle. This is useful if the caller can handle both premultiplied and non-premultiplied alpha, for example. By knowing whether the rectangle uses premultiplied alpha or not, it can request the pixel data in the format it is stored in, to avoid unnecessary conversion.

The function returns the following values:

  • videoOverlayFormatFlags associated with the rectangle.

func (*VideoOverlayRectangle) GlobalAlpha

func (rectangle *VideoOverlayRectangle) GlobalAlpha() float32

GlobalAlpha retrieves the global-alpha value associated with a VideoOverlayRectangle.

The function returns the following values:

  • gfloat: global-alpha value associated with the rectangle.

func (*VideoOverlayRectangle) PixelsARGB

func (rectangle *VideoOverlayRectangle) PixelsARGB(flags VideoOverlayFormatFlags) *gst.Buffer

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the ARGB pixel data with width and height of the render dimensions as per gst_video_overlay_rectangle_get_render_rectangle(). This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) PixelsAyuv

func (rectangle *VideoOverlayRectangle) PixelsAyuv(flags VideoOverlayFormatFlags) *gst.Buffer

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the AYUV pixel data with width and height of the render dimensions as per gst_video_overlay_rectangle_get_render_rectangle(). This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) PixelsRaw

func (rectangle *VideoOverlayRectangle) PixelsRaw(flags VideoOverlayFormatFlags) *gst.Buffer

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the pixel data with format as originally provided and specified in video meta with width and height of the render dimensions as per gst_video_overlay_rectangle_get_render_rectangle(). This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) PixelsUnscaledARGB

func (rectangle *VideoOverlayRectangle) PixelsUnscaledARGB(flags VideoOverlayFormatFlags) *gst.Buffer

PixelsUnscaledARGB retrieves the pixel data as it is. This is useful if the caller can do the scaling itself when handling the overlaying. The rectangle will need to be scaled to the render dimensions, which can be retrieved using gst_video_overlay_rectangle_get_render_rectangle().

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the ARGB pixel data with VideoMeta set. This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) PixelsUnscaledAyuv

func (rectangle *VideoOverlayRectangle) PixelsUnscaledAyuv(flags VideoOverlayFormatFlags) *gst.Buffer

PixelsUnscaledAyuv retrieves the pixel data as it is. This is useful if the caller can do the scaling itself when handling the overlaying. The rectangle will need to be scaled to the render dimensions, which can be retrieved using gst_video_overlay_rectangle_get_render_rectangle().

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the AYUV pixel data with VideoMeta set. This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) PixelsUnscaledRaw

func (rectangle *VideoOverlayRectangle) PixelsUnscaledRaw(flags VideoOverlayFormatFlags) *gst.Buffer

PixelsUnscaledRaw retrieves the pixel data as it is. This is useful if the caller can do the scaling itself when handling the overlaying. The rectangle will need to be scaled to the render dimensions, which can be retrieved using gst_video_overlay_rectangle_get_render_rectangle().

The function takes the following parameters:

  • flags: flags. If a global_alpha value != 1 is set for the rectangle, the caller should set the T_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag if he wants to apply global-alpha himself. If the flag is not set, global_alpha is applied internally before returning the pixel-data.

The function returns the following values:

  • buffer holding the pixel data with VideoMeta set. This function does not return a reference, the caller should obtain a reference of her own with gst_buffer_ref() if needed.

func (*VideoOverlayRectangle) RenderRectangle

func (rectangle *VideoOverlayRectangle) RenderRectangle() (renderX int, renderY int, renderWidth uint, renderHeight uint, ok bool)

RenderRectangle retrieves the render position and render dimension of the overlay rectangle on the video.

The function returns the following values:

  • renderX (optional) address where to store the X render offset.
  • renderY (optional) address where to store the Y render offset.
  • renderWidth (optional) address where to store the render width.
  • renderHeight (optional) address where to store the render height.
  • ok: TRUE if valid render dimensions were retrieved.

func (*VideoOverlayRectangle) Seqnum

func (rectangle *VideoOverlayRectangle) Seqnum() uint

Seqnum returns the sequence number of this rectangle. Sequence numbers are monotonically increasing and unique for overlay compositions and rectangles (meaning there will never be a rectangle with the same sequence number as a composition).

Using the sequence number of a rectangle as an indicator for changed pixel-data of a rectangle is dangerous. Some API calls, like e.g. gst_video_overlay_rectangle_set_global_alpha(), automatically update the per-rectangle sequence number, which is misleading for renderers/consumers that handle global-alpha themselves. For them the pixel-data returned by gst_video_overlay_rectangle_get_pixels_*() won't be different for different global-alpha values. In this case a renderer could also use the GstBuffer pointers as a hint for changed pixel-data.

The function returns the following values:

  • guint: sequence number of rectangle.

func (*VideoOverlayRectangle) SetGlobalAlpha

func (rectangle *VideoOverlayRectangle) SetGlobalAlpha(globalAlpha float32)

SetGlobalAlpha sets the global alpha value associated with a VideoOverlayRectangle. Per-pixel alpha values are multiplied with this value. Valid values: 0 <= global_alpha <= 1; 1 to deactivate.

rectangle must be writable, meaning its refcount must be 1. You can make the rectangles inside a VideoOverlayComposition writable using gst_video_overlay_composition_make_writable() or gst_video_overlay_composition_copy().

The function takes the following parameters:

  • globalAlpha: global alpha value (0 to 1.0).
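A minimal sketch of the writability rule described above, using the documented MakeWritable, NRectangles and Rectangle calls:

func fadeFirstRectangle(comp *gstvideo.VideoOverlayComposition) *gstvideo.VideoOverlayComposition {
	comp = comp.MakeWritable() // comp and its rectangles are now safe to modify
	if comp.NRectangles() > 0 {
		comp.Rectangle(0).SetGlobalAlpha(0.5) // halve every pixel's alpha
	}
	return comp
}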

func (*VideoOverlayRectangle) SetRenderRectangle

func (rectangle *VideoOverlayRectangle) SetRenderRectangle(renderX int, renderY int, renderWidth uint, renderHeight uint)

SetRenderRectangle sets the render position and dimensions of the rectangle on the video. This function is mainly for elements that modify the size of the video in some way (e.g. through scaling or cropping) and need to adjust the details of any overlays to match the operation that changed the size.

rectangle must be writable, meaning its refcount must be 1. You can make the rectangles inside a VideoOverlayComposition writable using gst_video_overlay_composition_make_writable() or gst_video_overlay_composition_copy().

The function takes the following parameters:

  • renderX: render X position of rectangle on video.
  • renderY: render Y position of rectangle on video.
  • renderWidth: render width of rectangle.
  • renderHeight: render height of rectangle.

type VideoOverlayer

type VideoOverlayer interface {
	coreglib.Objector

	// Expose: tell an overlay that it has been exposed.
	Expose()
	// GotWindowHandle: this will post a "have-window-handle" element message on
	// the bus.
	GotWindowHandle(handle uintptr)
	// HandleEvents: tell an overlay that it should handle events from the
	// window system.
	HandleEvents(handleEvents bool)
	// PrepareWindowHandle: this will post a "prepare-window-handle" element
	// message on the bus to give applications an opportunity to call
	// gst_video_overlay_set_window_handle() before a plugin creates its own
	// window.
	PrepareWindowHandle()
	// SetRenderRectangle: configure a subregion as a video target within the
	// window set by gst_video_overlay_set_window_handle().
	SetRenderRectangle(x, y, width, height int) bool
	// SetWindowHandle: this will call the video overlay's set_window_handle
	// method.
	SetWindowHandle(handle uintptr)
}

VideoOverlayer describes VideoOverlay's interface methods.

type VideoPackFlags

type VideoPackFlags C.guint

VideoPackFlags: different flags that can be used when packing and unpacking.

const (
	// VideoPackFlagNone: no flag.
	VideoPackFlagNone VideoPackFlags = 0b0
	// VideoPackFlagTruncateRange: when the source has a smaller depth than the
	// target format, set the least significant bits of the target to 0. This is
	// likely slightly faster but less accurate. When this flag is not
	// specified, the most significant bits of the source are duplicated in the
	// least significant bits of the destination.
	VideoPackFlagTruncateRange VideoPackFlags = 0b1
	// VideoPackFlagInterlaced: source is interlaced. The unpacked format will
	// be interlaced as well with each line containing information from
	// alternating fields. (Since: 1.2).
	VideoPackFlagInterlaced VideoPackFlags = 0b10
)

func (VideoPackFlags) Has

func (v VideoPackFlags) Has(other VideoPackFlags) bool

Has returns true if v contains other.

func (VideoPackFlags) String

func (v VideoPackFlags) String() string

String returns the names in string for VideoPackFlags.

type VideoPrimariesMode

type VideoPrimariesMode C.gint

VideoPrimariesMode: different primaries conversion modes.

const (
	// VideoPrimariesModeNone: disable conversion between primaries.
	VideoPrimariesModeNone VideoPrimariesMode = iota
	// VideoPrimariesModeMergeOnly: do conversion between primaries only when it
	// can be merged with color matrix conversion.
	VideoPrimariesModeMergeOnly
	// VideoPrimariesModeFast: fast conversion between primaries.
	VideoPrimariesModeFast
)

func (VideoPrimariesMode) String

func (v VideoPrimariesMode) String() string

String returns the name in string for VideoPrimariesMode.

type VideoRectangle

type VideoRectangle struct {
	// contains filtered or unexported fields
}

VideoRectangle: helper structure representing a rectangular area.

An instance of this type is always passed by reference.

func NewVideoRectangle

func NewVideoRectangle(x, y, w, h int) VideoRectangle

NewVideoRectangle creates a new VideoRectangle instance from the given fields. Beware that this function allocates on the Go heap; be careful when using it!

func VideoCenterRect

func VideoCenterRect(src, dst *VideoRectangle, scaling bool) *VideoRectangle

VideoCenterRect takes the src rectangle and positions it at the center of the dst rectangle, with or without scaling. It handles clipping if the src rectangle is bigger than the dst one and scaling is set to FALSE.

The function takes the following parameters:

  • src: pointer to VideoRectangle describing the source area.
  • dst: pointer to VideoRectangle describing the destination area.
  • scaling indicating if scaling should be applied or not.

The function returns the following values:

  • result: pointer to a VideoRectangle which will receive the result area.
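A minimal sketch: letterboxing a 4:3 source inside a 16:9 destination.

func letterbox() *gstvideo.VideoRectangle {
	src := gstvideo.NewVideoRectangle(0, 0, 640, 480)   // 4:3 source
	dst := gstvideo.NewVideoRectangle(0, 0, 1920, 1080) // 16:9 destination
	// With scaling enabled, src is scaled to fit dst and centered in it.
	return gstvideo.VideoCenterRect(&src, &dst, true)
}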

func VideoSinkCenterRect

func VideoSinkCenterRect(src, dst *VideoRectangle, scaling bool) *VideoRectangle

VideoSinkCenterRect: deprecated: Use gst_video_center_rect() instead.

The function takes the following parameters:

  • src describing the source area.
  • dst describing the destination area.
  • scaling indicating if scaling should be applied or not.

The function returns the following values:

  • result: pointer to a VideoRectangle which will receive the result area.

func (*VideoRectangle) H

func (v *VideoRectangle) H() int

H: height of the rectangle.

func (*VideoRectangle) SetH

func (v *VideoRectangle) SetH(h int)

H: height of the rectangle.

func (*VideoRectangle) SetW

func (v *VideoRectangle) SetW(w int)

W: width of the rectangle.

func (*VideoRectangle) SetX

func (v *VideoRectangle) SetX(x int)

X coordinate of rectangle's top-left point.

func (*VideoRectangle) SetY

func (v *VideoRectangle) SetY(y int)

Y coordinate of rectangle's top-left point.

func (*VideoRectangle) W

func (v *VideoRectangle) W() int

W: width of the rectangle.

func (*VideoRectangle) X

func (v *VideoRectangle) X() int

X coordinate of rectangle's top-left point.

func (*VideoRectangle) Y

func (v *VideoRectangle) Y() int

Y coordinate of rectangle's top-left point.

type VideoRegionOfInterestMeta

type VideoRegionOfInterestMeta struct {
	// contains filtered or unexported fields
}

VideoRegionOfInterestMeta: extra buffer metadata describing an image region of interest

An instance of this type is always passed by reference.

func BufferAddVideoRegionOfInterestMeta

func BufferAddVideoRegionOfInterestMeta(buffer *gst.Buffer, roiType string, x, y, w, h uint) *VideoRegionOfInterestMeta

BufferAddVideoRegionOfInterestMeta attaches VideoRegionOfInterestMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • roiType: type of the region of interest (e.g. "face").
  • x: x position.
  • y: y position.
  • w: width.
  • h: height.

The function returns the following values:

  • videoRegionOfInterestMeta on buffer.

func BufferAddVideoRegionOfInterestMetaID

func BufferAddVideoRegionOfInterestMetaID(buffer *gst.Buffer, roiType glib.Quark, x, y, w, h uint) *VideoRegionOfInterestMeta

BufferAddVideoRegionOfInterestMetaID attaches VideoRegionOfInterestMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • roiType: type of the region of interest (e.g. "face").
  • x: x position.
  • y: y position.
  • w: width.
  • h: height.

The function returns the following values:

  • videoRegionOfInterestMeta on buffer.

func BufferGetVideoRegionOfInterestMetaID

func BufferGetVideoRegionOfInterestMetaID(buffer *gst.Buffer, id int) *VideoRegionOfInterestMeta

BufferGetVideoRegionOfInterestMetaID: find the VideoRegionOfInterestMeta on buffer with the given id.

Buffers can contain multiple VideoRegionOfInterestMeta metadata items if multiple regions of interests are marked on a frame.

The function takes the following parameters:

  • buffer: Buffer.
  • id: metadata id.

The function returns the following values:

  • videoRegionOfInterestMeta with id or NULL when there is no such metadata on buffer.

func (*VideoRegionOfInterestMeta) AddParam

func (meta *VideoRegionOfInterestMeta) AddParam(s *gst.Structure)

AddParam: attach element-specific parameters to meta meant to be used by downstream elements which may handle this ROI. The name of s is used to identify the element these parameters are meant for.

This is typically used to tell encoders how they should encode this specific region. For example, a structure named "roi/x264enc" could be used to give the QP offsets this encoder should use when encoding the region described in meta. Multiple parameters can be defined for the same meta so that different encoders can be supported by cross-platform applications; see the sketch after the parameter list below.

The function takes the following parameters:

  • s: Structure.
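A hedged sketch combining BufferAddVideoRegionOfInterestMeta and AddParam; gst.NewStructure and SetValue are assumed core-binding helpers, and the "delta-qp" field name is an illustrative encoder hint:

func markFace(buffer *gst.Buffer) {
	// Mark a 64x64 "face" region with its upper-left corner at (100, 80).
	roi := gstvideo.BufferAddVideoRegionOfInterestMeta(buffer, "face", 100, 80, 64, 64)
	params := gst.NewStructure("roi/x264enc") // assumed core-binding helper
	params.SetValue("delta-qp", -10)          // illustrative QP offset for this region
	roi.AddParam(params)
}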

func (*VideoRegionOfInterestMeta) H

func (v *VideoRegionOfInterestMeta) H() uint

H: bounding box height.

func (*VideoRegionOfInterestMeta) ID

func (v *VideoRegionOfInterestMeta) ID() int

ID: identifier of this particular ROI.

func (*VideoRegionOfInterestMeta) Meta

func (v *VideoRegionOfInterestMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoRegionOfInterestMeta) Param

func (meta *VideoRegionOfInterestMeta) Param(name string) *gst.Structure

Param: retrieve the parameter for meta having name as structure name, or NULL if there is none.

The function takes the following parameters:

  • name: name.

The function returns the following values:

  • structure (optional): Structure.

func (*VideoRegionOfInterestMeta) ParentID

func (v *VideoRegionOfInterestMeta) ParentID() int

ParentID: identifier of its parent ROI, used f.i. for ROI hierarchisation.

func (*VideoRegionOfInterestMeta) RoiType

func (v *VideoRegionOfInterestMeta) RoiType() glib.Quark

RoiType: GQuark describing the semantics of the ROI (f.i. a face, a pedestrian).

func (*VideoRegionOfInterestMeta) SetH

func (v *VideoRegionOfInterestMeta) SetH(h uint)

H: bounding box height.

func (*VideoRegionOfInterestMeta) SetID

func (v *VideoRegionOfInterestMeta) SetID(id int)

ID: identifier of this particular ROI.

func (*VideoRegionOfInterestMeta) SetParentID

func (v *VideoRegionOfInterestMeta) SetParentID(parentId int)

ParentID: identifier of its parent ROI, used f.i. for ROI hierarchisation.

func (*VideoRegionOfInterestMeta) SetW

func (v *VideoRegionOfInterestMeta) SetW(w uint)

W: bounding box width.

func (*VideoRegionOfInterestMeta) SetX

func (v *VideoRegionOfInterestMeta) SetX(x uint)

X: x component of upper-left corner.

func (*VideoRegionOfInterestMeta) SetY

func (v *VideoRegionOfInterestMeta) SetY(y uint)

Y: y component of upper-left corner.

func (*VideoRegionOfInterestMeta) W

func (v *VideoRegionOfInterestMeta) W() uint

W: bounding box width.

func (*VideoRegionOfInterestMeta) X

func (v *VideoRegionOfInterestMeta) X() uint

X: x component of upper-left corner.

func (*VideoRegionOfInterestMeta) Y

func (v *VideoRegionOfInterestMeta) Y() uint

Y: y component of upper-left corner.

type VideoResampler

type VideoResampler struct {
	// contains filtered or unexported fields
}

VideoResampler is a structure which holds the information required to perform various kinds of resampling filtering.

An instance of this type is always passed by reference.

func (*VideoResampler) Clear

func (resampler *VideoResampler) Clear()

Clear clears a previously initialized resampler.

func (*VideoResampler) InSize

func (v *VideoResampler) InSize() int

InSize: input size.

func (*VideoResampler) Init

func (resampler *VideoResampler) Init(method VideoResamplerMethod, flags VideoResamplerFlags, nPhases uint, nTaps uint, shift float64, inSize uint, outSize uint, options *gst.Structure) bool

Init initializes resampler with the given parameters; a usage sketch follows below.

The function takes the following parameters:

  • method: VideoResamplerMethod to use.
  • flags: VideoResamplerFlags.
  • nPhases: number of filter phases.
  • nTaps: number of taps per phase (0 selects a method-specific default).
  • shift: sample shift to apply.
  • inSize: number of input samples.
  • outSize: number of output samples.
  • options (optional): extra options.

The function returns the following values:

  • ok: TRUE if the resampler could be initialized.
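
A minimal usage sketch, assuming the gstvideo import and a *VideoResampler obtained elsewhere in the bindings; the phase, tap and size values are illustrative only.

// setupDownscale initializes r for a 1920-to-1280 Lanczos resample.
func setupDownscale(r *gstvideo.VideoResampler) bool {
	return r.Init(
		gstvideo.VideoResamplerMethodLanczos, // interpolation method
		gstvideo.VideoResamplerFlagNone,      // no interlaced handling
		16,   // nPhases: number of filter phases
		0,    // nTaps: 0 picks a method-specific default
		0.0,  // shift
		1920, // inSize: input samples per line
		1280, // outSize: output samples per line
		nil,  // options: none
	)
}

Every successful Init should eventually be paired with Clear (above) to release the coefficient arrays.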

func (*VideoResampler) MaxTaps

func (v *VideoResampler) MaxTaps() uint

MaxTaps: maximum number of taps.

func (*VideoResampler) NPhases

func (v *VideoResampler) NPhases() uint

NPhases: number of phases.

func (*VideoResampler) NTaps

func (v *VideoResampler) NTaps() *uint32

NTaps: array with the number of taps for each phase.

func (*VideoResampler) Offset

func (v *VideoResampler) Offset() *uint32

Offset: array with the source offset for each output element.

func (*VideoResampler) OutSize

func (v *VideoResampler) OutSize() int

OutSize: output size.

func (*VideoResampler) Phase

func (v *VideoResampler) Phase() *uint32

Phase: array with the phase to use for each output element.

func (*VideoResampler) SetInSize

func (v *VideoResampler) SetInSize(inSize int)

InSize: input size.

func (*VideoResampler) SetMaxTaps

func (v *VideoResampler) SetMaxTaps(maxTaps uint)

MaxTaps: maximum number of taps.

func (*VideoResampler) SetNPhases

func (v *VideoResampler) SetNPhases(nPhases uint)

NPhases: number of phases.

func (*VideoResampler) SetOutSize

func (v *VideoResampler) SetOutSize(outSize int)

OutSize: output size.

func (*VideoResampler) Taps

func (v *VideoResampler) Taps() *float64

Taps taps for all phases.

type VideoResamplerFlags

type VideoResamplerFlags C.guint

VideoResamplerFlags: different resampler flags.

const (
	// VideoResamplerFlagNone: no flags.
	VideoResamplerFlagNone VideoResamplerFlags = 0b0
	// VideoResamplerFlagHalfTaps: when no taps are given, half the number of
	// calculated taps. This can be used when making scalers for the different
	// fields of an interlaced picture. Since: 1.10.
	VideoResamplerFlagHalfTaps VideoResamplerFlags = 0b1
)

func (VideoResamplerFlags) Has

func (v VideoResamplerFlags) Has(other VideoResamplerFlags) bool

Has returns true if v contains other.

func (VideoResamplerFlags) String

func (v VideoResamplerFlags) String() string

String returns the names in string for VideoResamplerFlags.

type VideoResamplerMethod

type VideoResamplerMethod C.gint

VideoResamplerMethod: different subsampling and upsampling methods.

const (
	// VideoResamplerMethodNearest duplicates the samples when upsampling and
	// drops when downsampling.
	VideoResamplerMethodNearest VideoResamplerMethod = iota
	// VideoResamplerMethodLinear uses linear interpolation to reconstruct
	// missing samples and averaging to downsample.
	VideoResamplerMethodLinear
	// VideoResamplerMethodCubic uses cubic interpolation.
	VideoResamplerMethodCubic
	// VideoResamplerMethodSinc uses sinc interpolation.
	VideoResamplerMethodSinc
	// VideoResamplerMethodLanczos uses lanczos interpolation.
	VideoResamplerMethodLanczos
)

func (VideoResamplerMethod) String

func (v VideoResamplerMethod) String() string

String returns the name in string for VideoResamplerMethod.

type VideoScalerFlags

type VideoScalerFlags C.guint

VideoScalerFlags: different scale flags.

const (
	// VideoScalerFlagNone: no flags.
	VideoScalerFlagNone VideoScalerFlags = 0b0
	// VideoScalerFlagInterlaced: set up a scaler for interlaced content.
	VideoScalerFlagInterlaced VideoScalerFlags = 0b1
)

func (VideoScalerFlags) Has

func (v VideoScalerFlags) Has(other VideoScalerFlags) bool

Has returns true if v contains other.

func (VideoScalerFlags) String

func (v VideoScalerFlags) String() string

String returns the names in string for VideoScalerFlags.

type VideoSink

type VideoSink struct {
	gstbase.BaseSink
	// contains filtered or unexported fields
}

VideoSink provides useful functions and a base class for video sinks.

GstVideoSink will configure the default base sink to drop frames that arrive more than 20ms late, as this is considered the default threshold for observing out-of-sync frames.

type VideoSinkClass

type VideoSinkClass struct {
	// contains filtered or unexported fields
}

VideoSinkClass: video sink class structure. Derived classes should override the show_frame virtual function.

An instance of this type is always passed by reference.

func (*VideoSinkClass) ParentClass

func (v *VideoSinkClass) ParentClass() *gstbase.BaseSinkClass

ParentClass: parent class structure.

type VideoSinkOverrides

type VideoSinkOverrides struct {
	// SetInfo notifies the subclass of changed VideoInfo.
	//
	// The function takes the following parameters:
	//
	//    - caps: Caps.
	//    - info corresponding to caps.
	//
	// The function returns the following values:
	//
	//    - ok: whether the new video info was accepted.
	SetInfo func(caps *gst.Caps, info *VideoInfo) bool
	// ShowFrame renders (shows) the given video frame.
	//
	// The function takes the following parameters:
	//
	//    - buf: buffer to show.
	//
	// The function returns the following values:
	//
	//    - flowReturn resulting from showing the frame.
	ShowFrame func(buf *gst.Buffer) gst.FlowReturn
}

VideoSinkOverrides contains methods that are overridable.

type VideoTileMode

type VideoTileMode C.gint

VideoTileMode: enum value describing the available tiling modes.

const (
	// VideoTileModeUnknown: unknown or unset tile mode.
	VideoTileModeUnknown VideoTileMode = 0
	// VideoTileModeZflipz2X2: every four adjacent blocks - two horizontally
	// and two vertically - are grouped together and are located in memory in
	// Z or flipped Z order. In case of odd rows, the last row of blocks is
	// arranged in linear order.
	VideoTileModeZflipz2X2 VideoTileMode = 65536
	// VideoTileModeLinear tiles are in row order.
	VideoTileModeLinear VideoTileMode = 131072
)

func (VideoTileMode) String

func (v VideoTileMode) String() string

String returns the name in string for VideoTileMode.

type VideoTileType

type VideoTileType C.gint

VideoTileType: enum value describing the most common tiling types.

const (
	// VideoTileTypeIndexed tiles are indexed. Use gst_video_tile_get_index ()
	// to retrieve the tile at the requested coordinates.
	VideoTileTypeIndexed VideoTileType = iota
)

func (VideoTileType) String

func (v VideoTileType) String() string

String returns the name in string for VideoTileType.

type VideoTimeCode

type VideoTimeCode struct {
	// contains filtered or unexported fields
}

VideoTimeCode: field_count must be 0 for progressive video and 1 or 2 for interlaced.

A representation of a SMPTE time code.

hours must be positive and less than 24. Will wrap around otherwise. minutes and seconds must be positive and less than 60. frames must be less than or equal to config.fps_n / config.fps_d. These values are *NOT* automatically normalized.

An instance of this type is always passed by reference.

func NewVideoTimeCode

func NewVideoTimeCode(fpsN uint, fpsD uint, latestDailyJam *glib.DateTime, flags VideoTimeCodeFlags, hours uint, minutes uint, seconds uint, frames uint, fieldCount uint) *VideoTimeCode

NewVideoTimeCode constructs a struct VideoTimeCode.

func NewVideoTimeCodeEmpty

func NewVideoTimeCodeEmpty() *VideoTimeCode

NewVideoTimeCodeEmpty constructs a struct VideoTimeCode.

func NewVideoTimeCodeFromDateTime

func NewVideoTimeCodeFromDateTime(fpsN uint, fpsD uint, dt *glib.DateTime, flags VideoTimeCodeFlags, fieldCount uint) *VideoTimeCode

NewVideoTimeCodeFromDateTime constructs a struct VideoTimeCode.

func NewVideoTimeCodeFromDateTimeFull

func NewVideoTimeCodeFromDateTimeFull(fpsN uint, fpsD uint, dt *glib.DateTime, flags VideoTimeCodeFlags, fieldCount uint) *VideoTimeCode

NewVideoTimeCodeFromDateTimeFull constructs a struct VideoTimeCode.

func NewVideoTimeCodeFromString

func NewVideoTimeCodeFromString(tcStr string) *VideoTimeCode

NewVideoTimeCodeFromString constructs a struct VideoTimeCode.

func (*VideoTimeCode) AddFrames

func (tc *VideoTimeCode) AddFrames(frames int64)

AddFrames adds or subtracts the given number of frames to tc. tc needs to contain valid data, as verified by gst_video_time_code_is_valid().

The function takes the following parameters:

  • frames: how many frames to add or subtract.

func (*VideoTimeCode) AddInterval

func (tc *VideoTimeCode) AddInterval(tcInter *VideoTimeCodeInterval) *VideoTimeCode

AddInterval: this makes a component-wise addition of tc_inter to tc. For example, adding ("01:02:03:04", "00:01:00:00") will return "01:03:03:04". When it comes to drop-frame timecodes, adding ("00:00:00;00", "00:01:00:00") will return "00:01:00;02" because of drop-frame oddities. However, adding ("00:09:00;02", "00:01:00:00") will return "00:10:00;00" because this time we can have an exact minute.

The function takes the following parameters:

  • tcInter to add to tc. The interval must contain valid values, except that for drop-frame timecode, it may also contain timecodes which would normally be dropped. These are then corrected to the next reasonable timecode.

The function returns the following values:

  • videoTimeCode (optional): new VideoTimeCode with tc_inter added or NULL if the interval can't be added.
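
The drop-frame arithmetic can be reproduced with the documented constructors. A sketch, assuming the fmt and gstvideo imports; at 29.97 fps drop-frame, frame numbers 00 and 01 are skipped at every minute that is not a multiple of ten:

func main() {
	// Start just before a minute boundary at 29.97 fps drop-frame.
	tc := gstvideo.NewVideoTimeCode(
		30000, 1001, // fpsN, fpsD: 29.97 fps
		nil, // latestDailyJam: none
		gstvideo.VideoTimeCodeFlagsDropFrame,
		0, 0, 59, 28, // hours, minutes, seconds, frames
		0, // fieldCount: progressive
	)
	tc.AddFrames(2)          // crossing the minute skips frames 00 and 01
	fmt.Println(tc.String()) // prints "00:01:00;02"
}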

func (*VideoTimeCode) Clear

func (tc *VideoTimeCode) Clear()

Clear initializes tc with empty/zero/NULL values and frees any memory it might currently use.

func (*VideoTimeCode) Compare

func (tc1 *VideoTimeCode) Compare(tc2 *VideoTimeCode) int

Compare compares tc1 and tc2. If both have latest daily jam information, it is taken into account. Otherwise, it is assumed that the daily jam of both tc1 and tc2 was at the same time. Both time codes must be valid.

The function takes the following parameters:

  • tc2: another valid VideoTimeCode.

The function returns the following values:

  • gint: 1 if tc1 is after tc2, -1 if tc1 is before tc2, 0 otherwise.

func (*VideoTimeCode) Config

func (v *VideoTimeCode) Config() *VideoTimeCodeConfig

Config: corresponding VideoTimeCodeConfig.

func (*VideoTimeCode) Copy

func (tc *VideoTimeCode) Copy() *VideoTimeCode

The function returns the following values:

  • videoTimeCode: new VideoTimeCode with the same values as tc.

func (*VideoTimeCode) FieldCount

func (v *VideoTimeCode) FieldCount() uint

FieldCount: interlaced video field count.

func (*VideoTimeCode) Frames

func (v *VideoTimeCode) Frames() uint

Frames frames field of VideoTimeCode.

func (*VideoTimeCode) FramesSinceDailyJam

func (tc *VideoTimeCode) FramesSinceDailyJam() uint64

The function returns the following values:

  • guint64: how many frames have passed since the daily jam of tc.

func (*VideoTimeCode) Hours

func (v *VideoTimeCode) Hours() uint

Hours hours field of VideoTimeCode.

func (*VideoTimeCode) IncrementFrame

func (tc *VideoTimeCode) IncrementFrame()

IncrementFrame adds one frame to tc.

func (*VideoTimeCode) Init

func (tc *VideoTimeCode) Init(fpsN uint, fpsD uint, latestDailyJam *glib.DateTime, flags VideoTimeCodeFlags, hours uint, minutes uint, seconds uint, frames uint, fieldCount uint)

Init: field_count is 0 for progressive, 1 or 2 for interlaced. latest_daily_jam reference is stolen from caller.

Initializes tc with the given values. The values are not checked for being in a valid range. To see if your timecode actually has valid content, use gst_video_time_code_is_valid().

The function takes the following parameters:

  • fpsN: numerator of the frame rate.
  • fpsD: denominator of the frame rate.
  • latestDailyJam (optional): latest daily jam of the VideoTimeCode.
  • flags: VideoTimeCodeFlags.
  • hours field of VideoTimeCode.
  • minutes field of VideoTimeCode.
  • seconds field of VideoTimeCode.
  • frames field of VideoTimeCode.
  • fieldCount: interlaced video field count.

func (*VideoTimeCode) InitFromDateTime

func (tc *VideoTimeCode) InitFromDateTime(fpsN uint, fpsD uint, dt *glib.DateTime, flags VideoTimeCodeFlags, fieldCount uint)

InitFromDateTime: resulting config->latest_daily_jam is set to midnight, and timecode is set to the given time.

Will assert on invalid parameters, use gst_video_time_code_init_from_date_time_full() for being able to handle invalid parameters.

The function takes the following parameters:

  • fpsN: numerator of the frame rate.
  • fpsD: denominator of the frame rate.
  • dt to convert.
  • flags: VideoTimeCodeFlags.
  • fieldCount: interlaced video field count.

func (*VideoTimeCode) InitFromDateTimeFull

func (tc *VideoTimeCode) InitFromDateTimeFull(fpsN uint, fpsD uint, dt *glib.DateTime, flags VideoTimeCodeFlags, fieldCount uint) bool

InitFromDateTimeFull: resulting config->latest_daily_jam is set to midnight, and timecode is set to the given time.

The function takes the following parameters:

  • fpsN: numerator of the frame rate.
  • fpsD: denominator of the frame rate.
  • dt to convert.
  • flags: VideoTimeCodeFlags.
  • fieldCount: interlaced video field count.

The function returns the following values:

  • ok: TRUE if tc could be correctly initialized to a valid timecode.

func (*VideoTimeCode) IsValid

func (tc *VideoTimeCode) IsValid() bool

The function returns the following values:

  • ok: whether tc is a valid timecode (supported frame rate, hours/minutes/seconds/frames not overflowing).

func (*VideoTimeCode) Minutes

func (v *VideoTimeCode) Minutes() uint

Minutes minutes field of VideoTimeCode.

func (*VideoTimeCode) NsecSinceDailyJam

func (tc *VideoTimeCode) NsecSinceDailyJam() uint64

The function returns the following values:

  • guint64: how many nsec have passed since the daily jam of tc.

func (*VideoTimeCode) Seconds

func (v *VideoTimeCode) Seconds() uint

Seconds seconds field of VideoTimeCode.

func (*VideoTimeCode) SetFieldCount

func (v *VideoTimeCode) SetFieldCount(fieldCount uint)

FieldCount: interlaced video field count.

func (*VideoTimeCode) SetFrames

func (v *VideoTimeCode) SetFrames(frames uint)

Frames frames field of VideoTimeCode.

func (*VideoTimeCode) SetHours

func (v *VideoTimeCode) SetHours(hours uint)

Hours hours field of VideoTimeCode.

func (*VideoTimeCode) SetMinutes

func (v *VideoTimeCode) SetMinutes(minutes uint)

Minutes minutes field of VideoTimeCode.

func (*VideoTimeCode) SetSeconds

func (v *VideoTimeCode) SetSeconds(seconds uint)

Seconds seconds field of VideoTimeCode.

func (*VideoTimeCode) String

func (tc *VideoTimeCode) String() string

The function returns the following values:

  • utf8: SMPTE ST 2059-1:2015 string representation of tc, of the form hh:mm:ss:ff. The last separator (between seconds and frames) may vary:

      ';' for drop-frame, non-interlaced content and for drop-frame interlaced field 2
      ',' for drop-frame interlaced field 1
      ':' for non-drop-frame, non-interlaced content and for non-drop-frame interlaced field 2
      '.' for non-drop-frame interlaced field 1

func (*VideoTimeCode) ToDateTime

func (tc *VideoTimeCode) ToDateTime() *glib.DateTime

ToDateTime: tc.config->latest_daily_jam is required to be non-NULL.

The function returns the following values:

  • dateTime (optional) representation of tc or NULL if tc has no daily jam.

type VideoTimeCodeConfig

type VideoTimeCodeConfig struct {
	// contains filtered or unexported fields
}

VideoTimeCodeConfig: supported frame rates: 30000/1001, 60000/1001 (both with and without drop frame), and integer frame rates e.g. 25/1, 30/1, 50/1, 60/1.

The configuration of the time code.

An instance of this type is always passed by reference.

func (*VideoTimeCodeConfig) FPSD

func (v *VideoTimeCodeConfig) FPSD() uint

FPSD: denominator of the frame rate.

func (*VideoTimeCodeConfig) FPSN

func (v *VideoTimeCodeConfig) FPSN() uint

FPSN: numerator of the frame rate.

func (*VideoTimeCodeConfig) Flags

func (v *VideoTimeCodeConfig) Flags() VideoTimeCodeFlags

Flags: corresponding VideoTimeCodeFlags.

func (*VideoTimeCodeConfig) LatestDailyJam

func (v *VideoTimeCodeConfig) LatestDailyJam() *glib.DateTime

LatestDailyJam: latest daily jam information, if present, or NULL.

func (*VideoTimeCodeConfig) SetFPSD

func (v *VideoTimeCodeConfig) SetFPSD(fpsD uint)

FPSD: denominator of the frame rate.

func (*VideoTimeCodeConfig) SetFPSN

func (v *VideoTimeCodeConfig) SetFPSN(fpsN uint)

FPSN: numerator of the frame rate.

type VideoTimeCodeFlags

type VideoTimeCodeFlags C.guint

VideoTimeCodeFlags flags related to the time code information. For drop frame, only 30000/1001 and 60000/1001 frame rates are supported.

const (
	// VideoTimeCodeFlagsNone: no flags.
	VideoTimeCodeFlagsNone VideoTimeCodeFlags = 0b0
	// VideoTimeCodeFlagsDropFrame: whether we have drop frame rate.
	VideoTimeCodeFlagsDropFrame VideoTimeCodeFlags = 0b1
	// VideoTimeCodeFlagsInterlaced: whether we have interlaced video.
	VideoTimeCodeFlagsInterlaced VideoTimeCodeFlags = 0b10
)

func (VideoTimeCodeFlags) Has

func (v VideoTimeCodeFlags) Has(other VideoTimeCodeFlags) bool

Has returns true if v contains other.

func (VideoTimeCodeFlags) String

func (v VideoTimeCodeFlags) String() string

String returns the names in string for VideoTimeCodeFlags.

type VideoTimeCodeInterval

type VideoTimeCodeInterval struct {
	// contains filtered or unexported fields
}

VideoTimeCodeInterval: representation of a difference between two VideoTimeCode instances. Will not necessarily correspond to a real timecode (e.g. 00:00:10;00).

An instance of this type is always passed by reference.

func NewVideoTimeCodeInterval

func NewVideoTimeCodeInterval(hours uint, minutes uint, seconds uint, frames uint) *VideoTimeCodeInterval

NewVideoTimeCodeInterval constructs a struct VideoTimeCodeInterval.

func NewVideoTimeCodeIntervalFromString

func NewVideoTimeCodeIntervalFromString(tcInterStr string) *VideoTimeCodeInterval

NewVideoTimeCodeIntervalFromString constructs a struct VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) Clear

func (tc *VideoTimeCodeInterval) Clear()

Clear initializes tc with empty/zero/NULL values.

func (*VideoTimeCodeInterval) Copy

func (tc *VideoTimeCodeInterval) Copy() *VideoTimeCodeInterval

The function returns the following values:

  • videoTimeCodeInterval: new VideoTimeCodeInterval with the same values as tc.

func (*VideoTimeCodeInterval) Frames

func (v *VideoTimeCodeInterval) Frames() uint

Frames frames field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) Hours

func (v *VideoTimeCodeInterval) Hours() uint

Hours hours field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) Init

func (tc *VideoTimeCodeInterval) Init(hours uint, minutes uint, seconds uint, frames uint)

Init initializes tc with the given values.

The function takes the following parameters:

  • hours field of VideoTimeCodeInterval.
  • minutes field of VideoTimeCodeInterval.
  • seconds field of VideoTimeCodeInterval.
  • frames field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) Minutes

func (v *VideoTimeCodeInterval) Minutes() uint

Minutes minutes field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) Seconds

func (v *VideoTimeCodeInterval) Seconds() uint

Seconds seconds field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) SetFrames

func (v *VideoTimeCodeInterval) SetFrames(frames uint)

Frames frames field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) SetHours

func (v *VideoTimeCodeInterval) SetHours(hours uint)

Hours hours field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) SetMinutes

func (v *VideoTimeCodeInterval) SetMinutes(minutes uint)

Minutes minutes field of VideoTimeCodeInterval.

func (*VideoTimeCodeInterval) SetSeconds

func (v *VideoTimeCodeInterval) SetSeconds(seconds uint)

Seconds seconds field of VideoTimeCodeInterval.

type VideoTimeCodeMeta

type VideoTimeCodeMeta struct {
	// contains filtered or unexported fields
}

VideoTimeCodeMeta: extra buffer metadata describing the GstVideoTimeCode of the frame.

Each frame is assumed to have its own timecode, i.e. they are not automatically incremented/interpolated.

An instance of this type is always passed by reference.

func BufferAddVideoTimeCodeMeta

func BufferAddVideoTimeCodeMeta(buffer *gst.Buffer, tc *VideoTimeCode) *VideoTimeCodeMeta

BufferAddVideoTimeCodeMeta attaches VideoTimeCodeMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • tc: VideoTimeCode.

The function returns the following values:

  • videoTimeCodeMeta (optional) on buffer, or (since 1.16) NULL if the timecode was invalid.

func BufferAddVideoTimeCodeMetaFull

func BufferAddVideoTimeCodeMetaFull(buffer *gst.Buffer, fpsN, fpsD uint, latestDailyJam *glib.DateTime, flags VideoTimeCodeFlags, hours, minutes, seconds, frames, fieldCount uint) *VideoTimeCodeMeta

BufferAddVideoTimeCodeMetaFull attaches VideoTimeCodeMeta metadata to buffer with the given parameters.

The function takes the following parameters:

  • buffer: Buffer.
  • fpsN: framerate numerator.
  • fpsD: framerate denominator.
  • latestDailyJam: DateTime for the latest daily jam.
  • flags: VideoTimeCodeFlags.
  • hours since the daily jam.
  • minutes since the daily jam.
  • seconds since the daily jam.
  • frames since the daily jam.
  • fieldCount fields since the daily jam.

The function returns the following values:

  • videoTimeCodeMeta on buffer, or (since 1.16) NULL if the timecode was invalid.
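
A sketch of stamping captured frames, assuming the gst and gstvideo imports; passing a nil daily jam is assumed to be accepted here, mirroring the optional latestDailyJam of VideoTimeCode.Init above.

// stampFrame attaches a 25 fps timecode to a captured buffer.
func stampFrame(buf *gst.Buffer, h, m, s, f uint) *gstvideo.VideoTimeCodeMeta {
	// A nil result (since 1.16) signals that the timecode was invalid.
	return gstvideo.BufferAddVideoTimeCodeMetaFull(
		buf,
		25, 1, // fpsN, fpsD: 25 fps
		nil, // latestDailyJam (assumed optional, as in VideoTimeCode.Init)
		gstvideo.VideoTimeCodeFlagsNone,
		h, m, s, f, // hours, minutes, seconds, frames since the daily jam
		0, // fieldCount: progressive
	)
}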

func (*VideoTimeCodeMeta) Meta

func (v *VideoTimeCodeMeta) Meta() *gst.Meta

Meta: parent Meta.

func (*VideoTimeCodeMeta) Tc

func (v *VideoTimeCodeMeta) Tc() *VideoTimeCode

Tc: the GstVideoTimeCode attached to the frame.

type VideoTransferFunction

type VideoTransferFunction C.gint

VideoTransferFunction: video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB.

const (
	// VideoTransferUnknown: unknown transfer function.
	VideoTransferUnknown VideoTransferFunction = iota
	// VideoTransferGamma10: linear RGB, gamma 1.0 curve.
	VideoTransferGamma10
	// VideoTransferGamma18: gamma 1.8 curve.
	VideoTransferGamma18
	// VideoTransferGamma20: gamma 2.0 curve.
	VideoTransferGamma20
	// VideoTransferGamma22: gamma 2.2 curve.
	VideoTransferGamma22
	// VideoTransferBt709: gamma 2.2 curve with a linear segment in the lower
	// range, also ITU-R BT470M / ITU-R BT1700 625 PAL & SECAM / ITU-R BT1361.
	VideoTransferBt709
	// VideoTransferSmpte240M: gamma 2.2 curve with a linear segment in the
	// lower range.
	VideoTransferSmpte240M
	// VideoTransferSrgb: gamma 2.4 curve with a linear segment in the lower
	// range. IEC 61966-2-1 (sRGB or sYCC).
	VideoTransferSrgb
	// VideoTransferGamma28: gamma 2.8 curve, also ITU-R BT470BG.
	VideoTransferGamma28
	// VideoTransferLog100: logarithmic transfer characteristic 100:1 range.
	VideoTransferLog100
	// VideoTransferLog316: logarithmic transfer characteristic 316.22777:1
	// range (100 * sqrt(10) : 1).
	VideoTransferLog316
	// VideoTransferBt202012: gamma 2.2 curve with a linear segment in the lower
	// range. Used for BT.2020 with 12 bits per component. Since: 1.6.
	VideoTransferBt202012
	// VideoTransferAdobergb: gamma 2.19921875. Since: 1.8.
	VideoTransferAdobergb
	// VideoTransferBt202010: Rec. ITU-R BT.2020-2 with 10 bits per component
	// (functionally the same as the values GST_VIDEO_TRANSFER_BT709 and
	// GST_VIDEO_TRANSFER_BT601). Since: 1.18.
	VideoTransferBt202010
	// VideoTransferSmpte2084: SMPTE ST 2084 for 10, 12, 14, and 16-bit systems.
	// Known as perceptual quantization (PQ). Since: 1.18.
	VideoTransferSmpte2084
	// VideoTransferAribStdB67: Association of Radio Industries and Businesses
	// (ARIB) STD-B67 and Rec. ITU-R BT.2100-1 hybrid log-gamma (HLG) system.
	// Since: 1.18.
	VideoTransferAribStdB67
	// VideoTransferBt601: also known as SMPTE170M / ITU-R BT1358 525 or 625 /
	// ITU-R BT1700 NTSC.
	VideoTransferBt601
)

func VideoTransferFunctionFromISO

func VideoTransferFunctionFromISO(value uint) VideoTransferFunction

VideoTransferFunctionFromISO converts the value to a VideoTransferFunction. The transfer characteristics (TransferCharacteristics) value is defined by "ISO/IEC 23001-8 Section 7.2 Table 3" and "ITU-T H.273 Table 3". "H.264 Table E-4" and "H.265 Table E.4" share the identical values.

The function takes the following parameters:

  • value: ITU-T H.273 transfer characteristics value.

The function returns the following values:

  • videoTransferFunction: matched VideoTransferFunction.
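
For example, assuming the fmt and gstvideo imports (per ITU-T H.273, code 1 is the BT.709 transfer and code 16 is SMPTE ST 2084/PQ):

func main() {
	fmt.Println(gstvideo.VideoTransferFunctionFromISO(1))  // BT.709 curve
	fmt.Println(gstvideo.VideoTransferFunctionFromISO(16)) // SMPTE ST 2084 (PQ)
}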

func (VideoTransferFunction) String

func (v VideoTransferFunction) String() string

String returns the name in string for VideoTransferFunction.

type VideoVBIEncoder

type VideoVBIEncoder struct {
	// contains filtered or unexported fields
}

VideoVBIEncoder: encoder for writing ancillary data to the Vertical Blanking Interval lines of component signals.

An instance of this type is always passed by reference.

func NewVideoVBIEncoder

func NewVideoVBIEncoder(format VideoFormat, pixelWidth uint32) *VideoVBIEncoder

NewVideoVBIEncoder constructs a struct VideoVBIEncoder.

func (*VideoVBIEncoder) AddAncillary

func (encoder *VideoVBIEncoder) AddAncillary(composite bool, DID byte, SDIDBlockNumber byte, data []byte) bool

AddAncillary stores Video Ancillary data, according to the SMPTE 291M specification.

Note that the contents of the data are always read as 8-bit data (i.e. they do not contain the parity check bits).

The function takes the following parameters:

  • composite: TRUE if composite ADF should be created, component otherwise.
  • DID: data Identifier.
  • SDIDBlockNumber: secondary Data Identifier (if type 2) or the Data Block Number (if type 1).
  • data: user data content of the Ancillary packet. Does not contain the ADF, DID, SDID nor CS.

The function returns the following values:

  • ok: TRUE if enough space was left in the current line, FALSE otherwise.
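
A sketch of packing one ancillary packet into a line, assuming the gstvideo import. VideoFormatV210 as an accepted format, the SMPTE 334-1 DID/SDID pair 0x61/0x01 (CEA-708 captions) and the v210 line layout are illustrative assumptions, not guarantees of this API.

// encodeLine builds one 1920-pixel VBI line carrying a single packet.
func encodeLine(payload []byte) []byte {
	enc := gstvideo.NewVideoVBIEncoder(gstvideo.VideoFormatV210, 1920)
	// payload is user data only: no ADF, DID, SDID or CS words.
	if !enc.AddAncillary(false /* component ADF */, 0x61, 0x01, payload) {
		return nil // not enough space left in the line
	}
	line := make([]byte, 1920/6*16) // v210 packs 6 pixels into 16 bytes
	enc.WriteLine(&line[0])         // serialize the pending packets
	return line
}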

func (*VideoVBIEncoder) Copy

func (encoder *VideoVBIEncoder) Copy() *VideoVBIEncoder

The function returns the following values:

  • videoVBIEncoder: copy of encoder.

func (*VideoVBIEncoder) WriteLine

func (encoder *VideoVBIEncoder) WriteLine(data *byte)

WriteLine writes the ancillary data previously added with AddAncillary into the given line of video data.

The function takes the following parameters:

  • data: line of video data to write into.

type VideoVBIParser

type VideoVBIParser struct {
	// contains filtered or unexported fields
}

VideoVBIParser: parser for detecting and extracting GstVideoAncillary data from Vertical Blanking Interval lines of component signals.

An instance of this type is always passed by reference.

func NewVideoVBIParser

func NewVideoVBIParser(format VideoFormat, pixelWidth uint32) *VideoVBIParser

NewVideoVBIParser constructs a struct VideoVBIParser.

func (*VideoVBIParser) Ancillary

func (parser *VideoVBIParser) Ancillary() (*VideoAncillary, VideoVBIParserResult)

Ancillary: parse the line provided previously by gst_video_vbi_parser_add_line().

The function returns the following values:

  • anc: VideoAncillary filled with the detected ancillary data, if any.
  • videoVBIParserResult: GST_VIDEO_VBI_PARSER_RESULT_OK if ancillary data was found and anc was filled. GST_VIDEO_VBI_PARSER_RESULT_DONE if there wasn't any data.
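
A sketch of the drain loop, assuming the gstvideo import; the line itself is supplied beforehand via the gst_video_vbi_parser_add_line() binding referenced above, which lies outside this excerpt.

// drainAncillaries collects every packet found on the previously added line.
func drainAncillaries(parser *gstvideo.VideoVBIParser) []*gstvideo.VideoAncillary {
	var out []*gstvideo.VideoAncillary
	for {
		anc, res := parser.Ancillary()
		if res != gstvideo.VideoVbiParserResultOK {
			break // Done: no more packets; Error: malformed data
		}
		out = append(out, anc)
	}
	return out
}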

func (*VideoVBIParser) Copy

func (parser *VideoVBIParser) Copy() *VideoVBIParser

The function returns the following values:

  • videoVBIParser: copy of parser.

type VideoVBIParserResult

type VideoVBIParserResult C.gint

VideoVBIParserResult: return values for VideoVBIParser.

const (
	// VideoVbiParserResultDone: no line was provided, or no more ancillary
	// data was found.
	VideoVbiParserResultDone VideoVBIParserResult = iota
	// VideoVbiParserResultOK: ancillary data was found.
	VideoVbiParserResultOK
	// VideoVbiParserResultError: error occurred.
	VideoVbiParserResultError
)

func (VideoVBIParserResult) String

func (v VideoVBIParserResult) String() string

String returns the name in string for VideoVBIParserResult.
