Custom Pipelines

IP camera streams can be configured with their own custom GStreamer pipelines, allowing for rich configuration of how the stream is processed. This section will not explain the intricacies of GStreamer pipelines, as the official website provides excellent documentation on how they work. Instead, this section provides a few example pipelines.

BrainFrame does quite a bit of work in the background to ensure that many different IP camera types are supported seamlessly. When using custom pipelines, more intimate knowledge of the IP camera stream is required compared to using BrainFrame normally.

Note that all custom pipelines:

  • Must include a {url} template field. This is where the specified IP camera URL will be inserted into the pipeline.
  • Must have an appsink element named "main_sink". This is where frames will be extracted from the pipeline for processing.
  • May optionally include an element named "buffer_src". This is required for frame skipping to work with custom pipelines. This name should be given to an element in the pipeline that sections off frame data from the network before decoding, like rtph264depay.
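The requirements above are simple string-level constraints, so they can be sanity-checked before a pipeline is submitted. The following is an illustrative Python sketch; the function name and checks are not part of BrainFrame's API, and BrainFrame performs its own validation.

```python
def validate_custom_pipeline(pipeline: str) -> list:
    """Return a list of problems found in a custom pipeline string.

    Illustrative helper only; BrainFrame does its own validation.
    """
    problems = []
    # The {url} template field is where the camera URL is substituted.
    if "{url}" not in pipeline:
        problems.append('missing "{url}" template field')
    # Frames are pulled from an appsink element named "main_sink".
    if "appsink" not in pipeline or 'name="main_sink"' not in pipeline:
        problems.append('missing appsink named "main_sink"')
    # buffer_src is optional, but frame skipping requires it.
    if 'name="buffer_src"' not in pipeline:
        problems.append('no "buffer_src" element (frame skipping disabled)')
    return problems

pipeline = (
    'rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! decodebin '
    '! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"'
)
```

Running the helper against one of the example pipelines below returns an empty list, since all three elements are present.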

To specify a custom pipeline, check the "Advanced Options" checkbox in the stream creation window and enter your pipeline into the "Pipeline" textbox.

Example Pipelines

Cropping the Video Stream

For composite video streams or for scenes that contain uninteresting sections, one may want to crop the video stream before processing. Here is an example of a custom pipeline to accomplish this for an H264 RTSP stream:

rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! decodebin ! videocrop top=x left=x right=x bottom=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"

This pipeline uses the videocrop element to crop the video by a configurable amount. The "x" values should be replaced with the number of pixels to crop from each side of the frame.
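Since videocrop removes pixels from each edge, the output frame size is the input size minus the left/right and top/bottom crop values. A small sketch of that arithmetic (the function is illustrative, not part of BrainFrame):

```python
def cropped_size(width, height, top=0, left=0, right=0, bottom=0):
    """Compute the frame size after videocrop removes pixels from each edge."""
    new_w = width - left - right
    new_h = height - top - bottom
    if new_w <= 0 or new_h <= 0:
        raise ValueError("crop values remove the entire frame")
    return new_w, new_h

# A 1920x1080 frame with 100 px cropped from the left and right
# yields a 1720x1080 frame.
```

Choosing crop values that consume the whole frame (or more) will cause the pipeline to fail, so it is worth checking against the camera's resolution first.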

Lower Latency Streaming

By default, BrainFrame buffers frames in order to provide a more stable streaming experience. To reduce or disable this buffering, try the pipeline below:

rtspsrc location="{url}" latency=X ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"

The latency value is in milliseconds. Replace the "X" in latency=X with 0 to disable buffering entirely, or with a small value to trade some latency for stability.
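One way to manage this is to keep the pipeline as a template and fill in the latency while leaving the {url} field for BrainFrame to substitute. A hedged Python sketch (the template constant and function are illustrative):

```python
# Template for the low-latency pipeline; {{url}} survives .format() as {url},
# which BrainFrame fills in with the camera URL.
LOW_LATENCY_TEMPLATE = (
    'rtspsrc location="{{url}}" latency={latency} ! rtph264depay name="buffer_src" '
    '! decodebin ! videoconvert ! video/x-raw,format=(string)BGR '
    '! appsink name="main_sink"'
)

def low_latency_pipeline(latency_ms: int = 0) -> str:
    """Fill in the rtspsrc latency (milliseconds), keeping {url} for BrainFrame."""
    if latency_ms < 0:
        raise ValueError("latency must be a non-negative number of milliseconds")
    return LOW_LATENCY_TEMPLATE.format(latency=latency_ms)
```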

Rotating the Video Stream

rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! avdec_h264 ! videoconvert ! videoflip video-direction=x ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"

The "x" value selects the rotation. videoflip's video-direction property takes enum values rather than plain degrees: use identity for no rotation, 90r for 90° clockwise, 180, or 90l for 90° counterclockwise.
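If you think of the rotation in degrees, a small helper can translate to the video-direction value. This assumes the standard GstVideoDirection enum nicks and only supports 90° steps:

```python
# Illustrative mapping from clockwise rotation in degrees to the
# videoflip video-direction enum nick (assumed from GStreamer's docs).
DEGREES_TO_DIRECTION = {
    0: "identity",   # no rotation
    90: "90r",       # 90 degrees clockwise
    180: "180",
    270: "90l",      # 90 degrees counterclockwise
}

def rotation_value(degrees: int) -> str:
    """Translate a clockwise rotation in degrees to a video-direction value."""
    try:
        return DEGREES_TO_DIRECTION[degrees % 360]
    except KeyError:
        raise ValueError("videoflip only rotates in 90-degree steps") from None
```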

(Experimental) Hardware Accelerated Decoding

A custom pipeline may be used to enable experimental support for hardware accelerated video decoding. This pipeline is only known to work on machines with Intel CPUs. A discrete GPU may interfere with this pipeline.

rtspsrc location="{url}" ! rtph264depay name="buffer_src" ! vaapidecodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"

If the client reports an error like this:

gi.repository.GLib.GError: gst_parse_error: no element "vaapidecodebin" (1)

The error suggests that the necessary hardware video decoding drivers cannot be found on your system. If you haven't already, run the provided install.sh script to ensure that the drivers are installed.

bash install.sh

If this doesn't fix the problem, your hardware may not be supported. If you believe that your hardware should be supported, please make a post on the BrainFrame forum with information on your hardware configuration and the error message you get.