Display Image with GStreamer
This tutorial introduces how to build and run a simple GStreamer pipeline to capture and display images from a camera.
While you can construct and execute pipelines for video capture and processing in any application through the GStreamer API, this tutorial demonstrates examples using the `gst-launch-1.0` console tool and the GStreamer integration in OpenCV, both of which allow intuitive pipeline construction as a string.
Tutorial
Prerequisites
- Sensing-Dev SDK with GStreamer plugins and tools
  - Windows: Install the C++ version of sensing-dev with `-InstallGstTools -InstallGstPlugins`. Make sure `GST_PLUGIN_PATH` is set.
  - Linux: Install the C++ version of sensing-dev with `--install-gst-tools --install-gst-plugin`. Set `GST_PLUGIN_PATH`.
- (When using OpenCV) OpenCV-Python with the GStreamer backend. Setup instructions can be found here.

:::note
On Windows, the installation script automatically sets the environment variables. On Linux, you need to configure the following environment variables manually:

export GST_PLUGIN_PATH=/opt/sensing-dev/lib/x86_64-linux-gnu/gstreamer-1.0
export LD_LIBRARY_PATH=/opt/sensing-dev/lib:/opt/sensing-dev/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
:::
Configuration
Adjust `camera_name`, `pixelformat`, `width`, `height`, and `framerate` to match your requirements. You can obtain these values with `arv-tool-0.8`, which is included in the Sensing-Dev software package.
camera_name = 'replace-by-your-camera-name'
pixelformat = 'Mono8'
width = 1920
height = 1080
framerate = 60
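With these values, the pipeline string used throughout this tutorial can be assembled programmatically. Below is a minimal Python sketch, assuming the raw (non-Bayer) pipelines shown later in this tutorial; it omits the `gendcseparator` and `queue` stages used in some examples, and the format mapping only covers Mono8/Mono16 cameras.

# Minimal sketch: build the pipeline string from the configuration values above.
# Assumption: pixelformat is Mono8 or Mono16; Bayer formats need a different caps string.
gst_format = 'GRAY8' if pixelformat == 'Mono8' else 'GRAY16_LE'
pipeline = (
    f'aravissrc camera-name="{camera_name}" ! '
    f'video/x-raw,format={gst_format},width={width},height={height},framerate={framerate}/1 ! '
    'videoconvert ! autovideosink'
)
print(pipeline)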
Build a pipeline
In a GStreamer pipeline, elements are connected using `!` to process and pass data through the pipeline.
| Element | Description | Parameters |
| --- | --- | --- |
| aravissrc | camera data acquisition | camera-name |
| videoconvert | convert from raw to image | |
| autovideosink | display | |
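If you just want to confirm that GStreamer itself runs before connecting a camera, the standard `videotestsrc` element (part of stock GStreamer, not of Sensing-Dev) can display a test pattern:

gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink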
The pipeline is connected to the camera by placing the `aravissrc` element at the beginning. Additionally, formats can be specified using `video/x-raw` or `video/x-bayer`.
Here are a few examples:
Example for color images in Bayer format:

- The camera image format is specified using `video/x-bayer` to define the Bayer format.
- The Bayer format is converted to RGB using `bayer2rgb`.
- To ensure compatibility, the image format is adjusted using `videoconvert`.
Color (Bayer) (8-bit):

aravissrc camera-name="<camera-name>" ! gendcseparator ! queue ! video/x-bayer,format=bggr,width=1920,height=1080,framerate=60/1 ! bayer2rgb ! videoconvert ! autovideosink
Example for grayscale images:

- Raw formats such as `GRAY8` (8-bit) or `GRAY16_LE` (16-bit) are supported with `video/x-raw`.
- The image format is adjusted using `videoconvert`.
Grayscale (8-bit):

aravissrc camera-name="<camera-name>" ! gendcseparator ! queue ! video/x-raw,format=GRAY8,width=1920,height=1080,framerate=60/1 ! videoconvert ! autovideosink
Grayscale (16-bit):

aravissrc camera-name="<camera-name>" ! gendcseparator ! queue ! video/x-raw,format=GRAY16_LE,width=1920,height=1080,framerate=60/1 ! videoconvert ! autovideosink
Currently, this tutorial (GStreamer 1.20.3) does not support BayerBG10 and BayerBG12, since the `bayer2rgb` element only supports 8-bit formats.
Execute the pipeline
The pipeline constructed above can be executed on the console using the `gst-launch-1.0` command-line tool.
Alternatively, by replacing `autovideosink` with the `appsink` element, the Python OpenCV API becomes available, allowing the captured images to be displayed using OpenCV.
Option 1. Using gst-launch
When you install Sensing-Dev with the gst-plugins and tools options enabled, the `gst-launch-1.0` console tool becomes available.
Type `gst-launch-1.0` on the command line followed by the pipeline you constructed above; the images captured from the camera will be displayed until you press `Ctrl-C`.
e.g.

gst-launch-1.0 aravissrc camera-name="<camera-name>" ! video/x-raw,format=GRAY8,width=1920,height=1080,framerate=60/1 ! videoconvert ! autovideosink
Option 2. Output to OpenCV

By using `appsink` instead of `autovideosink`, the OpenCV API can access the processed video frames.
You will need opencv-python built with the GStreamer backend; the setup instructions can be found here.
Additionally, if you are using Windows, you need to add the Sensing-Dev bin directory to the DLL search path so that opencv-python can find GStreamer, as follows:
import os
if os.name == 'nt':  # Windows: let opencv-python find the GStreamer DLLs
    os.add_dll_directory(os.path.join(os.environ["SENSING_DEV_ROOT"], "bin"))
import cv2
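To confirm that your opencv-python build actually includes the GStreamer backend, you can inspect OpenCV's build information. This quick check is an addition to the tutorial, not a required step:

# The build summary should contain a line like "GStreamer: YES";
# "NO" means this OpenCV build cannot open GStreamer pipelines.
print([line for line in cv2.getBuildInformation().splitlines() if 'GStreamer' in line])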
Please construct the pipeline as a string as shown below, and make sure that the last element is `appsink`.
pipeline = 'aravissrc camera-name="<camera-name>" ! video/x-bayer,format=bggr,width=1920,height=1080,framerate=60/1 ! bayer2rgb ! videoconvert ! appsink'
In the source code, the line `cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)` connects OpenCV to the GStreamer pipeline, enabling OpenCV to capture video frames.
The output of `cap.read()` consists of two parts: `ret`, which indicates whether the read operation was successful, and `frame`, which contains the actual captured data.
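For instance, opening the pipeline and verifying that it succeeded might look like the sketch below; the `isOpened()` check is an extra safeguard added here, not part of the original tutorial:

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    # Typical causes: OpenCV built without GStreamer, a typo in the
    # pipeline string, or GST_PLUGIN_PATH not pointing at the plugins.
    raise RuntimeError('Failed to open the GStreamer pipeline')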
By reading and displaying `frame` in a `while` loop, you can continuously display the captured images.
user_input = -1
while user_input == -1:
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame")
        break
    cv2.imshow('opencv with gstreamer test', frame)
    user_input = cv2.waitKeyEx(1)  # returns -1 while no key is pressed
Do not forget to destroy the windows and release the camera after the loop.
cv2.destroyAllWindows()
cap.release()
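Putting the pieces together, a condensed version of the whole flow might look like the following sketch; the camera name and caps are placeholders to replace with your own configuration:

import os
if os.name == 'nt':  # Windows: let opencv-python find the GStreamer DLLs
    os.add_dll_directory(os.path.join(os.environ["SENSING_DEV_ROOT"], "bin"))
import cv2

pipeline = ('aravissrc camera-name="<camera-name>" ! '
            'video/x-raw,format=GRAY8,width=1920,height=1080,framerate=60/1 ! '
            'videoconvert ! appsink')

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError('Failed to open the GStreamer pipeline')

user_input = -1
while user_input == -1:
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame")
        break
    cv2.imshow('opencv with gstreamer test', frame)
    user_input = cv2.waitKeyEx(1)

cv2.destroyAllWindows()
cap.release()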
Complete code
The complete code used in this tutorial is available here.