Creating a GStreamer Multimedia Pipeline With C++, Part 2

Karthick Panner Selvam
Published in Analytics Vidhya · May 12, 2020

To get the most out of this article, please read my previous post first.

If you are working on Edge AI or computer vision projects, you will surely come across the GStreamer framework, and a good knowledge of GStreamer will be of great help in your projects.

This article, part 2, covers the rest of the basic concepts required to use GStreamer, mainly:

  • GStreamer multi-threading
  • GStreamer demuxer
  • GStreamer tee
  • GStreamer queue
  • GStreamer visual
  • GStreamer resample
  • GStreamer filesink

After completing this article, you will have the knowledge needed to build a GStreamer pipeline “on the fly”.

Let’s start building a dynamic pipeline project. The source is a network video. Its audio stream is decoded and then split using a tee element (which sends everything it receives through its sink pad out of each of its source pads). One branch sends the signal to a filesink to save it as an audio file, and the other renders a video of the waveform and sends it to the screen.

Complete Pipeline Diagram:

File Structure

  • CMakeLists.txt
  • main.cpp

Walkthrough

struct CustomData {
  GstElement *pipeline;
  GstElement *source;
  GstElement *convert;
  GstElement *resample;
  GstElement *tee;
  GstElement *audio_queue;
  GstElement *wavenc;
  GstElement *sink;
  GstElement *wave_queue;
  GstElement *visual;
  GstElement *wave_convert;
  GstElement *wave_sink;
};

So far we have kept all the information we needed (pointers to GstElements, basically) as local variables. Since this tutorial (and most real applications) involves callbacks, we will group all our data in a structure for easier handling.

/* Handler for the pad-added signal */
static void pad_added_handler (GstElement *src, GstPad *pad, CustomData *data);

This is a forward reference, to be used later.

data.source = gst_element_factory_make("uridecodebin", "source");
data.tee = gst_element_factory_make("tee", "tee");
data.audio_queue = gst_element_factory_make("queue", "audio_queue");
data.convert = gst_element_factory_make("audioconvert", "convert");
data.resample = gst_element_factory_make("audioresample", "resample");
data.wavenc = gst_element_factory_make("wavenc", "wavenc");
data.sink = gst_element_factory_make("filesink", "sink");
data.wave_queue = gst_element_factory_make("queue", "wave_queue");
data.visual = gst_element_factory_make ("wavescope", "visual");
data.wave_convert = gst_element_factory_make ("videoconvert", "csp");
data.wave_sink = gst_element_factory_make ("autovideosink", "wave_sink");

We will create the elements as usual. uridecodebin will internally instantiate all the necessary elements (sources, demuxers and decoders) to turn a URI into raw audio and/or video streams. It does half of the work that playbin does. Since it contains demuxers, its source pads are not initially available and we will need to link to them on the fly.

audioconvert is useful for converting between different audio formats, making sure that this example will work on any platform, since the format produced by the audio decoder might not be the same that the audio sink expects.

audioresample is useful for converting between different audio sample rates, similarly making sure that this example will work on any platform, since the audio sample rate produced by the audio decoder might not be one that the audio sink supports.

wavescope consumes an audio signal and renders a waveform, which we display using autovideosink.
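Between creating the elements and linking them, every element (and the pipeline itself) must exist, and the elements must be added to the pipeline bin. The walkthrough skips that step; here is a minimal sketch, using the field names from the CustomData struct above (the name "test-pipeline" is my own placeholder):

/* Create the empty pipeline */
data.pipeline = gst_pipeline_new("test-pipeline");

if (!data.pipeline || !data.source || !data.convert || !data.resample ||
    !data.tee || !data.audio_queue || !data.wavenc || !data.sink ||
    !data.wave_queue || !data.visual || !data.wave_convert || !data.wave_sink) {
  g_printerr("Not all elements could be created.\n");
  return -1;
}

/* Add all elements to the pipeline; they are not linked yet */
gst_bin_add_many(GST_BIN(data.pipeline), data.source, data.convert,
    data.resample, data.tee, data.audio_queue, data.wavenc, data.sink,
    data.wave_queue, data.visual, data.wave_convert, data.wave_sink, NULL);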

if (!gst_element_link_many(data.convert, data.resample, data.tee, NULL) ||
    !gst_element_link_many(data.audio_queue, data.wavenc, data.sink, NULL) ||
    !gst_element_link_many(data.wave_queue, data.visual, data.wave_convert, data.wave_sink, NULL)) {
  g_printerr("Elements could not be linked.\n");
  gst_object_unref(data.pipeline);
  return -1;
}

Here we link the convert element to the tee, but we DO NOT link it with the source, since at this point the source contains no source pads. We just leave this branch (convert -> resample -> tee) unlinked until later.
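The source URI and the output file also need to be configured, which is done with g_object_set(). A sketch with placeholder values; the URI and file name below are illustrative, not taken from the repository:

/* Placeholder URI and output file name; substitute your own */
g_object_set(data.source, "uri", "https://example.com/video.webm", NULL);
g_object_set(data.sink, "location", "output.wav", NULL);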

GstPad *tee_audio_pad, *tee_wave_pad;
GstPad *queue_audio_pad, *queue_wave_pad;

tee_audio_pad = gst_element_get_request_pad(data.tee, "src_%u");
g_print("Obtained request pad %s for audio branch.\n", gst_pad_get_name(tee_audio_pad));
tee_wave_pad = gst_element_get_request_pad(data.tee, "src_%u");
g_print("Obtained request pad %s for wave branch.\n", gst_pad_get_name(tee_wave_pad));
queue_audio_pad = gst_element_get_static_pad(data.audio_queue, "sink");
queue_wave_pad = gst_element_get_static_pad(data.wave_queue, "sink");

if (gst_pad_link(tee_audio_pad, queue_audio_pad) != GST_PAD_LINK_OK ||
    gst_pad_link(tee_wave_pad, queue_wave_pad) != GST_PAD_LINK_OK) {
  g_printerr("Tee could not be linked.\n");
  gst_object_unref(data.pipeline);
  return -1;
}
gst_object_unref(queue_audio_pad);
gst_object_unref(queue_wave_pad);

To link Request Pads, they need to be obtained by “requesting” them from the element. An element might be able to produce different kinds of Request Pads, so, when requesting them, the desired pad template name must be provided. In the documentation for the tee element we see that it has two pad templates named “sink” (for its sink pad) and “src_%u” (for the Request Pads). We request two pads from the tee (for the audio and wave branches) with gst_element_get_request_pad().
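A side note: in GStreamer 1.20 and later, gst_element_get_request_pad() is deprecated in favor of the identically-behaved gst_element_request_pad_simple(). If you build against a recent GStreamer, the request above becomes:

tee_audio_pad = gst_element_request_pad_simple(data.tee, "src_%u");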

We then obtain the Pads from the downstream elements to which these Request Pads need to be linked. These are normal Always Pads, so we obtain them with gst_element_get_static_pad().

Finally, we link the pads with gst_pad_link(). This is the function that gst_element_link() and gst_element_link_many() use internally.
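The counterpart of requesting pads is releasing them: when you shut the pipeline down, Request Pads should be given back to the tee before the pipeline is freed. The snippets in this article do not show that teardown; a minimal sketch:

/* Release the tee's request pads and drop our references to them */
gst_element_release_request_pad(data.tee, tee_audio_pad);
gst_element_release_request_pad(data.tee, tee_wave_pad);
gst_object_unref(tee_audio_pad);
gst_object_unref(tee_wave_pad);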

g_signal_connect (data.source, "pad-added", G_CALLBACK (pad_added_handler), &data);

GSignals are a crucial point in GStreamer. They allow you to be notified (by means of a callback) when something interesting has happened. Signals are identified by a name, and each GObject has its own signals.

When our source element finally has enough information to start producing data, it will create source pads, and trigger the “pad-added” signal. At this point our callback will be called:

static void pad_added_handler (GstElement *src, GstPad *new_pad, CustomData *data) {

src is the GstElement which triggered the signal. In this example, it can only be the uridecodebin.

new_pad is the GstPad that has just been added to the src element. This is usually the pad to which we want to link.

data is the pointer we provided when attaching to the signal. In this example, we use it to pass the CustomData pointer.

GstPad *sink_pad = gst_element_get_static_pad(data->convert, "sink");

From CustomData we extract the convert element, and then retrieve its sink pad using gst_element_get_static_pad(). This is the pad to which we want to link new_pad.
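If uridecodebin creates more than one compatible pad, this handler runs once per pad and would try to link an already-connected converter. A guard worth adding at this point, as in the official GStreamer dynamic-pipelines tutorial this code follows:

/* If our converter is already linked, we have nothing to do here */
if (gst_pad_is_linked(sink_pad)) {
  g_print("We are already linked. Ignoring.\n");
  goto exit;
}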

GstCaps *new_pad_caps = NULL;
GstStructure *new_pad_struct = NULL;
const gchar *new_pad_type = NULL;

/* Check the new pad's type */
new_pad_caps = gst_pad_get_current_caps(new_pad);
new_pad_struct = gst_caps_get_structure(new_pad_caps, 0);
new_pad_type = gst_structure_get_name(new_pad_struct);
if (!g_str_has_prefix(new_pad_type, "audio/x-raw")) {
  g_print("It has type '%s' which is not raw audio. Ignoring.\n", new_pad_type);
  goto exit;
}

Now we will check the type of data this new pad is going to output, because we are only interested in pads producing audio. We have previously created a piece of pipeline which deals with audio (an audioconvert linked with an audioresample and a tee).

If the name is not audio/x-raw, this is not a decoded audio pad, and we are not interested in it.
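For reference, the exit label that these goto statements jump to is not shown in the snippets above; it releases the references the handler acquired. A sketch of what it looks like:

exit:
/* Unreference the new pad's caps, if we obtained them */
if (new_pad_caps != NULL)
  gst_caps_unref(new_pad_caps);

/* Unreference the sink pad of the convert element */
gst_object_unref(sink_pad);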

Otherwise, attempt the link:

GstPadLinkReturn ret = gst_pad_link(new_pad, sink_pad);
if (GST_PAD_LINK_FAILED(ret)) {
  g_print("Type is '%s' but link failed.\n", new_pad_type);
} else {
  g_print("Link succeeded (type '%s').\n", new_pad_type);
}

gst_pad_link() tries to link two pads. And with that we are done: the source is linked to the tee through convert and resample; the tee is linked to audio_queue and wave_queue; audio_queue is linked through the wavenc encoder to the filesink, which saves the audio as a WAV file; and wave_queue visualizes the audio waveform through wavescope and autovideosink.
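All that remains is the driver code that starts the pipeline and blocks until an error or end-of-stream, just as in part 1 of this series. A minimal sketch:

GstBus *bus;
GstMessage *msg;

/* Start playing */
gst_element_set_state(data.pipeline, GST_STATE_PLAYING);

/* Wait until error or EOS */
bus = gst_element_get_bus(data.pipeline);
msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
    static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

/* Free resources */
if (msg != NULL)
  gst_message_unref(msg);
gst_object_unref(bus);
gst_element_set_state(data.pipeline, GST_STATE_NULL);
gst_object_unref(data.pipeline);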

Build & Execute:

git clone https://github.com/karthickai/gstreamer.git
cd gstreamer/gstreamer-02
mkdir build; cd build
cmake ../
make
./gstreamer-02

Output:

Link: https://github.com/karthickai/gstreamer

Thanks for reading.

If you have any suggestions/questions kindly let us know in the comments section!


Karthick Panner Selvam
Computer Scientist, specializing in AI, IoT and Computer Vision.