Dev @ Work

take care

Archive for the ‘GStreamer’ Category

GStreamer SDK for Windows

with 2 comments


Fluendo and Collabora announced the GStreamer SDK for several platforms in November last year. I have written in the past about live streaming of WebM over HTTP using GStreamer and Node.js. I decided to see what adaptations would be required on Windows with the new SDK.

Fortunately, the GStreamer pipeline in that post mostly works. The only element missing from the SDK is tcpclientsink, which I was able to replace with udpsink. In addition, GStreamer on Windows provides some useful elements for capturing video and audio from devices:

  • dshowvideosrc
  • dshowaudiosrc

The modified source code that uses these elements and works on Windows follows. You'll need to install the Windows version of the GStreamer SDK, as well as Node.js and the express module, to execute the script below. After starting the script, access http://localhost:9001 in any browser capable of WebM playback; a media player such as VLC will also work.

var http = require('http');
var express = require('express');
var dgram = require('dgram');
var child = require('child_process');
 
var app = express();
var httpServer = http.createServer(app);
 
app.get('/', function(req, res) {
  var date = new Date();
 
  res.writeHead(200, {
    'Date':date.toUTCString(),
    'Connection':'close',
    'Cache-Control':'private',
    'Content-Type':'video/webm',
    'Server':'CustomStreamer/0.0.1',
    });
 
  var udpServer = dgram.createSocket('udp6');
    
  udpServer.on('message', function(msg, info) {
    res.write(msg);
  });
  
  udpServer.on('close', function() {
    res.end();
  });
 
  udpServer.on('error', function(error) {
    res.end();
  });
  
  udpServer.on('listening', function() {
    var cmd = 'gst-launch-0.10';
    var address = udpServer.address();
    var args =
      ['dshowvideosrc', 
      '!', 'ffmpegcolorspace',
      '!', 'vp8enc', 'speed=2',
      '!', 'queue2',
      '!', 'm.', 'dshowaudiosrc',
      '!', 'audioconvert',
      '!', 'vorbisenc',
      '!', 'queue2',
      '!', 'm.', 'webmmux', 'name=m', 'streamable=true',
      '!', 'udpsink', 'clients=localhost:'+address.port];
    var options = null;
 
    var gstMuxer = child.spawn(cmd, args, options);
 
    gstMuxer.stderr.on('data', onSpawnError);
    gstMuxer.on('exit', onSpawnExit);
 
    res.connection.on('close', function() {
      gstMuxer.kill();
      udpServer.close();
    });
  });
  
  udpServer.bind(0, 'localhost');
});
 
httpServer.listen(9001);
 
function onSpawnError(data) {
  console.log(data.toString());
}
 
function onSpawnExit(code) {
  if (code != null) {
    console.error('GStreamer error, exit code ' + code);
  }
}
 
process.on('uncaughtException', function(err) {
  console.error(err);
});

The video has no audio, though. It may have something to do with my PC; even a simple pipeline such as:

gst-launch-0.10 dshowaudiosrc ! autoaudiosink

produces the following messages and does not work:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
WARNING: from element /GstPipeline:pipeline0/GstDshowAudioSrc:dshowaudiosrc0: Can't record audio fast enough
Additional debug info:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstDshowAudioSrc:dshowaudiosrc0:
Dropped 22050 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.

I'll keep you posted when I find a fix.

Written by Devendra

January 5, 2013 at 4:14 pm

Play WebM streamed over HTTP using GStreamer’s souphttpsrc

leave a comment »


The pipeline below receives WebM video using souphttpsrc and plays it:

gst-launch souphttpsrc location=http://127.0.0.1:9001 ! matroskademux ! vp8dec ! ffmpegcolorspace ! ximagesink

For further details, check the manual page for souphttpsrc or the gst-inspect output for the element.
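
If you prefer to run the same launch line from C, gst_parse_launch can build the pipeline from the string above. A minimal sketch, not from the original post:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GMainLoop *loop;
  GError *error = NULL;

  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* same pipeline as the gst-launch line above */
  pipeline = gst_parse_launch (
      "souphttpsrc location=http://127.0.0.1:9001 ! matroskademux "
      "! vp8dec ! ffmpegcolorspace ! ximagesink", &error);
  if (pipeline == NULL)
  {
    g_printerr ("Parse error: %s\n", error->message);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* run until interrupted */
  g_main_loop_run (loop);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (GST_OBJECT (pipeline));
  return 0;
}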

Written by Devendra

October 11, 2011 at 9:18 am

Posted in GStreamer, Linux

Read and write raw PCM using GStreamer

with one comment


Embedded developers frequently need to encode or decode raw PCM audio. In this post I show some GStreamer pipelines that can help with that task.

Convert WAV to PCM

gst-launch filesrc location=file.wav ! wavparse ! audioresample ! audioconvert ! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true ! filesink location=file.pcm

For bulk conversion

ls *.wav | xargs -i -n 1 gst-launch filesrc location='{}' ! wavparse ! audioresample ! audioconvert ! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true ! filesink location='{}'.pcm

Convert PCM to WAV

gst-launch filesrc location=file.pcm ! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true ! audioconvert ! audio/x-raw-int, rate=8000, channels=1, endianness=1234, width=16, depth=16, signed=true ! wavenc ! filesink location=file.wav

Play PCM

gst-launch filesrc location=file.pcm ! audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true ! pulsesink

Use xxd to create a C array of PCM data

xxd -i file.pcm > voice.c
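
For reference, xxd -i emits a C array and a length variable named after the input file, roughly like this (the bytes and length below are illustrative):

unsigned char file_pcm[] = {
  0x52, 0x1f, 0x00, 0x01, 0x12, 0x34, /* ... one byte per octet of PCM data ... */
};
unsigned int file_pcm_len = 16000;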

Written by Devendra

October 4, 2011 at 9:29 am

Posted in GStreamer

Link dynamic pads of demuxer

leave a comment »


Demuxers do not have any source pads until they receive buffers to parse. As data becomes available, source pads are added dynamically based on the streams found.

The pad-added signal

The pad-added signal can be used to attach new elements to the pipeline once a new pad gets added. Use the g_signal_connect function to listen for pad-added. In the callback function, you can add new elements to the pipeline and link them to the demuxer based on the name of the pad. If the pad name starts with audio, for instance, you can link the element for audio playback. The state of these new elements needs to be set to GST_STATE_PLAYING.

Here’s how you can register a callback for pad-added

g_signal_connect (demux, "pad-added", (GCallback)demux_pad_added, NULL);

Here’s a sample callback function for the matroskademux element

void
demux_pad_added (GstElement* demux, GstPad* pad, gpointer user_data)
{
  char* name;
  GstPad *sinkpad;
  GstElement *tee, *sink;

  name = gst_pad_get_name(pad);

  if (strncmp(name, "audio", 5) == 0)
  {
    // link audio src pad of demuxer to sink pad of audio tee
    tee = gst_element_factory_make ("tee", "audiotee");
    sink = gst_element_factory_make ("fakesink", "audiosink");
    gst_bin_add_many (GST_BIN (pipeline), tee, sink, NULL);
    gst_element_link (tee, sink);
    sinkpad = gst_element_get_static_pad(tee, "sink");
    gst_pad_link(pad, sinkpad);
    gst_object_unref (sinkpad);
    gst_element_set_state(tee, GST_STATE_PLAYING);
    gst_element_set_state(sink, GST_STATE_PLAYING);
    g_print ("Linked pad %s of demuxer\n", name);
  }
  else if (strncmp(name, "video", 5) == 0)
  {
    // link src pad of demuxer to sink pad of video tee
    tee = gst_element_factory_make ("tee", "videotee");
    sink = gst_element_factory_make ("fakesink", "videosink");
    gst_bin_add_many (GST_BIN (pipeline), tee, sink, NULL);
    gst_element_link (tee, sink);
    sinkpad = gst_element_get_static_pad(tee, "sink");
    gst_pad_link(pad, sinkpad);
    gst_object_unref (sinkpad);
    gst_element_set_state(tee, GST_STATE_PLAYING);
    gst_element_set_state (sink, GST_STATE_PLAYING);
    g_print ("Linked pad %s of demuxer\n", name);
  }

  g_free (name);
}

The no-more-pads signal

Another signal that can be used is no-more-pads. You can check for its existence in your version of GStreamer using gst-inspect, e.g. gst-inspect avidemux. In the callback of that signal you can link new elements to the demuxer using gst_element_link_filtered. Call the function once for each type of caps. The caps parameter required by the function can be created using gst_caps_new_simple, e.g. gst_caps_new_simple ("video/x-vp8", NULL). Again, the state of these new elements needs to be set to GST_STATE_PLAYING.

Here’s how you can register a callback for no-more-pads

  g_signal_connect (demux, "no-more-pads", (GCallback)demux_no_more_pads, NULL);

Here’s a sample callback function.

void
demux_no_more_pads (GstElement* demux, gpointer user_data)
{
  GstCaps *caps;
  GstElement *tee, *sink;

  tee = gst_element_factory_make ("tee", "videotee");
  sink = gst_element_factory_make ("fakesink", "videosink");
  gst_bin_add_many (GST_BIN (pipeline), tee, sink, NULL);
  gst_element_link (tee, sink);
  caps = gst_caps_new_simple ("video/x-vp8", NULL);
  gst_element_link_filtered (demux, tee, caps);
  gst_element_set_state(tee, GST_STATE_PLAYING);
  gst_element_set_state(sink, GST_STATE_PLAYING);

  tee = gst_element_factory_make ("tee", "audiotee");
  sink = gst_element_factory_make ("fakesink", "audiosink");
  gst_bin_add_many (GST_BIN (pipeline), tee, sink, NULL);
  gst_element_link (tee, sink);
  caps = gst_caps_new_simple ("audio/x-vorbis", NULL);
  gst_element_link_filtered (demux, tee, caps);
  gst_element_set_state(tee, GST_STATE_PLAYING);
  gst_element_set_state(sink, GST_STATE_PLAYING);
}

Debugging

As usual, if you run into issues with your pipeline that you need to troubleshoot, you can try setting the environment variable GST_DEBUG to 5. GStreamer and its elements will print copious amounts of information as they execute.

export GST_DEBUG=5

Written by Devendra

July 25, 2011 at 9:14 pm

Posted in GStreamer, Linux

Video streaming using jpeg encoding

leave a comment »


Here's an example of a GStreamer pipeline that produces a less CPU-intensive, lower-latency video stream using JPEG encoding. Vorbis audio is muxed, along with the video, into a Matroska stream. I have tested this on Ubuntu 11.04.

gst-launch v4l2src decimate=3 ! video/x-raw-yuv,width=320,height=240 ! jpegenc ! queue2 ! m. alsasrc device=hw:2,0 ! audioconvert ! vorbisenc ! queue2 ! m. matroskamux name=m streamable=true ! tcpclientsink host=localhost port=9002

A server can stream it with a content type of video/x-matroska. Most browsers will not play it directly, but external plugins can be used.
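
For instance, a small C server could send the same response headers used in the fdsink post further down this page, changing only the Content-Type; a sketch, assuming client is a connected socket descriptor:

  send(client, "HTTP/1.0 200 OK\r\n", 17, 0);
  send(client, "Connection: close\r\n", 19, 0);
  send(client, "Content-Type: video/x-matroska\r\n", 32, 0);
  send(client, "\r\n", 2, 0);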

Written by Devendra

July 6, 2011 at 7:35 pm

Posted in GStreamer, HTML, RTC

Adjusting attributes of v4l2src and vp8enc elements for video conferencing

leave a comment »


Video conferencing is real time in nature, and the default encoding parameters of the vp8enc element of GStreamer are not always appropriate. Let us start with the following pipeline:

gst-launch v4l2src ! video/x-raw-rgb,width=320,height=240 ! ffmpegcolorspace ! vp8enc ! vp8dec ! ffmpegcolorspace ! ximagesink sync=false

On a PandaBoard running Ubuntu 11.04, the CPU usage is close to 100% of one core (since there are two cores, that translates to 50% overall).

Now, modify the pipeline as follows

gst-launch v4l2src decimate=3 ! video/x-raw-rgb,width=320,height=240 ! ffmpegcolorspace ! vp8enc speed=2 max-latency=2 quality=5.0 max-keyframe-distance=3 threads=5 ! vp8dec ! ffmpegcolorspace ! ximagesink sync=false

Note the decimate attribute of the v4l2src element, and the speed, max-latency, max-keyframe-distance, threads and quality attributes of the vp8enc element. With these changes the CPU usage drops to 40%, and video playback is much closer to real time.
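
When building the pipeline from C instead of gst-launch, the same attributes can be set with g_object_set. A minimal sketch, assuming videosrc and videoenc point at the v4l2src and vp8enc instances:

  // equivalents of the gst-launch attributes used above
  g_object_set (G_OBJECT (videosrc), "decimate", 3, NULL);
  g_object_set (G_OBJECT (videoenc),
      "speed", 2,
      "max-latency", 2,
      "max-keyframe-distance", 3,
      "threads", 5,
      "quality", 5.0,
      NULL);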

Written by Devendra

June 29, 2011 at 5:35 pm

Posted in GStreamer, RTC

Using the fdsink element of GStreamer

with 2 comments


The fdsink element is useful because it can be used to write data directly to a socket. In this post, we'll see how to set up a listener for client connections and stream directly to the client socket using fdsink.

Listen for incoming connections

The functions below set up a server socket to listen for incoming client connections. Once a client connects, we send the appropriate HTTP headers and call the function that will stream data to the client socket using fdsink. You can find the make_socket function in the GNU libc manual.
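
For completeness, here is a make_socket along the lines of the one in that manual: it creates a TCP socket and binds it to the given port on all interfaces.

int
make_socket (int port)
{
  int sock;
  struct sockaddr_in name;

  // create a TCP socket
  sock = socket (PF_INET, SOCK_STREAM, 0);
  if (sock < 0)
  {
    perror ("socket");
    exit (EXIT_FAILURE);
  }

  // bind it to the given port on all interfaces
  name.sin_family = AF_INET;
  name.sin_port = htons (port);
  name.sin_addr.s_addr = htonl (INADDR_ANY);
  if (bind (sock, (struct sockaddr *) &name, sizeof (name)) < 0)
  {
    perror ("bind");
    exit (EXIT_FAILURE);
  }

  return sock;
}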

gpointer
client_thread(gpointer data)
{
  int BUF_SIZE = 256;
  char buffer[BUF_SIZE+1];
  int client = (int)data;
  int ret;

  ret = read(client, buffer, BUF_SIZE);

  while(ret > 0) // read returns 0 at end of stream and -1 on error
  {
    buffer[ret] = 0;
    g_print("%s", buffer);
    if (ret > 3 && strncmp(buffer, "GET", 3) == 0)
    {
      send(client, "HTTP/1.0 200 OK\r\n", 17, 0);
      send(client, "Connection: close\r\n", 19, 0);
      send(client, "Content-Type: video/webm\r\n", 26, 0);
      send(client, "\r\n", 2, 0);

      //... create pipeline with fdsink
    }

    ret = read(client, buffer, BUF_SIZE);
  }

  return NULL;
}

gpointer
server_thread(gpointer data)
{
  int sock, client;
  struct sockaddr_in addr;
  socklen_t size;

  g_print("Server thread started\n");

  sock = make_socket(9001);

  while(1)
  {
    if (listen (sock, 1) < 0)
    {
      g_printerr ("listener failed");
      exit (EXIT_FAILURE);
    }
    size = sizeof(addr);
    client = accept(sock, (struct sockaddr *)&addr, &size);

    if (client < 0)
    {
      g_printerr ("accept failed");
      continue;
    }

    g_print("connect from host %s, port %d.\n",
      inet_ntoa(addr.sin_addr),
      ntohs(addr.sin_port));

    g_thread_create(client_thread, (gpointer)client, TRUE, NULL);
  }
}

Create listener in its own thread

The server above can be executed in its own thread (we use glib) thus

  sthread = g_thread_create(server_thread, NULL, TRUE, NULL);

Use fdsink to stream to socket

The following code snippet demonstrates how fdsink can be set up

  sink = gst_element_factory_make ("fdsink", NULL);
  g_object_set (G_OBJECT (sink), "fd", client, NULL);

Handling client removal in a dynamic pipeline

A client can disconnect without warning, and fdsink does not provide any mechanism to handle such a situation; the whole pipeline can end if a single client disconnects. Luckily, multifdsink can be used in such a scenario, as it handles client disconnection more gracefully. The num-fds property can be polled to detect that no clients remain, as sketched below. Create a multifdsink thus

  sink = gst_element_factory_make ("multifdsink", NULL);

After starting the pipeline, add a new socket fd thus

  g_signal_emit_by_name(sink, "add", client, G_TYPE_NONE);
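
The num-fds property mentioned earlier can then be read periodically to find out whether any clients remain; a minimal sketch:

  guint numfds = 0;

  // multifdsink removes clients that disconnect; when none remain, tear down the branch
  g_object_get (G_OBJECT (sink), "num-fds", &numfds, NULL);
  if (numfds == 0)
  {
    // no clients left: stop or remove this part of the pipeline
  }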

The multifdsink element has a bug that causes 100% CPU usage; this has been fixed in version 0.10.33 of GStreamer.

Headers

The following headers contain the declarations required to compile the code above

#include <gst/gst.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

That’s all there is to it.

Written by Devendra

June 14, 2011 at 5:18 pm

Posted in C and C++, GStreamer, Linux

GStreamer pipeline with Tee

with 7 comments


The tee element is useful for branching a data flow so that it can be fed to multiple elements. In this post we'll use the tee element to split live test video and audio sources after encoding, mux the output as live WebM, and stream the result using the tcpclientsink element. This procedure can be repeated several times to stream to multiple clients, the only limit being CPU and bandwidth. By encoding only once, we avoid taxing the CPU; encoding is the most intensive operation the pipeline performs. The code presented below has been tested with GStreamer 0.10.32.

Creating a pipeline with Tee

Example C code that creates a dynamic GStreamer pipeline using tee follows

  GstElement *pipeline, *videosrc, *colorspace, *videoenc,
    *videotee, *audiosrc, *conv, *audioenc, *audiotee;

  // Create elements
  pipeline = gst_pipeline_new ("tcp-streamer");
  videosrc = gst_element_factory_make ("videotestsrc", "videosrc");
  colorspace = gst_element_factory_make ("ffmpegcolorspace", "colorspace");
  videoenc = gst_element_factory_make ("vp8enc", "videoenc");
  videotee = gst_element_factory_make ("tee", "videotee");
  audiosrc = gst_element_factory_make ("autoaudiosrc", "audiosrc");
  conv = gst_element_factory_make ("audioconvert", "converter");
  audioenc = gst_element_factory_make ("vorbisenc", "audioenc");
  audiotee = gst_element_factory_make ("tee", "audiotee");

  if (!pipeline || !videosrc || !colorspace || !videoenc
    || !videotee || !audiosrc || !conv || !audioenc || !audiotee) {
    g_printerr ("One element could not be created.\n");
    return NULL;
  }

  // set the properties of elements
  g_object_set (G_OBJECT (videosrc), "horizontal-speed", 1, NULL);
  g_object_set (G_OBJECT (videosrc), "is-live", 1, NULL);
  g_object_set (G_OBJECT (videoenc), "speed", 2, NULL);

  // add all elements to the pipeline
  gst_bin_add_many (GST_BIN (pipeline),
    videosrc, colorspace, videoenc, videotee, audiosrc, conv,
    audioenc, audiotee, NULL);

  // link the elements together
  gst_element_link_many (videosrc, colorspace, videoenc,
    videotee, NULL);
  gst_element_link_many (audiosrc, conv, audioenc,
    audiotee, NULL);

Branching from a Tee on a running Pipeline

We create a sub-pipeline using a bin. Creating a new branch from the tee, on a running pipeline, can be achieved thus

  GstElement *bin, *videoq, *audioq, *muxer, *sink,
    *videotee, *audiotee;

  GstPad *sinkpadvideo, *srcpadvideo, *sinkpadaudio, *srcpadaudio;

  bin = gst_bin_new (NULL);
  videoq = gst_element_factory_make ("queue2", NULL);
  audioq = gst_element_factory_make ("queue2", NULL);
  muxer = gst_element_factory_make ("webmmux", NULL);
  sink = gst_element_factory_make ("tcpclientsink", NULL);

  if (!bin || !videoq || !audioq || !muxer || !sink) {
    g_printerr ("One element could not be created.\n");
    return FALSE;
  }

  g_object_set (G_OBJECT (muxer), "streamable", 1, NULL);

  g_object_set (G_OBJECT (sink), "port", port,
    "host", "localhost", NULL);

  gst_bin_add_many (GST_BIN (bin), videoq, audioq,
    muxer, sink, NULL);

  // link src pad of video queue to sink pad of muxer
  srcpadvideo = gst_element_get_static_pad(videoq, "src");
  sinkpadvideo = gst_element_get_request_pad(muxer, "video_%d");
  gst_pad_link(srcpadvideo, sinkpadvideo);

  // link src pad of audio queue to sink pad of muxer
  srcpadaudio = gst_element_get_static_pad(audioq, "src");
  sinkpadaudio = gst_element_get_request_pad(muxer, "audio_%d");
  gst_pad_link(srcpadaudio, sinkpadaudio);

  gst_element_link(muxer, sink);

  // Create ghost pads on the bin and link to queues
  sinkpadvideo = gst_element_get_static_pad(videoq, "sink");
  gst_element_add_pad(bin, gst_ghost_pad_new("videosink", sinkpadvideo));
  gst_object_unref(GST_OBJECT(sinkpadvideo));
  sinkpadaudio = gst_element_get_static_pad(audioq, "sink");
  gst_element_add_pad(bin, gst_ghost_pad_new("audiosink", sinkpadaudio));
  gst_object_unref(GST_OBJECT(sinkpadaudio));

  // set the new bin to PAUSE to preroll
  gst_element_set_state(bin, GST_STATE_PAUSED);

  // Request source pads from tee and sink pads from bin
  videotee = gst_bin_get_by_name (GST_BIN(pipeline), "videotee");
  srcpadvideo = gst_element_get_request_pad(videotee, "src%d");
  sinkpadvideo = gst_element_get_pad(bin, "videosink");
  audiotee = gst_bin_get_by_name (GST_BIN(pipeline), "audiotee");
  srcpadaudio = gst_element_get_request_pad(audiotee, "src%d");
  sinkpadaudio = gst_element_get_pad(bin, "audiosink");

  // Link src pad of tees to sink pads of bin
  gst_bin_add(GST_BIN(pipeline), bin);
  gst_pad_link(srcpadvideo, sinkpadvideo);
  gst_pad_link(srcpadaudio, sinkpadaudio);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

Removing the branch from a running pipeline

The following code illustrates how to remove the sub-pipeline.

  //gst_element_set_state (pipeline, GST_STATE_PAUSED);
  // pause pipeline if no more bins left
  gst_element_set_state (bin, GST_STATE_NULL);

  gst_pad_unlink(srcpadvideo, sinkpadvideo);
  gst_pad_unlink(srcpadaudio, sinkpadaudio);

  gst_element_remove_pad(videotee, srcpadvideo);
  gst_element_remove_pad(audiotee, srcpadaudio);

  gst_bin_remove(GST_BIN(pipeline), bin);

  //gst_element_set_state (pipeline, GST_STATE_PLAYING);
  // resume pipeline if there are bins left

For the curious, I cache the above pointers in a GHashTable using port number as the key.
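
A minimal sketch of that bookkeeping (the Branch struct and variable names are hypothetical, not from the original code):

typedef struct
{
  GstElement *bin;
  GstPad *srcpadvideo, *sinkpadvideo;
  GstPad *srcpadaudio, *sinkpadaudio;
} Branch;

  // key: port number, value: the pointers needed to remove the branch later
  GHashTable *branches = g_hash_table_new_full (g_direct_hash, g_direct_equal,
    NULL, g_free);

  // cache a new branch after linking it
  Branch *b = g_new0 (Branch, 1);
  b->bin = bin;
  b->srcpadvideo = srcpadvideo;
  b->sinkpadvideo = sinkpadvideo;
  b->srcpadaudio = srcpadaudio;
  b->sinkpadaudio = sinkpadaudio;
  g_hash_table_insert (branches, GINT_TO_POINTER (port), b);

  // later, look it up by port to unlink and remove the branch as shown above
  b = g_hash_table_lookup (branches, GINT_TO_POINTER (port));
  g_hash_table_remove (branches, GINT_TO_POINTER (port));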

Written by Devendra

June 6, 2011 at 3:03 pm

Posted in C and C++, GStreamer, Linux

Stream raw vorbis audio over UDP or TCP with GStreamer

with 3 comments


I have posted before about streaming vorbis audio using TCP, muxed as WebM, so that the container header provides the receiver with the necessary information about the audio stream. I have also posted about streaming raw audio using RTP over UDP. All that works fine, but I wanted to try doing the same using just UDP or TCP, without resorting to a container format or another protocol.

Using UDP

The receiver starts the pipeline first, so it can receive the right headers:

gst-launch -v udpsrc port=9001 ! vorbisdec ! audioconvert ! alsasink sync=false

The sender then starts a pipeline thus:

gst-launch -v autoaudiosrc ! audioconvert ! audioresample ! vorbisenc ! multiudpsink client="localhost:9001,localhost:9002"

I have used a multiudpsink to demonstrate that it is possible to stream to multiple receivers. If you don't have a sound input device, you can try the audiotestsrc element instead. The neat thing about using UDP is that the sender can be stopped and started again without affecting the receiver. Note the use of the sync property on the alsasink element: if you set it to true, the audio either stops playing after a while or does not begin playing at all.

If you initiate the receiver pipeline after the sender, you’ll see a message such as:

ERROR: from element /GstPipeline:pipeline0/GstVorbisDec:vorbisdec0: Could not decode stream.
Additional debug info:
gstvorbisdec.c(976): vorbis_handle_data_packet (): /GstPipeline:pipeline0/GstVorbisDec:vorbisdec0:

Using TCP

Now, one would think that replacing the udpsrc and udpsink above with tcpserversrc and tcpclientsink, respectively, would work just fine. Unfortunately, that is not so. I haven't arrived at a good explanation for it yet. I suspect it has to do with caps, so I use gdppay and gdpdepay in the pipelines below. Any GStreamer plugin hacker who can explain this difference between UDP and TCP is welcome to comment below.

The receiver can run a pipeline such as:

gst-launch -v tcpserversrc port=9001 ! gdpdepay ! vorbisdec ! audioconvert ! alsasink sync=false

The sender can then start sending the audio stream, using a pipeline such as:

gst-launch -v autoaudiosrc ! audioconvert ! audioresample ! vorbisenc ! gdppay ! tcpclientsink port=9001

One immediate advantage of using gdp with TCP is that the sender can stream data using a tcpserversink, which can be received by multiple clients using tcpclientsrc. That way, you are able to start the sender before the receiver.

The TCP pipelines above will not work with UDP. On executing the sender pipeline, the receiver prints a message such as

gstgdpdepay.c(416): gst_gdp_depay_chain (): /GstPipeline:pipeline0/GstGDPDepay:gdpdepay0:
Received a buffer without first receiving caps

I have absolutely no idea why.

Written by Devendra

June 1, 2011 at 4:32 pm

Posted in GStreamer

GStreamer pipeline in C

with 10 comments


In a previous post, we implemented live streaming of WebM to the browser using GStreamer and Node.js. In this post, we replace the GStreamer pipeline we spawned in that post with a native executable that does exactly the same thing.

Here’s the code for the pipeline in C. You can build the code using instructions in this post.

#include <gst/gst.h>
#include <glib.h>

static gboolean
bus_call (GstBus *bus, GstMessage *msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;

  switch (GST_MESSAGE_TYPE (msg)) {

    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;

    case GST_MESSAGE_ERROR: {
      gchar  *debug;
      GError *error;

      gst_message_parse_error (msg, &error, &debug);
      g_free (debug);

      g_printerr ("Error: %s\n", error->message);
      g_error_free (error);

      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }

  return TRUE;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop;

  GstElement *pipeline, *videosrc, *colorspace, *videoenc,
    *videoq, *audiosrc, *conv, *audioenc, *audioq, *muxer, *sink;

  GstBus *bus;

  /* Initialisation */
  gst_init (&argc, &argv);

  loop = g_main_loop_new (NULL, FALSE);

  /* Check input arguments */
  if (argc != 2) {
    g_printerr ("Usage: %s <port number>\n", argv[0]);
    return -1;
  }

  /* Create gstreamer elements */
  pipeline = gst_pipeline_new ("audio-player");
  videosrc = gst_element_factory_make ("videotestsrc", "videosrc");
  colorspace = gst_element_factory_make ("ffmpegcolorspace", "colorspace");
  videoenc = gst_element_factory_make ("vp8enc", "videoenc");
  videoq = gst_element_factory_make ("queue2", "videoq");
  audiosrc = gst_element_factory_make ("audiotestsrc", "audiosrc");
  conv = gst_element_factory_make ("audioconvert", "converter");
  audioenc = gst_element_factory_make ("vorbisenc", "audioenc");
  audioq = gst_element_factory_make ("queue2", "audioq");
  muxer = gst_element_factory_make ("webmmux", "mux");
  sink = gst_element_factory_make ("tcpclientsink", "sink");

  if (!pipeline || !videosrc || !colorspace || !videoenc
    || !videoq || !audiosrc || !conv || !audioenc || !audioq
    || !muxer || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  /* Set up the pipeline */

  /* we set the port number to the sink element */
  g_object_set (G_OBJECT (sink), "port", atoi(argv[1]),
    "host", "localhost", NULL);

  /* set the properties of other elements */
  g_object_set (G_OBJECT (videosrc), "horizontal-speed", 1, NULL);
  g_object_set (G_OBJECT (videosrc), "is-live", 1, NULL);
  g_object_set (G_OBJECT (videoenc), "speed", 2, NULL);
  g_object_set (G_OBJECT (audiosrc), "is-live", 1, NULL);
  g_object_set (G_OBJECT (muxer), "streamable", 1, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* we add all elements into the pipeline */
  gst_bin_add_many (GST_BIN (pipeline),
    videosrc, colorspace, videoenc, videoq, audiosrc, conv,
    audioenc, audioq, muxer, sink, NULL);

  /* we link the elements together */
  gst_element_link_many (videosrc, colorspace, videoenc,
    videoq, muxer, NULL);
  gst_element_link_many (audiosrc, conv, audioenc, audioq,
    muxer, NULL);
  gst_element_link(muxer, sink);

  /* Set the pipeline to "playing" state*/
  g_print ("Streaming to port: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Iterate */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);

  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));

  return 0;
}

To test, replace the cmd variable with the name of the executable compiled above, e.g.

cmd = './a.out';

Replace args with just one parameter, the muxPort

args = muxPort;

Then, run the Node.js script.

 

Written by Devendra

May 24, 2011 at 5:08 pm

Posted in C and C++, GStreamer, Linux
