Wearable live streaming gadget using Raspberry Pi


Introduction:

This article presents a Raspberry Pi based wearable device capable of streaming live audio and video to a dedicated server and to Android based mobile phones and tablets using GStreamer technology. The device is designed with user security in mind: in case of danger, the user can send a panic signal to the server, and with live footage of the scene the response team can react accordingly. The gadget is accessible to the user through its companion application, currently available for Android smart devices. The application provides the functionality of capturing, sharing, and publishing pictures and videos. In addition, it uses the GPS inside the smart device and sends its location to the dedicated server over the device's network on user demand.

The gadget is built around a Raspberry Pi, a programmable single-board computer that can perform multiprocessing tasks. We use the Raspberry Pi as a mediator between the camera and the end devices: it processes the data received from the camera, encodes it, builds a pipeline, and sends it via the UDP or TCP network protocol. GStreamer is used to build the pipelines for sending and receiving camera feeds.

GStreamer is an open source framework for constructing pipelines consisting of media handling elements.


Device components:

 


Hardware:

  1. Raspberry Pi B+
  2. Raspberry Pi camera
  3. USB Wi-Fi adapter
  4. SDHC card
  5. Mic + USB sound card

Software:

  1. Raspberry Pi side:
    1. Raspbian OS, GStreamer API, FFmpeg API, ALSA (Advanced Linux Sound Architecture) utilities
  2. Android side:
    1. GStreamer SDK
  3. Server side:
    1. Wowza Media Streaming Engine


Working Principle:

Audio and Video Streaming:

Streaming technology refers to sending large streams of data between systems. Because the data is too big to send in one go, it is cut into smaller packets, which are then sent sequentially. To decrease its size, the data is usually compressed first. Video streaming works on the same principle: a video is compressed and then sent in packets over a transport.
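
As a toy illustration of this packetizing idea (a sketch only, not the project's code; the address, port, and file name below are placeholders), a sender could chop a pre-compressed video file into datagram-sized chunks and push them out sequentially over UDP:

import java.io.FileInputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class PacketSender {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        InetAddress receiver = InetAddress.getByName("192.168.0.10");  // placeholder address
        byte[] chunk = new byte[1400];  // keep each packet under a typical Ethernet MTU
        try (FileInputStream in = new FileInputStream("video.h264")) {  // placeholder file
            int n;
            while ((n = > 0) {
                // each chunk becomes one UDP datagram, sent in sequence
                socket.send(new DatagramPacket(chunk, n, receiver, 8554));
            }
        }
        socket.close();
    }
}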

Methods of compressing video data:

The first is 'intra-frame' compression, which compresses every frame independently. Think of this as saving every image in the video as a JPEG image. An example compression algorithm that works this way is Motion-JPEG; other examples are DV and HuffYUV.

The second method is 'inter-frame' compression, which uses the differences between images. Starting from a full image, an inter-frame method only encodes the differences in the following frames. Some highly sophisticated algorithms have been developed over the years, of which the most used is H.264. Other examples include Theora, Xvid and DivX. Compression algorithms for video are often referred to as 'codecs'.

Methods of transmitting video data:

To transport the stream of video data packets there are many possibilities. In TCP/IP networks, a plain UDP transport is the simplest solution. The RTP protocol is a transport protocol built on top of UDP. Nowadays HTTP is also often used as a transport for streaming video.

Strengths and Weaknesses:

As UDP guarantees neither delivery nor ordering, it is only suitable where speed and minimal bandwidth are the top requirements. Usually, however, you do want the packets that make it across to arrive in the right order, and the RTP protocol provides this on top of UDP, which makes RTP better suited for transporting video streams. The HTTP protocol was never designed for streaming, but because many firewalls block everything except HTTP, it is nowadays used for almost everything, including video streaming.

When it comes to compressing video, Motion-JPEG is a common intra-frame method which simply compresses each frame to a JPEG image. This is very suitable for situations where you need fast encoding and decoding, and because it is based on single frames it also makes it easy to seek through the video. Seeking is much more difficult when the compression is inter-frame based, since this method uses the changes between sequential frames: before finding a frame at a certain position in the video, the seek method first needs to find a full frame (key frame) and from there calculate the differences up to that position. H.264 is a codec based on the differences between frames and is therefore less suited for situations where you do a lot of seeking in the video stream. When it comes to bandwidth, however, the H.264 codec is the clear winner compared to Motion-JPEG. H.264 was designed for streaming and provides many parameters to tweak the compression to specific needs.

Gstreamer:

To use the GStreamer framework, it's easiest to install it on a Linux system. In this example we are using Ubuntu, but the steps should be similar on other platforms. To make sure the framework is installed, run the following command in the terminal:

sudo apt-get install gstreamer1.0-tools \
  gstreamer1.0-plugins-base \
  gstreamer1.0-plugins-good \
  gstreamer1.0-plugins-bad \
  gstreamer1.0-plugins-ugly

To get a basic understanding of the GStreamer framework, think of it as a pipeline: the video data starts at the source and moves to the sink, and along the way you can do many things with it. Each link in the pipeline is called an element.

 


To construct a pipeline we have a very simple command-line tool called 'gst-launch' ('gst-launch-1.0' for GStreamer 1.0). The simplest pipeline is a test video display, which consists of the following elements:

  1. videotestsrc: A simple element creating a test image.
  2. autovideosink: A display element which needs no configuring.


To create this pipeline run the following command:

gst-launch-1.0 videotestsrc ! autovideosink

Methods and Analysis:

There are many methods for streaming audio and video over the network; they differ from each other in the latency they introduce and in their CPU usage.

A.      The Simplest – raspivid and nc
  • On the Pi, the raspivid utility is used to encode H.264 video from the camera.
  • The video stream is piped to the nc utility, which pushes it out to the network address where the video player is.
  • On the player computer, nc receives the stream and pipes it into a media player (VLC in the example below) to play.

On the pi:

raspivid -t 0 -fps 24 -o - | nc -k -l 8554

and on the client:

nc <pi-ip-address> 8554 | vlc --file-caching=1024 file/h264:///dev/stdin

This is certainly the simplest and easiest method. However, it is only supported by a limited number of media players, and not by Android. The lag with this method was 2-3 seconds.

B.      Mjpeg-Streamer

Instead of streaming video, how about capturing still images and sending them to a web server one after another? This is called Motion JPEG and is commonly used by webcams in surveillance systems.

Although this worked quite nicely and was easy to set up, it still had a lag of 1-2 seconds.

C.      VLC

raspivid -o - -t 0 -hf -w 640 -h 360 -fps 5 | cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8090}' :demux=h264

Although this played in VLC on Windows, Linux and Android, it was not in a native format supported by the Android MediaPlayer.

The lag was also terrible, at about 5-6 seconds, and could not be reduced even by using small video resolutions. VLC does a certain amount of buffering; if you need real time, this is not the way to go.

D.      UV4L

UV4L comes with a server that can stream MJPEG. The uv4l-server module is a plug-in specific to UV4L which enables a per-camera HTTP streaming server that can be accessed by any browser.

It offers a web page from which it's possible to watch the video stream, and a control page that lets you fully control the camera settings while streaming with any Video4Linux application. Basic authentication for both normal and admin users is also supported. By default the homepage of the server can be accessed at the following address:
http://raspberrypi:8080 (where raspberrypi has to be replaced with the actual hostname or IP of the Raspberry Pi in your network). Since MJPEG is supported by web browsers, there is an easy way to integrate this page into an Android app using a WebView, as sketched below. The frame rate and the video resolution have to be set quite low to reduce the lag.
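
A minimal sketch of that WebView approach (assuming the default UV4L port 8080; the URL is the server homepage mentioned above):

import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;

public class StreamViewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        // Load the UV4L server page; replace "raspberrypi" with the
        // hostname or IP of the Pi on your network.
        webView.loadUrl("http://raspberrypi:8080");
        setContentView(webView);
    }
}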

E.     GStreamer

The audio and video latency is quite low using GStreamer; it comes out to be less than 1 second.

Implementation:


We used the following GStreamer pipelines to send audio and video to Android and to the Wowza Media Streaming Engine.

Raspberry Pi side:

To stream audio over the network:

gst-launch alsasrc device=hw:Device ! audioconvert ! audioresample ! 'audio/x-raw-int,rate=8000,width=16,channels=1' ! udpsink host=x.x.x.x port=5001

To stream video over the network to an Android device:

raspivid -t 999999 -h 720 -w 1080 -fps 25 -hf -b 2000000 -o - | gst-launch -v fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=192.168.0.193 port=8554

Android side:

Receive video pipeline(Using Gstreamer SDK):

data->pipeline = gst_parse_launch("udpsrc port=8554 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, sprop-parameter-sets=\\\"J2QAFKwrQLj/LwDxImo\\\\=\\\\,KO4fLA\\\\=\\\\=\\\", payload=96\" ! rtph264depay byte-stream=false ! ffdec_h264 ! autovideosink sync=false", &error);

To stream video over the network to the Wowza Media Streaming Engine:

raspivid -n -mm matrix -w 1280 -h 720 -fps 25 -hf -vf -g 100 -t 0 -b 500000 -o - | ffmpeg -y -f h264 -i - -c:v copy -map 0:0 -f flv -rtmp_buffer 100 -rtmp_live live rtmp://107.170.xxx.xxx:1935/MyApp/myStream

Finally, these pipelines are launched together. tee duplicates the H.264 stream so that one copy goes to ffmpeg for RTMP publishing to Wowza while the other is piped into the GStreamer RTP pipeline, and the audio pipeline runs alongside (the >( ) process substitution requires bash):

raspivid -n -mm matrix -w 1280 -h 720 -fps 25 -hf -vf -g 100 -t 0 -b 500000 -o - | tee >(ffmpeg -y -f h264 -i - -c:v copy -map 0:0 -f flv -rtmp_buffer 100 -rtmp_live live rtmp://107.170.xxx.xxx:1935/MyApp/myStream) | gst-launch -v fdsrc fd=0 ! h264parse ! rtph264pay ! udpsink host=192.168.0.193 port=8554 & gst-launch alsasrc device=hw:Device ! audioconvert ! audioresample ! 'audio/x-raw-int,rate=8000,width=16,channels=1' ! udpsink host=x.x.x.x port=5001

Audio:

Audio is received on a UDP port and the received data is fed to an AudioTrack object.

 

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
        SAMPLE_RATE, AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, BUF_SIZE, AudioTrack.MODE_STREAM);
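
A minimal receive loop under that setup might look like the following sketch, which continues from the track and BUF_SIZE above and assumes the raw 16-bit PCM from the alsasrc pipeline arrives on port 5001:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Sketch: pull raw PCM out of UDP packets and queue it into the AudioTrack.
// Run this loop on a background thread, not the UI thread.
DatagramSocket socket = new DatagramSocket(5001);  // port used by the audio pipeline
byte[] buf = new byte[BUF_SIZE];
track.play();
while (true) {
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);  // blocks until the next UDP packet arrives
    track.write(packet.getData(), 0, packet.getLength());  // queue PCM samples for playback
}
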
Getting the user location:

Get the location of the user's device (via the NETWORK or GPS provider); this gives latitude and longitude, as sketched below.

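A minimal sketch of the standard LocationManager pattern (assuming the ACCESS_FINE_LOCATION permission, which covers both providers, is declared in the manifest):

import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class LocationHelper {
    // Call from an Activity or Service context.
    public static void startUpdates(Context context) {
        LocationManager locationManager =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        LocationListener locationListener = new LocationListener() {
            @Override public void onLocationChanged(Location location) {
                double latitude  = location.getLatitude();
                double longitude = location.getLongitude();
                // hand the coordinates to the upload step in the next section
            }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
            @Override public void onProviderEnabled(String provider) {}
            @Override public void onProviderDisabled(String provider) {}
        };
        // minTime=0 ms, minDistance=0 m: request updates as often as possible.
        locationManager.requestLocationUpdates(
                LocationManager.NETWORK_PROVIDER, 0, 0, locationListener);
    }
}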

To get a location from the GPS you would just change this line:

locationManager.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 0, 0, locationListener);
To:
locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, locationListener);

Send to the Internet:

Send the coordinates off to a server via an HTTP GET request, as sketched below.
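
A minimal sketch, assuming a hypothetical endpoint and query parameter names (the real URL and API belong to the project's dedicated server):

import java.net.HttpURLConnection;
import java.net.URL;

public class LocationUploader {
    // The host, path and parameter names here are placeholders.
    // On Android, run network calls off the UI thread.
    public static void sendLocation(double latitude, double longitude) throws Exception {
        URL url = new URL("http://example.com/location?lat=" + latitude
                + "&lon=" + longitude);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();  // issues the request; expect 200 OK
        conn.disconnect();
    }
}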


Issues:

1. It has been found that on increasing the frame rate above 20 fps, the Android application tends to drop packets.

2. On increasing the resolution beyond 1080 by 720, the latency also increases.

3. The application runs on devices with Android API level 9 or higher.

 

