
Image Classification Video Streaming from TensorFlow with headless Raspberry PI


Artificial Intelligence with TensorFlow is a standard in the image recognition industry. Examples are available for using Raspberry PI with TensorFlow, but all of them work only if an HDMI cable is connected to a monitor. With a few code edits, image classification video streaming from a headless Raspberry PI is also possible

In this tutorial I’m going to show how to get image classification video streaming from a headless (Lite) Raspberry PI installation with TensorFlow Lite.

With TensorFlow spreading in Artificial Intelligence applications and becoming more and more used in this industry, developers from all over the world have adapted this open source framework to run on nearly every device. A relatively new branch was forked from the original one, adapting the framework to small devices using ARM processors. This is the case of IoT devices, smartphones and… Raspberry PIs. A lighter version of TensorFlow was born: TensorFlow Lite.

With Raspberry PI, new examples have been published on GitHub, the most significant being https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/raspberry_pi. But I wasn’t able to find tutorials explaining how to get it working on a headless installation. I think this is because they use one of the most common PI camera Python libraries, “picamera”, whose default usage includes a preview function that requires an HDMI cable connected to a monitor. This means a Desktop installation, which wastes Raspberry PI computing resources on a desktop environment even when it is not used.

In picamera basic examples, on the other hand, network streaming is achieved with the start_recording function.

So, I decided to modify the TensorFlow image classification script, introducing socket management and streaming video over the network. The result is described in this guide.

In this tutorial I’m going to use a Raspberry PI 3 model A+, but it applies to all Raspberry PI boards able to run TensorFlow Lite.

What We Need

RPI 3 model A+

As usual, I suggest adding all the needed hardware to your favourite e-commerce shopping cart now, so that at the end you will be able to evaluate the overall cost and decide whether to continue with the project or remove the items from the shopping cart. So, the hardware will be only:

Check hardware prices with the following links:

Amazon raspberry pi boards box
Amazon Raspberry PI 3 Model A+ box
Amazon Micro SD box
Amazon Raspberry PI Power Supply box
Amazon Raspberry PI Camera box

Step-by-Step Procedure

Prepare Operating System

Start with your OS. You can follow my install Raspberry PI OS Lite guide (for a headless, fast operating system). This tutorial is based, of course, on a headless installation but, if you prefer, you can also use this guide with Raspberry PI OS Desktop (in this case working from its internal terminal).

Make your OS up to date. From terminal:

sudo apt update -y && sudo apt upgrade -y

Connect your camera module to Raspberry PI and enable camera from raspi-config tool. From terminal:

sudo raspi-config

Terminal will show the following page:

raspi-config home pi3 model A+

Go to option 3 (Interface Options) and press ENTER:

raspi-config interface options pi3 model A+

Select the first option (Camera) and press ENTER. In the next screen move the selection from “No” to “Yes”:

raspi-config interface options camera pi3 model A+

Press ENTER and confirm also in the following screen.

raspi-config interface options camera enabled pi3 model A+

You will go back to the raspi-config home. Move to the Finish button and press ENTER.

This operation will require a reboot. Confirm in next screen and wait for reboot:

raspi-config reboot pi3 model A+

Once your Raspberry PI has rebooted, connect again to the terminal and install the required libraries. A note: numpy from the pip3 repository is not compatible with the current default python3 version from apt (3.7). For this reason we need to make sure that numpy from pip is uninstalled and get it from apt instead. This keeps our versions a bit older, but gives us a simpler way to install the requirements. If you prefer the latest versions, you need to build all the required packages from source. From terminal:

sudo apt install python3-pip git
pip3 uninstall numpy
pip3 install image
sudo apt install python3-numpy libopenjp2-7-dev libtiff5

Use pip to install TensorFlow Lite. Links to the .whl installation files are available from https://www.tensorflow.org/lite/guide/python?hl=en and depend on your hardware and OS. With Raspberry PI OS (32 bit), at the time of this article the installation is done with this terminal command:

pip3 install https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp37-cp37m-linux_armv7l.whl

Create a folder where files will be stored:

mkdir imclassif
cd imclassif

Get the raw requirements file and download script from the GitHub repository:

wget https://raw.githubusercontent.com/tensorflow/examples/master/lite/examples/image_classification/raspberry_pi/requirements.txt
wget https://raw.githubusercontent.com/tensorflow/examples/master/lite/examples/image_classification/raspberry_pi/download.sh

Get the modified classify script from my download area:

wget https://peppe8o.com/download/python/peppe8o_classify.py

From terminal, download pretrained models and labels:

bash download.sh ./

Run the classify script with the following command:

python3 peppe8o_classify.py --model mobilenet_v1_1.0_224_quant.tflite --label labels_mobilenet_quant_v1_224.txt

Raspberry PI will start listening on port 8000 for incoming connections. You will be able to stop this process with the common interrupt keys (CTRL+C).

That’s all on the remote RPI. Now switch to the local computer where you want to receive the image classification video stream.

Receiving Image Classification Streaming

I will use VLC media player, but you can use whatever media player is able to manage network streams in h264 format.

Open VLC interface:

vlc home

From “Media” menu use “Open Network stream” option:

vlc Media

Switch to the “Network” tab and use your Raspberry PI IP address (mine is 192.168.1.177) to set the stream connection string. Compose the URL as “tcp/h264://” + your RPI address + “:8000”. You should use something similar to the following (in my case, tcp/h264://192.168.1.177:8000):

vlc Media Network stream h264

You will get a result similar to following one:

vlc tensorflow image classification example coffee mug

What’s New Compared to Image Classification Original Code

Compared to the original code, I had to make some changes to stream the video flow.

I added the socket library to manage the socket connection:

import socket

I also added socket management code in the main block. This part opens a socket and binds it to accept every connection coming to the Raspberry PI on port 8000 (please refer to the picamera basic recipes page). setsockopt manages script interruption and re-execution after a short time: by design, the socket library keeps the connection occupied for several seconds after the script ends, giving an “address already in use” error if you restart the script before this timeout. With “setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)” you can reuse the address and port without waiting for the resources to be freed:

server_socket = socket.socket()
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)
connection = server_socket.accept()[0].makefile('wb')
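The SO_REUSEADDR behaviour can be checked on any machine, without the camera. The following is a hypothetical local test (not part of the classify script, and using an ephemeral port instead of 8000): a server that has just handled a connection is closed, and the same port is bound again immediately:

```python
import socket
import threading

def run_server(port, ready):
    # Same pattern as the streaming script: SO_REUSEADDR before bind().
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', port))
    server.listen(0)
    ready.set()
    conn, _ = server.accept()
    conn.close()     # server closes first: its side may enter TIME_WAIT
    server.close()

# Pick a free ephemeral port for the test.
probe = socket.socket()
probe.bind(('127.0.0.1', 0))
port = probe.getsockname()[1]
probe.close()

ready = threading.Event()
t = threading.Thread(target=run_server, args=(port, ready))
t.start()
ready.wait()
client = socket.create_connection(('127.0.0.1', port))
client.close()
t.join()

# Re-binding the same port right away works thanks to SO_REUSEADDR;
# without it, this bind() could fail with "Address already in use".
server2 = socket.socket()
server2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server2.bind(('127.0.0.1', port))
server2.close()
print('rebind OK')
```

This is why the streaming script can be stopped with CTRL+C and restarted at once, instead of waiting for the kernel to release port 8000.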

Instead of the start_preview function, I use start_recording, according to the picamera network streaming docs:

camera.start_recording(connection, format='h264')
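start_recording only needs a writable file-like object, and makefile('wb') wraps the accepted socket into exactly that. The idea can be verified without a camera using a local socket pair; this is a minimal sketch, where the byte string just stands in for encoded video:

```python
import socket

# Two connected ends of a local stream socket, standing in for the
# Raspberry PI server socket and the remote player connection.
sender, receiver = socket.socketpair()
stream = sender.makefile('wb')    # file-like wrapper, as in the classify script
stream.write(b'fake-h264-frame')  # picamera writes encoded frames the same way
stream.flush()
data = receiver.recv(32)
print(data)                       # b'fake-h264-frame'
```

Anything written to the file-like wrapper comes out of the other end of the connection, which is exactly what VLC reads on port 8000.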

I prefer managing script interruption with the KeyboardInterrupt exception instead of “finally”. This block stops the camera and closes the connection socket:

except KeyboardInterrupt:
    camera.stop_recording()
    connection.close()

Final Thoughts

This tutorial uses a pre-trained model from the TensorFlow examples. While this can be a good start, you will need to train your own model to get more accurate results.

Enjoy!
