====== Implementation ======
The program follows a sequence driven by the availability of an Internet connection. When the program runs, it records video for a preselected length of time and then checks whether an Internet connection is available. If the check succeeds, the video is assembled and transmitted via FTP to a backup server; once this happens, the files on the Raspberry Pi are deleted and the cycle starts again. It is important to note that the system splits the recording time into two equal halves: when there is no Internet connection, recording continues but the program discards the oldest half of the selected recording time. This cycle repeats until an Internet connection is detected, at which point the video is built and transmitted to the server.
The following block diagram illustrates this process.
{{ :ie0117_proyectos:final_2013:camberry_final:diagrama_progra_en.png?800 |}}
The system is composed of three source files: two written in Python and one in C. The main program is pycamaraV3.py, which uses the other two files to run the camera. pytransfer.py is responsible for transferring the finished video file from the Raspberry Pi to the selected server and provides other helper functions used by the main code. v4l2grab.c is responsible for operating the camera: it captures images from the camera using the video4linux2 (v4l2) API. These files are described in detail below.
Dependencies needed to run the code:
* ffmpeg
* libjpeg
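On Raspbian, these dependencies can usually be installed from the package repositories. The exact package names can vary between releases; libjpeg-dev is assumed here to be the development package that provides jpeglib.h:
<code bash>
sudo apt-get install ffmpeg libjpeg-dev
</code>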
===== pycamaraV3.py =====
This is the executable file: it starts the video recording, checks the Internet connection, and builds the video to send to the server. The code is shown below.
<code python>
#!/usr/bin/env python
import os                # module for using operating system functionality
import pytransfer as pt
import time

internet = False
timecount = 10           # video length in seconds
fps = 8                  # maximum frames per second that the Raspberry Pi supports
tframes = timecount*fps
mframes = tframes/2
quality = 70             # quality of the images that make up the video

while True:
    os.system("./v4l2grab -c "+str(tframes)+" -q "+str(quality))
    internet = pt.check_internet()
    if internet == False:
        while internet == False:
            # drop the oldest half of the segment by renaming the newest frames
            for j in xrange(mframes+1, tframes+1):
                os.rename("tmpframes/frame"+str(j)+".jpeg", "tmpframes/frame"+str(j-mframes)+".jpeg")
            # grab a new half-segment, numbering the frames after the kept ones
            os.system("./v4l2grab -c "+str(tframes-mframes)+" -q "+str(quality)+" -n "+str(mframes))
            internet = pt.check_internet()
    outname = pt.currdt()+".avi"   # names the video with the function currdt() in pytransfer.py
    os.system("ffmpeg -y -r "+str(fps)+" -i tmpframes/frame%d.jpeg video/"+outname)
    pt.transfer(outname)
    os.system("rm tmpframes/*")
    os.system("rm video/*")
    while internet == True:
        internet = pt.check_internet()
        if internet == False:
            break
        time.sleep(3)
</code>
This file defines the video parameters: the length in seconds (timecount), the number of frames per second (fps), and the quality, a value defined by v4l2 that is set to 70 here, since higher values showed no noticeable improvement in video quality while degrading performance. If necessary, these parameters must be modified directly in the code before running it.
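To make the halving scheme concrete, with the default values above each segment is tframes = 10 × 8 = 80 frames and mframes = 40. While there is no connection, frames 41–80 are renamed to 1–40 and 40 new frames are grabbed in their place, so the tmpframes directory always holds the most recent 10 seconds of video.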
The file v4l2grab.c is then invoked through the os.system function, which lets the Python executable run the compiled C program. Recording starts by capturing the video frames, which are stored in a buffer until either the recording time ends or an Internet connection is established. The Internet connection is checked with pt.check_internet(), defined in pytransfer.py and made available to the executable by the statement "import pytransfer as pt" at the beginning of the code; check_internet is the function in pytransfer.py responsible for verifying the Internet connection. This is expanded on later.
The file name ("outname") is generated with currdt() from pytransfer.py, which embeds the current date and time so that different files never conflict by sharing the same name.
When the conditions are met, the file is transmitted under the assigned name by pt.transfer(outname), which also belongs to pytransfer.py.
===== pytransfer.py =====
This Python code contains the functions that send the video file through an FTP connection, check whether there is an Internet connection, and name the video files according to the Raspberry Pi's date and time. The program is shown below:
<code python>
import urllib2           # library that allows opening URLs
import ftplib            # Python FTP protocol client
import datetime as d     # module that supplies classes for manipulating dates and times
import sys               # module that provides system-specific functions

def check_internet():    # checks whether there is an internet connection
    try:
        response = urllib2.urlopen('http://www.google.com', timeout=1)
        return True
    except urllib2.URLError as err:
        pass
    return False

def transfer(filename):  # transfers a file via FTP
    # FTP data
    ftp_server = '192.168.12.27'       # server IP
    ftp_user = 'mark30'
    ftp_pass = '12345'
    ftp_route = '/home/mark30/videos'  # destination directory
    dest_file = filename
    try:
        s = ftplib.FTP(ftp_server, ftp_user, ftp_pass)
        try:
            f = open("video/"+filename, 'rb')
        except:
            print "File not found " + "video/"+filename
            sys.exit(0)
        try:
            s.cwd(ftp_route)
            s.storbinary('STOR ' + dest_file, f)
            f.close()
            s.quit()
        except:
            print "Transfer error"
            sys.exit(0)
        else:
            print "Successful transfer"
    except:
        print "Error in connection to server " + ftp_server
        sys.exit(0)

def currdt():            # video output file naming
    now = d.datetime.now()
    year = str(now.year)
    month = str(now.month)
    day = str(now.day)
    hour = str(now.hour)
    minute = str(now.minute)
    st = year+"-"+month+"-"+day+"_"+hour+":"+minute
    return st
</code>
The function "check_internet()" is connected to the page http://www.google.com and expect a satisfactory answer, if after a second it does not, an error occurs and it is interpreted as there is not internet connection, otherwise states that there is a connection to the network. This function is called in pycamaraV3.py.
Files are transferred via FTP with the function "transfer", which receives the argument "filename" from pycamaraV3.py. Here the server's user name, password, destination path for the video, and IP address are assigned. It is important to note that, as configured, the file transfer only works on a local network; to transmit a file to another network the FTP server must be configured accordingly. The function catches the different errors that can occur during the transfer and prints the corresponding message.
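To verify the server settings independently of the camera, the transfer functions can be exercised from an interactive Python session. This is a minimal sketch, assuming a file video/test.avi already exists; test.avi is a placeholder name:
<code python>
import pytransfer as pt

# check connectivity first, then push a sample file to the FTP server
if pt.check_internet():
    pt.transfer("test.avi")   # pt.transfer() looks for the file under video/
</code>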
Finally, the function "currdt()" builds the video file name from the date and time that the Raspberry Pi has at that moment. It is called from pycamaraV3.py when the avi video file is created, and its result forms part of the argument passed to the function "transfer(filename)" above.
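Note that currdt() does not zero-pad the fields, so a video created on July 4 at 9:05 is named 2013-7-4_9:5.avi. If zero-padded names are preferred, an equivalent version can be written with strftime; this is an alternative sketch, not the author's code:
<code python>
import datetime as d

def currdt():
    # same date-and-time name, but zero-padded: e.g. 2013-07-04_09:05
    return d.datetime.now().strftime("%Y-%m-%d_%H:%M")
</code>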
===== v4l2grab.c =====
This code is responsible for communicating with the camera through the v4l2 API and for converting the raw data stored in the buffer into jpeg images.
<code c>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <getopt.h>           /* getopt_long() */
#include <fcntl.h>            /* low-level i/o */
#include <unistd.h>
#include <errno.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>  /* video4linux2 API header */
#include <jpeglib.h>          /* library for JPEG image compression */

#define CLEAR(x) memset(&(x), 0, sizeof(x))

#ifndef V4L2_PIX_FMT_H264
#define V4L2_PIX_FMT_H264 v4l2_fourcc('H', '2', '6', '4') /* H264 with start codes */
#endif
enum io_method {
    IO_METHOD_READ,
    IO_METHOD_MMAP,
    IO_METHOD_USERPTR,
};

struct buffer {
    void *start;
    size_t length;
};

static char *dev_name;
static int fd = -1;
struct buffer *buffers;
static unsigned int n_buffers;
static int frame_count = 50;
static int frame_number = 0;
static unsigned int width = 320;
static unsigned int height = 240;
static unsigned char jpegQuality = 70;

static void errno_exit(const char *s)
{
    fprintf(stderr, "%s error %d, %s\n", s, errno, strerror(errno));
    exit(EXIT_FAILURE);
}

static int xioctl(int fh, int request, void *arg)
{
    int r;
    do {
        r = ioctl(fh, request, arg);
    } while (-1 == r && EINTR == errno);
    return r;
}
/* Converts the raw camera data from the YUV color space to the RGB color space */
static void YUV422toRGB888(int width, int height, unsigned char *src, unsigned char *dst)
{
    int line, column;
    unsigned char *py, *pu, *pv;
    unsigned char *tmp = dst;

    /* In this format each four bytes is two pixels. Each four bytes is two Y's, a Cb and a Cr.
       Each Y goes to one of the pixels, and the Cb and Cr belong to both pixels. */
    py = src;
    pu = src + 1;
    pv = src + 3;

#define CLIP(x) ( (x)>=0xFF ? 0xFF : ( (x) <= 0x00 ? 0x00 : (x) ) )

    for (line = 0; line < height; ++line) {
        for (column = 0; column < width; ++column) {
            *tmp++ = CLIP((double)*py + 1.402*((double)*pv-128.0));                             // R
            *tmp++ = CLIP((double)*py - 0.344*((double)*pu-128.0) - 0.714*((double)*pv-128.0)); // G
            *tmp++ = CLIP((double)*py + 1.772*((double)*pu-128.0));                             // B
            // increase py every pixel
            py += 2;
            // increase pu, pv every second pixel
            if ((column & 1) == 1) {
                pu += 4;
                pv += 4;
            }
        }
    }
}
/* Compresses the RGB data into a jpeg image */
static void jpegWrite(unsigned char *img, char *jfn)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    JSAMPROW row_pointer[1];
    FILE *outfile = fopen(jfn, "wb");

    // try to open the file for saving
    if (!outfile) {
        errno_exit("jpeg");
    }

    // create jpeg data
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, outfile);

    // set image parameters
    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;

    // set jpeg compression parameters to default
    jpeg_set_defaults(&cinfo);
    // and then adjust the quality setting
    jpeg_set_quality(&cinfo, jpegQuality, TRUE);

    // start compression
    jpeg_start_compress(&cinfo, TRUE);

    // feed data
    while (cinfo.next_scanline < cinfo.image_height) {
        row_pointer[0] = &img[cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    // finish compression
    jpeg_finish_compress(&cinfo);
    // destroy jpeg data
    jpeg_destroy_compress(&cinfo);
    // close the output file
    fclose(outfile);
}

static void process_image(const void *p)
{
    frame_number++;
    char filename[30];
    sprintf(filename, "tmpframes/frame%d.jpeg", frame_number);
    unsigned char *src = (unsigned char *)p;
    unsigned char *dst = malloc(width * height * 3 * sizeof(char));

    // convert from YUV422 to RGB888
    YUV422toRGB888(width, height, src, dst);
    char *jpegFilename = filename;
    // write the jpeg
    jpegWrite(dst, jpegFilename);
    // free the temporary image
    free(dst);
}
static int read_frame(void)
{
    struct v4l2_buffer buf;

    CLEAR(buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (-1 == xioctl(fd, VIDIOC_DQBUF, &buf)) {
        switch (errno) {
        case EAGAIN:
            return 0;
        case EIO:
            /* Could ignore EIO, see spec. */
            /* fall through */
        default:
            errno_exit("VIDIOC_DQBUF");
        }
    }

    assert(buf.index < n_buffers);
    process_image(buffers[buf.index].start);

    if (-1 == xioctl(fd, VIDIOC_QBUF, &buf))
        errno_exit("VIDIOC_QBUF");

    return 1;
}
static void mainloop(void)
{
    unsigned int count;
    count = frame_count;

    while (count-- > 0) {
        for (;;) {
            fd_set fds;
            struct timeval tv;
            int r;

            FD_ZERO(&fds);
            FD_SET(fd, &fds);

            /* Timeout. */
            tv.tv_sec = 1;
            tv.tv_usec = 0;

            r = select(fd + 1, &fds, NULL, NULL, &tv);

            if (-1 == r) {
                if (EINTR == errno)
                    continue;
                errno_exit("select");
            }

            if (0 == r) {
                fprintf(stderr, "select timeout\n");
                exit(EXIT_FAILURE);
            }

            if (read_frame())
                break;
            /* EAGAIN - continue select loop. */
        }
    }
}
static void stop_capturing(void)
{
}

static void start_capturing(void)
{
    unsigned int i;
    enum v4l2_buf_type type;

    for (i = 0; i < n_buffers; ++i) {
        struct v4l2_buffer buf;

        CLEAR(buf);
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;

        if (-1 == xioctl(fd, VIDIOC_QBUF, &buf))
            errno_exit("VIDIOC_QBUF");
    }

    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (-1 == xioctl(fd, VIDIOC_STREAMON, &type))
        errno_exit("VIDIOC_STREAMON");
}

static void uninit_device(void)
{
    unsigned int i;

    for (i = 0; i < n_buffers; ++i)
        if (-1 == munmap(buffers[i].start, buffers[i].length))
            errno_exit("munmap");

    free(buffers);
}

static void init_mmap(void)
{
    struct v4l2_requestbuffers req;

    CLEAR(req);
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;

    if (-1 == xioctl(fd, VIDIOC_REQBUFS, &req)) {
        if (EINVAL == errno) {
            fprintf(stderr, "%s does not support memory mapping\n", dev_name);
            exit(EXIT_FAILURE);
        } else {
            errno_exit("VIDIOC_REQBUFS");
        }
    }

    if (req.count < 2) {
        fprintf(stderr, "Insufficient buffer memory on %s\n", dev_name);
        exit(EXIT_FAILURE);
    }

    buffers = calloc(req.count, sizeof(*buffers));

    if (!buffers) {
        fprintf(stderr, "Out of memory\n");
        exit(EXIT_FAILURE);
    }

    for (n_buffers = 0; n_buffers < req.count; ++n_buffers) {
        struct v4l2_buffer buf;

        CLEAR(buf);
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = n_buffers;

        if (-1 == xioctl(fd, VIDIOC_QUERYBUF, &buf))
            errno_exit("VIDIOC_QUERYBUF");

        buffers[n_buffers].length = buf.length;
        buffers[n_buffers].start =
            mmap(NULL /* start anywhere */,
                 buf.length,
                 PROT_READ | PROT_WRITE /* required */,
                 MAP_SHARED /* recommended */,
                 fd, buf.m.offset);

        if (MAP_FAILED == buffers[n_buffers].start)
            errno_exit("mmap");
    }
}
static void init_device(void)
{
    struct v4l2_capability cap;
    struct v4l2_cropcap cropcap;
    struct v4l2_crop crop;
    struct v4l2_format fmt;
    unsigned int min;

    if (-1 == xioctl(fd, VIDIOC_QUERYCAP, &cap)) {
        if (EINVAL == errno) {
            fprintf(stderr, "%s is no V4L2 device\n", dev_name);
            exit(EXIT_FAILURE);
        } else {
            errno_exit("VIDIOC_QUERYCAP");
        }
    }

    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
        fprintf(stderr, "%s is no video capture device\n", dev_name);
        exit(EXIT_FAILURE);
    }

    /* Select video input, video standard and tune here. */
    CLEAR(cropcap);
    cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (0 == xioctl(fd, VIDIOC_CROPCAP, &cropcap)) {
        crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        crop.c = cropcap.defrect; /* reset to default */

        if (-1 == xioctl(fd, VIDIOC_S_CROP, &crop)) {
            switch (errno) {
            case EINVAL:
                /* Cropping not supported. */
                break;
            default:
                /* Errors ignored. */
                break;
            }
        }
    } else {
        /* Errors ignored. */
    }

    CLEAR(fmt);
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fprintf(stderr, "Grabbing:\n");
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;

    if (-1 == xioctl(fd, VIDIOC_S_FMT, &fmt))
        errno_exit("VIDIOC_S_FMT");

    /* Buggy driver paranoia. */
    min = fmt.fmt.pix.width * 2;
    if (fmt.fmt.pix.bytesperline < min)
        fmt.fmt.pix.bytesperline = min;
    min = fmt.fmt.pix.bytesperline * fmt.fmt.pix.height;
    if (fmt.fmt.pix.sizeimage < min)
        fmt.fmt.pix.sizeimage = min;

    init_mmap();
}
static void close_device(void)
{
    if (-1 == close(fd))
        errno_exit("close");
    fd = -1;
}

static void open_device(void)
{
    struct stat st;

    if (-1 == stat(dev_name, &st)) {
        fprintf(stderr, "Cannot identify '%s': %d, %s\n", dev_name, errno, strerror(errno));
        exit(EXIT_FAILURE);
    }

    if (!S_ISCHR(st.st_mode)) {
        fprintf(stderr, "%s is no device\n", dev_name);
        exit(EXIT_FAILURE);
    }

    fd = open(dev_name, O_RDWR /* required */ | O_NONBLOCK, 0);

    if (-1 == fd) {
        fprintf(stderr, "Cannot open '%s': %d, %s\n", dev_name, errno, strerror(errno));
        exit(EXIT_FAILURE);
    }
}

static void usage(FILE *fp, int argc, char **argv)
{
    fprintf(fp,
            "Usage: %s [options]\n\n"
            "Version 1.0\n"
            "Options:\n"
            "-d | --device name  Video device name [%s]\n"
            "-h | --help         Print this message\n"
            "-n | --number       Set the initial frame number\n"
            "-q | --quality      Set jpeg quality (0-100) [70]\n"
            "-c | --count        Number of frames to grab [%i]\n"
            "",
            argv[0], dev_name, frame_count);
}

/* "h" takes no argument; the other options require one */
static const char short_options[] = "d:hn:q:c:";

static const struct option
long_options[] = {
    { "device",  required_argument, NULL, 'd' },
    { "help",    no_argument,       NULL, 'h' },
    { "number",  required_argument, NULL, 'n' },
    { "quality", required_argument, NULL, 'q' },
    { "count",   required_argument, NULL, 'c' },
    { 0, 0, 0, 0 }
};
int main(int argc, char **argv)
{
    dev_name = "/dev/video0";

    for (;;) {
        int idx;
        int c;

        c = getopt_long(argc, argv, short_options, long_options, &idx);

        if (-1 == c)
            break;

        switch (c) {
        case 0: /* getopt_long() flag */
            break;
        case 'd':
            dev_name = optarg;
            break;
        case 'h':
            usage(stdout, argc, argv);
            exit(EXIT_SUCCESS);
        case 'n':
            // set the initial frame number
            frame_number = atoi(optarg);
            break;
        case 'q':
            // set jpeg quality
            jpegQuality = atoi(optarg);
            break;
        case 'c':
            errno = 0;
            frame_count = strtol(optarg, NULL, 0);
            if (errno)
                errno_exit(optarg);
            break;
        default:
            usage(stderr, argc, argv);
            exit(EXIT_FAILURE);
        }
    }

    open_device();
    init_device();
    start_capturing();
    mainloop();
    stop_capturing();
    uninit_device();
    close_device();
    fprintf(stderr, "\n");
    return 0;
}
</code>
The v4l2 API and all of its functionality are imported with the linux/videodev2.h header, while jpeglib.h imports the tools needed to compress raw image data into a jpeg image file. Importantly, this project does not record the audio of the video.
The most important parts of the C code and their functionality are discussed below.
If no options are set when the code is executed, the first function that main() calls is //open_device()//, which identifies the device and, if no error occurs, opens it with read and write access. The next function, //init_device()//, verifies that the device can be managed through the v4l2 API and that it has video capture capability, then sets the image properties such as width, height and pixel format; afterwards the function //init_mmap()// is called.
Memory mapping is the streaming method used in this program to obtain the data for each frame from the camera. It is an I/O method in which only pointers to buffers are exchanged between the application and the driver, and the data itself is never copied, so moving frames between the device and the computer is faster than with the read() and write() I/O methods. The function //init_mmap()// therefore requests from the kernel the memory in which the buffers are allocated. The function //start_capturing()// prepares the buffer structures needed to map the camera data into the computer: v4l2 exchanges buffers between user space and the device with two ioctl calls, VIDIOC_QBUF, which enqueues an empty buffer for the camera driver to fill, and VIDIOC_DQBUF, which dequeues a buffer the driver has already filled, leaving the frame data ready for the application to use. The following image illustrates this interaction.
{{ :ie0117_proyectos:final_2013:camberry_final:v4l2_buffers.png?600 |}}
After this preparation stage, the program enters the function //mainloop()//, which is basically a loop that calls //read_frame()// as many times as there are frames to grab. //read_frame()// reads the data that the device delivered through VIDIOC_DQBUF and calls //process_image()// on it; the buffer is then re-queued with VIDIOC_QBUF so the driver can grab another frame.
The function //process_image()// first converts the image from the YUV color space to the RGB color space with the function //YUV422toRGB888()//. The YUV color space (luminance (Y) and chrominance (UV)) encodes images through the hue and saturation properties of color, which makes it well matched to human vision. The RGB color space (red, green and blue) is based on composing light from three primary colors: adding red, green and blue in various proportions reproduces a broad array of colors.
The conversion from the YUV components to the RGB components is done with the following equations:
<code>
R = Y + 1.402*(V-128)
G = Y - 0.344*(U-128) - 0.714*(V-128)
B = Y + 1.772*(U-128)
</code>
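As a quick sanity check of these equations (the pixel values here are illustrative, not taken from the original text), a saturated red pixel in YUV is approximately Y = 81, U = 90, V = 240. Substituting:
<code>
R = 81 + 1.402*(240-128) ≈ 238
G = 81 - 0.344*(90-128) - 0.714*(240-128) ≈ 14
B = 81 + 1.772*(90-128) ≈ 14
</code>
which is indeed almost pure red in RGB. The CLIP macro in the code then bounds each channel to the 0-255 range.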
Once the image is in the RGB color space, the data is compressed into a jpeg image with the //jpegWrite()// function. This function uses libjpeg, a free software C library for jpeg image compression, and sets parameters such as the width, height and quality of the final jpeg frame. After all the frames have been compressed into their corresponding jpeg files, the program exits the main loop, frees the memory occupied by the v4l2 buffer structures with the //uninit_device()// function, closes the camera with the //close_device()// function, and finishes with the //return 0// statement in //main()//.
This code can be compiled with the following command:
<code bash>
gcc v4l2grab.c -o v4l2grab -ljpeg
</code>
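Once compiled, the grabber can be tested by hand before involving the Python scripts. The options match those listed in usage(); for example, to grab 80 frames at quality 70 starting the numbering at 0:
<code bash>
mkdir -p tmpframes
./v4l2grab -d /dev/video0 -c 80 -q 70 -n 0
</code>
The tmpframes directory must exist beforehand, since the program writes each frame to tmpframes/frameN.jpeg.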
===== Run the program =====
As mentioned above, the file to run is pycamaraV3.py. To start it manually, type in the console:
<code bash>
./pycamaraV3.py
</code>
First make sure you have sufficient permissions for this action (in particular, that the file is executable).
It is also possible to run the program automatically when Raspbian starts on the Raspberry Pi. This is achieved by modifying the rc.local file with superuser permissions (sudo). Edit /etc/rc.local by typing in the console:
<code bash>
sudo nano /etc/rc.local
</code>
At the bottom, just above the exit 0 line, add the following:
<code bash>
sudo python /home/pi/pycamaraV3.py
</code>
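One caveat (our observation, not part of the original guide): pycamaraV3.py refers to ./v4l2grab, tmpframes/ and video/ with relative paths, and its main loop never exits, so it is safer to change into the program's directory first and launch it in the background, otherwise rc.local blocks. A sketch of such a line, assuming the files live in /home/pi:
<code bash>
cd /home/pi && python pycamaraV3.py &
</code>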
Then save the file. From now on, every time the Raspberry Pi is turned on it will automatically start the video recording, with each finished segment backed up on the server.
[[teaching:ie0117:proyectos:2012:i:final_2013:camberry_final|Back to index]]
[[teaching:ie0117:proyectos:2012:i:final_2013:camberry_final:object|Previous section: Objectives]]
[[teaching:ie0117:proyectos:2012:i:final_2013:camberry_final:res|Next section: Results]]