
2022-08-02

Design and implementation of embedded remote video capture system

The development of multimedia communication technology provides rich means for information acquisition and transmission, and video capture is an indispensable part of such systems. The system described here is based on the S3C2410 ARM9 chip and the embedded Linux operating system. It uses a USB camera to capture video, compresses and encodes it with the MPEG-4 algorithm, and connects directly to the network, so that users can view the remote video images with a standard web browser or a streaming media player.

1. Hardware system

The hardware platform of the system is the UP-NETARM2410 development board from Beijing Bochuang. The board is built around the ARM9-architecture embedded chip S3C2410, which runs stably at a 202 MHz core frequency, with 64 MB SDRAM and 64 MB flash on board. The motherboard resources include: a USB host port, a USB device port, a 10M/100M Ethernet port, a touch screen, a color LCD, a keyboard, 8 user-defined LED digital tubes, A/D and RTC circuits, 2 serial ports, 1 JTAG interface, an audio module supporting MPEG-4 and MP3 encoding and decoding, and 3 168-pin expansion sockets on a 32-bit data bus, leaving sufficient expansion space.

The standard modules include an IC card + PS2 module, an IDE hard disk + CF card module, and a PCMCIA + SD/MMC module. Optional modules include GPS, GPRS, FPGA, CAN + AD + DA, infrared, Bluetooth and camera modules.

2. Software system

2.1 Kernel configuration and USB camera driver

Assuming that the embedded Linux development environment has already been set up, the next step is to install and drive the USB camera.

First check whether USB module support has been added to the Linux kernel, and add Video4Linux support:

Multimedia devices → Video For Linux

Video For Linux → [*] V4L information in proc filesystem

Various camera drivers are also listed under USB support in the main menu; select the camera chip type to be used:

< > USB IBM (Xirlink) C-it Camera support
<*> USB OV511 Camera support
< > USB Philips Cameras
< > USB STV680 (Pencam) Camera support
< > USB 3com HomeConnect (akavicam) support

When purchasing a USB camera, priority should be given to camera chips publicly supported by the Linux kernel; otherwise a corresponding USB camera driver has to be written, compiled and installed. Here the V3000 product of Eye company is selected, which uses the OV511 chip.

After confirming that the USB camera is driven normally, the next step is to use the API set provided by Video4Linux to write the video capture program.

2.2 Video capture module based on V4L

Under Linux, every peripheral is regarded as a special file, called a device file. System calls are the interface between the kernel and applications, while device drivers are the interface between the kernel and peripherals. A driver completes the initialization and release of a device, the various operations on its device file, interrupt handling and other functions, shielding applications from the details of the peripheral hardware so that they can operate on external devices like ordinary files.

Video4Linux, the video subsystem in Linux, provides a unified set of APIs for video applications, which can then operate various video capture devices through standard system calls. Video4Linux registers video device files with the virtual file system, and an application accesses a video device by operating on its device file.

Video4Linux-related devices and applications under Linux are shown in Table 1.

Here, the video capture program is designed mainly around the device file /dev/video.

The Linux video capture process is shown in Figure 2.

The main functions used are:

camera_open(): opens the video device file; a device file of type video_device must be declared before use

camera_get_capability(): obtains information about the device file by calling ioctl() and stores it in a video_capability structure

camera_get_picture(): obtains information about the image by calling ioctl() and stores it in a video_picture structure

camera_close(): closes the device file

camera_grab_image(): captures images; mmap() is used to map the device file /dev/video0 directly into memory, which speeds up file I/O and lets multiple threads share the data

There are other related functions, such as device initialization and parameter setting, which are not described in detail here.
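
As a rough illustration of this flow, the following is a minimal sketch of such wrapper functions over the V4L (V4L1) API of the 2.4-era kernels this system targets; the bodies and error handling are simplified illustrations, not the system's actual source.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev.h>          /* V4L1 header; removed in modern kernels */

/* open the video device file, e.g. "/dev/video0" */
static int camera_open(const char *dev)
{
    return open(dev, O_RDWR);
}

/* query driver name, channel count, maximum frame size, ... */
static int camera_get_capability(int fd, struct video_capability *cap)
{
    return ioctl(fd, VIDIOCGCAP, cap);
}

/* query image properties: brightness, contrast, palette, depth, ... */
static int camera_get_picture(int fd, struct video_picture *pict)
{
    return ioctl(fd, VIDIOCGPICT, pict);
}

/* map the driver's capture buffers into user space; a full implementation
 * would then cycle VIDIOCMCAPTURE/VIDIOCSYNC per frame to grab images */
static unsigned char *camera_grab_image(int fd, struct video_mbuf *mbuf)
{
    if (ioctl(fd, VIDIOCGMBUF, mbuf) < 0)
        return 0;
    return (unsigned char *)mmap(0, mbuf->size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
}

static void camera_close(int fd)
{
    close(fd);
}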

2.3 Video compression and coding module

After the image data is obtained, it could be output directly to the framebuffer for display. Since the system must transmit the captured video over the network, however, the raw image data is compressed and encoded before transmission. The MPEG-4 video codec scheme is selected here. Compared with other standards, MPEG-4 offers a higher compression ratio, saving storage space while maintaining better image quality, and it is especially suitable for video transmission under low-bandwidth conditions.

Object-based video coding in MPEG-4 can be divided into three steps:

(1) Segment video objects from the original video stream.

(2) Encode the video objects, assigning different codewords to the motion, shape and texture information of each object. For an input VOP sequence of arbitrary shape, block-based hybrid coding is used, processing I-VOPs first, then P-VOPs, then B-VOPs. After the shape information of a VOP is coded, samples of the arbitrarily shaped VOP are obtained. Each VOP is divided into disjoint macroblocks, each containing four 8×8-pixel blocks for motion compensation and texture coding. Encoded VOP frames are kept in frame memory, and motion vectors are calculated between the current VOP frame and the encoded frames; for the blocks and macroblocks to be encoded, their motion-compensated prediction errors are computed. I-VOPs and the errors remaining after motion-compensated prediction are transformed with an 8×8 block DCT, the DCT coefficients are quantized, and run-length coding and entropy coding follow (a minimal sketch of this DCT-and-quantize step appears after this list).

(3) Compose the code streams: the shape and motion-texture information of each video object is multiplexed into a VOL bitstream, and the streams of all video objects are combined into a unified output stream. After the video stream is compressed and encoded, the next step is to implement the network transmission part.
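
To make the texture-coding part of step (2) concrete, here is a minimal sketch of the 8×8 DCT followed by quantization. It uses a naive O(N^4) DCT for clarity and a simplified uniform quantizer with step 2*QP; a real MPEG-4 encoder uses fast DCT factorizations, separate intra-DC handling and the standard's quantizer matrices.

#include <cmath>
#include <cstdint>

static const int N = 8;

/* 2-D DCT-II of one 8x8 block, with the usual level shift by 128 */
static void dct8x8(const uint8_t in[N][N], double out[N][N])
{
    for (int u = 0; u < N; ++u) {
        for (int v = 0; v < N; ++v) {
            double sum = 0.0;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    sum += (in[x][y] - 128)
                         * std::cos((2 * x + 1) * u * M_PI / 16.0)
                         * std::cos((2 * y + 1) * v * M_PI / 16.0);
            double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            out[u][v] = 0.25 * cu * cv * sum;
        }
    }
}

/* simplified uniform quantizer with step 2*QP; the quantized coefficients
 * would next be zigzag-scanned, run-length coded and entropy coded */
static void quantize8x8(const double coeff[N][N], int16_t q[N][N], int QP)
{
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v)
            q[u][v] = (int16_t)(coeff[u][v] / (2 * QP));
}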

2.4 JRTPLIB network transmission module

Streaming media refers to continuous time-based media transmitted over a network using streaming technology. RTP is currently a good solution for the real-time transmission of streaming media. JRTPLIB is an object-oriented RTP library that fully follows the design of RFC 1889. The following describes how to use the RTP protocol for real-time streaming media programming on the Linux platform.

(1) Initialization

Before using JRTPLIB for real-time streaming media transmission, first create an instance of the RTPSession class to represent the RTP session, then call its Create() method to initialize it. The Create() method of the RTPSession class takes only one parameter, indicating the port number used by this RTP session.
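
A minimal sketch, assuming the JRTPLIB 2.x-style API matching this description (later 3.x versions pass an RTPSessionParams object instead of a bare port number):

#include "rtpsession.h"

RTPSession session;
if (session.Create(5000) < 0)        /* 5000 is an example local port */
{
    /* report the error and abort the session setup */
}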

(2) Data transmission

After the RTP session is successfully established, the real-time transmission of streaming media data can begin. First set the destination address for the data; the RTP protocol allows multiple destination addresses in one session, managed by calling the AddDestination(), DeleteDestination() and ClearDestinations() methods of the RTPSession class. Once all destination addresses are specified, call the SendPacket() method of RTPSession to send streaming media data to every destination.
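
Continuing the sketch above; the destination address, ports, payload type and timestamp increment are example values, and fill_frame() is a hypothetical stand-in for the encoder:

#include <arpa/inet.h>

unsigned long dest = ntohl(inet_addr("192.168.0.10"));  /* host byte order */
session.AddDestination(dest, 6000);                     /* example RTP port */

unsigned char frame[1024];            /* holds encoded MPEG-4 data */
int framelen = fill_frame(frame);     /* hypothetical encoder call */

/* payload type 96 (dynamic), no marker bit, timestamp increment 3600 */
session.SendPacket(frame, framelen, 96, false, 3600);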

(3) Data receiving

For the receiving end of streaming media data, first call the PollData() method to take in the RTP or RTCP datagrams that have been sent. Since multiple participants (sources) are allowed in the same RTP session, you can traverse all sources by calling the GotoFirstSource() and GotoNextSource() methods, or traverse only the sources that have data by calling GotoFirstSourceWithData() and GotoNextSourceWithData(). After a valid data source is found in the RTP session, call the GetNextPacket() method of the RTPSession class to extract RTP datagrams from it; once a received RTP datagram has been processed, it should be released promptly.
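
A sketch of the receive path with the same 2.x-style API; the payload accessors and the packet-release call differ between JRTPLIB versions (2.x releases packets with delete, 3.x provides DeletePacket()), so the decoding step is left as a comment:

session.PollData();                            /* fetch pending RTP/RTCP */

if (session.GotoFirstSourceWithData())
{
    do
    {
        RTPPacket *pack;
        while ((pack = session.GetNextPacket()) != NULL)
        {
            /* hand the packet's payload to the MPEG-4 decoder here */
            delete pack;                       /* release it promptly */
        }
    } while (session.GotoNextSourceWithData());
}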

JRTPLIB defines three receiving modes for RTP datagrams, which can be set by calling the SetReceiveMode() method of the RTPSession class:

RECEIVEMODE_ALL: the default receiving mode; all incoming RTP datagrams are accepted

RECEIVEMODE_IGNORESOME: all incoming RTP datagrams are accepted except those from specific senders; the rejected-sender list is set by calling AddToIgnoreList(), DeleteFromIgnoreList() and ClearIgnoreList()

RECEIVEMODE_ACCEPTSOME: all incoming RTP datagrams are rejected except those from specific senders; the accepted-sender list is set by calling AddToAcceptList(), DeleteFromAcceptList() and ClearAcceptList()
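
For example, to drop traffic from particular senders while accepting everyone else (the mode names follow the 2.x-style API above; the exact signature of AddToIgnoreList() varies between JRTPLIB versions, so check the headers you build against):

session.SetReceiveMode(RECEIVEMODE_IGNORESOME);
/* then register each unwanted sender with AddToIgnoreList() */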

(4) Control information

JRTPLIB is a highly encapsulated RTP library. As long as the PollData() or SendPacket() methods are called successfully, JRTPLIB automatically processes incoming RTCP datagrams and sends RTCP datagrams when necessary, ensuring the correctness of the whole RTP session.

In this system, the methods of the RTPSession class provided by JRTPLIB are used to implement the underlying RTP/RTCP operations, encapsulated in a CRtpTransmitter class. This class inherits from a MediaSink class: it receives media frame data and uses the RTPSession operations to send the data to the network.
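
The article does not show this wrapper's source; a hypothetical shape for it, with the class names taken from the description above and return-value handling simplified, might look like this:

class MediaSink
{
public:
    virtual ~MediaSink() {}
    /* consume one encoded media frame */
    virtual void ProcessFrame(const unsigned char *data, int len) = 0;
};

class CRtpTransmitter : public MediaSink
{
public:
    bool Open(int localPort, unsigned long destIp, int destPort)
    {
        if (session.Create(localPort) < 0)
            return false;
        return session.AddDestination(destIp, destPort) >= 0;
    }
    void ProcessFrame(const unsigned char *data, int len)
    {
        /* forward the frame to all registered destinations */
        session.SendPacket((void *)data, len);
    }
private:
    RTPSession session;
};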

3. Conclusion

This system is based on the S3C2410 platform and the Linux operating system. It uses Video4Linux to implement the capture program, the MPEG-4 compression coding algorithm, and real-time streaming media transmission technology to achieve network transmission. The whole system is stable, reliable, simple to install and low in cost, and can be extended to industrial control, video conferencing, video surveillance, remote monitoring and many other fields.
