
FPGA promotes the development of crowd automatic monitoring technology

Date: Jun 30, 2020




Crowd monitoring and surveillance has become an important field. Governments and security agencies are seeking more advanced ways to intelligently monitor people in public places, so that any abnormal activity can be detected early enough to act on it. However, several obstacles must be overcome before this goal can be achieved effectively. For example, monitoring all possible crowd activity across an entire city, 24 hours a day, cannot be done by manual observation alone, especially when thousands of CCTV cameras are installed.

The solution to this problem lies in developing a new class of smart cameras or vision systems that use advanced video analytics to monitor crowd activity automatically, so that any abnormal event can be reported immediately to a central control station.

Designing such a smart camera/vision system requires not only standard image sensors and optics but also a high-performance video processor to run the analytics. A powerful onboard video processor is needed because advanced video-analysis techniques have high processing requirements: most of them rely on computationally intensive video-processing algorithms.

FPGAs are well suited to such high-performance applications. With the UltraFast™ design methodology implemented in the high-level synthesis (HLS) feature of the Xilinx Vivado® Design Suite, you can now readily create high-performance FPGA designs. In addition, embedded processors such as the Xilinx MicroBlaze™ integrate seamlessly with the FPGA's reconfigurable logic, letting users easily port applications with complex control flows onto FPGAs.

Given this, we used the software-based EDA tools in Vivado HLS, the Xilinx Embedded Development Kit (EDK), and the ISE® Design Suite to design a prototype crowd-motion classification and monitoring system. The design method is based on what we call a software-control, hardware-acceleration architecture. Our design uses a low-cost Xilinx Spartan®-6 LX45 FPGA. We completed the overall system design in a relatively short time, and it showed promising results in real-time performance, low cost, and design flexibility.

System design

The overall system design was completed in two stages. The first stage was to develop a crowd-motion classification algorithm; once the algorithm was verified, the next step was to implement it on the FPGA. The second stage of development focused on the architectural design of FPGA-based real-time video processing: developing the real-time video pipeline, developing the hardware accelerators, and finally integrating the two with the algorithm's control and data flow to complete the system design.

The following sections describe each development stage, starting with a brief introduction to the algorithm design and then detailing how the algorithm was implemented on the FPGA platform.

Algorithm design

For crowd surveillance and monitoring, a variety of algorithms have been proposed in the literature. Most of them start by detecting (or seeding) feature points in a crowd scene and then tracking these points over time to collect motion statistics. These motion statistics are then projected onto previously computed motion models to predict any abnormal activity. Further refinements cluster the feature points and track the clusters instead of individual points.

In addition, we use a weighted Gaussian kernel to improve reliability in occluded and zero-contrast regions of the image. Moreover, the computation of the motion vector for one patch is independent of all other patches, which makes the method very well suited to a parallel implementation on the FPGA.
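To make the per-patch idea concrete, here is a minimal NumPy sketch of Gaussian-weighted patch matching. This is an illustration, not the authors' implementation: the patch size, search range, and sigma are assumed values, and the function names are hypothetical. The Gaussian weighting emphasizes the patch center, which is one way to reduce the influence of occluded or low-contrast pixels near the patch borders; since each patch is processed independently, every call could map to a parallel hardware unit.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2-D Gaussian weights, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def patch_motion_vector(prev, curr, y, x, patch=8, search=4, sigma=2.0):
    """Motion vector (dy, dx) for one patch via Gaussian-weighted
    SSD matching over a small search window (illustrative values)."""
    w = gaussian_kernel(patch, sigma)
    ref = prev[y:y + patch, x:x + patch].astype(float)
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy:y + dy + patch,
                        x + dx:x + dx + patch].astype(float)
            cost = np.sum(w * (cand - ref) ** 2)  # center-weighted SSD
            if cost < best:
                best, best_dv = cost, (dy, dx)
    return best_dv
```

Because each patch needs only its own small neighborhood of pixels, the double loop over displacements is exactly the kind of regular, data-parallel workload that maps well onto FPGA logic.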

After computing the motion vectors over the entire image, the algorithm calculates their statistical properties. These include the average motion-vector length, the number of motion vectors, the dominant direction of motion, and similar indicators.
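A small NumPy sketch of these per-frame statistics, under assumed conventions (vectors stored as (dy, dx) rows; a hypothetical `min_len` threshold separating moving patches from static ones; the dominant direction computed as a circular mean):

```python
import numpy as np

def motion_statistics(vectors, min_len=0.5):
    """Summary statistics over one frame's motion vectors.

    vectors: array of shape (N, 2) holding (dy, dx) per patch.
    Patches with a vector shorter than min_len are treated as static.
    """
    v = np.asarray(vectors, dtype=float)
    lengths = np.hypot(v[:, 0], v[:, 1])
    moving = v[lengths >= min_len]
    if len(moving) == 0:
        return {"count": 0, "mean_length": 0.0, "dominant_direction": None}
    # Dominant direction: circular mean of the moving vectors' angles.
    rad = np.arctan2(moving[:, 0], moving[:, 1])
    dom = np.degrees(np.arctan2(np.sin(rad).mean(),
                                np.cos(rad).mean())) % 360.0
    return {
        "count": int(len(moving)),
        "mean_length": float(np.hypot(moving[:, 0], moving[:, 1]).mean()),
        "dominant_direction": float(dom),
    }
```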

We also compute a 360-degree histogram of the motion-vector directions and analyze its attributes, such as standard deviation, mean deviation, and coefficient of variation. These statistical attributes are then projected onto the pre-computed motion models, classifying the current motion into one of several major categories. We interpret the statistics over multiple frames to confirm the classification result.
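As a sketch of the direction histogram and its dispersion attributes (the one-degree bin width and the exact dispersion formulas are assumptions for illustration):

```python
import numpy as np

def direction_histogram_stats(angles_deg):
    """360-bin direction histogram plus simple dispersion attributes.

    Returns the histogram, its standard deviation across bins, the mean
    absolute deviation, and the coefficient of variation. A histogram
    concentrated in few bins (high dispersion across bins) indicates a
    dominant flow direction; a flat one indicates random directions.
    """
    hist, _ = np.histogram(np.asarray(angles_deg) % 360.0,
                           bins=360, range=(0.0, 360.0))
    mean = hist.mean()
    std = hist.std()
    mean_dev = np.abs(hist - mean).mean()
    coeff_var = std / mean if mean > 0 else 0.0
    return hist, std, mean_dev, coeff_var
```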

The pre-computed motion model is constructed as a weighted decision-tree classifier that weighs these statistical attributes to classify the observed motion. For example, fast movement combined with a sudden change of momentum in the scene, and motion directions that are random or point out of the image plane, can be classified as a possible panic situation. The algorithm was developed using Microsoft Visual C++ and the OpenCV library. For a complete demonstration of the algorithm, please refer to the web link provided at the end of this article.
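The panic example above can be sketched as a simplified rule cascade. This is a stand-in for the article's weighted decision tree, not its actual model: the feature names and all thresholds are illustrative assumptions, not values from the article.

```python
def classify_motion(stats):
    """Simplified stand-in for the weighted decision-tree classifier.

    stats keys (all hypothetical names):
      'mean_length'     - average motion-vector length (crowd speed)
      'momentum_change' - frame-to-frame change in total motion
      'direction_cv'    - coefficient of variation of the direction
                          histogram (high = concentrated directions,
                          low = directions spread randomly)
    All thresholds are illustrative, not taken from the article.
    """
    fast = stats["mean_length"] > 3.0
    sudden = stats["momentum_change"] > 2.0
    random_dirs = stats["direction_cv"] < 0.5  # flat histogram
    if fast and sudden and random_dirs:
        return "possible panic"
    if fast and sudden:
        return "sudden coordinated movement"
    if stats["mean_length"] < 0.5:
        return "calm / static crowd"
    return "normal flow"
```

In the real system such a tree would be evaluated over several consecutive frames before a classification is confirmed, as the article notes.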

FPGA implementation

The second stage of the system design is the FPGA implementation of the algorithm, which brings its own design challenges. For example, the FPGA design must now include video input/output and a frame buffer, and the limited resources and available performance may require design optimizations.

Given these design requirements and other architectural considerations, the FPGA implementation is divided into three parts. The first part develops a general-purpose real-time video pipeline on the FPGA to handle the necessary video input/output and frame buffering. The second part develops dedicated hardware accelerators for the algorithm. In the third part, we integrate them together to implement the algorithm's control and data flow, completing the entire FPGA-based system design.
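The three parts above can be pictured as a frame-level control loop in the software-control, hardware-acceleration style. The sketch below is purely conceptual: on the real system the `accelerate` callable would be an FPGA accelerator invocation and `FrameBuffer` would be the hardware frame buffer; here they are hypothetical software stand-ins.

```python
class FrameBuffer:
    """Minimal stand-in for the FPGA frame buffer: retains the previous
    frame so consecutive frames can be compared for motion."""
    def __init__(self):
        self.prev = None

    def push(self, frame):
        prev, self.prev = self.prev, frame
        return prev

def run_pipeline(frames, accelerate, classify):
    """Software-control loop: video in -> accelerator -> classifier.

    'accelerate' models the hardware-accelerated motion estimation;
    'classify' models the software-side statistical classification.
    """
    buf = FrameBuffer()
    labels = []
    for frame in frames:
        prev = buf.push(frame)
        if prev is None:
            continue  # need two frames before motion can be estimated
        vectors = accelerate(prev, frame)
        labels.append(classify(vectors))
    return labels
```

The control flow (loop, buffering, dispatch) stays in software on the embedded processor, while the compute-heavy step behind `accelerate` is what gets moved into FPGA logic.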
