OpenVINO is Intel's deep learning and computer vision acceleration framework. It provides model compression, optimization, and accelerated inference for models trained on other machine learning platforms.
Use the Intel® Distribution of OpenVINO™ Toolkit to develop applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends computer vision workloads across Intel® hardware (including accelerators) and maximizes performance.
OpenVINO mainly consists of two core components, the Model Optimizer and the Inference Engine, plus a library of pre-trained models.
The deep learning frameworks supported by the Model Optimizer include ONNX, TensorFlow, Caffe, MXNet, and Kaldi.
The Inference Engine accelerates deep learning models at the hardware instruction-set level. The bundled OpenCV image processing library has also been optimized for these instruction sets, which significantly improves performance. Supported hardware platforms include: CPU, GPU, FPGA, MYRIAD (the first- and second-generation Intel® Neural Compute Stick), HDDL, and GNA.
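As a concrete illustration, the sketch below loads an Intermediate Representation model and runs it on the CPU with the classic Inference Engine Python API. The file names `model.xml`/`model.bin` are placeholders, and the exact API surface (`IECore`, `read_network`, `load_network`) reflects the pre-2022 OpenVINO releases; this is a minimal sketch, not a complete application.

```python
import numpy as np

def to_nchw(image_hwc):
    """Convert an HxWxC image array to a 1xCxHxW float32 batch,
    the layout most CNN inputs in the Inference Engine expect."""
    chw = image_hwc.transpose(2, 0, 1).astype(np.float32)
    return chw[np.newaxis, ...]

def run_inference(image_hwc, device="CPU"):
    # Imported locally so the preprocessing helper above remains
    # usable even without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    # "model.xml"/"model.bin" are placeholder IR files produced by
    # the Model Optimizer.
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name=device)
    input_name = next(iter(net.input_info))
    return exec_net.infer({input_name: to_nchw(image_hwc)})
```

The same `device_name` argument selects any of the platforms listed above (for example `"GPU"` or `"MYRIAD"`) without changing the rest of the code.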
The following diagram illustrates the typical OpenVINO™ workflow:
You can use your framework of choice to prepare and train a Deep Learning model, or simply download a pre-trained model from the Open Model Zoo. The Open Model Zoo includes Deep Learning solutions to a variety of vision problems, including object recognition, face recognition, pose estimation, text detection, and action recognition, at a range of measured complexities. Several of these pre-trained models are also used in the code samples and application demos. To download models from the Open Model Zoo, use the Model Downloader tool.
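For example, the Model Downloader can fetch a Zoo model by name from the command line. This is a sketch assuming the `omz_downloader` entry point shipped with the OpenVINO development tools; `face-detection-adas-0001` is one of the Zoo's published model names, and `models` is an arbitrary output directory.

```shell
# Download a pre-trained face detection model from the Open Model Zoo
# into the local "models" directory.
omz_downloader --name face-detection-adas-0001 --output_dir models
```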
One of the core components of the OpenVINO™ toolkit is the Model Optimizer, a cross-platform command-line tool that converts a trained neural network from its source framework to an open-source, nGraph-compatible Intermediate Representation (IR) for use in inference operations. The Model Optimizer imports models trained in popular frameworks such as Caffe, TensorFlow, MXNet, Kaldi, and ONNX, and performs a few optimizations that remove excess layers and, where possible, group operations into simpler, faster graphs.
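A typical conversion is a single command. The sketch below assumes the `mo` entry point available in recent OpenVINO releases; `model.onnx` is a placeholder for a trained network, and the output is the `.xml`/`.bin` IR pair that the Inference Engine consumes.

```shell
# Convert a trained ONNX model to OpenVINO Intermediate Representation.
# Produces model.xml (topology) and model.bin (weights) in the "ir" directory.
mo --input_model model.onnx --output_dir ir
```

The same command works for the other supported frameworks by pointing `--input_model` at a TensorFlow frozen graph, a Caffe `.caffemodel`, and so on.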