
OpenVINO

OpenVINO is Intel's deep-learning-based computer vision acceleration and optimization framework. It supports compression, optimization, and accelerated inference for models trained in other machine learning frameworks.

Use the Intel® Distribution of OpenVINO™ Toolkit to develop applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the toolkit extends computer vision workloads across Intel® hardware (including accelerators) to maximize performance.


Core components and functions

OpenVINO consists of two core components plus a library of pre-trained models.


Core component: Model Optimizer

The Model Optimizer converts trained models into the toolkit's Intermediate Representation. The deep learning frameworks it supports include:

- ONNX
- TensorFlow
- Caffe
- MXNet
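As a sketch of the conversion step, a trained model from one of these frameworks is converted to IR with the Model Optimizer command-line script; the flags below are from the toolkit's CLI, while the model path, input shape, and output directory are placeholders:

```shell
# Convert a frozen TensorFlow graph to OpenVINO IR
# (model path, input shape, and output directory are placeholders).
python mo.py \
    --input_model frozen_model.pb \
    --input_shape [1,224,224,3] \
    --data_type FP16 \
    --output_dir ./ir
# Produces frozen_model.xml (network topology) and frozen_model.bin (weights).
```

The `--data_type FP16` flag requests half-precision weights, which is the usual choice when targeting GPU, MYRIAD, or HDDL devices.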

Core component: Inference Engine

The Inference Engine supports accelerated execution of deep learning models at the hardware instruction-set level. The traditional OpenCV image processing library has also been optimized at the instruction-set level, significantly improving performance and speed. Supported hardware platforms include:

- CPU
- GPU
- FPGA
- MYRIAD (Intel® Neural Compute Stick, first and second generation)
- HDDL
- GNA
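The target hardware is selected at load time by plugin name. For illustration, the toolkit's bundled benchmark_app sample takes the device as a flag (the model path below is a placeholder):

```shell
# Run an IR model on a chosen device plugin (model.xml is a placeholder path).
# Swap CPU for GPU, MYRIAD, or HDDL to target other hardware.
python benchmark_app.py -m model.xml -d CPU
```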

OpenVINO Toolkit Workflow

The following diagram illustrates the typical OpenVINO™ workflow:


Model Preparation, Conversion and Optimization

You can use your framework of choice to prepare and train a deep learning model, or simply download a pretrained model from the Open Model Zoo. The Open Model Zoo includes deep learning solutions to a variety of vision problems, including object recognition, face recognition, pose estimation, text detection, and action recognition, at a range of measured complexities. Several of these pretrained models are also used in the code samples and application demos. Models are downloaded from the Open Model Zoo with the Model Downloader tool.
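As an example of this step, the Model Downloader is a Python script shipped with the Open Model Zoo; a typical invocation looks like the following, where the model name is one example from the zoo and the output directory is a placeholder:

```shell
# Download a pretrained Open Model Zoo model (output directory is a placeholder).
python downloader.py --name face-detection-adas-0001 -o ./models
```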


One of the core components of the OpenVINO™ toolkit is the Model Optimizer, a cross-platform command-line tool that converts a trained neural network from its source framework into an open-source, nGraph-compatible Intermediate Representation (IR) for use in inference operations. The Model Optimizer imports models trained in popular frameworks such as Caffe*, TensorFlow*, MXNet*, Kaldi*, and ONNX*, and performs a few optimizations, removing excess layers and grouping operations where possible into simpler, faster graphs.

