Embedded Vision: Success through Collaboration


Manufacturers of hardware and software for embedded vision components must work together to promote the effective use of this pioneering technology. The potential applications of embedded vision in combination with machine learning are enormous.

Many future applications – whether in B2C or B2B – will be based on Embedded Vision: small, integrated image processing systems that work intelligently directly inside devices, enabling them to see and understand. Embedded vision is made possible by compact, high-performance computing platforms that consume very little energy. Thanks to standardized interfaces between computing platforms and image sensors, increasing amounts of image data can be processed in real time. With artificial intelligence, image processing systems are becoming even more intelligent: they are learning by themselves.


“The potential of Embedded Vision is enormous. The growing number of exhibitors and demos with an embedded vision reference, which could be seen at many of the trade fair booths, shows this,” summarized Prof. Dr.-Ing. Axel Sikora, chairman of the embedded world advisory board and chairman of the embedded world Conference. “We are delighted that VDMA Machine Vision and embedded world have again organised a panel discussion and a dedicated track on embedded vision at the embedded world Conference. Together, we are driving this topic forward.”


According to the panelists, embedded vision technology will not completely replace traditional PC- or smart-camera-based machine vision systems in the foreseeable future. However, from a technical and economic point of view, it does offer extremely interesting solutions in a multitude of application fields. "The speed of development of the required components, from sensor boards to various embedded platforms to machine vision software for evaluation, is extremely high. As a result, embedded vision technology has now reached a level of performance that already allows the use of effective systems in many applications today," said Dr. Klaus-Henning Noffz, Chairman of the Board of VDMA Machine Vision, at the panel discussion.


An important and necessary step toward making it easier for users to implement this innovative technology is for the manufacturers of embedded vision components to work together on standardization and platform building: "If users have to assemble sensors, processors, software, and other components tediously and individually when developing solutions, Embedded Vision will not achieve the success it potentially can." However, various camera, embedded board, and software manufacturers have already recognized this and are cooperating for the benefit of users.


Further development of the technology is supported by continuous improvements in processors as well as by innovative algorithms and methods such as deep learning and artificial intelligence. On the hardware side, ever-smaller computers with multi-core processors and ever-lower power consumption provide sufficient computing power. Deep learning is becoming increasingly important in embedded vision systems, for example to classify defects. According to Dr. Olaf Munkelt, Managing Director of MVTec Software GmbH, even complex image processing tasks can be solved efficiently when deep learning is combined with suitable methods for image preprocessing and postprocessing. The participants of the panel discussion all agreed: Embedded Vision will increasingly establish itself in numerous industries as a successful and cost-effective technology.
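To make the defect-classification use case mentioned above concrete, here is a minimal, purely illustrative sketch. A real embedded vision system would run a trained deep network on the device; here a hand-rolled two-feature linear classifier stands in for that network, and the feature names, weights, and threshold are assumptions chosen only for demonstration.

```python
def extract_features(patch):
    """Toy feature extractor: mean brightness and contrast of a grayscale patch.
    In a deployed system, these features would come from a trained CNN."""
    flat = [px for row in patch for px in row]
    mean = sum(flat) / len(flat)
    contrast = max(flat) - min(flat)
    return [mean, contrast]

def classify(patch, weights, bias):
    """Linear decision: a positive score labels the patch 'defect', else 'ok'."""
    feats = extract_features(patch)
    score = sum(w * f for w, f in zip(weights, feats)) + bias
    return "defect" if score > 0 else "ok"

# Illustrative parameters: high local contrast (e.g. a scratch) pushes the
# score toward "defect". These values are not from any trained model.
WEIGHTS = [0.0, 1.0]
BIAS = -50.0

smooth_patch = [[100, 101], [99, 100]]     # low contrast -> "ok"
scratched_patch = [[10, 200], [220, 15]]   # high contrast -> "defect"
print(classify(smooth_patch, WEIGHTS, BIAS))     # ok
print(classify(scratched_patch, WEIGHTS, BIAS))  # defect
```

The point of the sketch is the pipeline shape, not the classifier: feature extraction followed by a lightweight decision is exactly the part that fits within the power and compute budget of an embedded platform.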

Statements from participants

Allied Vision Technologies GmbH

“Today’s biggest challenge in applying efficient vision capabilities to embedded systems is the camera itself, with all the integration effort it requires. New kinds of camera modules and technologies will help embedded engineers to lower NRE costs significantly. At the same time, users benefit from more image processing capabilities directly in the camera module, which improves the allocation of resources on the host side.” Paul Maria Zalewski, Director, Product Management, Allied Vision Technologies GmbH

Congatec AG

The core component of an embedded vision system is the processor or processing board. Some doubt that small embedded boards with supposedly limited processing power can replace powerful PCs in demanding vision applications. What’s your opinion?
“Given the continuous evolution of increased computing power at small sizes, low-power advancements, and multi-core processors capable of running multiple software applications, highly intelligent vision systems with centralized computing at the edge will enable a wide variety of volume applications that in the past were limited by the requirement and cost of dedicated PCs. Vision systems powered with serious analytics, using time-sensitive networks and delivering real-time performance, will enable the next generation of products at a wide variety of market-acceptable price points, and the value created will come from the digitization of information.” Jason Carlson, CEO, Congatec AG

MVTec Software GmbH

“Deep Learning on embedded devices is continuously gaining importance in the market. Yet, we do not see deep learning as a one-for-all solution, but rather as an ideal complementary technology for solving specific machine vision applications, e.g. classifying defects. By combining deep learning technology with other approaches, complex vision tasks including pre- and post-processing can be solved efficiently. Thus, a comprehensive toolset available for a broad range of embedded hardware architectures, such as provided by MVTec HALCON, is crucial to building embedded vision solutions efficiently and thus reducing time-to-market.” Olaf Munkelt, Managing Director, MVTec Software GmbH

ON Semiconductor Ltd

Artificial intelligence – machines that see and learn for themselves. We have heard this dream for years. Is deep learning nowadays really as simple and great as everyone says, or are the expectations exaggerated? 
“It is not simple, but it is starting to do great things. These systems are the heart of new transportation experiences with autonomous driving. They are enabling higher-efficiency, better-quality manufacturing systems for in-line product inspection. They are simplifying shopping experiences, from having no checkout lines to paying at vending machines with facial recognition. There is a lot more to do, but real change is happening today because of improvements in imaging quality and advances in artificial intelligence.” James Tornes, Vice President Systems and Software, Intelligent Sensor Group, ON Semiconductor Ltd