
The Embedded Vision Summit is scheduled for May 16-19 in Santa Clara, California. It is a conference uniquely focused on practical computer vision and visual artificial intelligence (AI), aimed directly at innovators incorporating visual capabilities into products. One of the great things about being part of the Summit team is spotting trends in the embedded vision space, and the editors at EE Times asked me to share some of the trends we're seeing in 2022.

Phil Lapsley (Source: Embedded Vision Summit)

The first trend that jumped out at me was the huge increase in performance and efficiency for embedded vision applications. Interestingly, these gains aren't coming from processors alone. Of course, processors are getting faster, often thanks to a variety of architectural approaches (a "Cambrian explosion," as my colleague Jeff Bier recently wrote). But algorithms and tools are also driving this increase. A great example of practical algorithmic innovation is the Edge Impulse talk "Faster Objects, More Objects" (FOMO!), presented by their CTO, Jan Jongboom. Similarly, Felix Baum, Qualcomm's director of product management, will discuss the company's latest tools for helping developers get the best possible machine learning performance from their embedded processors.

(By the way, the great thing about this trend is that these performance gains multiply: combine the efficiency improvements in algorithms, tools, and processors, each of which can be significant on its own, and you quickly find yourself looking at dramatic year-over-year improvements.)
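To make the compounding concrete, here's a minimal sketch with purely illustrative numbers (not figures from any Summit talk) showing why independent gains multiply rather than add:

```python
# Hypothetical, illustrative speedup factors -- not measurements.
algorithm_gain = 2.0   # e.g., a more efficient model architecture
tooling_gain   = 1.5   # e.g., better compilers or quantization tools
processor_gain = 2.0   # e.g., a faster or more specialized chip

# Independent efficiency gains compound multiplicatively.
total_gain = algorithm_gain * tooling_gain * processor_gain
print(f"Combined speedup: {total_gain:.1f}x")  # 6.0x, not 2 + 1.5 + 2 = 5.5x
```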

The second trend is the democratization of edge AI through simplified development. For AI at the edge, and in vision, to go mainstream, system developers without deep machine learning experience must be able to master the technology. That means more use of ready-made models, such as the 270+ models available in the OpenVINO Open Model Zoo, presented in a talk by Ansley Dunn and Ryan Loney of Intel. And it means raising the level of abstraction for developers with low-code/no-code tools, such as those presented by Alvin Clark of NVIDIA.
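As a rough sketch of what "ready-made" means in practice, the snippet below runs a pre-trained Open Model Zoo model with OpenVINO's Python runtime. The model choice, file paths, and input shape are assumptions for illustration; see the Intel talk and the OpenVINO documentation for specifics.

```python
import numpy as np
from openvino.runtime import Core  # OpenVINO Python API (2022.x)

# Assumes an IR model (.xml + .bin) already fetched from the Open Model
# Zoo, e.g. with `omz_downloader --name mobilenet-v2` -- the model choice
# and paths here are illustrative, not prescribed by the talk.
core = Core()
model = core.read_model("mobilenet-v2.xml")
compiled = core.compile_model(model, device_name="CPU")

# Dummy NCHW input; real code would preprocess an actual image to the
# shape the chosen model expects (224x224 RGB is typical, but check).
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

results = compiled.infer_new_request({0: image})
scores = next(iter(results.values()))
print("Predicted class index:", int(np.argmax(scores)))
```

The point is that no training, labeling, or model design happens here at all; the developer's job shrinks to preprocessing inputs and interpreting outputs.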

A third trend is deployment at scale. How do you go from proof of concept to large-scale deployment? Emerging MLOps techniques and tools mean that product developers are no longer on their own when facing hard problems such as version control for training data, as we'll see in Nicolás Eiris' talk on AI reproducibility and continuous updates and Rakshit Agrawal's talk on Kubernetes and containerization for edge vision applications.
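To see why training-data version control is a real problem, consider that datasets are too large to diff by eye, so MLOps tools typically fingerprint them by content. The sketch below is a toy illustration of that idea (mimicking in miniature what tools like DVC do; it is not from any Summit talk):

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(root: str) -> str:
    """Content-hash every file under `root` into one stable version ID.

    Recording this ID alongside model weights and training config ties
    a training run to an exact snapshot of the data, which is the core
    of reproducibility.
    """
    digest = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):  # sorted => deterministic
        if path.is_file():
            digest.update(path.relative_to(root).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

# Usage (hypothetical directory): any change to any training file
# yields a different ID, flagging that the dataset has drifted.
# print(dataset_fingerprint("data/train"))
```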

The fourth trend concerns the trustworthiness and reliability of AI. As AI systems become more widespread, there are more opportunities for errors with serious consequences. Industry veterans will share their views on how to make AI more trustworthy; notable examples here are Krishnaram Kenthapadi's talk on responsible AI and Robert Laganière's talks on model operations and sensor fusion. There are also important questions to consider around privacy, bias, and ethics in AI. Professor Susan Kennedy of Santa Clara University will present "Privacy: An Insurmountable Challenge for Computer Vision," followed by an extended audience Q&A session called "Ask an Ethicist: Answering Your Questions About AI Privacy, Bias, and Ethics."

It's such an exciting time to be involved in edge AI and vision. What trends will you spot at the Summit?

– Phil Lapsley is a co-founder of the consulting firm BDTI and one of the organizers of the Embedded Vision Summit, to be held in Santa Clara, California, May 16-19.



https://www.eetimes.com/embedded-vision-four-trends-to-watch/
