On Edge, Episode 3: What do ML developers and embedded engineers have to look forward to in 2024?

“On Edge” is back with our final episode of the calendar year. In this episode, hosts Alicja Kwasniewska and Carlos Davila are putting a twist on traditional New Year’s resolutions, instead offering predictions about what will impact the ML developer and embedded edge communities in 2024.

Why will latency be such a critical factor as we undergo a revolution in human-to-machine interfaces in 2024, and why is operating at the edge the clear solution (2:53)? What advances will allow personal assistants to evolve so that they don’t simply read your calendar, but can actually sense your mood (4:15)?

Using SiMa.ai CEO Krishna Rangasayee’s recent blog post as a blueprint for their discussion, our hosts give us their own detailed take on three key predictions for the coming year:

Domain-specialized models. Scaling back on parameter count is critical given current capacity limitations. How have those capacity limitations changed year-to-year, and how can we optimize large models to fit within them (5:50)?
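
For a concrete flavor of what that optimization can look like, here is a minimal sketch using PyTorch’s built-in dynamic quantization. To be clear, this is an illustrative example, not SiMa.ai’s toolchain, and the model and layer sizes are hypothetical:

```python
# Illustrative sketch only -- generic PyTorch dynamic quantization,
# not SiMa.ai's pipeline. Layer sizes are made up for the example.
import io

import torch
import torch.nn as nn

# Stand-in for a "large" model whose footprint is dominated by big
# linear layers -- the kind of capacity pressure edge devices feel.
model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Convert Linear weights from fp32 to int8, trading a little accuracy
# for a roughly 4x smaller weight footprint.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def serialized_mb(m: nn.Module) -> float:
    """Rough on-disk size of a model's weights, in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32 weights: {serialized_mb(model):.1f} MB")
print(f"int8 weights: {serialized_mb(quantized):.1f} MB")

# The quantized model still runs ordinary CPU inference.
with torch.no_grad():
    out = quantized(torch.randn(1, 4096))
print("output shape:", tuple(out.shape))
```

Quantization is just one lever, of course; pruning, distillation, and domain-specific architectures attack the same capacity limits from other angles.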

Software flexibility. To keep experimenting with these models, flexible software will become increasingly important. Why is adaptability such a critical part of allowing the industry to keep iterating and innovating (8:05)?

Data security. Security, model accuracy, and transparency will be critical as we enable more LLM and LMM solutions. Human-in-the-loop systems and collaborative intelligence can support these requirements and fit well into embedded edge systems. How can ML developers ensure these three foundational pillars are solid when evaluating a vendor’s technology (10:05)?
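
To make the human-in-the-loop idea concrete, here is a minimal sketch of a confidence gate that auto-accepts high-confidence predictions and defers the rest to a person. Again, this is a generic illustration with hypothetical names and a made-up threshold, not a SiMa.ai API:

```python
# Illustrative sketch only -- a generic human-in-the-loop confidence gate,
# not a SiMa.ai API. The threshold and names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class HumanInTheLoopGate:
    threshold: float = 0.90                  # minimum confidence to auto-accept
    review_queue: list = field(default_factory=list)

    def route(self, label: str, confidence: float) -> str:
        """Accept confident predictions; queue the rest for human review."""
        if confidence >= self.threshold:
            return label                      # trust the model output
        self.review_queue.append((label, confidence))
        return "PENDING_HUMAN_REVIEW"         # defer to a person


gate = HumanInTheLoopGate(threshold=0.90)
print(gate.route("forklift", 0.97))    # -> forklift
print(gate.route("pedestrian", 0.62))  # -> PENDING_HUMAN_REVIEW
print(gate.review_queue)               # -> [('pedestrian', 0.62)]
```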

But first, we get a few visual prognostications from the artistically inclined generative AI program DALL-E, which delivers its predictions as digital images. We work out its surrealist responses through a combination of inference and laughter, as our hosts demonstrate the admittedly entertaining limitations of using an image generator to produce text (0:36). DALL-E’s predictions – we think – include advances in robotic surgery, autonomous vehicles, and AI personal assistants. Like both of its namesakes, the surrealist painter Salvador Dalí and the cartoon robot WALL-E, DALL-E speaks its own language.

And as always, we close out the episode with more questions from you – our viewers. How is SiMa.ai’s Model SDK compiled and packaged after partitioning (14:45)? And how does SiMa.ai stay up-to-date in the ever-changing robotics industry (15:35)? Watch the episode to find out, and to see what our lucky participants won for submitting their questions.

A sincere thank you to everyone who has followed along with us for the first few episodes of On Edge. This series is ultimately inspired by you, the ML developers and embedded edge engineers doing boots-on-the-ground work every day, and we look forward to bringing you more in-depth, educational content throughout the coming year. And in case you missed it, check out our recently announced MLSoC Developer Kit, featuring the latest release of our Palette software.

Please like and subscribe to our YouTube channel, and if you have questions you’d like to see answered on a future episode, reach out to us at OnEdge@sima.ai.