The Next Chapter at SiMa.ai: One Platform for All Edge AI

Krishna Rangasayee, Founder and CEO, and Elizabeth Samara Rubio, Chief Business Officer

Since founding SiMa.ai, our goal has been to create one software and hardware platform for the embedded edge that empowers companies to bring their AI/ML innovations to life. With the rise of generative AI already reshaping the way humans and machines work together, from copilots to chatbots, it’s inevitable that generative AI’s universal adoption will be more impactful than the PC, the browser with search, and the smartphone combined.

There is no doubt that generative AI is integral to many of our visions of the future, from autonomous cars to drones, aerospace, smart factories, intelligent healthcare diagnostics and more. The stakes for these AI/ML innovations could not be higher. For generative AI to become this widespread and embed AI/ML in our daily lives in such a meaningful way, it must be deployed responsibly – meaning privacy, safety, energy efficiency and education are not optional, but required. The truth remains that today’s computing chips and software are not designed to make generative AI adoption happen at scale.

Today, we’re thrilled to share that we are accelerating the release of SiMa.ai’s second-generation Machine Learning System-on-Chip (MLSoC). By Q1 of 2025, our second-generation MLSoC will join forces with our first-generation MLSoC as SiMa.ai radically simplifies edge AI/ML for customers through one software-centric platform for all edge AI, including computer vision, transformers and generative AI.

The need for ultra-low-latency, power-efficient, responsible and secure data processing has never been more critical, and it’s a combination only possible at the edge. AI chips running in the cloud today are nearing their power capacity, and the cost of compute to train and run state-of-the-art generative AI is a nonstarter for most organizations. Generative AI is accelerating and progressing at an unprecedented speed, and most companies do not have the talent capacity or CapEx to continuously reinvest as the technology advances.

Our current generation of MLSoC has time and time again demonstrated leadership against incumbent industry peers in performance and power efficiency, validated most recently by the MLCommons® February 2024 MLPerf benchmark competition in the MLPerf™ Inference 4.0 closed, edge, power division category. SiMa.ai’s sustained leadership in FPS/W (frames per second per watt) demonstrates our commitment and unique ability to provide one platform for all edge AI that scales with customers as their AI/ML journey evolves, from computer vision, to transformers, to multi-modal generative AI.

The second-generation MLSoC will enable any framework, any network, any model or sensor, as well as any modality (audio, speech, text, image, and more) for edge AI applications. Through SiMa.ai’s proprietary combination of silicon and software, the MLSoC will continue to radically simplify and scale all edge AI/ML for customers through one platform.

By combining the MLSoC with our patented Palette software features, such as static scheduling and double buffering that allow for proactive data prefetch in a layered approach ahead of compute time, customers will not be restricted by model sizes. The memory hierarchy pairs external DRAM, using high-bandwidth LPDDR5, with Network on Chip (NoC) and Direct Memory Access (DMA) engines. This unique differentiator of our memory hierarchy design maximizes capacity and efficiency while minimizing power, leaving space to execute transformer-based multi-modal applications directly on the device. SiMa.ai’s roadmap will expand support for high performance at the lowest power for large language models (LLMs) and large multi-modal models (LMMs), in a single platform.
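To make the double-buffering idea concrete, here is a minimal conceptual sketch in Python of the general technique: while one buffer of layer data is being computed on, a DMA-like worker prefetches the next layer into a second buffer, so the transfer cost is hidden behind compute. The function and buffer names are illustrative assumptions for this sketch only, not SiMa.ai's Palette API or its actual scheduler.

```python
# Conceptual double-buffering sketch: overlap "DMA" prefetch of the next
# layer's data with compute on the current layer. Illustration only --
# stand-in functions, not SiMa.ai's implementation.
from concurrent.futures import ThreadPoolExecutor

def prefetch(layer_id):
    # Stand-in for a DMA transfer from external DRAM into on-chip memory.
    return f"weights[{layer_id}]"

def compute(weights):
    # Stand-in for running one layer on the ML accelerator.
    return f"ran {weights}"

def run_layers(num_layers):
    results = []
    with ThreadPoolExecutor(max_workers=1) as dma:
        next_buf = dma.submit(prefetch, 0)      # fill the first buffer
        for i in range(num_layers):
            current = next_buf.result()         # wait for the prefetch to land
            if i + 1 < num_layers:
                # Kick off the next transfer before computing, so the
                # fetch for layer i+1 overlaps with compute on layer i.
                next_buf = dma.submit(prefetch, i + 1)
            results.append(compute(current))
    return results

print(run_layers(3))
```

With a static schedule, the compiler can decide these prefetch points ahead of time instead of at runtime, which is what makes the proactive, layered prefetch described above possible.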

The future capabilities of SiMa.ai’s MLSoC are underpinned by incredible innovations from trusted partners. SiMa.ai’s heterogeneous compute platform for overall application development is powered by an Arm Holdings plc processor subsystem. In addition, we have integrated the EV74 Embedded Vision Processors from Synopsys, allowing us to enable pre- and post-processing in computer vision applications on a single chip. TSMC’s 6nm technology generates further performance and power consumption optimizations for second-generation MLSoC customers.

Generative AI at the edge will finally bring to life a machine that can inherently act autonomously. The intelligence exchanged to and from a device will be far more collaborative, with AI/ML supporting the human-to-machine interface, and the resulting output will be more advanced. Imagine drones that take in multiple sources of information about their environments to improve their precision in runway detection and runway tracking by adding text, speech, or sound to image recognition on the machine. Or healthcare workers that utilize generative AI to combine medical history sources from physician notes, diagnostic images, lab tests, and medical reports to yield a more accurate diagnosis of a condition, improving the state of healthcare and patient-doctor relationships. Or generative AI applications that can work with manufacturers to create text, images, or videos with step-by-step instructions to help operators complete repairs and upgrades in less time.

The time to build a future that gives sight, sound and speech to the technology that surrounds us is now. We’re grateful to our partners and investors who are making our mission and accelerated execution possible. Thank you to our newest investors, Maverick Capital, Point72, and Jericho along with our existing investors Amplify Partners, Dell Technologies Capital, Fidelity Management & Research Company, and Lip-Bu Tan.  

Billions of devices exist between the PC and the smartphone that have the potential to embody AI through an infinite variety of LLMs and LMMs. The opportunity to embed conversational AI throughout these machines will be the biggest technology transformation of our lifetime. This is only the beginning. Companies need a partner now to help them prepare for that reality. SiMa.ai is ready. Get started with SiMa.ai today.