SiMa.ai Palette 1.3 Release

June 10, 2024
Niladri Roy

It's an exciting time for developers. The latest release (1.3) of the SiMa.ai Palette™ SDK is here, bringing several enhancements designed to make the ML developer experience easier.


Palette is the low-code, command-line integrated environment for ML application development from SiMa.ai. It is used to create, build, and deploy edge ML solutions on SiMa.ai Machine Learning System-on-Chip (MLSoC) silicon. Our goal is to make accelerating your complete application pipeline painless and efficient. To that end, Palette already incorporates the features and capabilities to create, build, and deploy your application pipeline in minutes using Python scripting.

Palette enables you to tackle any sensor data set, any model, and any computer vision problem. You can auto-partition and compile across the MLA and the quad-core Arm subsystem with integrated cache, and integrate any C/C++ host application, library, or function using our C/C++ APIs to quickly bring the total solution into an integrated production environment.


New C++ API Enhancements

Palette provides the ability to integrate the MLSoC platform into existing C++ applications through its C/C++ co-processing APIs. A deployment command-line capability connects to the development board environment, configures and updates the board, and downloads the ML application pipeline executables to it. Command scripts let the user debug software execution on the device and stream logs of the MLSoC code execution back to the host platform. With the release of Palette 1.3, new C++ API enhancements expose acceleration pipeline status and enhanced error reporting, giving you increased trace and debug visibility into your ML pipeline and making it easy to port any C++ application that uses a host plus GPU/accelerators.
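To give a flavor of the kind of status and error visibility this enables, here is a minimal, self-contained C++ sketch. The names (`AcceleratorPipeline`, `PipelineStatus`, `submit()`) are hypothetical stand-ins, not the actual Palette API; the point is the pattern of checking a structured status object with an error code and message rather than a bare pass/fail.

```cpp
// Illustrative only: AcceleratorPipeline and PipelineStatus are hypothetical
// stand-ins for the Palette C++ co-processing APIs, showing the kind of
// pipeline status and error reporting described above. They are not the real API.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

enum class PipelinePhase { Idle, Preprocess, MlaInference, Postprocess, Done, Error };

struct PipelineStatus {
    PipelinePhase phase = PipelinePhase::Idle;  // where the frame currently is
    int last_error_code = 0;                    // 0 = no error
    std::string last_error_message;             // human-readable trace detail
};

class AcceleratorPipeline {
public:
    // Submit one frame of sensor data to the (hypothetical) MLSoC pipeline.
    bool submit(const std::vector<uint8_t>& frame) {
        if (frame.empty()) {
            status_ = {PipelinePhase::Error, 22, "empty frame buffer"};
            return false;
        }
        status_ = {PipelinePhase::Done, 0, ""};
        return true;
    }
    const PipelineStatus& status() const { return status_; }

private:
    PipelineStatus status_;
};

int main() {
    AcceleratorPipeline pipeline;
    std::vector<uint8_t> frame(640 * 480 * 3, 0);  // placeholder RGB frame

    if (!pipeline.submit(frame)) {
        // Enhanced error reporting: code + message instead of a bare failure.
        std::cerr << "pipeline error " << pipeline.status().last_error_code
                  << ": " << pipeline.status().last_error_message << '\n';
        return 1;
    }
    std::cout << "frame processed, phase=Done\n";
    return 0;
}
```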


Introducing int16 Quantization

Along with continued enhancements to existing quantization methods, Palette 1.3 introduces int16 quantization support for achieving improved precision in hard-to-quantize models. int16 quantization is useful when it is critical to maintain higher accuracy than int8 allows, offering better precision with minimal loss of accuracy, and it is ideal for applications with moderate memory and computational constraints. For example, developers can first quantize to int8, apply different calibration methods with various calibration samples, and check the output image detections to identify the best int8 version for their needs. If the int8 metrics are insufficient, they can quantize and compile to int16, fine-tuning calibration parameters for optimal performance.
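To make the precision difference concrete, here is a small, self-contained C++ sketch of generic symmetric linear quantization (not the Palette quantizer) that round-trips the same values through int8 and int16 ranges and compares the mean reconstruction error. The toy data and the single per-tensor scale are assumptions for illustration only.

```cpp
// A minimal sketch of why int16 quantization preserves more precision than int8:
// symmetric linear quantization of the same values at both bit widths, comparing
// reconstruction error. Illustrative only; this is not the Palette quantizer.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Quantize to a signed integer range [-qmax, qmax], then dequantize back,
// and return the mean absolute reconstruction error.
double round_trip_error(const std::vector<double>& values, double qmax) {
    double max_abs = 0.0;
    for (double v : values) max_abs = std::max(max_abs, std::fabs(v));
    double scale = max_abs / qmax;  // one scale for the whole tensor

    double err = 0.0;
    for (double v : values) {
        double q = std::clamp(std::round(v / scale), -qmax, qmax);
        err += std::fabs(v - q * scale);
    }
    return err / values.size();
}

int main() {
    // Toy "activation tensor" with a wide dynamic range (a few large outliers).
    std::vector<double> activations;
    for (int i = 0; i < 1000; ++i)
        activations.push_back(std::sin(i * 0.37) * (i % 7 == 0 ? 40.0 : 0.5));

    std::cout << "int8  mean error: " << round_trip_error(activations, 127.0)   << '\n';
    std::cout << "int16 mean error: " << round_trip_error(activations, 32767.0) << '\n';
    return 0;
}
```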


Install Right First Time

Every user environment, even from machine to machine, can be different, and it is never fun to deal with the myriad dependencies waiting to trip you up when installing new software. Palette 1.3 takes the guesswork out of SDK installation, saving valuable time for the real productive work. The new Install-Right tool in Palette 1.3 automatically checks that all system requirements, such as software versions, Docker and OS versions, memory and disk storage, and port access, are correct before the actual SDK installation begins. This ensures that system dependencies are met, reducing the lags and downtime caused by non-functional installs when one or more critical steps in the process are missed.
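For a sense of what such preflight checks look like, here is a minimal C++ sketch (Linux/POSIX) that verifies free disk space and the presence of a Docker CLI before proceeding. The 40 GiB threshold and the specific commands are illustrative assumptions, not the actual Install-Right checklist.

```cpp
// A minimal sketch of pre-install checks an installer can run before unpacking
// an SDK: free disk space and a reachable Docker CLI. Thresholds and commands
// are illustrative assumptions, not the actual Install-Right checklist.
#include <array>
#include <cstdio>      // popen/pclose (POSIX)
#include <filesystem>
#include <iostream>
#include <string>

// Return the first line of a shell command's output, or "" on failure.
std::string first_line_of(const std::string& cmd) {
    std::array<char, 256> buf{};
    std::string out;
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) return out;
    if (fgets(buf.data(), static_cast<int>(buf.size()), pipe)) out = buf.data();
    pclose(pipe);
    if (!out.empty() && out.back() == '\n') out.pop_back();
    return out;
}

int main() {
    bool ok = true;

    // Check free disk space in the current directory (example threshold: 40 GiB).
    const auto space = std::filesystem::space(".");
    const double free_gib = static_cast<double>(space.available) / (1ULL << 30);
    std::cout << "free disk: " << free_gib << " GiB\n";
    if (free_gib < 40.0) { std::cerr << "FAIL: need at least 40 GiB free\n"; ok = false; }

    // Check that a Docker client is installed and on PATH.
    const std::string docker = first_line_of("docker --version 2>/dev/null");
    if (docker.empty()) { std::cerr << "FAIL: docker CLI not found on PATH\n"; ok = false; }
    else                { std::cout << "found: " << docker << '\n'; }

    std::cout << (ok ? "preflight passed\n" : "preflight failed\n");
    return ok ? 0 : 1;
}
```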


Expanding Model Support

Palette 1.3 continues to expand our list of supported models and to optimize existing ones, adding MaskRCNN and YOLOv8 4-camera support over Ethernet to the more than 350 models already fully compatible with the SiMa.ai MLSoC.


SiMa.ai Palette software is a unified suite of software, tools, and libraries designed to enable developers to create, build, and deploy applications on multiple devices. Palette manages all dependencies and configuration issues within a container environment while communicating securely with edge devices. This approach still affords embedded programmers the flexibility to perform low-level optimization of the code.


Our goal at SiMa.ai is to support developers wherever they are in their ML journey by continuously adding functionality, scripts, and models for a truly effortless developer experience in accelerating your entire application pipeline at the edge.