Palette 1.4 Software
A command-line integrated environment for edge ML application development: create, build, and deploy edge ML solutions on SiMa.ai MLSoC silicon.
The Palette 1.4 software update is available on our Developer Site.
A snapshot of new features in the Palette 1.4 Software Production release:
- Improved ‘hello’ experience:
  - Dependency checks are now built into the install script, so developers can install the SDK without manually running any pre-install scripts, saving a step, as well as time, in the process.
- Streamlined edge ML deployment:
  - A runtime dynamic profiler, tracer, and visualizer for GStreamer helps developers understand application behavior on the MLSoC so they can better optimize their applications.
  - File transfer from the MLSoC to the host over PCIe, enabling faster nominal transfer speeds for application-logic and business-logic data from the MLSoC to the host PC.
- Improved model performance:
  - Large-tensor performance improvements, reducing latency and increasing throughput of large-tensor pipelines.
  - The default calibration algorithm is now MSE, which produces better results than min-max.
  - Accelerated quantization evaluation in the ModelSDK, speeding up by as much as 2x the iterative process of determining the ideal quantization method for a particular model.
  - A manual mixed-precision quantization mode, enabling precise control over mixing and matching 8-bit and 16-bit precision quantization.
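The difference between the two calibration schemes above can be illustrated with a small, self-contained sketch (plain NumPy, not the ModelSDK API): min-max calibration derives the int8 scale from the largest observed magnitude, while MSE calibration searches candidate clipping thresholds and keeps the one that minimizes mean squared quantization error, which tends to work better on heavy-tailed data.

```python
import numpy as np

def fake_quantize(x, scale):
    # Symmetric int8 quantize-then-dequantize, used to measure error.
    return np.clip(np.round(x / scale), -127, 127) * scale

def minmax_scale(x):
    # Scale chosen so the largest-magnitude value maps to +/-127.
    return np.max(np.abs(x)) / 127.0

def mse_scale(x, num_candidates=60):
    # Try clipping thresholds between 10% and 100% of the max magnitude
    # and keep the scale with the lowest mean squared quantization error.
    max_abs = np.max(np.abs(x))
    best_scale, best_err = None, np.inf
    for frac in np.linspace(0.1, 1.0, num_candidates):
        scale = frac * max_abs / 127.0
        err = np.mean((x - fake_quantize(x, scale)) ** 2)
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

# Heavy-tailed data (Laplace) is where MSE calibration tends to win.
rng = np.random.default_rng(0)
x = rng.laplace(size=200_000)

err_minmax = np.mean((x - fake_quantize(x, minmax_scale(x))) ** 2)
err_mse = np.mean((x - fake_quantize(x, mse_scale(x))) ** 2)
print(err_mse <= err_minmax)  # prints True: MSE calibration never does worse here
```

Because the MSE search includes the full-range threshold as one of its candidates, it can never produce a worse error than min-max on the calibration data; on heavy-tailed inputs it usually finds a tighter clip that quantizes the bulk of the values more finely.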
Also Featuring:
- Ability to integrate the MLSoC machine learning platform into existing C++ applications utilizing C/C++ co-processing APIs.
- Support for Palette installation on Linux and Windows.
- Improved quantization with additional new schemes and accuracy modes.
- Increased library support for ML models, plug-ins and applications.
- Compile and evaluate any ML model, from any ML framework to silicon.
- Build applications using Python or GStreamer on SiMa.ai MLSoC.
- Develop GStreamer applications with SiMa.ai plug-in libraries.
- Deploy and manage edge ML applications using Palette tools.
- Evaluate the KPI performance of ML models and pipelines.
- Customize embedded Linux run-time environment for hosting edge ML applications.
- Support for creating functional pipelines on target using Python APIs.
- Expansion of optimized model support with over 380 models fully compatible with the SiMa.ai Machine Learning Accelerator (MLA). (GitHub link)
SiMa.ai Palette™ software addresses ML developers’ steep learning curve by removing the need for arcane embedded programming. SiMa.ai Palette software is a unified suite of software, tools, and libraries designed to enable developers to create, build, and deploy applications on multiple devices. Palette manages all dependencies and configuration issues within a container environment while communicating securely with the edge devices, and it continues to empower the embedded programmer by retaining the flexibility to perform low-level code optimizations.
Palette is the delivery mechanism for Any, 10x, and Push-button.
Any Model: The SiMa.ai Palette ML compiler supports virtually any framework and compiles across heterogeneous processors, targeting those compute resources layer by layer with the precision necessary to achieve accurate results on SiMa.ai MLSoC silicon.
Any Pipeline: An automated path from Python to SiMa.ai MLSoC silicon is supported, with the ability to cross-compile computer vision pipelines from cloud- and x86-hosted platforms to run on SiMa.ai MLSoC silicon with minimal code development.
Any Application: Any full ML application is supported on the Yocto Linux platform running on the quad-core Arm processors.
10x:
The SiMa.ai Palette ML compiler targets the high-performance MLA on the SiMa.ai MLSoC to achieve 10x performance over typical compiled results on other platforms. The SiMa.ai ML compiler and our patented static-scheduling approach eliminate stalls, minimize data movement and caching, and improve the utilization of our ultra-dense, tiled machine learning architecture. This automated toolchain delivers high TOPS/watt, and our FPS/watt efficiency is 10x better than that of competing compiled solutions, which often resort to hand-coding models to silicon.
Push-button:
We designed our innovative software front-end to automatically partition and schedule your entire application across all of the MLSoC™ compute subsystems. For ML models, we created a suite of specialized and generalized optimization and scheduling algorithms for our back-end compiler. These algorithms automatically convert your ML network into highly optimized assembly code that runs on the Machine Learning Accelerator (MLA), with no manual intervention needed to achieve high performance.
Palette Software Functional Description
The SiMa.ai Palette software provides an integrated development environment for full-stack ML application development on a host PC, with easy cross-compilation to the Arm processor that hosts applications on the SiMa.ai MLSoC target silicon. This dramatically simplifies porting algorithms to the SiMa.ai MLSoC embedded platform: the developer can use the desktop as a convenient development platform, with all of the tools for full-stack ML development contained in a single Docker-hosted image. A push-button build cross-compiles the application into packages for the heterogeneous target processors of the SiMa.ai MLSoC silicon. These application packages are deployed, using the device-manager Command Line Interface (CLI), to the device, where they are unpacked, verified, installed, and launched. Device-manager commands also manage and control the debugging and logging of events on the SiMa.ai MLSoC for real-time monitoring from the host development platform. SiMa.ai Palette's deployment capabilities can support a large number of devices simultaneously, extending the developer's MLOps environment to deploy, execute, and gather statistics back from the edge device(s). The diagram depicts a simplified flow of the major components used to create, build, and deploy an ML application on the SiMa.ai MLSoC silicon platform.
Develop an ML Model
The SiMa.ai Palette ML Model Developer incorporates a parser, quantizer, and multi-mode compiler to generate executable code for the Machine Learning Accelerator (MLA). The parser, based on the open-source TVM, can receive neural networks defined in a wide variety of NN frameworks, providing the capability to support any ML network. The SiMa.ai ML Model development tool performs graph transformations to produce a network graph used for quantization and auto-partitioning. The resulting quantized graph is then cross-compiled with an advanced SiMa.ai proprietary compiler, which performs memory allocation, code generation, and scheduling to produce an executable for the SiMa.ai MLSoC. Layers auto-partitioned to the CPU are compiled using TVM's Arm compiler. A JSON file is generated specifying the sequence of MLA and Arm code execution used to compute the network.
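The auto-partitioning step described above can be sketched conceptually as follows. This is a simplified stand-in, not the ModelSDK implementation; the operator names and the supported-op set are hypothetical. Each layer is assigned to the MLA if its operator is supported there, otherwise to the CPU, and consecutive same-target layers are merged into contiguous segments that would become MLA code and TVM/Arm code respectively.

```python
# Conceptual sketch of layer-wise auto-partitioning between an ML
# accelerator and a CPU fallback. The operator names and the supported
# set below are illustrative only, not SiMa.ai's actual lists.
from itertools import groupby

MLA_SUPPORTED = {"conv2d", "relu", "maxpool", "dense"}  # hypothetical

def partition(layers):
    """Assign each layer to 'MLA' or 'CPU', then merge consecutive
    layers with the same target into contiguous segments."""
    tagged = [(name, "MLA" if name in MLA_SUPPORTED else "CPU")
              for name in layers]
    segments = []
    for target, group in groupby(tagged, key=lambda t: t[1]):
        segments.append((target, [name for name, _ in group]))
    return segments

net = ["conv2d", "relu", "maxpool", "nms", "dense"]
print(partition(net))
# → [('MLA', ['conv2d', 'relu', 'maxpool']), ('CPU', ['nms']), ('MLA', ['dense'])]
```

Grouping consecutive same-target layers matters because each segment boundary implies a data transfer between compute subsystems; fewer, larger segments mean less data movement.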
Develop an ML Enabled Computer Vision Pipeline
The second major component consists of the computer vision pipeline creation tools, which incorporate the user-compiled ML model(s) from the SiMa.ai ML Model Developer. SiMa.ai Palette 1.4 supports three sets of programming APIs and methodologies for pipeline creation and application development.
- Functional pipelines using Python scripting, which incorporate the pre- and post-processing functions around the compiled ML models using SiMa.ai Python APIs. Example pipelines using the SiMa.ai Python APIs are provided as a guide for developers to create their own performant ML pipelines as Python scripts running on the SiMa.ai MLSoC.
- GStreamer-optimized pipelines that leverage the example pipelines provided by SiMa.ai, library plug-ins that define the pre- and post-processing functions, and SiMa.ai-optimized ML models. Using a simple JSON file with a sequence of commands, or by editing a SiMa.ai example JSON file, the user defines the input data streams from PCIe, Ethernet, or other peripherals; the computer vision pre-processing functions; the ML model; and the post-processing and analytic application software, to create a GStreamer pipeline. Each pipeline element can be built with functional parameters for each plug-in defined. The developer can start from an existing pipeline in the SiMa.ai library, modify that pipeline and/or its parameters, and deploy and test it on the SiMa.ai MLSoC platform. The SiMa.ai Palette software then builds executable images using auto-code-generation tools for each of the embedded video and application processors in the SiMa.ai MLSoC, and deploys them to the silicon for evaluation and testing. This process can be iterated quickly to modify the pipeline and its components, or to tune the pipeline and its parameters, until the desired system requirements are achieved.
- Host-side C/C++ APIs or GStreamer plug-ins, which give the embedded developer a methodology to integrate the SiMa.ai MLSoC as a co-processor into existing applications. Co-processor mode enables developers to leverage the SiMa.ai MLSoC's heterogeneous compute to accelerate their existing deployments by offloading all or part of the application to the MLSoC via PCIe.
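As an illustration of the JSON-driven GStreamer flow described above, a pipeline definition might look roughly like the following. The element names, fields, and file names here are hypothetical placeholders, not the actual SiMa.ai schema; refer to the example JSON files shipped with Palette for the real format.

```json
{
  "pipeline": "people-detection",
  "source":   { "type": "rtsp", "uri": "rtsp://camera.local/stream" },
  "preproc":  { "plugin": "example-preproc", "resize": [640, 480], "format": "RGB" },
  "model":    { "plugin": "example-mla", "file": "detector.bin" },
  "postproc": { "plugin": "example-postproc", "threshold": 0.5 },
  "sink":     { "type": "udp", "host": "192.168.1.10", "port": 5000 }
}
```

Each top-level entry corresponds to one stage of the GStreamer pipeline, with per-plug-in parameters attached to the stage that uses them, matching the element-by-element parameterization the text describes.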
Deploy and run an ML Model on the MLSoC device
The third major component is the deployment and device-management tool. SiMa.ai Palette provides a deployment command-line capability to connect to the development board, configure and update it, and download the ML application pipeline executable files to the board. Using a secure link from the host development platform to the targeted SiMa.ai MLSoC device(s), users can issue commands and scripts to the device manager to download, unpack, and install the application pipelines, then execute, stop, and update the execution pipeline parameters. Additional command scripts enable the user to debug the software execution on the device and stream logs of the MLSoC code execution to the host platform. The secure connection is used to monitor execution and extract metrics, and it can also provide connectivity to a production host MLOps server or cloud solution that manages the edge SiMa.ai MLSoC device(s).
How does Palette Production Release help developers today?
With SiMa.ai Palette you get:
- Faster time to value. Understand the tool flow, features, and capabilities. Build, create, and deploy in minutes. Get your pipelines running quickly using Python scripting.
- Model Versatility. Tackle any sensor data set, any model, and any computer vision problem imaginable. Auto-partition and compile across the MLA and the quad-core Arm subsystem with integrated cache.
- Application Versatility. Integrate any C/C++ host application, library, or function using our C/C++ APIs to quickly bring the total solution into an integrated production environment.
- Simplicity. Automation is critical to ML development at the edge, eliminating the need for hand coding with push button ease.
- Performance. Exponential performance gains beat legacy solutions designed for the data center.
To learn more about, see a demonstration of, or evaluate our Palette software, please fill out the form and our SiMa.ai team will provide you with access.