Ceva-NeuPro-Nano Wins Product of the Year Award at EE Awards Asia Event

ROCKVILLE, MD, Dec 9, 2024 – Ceva, Inc. announced that the Ceva-NeuPro-Nano NPUs have been awarded the Best IP/Processor of the Year award at the prestigious EE Awards Asia event, recently hosted in Taipei.

The award-winning Ceva-NeuPro-Nano NPUs deliver the power, performance and cost efficiencies needed for semiconductor companies and OEMs to integrate Embedded AI models into their SoCs for consumer, industrial, and general-purpose AIoT products. Embedded AI models are artificial intelligence algorithms and systems that are integrated directly into hardware devices and run locally on the device rather than relying on cloud processing. By addressing the specific performance challenges of embedded AI, the Ceva-NeuPro-Nano NPUs aim to make AI ubiquitous, economical and practical for a wide range of use cases, spanning voice, vision, predictive maintenance, and health sensing in consumer and industrial IoT applications.

Iri Trashanski, chief strategy officer of Ceva, commented: “Winning Best IP/Processor of the Year from EE Awards Asia is a testament to the innovation and excellence of our NeuPro-Nano NPUs, which bring cost-effective AI processing to power-constrained devices. Connectivity, sensing and inference are the three key pillars shaping a smarter, more efficient future, and we are proud to lead the way with our unrivalled IP portfolio addressing these three use cases.”

The Ceva-NeuPro-Nano Embedded AI NPU architecture is fully programmable and efficiently executes neural networks, feature extraction, control code and DSP code, and supports the most advanced machine learning data types and operators, including native transformer computation, sparsity acceleration and fast quantization. This optimized, self-sufficient, single-core architecture enables Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, a smaller silicon footprint and optimal performance compared with the existing processor solutions used for embedded AI workloads, which combine a CPU or DSP with an AI accelerator. Furthermore, Ceva-NetSqueeze AI compression technology processes compressed model weights directly, without an intermediate decompression stage. This enables the Ceva-NeuPro-Nano NPUs to achieve up to 80% memory footprint reduction, solving a key bottleneck inhibiting the broad adoption of AIoT processors today.
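Ceva has not published how NetSqueeze encodes weights, but the benefit of skipping an intermediate decompression stage can be sketched generically. The C++ fragment below uses a hypothetical 4-bit codebook scheme (not Ceva's actual format) to contrast decompressing a whole weight matrix into RAM with decoding each weight on the fly inside a matrix-vector product, so a full-precision copy of the weights never has to exist in memory.

// Generic illustration only: a hypothetical 4-bit codebook compression scheme,
// NOT Ceva's NetSqueeze format.
#include <cstdint>
#include <vector>

// Compressed layer: each weight is a 4-bit index into a 16-entry codebook.
struct PackedLayer {
    std::vector<uint8_t> packed;   // two 4-bit indices per byte
    float codebook[16];
    int rows, cols;                // weight matrix dimensions
};

// Approach A: intermediate decompression stage.
// Needs rows * cols * sizeof(float) bytes of scratch RAM before compute can start.
std::vector<float> decompress_all(const PackedLayer& l) {
    std::vector<float> w(static_cast<size_t>(l.rows) * l.cols);
    for (size_t i = 0; i < w.size(); ++i) {
        uint8_t byte = l.packed[i / 2];
        uint8_t idx = (i % 2 == 0) ? (byte & 0x0F) : (byte >> 4);
        w[i] = l.codebook[idx];
    }
    return w;
}

// Approach B: decode each weight as it is consumed by the matrix-vector product,
// so only the packed buffer and the codebook ever reside in memory.
void matvec_on_the_fly(const PackedLayer& l, const float* x, float* y) {
    for (int r = 0; r < l.rows; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < l.cols; ++c) {
            size_t i = static_cast<size_t>(r) * l.cols + c;
            uint8_t byte = l.packed[i / 2];
            uint8_t idx = (i % 2 == 0) ? (byte & 0x0F) : (byte >> 4);
            acc += l.codebook[idx] * x[c];
        }
        y[r] = acc;
    }
}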

The NPUs are delivered with a complete AI SDK – Ceva-NeuPro Studio – which is a unified AI stack that delivers a common set of tools across the entire Ceva-NeuPro NPU family, supporting open AI frameworks including LiteRT for Microcontrollers (formerly TensorFlow Lite for Microcontrollers) and microTVM.
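As a rough illustration of the kind of workflow Ceva-NeuPro Studio plugs into, the sketch below runs a quantized model with LiteRT for Microcontrollers. The model array name, tensor arena size and operator list are placeholders that depend on the application, and exact headers and constructor arguments vary between releases.

// Minimal sketch of on-device inference with LiteRT for Microcontrollers
// (formerly TensorFlow Lite for Microcontrollers). g_model_data, the arena size
// and the registered ops are placeholders chosen for illustration.
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // flatbuffer model built offline

constexpr int kTensorArenaSize = 16 * 1024;  // working memory for activations
static uint8_t tensor_arena[kTensorArenaSize];

int run_inference(const int8_t* samples, int num_samples) {
    const tflite::Model* model = tflite::GetModel(g_model_data);

    // Register only the operators the model actually uses to keep code size small.
    static tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddFullyConnected();
    resolver.AddRelu();
    resolver.AddSoftmax();

    static tflite::MicroInterpreter interpreter(model, resolver,
                                                tensor_arena, kTensorArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

    // Copy quantized input samples into the model's input tensor.
    TfLiteTensor* input = interpreter.input(0);
    for (int i = 0; i < num_samples && i < static_cast<int>(input->bytes); ++i) {
        input->data.int8[i] = samples[i];
    }

    if (interpreter.Invoke() != kTfLiteOk) return -1;

    // Return the index of the highest-scoring class.
    TfLiteTensor* output = interpreter.output(0);
    int best = 0;
    for (size_t i = 1; i < output->bytes; ++i) {
        if (output->data.int8[i] > output->data.int8[best]) best = static_cast<int>(i);
    }
    return best;
}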

The EE Awards Asia celebrate the best products, companies, and individuals across the continent’s highly regarded electronics industry. Judging and selection are undertaken by a global panel of experts, who select the shortlist before the reader communities of EE Times and EDN in Taiwan and Asia cast their votes.

For more information, visit ceva-ip.com.


