DeepCube announced the launch of a new suite of products and services to help drive enterprise adoption of deep learning, at scale, on intelligent edge devices and in data centers.
The offerings build on DeepCube’s patented platform, which is the industry’s first software-based deep learning accelerator that drastically improves performance on any existing hardware.
Now, DeepCube will offer solutions for neural network training and inference, allowing users to leverage DeepCube’s technology to address challenges in their deep learning pipeline.
Additionally, a new service offering will make available DeepCube’s team of leading AI experts to support deep learning projects.
DeepCube’s new offerings include:
CubeIQ: the first fully automated training framework that can take a model and surgically eliminate unnecessary parameters to ensure that physical, real-world constraints and profiles are met. CubeIQ trains models with a significant reduction in size and with prior knowledge of the end target and environment. This leads to a drastic speed increase, a minimized compute footprint, and efficient edge deployment – all while maintaining accuracy.
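The idea of eliminating unnecessary parameters is commonly realized through weight pruning. DeepCube's actual method is proprietary; the sketch below only illustrates the general concept with simple magnitude-based pruning, where the smallest-magnitude weights of a layer are zeroed out. The function name and sparsity target are illustrative assumptions, not DeepCube's API.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights are removed (illustrative only;
    not DeepCube's actual algorithm)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to eliminate
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude; keep only larger weights.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
layer = rng.standard_normal((64, 64))   # stand-in for one layer's weights
pruned = magnitude_prune(layer, sparsity=0.9)
print(float(np.mean(pruned == 0)))      # close to 0.9
```

In practice, pruning like this is interleaved with retraining so the remaining weights compensate for those removed, which is how size reductions can be achieved while maintaining accuracy.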
CubeEngine: an inference engine designed to run next-generation deep learning models for optimal performance. CubeEngine is designed to accelerate CubeIQ generated models by dynamically assigning optimal kernels suitable for the specific hardware and model execution. CubeEngine is architected as a composable inference engine, unlike prior generation monolithic inference engines.
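"Dynamically assigning optimal kernels" generally means keeping a registry of kernel implementations and selecting, at run time, the one best suited to the current hardware and operation. CubeEngine's internals are not public; the sketch below is a minimal, hypothetical registry-and-dispatch pattern, with all names (`KERNELS`, `dispatch`, the hardware tags) invented for illustration.

```python
from typing import Callable, Dict, List, Tuple

# Registry mapping (operation, hardware target) to a kernel implementation.
KERNELS: Dict[Tuple[str, str], Callable] = {}

def register(op: str, hw: str):
    """Decorator that records a kernel in the registry."""
    def wrap(fn: Callable) -> Callable:
        KERNELS[(op, hw)] = fn
        return fn
    return wrap

@register("relu", "generic")
def relu_generic(xs: List[float]) -> List[float]:
    # Portable fallback implementation.
    return [max(0.0, x) for x in xs]

@register("relu", "simd")
def relu_simd(xs: List[float]) -> List[float]:
    # Stand-in for a hardware-specific (e.g. vectorized) kernel.
    return [x if x > 0.0 else 0.0 for x in xs]

def dispatch(op: str, hw: str) -> Callable:
    """Pick the kernel matching the hardware, falling back to generic."""
    return KERNELS.get((op, hw), KERNELS[(op, "generic")])

kernel = dispatch("relu", "simd")
print(kernel([-1.0, 2.0]))  # [0.0, 2.0]
```

A composable design like this, where kernels are independent, registered units rather than one monolithic code path, is what makes it possible to swap in hardware-specific implementations per operation.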
CubeAdvisor: an expert-level service that leverages DeepCube’s wide-ranging ML experience, with guidance from some of the world’s leading AI experts and PhDs. It helps customers design, optimize, and deploy deep learning models, ensuring that customers achieve the best-performing model that fits their strict cost, performance, power, and latency requirements.
To trial the new suite of products, DeepCube utilized 2nd Gen ..