Amazon open-sources Neo-AI, a framework for optimizing AI models

At last year’s re:Invent 2018 conference in Las Vegas, Amazon took the wraps off SageMaker Neo, a feature that enabled developers to train machine learning models and deploy them virtually anywhere their hearts desired, either in the cloud or on-premises. It worked as advertised, but the benefits were necessarily limited to AWS customers — Neo was strictly a closed-source, proprietary affair. That changed this week.
Amazon yesterday announced that it’s publishing Neo’s underlying code under the Apache License 2.0 as Neo-AI and making it freely available in a repository on GitHub. This step, it says, will help usher in “new and independent innovations” on a “wide variety” of hardware platforms, from third-party processor vendors and device manufacturers to deep learning practitioners.
“Ordinarily, optimizing a machine learning model for multiple hardware platforms is difficult, because developers need to tune models manually for each platform’s hardware and software configuration,” Sukwon Kim, senior product manager for AWS Deep Learning, and Vin Sharma, engineering leader, wrote in a blog post. “This is especially challenging for edge devices, which tend to be constrained in compute power and storage … Neo-AI eliminates the time and effort needed to tune machine learning models for deployment on multiple platforms.”
Neo-AI plays nicely with a swath of machine learning frameworks, including Google’s TensorFlow, MXNet, Facebook’s PyTorch, ONNX, and XGBoost, in addition to ancillary platforms from Intel, Nvidia, and Arm. (Support for Xilinx, Cadence, and Qualcomm projects is forthcoming.) In addition to optimizing models to perform at “up to twice the speed” of the original with “no loss” in accuracy, it helpfully converts them into a common format, obviating the need to ensure that software on a given target device matches the model’s exact requirements.
So how does it do all that? With a custom machine learning compiler and runtime that Amazon says build on “decades” of research in traditional compiler technology and that incorporate the University of Washington’s TVM and Treelite projects. In the spirit of collaboration, the Seattle company says the Neo-AI project will be steered principally by contributions from Arm, Intel, Qualcomm, Xilinx, Cadence, and others.
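To make that concrete, here is a minimal sketch of the kind of compile step involved, written against the open-source TVM Python API that Neo-AI builds on. The ONNX file name, input name, and shapes are hypothetical placeholders, and Neo-AI’s own entry points may differ:

# A minimal sketch of compiling an ONNX model with TVM, the open-source
# compiler stack underpinning Neo-AI. The file "resnet50.onnx" and the
# input name "data" are hypothetical placeholders.
import onnx
import tvm
from tvm import relay

# Load the framework-agnostic ONNX model and describe its input shape.
onnx_model = onnx.load("resnet50.onnx")
shape_dict = {"data": (1, 3, 224, 224)}

# Convert the model into Relay, TVM's common intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a concrete target (here, a generic CPU via LLVM) with
# graph-level optimizations enabled.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Persist the compiled artifact for deployment on the target device.
lib.export_library("resnet50_compiled.so")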
Processor vendors can integrate custom code into the compiler to improve model performance, Amazon says, while device makers can customize the Neo-AI runtime for particular software and hardware configurations. The runtime has already been deployed on devices from ADLINK, Lenovo, Leopard Imaging, Panasonic, and others.
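On the device side, the Neo-AI runtime ships Python bindings through the neo-ai-dlr project. A hedged sketch of loading and running a compiled artifact might look like the following; the model directory and input name are assumptions carried over from the example above:

# A minimal sketch of inference with DLR, the Neo-AI runtime.
# The model directory and the input name "data" are placeholders.
import numpy as np
from dlr import DLRModel

# Load the compiled model artifacts for CPU execution.
model = DLRModel("./resnet50_compiled", dev_type="cpu")

# Prepare a dummy input matching the shape the model was compiled for.
x = np.random.rand(1, 3, 224, 224).astype("float32")

# Run inference; outputs come back as a list of NumPy arrays.
outputs = model.run({"data": x})
print(outputs[0].shape)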
“Intel’s vision of artificial intelligence is motivated by the opportunity for researchers, data scientists, developers, and organizations to obtain real value from advances in deep learning,” Naveen Rao, general manager of the Artificial Intelligence Products Group at Intel, said of the announcement. “To derive value from AI, we must ensure that deep learning models can be deployed just as easily in the datacenter and in the cloud as on devices at the edge. Intel is pleased to expand the initiative that it started with nGraph by contributing those efforts to Neo-AI. Using Neo, device makers and system vendors can get better performance for models developed in almost any framework on platforms based on all Intel compute platforms.”
Source: VentureBeat
