Intel and Philips use Xeon chips to speed up AI medical scan analysis

The global artificial intelligence (AI) market is forecast to reach $200 billion by 2022, and if the current trend holds, health care will make up a significant portion of that market. That's no surprise, given AI's promise: it has the potential to reduce administrative costs, cut down on patient wait times, and help diagnose diseases. And today, Intel and Philips demonstrated two more applications: bone modeling and lung segmentation.
Philips Medical, Philips' medical supply and sensor division, published the results of recent machine learning tests performed on Intel's Xeon Scalable processors with Intel's OpenVINO computer vision toolkit. Researchers explored two use cases: one using X-rays of bones to model how bone structures change over time, and the other using CT scans of lungs for lung segmentation (i.e., identifying the boundaries of the lungs against surrounding tissue).
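For readers curious about what such a pipeline looks like in practice, here is a minimal sketch of CPU inference with OpenVINO's Python API. The model files, input shape, and preprocessing below are hypothetical placeholders, not Philips' actual models; the calls follow the current openvino.runtime API.

```python
# Minimal OpenVINO CPU-inference sketch; "lung_seg.xml"/"lung_seg.bin" and the
# 1x1x512x512 input shape are hypothetical placeholders, not Philips' real model.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("lung_seg.xml")                   # IR graph produced by model conversion
compiled = core.compile_model(model, device_name="CPU")   # target the Xeon CPU plugin

# Stand-in for a preprocessed CT slice; a real pipeline would feed normalized scan data.
image = np.random.rand(1, 1, 512, 512).astype(np.float32)

# Run a single inference and fetch the first output (e.g., a segmentation mask).
mask = compiled([image])[compiled.output(0)]
print(mask.shape)
```

The same pattern would apply to a bone-age model; only the converted model files and the input preprocessing would differ.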
They achieved a 188-times speed improvement for the bone-age-prediction model, which went from a baseline of 1.42 images per second to 267.1 images per second. The lung-segmentation model, meanwhile, saw a 38-times speed improvement, processing 71.7 images per second after optimizations, up from 1.9 images per second.
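Throughput figures like these are typically obtained by timing repeated inferences and dividing the image count by the elapsed time. A rough sketch of such a measurement, reusing the hypothetical compiled model and input from the snippet above, might look like this:

```python
import time

def measure_throughput(compiled_model, sample, n_iters=200):
    """Return images processed per second over n_iters single-image inferences."""
    output_port = compiled_model.output(0)

    # Warm-up run so one-time plugin/graph initialization is not counted.
    compiled_model([sample])[output_port]

    start = time.perf_counter()
    for _ in range(n_iters):
        compiled_model([sample])[output_port]
    elapsed = time.perf_counter() - start
    return n_iters / elapsed

# Hypothetical usage with the objects from the previous sketch:
# print(f"{measure_throughput(compiled, image):.1f} images/sec")
```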
“Intel Xeon Scalable processors appear to be the right solution for this type of AI workload,” said Vijayananda J., chief architect at Philips HealthSuite Insights. “Our customers can use their existing hardware to its maximum potential … while still aiming to achieve quality output resolution at exceptional speeds.”
Intel contends that its processors have a critical advantage over the powerful graphics cards popularly used to train and run machine learning models when it comes to computer vision: they can handle larger, more memory-intensive algorithms.
In a blog post in May, Intel claimed that its Xeon platform could outperform Nvidia's Volta-based Tesla V100 on inference tasks like machine translation. And it recently published a case study with drug maker Novartis showing that Xeon accelerated image analysis models for early drug discovery by more than 20 times.
One thing's clear: Intel is positioning its AI chip business for growth. In August, it announced that it had sold more than 220 million Xeon processors over the past 20 years, generating $130 billion in revenue. That's a far cry from the $200 billion the AI market is expected to be worth in 2022, but the company plans to close the gap aggressively, aiming to capture $20 billion of that market within the next four years.
It's certainly well-positioned to do so. The chipmaker's acquisition of Altera brought field-programmable gate arrays (integrated circuits that can be reconfigured after manufacturing) into its product lineup, and other recent purchases, namely Movidius and Nervana, bolstered its real-time processing portfolio. Of note, Nervana's neural network processor, which is expected to begin production in late 2019, can reportedly deliver up to 10 times the AI training performance of competing graphics cards.
Furthermore, Intel says its upcoming 14-nanometer Cascade Lake architecture will be 11 times better at image recognition than its previous-generation Skylake platform, and will also support a new AI-focused instruction set dubbed DL Boost.
"After 50 years, this is the biggest opportunity for the company," Navin Shenoy, executive vice president at Intel, said at the company's Data-Centric Innovation Summit this month. "We have 20 percent of this market today … Our strategy is to drive a new era of data center technology."
Source: VentureBeat
