
Facebook’s chief AI scientist: Deep learning may need a new programming language

Deep learning may need a new programming language that’s more flexible and easier to work with than Python, Facebook AI Research director Yann LeCun said today. It’s not yet clear that such a language is necessary, and the idea runs up against deeply entrenched preferences among researchers and engineers, he said.
LeCun has worked with neural networks since the 1980s.
“There are several projects at Google, Facebook, and other places to kind of design such a compiled language that can be efficient for deep learning, but it’s not clear at all that the community will follow, because people just want to use Python,” LeCun said in a phone call with VentureBeat.
“The question now is, is that a valid approach?”
Python is currently the most popular language used by developers working on machine learning projects, according to GitHub’s recent Octoverse report, and the language forms the basis for Facebook’s PyTorch and Google’s TensorFlow frameworks.
LeCun presented a paper exploring trends in deep learning hardware and spoke before companies making next-generation computer chips at the IEEE’s International Solid-State Circuits Conference (ISSCC) today in San Francisco.
The first portion of the paper is devoted to lessons LeCun took away from Bell Labs, including his observation that AI researchers’ and computer scientists’ imaginations tend to be tied to their hardware and software tools.
Artificial intelligence is more than 50 years old, but its current rise has been closely linked to the growth in compute power provided by computer chips and other hardware.
A virtuous cycle in which better hardware enables better algorithms, better algorithms deliver better performance, and better performance draws more people to build better hardware is only a few years old, said LeCun, who worked at Bell Labs in the 1980s and developed convolutional neural networks (ConvNets, or CNNs) that were used to read zip codes on postal envelopes and bank checks.
In the early 2000s, after leaving Bell Labs and joining New York University, LeCun worked with other luminaries in the space, like Yoshua Bengio and Geoffrey Hinton, conducting research to revive interest in neural networks and grow the popularity of deep learning.
In recent years, advances in hardware — like field-programmable gate arrays (FPGAs), tensor processing units (TPUs) from Google, and graphics processing units (GPUs) — have played a major role in the industry’s growth. Facebook is reportedly also working on its own semiconductor.
“The kind of hardware that’s available has a big influence on the kind of research that people do, and so the direction of AI in the next decade or so is going to be greatly influenced by what hardware becomes available,” he said. “It’s very humbling for computer scientists, because we like to think in the abstract that we’re not bound by the limitation of our hardware, but in fact we are.”
LeCun highlighted a number of AI trends hardware makers should consider in the years ahead and made recommendations about the kinds of architecture needed in the near future, urging developers to take into account the growing size of deep learning systems.
He also spoke about the need for hardware designed specifically for deep learning, and for hardware that can handle a batch size of one rather than requiring multiple training samples to be batched together to run a neural network efficiently, as is standard today.
“If you run a single image, you’re not going to be able to exploit all the computation that’s available to you in a GPU. You’re going to waste resources, basically, so batching forces you to think about certain ways of training neural nets,” he said.
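To make the batching point concrete, here is a minimal PyTorch sketch (the network and tensor shapes are illustrative assumptions, not taken from LeCun’s paper). A single image leaves most of a large GPU idle, which is why frameworks conventionally stack many samples into one batch:

```python
import torch
import torch.nn as nn

# A small, hypothetical convolutional net, just to illustrate batching.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# "Batch of one": a single image, shape (1, 3, 224, 224).
# An input this small cannot occupy all of a big GPU's compute units.
single = torch.randn(1, 3, 224, 224)
out_single = net(single)   # shape (1, 10)

# The standard workaround: stack 64 samples into one tensor so the
# hardware's parallelism is actually used.
batch = torch.randn(64, 3, 224, 224)
out_batch = net(batch)     # shape (64, 10)
```

Hardware that stays efficient at a batch size of one, as LeCun suggests, would make the second pattern unnecessary.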
He also recommended dynamic networks, and hardware that can adjust to use only the neurons needed for a given task.
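A dynamic network in this sense is one whose computation changes from input to input. A toy PyTorch sketch of the idea (the gating scheme here is a hypothetical illustration of conditional computation, not a design from LeCun’s paper):

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Toy conditional computation: a cheap gate decides whether an
    input needs the expensive branch at all."""

    def __init__(self):
        super().__init__()
        self.gate = nn.Linear(32, 1)        # cheap router
        self.cheap = nn.Linear(32, 10)      # shallow path
        self.expensive = nn.Sequential(     # deep path
            nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)
        )

    def forward(self, x):
        # Data-dependent control flow: which layers run depends on x,
        # so the amount of work varies from one input to the next.
        if torch.sigmoid(self.gate(x)).mean() > 0.5:
            return self.expensive(x)
        return self.cheap(x)

y = DynamicNet()(torch.randn(1, 32))  # only one of the two paths runs
```

Skipping unneeded neurons saves compute, but this kind of irregular, input-dependent workload is exactly what today’s batch-oriented accelerators handle poorly.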
In the paper, LeCun reiterated his belief that self-supervised learning will play a major role in advancing state-of-the-art AI.
“If self-supervised learning eventually allows machines to learn vast amounts of background knowledge about how the world works through observation, one may hypothesize that some form of machine common sense could emerge,” LeCun wrote in the paper.
LeCun believes that deep learning systems of the future will largely be trained with self-supervised learning, and that new high-performance hardware will be needed to support it.
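Self-supervised learning draws its training signal from the data itself rather than from human-provided labels. A minimal PyTorch sketch of one common formulation, masked reconstruction (the architecture, masking ratio, and loss here are illustrative assumptions, not LeCun’s method):

```python
import torch
import torch.nn as nn

# Toy self-supervised objective: hide part of each input vector and
# train the model to reconstruct it from the visible part.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(256, 64)                    # unlabeled data
    mask = (torch.rand_like(x) > 0.25).float()  # keep ~75% of features
    recon = model(x * mask)                     # reconstruct from the rest
    # Loss only on the hidden entries: the "label" is the data itself,
    # so no human annotation is required.
    loss = ((recon - x) ** 2 * (1 - mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

No labels appear anywhere in the loop, which is what would let this style of training scale to the vast amounts of background knowledge LeCun describes.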
Last month, LeCun discussed the importance of self-supervised learning with VentureBeat as part of a story about predictions for AI in 2019. Hardware that can handle self-supervised learning will be important for Facebook, as well as for autonomous driving, robotics, and many other forms of technology.
Source: VentureBeat
