The Pioneers Behind Deep Learning Have Won Computing’s Highest Honor
April 3, 2019 - 9 minute read

Known as the "Nobel Prize of computing," the Turing Award is the highest honor in computer science. It is bestowed upon individuals who make major technological contributions to the field.
The award is named after Alan Turing, the mathematician and computing pioneer who is credited as the key founder of artificial intelligence (AI). And it appears that things have come full circle—three researchers who laid the foundation for neural networks, the technology behind the current AI development boom, have won the 2018 Turing Award.
The Backbone of the AI Boom
It was the late 1980s, and a few AI researchers had become fascinated with a fringe idea: What if we could build software that mimicked how our brains’ neurons processed data? There was little evidence such a creation would work, but that did little to dissuade McGill University grad student Yoshua Bengio.
“I fell in love with the idea that we could both understand the principles of how the brain works and also construct AI,” explains Bengio. Fast forward more than 20 years later, and it looks like the whole world has fallen in love with the idea of neural networks too.
Forming the backbone of deep learning, a subset of machine learning, neural networks are behind many of the recent advances of the AI boom. Today, they play a vital role in image recognition, natural language processing, speech recognition, and virtually every type of AI you've encountered in your day-to-day.
Recently, Bengio, now a professor at the University of Montreal, was the recipient of the prestigious Turing Award alongside two other pioneers of deep learning: Geoff Hinton, a professor at the University of Toronto and researcher at Google, and Yann LeCun, an NYU professor and the Chief AI Scientist of Facebook. LeCun and Hinton are behind some of the papers that inspired Bengio to take a deep dive into the world of deep learning.
Recognition Long Overdue
Giving the Turing Award to these three pioneers is an act long overdue. Deep learning plays a central role in every tech company’s plans for the future. Whether we’re talking about how Google’s AI can read medical scans, how Facebook removes hate speech, or how Tesla’s Autopilot detects road markings, deep learning is there.
But before deep learning entered the spotlight and the strategies of these tech giants, it had to go through decades of research, skepticism, and refinement. And without Bengio, Hinton, and LeCun, deep learning's potential may never have been realized.
Pedro Domingos, a University of Washington professor and lead machine learning researcher at the hedge fund D. E. Shaw, thinks this award is just as much an acknowledgment of deep learning's impact as it is of the pioneers behind it: "This is not just a Turing Award for these particular people. It's recognition that machine learning has become a central field in computer science."
In a field like computer science, where the emphasis is placed on mathematically provable solutions, machine learning can be seen as messy. Often, this AI subset relies on statistics to find methods that perform well in practice, even when it isn't entirely clear why they work. Still, its impact was simply too big to be ignored. "Computer science is a form of engineering, and what really matters is whether you get results," says Domingos.
The Foundation for AI’s Future
With its roots going all the way back to the late 1950s, the concept of a neural network is actually one of the oldest AI approaches. Back then, AI researchers took rudimentary brain cell models made by neuroscientists and adapted them into a network of simple nodes that could learn how to filter and categorize data.
Breakthroughs like the Perceptron, a machine that could learn to distinguish between on-screen shapes, hinted at the approach's potential. But training large neural networks with many layers remained an elusive endeavor. This changed in 1986, when Hinton and his collaborators showed how it could be done through a learning algorithm known as back-propagation. Today, backprop plays a central role in deep learning, but it took a while to work through some obstacles.
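The core idea of back-propagation can be sketched in a few lines of plain Python: run an input forward through the layers, then propagate the output error backward to compute a weight update for every layer. Everything below is an illustrative toy, not the historical implementation; the network shape (2 inputs, 4 hidden units, 1 output), the XOR task, the learning rate, and the epoch count are all arbitrary choices for the sketch.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output; the third/last weight in each
# row acts as a bias term. All weights start as small random values.
HIDDEN = 4
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

# XOR: the classic task a single-layer Perceptron cannot learn.
DATA = [([0, 0], 0.0), ([0, 1], 1.0), ([1, 0], 1.0), ([1, 1], 0.0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(HIDDEN)) + w_out[-1])
    return h, y

def mean_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

def train_step(lr=0.5):
    for x, t in DATA:
        h, y = forward(x)
        # Backward pass: output delta, then hidden deltas via chain rule.
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(HIDDEN)]
        # Gradient-descent updates for both layers.
        for i in range(HIDDEN):
            w_out[i] -= lr * d_out * h[i]
            for j in range(2):
                w_hid[i][j] -= lr * d_hid[i] * x[j]
            w_hid[i][2] -= lr * d_hid[i]
        w_out[-1] -= lr * d_out

before = mean_error()
for _ in range(5000):
    train_step()
after = mean_error()
```

After training, the mean squared error drops well below its starting value: the error signal computed at the output has successfully tuned weights two layers deep, which is exactly the capability that single-layer methods lacked.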
This period of trials and tribulations made deep learning a rather unpopular research topic at the time. LeCun explains, “There was a blackout period between the mid-’90s and the mid-2000s where essentially nobody but a few crazy people like us were working on neural nets.” Still, Bengio, LeCun, and Hinton persisted.
Grit and Perseverance Pay Off
LeCun would eventually make the breakthrough of adapting neural network designs to images. He proved the validity of these so-called convolutional networks, or convnets, by creating ATM check-reading software. Meanwhile, Bengio pioneered new ways to apply deep learning to sequences such as speech and text.
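The operation at the heart of a convnet can be hinted at with a short sketch: slide a small filter across an image so the same pattern detector is reused at every position. The tiny "image" and the hand-picked vertical-edge kernel below are hypothetical illustrations; in a real convnet the kernel values are learned by back-propagation rather than written by hand.

```python
def convolve2d(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            # Dot product of the kernel with the image patch at (r, c).
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A toy image: dark on the left half, bright on the right half.
image = [[0, 0, 0, 1, 1, 1]] * 4

# A Sobel-style vertical-edge detector (hand-picked for illustration).
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

result = convolve2d(image, kernel)
```

The output responds strongly only where the dark-to-bright boundary falls under the filter, which is why one small set of shared weights can find a feature anywhere in the image.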
As the rest of the world started figuring out how graphics processors could bolster the abilities of neural networks, deep learning began to grow in popularity. But it wasn't until 2012 that the field's potential was realized in an undeniable way, when Hinton and two grad students won an annual contest in which software must identify the objects in photos.
Their method was leagues ahead of the competition: their software sorted more than 100,000 photos into 1,000 categories with 85 percent accuracy, a full 10 percentage points better than the next best entry. Hinton and his two colleagues went on to found DNNresearch, which Google acquired in 2013; Hinton still works for Google today. That same year, Facebook hired LeCun. Bengio chose to remain in academia, though he also advises Microsoft.
While the rest is history, Hinton explains that deep learning was a gamble until they could prove its potential: “You can look back on what happened and think science worked the way it’s meant to work, [but] until we could produce results that were clearly better than the current state of the art, people were very skeptical.”
Who Will Push the Limits of What’s Possible Next?
Although deep learning has given AI a much-needed boost in capabilities, there’s still much it can’t accomplish. While it can perform exceptionally at narrowly defined tasks, general AI that resembles the adaptability of the human brain remains out of reach.
Both LeCun and Hinton would like to move today's systems away from their dependence on extensive training data and human supervision. Bengio explains that, while AI can function as a translation tool, the technology still isn't capable of actually understanding language.
None of the three Turing Award winners is sure how these obstacles will be overcome. But they do have some advice for the next generation of pioneers: "They should not follow the trend—which right now is deep learning." The trio's story is a testament to the power of grit and determination, and proof that sometimes going against the grain is the right direction.
Who do you think will win next year’s Turing Award? What breakthroughs in AI will we soon see? And will it be through deep learning? Let us know your thoughts in the comments!