Is AI Development Moving Too Fast?
January 15, 2020 - 9 minute read

Is the development of artificial intelligence (AI) moving too fast? Just a decade ago, AI was little more than a laboratory curiosity. Today, it is an unparalleled economic force that can be found all around us. But sadly, so can many of the problems that AI brings. Bias, deepfakes, privacy violations, and cases of malicious use are now more prevalent than ever before.
Still, there may be a silver lining to the AI industry’s rapid-fire rate of innovation: it gives us an opportunity to observe this technology’s trajectory in real time and correct its course. That correction is imperative if humanity is to get what it really wants from AI. But the window for this change could be closing.
A Quick Review of AI in 2019
The potential of AI’s current capabilities took center stage in 2019. DeepMind and OpenAI created bots that can beat top esports professionals at their own games. Waymo finally deployed autonomous vehicles for actual paying customers in Arizona. And deep learning algorithms demonstrated that they can perform as well as physicians (if not better) at identifying diseases in X-ray scans and other medical images.
Unfortunately, many of AI’s issues were on full display in 2019 as well. Last year, Seattle-based e-commerce giant Amazon made headlines for all the wrong reasons with two of its AI projects. The first was an internal recruiting tool that quickly developed a bias against women. The second was facial recognition software that, according to MIT researchers, misidentified nearly 33% of dark-skinned female faces while achieving perfect marks for light-skinned males.
Luckily for Amazon, other debacles took the spotlight off the company. Emotion detection has become an AI buzzword in recent years, with companies like WeSee and HireVue claiming it can help screen and analyze job applicants. But a sweeping review published by the Association for Psychological Science concluded that the science simply does not support reading emotions from facial expressions.
Outside of visual applications, even niches like natural language processing (NLP) took a hit in 2019. OpenAI created GPT-2, an AI system capable of generating hundreds of words of cogent prose from just a short text prompt. After evaluating the risks, the organization initially withheld the full system, fearing it could be used for malicious endeavors such as propagating hate speech or fake news.
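To make that capability concrete, here is a minimal sketch of prompt-based generation with the publicly released GPT-2 weights, using the open-source Hugging Face transformers library rather than OpenAI’s original research code; the prompt and sampling settings are illustrative assumptions:

```python
# Minimal sketch: prompt-based text generation with the public GPT-2 weights.
# Assumes the Hugging Face "transformers" library (and PyTorch) is installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 100 tokens; top-k sampling keeps the output fairly coherent.
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A few lines of setup like these are exactly what worried OpenAI: once the weights are public, generating plausible text at scale is trivial.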
Essentially, 2019 made two things clear about AI: First, innovation in this field is accelerating at an unprecedented pace. Second, if we don’t address the issues at hand right now, they could spiral into something much worse. Realizing this can be frightening, since we’re all actively engaged with this technology in our day-to-day lives now. But, on the bright side, it also means we can actively steer the technology in the right direction.
AI’s Unique Evolution
In 2012, deep learning with neural networks entered a renaissance. Prior to that year, this subset of machine learning had languished in an academic rut for decades. But thanks to mavericks like University of Toronto professor Geoffrey Hinton, interest in deep learning was renewed.
Near the end of 2012, Hinton and his team used a deep neural network to set new records in the famous computer vision challenge known as ImageNet. Hinton would go on to win the 2018 Turing Award with colleagues Yoshua Bengio and Yann LeCun.
After the 2012 ImageNet results, companies like Google and Microsoft began racing to hire the world’s premier deep learning experts and dubbed themselves “AI-first” organizations. But it wasn’t only tech titans boarding the AI hype train. In a 2018 global survey, consulting firm McKinsey & Company found that more than 2,000 companies had either already incorporated AI into their operations or were running pilot programs with the technology.
The rate of AI’s integration into our lives cannot be overstated. Smartphones took 10 years to “eat the world,” as Benedict Evans, an analyst at Andreessen Horowitz, put it. The Internet took 20 years. It has taken AI only five years to go from lab obscurity to disruptive technology. Accounting firm PricewaterhouseCoopers estimates that AI contributed $2 trillion to global GDP in 2017 and 2018. “We’re certainly working on a compressed time cycle when it comes to the speed of AI evolution,” explains R. David Edelman, who served as a tech adviser to President Barack Obama before leading AI policy research at MIT.
This expedited timeline has only served to raise anxiety around the technology. Between 54% and 75% of the general public believe AI will increase social isolation, cause a “loss of human intellectual capabilities,” and ultimately hurt the poor while helping the wealthy, according to a 2019 survey conducted by consulting firm Edelman and the World Economic Forum. To make matters worse, 33% of survey respondents also believe that deepfakes could cause “an information war that, in turn, might lead to a shooting war.”
Course Correction Is Still Viable
While it undoubtedly raises some scary, anxiety-inducing concerns, humanity’s up-close look at (and daily use of) AI could end up being our saving grace.
Technological innovation typically follows an S-shaped curve: after a slow start, the tech gathers momentum as it rises in popularity, then growth tapers off as the technology becomes ubiquitous. For previous world-eating technologies like cars and smartphones, the big problems and consequences of innovation only surfaced during the later stages of this pattern.
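For the mathematically inclined, that S-shape is the classic logistic curve. The sketch below computes adoption over time with made-up parameters chosen purely for illustration, not fitted to any real adoption data:

```python
# Illustrative logistic ("S-shaped") adoption curve: slow start,
# explosive middle, saturating plateau. All parameters are made up.
import math

def adoption(t, ceiling=1.0, midpoint=10.0, steepness=0.6):
    """Fraction of the population that has adopted by year t."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

for year in range(0, 21, 4):
    print(f"year {year:2d}: {adoption(year):5.1%} adopted")
# Output crawls for the first few years, explodes through the middle,
# then flattens near 100% as the technology becomes ubiquitous.
```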
Automobiles are a case in point: after their mass adoption, society suffered through inordinate amounts of carnage as we struggled to put safety standards in place. Climate change accelerated. And public transportation took a nosedive. Similarly, 3 billion people were already using smartphones by the time psychologist Jean Twenge argued, in 2017, that they may be fueling anxiety and social media addiction.
But since AI is moving faster than these earlier examples, it’s now acting as a real-time feedback loop that we can use to correct its course. Can we make neural networks more interpretable so we can understand them better? Is there a methodology we can use to test deep learning systems for bias? “The fact that [AI is] changing rapidly brings a certain urgency to the asking of these questions,” says Nick Obradovich, a scientist who studies AI’s social challenges at the Max Planck Institute for Human Development.
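One common methodology for that second question is disaggregated evaluation: measuring a model’s error rate per demographic group rather than in aggregate, in the spirit of the facial recognition audits described earlier. The sketch below uses entirely hypothetical predictions and group labels:

```python
# Toy sketch of disaggregated evaluation: compare error rates per group
# instead of a single aggregate score. All data here is hypothetical.
from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """Return each group's misclassification rate."""
    errors, counts = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        counts[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# A respectable-looking aggregate accuracy (6 of 8 correct) can hide a
# large disparity between subgroups:
preds = ["m", "m", "f", "m", "f", "m", "m", "m"]
truth = ["m", "m", "f", "f", "f", "m", "f", "m"]
group = ["light", "light", "light", "dark", "dark", "dark", "dark", "light"]
print(error_rate_by_group(preds, truth, group))
# {'light': 0.0, 'dark': 0.5}: every error falls on one group
```

Audits in this spirit, such as MIT Media Lab’s Gender Shades project, are how the disparities in commercial facial recognition systems came to light.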
Work on answering these hard questions is already underway around the world. At Google’s 2019 I/O conference, the company showed off TCAV (Testing with Concept Activation Vectors), a technique that reveals which high-level concepts a deep learning model is relying on, acting as a sort of legitimacy tester for its decisions. And last May, 42 countries formally adopted new guidelines from the Organisation for Economic Co-operation and Development (OECD) that strive to “uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair, and trustworthy.”
These emerging changes could help us steer AI toward better, more well-behaved days. They could also inform how we respond to and regulate other new technologies like 5G, self-driving cars, and genetic engineering. As Edelman puts it, “This ideal—of codesigning systems and [social] policy—can be a repeatable formula for new innovations.”
AI certainly has some loose nuts and bolts. But its fast pace of progress is teaching us how to better handle the technologies of tomorrow.