5 Things We Must Do to Make AI a Force for Good
March 15, 2021 - 8 minute read

Over the past decade, artificial intelligence (AI) applications have had an eventful journey, one that included autonomous cars striking pedestrians, discriminatory recruiting technology that favored men over women, and racially biased AI tools. And although these issues sparked intense debate over AI ethics, the fact of the matter is that talk is nothing without action. When the pandemic began, AI tools that had previously been seen as controversial, like HireVue's face-scanning algorithms for recruitment and interviews, began booming in popularity.
But even as smaller AI-enabled tools grew during this time, large companies like Google, Amazon, and IBM suspended their facial recognition work with law enforcement. And NeurIPS, a prestigious AI conference, required researchers to add an ethics statement to their submissions for the first time ever. The field of AI is fraught with extreme highs and even deeper lows, but we must keep working to better AI for the greater good. Here are five ways we can do that this year.
1. Encourage and Foster Diversity
Algorithms have been shown to closely follow their creators’ biases and beliefs. Knowing this, we need to expand what kinds of people work on AI algorithms. This requires a shift from straight white male developers to a more diverse team: one with different experiences, backgrounds, upbringings, values, and perspectives.
The good news is that the 2019 NeurIPS conference saw record numbers of women and minorities speaking and attending. As a result, there were more talks than ever about AI's influence on society. The bad news is that women are still not treated as equals in tech. Google's firing of Timnit Gebru, one of the few outspoken Black women in AI, showed us that no one is safe, especially when the conversation turns to ethics, change, and regulation.
Yet companies still show us through their actions that diversity, values, and differing opinions are not important to them. That's disheartening, because we desperately need AI built with a broader, more realistic perspective on how the world really is. A company that doesn't care about diversity, at the end of the day, doesn't care about limiting or avoiding bias in its algorithms.
2. Uplift Impacted Communities
Because the developers of an algorithm hold far more power over the system than the people it will ultimately affect, participatory machine learning has grown over the past year. The approach engages the people an algorithm will impact in its design, helping build more robust and less biased systems. It puts more power into the hands of the algorithm's subjects and reframes how AI is developed.
AI experts and enthusiasts have already begun collecting a wide variety of ideas about what participatory machine learning will look like and what it will entail. Proposals so far include governance structures for gathering community feedback, redesigning AI systems to give users more control, and opening model audits to public engagement. It's important that we keep advancing these discussions in 2021 while starting to take more concrete action toward making the ideas a reality.
3. Decrease Corporate Funding
AI is currently being advanced in large part by tech giants with billions of dollars to invest in research and development. The direction of the field has shifted toward big data and big models, an approach that is out of reach for most companies. Focusing so narrowly lessens the impact of AI advancement in other areas, boxes out smaller companies with less data, and creates friction between private and academic research and publishing standards.
If we want AI to grow in other niches, we need to reduce how much corporate money funds the research. One example of a change in the wrong direction is San Francisco-based OpenAI, an AI research lab that originally said it would rely on independent, wealthy donors. That plan proved unsustainable, and OpenAI signed an investment deal with Microsoft. If we want to reduce corporate dollars in AI, governments need to step up and invest tax dollars instead. More importantly, there must be rigorous oversight to ensure the AI being researched directly benefits taxpayers rather than corporations.
4. Shift Back to Common Sense
AI was initially intended to understand and perceive, not just figure out patterns in a set of data. Corporations have funded AI that directly and quickly benefits their needs, but we need to invest in AI that solves a variety of problems. Some experts are experimenting with AI that uses probability to infer information from a very small dataset, similar to how a human child learns from a handful of experiences.
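To make that idea concrete, here is a minimal sketch (our own illustration, not any specific lab's system) of probabilistic inference from a tiny dataset: a Beta-Bernoulli model that updates its belief after only a handful of observations, much as a child generalizes from a few experiences.

```python
# Minimal sketch: Bayesian inference from a handful of observations.
# This is an illustrative toy model, not any particular research system.

def update_belief(observations, prior_alpha=1.0, prior_beta=1.0):
    """Beta-Bernoulli update: return the posterior mean probability of
    'success' after seeing a short list of True/False observations."""
    successes = sum(observations)
    failures = len(observations) - successes
    alpha = prior_alpha + successes
    beta = prior_beta + failures
    return alpha / (alpha + beta)

# Five observations are enough to shift the model's belief meaningfully,
# whereas a typical deep learning model would need far more data.
print(update_belief([True, True, False, True, True]))  # ~0.71
```

The point of the toy example is the data budget: the model arrives at a usable, calibrated belief from five data points instead of five million.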
Other experts are excited about the idea of neurosymbolic AI, which combines deep learning with symbolic knowledge systems. One thing is for sure: a shift from prediction to comprehension would elevate AI algorithms in every niche. It would reduce bias, errors, and hacking, and it could save lives.
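As a rough illustration of how that combination can work (a hypothetical sketch, not a description of any published system), imagine a neural perception module that outputs probabilities for symbols, and a symbolic rule layer that reasons over them:

```python
# Illustrative neurosymbolic sketch: a (stubbed) neural perception step
# produces symbol probabilities, and hand-written symbolic rules reason
# over them. All names and thresholds here are hypothetical.

def neural_perception(image):
    """Stand-in for a neural network: returns probabilities for visual symbols."""
    # In a real system this would be a trained model's output for the image.
    return {"red_light": 0.92, "pedestrian": 0.10, "green_light": 0.03}

def symbolic_rules(symbols, threshold=0.5):
    """Apply explicit, human-readable rules to the detected symbols."""
    if symbols.get("pedestrian", 0) > threshold:
        return "stop"       # safety rule always wins
    if symbols.get("red_light", 0) > threshold:
        return "stop"
    if symbols.get("green_light", 0) > threshold:
        return "go"
    return "slow_down"      # default to caution when uncertain

decision = symbolic_rules(neural_perception(image=None))
print(decision)  # "stop"; the rule layer is auditable in a way raw weights are not
```

The appeal is that the symbolic layer can be inspected, corrected, and constrained directly, rather than hoping a purely statistical model has picked up the right behavior from its training data.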
5. Toughen Up Rules and Regulation
It's taken a lot of grassroots work to bring algorithmic harms to light and rally the public to hold large corporations accountable. But we need national and international regulations, and those regulations need guardrails strong enough to stop anyone from circumventing them. In the U.S., Congress is considering bills that regulate AI bias, facial recognition, and deepfake technology. Around the world, lawmakers have been paying close attention to AI's highs and lows, and many are drafting legislation in their own countries. This is a great start, but we need to see it through to the end.
AI in 2021
AI has given us many lessons and memories over the years. We hope that 2021 is the year we take those lessons to heart and see increased regulation, reduced corporate ties, common-sense algorithms, greater diversity, and real influence for the end users these systems affect.
Have you used any interesting AI software recently? What was your experience like? Let us know in the comments below!