Your Brief Guide to Natural Language Processing (Part 1)
December 9, 2019 - 7 minute read

In recent years, natural language processing (NLP) has become a part of our everyday lives. You probably interact with this sub-field of artificial intelligence (AI) development quite often without realizing it.
Smartphones now come equipped with NLP-powered voice assistants that interpret and understand human speech in order to provide relevant responses to user queries. NLP also helps translation apps break down communication barriers by analyzing input in one language and transforming it into another language. Even word processors rely on NLP to check the grammar, logic, and syntax of written input. And NLP is now an integral part of customer service; it’s used to guide people to the right representative through verbal commands.
We each encounter one of these examples regularly. Yet, few people actually understand how NLP plays a role in making them possible. In this special two-part series, we’ll explore what NLP is, its history, and how NLP works.
First Things First: What’s NLP?
NLP is the result of combining the work of multiple fields such as computer science, linguistics, and AI. Today, industry experts define it as a subdomain of machine learning that strives to give a computer the ability to comprehend, manipulate, and produce human language.
When you think about it, NLP’s explosive growth in popularity was a natural development. Humanity’s dependence on computational devices has surged over the past few decades. As a result, so has our need to communicate with these devices in a more universally-comprehensible manner. NLP closes the gap between humans and computers by making it easier for us to understand each other.
But what seems like a small divide now began as a colossal chasm. After all, the native languages of humans and computers couldn’t be more different. Whereas humans rely on words to communicate, computers utilize zeroes and ones to get the job done. Let’s now take a look at the long journey humanity took to simplify this back-and-forth translation between numbers and words.
How NLP Came to Be
The concept of NLP came about as early as the 1600s, thanks to Gottfried Wilhelm Leibniz and René Descartes. Back then, the two proposed the wild idea of creating codes to relate words between languages. But due to significant technological limitations, this idea could not come to fruition until centuries later.
In 1954, IBM and Georgetown University collaborated on a research project to create the foundation for modern machine translation (MT). Known as the Georgetown-IBM experiment, this endeavor was the first to automatically translate more than 60 Russian sentences into English. The Georgetown-IBM experiment’s achievement cannot be overstated; before it, MT seemed impossible. This initiative paved the way for more research into MT, eventually leading to the myriad of MT software that exists today.
After the Georgetown-IBM experiment, advancements in AI and MT helped keep NLP research alive. But it wasn’t until developments in linguistics, specifically those by Noam Chomsky regarding universal grammar, that things really began to take off again. Applying these standardized rules enabled MT systems to leverage a uniform understanding of language so that they could learn how to interpret speech and text.
For the next breakthrough, we must fast-forward to 1969. Roger Schank, a cognitive psychologist and AI theorist, was on a mission to teach computer systems how to understand logical inference. Essentially, he wanted to make the meaning of a phrase independent of its exact wording.
For example, the sentences “Billy gave Joe an apple.” and “Billy gave an apple to Joe.” both mean the same thing. For humans, the identical intent is clear, regardless of the difference in word arrangement. But for computers, this isn’t so clear. To solve this, Schank developed the conceptual dependency theory model for natural language understanding, a monumental step forward for NLP.
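To make the idea concrete, here’s a minimal Python sketch in the spirit of conceptual dependency (a toy illustration, not Schank’s actual system): both word orders reduce to the same conceptual structure, using his ATRANS primitive for a transfer of possession.

```python
# Toy illustration (not Schank's actual system): both word orders of the
# "give" sentence reduce to one conceptual structure, so a program that
# reasons over that structure treats them identically.

def conceptualize(sentence: str) -> dict:
    """Map 'X gave Y Z' or 'X gave Z to Y' to a single conceptual form."""
    words = sentence.rstrip(".").split()
    actor = words[0]
    if "to" in words:                      # "Billy gave an apple to Joe."
        recipient = words[-1]
        obj = " ".join(words[2:words.index("to")])
    else:                                  # "Billy gave Joe an apple."
        recipient = words[2]
        obj = " ".join(words[3:])
    return {"actor": actor,
            "action": "ATRANS",            # Schank's primitive for transfer of possession
            "object": obj,
            "recipient": recipient}

print(conceptualize("Billy gave Joe an apple."))
print(conceptualize("Billy gave an apple to Joe."))
# Both calls print the same structure:
# {'actor': 'Billy', 'action': 'ATRANS', 'object': 'an apple', 'recipient': 'Joe'}
```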
Advancements in Modern NLP
In 1970, famous NLP researcher Bill Woods introduced the world to augmented transition networks (ATNs). An extension of recursive transition networks (RTNs), ATNs can represent natural language sentences and analyze their structure, no matter how complex. Both RTNs and ATNs can represent the rules of context-free grammar, which is vital not only to NLP and lexical analysis but to programming languages as well.
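For a sense of what a context-free grammar looks like in practice, here’s a small sketch using NLTK’s grammar and chart parser (assuming the nltk package is installed; the grammar itself is a toy example, far simpler than what ATN systems handled):

```python
# A toy context-free grammar and parse, using NLTK (assumed installed).
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N
    VP -> V NP
    Det -> 'the' | 'an'
    N  -> 'boy' | 'apple'
    V  -> 'ate'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the boy ate an apple".split()):
    tree.pretty_print()   # draws the tree: (S (NP the boy) (VP ate (NP an apple)))
```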
After this, the rest of the 1970s largely consisted of programmers creating conceptual ontologies: formal, computer-usable representations of real-world categories and the relationships between them.
The 1980s were when machine learning algorithms for language processing finally became more common. Among the first algorithms in this category were decision trees, which produced hard-coded “if this, then that” rules similar to the complex handwritten rule sets used up to that point.
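As a rough illustration of that rule-like flavor, the sketch below trains a decision tree on two hand-engineered features to tag sentences as questions or statements (toy data, and it assumes scikit-learn is installed; the feature names are our own invention):

```python
# Toy sketch of the decision-tree idea: two hand-engineered features and a
# tiny labeled set (assumes scikit-learn is installed; data is illustrative).
from sklearn.tree import DecisionTreeClassifier, export_text

def features(sentence):
    first_word = sentence.split()[0].lower()
    return [int(sentence.strip().endswith("?")),
            int(first_word in {"who", "what", "where", "when", "why", "how"})]

sentences = ["What time is it?", "The cat sat on the mat.",
             "Where are you going?", "I am going home."]
labels = ["question", "statement", "question", "statement"]

clf = DecisionTreeClassifier().fit([features(s) for s in sentences], labels)

# The learned tree reads like an "if this, then that" rule.
print(export_text(clf, feature_names=["ends_with_qmark", "starts_with_wh"]))
print(clf.predict([features("How does this work?")]))   # -> ['question']
```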
Today, modern NLP systems mostly rely on statistical models to evaluate inputs because they are far more accurate than earlier rule-based methodologies. NLP research now largely focuses on supervised and unsupervised learning. Algorithms in these categories can learn from annotated data, non-annotated data, or even a combination of the two.
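And here is what the statistical, supervised approach looks like in miniature: a Naive Bayes model learning word-count statistics from a handful of annotated examples (again a toy sketch that assumes scikit-learn is installed, not a production pipeline):

```python
# Hedged sketch of a statistical, supervised NLP model: Naive Bayes over
# word counts, trained on a few annotated examples (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot and bad acting",
         "wonderful performance", "boring and awful"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["loved the acting"]))      # likely ['positive']
print(model.predict(["awful, boring movie"]))   # likely ['negative']
```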
NLP Today: A Pillar for Everyday Tech Applications
It took a ton of work, but NLP is now used in a variety of applications across modern society. As AI and technology in general continue to unlock more use cases, NLP will unsurprisingly continue to play an integral role in communication between machines and humans. Take a quick look around the Internet, and you’ll find no shortage of novel NLP ideas being built by startups and enterprises from San Francisco to Tokyo.
We hope you’ve enjoyed this primer on NLP and its history. Stay tuned for the final part of this series, where we’ll discuss the differences between rule-based and statistical modeling as well as how NLP actually works!