Natural Language Processing (NLP) is a field at the intersection of computer science, artificial intelligence, and linguistics. It is concerned with building systems that can process and understand human language. From its inception in the 1950s until very recently, NLP was primarily the domain of academia and research labs, requiring long formal education and training. The past decade's breakthroughs have resulted in NLP being used in domains as diverse as retail, healthcare, finance, law, marketing, human resources, and many more. With this rapidly growing usage, a larger and larger proportion of the workforce building these NLP systems is grappling with limited experience and theoretical knowledge.
Imagine a regular working day in the life of a hypothetical person, John. John wakes up and asks his voice assistant, "What is the weather today?" Depending on the answer, he decides how to dress for the day. John then asks how long his commute to work will take an hour from then, and gets an estimate based on the usual traffic at that time of day. He asks about his schedule for the day, and the assistant lists his planned events. He suddenly remembers that he planned to meet a friend for lunch, so he says, "Add a lunch meeting with Lance at noon." The event is then saved to his calendar. While John is an imaginary character, most of us have used smart assistants such as Alexa, Google Home, Siri, or Cortana on our devices to do similar things. How are we, as end users, interacting with such software? Yes, in our (human) language, and not a computer programming language.
NLP in the Real-World
NLP forms an important component in a wide range of tools and software that we use in our everyday lives. Some examples are listed below:
- In email systems such as Gmail, NLP is used to provide a range of features such as spam classification, priority inbox, calendar event extraction from emails, and so on.
- Voice-based assistants such as Apple's Siri, Google Assistant, Microsoft's Cortana, and Amazon's Alexa rely on a range of NLP techniques to interact with the user and to understand and respond to user commands appropriately.
- Modern search engines such as Google and Bing rely on NLP for search query understanding, question answering, information retrieval, grouping of results, etc.
- Machine translation software such as Google Translate, which is increasingly used in today's world for a wide range of scenarios, is a direct application of NLP.
- NLP is used by product companies to analyse social media sentiment about their products, along with a range of other analytics on social media data.
- NLP is widely used in e-commerce websites such as amazon.com for extracting information from product descriptions and user reviews
- NLP is used in a range of use cases in many other domains such as finance, law, and healthcare
- Some companies, such as arria.com, work on automatically generating reports for various domains, from weather forecasting to financial services, using NLP methods.
- NLP forms the backbone of the spelling and grammar check tools we use in software such as Microsoft Word and grammarly.com.
- NLP is used in a range of learning and assessment related technologies such as automated scoring of test taker essays in exams such as Graduate Record Examination (GRE), plagiarism detection (e.g., turnitin.com), intelligent tutoring systems, and language learning apps such as DuoLingo.
- NLP is used in building large knowledge bases such as the Google Knowledge Graph, which are useful in a range of applications such as search and question answering.
Challenges in NLP
What makes natural language processing a challenging problem domain? There are several characteristics of human language that make NLP a challenging area to work in.
One aspect that challenges NLP systems is the inherent ambiguity of language. Consider the sentence "Children make delicious snacks." Even a casual reading yields two meanings: children can prepare delicious snacks, or children themselves are delicious snacks. We can guess that the first is the intended meaning, but from a computational perspective, picking the right sense is challenging. The ambiguity here came from the word "make," but ambiguity can take other forms as well. Consider the sentence "Look at the man with one eye." Does this mean I should look at the man using my one eye, or that I should look at the man who has one eye? And these are still direct sentences, not ones using figurative language, idioms, etc. As an experiment, consider taking an off-the-shelf NLP system (say, Google Translate) and observing how such ambiguities affect (or don't affect) translation from English to another language!
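The second sentence above is a classic case of attachment ambiguity, and we can make it concrete with a few lines of code. The sketch below uses a tiny hand-written toy grammar (an assumption for illustration, not a real grammar of English) over the similar sentence "I saw the man with the telescope", and counts how many distinct parse trees a CKY-style chart recognizer finds for it:

```python
from collections import defaultdict

# Toy grammar in Chomsky normal form (assumed, for illustration only)
rules = [            # binary rules: parent -> left right
    ("S", "NP", "VP"),
    ("VP", "V", "NP"),
    ("VP", "VP", "PP"),   # "saw [the man] [with the telescope]"
    ("NP", "Det", "N"),
    ("NP", "NP", "PP"),   # "[the man [with the telescope]]"
    ("PP", "P", "NP"),
]
lexicon = {
    "I": {"NP"}, "saw": {"V"}, "the": {"Det"},
    "man": {"N"}, "telescope": {"N"}, "with": {"P"},
}

def count_parses(words, goal="S"):
    n = len(words)
    # chart[(i, j)] maps a nonterminal to the number of distinct
    # parse trees it can root over the span words[i:j]
    chart = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(words):
        for tag in lexicon[w]:
            chart[(i, i + 1)][tag] = 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for parent, left, right in rules:
                    lc = chart[(i, k)].get(left, 0)
                    rc = chart[(k, j)].get(right, 0)
                    if lc and rc:
                        chart[(i, j)][parent] += lc * rc
    return chart[(0, n)].get(goal, 0)

print(count_parses("I saw the man with the telescope".split()))  # prints 2
```

The two parses correspond to the two readings: the telescope as the instrument of seeing, or as something the man has. A real parser faces this choice in nearly every nontrivial sentence.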
Another aspect that makes NLP challenging is that encoding everything that is "common knowledge" to humans in a computational model is hard. For example, consider the two sentences "man bit dog" and "dog bit man." We all know the first is unlikely to happen. However, linguistically, it is hard for a computer to tell the difference between the two. The primary challenge in such a case is to achieve some form of computational representation that can say that the first sentence is absurd while the second is probable.
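One common proxy for such plausibility judgments is a statistical language model: sentences whose word sequences occur often in real text get higher scores. The minimal sketch below does this with bigram counts and add-one smoothing over a four-sentence toy corpus (the corpus is an assumption standing in for the very large text collections real systems use):

```python
from collections import Counter

# Tiny toy corpus (assumed) standing in for real-world text statistics
corpus = [
    "the dog bit the man",
    "the dog chased the man",
    "a dog bit a child",
    "the man fed the dog",
]
unigrams = Counter(w for s in corpus for w in s.split())
bigrams = Counter(p for s in corpus
                  for p in zip(s.split(), s.split()[1:]))

def plausibility(sentence):
    """Score a sentence as a product of smoothed bigram probabilities."""
    vocab = len(unigrams)
    words = sentence.split()
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        # add-one (Laplace) smoothing so unseen bigrams get a small,
        # nonzero probability instead of zeroing out the whole score
        score *= (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
    return score

print(plausibility("the dog bit the man") > plausibility("the man bit the dog"))
```

Because "dog bit" occurs in the toy corpus while "man bit" does not, the first sentence scores higher; the model has captured a sliver of "common knowledge" purely from frequency.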
Other issues that pose challenges to NLP systems are that language is creative, and that there are various styles, dialects, genres, and variations in language use. Additionally, there are thousands of languages in the world: even if we propose an NLP solution that works on all forms of a given language, there is no guarantee it works on another language. Due to all of these reasons, many researchers think that solving general NLP is one of the fundamental steps in getting us close to Artificial General Intelligence.
At the level of words and sentences, some of the basic NLP tasks include breaking a text into sentences (sentence segmentation), splitting a sentence into words (word tokenization), identifying the part-of-speech tags of words (POS tagging), identifying phrasal structures and syntactic relations between words in a sentence (parsing), and so on. Moving beyond individual sentences, NLP tasks involve understanding the context and meaning of the text. Let us take a look at some such tasks using the example text below.
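To give a feel for the first two tasks, here is a deliberately naive sketch of sentence segmentation and word tokenization using regular expressions. It works on simple input but fails on abbreviations like "Dr." or contractions, which is exactly why production tools use far more elaborate rules or learned models:

```python
import re

def naive_sent_segment(text):
    # Naive rule: a sentence ends at '.', '!', or '?' followed by whitespace.
    # This wrongly splits after abbreviations such as "Dr." -- a known limitation.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def naive_word_tokenize(sentence):
    # A token is a run of letters/digits, or a single punctuation mark.
    return re.findall(r"\w+|[^\w\s]", sentence)

print(naive_sent_segment("NLP is fun. It is also hard!"))
print(naive_word_tokenize("It is also hard!"))
```

Running this prints the two sentences as separate strings, and the second sentence split into five tokens with the "!" on its own.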
Is Lincoln Way something related to President Lincoln? Is the word "booth" in the passage referring to a stall in a market, or a seating area in a restaurant? Such questions are answered using the NLP tasks of word sense disambiguation (for ordinary words such as "booth") and named entity disambiguation (for words or phrases referring to named entities such as person, location, or organization names).
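A classic baseline for word sense disambiguation is the (simplified) Lesk approach: pick the sense whose dictionary gloss shares the most words with the surrounding context. The sense inventory and glosses below are hand-written assumptions for illustration; real systems use resources like WordNet:

```python
# Toy sense inventory (assumed); real systems draw glosses from WordNet
senses = {
    "booth": {
        "market_stall": "a small stall or stand where goods are sold at a market or fair",
        "restaurant_seat": "an enclosed seating area with a table in a restaurant or diner",
    }
}

def disambiguate(word, context, inventory):
    """Simplified Lesk: choose the sense whose gloss overlaps the context most."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in inventory[word].items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(disambiguate("booth", "we sat in a booth at the diner and ordered food", senses))
```

Here the word "diner" in the context overlaps with the restaurant gloss, so the restaurant sense wins; with a market-related context, the stall sense would be chosen instead.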
Who does "she" in the first sentence of the passage refer to? When "she" says "I" in the first sentence, does she mean herself literally? What is the "home country" in the last sentence? These kinds of questions are addressed by the NLP task of coreference resolution. What is the relationship between "Chinese Homestyle cooking" and Tina? Answering such questions falls under the NLP task of relation extraction.
There are several other NLP tasks not illustrated by this piece of text. For example, language modeling is the task of predicting the next word or character, given the sequence of words or characters that came before it. Specialized focus on NLP as a separate field of research started around the 1940s and '50s, especially in the period after the Second World War. The first few decades saw the development of algorithms for logic-based language understanding and the creation of elaborate computational grammars of human language. Towards the 1990s, research focused more on probabilistic and statistical models of language phenomena. In the past two decades, much of NLP research has been dominated by machine learning-based approaches, and more recently by deep learning. Real-world products relying heavily on NLP started to arrive in the past decade or so, and have seen a rapid increase in the past few years.
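To make the language modeling task concrete, here is a minimal count-based sketch: it predicts the next word as the one that most often followed the current word in a tiny toy text. The text and the one-word context window are assumptions for illustration; real language models condition on much longer histories:

```python
from collections import Counter, defaultdict

# Toy training text (assumed); real models train on billions of words
text = "the cat sat on the mat the cat ate the fish"
words = text.split()

# For each word, count which words followed it
nxt = defaultdict(Counter)
for w1, w2 in zip(words, words[1:]):
    nxt[w1][w2] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the training text."""
    return nxt[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it followed "the" most often
```

Even this toy model captures the core idea: the task is framed entirely as "given what came before, what comes next?", which is also how modern neural language models are trained.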
An NLP Walkthrough: Conversational Agents
Voice-based conversational agents such as Amazon Alexa and Apple Siri are among the most ubiquitous applications of NLP. This figure shows the typical interaction model of such software.
Speech recognition and speech synthesis are the two steps that take care of the interaction between the system and the user. While speech recognition converts user input to text that can be processed by the system, speech synthesis conveys the system's response back to the user. Developing a speech recognition system or a speech synthesizer is beyond the scope of NLP; we would recommend using cloud APIs for this when needed, or developing in-house solutions if you have the right form of data and expertise. Let us see where NLP comes into the picture in this application, apart from these steps.
Post speech recognition, the identified word sequence is sent to a natural language understanding (NLU) component, which involves analysing the user query, interpreting it, and asking clarifying questions if needed. A range of NLP tasks we saw earlier, such as entity recognition/disambiguation and coreference resolution, are useful at this stage. For example, once the system has identified the content of the user's response, it has to understand the user's intent, i.e., whether they are asking a factual question, like the weather today, or giving a command, like playing a particular music playlist.
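As a minimal illustration of intent identification, the sketch below scores an utterance against hand-picked keyword sets. The intent names and keywords are assumptions for illustration; real assistants use trained classifiers over large labeled datasets rather than keyword matching:

```python
# Hypothetical intents and trigger keywords (assumed, for illustration)
INTENT_KEYWORDS = {
    "get_weather": {"weather", "temperature", "rain", "sunny"},
    "play_music": {"play", "song", "music", "playlist"},
    "set_reminder": {"remind", "reminder", "schedule", "meeting"},
}

def classify_intent(utterance):
    """Pick the intent whose keyword set overlaps the utterance most."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    # If nothing matched at all, the system should ask a clarifying question
    return best if scores[best] > 0 else "unknown"

print(classify_intent("what is the weather today"))  # prints "get_weather"
```

The "unknown" fallback corresponds to the clarification step mentioned above: when no intent matches, a real assistant asks the user to rephrase.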
Once the query is processed, the next step is to generate a response to the user based on the information gathered for it. While the response can be a direct result from a search engine, it may also involve stringing words together and creating sentences on its own. NLP plays a key role in such cases.
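The simplest form of such response generation is template filling: the system picks a canned sentence pattern for the intent and fills in the values it looked up. The template and slot names below are assumptions for illustration; production systems combine templates with learned generation models:

```python
# Hypothetical response templates keyed by intent (assumed)
TEMPLATES = {
    "get_weather": "It is currently {temp} degrees and {condition} in {city}.",
    "play_music": "Now playing {playlist}.",
}

def generate_response(intent, slots):
    """Fill the intent's template with the slot values gathered earlier."""
    return TEMPLATES[intent].format(**slots)

print(generate_response("get_weather",
                        {"temp": 18, "condition": "cloudy", "city": "Berlin"}))
```

This prints a complete natural-language sentence from structured data, which is why even this simple strategy counts as (rudimentary) natural language generation.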
Machine Learning, Deep Learning and NLP
In our own experience, developing NLP applications from scratch using deep learning is not yet common outside of large organizations with dedicated R&D teams for NLP and machine learning. Outside such organizations, off-the-shelf tools and pre-trained models from deep learning toolsets are often used instead. This is primarily due to the heavy computing requirements of deep learning methods, which often yield only small improvements for real-world applications. In NLP, supervised learning is the most commonly seen form of machine learning. Consequently, most machine learning methods used in NLP rely on a large collection of manually annotated data with an input-output mapping. Deep learning methods are even more data hungry than other machine learning methods. However, real-world NLP applications often have to be developed without such large datasets.
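To illustrate what "manually annotated data with an input-output mapping" looks like in practice, here is a from-scratch naive Bayes text classifier trained on a four-example toy dataset. The texts and labels are assumptions for illustration; real systems train the same kind of model on thousands of labeled examples:

```python
import math
from collections import Counter, defaultdict

# Toy annotated dataset (assumed): each input text is paired with an output label
train = [
    ("free money win prize now", "spam"),
    ("win a free prize today", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch meeting at noon", "ham"),
]

class NaiveBayes:
    def fit(self, data):
        self.label_counts = Counter(label for _, label in data)
        self.word_counts = defaultdict(Counter)
        for text, label in data:
            self.word_counts[label].update(text.split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best_label, best_logp = None, float("-inf")
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + sum of log likelihoods with add-one smoothing
            logp = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.split():
                logp += math.log((self.word_counts[label][w] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label

clf = NaiveBayes().fit(train)
print(clf.predict("free prize"))  # prints "spam"
```

Everything the model knows comes from the annotated pairs, which is the point of the paragraph above: the quality and quantity of labels, not the algorithm, is usually the bottleneck in real-world NLP projects.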