Natural Language Processing (NLP)


About natural language processing (NLP)

Natural language processing (NLP) is the ability of a computer program to understand human language as it is spoken. NLP is a component of artificial intelligence (AI). The development of NLP applications is challenging because computers traditionally require humans to "speak" to them in a programming language that is precise, unambiguous and highly structured, or through a limited number of clearly enunciated voice commands. Human speech, however, is not always precise -- it is often ambiguous, and its linguistic structure can depend on many complex variables, including slang, regional dialects, and social context.

NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.

NLP was formulated to build software that generates and comprehends natural languages, so that a user can hold natural conversations with his or her computer instead of communicating through programming or artificial languages like Java or C.

Why Is Language So Complex?

Percy Liang, a Stanford CS professor and NLP expert, breaks down the various approaches to NLP / NLU into four distinct categories:

1) Distributional
2) Frame-based
3) Model-theoretical
4) Interactive learning

You might appreciate a brief linguistics lesson before we continue on to define and describe those categories. There are three levels of linguistic analysis:

1) Syntax – what is grammatical?
2) Semantics – what is the meaning?
3) Pragmatics – what is the purpose or goal?

Drawing upon a programming analogy, Liang likens successful syntax to “no compiler errors”, semantics to “no implementation bugs”, and pragmatics to “implemented the right algorithm.”

He highlights that sentences can have the same semantics yet different syntax, such as “3+2” versus “2+3”. Similarly, they can have identical syntax yet different semantics: for example, 3/2 is interpreted differently in Python 2.7 than in Python 3.
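To make the point concrete, here is a tiny Python 3 snippet; the Python 2 behaviour is noted in comments, since the two interpreters cannot both run the same expression with the same meaning:

```python
# Same semantics, different syntax: "3 + 2" and "2 + 3" denote the same value.
print(3 + 2 == 2 + 3)   # True

# Same syntax, different semantics: "3 / 2" meant floor division in Python 2
# (result 1), but true division in Python 3 (result 1.5).
print(3 / 2)    # 1.5 under Python 3
print(3 // 2)   # 1 -- the old Python 2 behaviour, written explicitly in Python 3
```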

Ultimately, pragmatics is key, since language is created from the need to motivate an action in the world. If you implement a complex neural network to model a simple coin flip, you have excellent semantics but poor pragmatics since there are a plethora of easier and more efficient approaches to solving the same problem.

Plenty of other linguistics terms exist which demonstrate the complexity of language. Words take on different meanings when combined with other words, such as “light” versus “light bulb” (multi-word expressions), or when used in different sentences, such as “I stepped into the light” versus “the suitcase was light” (polysemy).

Hyponymy shows how a specific instance is related to a general term (i.e. a cat is a mammal) and meronymy denotes that one term is a part of another (i.e. a cat has a tail). Such relationships must be understood to perform the task of textual entailment, recognizing when one sentence is logically entailed by another. “You’re reading this article” entails the sentence “you can read”.
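As an illustrative sketch, the WordNet lexical database (accessed here through the NLTK library) records many of these relations explicitly; this assumes NLTK is installed and its WordNet data has been downloaded:

```python
# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

# Polysemy: "light" maps to many distinct senses (synsets).
print(len(wn.synsets("light")))            # dozens of noun, verb and adjective senses

# Hyponymy: "cat" sits under more general terms on its hypernym chain.
cat = wn.synset("cat.n.01")
print([h.name() for h in cat.hypernyms()]) # e.g. ['feline.n.01']

# Meronymy: parts recorded for a concept (coverage varies by word).
print(cat.part_meronyms())
```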

Aside from complex lexical relationships, your sentences also involve beliefs, conversational implicatures, and presuppositions. Liang provides excellent examples of each. Superman and Clark Kent are the same person, but Lois Lane believes Superman is a hero while Clark Kent is not. If you say “Where is the roast beef?” and your conversation partner replies “Well, the dog looks happy”, the conversational implicature is that the dog ate the roast beef. Presuppositions are background assumptions that remain true regardless of the truth value of a sentence. “I have stopped eating meat” carries the presupposition “I once ate meat”, and so does its negation, “I have not stopped eating meat.”

Adding to the complexity are vagueness, ambiguity, and uncertainty. Uncertainty is when you see a word you don’t know and must guess at the meaning. If you’re stalking a crush on Facebook and their relationship status says “It’s Complicated”, you already understand vagueness. Richard Socher, Chief Scientist at Salesforce, gave an excellent example of ambiguity at a recent AI conference: “The question ‘can I cut you?’ means very different things if I’m standing next to you in line or if I am holding a knife”.

Now that you’re more enlightened about the myriad challenges of language, let’s return to Liang’s four categories of approaches to semantic analysis in NLP / NLU.

 

Evolution of natural language processing

While natural language processing isn’t a new science, the technology is rapidly advancing thanks to an increased interest in human-to-machine communications, plus the availability of big data, powerful computing, and enhanced algorithms.

As a human, you may speak and write in English, Spanish or Chinese. But a computer’s native language – known as machine code or machine language – is largely incomprehensible to most people. At your device’s lowest levels, communication occurs not with words but through millions of zeros and ones that produce logical actions. 

Indeed, programmers used punch cards to communicate with the first computers 70 years ago. This manual and arduous process was understood by a relatively small number of people. Now you can say, “Alexa, I like this song,” and a device playing music in your home will lower the volume and reply, “OK. Rating saved,” in a human-like voice. Then it adapts its algorithm to play that song – and others like it – the next time you listen to that music station.

Let’s take a closer look at that interaction. Your device activated when it heard you speak, understood the unspoken intent in the comment, executed an action and provided feedback in a well-formed English sentence, all in the space of about five seconds. The complete interaction was made possible by NLP, along with other AI elements such as machine learning and deep learning.

Breaking down natural language processing (NLP)

Natural Language Processing (NLP) is one step in a larger mission for the technology sector – namely, to use artificial intelligence (AI) to simplify the way the world works. The digital world has proved to be a game-changer for a lot of companies as an increasingly technology-savvy population finds new ways of interacting online with each other and with companies. Social media has redefined the meaning of community; cryptocurrency has changed the digital payment norm; e-commerce has given the word convenience a new meaning; and cloud storage has introduced another level of data retention to the masses.

Through AI, fields like machine learning and deep learning are opening eyes to a world of possibilities. Machine learning is increasingly being used in data analytics to make sense of big data. It is also used to program chatbots to simulate human conversations with customers. However, these forward applications of machine learning wouldn't be possible without Natural Language Processing (NLP).

Uses of natural language processing

Most of the research being done on natural language processing revolves around search, especially enterprise search. This involves allowing users to query data sets in the form of a question that they might pose to another person. The machine interprets the important elements of the human language sentence, such as those that might correspond to specific features in a dataset, and returns an answer.
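A minimal sketch of the idea follows, using a made-up table and a single hard-coded question pattern; the column names, data and supported wording are all assumptions for illustration, not a real enterprise-search engine:

```python
# Toy "question -> dataset query" mapping over an invented sales table.
import re
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "revenue": [120, 95, 143, 88],
})

def answer(question: str) -> str:
    # Map the wording "highest/most ... revenue" to an argmax over the table.
    if re.search(r"\b(highest|most)\b.*\brevenue\b", question, re.I):
        row = sales.loc[sales["revenue"].idxmax()]
        return f"{row['region']} had the highest revenue ({row['revenue']})."
    return "This sketch only understands 'highest revenue' questions."

print(answer("Which region had the highest revenue?"))
```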

NLP can be used to interpret free text and make it analyzable. There is a tremendous amount of information stored in free text files, such as patients' medical records. Prior to deep learning-based NLP models, this information was inaccessible to computer-assisted analysis and could not be analyzed in any kind of systematic way. But NLP allows analysts to sift through massive troves of free text to find relevant information in the files.
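For example, an off-the-shelf named-entity recognizer such as spaCy's can pull dates, quantities and organisations out of a free-text note; the note below is invented, and the small English model must be downloaded separately:

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

note = ("Patient reports chest pain since 12 March 2021. "
        "Prescribed 75 mg aspirin daily at Mercy General Hospital.")

for ent in nlp(note).ents:
    # Each entity carries a label such as DATE, QUANTITY or ORG, turning the
    # free text into rows that a database or spreadsheet can hold.
    print(ent.text, ent.label_)
```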

Sentiment analysis is another primary use case for NLP. Using sentiment analysis, data scientists can assess comments on social media to see how their business's brand is performing, for example, or review notes from customer service teams to identify areas where people want the business to perform better.
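A quick sketch with NLTK's VADER sentiment scorer; the two comments are invented examples, and the lexicon has to be downloaded once:

```python
# Requires: pip install nltk, then nltk.download("vader_lexicon") once.
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

comments = [
    "Love the new update, support was super helpful!",
    "Checkout keeps crashing and nobody answers my emails.",
]
for text in comments:
    # The 'compound' score runs from -1 (very negative) to +1 (very positive).
    print(f"{sia.polarity_scores(text)['compound']:+.2f}  {text}")
```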

Google and other search engines base their machine translation technology on NLP deep learning models. This allows algorithms to read text on a webpage, interpret its meaning and translate it to another language.
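As a hedged sketch of the same idea, the Hugging Face transformers library wraps such translation models behind a one-line pipeline; t5-small is just one publicly available example model, and its weights are downloaded on first use:

```python
# Requires: pip install transformers torch sentencepiece
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Natural language processing bridges people and machines.")
print(result[0]["translation_text"])   # the German translation
```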

How natural language processing works

Current approaches to NLP are based on deep learning, a type of AI that examines and uses patterns in data to improve a program's understanding. Deep learning models require massive amounts of labeled data to train on and identify relevant correlations, and assembling this kind of big data set is one of the main hurdles to NLP currently.

Earlier approaches to NLP involved a more rules-based approach, where simpler machine learning algorithms were told what words and phrases to look for in text and given specific responses when those phrases appeared. But deep learning is a more flexible, intuitive approach in which algorithms learn to identify speakers' intent from many examples, much as a child learns human language.
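The contrast can be sketched in a few lines: a hand-written rule versus a classifier trained on labelled examples. The phrases, intent labels and scikit-learn setup below are illustrative assumptions, not a production system:

```python
# Toy contrast between a rules-based matcher and a learned intent classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Rules-based: look only for a hand-picked keyword.
def rule_intent(text: str) -> str:
    return "play_music" if "play" in text.lower() else "unknown"

# Learned: infer intent from labelled examples rather than fixed phrases.
examples = ["play some jazz", "put on a song", "what's the weather", "is it raining"]
labels   = ["play_music",     "play_music",    "weather",            "weather"]
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(examples, labels)

print(rule_intent("put on a song"))          # 'unknown' -- the rule misses it
print(model.predict(["put on a song"])[0])   # 'play_music' -- learned from examples
```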

NLP combines AI with computational linguistics and computer science to process human or natural languages and speech. The process can be broken down into three parts. The first task of NLP is to understand the natural language received by the computer. The computer uses a built-in statistical model to perform a speech recognition routine that converts the spoken language into machine-readable text. It does this by breaking the speech it hears into small units and comparing those units to units from speech it has encountered before. The output, in text format, is the sequence of words and sentences that were statistically most likely to have been said. This first task is called the speech-to-text process.
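In practice the speech-to-text step is usually delegated to an existing recognizer. The sketch below uses the SpeechRecognition package with a placeholder audio file ("command.wav" is an assumption) and sends the clip to Google's free web recognizer:

```python
# Requires: pip install SpeechRecognition; "command.wav" is a placeholder file
# and the Google Web Speech backend needs an internet connection.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("command.wav") as source:
    audio = recognizer.record(source)        # read the whole clip into memory

# Returns the statistically most likely transcription of the clip.
print(recognizer.recognize_google(audio))
```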

The next task is called part-of-speech (POS) tagging or word-category disambiguation. This process identifies words by their grammatical forms, such as nouns, verbs, adjectives and past tense, using a set of lexicon rules coded into the computer. After these two processes, the computer now has a working interpretation of the meaning of the speech.
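NLTK's off-the-shelf tagger shows what POS tagging produces; the tokenizer and tagger data must be downloaded once, and the example sentence is invented:

```python
# Requires: pip install nltk, plus a one-time download of the "punkt"
# tokenizer data and the averaged perceptron tagger data.
import nltk

tokens = nltk.word_tokenize("The happy dog chased the red ball")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('happy', 'JJ'), ('dog', 'NN'), ('chased', 'VBD'), ...]
```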

The third step is text-to-speech conversion. At this stage, the computer's internal representation is converted into an audible or textual format for the user. A financial news chatbot, for example, that is asked a question like “How is Google doing today?” will most likely scan online finance sites for Google stock, and may decide to select only information like price and volume as its reply.
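A minimal text-to-speech sketch using the pyttsx3 package, which drives the operating system's own speech engine; the quoted stock figures are made up, standing in for whatever answer the chatbot selected:

```python
# Requires: pip install pyttsx3 (uses the operating system's speech engine).
import pyttsx3

engine = pyttsx3.init()
# The figures in this reply are invented, standing in for the chatbot's answer.
engine.say("Alphabet closed at 1,250 dollars on volume of 1.2 million shares.")
engine.runAndWait()   # blocks until the sentence has been spoken aloud
```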

NLP attempts to make computers intelligent by making humans believe they are interacting with another human. The Turing test, proposed by Alan Turing in 1950, holds that a computer can be considered fully intelligent if it can hold a conversation like a human without the human knowing he or she is conversing with a machine. So far, only one program – a chatbot with the persona of a 13-year-old boy – has been widely claimed to pass the test. This is not to say that an intelligent machine is impossible to build, but it does outline the difficulties inherent in making a computer think or converse like a human. Since words can be used in different contexts, and machines don’t have the real-life experience that humans have for conveying and describing entities in words, it may take a little while longer before the world can completely do away with computer programming languages.

Importance of NLP

The advantage of natural language processing can be seen when considering the following two statements: "Cloud computing insurance should be part of every service level agreement" and "A good SLA ensures an easier night's sleep -- even in the cloud." If you use natural language processing for search, the program will recognize that cloud computing is an entity, that cloud is an abbreviated form of cloud computing and that SLA is an industry acronym for service level agreement.
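A minimal sketch of that kind of normalisation follows, using a hand-made lookup table; the table is an assumption for illustration, whereas a real system would learn these equivalences rather than hard-code them:

```python
# The mapping table is an assumption for illustration, not a full entity linker.
NORMALISE = {
    "sla": "service level agreement",
    "cloud": "cloud computing",
}

def normalise(text: str) -> str:
    words = text.lower().split()
    return " ".join(NORMALISE.get(word, word) for word in words)

print(normalise("A good SLA ensures an easier night's sleep"))
# -> "a good service level agreement ensures an easier night's sleep"
```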

Vague elements like these appear frequently in human language, and machine learning algorithms have historically been bad at interpreting them. Now, with improvements in deep learning and artificial intelligence, algorithms can interpret them effectively.

This has implications for the types of data that can be analyzed. More and more information is being created online every day, and a lot of it is natural human language. Until recently, businesses have been unable to analyze this data. But advances in NLP make it possible to analyze and learn from a greater range of data sources.

Large volumes of textual data

Natural language processing helps computers communicate with humans in their own language and scales other language-related tasks. For example, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment and determine which parts are important.

Today’s machines can analyze more language-based data than humans, without fatigue and in a consistent, unbiased way. Considering the staggering amount of unstructured data that’s generated every day, from medical records to social media, automation will be critical to fully analyze text and speech data efficiently.

Structuring a highly unstructured data source

Human language is astoundingly complex and diverse. We express ourselves in infinite ways, both verbally and in writing. Not only are there hundreds of languages and dialects, but within each language is a unique set of grammar and syntax rules, terms, and slang. When we write, we often misspell or abbreviate words, or omit punctuation. When we speak, we have regional accents, and we mumble, stutter and borrow terms from other languages. 

While supervised and unsupervised learning, and specifically deep learning, are now widely used for modeling human language, there’s also a need for syntactic and semantic understanding and domain expertise that are not necessarily present in these machine learning approaches. NLP is important because it helps resolve ambiguity in language and adds useful numeric structure to the data for many downstream applications, such as speech recognition or text analytics. 
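One common way to add that numeric structure is a bag-of-words or TF-IDF representation. A short scikit-learn sketch follows, reusing the two SLA sentences quoted earlier in this article:

```python
# Turn raw sentences into a numeric matrix for downstream analytics.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "Cloud computing insurance should be part of every service level agreement",
    "A good SLA ensures an easier night's sleep even in the cloud",
]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(docs)        # 2 documents x vocabulary-size
print(matrix.shape)
print(vectorizer.get_feature_names_out()[:5])  # part of the learned vocabulary
```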

 

Conclusion

Language is both logical and emotional. We use words to describe both math and poetry. Accommodating the wide range of our expressions in NLP and NLU applications may entail combining the approaches outlined above, ranging from the distributional / breadth-focused methods to model-based systems to interactive learning environments. We may also need to re-think our approaches entirely, using interactive human-computer based cooperative learning rather than researcher-driven models.

If you have a spare hour and a half, I highly recommend watching Percy Liang’s full talk, on which this summary article is based.

 
