When Heng Ji, an Amazon Scholar and professor of computer science at the University of Illinois, began attending the annual meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), about 15 years ago, the conference drew around 700 people.
“This year, we’ll probably reach something like 3,000 people,” says Ji, who is one of the conference’s senior area chairs for the research topic of information extraction. “When we were students, we usually just needed one chair for each area. This year, we have three senior area chairs for information extraction, and under us we have 18 area chairs. So it's just growing like crazy.”
That growth, of course, is a result of the deep-learning revolution and the role that the statistical approach to natural-language processing has played in recent artificial-intelligence research. Ji, however, looks back fondly on the conference’s prerevolutionary period.
“Methodology-wise, it's actually less heterogeneous, less diverse than before,” she says. “Machine-learning methods are a hammer, and now we have many nails. In the past, we didn't have a very good hammer, so we were busy inventing other tools.”
As a senior area chair, however, Ji has a good overview of the paper submissions on her research topic — information extraction — and in recent work, she sees a revival of the idea of symbolic semantics, which had fallen into neglect.
“The whole idea of deep neural networks is built on distributional semantics, which means you don't need rules or linguistic intuitions because you can simply count words, right?” she says. “So ‘apple’ and ‘orange’ are similar just because they appear in similar contexts. If I give you one billion documents, you can simply count the stats.”
Distributional semantics is the basis for most linguistic embeddings, or representations of words and strings of words as points in a multidimensional space, such that spatial relationships between points encode semantic relationships between texts. Pretrained, transformer-based embedding networks such as BERT are the basis for most recent advances in natural-language processing (NLP). They generally infer semantic relationships from words’ co-occurrence with other words.
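What that counting looks like in practice can be illustrated with a minimal sketch, shown below; the toy corpus and window size are invented for illustration, and real systems learn dense vectors from billions of documents rather than using raw counts:

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus; in practice the counts come from billions of documents.
corpus = [
    "i ate an apple and an orange for lunch",
    "the orange was sweeter than the apple",
    "apple released a new phone this week",
]

# Count each word's co-occurrences within a fixed-size context window.
WINDOW = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
            if j != i:
                cooc[word][tokens[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# "apple" and "orange" come out similar purely because they appear
# in similar contexts; no rules or linguistic intuitions are involved.
print(cosine(cooc["apple"], cooc["orange"]))
```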
Symbolic semantics
Symbolic semantics, by contrast, makes use of logical relationships between symbols, encoded either as rules (based on linguistic intuitions) or as syntactic relationships within sentences. There are four main ways that symbolic semantics has begun to return to information extraction, Ji says.
“The first idea is we directly change the input data,” she says. “When I say ‘John Smith’, I could refer to this John Smith or that John Smith. When I say ‘apple’, I can refer to the company or the fruit. The idea is, let's try to do disambiguation before we learn an embedding. So instead of just saying ‘Apple’, we say ‘Apple Incorporated’ to indicate that it's a company. We change the input data to make it more knowledge-aware.”
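A minimal sketch of this first idea might look like the following, with a hand-written mention table standing in for a real entity-linking model; the table entries are invented for illustration:

```python
import re

# Hypothetical mention-to-entity table; a real system would use a
# trained entity-linking model, not a hand-written dictionary.
ENTITY_TABLE = {
    "apple": "Apple Incorporated",                 # assume the company sense
    "john smith": "John Smith (the politician)",   # invented referent
}

def disambiguate(text: str) -> str:
    """Rewrite ambiguous mentions so the embedding model sees
    knowledge-aware input rather than raw surface strings."""
    for mention, entity in ENTITY_TABLE.items():
        text = re.sub(rf"\b{re.escape(mention)}\b", entity, text, flags=re.IGNORECASE)
    return text

print(disambiguate("Apple hired John Smith last year."))
# Apple Incorporated hired John Smith (the politician) last year.
```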
“The second idea is we keep the input data the same, but we try to convert the natural language into some sort of structure,” Ji continues. “For example, we can use semantic parsing to convert the input sentence into a graph structure. Then we can initialize each node with a traditional embedding but propagate the representation among the neighboring nodes of the graph.
“So, for example, ‘succeed’ can mean following after something, or it can mean being successful. If we only count co-occurrences, it's very hard to distinguish these two senses. But if we know whether the verb has an object, we can distinguish these meanings. If we can teach the model in advance, ‘This is the structure’, ‘this is the object’, then we can represent it better.”
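One way to picture this second idea is a single round of propagation over a parse graph: initialize each word with a stand-in distributional vector, then average each node with its syntactic neighbors, the most basic form of graph-based message passing. The toy parse and random vectors below are invented for illustration:

```python
import numpy as np

# Toy dependency graph for "the team will succeed"; the edges below are
# a hypothetical parse connecting each word to its syntactic neighbors.
words = ["the", "team", "will", "succeed"]
edges = [(0, 1), (1, 3), (2, 3)]  # det(the, team), subj(team, succeed), aux(will, succeed)

# Stand-in for pretrained distributional embeddings (4 words, dimension 8).
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(words), 8))

# Symmetric adjacency with self-loops, row-normalized, then one round of
# neighborhood averaging: each word's vector mixes in its parse neighbors.
adj = np.eye(len(words))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

emb_structured = adj @ emb

# The vector for "succeed" now reflects its syntactic context, e.g.
# whether it takes an object, which raw co-occurrence counts blur together.
print(emb_structured[words.index("succeed")])
```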
“The third idea,” she says, “is that we use distributional semantics to discover some new types or new clusters and then use symbolic semantics to name them. One big issue in many NLP tasks, especially information extraction, is that every time we define something that requires an ontology — these are the 10 types of events I want you to extract from news articles — we just annotate data for those 10 types. Then, when we want to add 10 new types of events, the old system we trained with the old training data becomes useless, because the deep-learning model is customized for those 10 types.
“The idea here is, let's forget about the classification paradigm. Let's try to discover clusters using embeddings, right? So if all these words look similar, we put them together into one cluster. And if we can look into the representative meaning of this cluster, we can then use symbolic semantics to come up with the naming.”
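A toy sketch of this third idea, using scikit-learn's k-means for the discovery step; the trigger words and stand-in vectors are invented, and a real system would map each cluster to a name in an ontology rather than just picking a representative word:

```python
import numpy as np
from sklearn.cluster import KMeans

# Trigger words for two hypothetical unseen event types, with fake
# embeddings: the first three vectors sit near one center, the last
# three near another, standing in for learned distributional vectors.
words = ["fire", "dismiss", "terminate", "hire", "recruit", "appoint"]
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.1, (3, 16)), rng.normal(3, 0.1, (3, 16))])

# Step 1: distributional semantics discovers the clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

# Step 2: symbolic semantics names each cluster, here by the member
# closest to the centroid; a real system might look that word up in
# an ontology or lexical resource to produce a proper type name.
for c in range(2):
    members = [w for w, l in zip(words, labels) if l == c]
    centroid = emb[labels == c].mean(axis=0)
    rep = min(members, key=lambda w: np.linalg.norm(emb[words.index(w)] - centroid))
    print(f"cluster {c}: {members} -> candidate type name: {rep!r}")
```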
“And then the fourth method,” Ji says, “is that we just let the embedding methods do their low-level work, and then we use the symbolic-semantic resources to do the final decoding. We use background knowledge or commonsense knowledge as a global constraint when we pick candidates from the distributional semantics.”
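This fourth idea can be sketched as constrained decoding over a neural model's candidates; the scores, the constraint table, and the extracted facts below are all invented for illustration:

```python
# Hypothetical label scores from a neural extractor for one event mention.
candidate_scores = {"ATTACK": 0.48, "MEETING": 0.45, "ELECTION": 0.07}

# Symbolic background knowledge used as a global constraint: each event
# type requires some supporting evidence in the sentence (invented rules).
required_evidence = {
    "ATTACK": {"weapon", "casualty"},
    "MEETING": set(),          # no extra evidence required
    "ELECTION": {"vote"},
}
sentence_facts = {"two leaders", "handshake"}  # facts found in the text

def decode(scores, facts):
    """Pick the highest-scoring candidate whose symbolic constraints
    are satisfied, so the final decision is also explainable."""
    valid = {
        label: score for label, score in scores.items()
        if not required_evidence[label] or required_evidence[label] & facts
    }
    return max(valid, key=valid.get)

# The neural scores slightly prefer ATTACK, but the constraints rule it
# out (no weapon or casualty mentioned), so MEETING is decoded instead.
print(decode(candidate_scores, sentence_facts))  # MEETING
```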
“I think this is a very promising direction, because all the resources we have prepared over the past two or three decades will not be dumped,” Ji says. “We can still leverage them. And on the other hand, it also makes all the results more explainable.”
For more about Amazon’s involvement in this year’s NAACL — including publications, committee memberships, and participation in workshops and tutorials — please visit our NAACL 2021 conference page.