Knowledge Graphs @ ICLR 2020

  1. Neural Reasoning for Complex QA with KGs
  2. KG-augmented Language Models
  3. KG Embeddings: Temporal and Inductive Inference
  4. Entity Matching with GNNs
  5. Bonus: KGs in Text RPGs!
  6. Conclusions

Neural Reasoning for Complex QA with KGs

It’s great to see more research and more datasets on complex QA and reasoning tasks. Whereas last year brought a surge of multi-hop reading comprehension datasets (e.g., HotpotQA), this year’s ICLR features a strong line-up of papers dedicated to studying compositionality and logical complexity, and here KGs are of great help!

[Figure: Intuition behind the construction process of CFQ. Source: Google blog]
[Figure: DrKIT intuition. Source: Dhingra et al.]
[Figure: Recurrent Retrieval architecture. Source: Asai et al.]
[Figure: NeRd vs. other approaches. Source: Chen et al.]
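To make the multi-hop setting concrete, here is a minimal toy sketch of symbolic multi-hop traversal over a tiny KG — the mechanic that approaches like DrKIT relax into differentiable operations over entity sets. All entities, relations, and the `follow` helper are made up for illustration:

```python
from collections import defaultdict

# A tiny KG as (subject, relation, object) triples; all names are made up.
triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
    ("London", "located_in", "United Kingdom"),
]

# Index triples as (subject, relation) -> set of objects for fast hops.
index = defaultdict(set)
for s, r, o in triples:
    index[(s, r)].add(o)

def follow(entities: set, relation: str) -> set:
    """One hop: expand a set of entities along a relation."""
    out = set()
    for e in entities:
        out |= index[(e, relation)]
    return out

# "In which country was the director of Inception born?"
# compiles into a chain of three hops over the graph:
answer = follow(follow(follow({"Inception"}, "directed_by"), "born_in"), "located_in")
print(answer)  # {'United Kingdom'}
```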

KG-augmented Language Models

As discussed in previous posts, there is a clear trend toward infusing knowledge into large-scale language models. 📈

[Figure: Pre-training objective of WKLM. Source: Xiong et al.]
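As far as I understand the WKLM objective, training examples are built by replacing some entity mentions with other entities of the same type, and the LM learns to tell original mentions from replaced ones. A rough sketch of that data construction — the sentence, entity catalogue, and the `corrupt_mention` helper are all fabricated for illustration:

```python
import random

# Toy entity catalogue grouped by type; all names are illustrative.
entities_by_type = {
    "person": ["Marie Curie", "Albert Einstein", "Alan Turing"],
    "city": ["Paris", "Warsaw", "Vienna"],
}

def corrupt_mention(mention: str, etype: str, p_replace: float = 0.5):
    """Keep or replace a mention; label 1 = original, 0 = replaced."""
    if random.random() < p_replace:
        candidates = [e for e in entities_by_type[etype] if e != mention]
        return random.choice(candidates), 0
    return mention, 1

sentence = "{person} was born in {city}."
person, y_person = corrupt_mention("Marie Curie", "person")
city, y_city = corrupt_mention("Warsaw", "city")
print(sentence.format(person=person, city=city), (y_person, y_city))
# The LM is then trained, per mention, to predict whether it was replaced,
# which pushes it to memorize factual knowledge about entities.
```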

KG Embeddings: Temporal and Inductive Inference

Large-scale KGs like Wikidata are never static: the community updates thousands of facts every day, some facts become obsolete, and new facts require creating new entities.

[Figure: Time-aware ComplEx (TNTComplEx) scores. Source: Lacroix et al.]
[Figure. Source: Xu et al.]
[Figure: CompGCN intuition. Source: Vashishth et al.]
[Figure. Source: Tabacof and Costabello]
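On the temporal side, here is a minimal numpy sketch of a TComplEx-style scoring function (following Lacroix et al.): the standard ComplEx score where the relation embedding is modulated elementwise by a timestamp embedding. Dimensions and initialization are illustrative, and TNTComplEx additionally adds a non-temporal relation component on top of this:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # complex embedding dimension (illustrative)

def embed() -> np.ndarray:
    """A random complex embedding vector."""
    return rng.normal(size=d) + 1j * rng.normal(size=d)

e_s, e_o = embed(), embed()  # subject / object entity embeddings
w_r = embed()                # relation embedding
e_t = embed()                # timestamp embedding

def tcomplex_score(e_s, w_r, e_t, e_o) -> float:
    """Re(<e_s, w_r * e_t, conj(e_o)>) with elementwise complex products."""
    return float(np.real(np.sum(e_s * (w_r * e_t) * np.conj(e_o))))

print(tcomplex_score(e_s, w_r, e_t, e_o))
# Training pushes true (s, r, o, t) quadruples to score higher than
# corrupted ones, exactly as in the non-temporal ComplEx setting.
```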

Entity Matching with GNNs

We briefly discussed the problem of entity matching in the previous AAAI’20 post. When integrating different graphs (including KGs), you often have multiple representations of the same real-world entity described with different schemas. Entity matching helps to identify such similar entities: it used to be a tedious manual mapping-curation job, but recently more ML algorithms have appeared that leverage the graph structure and do not require writing rules. It’s great to see that ICLR 2020 moves the field even further!

[Figure: Deep Graph Matching Consensus intuition. Source: Fey et al.]
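As a toy illustration of the first stage of GNN-based matching (not DGMC itself, which adds a neighborhood-consensus refinement on top of initial correspondences), here is a sketch that embeds two graphs with a shared one-layer mean-aggregation encoder and reads candidate alignments off the cosine-similarity matrix. All data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(adj: np.ndarray, feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One linear mean-aggregation layer: h_v = W . mean(feats of v and its neighbours)."""
    deg = adj.sum(1, keepdims=True) + 1.0
    return ((adj @ feats + feats) / deg) @ w

def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
    return a @ b.T

# Graph 1: a 4-cycle with random node features.
adj1 = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
feats1 = rng.normal(size=(4, 5))

# Graph 2: the same graph under a random node permutation.
perm = rng.permutation(4)
adj2 = adj1[perm][:, perm]
feats2 = feats1[perm]

w = rng.normal(size=(5, 5))  # shared (untrained) encoder weights
sim = cosine(encode(adj1, feats1, w), encode(adj2, feats2, w))
print(sim.argmax(1))         # candidate match in graph 2 for each node of graph 1
print(np.argsort(perm))      # ground truth: the inverse permutation
```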

⚔️ Bonus: KGs in Text RPGs 🔮

Interactive Fiction games (like the text RPG Zork) are pure fun 😍, especially in the moments when you explore the world, type something weird, and wait for the game’s reaction. Ammanabrolu and Hausknecht present new work on Reinforcement Learning in IF games and employ KGs throughout to model the state space and user interactions.

[Figure. Source: Ammanabrolu and Hausknecht]
If there is ever a Knights of the Old Republic 3, it would be a blast to have a companion powered by a real language model trained with HK-47’s personality.
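As a toy illustration of the KG-as-state idea (the paper itself uses OpenIE-style extraction and an RL agent on top), here is a sketch that grows a triple store from game observations and grounds action templates against it. The game text, the `state` set, and the naive `observe` extractor are fabricated stand-ins:

```python
state = set()  # the agent's knowledge graph: (subject, relation, object) triples

def observe(text: str) -> None:
    """Naive pattern-based triple extraction from a game observation."""
    text = text.rstrip(".")
    if text.startswith("You are in "):
        state.add(("you", "in", text.replace("You are in ", "")))
    elif " is in " in text:
        s, o = text.split(" is in ")
        state.add((s, "in", o))

observe("You are in the Cellar.")
observe("a rusty lantern is in the Cellar.")
observe("a brass key is in the Cellar.")

# Ground an action template against the graph: which objects can we take here?
here = next(o for s, r, o in state if s == "you" and r == "in")
takeable = sorted(s for s, r, o in state if r == "in" and o == here and s != "you")
print([f"take {obj}" for obj in takeable])
# ['take a brass key', 'take a rusty lantern']
```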

Conclusions
