
Deep X: Deep Learning with Deep Knowledge

88 Citations · 2018
Volker Tresp

This work focuses on deep knowledge in the form of deeply structured knowledge graphs, and introduces basic and advanced tensor models, which can be related to probabilistic graphical models, sum-product networks, and basis function models.

Abstract

In the first part I will cover recent work on learning with knowledge graphs. In the second part I will review tensor models and some applications of tensor models in machine learning.

1 Deep X: Deep Learning with Deep Knowledge

Deep X

In many applications, the full potential of deep learning can only unfold in combination with deep knowledge. Deep knowledge can mean that instances or entities are described by many dimensions; e.g., patients are described by their general health profiles in conjunction with extensive molecular profiles. Here we focus on deep knowledge in the form of deeply structured knowledge graphs.

Knowledge Graphs

The most prominent example is the Google Knowledge Graph [9], approaching 100 billion statements and describing world facts as triples, such as (Obama, exPresidentOf, US). Knowledge graphs are closely related to relational databases and graph databases, supplemented with type constraints and concept hierarchies. Relational databases are ubiquitous in industry in general, and graph databases are extensively used in communication and social networks. Knowledge graphs are considered easier to extend and to maintain than relational databases and are becoming increasingly popular in many industries. Knowledge graphs can be used for linking information sources, for querying, and in question answering. Different analytic functions can be realized, such as trend analysis, the visualization of views, and the calculation of statistics.

Machine Learning with Knowledge Graphs

Knowledge graphs can learn: relational machine learning can be used to derive triples that are not part of the knowledge graph, such as (Obama, gender, Male), (Obama, race, Caucasian) [8, 7]. (Well, 50% correct!) Furthermore, machine learning can derive priors for text and image understanding and thus support the automatic filling of knowledge graphs [3]. Finally, latent entity representations derived from machine learning can support other applications.
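As an illustration of relational machine learning on knowledge graphs, the following sketch scores triples with a DistMult-style bilinear model, one common choice for link prediction; it is not necessarily the model used in the cited works, and the entities, relations, and rank are invented for the example.

```python
import numpy as np

# Illustrative sketch (assumed model, not the paper's): each entity and
# relation gets a latent vector, and the plausibility of a triple
# (s, p, o) is score = sum_k E[s][k] * R[p][k] * E[o][k].
rng = np.random.default_rng(0)

entities = ["Obama", "US", "Merkel", "Germany"]
relations = ["exPresidentOf", "chancellorOf"]
rank = 4  # dimensionality of the latent representations

E = {e: rng.normal(size=rank) for e in entities}
R = {r: rng.normal(size=rank) for r in relations}

def score(s, p, o):
    """Plausibility score of the triple (s, p, o)."""
    return float(np.sum(E[s] * R[p] * E[o]))

# During training, scores of observed triples would be pushed above
# scores of corrupted (unobserved) triples; here the vectors are random.
print(score("Obama", "exPresidentOf", "US"))
```

Triples absent from the graph but scoring highly are candidate new facts, which is how link prediction "derives triples that are not part of the knowledge graph".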
Modelling Events

Events in time can be modelled by adding a time index to a triple. This concept is very useful in the development of medical decision support systems, where a semantic knowledge graph represents a patient's background (existing conditions, age, genetic profile, ...) and an episodic knowledge graph represents patient-specific events like treatments, outcomes, lab measurements, and administered medications [4, 13].

Perception: "You only see what you know"

Deep learning is currently the leading computational approach to image analysis. But perception is more: perception requires a decoding of sensory inputs in the context of an agent's understanding of the world. So the Goethe quote "Man sieht nur, was man weiß" might be quite appropriate! In [1], it was shown how regional convolutional neural networks (R-CNNs) can be combined with knowledge graphs, which describe prior knowledge about concepts and their dependencies, to map an image to a set of triples.

Cognitive Deep X

It has been argued that our conscious mind emerges from thousands of lower-level processes operating in parallel: "The human brain has a modular organization consisting of identifiable component processes that participate in the generation of a cognitive state." [5] We argue that some modules might adequately be modeled by deep neural networks, but for others, like memory functions, knowledge graphs and their tensor models might be more suitable [10, 11, 12].

2 Introduction to Tensor Models in Machine Learning

Tensor models have been popular in signal processing (e.g., chemometrics and psychometrics) to analyze 3-way data [6]. More recently they have been used in machine learning, e.g., to model knowledge graphs, and in physics, as tensor networks or tensor trains, to analyze partial differential equations.
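The event-modelling idea above, a triple extended with a time index, amounts to storing quadruples (subject, predicate, object, t). A minimal sketch, with an invented patient and invented event data:

```python
from collections import defaultdict

# Hypothetical sketch: an episodic knowledge graph as a store of
# quadruples (s, p, o, t), i.e. triples with a time index.
class EpisodicKG:
    def __init__(self):
        self.events = []                      # all (s, p, o, t)
        self.by_subject = defaultdict(list)   # subject -> [(p, o, t), ...]

    def add(self, s, p, o, t):
        self.events.append((s, p, o, t))
        self.by_subject[s].append((p, o, t))

    def history(self, s):
        """All events for one entity (e.g., a patient), ordered by time."""
        return sorted(self.by_subject[s], key=lambda event: event[2])

kg = EpisodicKG()
kg.add("patient42", "administered", "Metformin", t=3)
kg.add("patient42", "labMeasurement", "HbA1c=7.1", t=1)
print(kg.history("patient42"))
```

In a decision-support setting, the semantic (background) knowledge graph would be a separate, time-independent store queried alongside such an episodic history.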
I will introduce basic and advanced tensor models (canonical decomposition, Tucker, HOSVD, tensor networks, tensor trains) and show how they can be related to probabilistic graphical models, sum-product networks, and basis function models. I will show how they can be applied to learning inverse models in robotics [2] and to the classification of video sequences [14].
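For concreteness, the two most basic formats mentioned above can be written out directly. The sketch below constructs a 3-way tensor in canonical (CP) form and in Tucker form via einsum; the dimensions and ranks are arbitrary, and fitting (e.g., by alternating least squares) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 3

# Canonical (CP) decomposition: X[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))
X_cp = np.einsum("ir,jr,kr->ijk", A, B, C)

# Tucker decomposition: X[i,j,k] = sum_{p,q,r} G[p,q,r] * U[i,p] * V[j,q] * W[k,r]
P, Q, S = 2, 3, 2
G = rng.normal(size=(P, Q, S))   # core tensor
U = rng.normal(size=(I, P))
V = rng.normal(size=(J, Q))
W = rng.normal(size=(K, S))
X_tucker = np.einsum("pqr,ip,jq,kr->ijk", G, U, V, W)

print(X_cp.shape, X_tucker.shape)  # both (4, 5, 6)
```

CP is the special case of Tucker with a diagonal core, which is one way the "basic and advanced" models relate to each other.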