
AI Fundamentals


As an approach to general intelligence, we study new ways for differentiable learning to reason with minimal supervision, towards System 2 capability. Deep learning pursues these goals through compositional neural networks, iterative estimation, and differentiable programming. Our research program draws inspiration from cognitive neuroscience, fused with rigorous probabilistic inference. The ultimate long-term goal is to devise a unified cognitive architecture that guides learning and reasoning across scales in space and time.

The research program has three broad aims:

» To understand intelligence from computational and cognitive perspectives.
» To design intelligent machines that are competent, scalable and robust.
» To solve important data-rich problems across living, physical and digital domains.

Sub-area: New inductive biases

Success in machine learning depends critically on having good priors, or inductive biases. In deep learning, the strongest prior thus far has been neural architectures built on a small set of operators (signal filtering, convolution, recurrence, gating, memory and attention). We derive modular networks for regular data such as matrices and tensors, as well as newer data types such as graphs and relations. We draw our architectural inspiration from neuroscience, including the columnar structure of the neocortex for distributed processing, the thalamus for information routing, working memory for problem solving, and episodic memory for integrating information over time.

Figure: Column Networks, inspired by cortical columns, for multi-relational learning.
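
To make the idea of modular networks over graphs and relations concrete, here is a minimal PyTorch sketch of one relational message-passing layer, where each node state is updated from its own state plus mean-pooled neighbour states, with one weight matrix per relation type. This is a generic illustration under our own naming (RelationalLayer, toy dimensions), not the published Column Network.

```python
# A minimal sketch of relational message passing, not the published Column Network.
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    def __init__(self, dim, n_relations):
        super().__init__()
        self.self_map = nn.Linear(dim, dim)
        self.rel_maps = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_relations)])

    def forward(self, h, adjs):
        # h: (n_nodes, dim); adjs: one (n_nodes, n_nodes) adjacency matrix per relation type
        out = self.self_map(h)
        for adj, rel_map in zip(adjs, self.rel_maps):
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # avoid divide-by-zero
            out = out + rel_map(adj @ h / deg)                 # mean over neighbours per relation
        return torch.relu(out)

# Toy usage: 5 nodes, 8-dim states, 2 relation types.
h = torch.randn(5, 8)
adjs = [torch.randint(0, 2, (5, 5)).float() for _ in range(2)]
layer = RelationalLayer(8, 2)
print(layer(h, adjs).shape)  # torch.Size([5, 8])
```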

Sub-area: Memory-enabled AI

Deep neural networks excel at function approximation and pattern recognition, but fall short at manipulating complex, highly dependent systems, rapid contextualisation in new settings, retaining previously acquired skills, and holding long conversations. These limitations are possibly due to the lack of an explicit notion of memory. We design new kinds of memory with more robust handling of variability, less rote memorisation, and stored programs. The memory serves as a central component in a grand unified cognitive architecture that naturally supports learning, reasoning, rapid contextualisation and imagination.

Figure: Variational Memory Encoder-Decoder, applied to generating diverse and coherent dialog.
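
The core operation behind such memory-enabled models is content-based memory reading: a query attends over memory slots and returns a weighted blend of their contents. Below is a minimal PyTorch sketch of that generic operation; the function name and toy sizes are illustrative and this is not the specific Variational Memory Encoder-Decoder.

```python
# A minimal sketch of content-based (attention) memory reading.
import torch
import torch.nn.functional as F

def read_memory(query, memory):
    # query: (dim,); memory: (n_slots, dim)
    scores = memory @ query / memory.shape[1] ** 0.5   # similarity of the query to each slot
    weights = F.softmax(scores, dim=0)                 # soft addressing over slots
    return weights @ memory                            # blended read vector

memory = torch.randn(16, 32)   # 16 slots of 32-dim content
query = torch.randn(32)
read = read_memory(query, memory)
print(read.shape)  # torch.Size([32])
```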

Sub-area: Learning to reason

We are concerned with learning the capability to deduce new knowledge from previously acquired knowledge in response to a query. Such behaviour can be demonstrated naturally by a symbolic system with a rich set of inferential tools, provided that the symbols can be grounded in the sensory world. Deep learning contributes to the bottom-up learning of such a reasoning system by resolving the symbol grounding problem. Our research aims to build neural architectures that can learn to exhibit high-level reasoning functionalities, e.g., answering new questions over space-time in a compositional and progressive fashion.

Figure: A system for Video Question Answering that implements the dual-process theory of reasoning.
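
One simple way to realise compositional, progressive reasoning is to let a question-conditioned state repeatedly attend over per-frame video features and fold the gathered evidence back in. The PyTorch sketch below illustrates that generic pattern only; the class name, step count and dimensions are our own placeholders, not the dual-process system shown above.

```python
# A minimal sketch of iterative, question-conditioned attention over video frames.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeAttention(nn.Module):
    def __init__(self, dim, n_steps=3):
        super().__init__()
        self.n_steps = n_steps
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, question, frames):
        # question: (dim,); frames: (n_frames, dim)
        state = question
        for _ in range(self.n_steps):
            scores = frames @ state                       # relevance of each frame to the current state
            attended = F.softmax(scores, dim=0) @ frames  # evidence gathered at this reasoning step
            state = torch.tanh(self.update(torch.cat([state, attended])))
        return state  # reasoning state, to be fed to an answer decoder

reasoner = IterativeAttention(dim=64)
out = reasoner(torch.randn(64), torch.randn(20, 64))
print(out.shape)  # torch.Size([64])
```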

Sub-area: Abstraction and analogy

The capacity for extreme generalisation into completely new domains, a hallmark of human-level intelligence, is strongly connected to the ability to abstract away the complexity of the world and draw analogies between seemingly disconnected parts of it. Here we search for new computational foundations that support abstraction, including object discovery, relation discovery, functional programming, indirection, symbolic manipulation and the formulation of analogies.

Figure: A system that abstracts away visual details and focuses on relations between images via an indirection mechanism; it is capable of solving IQ problems.

Sub-area: Learning with fewer labels

Learning from only a few explicit labels is a hallmark of human intelligence. Leveraging unlabelled data, either from existing datasets or through self-exploration, will be critical to the next generation of AI. We investigate the following sub-areas.

» Representation learning: Learning starts with representing the latent factors in the data that are invariant to small changes and insensitive to noise.
» Generative models: The ability to model the high-dimensional world and to imagine the future is fundamental to AI. We investigate fundamental issues in deep generative models, including stability, generalisation and catastrophic forgetting in Generative Adversarial Networks, as well as disentanglement in Variational Auto-Encoders.
» Continual learning: We design new learning algorithms that adapt continually as new tasks are introduced, even when the task change is not explicitly marked.

Figure: A Boltzmann machine for recommender systems.
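
As a concrete anchor for the generative-models thread above, here is a minimal PyTorch sketch of the Variational Auto-Encoder objective: reconstruction plus a KL term that regularises the latent code, where up-weighting the KL term (beta greater than 1) is one common route to encouraging disentanglement. The function name and toy tensors are illustrative, not a specific model from our publications.

```python
# A minimal sketch of the VAE objective (negative ELBO) with a beta weight on the KL term.
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    # x, x_recon: (batch, features); mu, log_var: (batch, latent_dim)
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl

# Toy usage with random tensors standing in for encoder/decoder outputs.
x = torch.randn(8, 784)
loss = vae_loss(x, x + 0.1 * torch.randn_like(x), torch.randn(8, 16), torch.randn(8, 16))
print(loss.item())
```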

Sub-area: Social reinforcement learning

We leverage deep neural networks to enable multi-agent systems to perceive the world, act on it, interact with others, build theory of mind, imagine the future and receive feedback. Equipped with deep nets for perception, memory, statistical relational learning, and reasoning capabilities, we aim to bring multi-agent reinforcement learning to a new level.

Figure: The ToMAGA system, multi-agents equipped with social psychology.

Sub-area: Human-compatible learning

The rapid advancement of AI raises new ethical challenges which pose great risks to humanity if left unsolved. We aim to invent new machine learning algorithms that teach machines to be compatible with human preferences. We derive computational frameworks for instilling intrinsic human preferences into AI through preference learning, alignment optimisation, and preference-guided agent designs.
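
To illustrate the preference-learning ingredient, here is a minimal PyTorch sketch of pairwise preference learning: a reward model is trained so that options preferred by a human score higher than rejected alternatives, via a Bradley-Terry-style logistic loss. The model, feature sizes and function names are illustrative placeholders, not our specific framework.

```python
# A minimal sketch of learning a reward model from pairwise human preferences.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(preferred, rejected):
    # preferred, rejected: (batch, 32) feature vectors of the two compared options
    margin = reward_model(preferred) - reward_model(rejected)
    return -F.logsigmoid(margin).mean()  # push preferred options above rejected ones

# Toy training step on random features standing in for real comparisons.
loss = preference_loss(torch.randn(16, 32), torch.randn(16, 32))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```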


Ongoing Projects:

» Human behaviour understanding in video

This project aims at a deep understanding of human behaviours seen in video from fixed and moving cameras, in various indoor and outdoor contexts. We build new models of trajectories and social interactions, infer past trigger events, and predict actions and intentions.

Partners: iCetana

Figure: Detecting anomalies in video using skeleton trajectories (last row).

» Visual question answering and dialog

We study the cognitive capability of a system to answer novel natural-language questions about an image or a video. This is a powerful way to demonstrate reasoning capacity, as it involves linguistic processing, visual processing and high-level symbol manipulation. In visual dialog, we build systems that hold a natural multi-turn chat with a human about a visual object.

Figure: Answering questions about a video.

» AI for structural biology

This research aims at designing neural architectures for representing -omics and structured biological data. We map genotypes to phenotypes, answer genomic queries for a given sequence, predict protein-target interactions, estimate protein folding, design drugs, and learn to generate DNA/RNA and proteins. The long-term goals also include acquiring, organising and reasoning about established biological knowledge.

Partners: TBA.

» Exploring the molecular and materials space

We use deep learning to characterise the chemical space, replace expensive physical computation and experiments, predict molecular properties, molecule-molecule interactions and chemical reactions, and generate drug molecules given a set of desirable bioactivity properties. In materials design, we develop new tools for understanding the structure and characteristics of materials, searching for new alloys, and generating molecules and crystals.

Partners: Institute for Frontier Materials at Deakin, Japan Advanced Institute of Science and Technology.

Figure: Relational Dynamic Memory Network, a model for detecting interactions among molecules.
