

AI Future

Our goal is to develop generalist AI systems that can learn, reason, and act meaningfully across all practical scenarios in an advanced civilization, operating across scales in space-time.

 

Aims

  • To understand intelligence.
  • To design intelligent machines.
  • To solve important problems.

Areas

Projects
Please visit the Projects page for the latest update.
[Figure: AI Future word cloud]

Scaling out: Agents as a digital species

Instead of scaling up a single agent, we envision a future 'society of agents' representing a new kind of digital species. Each agent develops through life-wide experiences, interacting and collaborating with other agents and humans while continuously co-evolving with its dynamic environment. These agents are equipped with advanced cognitive capabilities: multimodal perception, episodic and semantic memory, statistical relational learning, theory of mind, commonsense reasoning, and knowledge integration. Through their interactions, they develop collective intelligence and undergo cultural evolution, adapting their behaviors and knowledge to societal needs. This distributed approach enables more robust, adaptable, and aligned AI systems that can address complex challenges across domains.
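As a toy illustration of collective intelligence emerging from pairwise interaction, the sketch below diffuses knowledge through a small society of agents. All names (`Agent`, `fact-i`) are invented for the example; this is not our architecture, just a minimal picture of social knowledge sharing:

```python
import random

class Agent:
    """Toy agent with a local knowledge store that learns from peers."""
    def __init__(self, name, facts):
        self.name = name
        self.knowledge = set(facts)

    def share(self, other):
        # Exchange one fact in each direction, mimicking social learning.
        if self.knowledge:
            other.knowledge.add(random.choice(sorted(self.knowledge)))
        if other.knowledge:
            self.knowledge.add(random.choice(sorted(other.knowledge)))

def collective_knowledge(agents):
    """The society's knowledge is the union of its members' knowledge."""
    union = set()
    for a in agents:
        union |= a.knowledge
    return union

# A tiny society: each agent starts with one private fact, and
# knowledge diffuses through repeated random pairwise interactions.
agents = [Agent(f"a{i}", {f"fact-{i}"}) for i in range(4)]
for _ in range(20):
    a, b = random.sample(agents, 2)
    a.share(b)
```

Because facts are only ever added, the collective knowledge is preserved while individual agents gradually acquire more of it.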

Reasoning: Augmented System 2 capability

We are concerned with learning the capability to deduce new knowledge from previously acquired knowledge in response to a query. Such behaviors can be demonstrated naturally using a symbolic system with a rich set of inferential tools, given that the symbols can be grounded in the sensory world. Deep learning contributes to the bottom-up learning of such a reasoning system by resolving the symbol grounding problem. Our research aims to build neural architectures that can learn to exhibit high-level reasoning functionalities, for example, answering new questions across space-time in a compositional and progressive fashion.
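The core behavior, deducing new knowledge from previously acquired knowledge in response to a query, can be sketched with a minimal forward-chaining loop over Horn-style rules. The predicates and rules here are invented placeholders, not our neural architecture; they only illustrate the symbolic target behavior:

```python
def forward_chain(facts, rules):
    """Derive new facts by repeatedly applying Horn-style rules
    (premises, conclusion) until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Two-step (compositional) deduction from a single stored fact.
facts = {("human", "socrates")}
rules = [
    ([("human", "socrates")], ("mortal", "socrates")),
    ([("mortal", "socrates")], ("has_end", "socrates")),
]
derived = forward_chain(facts, rules)
```

A query such as "does socrates have an end?" is answered by checking membership in the derived set, which required chaining two rules.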

Alignment: Truth, values, safety and civilization

The rapid advancement of AI raises critical challenges that pose significant risks to social stability, cultures and humanity if left unchecked. We aim to humanize machine learning algorithms to ensure AI systems act in alignment with human values and preferences. Examples of focus areas include: truth seeking, preference learning from human feedback, optimization techniques for value alignment and safety, preference-guided agent architectures, mechanisms for moral reasoning, transparency in decision-making, and robustness against misalignment. Our goal is to ensure AI development promotes advances in civilization while minimizing potential risks.
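Preference learning from human feedback is commonly formalized with the Bradley-Terry model. The sketch below is a generic illustration, not our specific method; the learning rate, step count, and preference data are made up. It fits per-item utilities from pairwise comparisons by gradient ascent on the log-likelihood:

```python
import math

def bradley_terry_fit(prefs, n_items, lr=0.1, steps=500):
    """Fit per-item utilities from pairwise preferences (winner, loser)
    by gradient ascent on the Bradley-Terry log-likelihood."""
    u = [0.0] * n_items
    for _ in range(steps):
        grad = [0.0] * n_items
        for w, l in prefs:
            # P(winner beats loser) under the current utilities.
            p_w = 1.0 / (1.0 + math.exp(u[l] - u[w]))
            grad[w] += 1.0 - p_w
            grad[l] -= 1.0 - p_w
        u = [ui + lr * g for ui, g in zip(u, grad)]
    return u

# Humans preferred item 0 over item 1 in 8 of 10 comparisons.
prefs = [(0, 1)] * 8 + [(1, 0)] * 2
u = bradley_terry_fit(prefs, 2)
```

The fitted utilities can then guide an agent's choices, the simplest form of a preference-guided architecture.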


Grants
  • Learning and reasoning on multisensor data ($850K), Australian DoD, 2022–2024.
  • Framework for verifying machine learning algorithms ($360K), ARC Discovery, 2021–2023.
  • Defence applied AI experiential CoLab ($1M), Australian DoD, 2020–2021.
  • Telstra centre of excellence in big data and machine learning ($1.6M), Telstra, 2016–2020.
  • Predicting hazardous software using deep learning ($100K), Samsung GRO, 2016–2017.
  • Studying and developing advanced machine learning based models for extracting chemical/drug-disease relations from biomedical literature ($54K), Vietnam NAFOSTED, 2017–2018.
  • Building a simulator of a mail sorting machine ($12K), PTIT VN, 2003.

Talks/Tutorials

Popular writings

Theses

Preprints

Publications