

 AI Future

Our goal is to develop generalist AI systems that can learn, reason, and act meaningfully across all practical scenarios in an advanced civilization, operating across scales in space-time.

 




Aims

  • To understand intelligence.
  • To design intelligent machines.
  • To solve important problems.

Areas

Projects
Please visit the Projects page for the latest update.
AI Future Wordcloud

Scaling out: Agents as digital species

Instead of scaling up a single agent, we envision a future 'society of agents' representing a new kind of digital species. Each agent develops through lifewide experiences, interacting and collaborating with other agents and humans while evolving continuously with its dynamic environment. These agents are equipped with advanced cognitive capabilities: multimodal perception, episodic and semantic memory, statistical relational learning, theory of mind, common sense reasoning, and knowledge integration. Through their interactions, they develop collective intelligence and cultural evolution, adapting their behaviors and knowledge based on societal needs. This distributed approach enables more robust, adaptable, and aligned AI systems that can address complex challenges across different domains.

Reasoning: Augmented System 2 capability

We are concerned with learning the capability to deduce new knowledge from previously acquired knowledge in response to a query. Such behaviors can be demonstrated naturally using a symbolic system with a rich set of inferential tools, given that the symbols can be grounded in the sensory world. Deep learning contributes to the bottom-up learning of such a reasoning system by resolving the symbol grounding problem. Our research aims to build neural architectures that can learn to exhibit high-level reasoning functionalities, for example, answering new questions across space-time in a compositional and progressive fashion.
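As a purely illustrative sketch (not this group's system), the core idea of deducing new knowledge from previously acquired knowledge can be captured by forward chaining over simple Horn-style rules; the names and rule format below are hypothetical:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises, conclusion):
    whenever every premise is already a known fact, add the
    conclusion as a new fact, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts
```

For example, starting from the fact "rain" and rules "rain implies wet" and "wet implies slippery", the procedure derives "slippery" in two steps; the neural architectures we study aim to learn such compositional, multi-step behavior from grounded sensory data rather than hand-written symbols.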

Alignment: Truth, values, safety and civilization

The rapid advancement of AI raises critical challenges that pose significant risks to social stability, cultures and humanity if left unchecked. We aim to humanize machine learning algorithms to ensure AI systems act in alignment with human values and preferences. Examples of focus areas include: truth seeking, preference learning from human feedback, optimization techniques for value alignment and safety, preference-guided agent architectures, mechanisms for moral reasoning, transparency in decision-making, and robustness against misalignment. Our goal is to ensure AI development promotes advances in civilization while minimizing potential risks.
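To make one of these focus areas concrete, preference learning from human feedback is commonly framed as fitting a reward model to pairwise comparisons under a Bradley-Terry objective; the following minimal sketch (an assumption for illustration, not our implementation) shows that loss:

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for pairwise preferences.

    The model's probability that the human-chosen response outranks
    the rejected one is sigmoid(r_chosen - r_rejected), so the loss
    is mean(-log sigmoid(margin)) = mean(log(1 + exp(-margin)))."""
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    return float(np.mean(np.log1p(np.exp(-margin))))
```

When the reward model assigns equal scores to both responses the loss is log 2 (a coin flip); it falls toward zero as the model learns to score the preferred response higher, which is the signal used to align the downstream policy.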


Grants
  • Learning and reasoning on multisensor data ($850K), Australian DoD, 2022–2024.
  • Framework for verifying machine learning algorithms ($360K), ARC Discovery, 2021–2023.
  • Defence applied AI experiential CoLab ($1M), Australian DoD, 2020–2021.
  • Telstra centre of excellence in big data and machine learning ($1.6M), Telstra, 2016–2020.
  • Predicting hazardous software using deep learning ($100K), Samsung GRO, 2016–2017.
  • Studying and developing advanced machine learning based models for extracting chemical/drug-disease relations from biomedical literature ($54K), Vietnam NAFOSTED, 2017–2018.
  • Building a simulator of a mail sorting machine ($12K), PTIT VN, 2003.

Talks/Tutorials

Popular writings

Theses
  • Long Tran (PhD, with Dr Phuoc Nguyen), Causal inference, 2024-2027 (expected).
  • Thong Bach (PhD), AI alignment, 2024-2027 (expected).
  • Dat Ho (PhD, with Dr Shannon Ryan), PIML for breakup mechanics, 2024-2027 (expected).
  • Linh La (PhD, with Dr Sherif Abbas), Physics-informed ML for materials, 2024-2027 (expected).
  • Giang Do (PhD), Scaling LLMs, 2024-2027 (expected).
  • Quang-Hung Le (PhD, with Dr Thao Le), Toward instruction-following navigation, 2023-2026 (expected).
  • Minh-Khoa Le (PhD), Structured learning and reasoning, 2023-2026 (expected).
  • Minh-Thang Nguyen (PhD), Knowledge-guided machine learning, 2023-2026 (expected).
  • Tuyen Tran (PhD, with Dr Vuong Le & Dr Thao Le), Structural video understanding, 2022-2025 (expected).
  • Tien-Kha Pham (PhD, Deakin), Associative memory in neural networks, 2021-2024.
  • Hung Tran (PhD, Deakin, with Dr Vuong Le), Human behaviours understanding in video: Goals, dual-processes and commonsense, 2020-2024.
  • Hoang-Long Dang (PhD), Language-guided visual reasoning via deep neural networks, 2020-2023. Nominee of Deakin's Thesis Award 2024; Nominee for CORE Distinguished Dissertation Award 2023-2024.
  • Hoang-Anh Pham (MPhil), Video-grounded dialog: Models and applications, 2020-2023.
  • Duc Nguyen (PhD), Learning dependency structures through time using neural networks, 2019-2023.
  • Tri Nguyen (PhD, with Dr Thin Nguyen), Decoding the drug-target interaction mechanism using deep learning, 2019-2022. Nominee of Deakin's Thesis Award 2022.
  • Dung Nguyen (PhD), Towards social AI: Roles and theory of mind, 2019-2022. Nominee of Deakin's Thesis Award 2022.
  • Hoang Thanh-Tung (PhD), Toward generalizable deep generative models, 2017-2021.
  • Thao Minh Le (PhD), Deep neural networks for visual reasoning, 2018-2021, after 2.5 years. Winner of Deakin's Thesis Award 2021.
  • Romero de Morais (PhD, with Dr Vuong Le), Human behaviour understanding in computer vision, 2018-2021.
  • Hung Le (PhD), Memory and attention in deep learning, 2018-2020, after just 2 years! Winner of Deakin's Thesis Award 2020.
  • Kien Do (PhD), Novel deep architectures for representation learning, 2017-2020.
  • Trang Pham (PhD), Recurrent neural networks for structured data, 2016-2019.
  • Shivapratap Gopakumar (PhD), Machine learning in healthcare: An investigation into model stability, 2014-2017.
  • Tu Dinh Nguyen (PhD, with A/Prof Dinh Phung), Structured representation learning from complex data, 2012-2015.

Preprints

Publications