

The Distributed Artificial Intelligence Research Lab, directed by Professor Anita Raja, is concerned with the design and development of intelligent single- and multi-agent systems. Lab members conduct research in distributed computing, convention formation, cascading risks, clinical informatics, monitoring and control of computation, complex networks, machine learning, resource-bounded reasoning, and reasoning under uncertainty.

Advanced Machine Learning for Clinical Informatics

The multifactorial complexity of clinical data complicates prediction and prevention of undesired outcomes. This project aims to investigate the value of more advanced machine learning methods by simultaneously considering all the factors, to develop better predictive and prevention methods.

2019 NIH/NLM-funded project information forthcoming.

To read the NSF-funded project abstract, click here.

Project page:

Emergence of Social Norms and Conventions in Multiagent Systems

In this project, we study the importance and challenges of establishing cooperation among self-interested agents in multiagent systems (MAS). The hypothesis of this work is that equipping agents in networked MAS with “network thinking” capabilities, and using this contextual knowledge to form social norms effectively and efficiently, improves the performance of the MAS. We investigate the social norm emergence problem from a game-theoretic perspective, for both conventional norms (where there is no conflict between individual and collective interests) and essential norms (where agents need to explicitly cooperate to achieve socially efficient behavior).

    • Visit the project website here.
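To illustrate the game-theoretic setting, the following is a minimal, self-contained sketch of convention emergence: agents on a ring network repeatedly play a two-action pure coordination game with a random neighbor and adapt via simple reinforcement. The network, payoffs, and learning parameters here are illustrative placeholders, not the project's actual models.

```python
import random

# Illustrative sketch: agents on a ring play a pure coordination game
# with a random neighbor; matching actions yields reward 1, else 0.
# Each agent updates an action-value estimate toward the observed payoff.
N, ACTIONS, ROUNDS = 20, ["A", "B"], 2000
random.seed(0)
q = [{a: 0.0 for a in ACTIONS} for _ in range(N)]

def choose(i, eps=0.1):
    """Epsilon-greedy action selection for agent i."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(q[i], key=q[i].get)

for _ in range(ROUNDS):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N   # a ring neighbor of i
    ai, aj = choose(i), choose(j)
    payoff = 1.0 if ai == aj else 0.0      # coordination game payoff
    for agent, act in ((i, ai), (j, aj)):
        q[agent][act] += 0.1 * (payoff - q[agent][act])

# The emergent "convention" is each agent's greedy action after learning.
convention = [max(q[i], key=q[i].get) for i in range(N)]
print(convention)
```

In conventional norms like this one, no agent has an incentive to deviate once neighbors agree; essential norms would additionally require a payoff structure where individual and collective interests conflict.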

Decision Making in Partially Observable Environments

We design, develop, and evaluate stochastic transition probability functions, cost-effectiveness analyses, and sensitivity analyses to support decision making in uncertain environments. We study this in the context of preventing undesired outcomes in clinical informatics.
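As a concrete illustration of pairing a stochastic transition function with a one-way sensitivity analysis, the sketch below propagates a small Markov model of patient states forward in time and sweeps one uncertain parameter. The states, probabilities, and costs are hypothetical placeholders, not clinical values from the project.

```python
# Illustrative sketch: a row-stochastic transition function over
# hypothetical patient states, with a one-way sensitivity analysis
# on the probability of deterioration.
STATES = ["stable", "deteriorating", "adverse"]
COST = {"stable": 0.0, "deteriorating": 1.0, "adverse": 5.0}  # placeholder costs

def transition(p_deteriorate):
    """Transition matrix parameterized by one uncertain probability."""
    return {
        "stable":        {"stable": 1 - p_deteriorate,
                          "deteriorating": p_deteriorate, "adverse": 0.0},
        "deteriorating": {"stable": 0.3, "deteriorating": 0.5, "adverse": 0.2},
        "adverse":       {"stable": 0.0, "deteriorating": 0.0, "adverse": 1.0},
    }

def expected_cost(p_deteriorate, horizon=10):
    """Expected cumulative cost starting from the 'stable' state."""
    dist = {"stable": 1.0, "deteriorating": 0.0, "adverse": 0.0}
    T = transition(p_deteriorate)
    total = 0.0
    for _ in range(horizon):
        total += sum(dist[s] * COST[s] for s in STATES)
        dist = {s2: sum(dist[s] * T[s][s2] for s in STATES) for s2 in STATES}
    return total

# One-way sensitivity analysis: sweep the uncertain parameter and
# observe how the expected cost responds.
for p in (0.05, 0.10, 0.20):
    print(p, round(expected_cost(p), 3))
```

A full cost-effectiveness analysis would compare such expected costs across interventions; the sweep above shows the sensitivity half of that workflow.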

Coordinating Meta-level Control Across Agent Boundaries

The fundamental question addressed in this work is how to determine and obtain the minimal overlapping context among decentralized decision makers required to make their decisions more consistent. Our approach is a two-phased learning process in which agents first learn their policies offline in a simplified environment that does not require detailed context information about neighbors. We evaluate our approach by addressing meta-level decisions in a complex multiagent weather tracking domain.

Visit the project website here.
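The offline phase of the two-phased process can be sketched with a minimal example: an agent learns a meta-level policy (how much to deliberate) by Q-learning in a simplified environment with no neighbor context, and the resulting fixed policy is what would then be used online. The states, actions, and reward function are hypothetical stand-ins, not the project's weather tracking domain.

```python
import random

# Illustrative sketch of phase 1 (offline learning): a single agent
# learns a meta-level policy in a simplified environment where reward
# depends only on its own local state, with no neighbor context.
random.seed(1)
STATES = ["low_load", "high_load"]
ACTIONS = ["deliberate", "act_quickly"]

def reward(state, action):
    """Placeholder meta-level reward: deliberate when load is low,
    act quickly when load is high."""
    if state == "high_load":
        return 1.0 if action == "act_quickly" else 0.0
    return 1.0 if action == "deliberate" else 0.0

# Tabular Q-learning over random state/action samples.
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(5000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)
    q[(s, a)] += 0.1 * (reward(s, a) - q[(s, a)])

# The greedy policy is fixed after offline learning and would drive
# the agent's meta-level choices in the online phase.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

The second phase, where agents exchange just enough context to make neighboring policies consistent, is the coordination problem the project studies and is not captured by this single-agent sketch.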
