

The Distributed Artificial Intelligence Research Lab, directed by Professor Anita Raja, is concerned with the design and development of intelligent single-agent and multi-agent systems. Lab members conduct research in distributed computing, convention formation, cascading risks, clinical informatics, monitoring and control of computation, complex networks, machine learning, resource-bounded reasoning, and reasoning under uncertainty. The current projects are:

1. Advanced Machine Learning for Clinical Informatics

The multifactorial complexity of clinical data complicates the prediction and prevention of undesired outcomes. This project investigates whether more advanced machine learning methods that consider all of these factors simultaneously can yield better predictive and preventive methods.

Some sub-projects:

  • Decision Making in Partially Observable Environments: Design, develop, and evaluate stochastic transition probability functions, cost-effectiveness analysis, and sensitivity analysis to support decision making in uncertain environments. We study this in the context of preventing undesired clinical outcomes.

To read the NSF-funded project abstract, click here.

Project page: http://www.cs.columbia.edu/~ansaf/cing/PTB/index.html
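The decision-making sub-project above can be illustrated with a toy sketch. The states, actions, probabilities, and costs below are invented for illustration only; they are not taken from the lab's clinical models. The sketch shows the three ingredients the bullet names: a stochastic transition model, a one-step cost-effectiveness comparison, and a simple one-way sensitivity analysis.

```python
# Hypothetical transition model P(next_state | state, action): each action
# available for an "at_risk" patient changes the chance of the undesired outcome.
TRANSITIONS = {
    ("at_risk", "monitor"):   {"adverse": 0.30, "healthy": 0.70},
    ("at_risk", "intervene"): {"adverse": 0.10, "healthy": 0.90},
}

# Illustrative costs: the intervention is more expensive up front,
# but the adverse outcome is costlier still.
ACTION_COST = {"monitor": 100.0, "intervene": 500.0}
OUTCOME_COST = {"adverse": 5000.0, "healthy": 0.0}

def expected_cost(state, action):
    """Expected one-step cost of taking `action` in `state`."""
    dist = TRANSITIONS[(state, action)]
    return ACTION_COST[action] + sum(p * OUTCOME_COST[s] for s, p in dist.items())

def best_action(state, actions=("monitor", "intervene")):
    """Cost-effectiveness comparison: pick the action minimizing expected cost."""
    return min(actions, key=lambda a: expected_cost(state, a))

def sensitivity(state, adverse_costs):
    """One-way sensitivity analysis: how the choice flips as the
    adverse-outcome cost varies."""
    choices = {}
    for c in adverse_costs:
        OUTCOME_COST["adverse"] = c
        choices[c] = best_action(state)
    return choices
```

With the numbers assumed here, monitoring has an expected cost of 1600 versus 1000 for intervening, so the intervention wins; the sensitivity analysis shows the recommendation flips back to monitoring once the adverse-outcome cost drops below the break-even point.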

2. Multiagent Meta-level Control for the Future of Transportation Systems

As cities across the globe continue to grow, traffic congestion has become ubiquitous, with large economic and environmental costs. The increasing prevalence of self-driving vehicles creates an opportunity to build the smart, responsive traffic infrastructure of the future. Such an infrastructure, consisting of connected and autonomous vehicles and smart traffic lights, would have the potential to cope with congestion, weather phenomena, and accidents while maintaining safety and ensuring the privacy of information. We argue that leveraging multiagent meta-level control (MMLC) to dynamically adjust traffic to changes in the environment improves travel times and decreases emissions in mixed-traffic simulation environments.
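To give a flavor of the meta-level control idea, here is a minimal sketch. The names, thresholds, and update rule are illustrative assumptions, not the lab's actual MMLC algorithm: an intersection agent first makes a meta-level decision about whether it is worth deliberating at all, i.e., re-optimizing its signal plan, based on how far observed traffic has drifted from what the current plan assumed.

```python
def should_deliberate(expected_queues, observed_queues, threshold=5.0):
    """Meta-level decision: re-plan only if the environment has changed
    enough that deliberation is likely to pay off."""
    drift = sum(abs(o - e) for e, o in zip(expected_queues, observed_queues))
    return drift > threshold

def replan_green_split(observed_queues):
    """Object-level decision: split green time in proportion to queue lengths."""
    total = sum(observed_queues) or 1
    return [q / total for q in observed_queues]

expected = [4, 4]    # queues (north-south, east-west) the current plan assumed
observed = [12, 2]   # an incident has skewed demand toward north-south
if should_deliberate(expected, observed):
    split = replan_green_split(observed)  # most green time goes to north-south
```

The point of the meta-level layer is that deliberation itself has a cost; under normal drift the agent keeps its current plan and spends no effort re-optimizing.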

Some related previous work:

  • Emergence of Social Norms and Conventions in Multiagent Systems: In this project, we study the importance and challenges of establishing cooperation among self-interested agents in multiagent systems (MAS). The hypothesis of this work is that equipping agents in networked MAS with “network thinking” capabilities, and using this contextual knowledge to form social norms effectively and efficiently, improves the performance of the MAS. We investigate the norm emergence problem from a game-theoretic perspective for both conventional norms (where there is no conflict between individual and collective interests) and essential norms (where agents need to explicitly cooperate to achieve socially efficient behavior).

  • Coordinating Meta-level Control Across Agent Boundaries: The fundamental question addressed in this work is how to determine and obtain the minimal overlapping context among decentralized decision makers required to make their decisions more consistent. Our approach is a two-phased learning process in which agents first learn their policies offline in a simplified environment that does not require detailed context information about neighbors. We evaluate our approach on meta-level decisions in a complex multiagent weather-tracking domain.
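The convention-emergence setting above can be sketched with a toy model. The network, update rule, and payoffs are illustrative assumptions, not the lab's actual model: agents on a ring repeatedly play a pairwise coordination game and adopt whichever action holds the local majority among themselves and their neighbors. Because this is a conventional norm (no conflict between individual and collective interest), a single shared action can spread through the network.

```python
def step(actions):
    """Synchronous update on a ring: each agent adopts the majority action
    among itself and its two neighbors (actions are 0 or 1)."""
    n = len(actions)
    new = []
    for i in range(n):
        votes = actions[i - 1] + actions[i] + actions[(i + 1) % n]
        new.append(1 if votes >= 2 else 0)
    return new

def run_until_stable(actions, max_rounds=50):
    """Iterate until the profile stops changing (a convention, or a frozen
    patchwork of local conventions)."""
    for _ in range(max_rounds):
        nxt = step(actions)
        if nxt == actions:
            return actions
        actions = nxt
    return actions

final = run_until_stable([0, 1, 0, 1, 1, 1])  # ring of six agents
# the majority action spreads until every agent follows the same convention
```

Even this toy dynamic shows why network structure matters: from some initial profiles the population converges to one global convention, while from others it can freeze into competing local conventions, which is part of what makes norm emergence in networked MAS a hard problem.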

3. Software Engineering for Machine Learning

Machine Learning (ML) systems, including Deep Learning (DL) systems, i.e., those with ML capabilities, are pervasive in today’s data-driven society. Such systems are complex: they comprise ML models and many subsystems that support learning processes. As with other complex systems, ML systems are prone to classic technical debt issues, especially when they are long-lived, but they also exhibit debt specific to ML. Unfortunately, there is a gap in our knowledge of how ML systems actually evolve and are maintained. Our recent work indicates that developers refactor these systems for a variety of reasons, both specific and tangential to ML; that some refactorings correspond to established technical debt categories while others do not; and that code duplication is a major cross-cutting theme, particularly involving ML configuration and model code, which was also the most refactored. We also introduce new ML-specific refactorings and technical debt categories, and put forth several recommendations, best practices, and anti-patterns. These results can assist practitioners, tool developers, and educators in facilitating long-term ML system usefulness. More information can be found here.


4. Risk Assessment in Finance

Details Forthcoming
