ARCHIMEDES is a research unit engaging scientists from leading Greek and international institutions in research on Artificial Intelligence and its supporting disciplines, including Algorithms, Statistics, Learning Theory, and Game Theory. We place emphasis on the mathematical, algorithmic, and conceptual foundations of these fields, and we target several application domains. Find out more about our (still expanding) list of research areas below.
Machine Learning Foundations
We study the foundations of Machine Learning and Statistics, examining existing methods and models and developing new ones that target challenging learning modalities. We develop techniques that address important challenges, including learning from data that are high-dimensional, contain biases, or are corrupted, and tackling the fairness, incentive, and reliability issues that arise at model deployment.
Causality and Fairness
Machine Learning methods rely heavily on supervision, i.e., the provision of “labels” or “annotations” in the training data. While they are impressive at identifying patterns that are useful for making predictions, they are far less successful at identifying causal relationships among the relevant variables, and thus at making good counterfactual predictions. While causal inference is a widely studied field, important challenges remain in bringing existing techniques to bear in the high-dimensional, non-parametric, and non-asymptotic-sample settings relevant to modern learning applications. We develop these foundations using a variety of approaches from high-dimensional Statistics, Econometrics, and Machine Learning. We also use our techniques to improve model performance under distribution shift, decrease the level of supervision needed to train models, obtain models that generalize better to unseen and multi-modal data, and reduce both the bias baked into trained models and the unfairness caused by their deployment.
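As one concrete illustration of the causal-inference machinery this line of work builds on (not a method specific to our research), the following sketch contrasts a naive, confounded comparison with the classical inverse-propensity-weighting estimator of an average treatment effect on synthetic data. All variable names and numbers are illustrative, and the true propensity is used for simplicity; in practice it would be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: a confounder x influences both
# treatment assignment t and outcome y.
n = 50_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))              # true propensity P(t=1 | x)
t = rng.binomial(1, p)
y = 2.0 * t + x + rng.normal(size=n)  # true treatment effect = 2.0

# Naive difference in means is biased upward: treated units tend
# to have larger x, which also raises y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse-propensity weighting reweights each unit by 1/P(observed
# treatment | x), recovering an unbiased estimate of the effect.
ipw = np.mean(t * y / p - (1 - t) * y / (1 - p))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}")  # IPW is close to 2.0
```

The gap between the two estimates is exactly the kind of confounding that purely pattern-matching predictors cannot correct on their own.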
Algorithmic bias and fairness also arise in other contexts. We consider fairness for graphs. Graphs are a ubiquitous data model for entities and their relations, dependencies, and interactions, and real-world graphs abound, ranging from social, communication, and transportation networks to biological networks and the brain. We study representation bias in graph data, arising from how the data are collected, and its effect on graph algorithms. We also consider processes that take place on graphs, such as temporal evolution, ranking, information diffusion, and opinion formation, and study their fairness. Finally, we explore explanations for the bias of graph algorithms, where the goal is to explain bias towards groups rather than individuals.
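To make the idea of fairness in a ranking process concrete, here is a minimal, self-contained sketch (the graph and groups are invented for illustration): standard PageRank via power iteration on a toy directed graph with two node groups, comparing each group's share of ranking mass to its share of nodes. A mismatch between the two is one simple signal of representation bias in the ranking.

```python
import numpy as np

# Toy directed graph: nodes 0-3 form group A, nodes 4-5 form group B.
# Hypothetical edge list, chosen so group A receives most of the links.
edges = [(4, 0), (5, 0), (1, 0), (2, 1), (3, 1),
         (0, 2), (2, 3), (3, 2), (0, 4), (4, 5)]
n = 6
group = np.array([0, 0, 0, 0, 1, 1])  # group label per node

# Column-stochastic transition matrix (every node here has out-links,
# so no dangling-node correction is needed).
A = np.zeros((n, n))
for src, dst in edges:
    A[dst, src] = 1.0
A /= A.sum(axis=0)

# Power iteration for PageRank with damping factor 0.85.
d, pr = 0.85, np.full(n, 1 / n)
for _ in range(100):
    pr = (1 - d) / n + d * A @ pr

# Compare each group's share of PageRank mass to its share of nodes.
for g in (0, 1):
    mass, size = pr[group == g].sum(), (group == g).mean()
    print(f"group {g}: {size:.0%} of nodes, {mass:.0%} of ranking mass")
```

Real analyses use far richer notions of group fairness, but the same basic question, who receives the ranking mass and why, is the starting point.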
Machine Learning and Computer Vision
Computer vision models based on deep neural networks have reached impressive levels of performance in the past decade or so. We focus on how to effectively exploit large-scale unlabeled data in order to improve the quality of learned visual data representations and to make these representations generalize to a much wider variety of tasks. We also develop novel deep-learning-based methods for continual/incremental visual learning, where training data and visual tasks are presented sequentially over time. Finally, we explore neuromorphic algorithms for visual motion perception, and develop the foundations of multimodal machine learning and geometric deep learning, as well as the intersection of tensor methods and deep learning, with a focus on higher-order deep learning on multimodal/multiway data.
Machine Learning and the Life Sciences
Artificial Intelligence has become pivotal for research and innovation in health care. An increasing number of algorithms are finding their way into clinical practice, providing powerful solutions and assisting medical doctors in their everyday work. We study deep-learning-based approaches in this domain and work towards novel, unbiased, and generalizable algorithms for cancer treatment and response to immunotherapy. Of particular interest are learning schemes for training on gigapixel histopathological slides, transformer-based architectures with different attention schemes for the fusion of histopathology and genetic/clinical information, and bias identification and domain adaptation methods based on image-to-image translation and adversarial attacks, for addressing domain shifts and possibly biological and clinical biases.
Machine Learning and Natural Language Processing
Natural Language Processing has seen important advances over the past decade. While deep learning models have achieved impressive performance, even demonstrating common-sense knowledge, they are commonly viewed as uninterpretable black boxes. We combine deep learning with deductive reasoning techniques from the symbolic Artificial Intelligence tradition to produce new algorithms capable of reaching conclusions that require multiple inference steps, with premises and conclusions expressed in natural language. We are also interested in understanding multilingual language models. While neural multilingual language models (MLMs) have seen notable success, their internal mechanics remain unclear. We explore symbolic approaches to understanding the inner workings of large MLMs, and investigate how to inject linguistic knowledge into neural models, aiming to learn how to benefit from human expertise and work with sparse data.
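To illustrate what a multi-step deductive inference looks like in the symbolic tradition (a deliberately tiny sketch, not our actual systems), the following forward-chaining reasoner applies single-premise if-then rules to facts until a fixed point, deriving a conclusion that requires two chained inference steps. All facts, rules, and names are invented for the example.

```python
# Facts and rules as subject-relation-object triples; '?x' is a variable.
facts = {("socrates", "is", "human")}
rules = [
    ((("?x", "is", "human"),), ("?x", "is", "mortal")),
    ((("?x", "is", "mortal"),), ("?x", "has", "finite_lifespan")),
]

def match(pattern, fact):
    """Return a variable binding if the pattern matches the fact, else None."""
    binding = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            binding[p] = f
        elif p != f:
            return None
    return binding

def substitute(pattern, binding):
    """Instantiate a pattern under a variable binding."""
    return tuple(binding.get(tok, tok) for tok in pattern)

# Forward chaining: apply rules until no new fact is derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        for fact in list(facts):
            binding = match(premises[0], fact)
            if binding is not None:
                new_fact = substitute(conclusion, binding)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

# Two inference steps: human -> mortal -> finite_lifespan.
print(("socrates", "has", "finite_lifespan") in facts)  # True
```

The research challenge is doing this kind of chaining when premises and conclusions are free-form natural language rather than clean triples, which is where neural models come in.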
Game Theory, Optimization and Multi-Agent Learning
Over the past decade, Machine Learning has delivered important advances in learning challenges such as speech and image recognition, translation, text and image generation, and protein folding. These challenges pertain to single-agent learning problems, involving a single agent whose goal is to use observations from some unknown environment in order to learn how to make good predictions or decisions in that environment. These problems are typically modeled in the language of single-objective optimization and solved via simple methods such as gradient descent or some variant of it. Many outstanding challenges in Machine Learning, however, pertain to multi-agent learning problems, wherein multiple agents learn and make decisions and predictions in a shared environment: robustifying machine learning models against adversarial attacks, training generative models, performing causal inference, playing difficult games like Go, Poker, and StarCraft, improving autonomous driving agents, evaluating the outcomes of economic policies, and training agents for multi-agent interactions, among others. These settings deviate from the single-objective optimization paradigm, as different agents may have different objectives, and Game Theory provides a useful framework for thinking about them. At the same time, classical Game Theory falls short of addressing the challenges posed by modern ML applications, such as the high dimensionality of strategies and the non-concavity of utilities (non-convexity of losses) one typically encounters in these settings. We develop the foundations of multi-agent learning, bringing to bear techniques from optimization, Game Theory, and learning.
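A standard textbook example makes the deviation from single-objective optimization concrete (this is a classical illustration, not specific to our work): in the min-max game f(x, y) = x·y, whose equilibrium is (0, 0), running simultaneous gradient descent for the minimizer and gradient ascent for the maximizer makes the iterates spiral outward rather than converge, whereas plain gradient descent on a single objective would converge.

```python
import numpy as np

# Min-max game: min_x max_y f(x, y) = x * y, equilibrium at (0, 0).
eta = 0.1
x, y = 1.0, 1.0
r0 = np.hypot(x, y)  # initial distance from the equilibrium

for _ in range(100):
    gx, gy = y, x                       # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy   # simultaneous descent / ascent step

# Each step multiplies the distance from (0, 0) by sqrt(1 + eta^2) > 1,
# so the dynamics spiral away from the equilibrium instead of converging.
print(np.hypot(x, y) > r0)  # True
```

This divergence of the simplest gradient dynamics is precisely why multi-agent learning needs its own algorithmic and game-theoretic foundations.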