DIEP@UvA research priorities

Research at DIEP@UvA spans research interests from 6 different institutes covering physics, mathematics, chemistry, informatics, logic and computation (see institutes here). One of the main goals of the DIEP@UvA research programme is to establish interdisciplinary collaborations between fields of study in order to advance the science of emergence. See here for ongoing research projects from each institute and current DIEP fellows. Though not restricted to these, DIEP@UvA has identified several research priority areas:

Non-equilibrium driven systems

Statistical physics has been widely successful in describing equilibrium situations, e.g. phase transitions, critical phenomena, material properties, mechanics, etc. Even systems mildly out of equilibrium can be described by non-equilibrium statistical physics or hydrodynamics, for example by applying Onsager’s non-equilibrium thermodynamics or linear response theory. However, when systems are driven far from equilibrium these approaches break down and radically new behaviour emerges. Classic examples are pattern formation in oscillators, transitions to chaotic behaviour, gravitational shockwaves, thermalisation processes, non-trivial dynamical transitions in active systems, active transport phenomena, non-linear response in materials, systems under (time-dependent) external fields, and processes involving chemical fuels, catalysis, and chemical networks.
While equilibrium statistical mechanics is well understood and immensely powerful, there is no real counterpart for these driven non-equilibrium dynamics, as there is no equivalent of the Boltzmann distribution for the steady state. Nevertheless, out-of-equilibrium systems show remarkable emergent behaviour, and non-Hermitian physics is currently a hot topic. The field of active matter provides very nice examples, such as flocks of birds, fish and sheep, but also bacteria, nanoparticles, enzymes and mechanical metamaterials. In these cases, internal driving forces keep the system out of equilibrium, which largely precludes linear response approaches to non-equilibrium statistical mechanics. The challenge is to develop tools and methods (such as non-equilibrium statistical physics) for addressing these systems and predicting their emergent behaviour.
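As an illustration of the kind of emergent behaviour meant here, below is a minimal sketch of the Vicsek model of flocking, one of the standard toy models of active matter; the model choice and parameter values are illustrative, not prescribed by the programme. Each self-propelled particle aligns its heading with its neighbours, up to some noise, and a coherent flock emerges without any central control.

```python
import numpy as np

# Minimal Vicsek-model sketch (illustrative parameters): N self-propelled
# particles in a periodic box align their heading with neighbours within
# radius r and are perturbed by angular noise of amplitude eta.
N, L, r, v, eta, steps = 300, 10.0, 1.0, 0.3, 0.5, 200
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)

for _ in range(steps):
    # Pairwise displacements with the minimum-image convention (periodic box).
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)
    neigh = (dx ** 2).sum(axis=-1) < r ** 2          # neighbour mask (includes self)
    # Each particle adopts the mean heading of its neighbours, plus noise.
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
    theta = np.arctan2(mean_sin, mean_cos) + rng.uniform(-eta / 2, eta / 2, N)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Global polar order parameter: ~0 for disordered motion, ~1 for a coherent flock.
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {phi:.2f}")
```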
A related challenge is how coarse-graining can be carried out for non-equilibrium driven systems. While for equilibrium there is a precise recipe for coarse-graining, for example by integrating out degrees of freedom while retaining essential properties, this is not the case for non-equilibrium systems. Moreover, it is not known which properties need to be retained. Even if the coarse-graining can be done, how do we identify the relevant variables in doing so? Such questions might be addressed with novel machine learning techniques.
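For comparison, the equilibrium recipe alluded to above can be written schematically (our notation, as one reading of "integrating out degrees of freedom"): tracing over the fine degrees of freedom y at fixed coarse variables X defines an effective free energy that reproduces equilibrium averages of coarse observables. It is precisely this step that has no agreed-upon analogue for driven steady states.

```latex
e^{-\beta F_{\mathrm{eff}}(X)} = \int \mathrm{d}y\; e^{-\beta H(X,y)},
\qquad
\langle A \rangle = \frac{\int \mathrm{d}X\, A(X)\, e^{-\beta F_{\mathrm{eff}}(X)}}{\int \mathrm{d}X\, e^{-\beta F_{\mathrm{eff}}(X)}} .
```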

Multiscale modelling and information theory

Multiscale modelling consists of studying the properties of a given system by integrating effective theories and models at different length scales, for example quantum mechanical models, molecular dynamics models or coarse-grained models. This approach has had an important impact on the understanding of the collective behavior of polymer chains, the physics of plasmas, global/local climate predictions, rare events in complex systems (as encountered in friction and in landslides), the modelling of decision making and agent networks, and the prediction of material properties in soft matter and quantum materials systems, including their effective simulation for computer graphics and animation purposes.
Multiscale modelling is inherently interdisciplinary, since it requires input from the different disciplines associated with the different length scales involved. Here not only hierarchical modelling (where coarse-graining is done once and for all) is important, but in particular also concurrent modelling, where the fine-grained levels constantly influence the coarse-grained levels and vice versa. For the study of emergence, multiscale modelling can offer a framework for understanding how information flows between scales, and how it can be repackaged at different scales. This question is also intimately related to information/network theory and renormalization group techniques. Understanding the relationship between these approaches for specific systems is currently an interesting open problem that can lead to important insights.
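To make the distinction concrete, here is a purely schematic sketch of a concurrent coupling loop (the toy diffusion physics and the function names are hypothetical placeholders, not an existing DIEP code): at every step the coarse solver queries a fine-grained model, conditioned on the current coarse state, for its local parameters, so information flows in both directions throughout the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fine_grained_estimate(coarse_value):
    """Placeholder fine-scale model: estimate a local transport coefficient
    from 'microscopic' fluctuations around the current coarse state."""
    samples = coarse_value + 0.05 * rng.standard_normal(1000)
    return 0.1 + 0.05 * np.tanh(samples.mean())      # toy constitutive relation

def coarse_step(field, coeffs, dt=0.1):
    """Placeholder coarse solver: explicit diffusion on a periodic grid with
    spatially varying coefficients supplied by the fine scale."""
    lap = np.roll(field, 1) - 2 * field + np.roll(field, -1)
    return field + dt * coeffs * lap

field = np.sin(np.linspace(0, 2 * np.pi, 64))        # initial coarse field
for _ in range(100):
    # Concurrent coupling: the fine scale is queried at every step and feeds
    # its parameters back to the coarse solver (hierarchical modelling would
    # instead fix these coefficients once, before the loop).
    coeffs = np.array([fine_grained_estimate(v) for v in field])
    field = coarse_step(field, coeffs)

print(f"final field range: [{field.min():.3f}, {field.max():.3f}]")
```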

Collective intelligence in agent networks

A large class of emergent phenomena concerns the behavior of a network of (more or less) intelligent agents. Examples of such complex networks of reflective, intelligent agents are markets, businesses, social networks, scientific communities, and the Internet. As already mentioned, there are also networks in which the 'agents' are simpler, endowed with only very basic informational capabilities (e.g. neural networks, colonies of bacteria, swarms of insects, flocks of birds, programmable metamaterials, etc.), but the network as a whole can still exhibit very sophisticated intelligence. Each agent can acquire, store and exchange information with its neighbors (including information about other agents’ behavior), and given this information it adopts its own (similar or dissimilar) behavior, which in turn influences the others.
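A minimal sketch of such local update dynamics is a DeGroot-style averaging model on a random influence network (a hypothetical illustration, not a model specific to DIEP): each agent repeatedly replaces its opinion by a weighted average of its neighbors' opinions, and the network as a whole drifts towards a consensus value that no individual chose.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# Random directed influence network: who listens to whom, and how strongly.
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
np.fill_diagonal(weights, 1.0)                   # every agent keeps some self-weight
weights /= weights.sum(axis=1, keepdims=True)    # rows sum to 1 (weighted averaging)

opinions = rng.random(n)                         # initial scalar opinions in [0, 1]
for _ in range(200):
    opinions = weights @ opinions                # each agent averages its neighbors

print(f"opinion spread after updates: {opinions.max() - opinions.min():.4f}")  # near 0
```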

Sometimes the agents’ information and behaviors are aggregated into a group decision using some collective decision-making mechanism (e.g. voting), while in other cases this is done by simpler, automatic mechanisms, such as imitation or natural selection. This dynamic is repeated, giving rise to complex emergent phenomena. These include higher forms of collective intelligence that surpass by far each individual’s epistemic capacity, illustrating the so-called “wisdom of the crowds” or “hive mind”: the collective achievements of big collaborative projects (e.g. putting humans on the Moon, mapping the human genome), crowdsourcing (e.g. Wikipedia), the highly intelligent behavior exhibited by swarms of bees, etc.
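A toy numerical illustration of this "wisdom of the crowds", in the spirit of the Condorcet jury theorem (the numbers below are illustrative assumptions): agents who are individually only slightly better than chance become nearly infallible under simple majority voting.

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, p_correct, n_trials = 101, 0.6, 10_000

# Each agent independently votes for the correct option with probability 0.6.
votes = rng.random((n_trials, n_agents)) < p_correct
majority_correct = votes.sum(axis=1) > n_agents / 2

print(f"single agent accuracy:  {p_correct:.2f}")
print(f"majority-vote accuracy: {majority_correct.mean():.3f}")   # roughly 0.98
```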

But there are also emergent group phenomena that are information-distorting, leading to negative or even catastrophic collective decisions: informational cascades (e.g. market bubbles and crashes), group polarization leading to political paralysis, pluralistic ignorance preventing successful coordination, groupthink and bandwagoning, echo chambers, the bystander effect, etc. Traditionally investigated by the social sciences using mainly statistical methods, these phenomena have recently become the object of study of logicians, computer scientists, mathematicians, cognitive scientists and philosophers, using qualitative tools (logical models and formal languages, graph theory, algebraic methods) as well as quantitative ones (probabilistic and statistical methods, differential equations, etc.).
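As a concrete instance of an informational cascade, the following is a simplified sketch in the spirit of the classic sequential-choice model of Bikhchandani, Hirshleifer and Welch (the parameters and the cut-off rule are illustrative simplifications): agents receive a noisy private signal but also observe earlier choices, and once earlier choices lead by two, every subsequent agent rationally ignores its own signal, so the whole group can lock onto the wrong option.

```python
import numpy as np

rng = np.random.default_rng(7)
true_state, signal_accuracy, n_agents = 1, 0.7, 30    # illustrative values

choices = []
for _ in range(n_agents):
    # Private signal: points to the true state with probability 0.7.
    signal = true_state if rng.random() < signal_accuracy else 1 - true_state
    lead = sum(1 if c == 1 else -1 for c in choices)  # net lead of earlier choices
    # Simplified cascade rule: once earlier choices lead by two or more, they
    # outweigh any single private signal and the agent copies the majority.
    if lead >= 2:
        choice = 1
    elif lead <= -2:
        choice = 0
    else:
        choice = signal
    choices.append(choice)

print("choices:", choices)
print("cascade on the wrong option:", choices[-1] != true_state)
```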

Emergence of causality and causal modelling

There are at least two different, but related, aspects: (1) the emergence of causal structure in the world, and (2) the emergence and development of causal models and causal reasoning in (natural or artificial) learning. Concerning (1), causality is a basic feature of the world; without causal structure the universe would not function. However, most of the microscopic laws are time-reversible, and hence it is sometimes difficult (or even impossible) to point out what is cause and what is effect at the most fundamental level. Yet, at a macroscopic level causality emerges very clearly, often in a one-way fashion, i.e. one condition or variable influences (causes) the other and not vice versa. How this causality emerges from the underlying reversible laws is not well understood. Of course, in principle the Second Law provides time’s arrow, but how a causal structure precisely arises remains rather intangible. In addition, when a causal structure or network emerges, the question remains how one can find out which important collective variables need to be taken into account. Finally, it is often not clear what the effect of changing such variables or conditions is, e.g. when intervening in a complex process. Examples of this type of problem can be found in the treatment of chemical networks (both abiotic and in living cells), organisms, human health and psychology, and societal structures such as language, the economy and regime shifts.

This connects to the second problem (2): how do causal models and causal reasoning arise in the process of learning from data by intelligent agents? Current machine learning techniques are essentially based on extracting correlations from data and using them to predict future data. Although highly successful, these methods are susceptible to the correlation-versus-causation problem: lacking a causal model of their environment, they may sometimes get stuck with spurious, accidental, non-causal correlations. While there are frameworks in computer science and AI that handle complex causal networks, using e.g. Bayesian networks and causal inference (see the work of Pearl and others), the underlying emergence principles often seem murky. Does causal reasoning naturally arise (say, at some high enough level of complexity) in any parallel data-processing agent learning about its environment? Or is some form of Darwinian selection necessary for this? How do intelligent learners produce hypothetical causal models from data, test them and replace them with better models? The issues of emergence and discovery of causality are interrelated, and they lie at the intersection of physics, information theory, logic, computer science and AI.
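To make the correlation-versus-causation problem concrete, here is a hypothetical numerical sketch (the variable names and numbers are illustrative, not taken from any cited work): a hidden confounder produces a strong observational correlation between X and Y, while intervening on X, in the spirit of Pearl's do-operator, reveals that X has no causal effect on Y at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural causal model with a hidden confounder Z -> X and Z -> Y,
# but no causal arrow from X to Y.
z = rng.standard_normal(n)
x = z + 0.5 * rng.standard_normal(n)
y = z + 0.5 * rng.standard_normal(n)

# Observational data: X and Y appear strongly related.
print(f"observational corr(X, Y): {np.corrcoef(x, y)[0, 1]:.2f}")        # ~0.8

# Intervention do(X): we set X by hand, cutting the Z -> X arrow.
x_do = rng.standard_normal(n)          # X no longer depends on Z
y_do = z + 0.5 * rng.standard_normal(n)
print(f"interventional corr(do(X), Y): {np.corrcoef(x_do, y_do)[0, 1]:.2f}")  # ~0.0
```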
