Larissa Albantakis | Causation in neural networks
Department of Psychiatry
University of Wisconsin
Abstract: When an agent interacts with its environment, we are often interested in why the agent performed a particular action. Even if we can model the agent's behavior and its internal states in detail, understanding the cause of the agent's action typically requires extensive additional analysis and cannot be addressed in purely reductionist (or holistic) terms. Nevertheless, in neuroscience and elsewhere, causation is often conflated with prediction: once we can account for the behavior of all individual neurons (or the system state as a whole), there seems to be no room left for additional causes to do anything. By means of a simple model organism, I hope to demonstrate that causal reductionism cannot provide a complete explanatory account of 'what caused what'. To that end, I will outline an explicit, operational approach that is able to reveal a system's compositional causal structure. In contrast to prior approaches, the presented framework connects counterfactual reasoning with information-theoretic measures to evaluate actual causes and effects ('what caused what?') in a quantitative manner.