---
layout: default
year: 2026
---

# Keynote Talks


## Loris D'Antoni

University of California San Diego

### Constraining Chaos: Toward Faithful and Semantic Decoding in Language Models

Language models excel at producing fluent text, but in domains like code and math, fluency isn’t enough --- outputs must obey strict syntactic and semantic rules. A new wave of research is rethinking language model decoding itself: not as a process of sampling words, but as a negotiation between probability, structure, and meaning. In this talk, I’ll explore how grammar and semantics can be embedded into the language model decoding loop, how we can sample from the true model conditional distribution under constraints, and how programmable abstractions make it possible to enforce properties like type safety or program invariants on the output of language models. The result is a vision of decoding that is faithful to the model yet governed by rules, pointing toward a future where LLMs generate not just plausible text, but reliably correct output.
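As a minimal illustration of the idea (a sketch, not code from the talk; the function and toy grammar below are hypothetical), grammar-constrained decoding can be viewed as masking the model's next-token distribution at each step so that only tokens the grammar currently allows receive probability mass:

```python
import math

def constrained_step(logits, allowed):
    """Renormalize a next-token distribution over grammar-allowed tokens.

    logits: dict mapping token -> raw model score
    allowed: set of tokens the grammar permits at this position
    """
    masked = {t: v for t, v in logits.items() if t in allowed}
    z = sum(math.exp(v) for v in masked.values())
    return {t: math.exp(v) / z for t, v in masked.items()}

# Toy example: after "(", suppose the grammar allows only an
# identifier "x" or a closing ")".
logits = {"(": 1.0, ")": 0.5, "x": 0.2, "+": 2.0}
dist = constrained_step(logits, allowed={"x", ")"})
# The probability mass of "(" and "+" is redistributed over "x" and ")".
```

Note that naive per-step masking like this only approximates the model's true conditional distribution given the constraint; sampling faithfully from that conditional is precisely one of the problems the talk's line of work addresses.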

Loris D’Antoni is a Jacobs Faculty Scholar and Associate Professor in the Department of Computer Science and Engineering at the University of California San Diego. His research helps people build trustworthy software. His work has introduced new frameworks for verifying and synthesizing programs—ranging from resilient network configurations to robust decision-making systems—and, more recently, methods for aligning language models with user intent. He is the recipient of an NSF CAREER Award and a Microsoft Research Faculty Fellowship, and was selected as a Vilas Associate at the University of Wisconsin–Madison. He has also received Google, Amazon, and Meta Faculty Awards, and the Morris and Dorothy Rubinoff Dissertation Award. His papers have earned several best paper awards and nominations, including at TACAS, ESOP, ICDCN, and SBES. Loris received his B.S. and M.S. in Computer Science from the University of Torino, and his Ph.D. in Computer Science from the University of Pennsylvania. Before joining UC San Diego, he was a faculty member at the University of Wisconsin–Madison.


## Adam Wilkins

Humboldt-Universität zu Berlin

### Animal domestication: Some new perspectives on the oldest problem in genetics

Charles Darwin was the first to treat the domestication of mammals as a problem of genetics. He concluded that some general hereditary process underlay the taming and domestication of many different animal species, which had occurred independently of one another. He was writing in 1868, however, long before Mendelian genetics was rediscovered, and so he could not answer what was, and remains, a hard genetics question; nor has it been fully answered in the more than 150 years since. Darwin did, however, leave some valuable clues, and a great deal of relevant data has accumulated since his work. In this talk, I will describe a genetic idea that two friends and colleagues and I first published in 2014. It invokes a possibly crucial role for neural crest cells in the domestication of mammals and birds (though it potentially applies to all vertebrates). I will explain the idea, then discuss its strengths and limits, what needs to be done to test it definitively, and what more is needed beyond it to explain domestication.


## Krzysztof Krawiec

Institute of Computing Science, Poznan University of Technology

### Beyond Pattern Matching: Achieving Algorithmic Intelligence through Neurosymbolic and Physics-Aware Integration

Modern deep learning is primarily defined by high-dimensional pattern recognition, yet it is fundamentally limited by its reliance on statistical correlations. These "black-box" models often fail to capture the causal and algorithmic structures inherent in complex domains, leading to extreme demands for training data, poor generalization, and a lack of adherence to physical constraints. This talk introduces a roadmap toward Algorithmic Intelligence: a paradigm designed to "lift" connectionist architectures by grounding them in symbolic reasoning and the rigorous laws of physics. We explore two research frontiers: (i) neurosymbolic synthesis, the integration of symbolic and programmatic priors into neural blueprints to move beyond simple prediction toward structured reasoning and logical consistency; and (ii) physics-informed manifolds, the embedding of inductive biases, specifically through differentiable renderers and transparent latent disentanglement, to enforce physical consistency directly within the learning process. We demonstrate that these hybrid approaches can overcome the optimization plateaus and data inefficiencies that plague traditional models. Evidence is presented across diverse applications, including abstract reasoning puzzles, interpretation of medical imaging, and analysis of remote sensing data.
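As a toy sketch of the physics-informed ingredient (illustrative only; the function, constants, and the choice of exponential decay dy/dt = -k·y as the "law" below are assumptions, not from the talk), a training loss can combine an ordinary data-fit term with a penalty for violating a known physical constraint:

```python
def physics_informed_loss(y_pred, dydt_pred, y_obs, k=1.0, lam=0.5):
    """Data-fit term plus a physics-residual penalty for dy/dt = -k*y."""
    # Mean squared error against observed data.
    data = sum((p - o) ** 2 for p, o in zip(y_pred, y_obs)) / len(y_obs)
    # Physics residual: how badly predictions violate dy/dt + k*y = 0.
    phys = sum((d + k * y) ** 2 for d, y in zip(dydt_pred, y_pred)) / len(y_pred)
    return data + lam * phys

# Predictions that both fit the data and obey the decay law incur zero loss.
loss = physics_informed_loss([1.0, 0.5], [-1.0, -0.5], [1.0, 0.5])
# → 0.0
```

The weight `lam` trades off fitting the data against respecting the law, which is what lets such a regularizer guide learning even when training data are scarce.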