MIAO Seminars
The
MIAO seminars
are what we call the series of seminars arranged by the
Mathematical
Insights into Algorithms for Optimization (MIAO)
research group
at the University of Copenhagen and Lund University.
Our seminars often (but far from always) consist of two parts: first
a regular talk, and after this a ca. 1-hour informal, technical
presentation with discussions. The intention is that all listeners
will first get an overview of some exciting research results, and
then people who find the topic particularly interesting will get an
opportunity to study the same material more in-depth.
To elaborate a bit, for seminars conducted in this format the first
part is just a standard, listener-friendly seminar of 50-55 minutes
(including time for questions). The audience is assumed to have a
general knowledge of computer science and/or mathematics, but is usually not
expected to have any detailed insights into the specific area covered
by the talk. This first part is intended to be self-contained, since most
attendees tend to leave after it.
After a ca. 10-minute break it is time for the second part, when
those who so wish reconvene for a more in-depth technical
presentation. During this second part the goal is to "open up the
hood", so that for a theory talk we look at the formal definitions
and prove at least some of the key ingredients in the results,
including all (or at least some) of the gory technical details
glossed over in a polished seminar presentation. For a more applied
talk, we might go into questions of implementation or discuss
detailed experimental results. Questions and discussions are
strongly encouraged. However, for those who feel that the first
50-55-minute regular seminar was enough for today, it is perfectly
fine to just discreetly drop out during the break. No excuses
needed; no questions asked.
These seminars typically have a low-key, informal atmosphere, but
they are open to the public and are announced beforehand on
our seminar mailing lists (with two different lists for more
theoretically oriented or more applied seminars, respectively).
In case you want to receive the announcements, please just let us
know by sending an e-mail message to jn@di.ku.dk
specifying which seminar mailing list you wish to be added to
(theoretical, applied, or both).
We also add all public seminars to the
MIAO seminar calendar.
The Zoom link to the video seminars is only communicated on the seminar mailing lists, but it tends to remain stable over time.
We also record most seminars and post the videos on the
MIAO Research YouTube channel youtube.com/@MIAOresearch.
Please note that
all seminar times below are in the Central European time zone
(with daylight saving time observed from the last Sunday in March to
the last Sunday in October, even though the time is usually stated as CET in the announcements sent out).
Upcoming seminars
The MIAO seminars
have switched to
winter vacation and travel mode, but we plan to be back sometime in January or early February.
MIAO seminars autumn 2024
-
Thursday Dec 5 at 14:00
in room 20-1-123, BIO-Aqua, Building 20, Universitetsparken 4, University of Copenhagen,
and on Zoom
Colouring is hard on average for polynomial calculus and Nullstellensatz
(video)
(Shuo Pang, University of Copenhagen and Lund University)
We prove that polynomial calculus (and hence also
Nullstellensatz) over any field requires linear degree to refute
that sparse random regular graphs, as well as sparse
Erdős-Rényi random graphs, are 3-colourable. Using the
known relation between size and degree for polynomial calculus
proofs, this implies strongly exponential lower bounds on proof
size.
It also means that algorithms based on Hilbert’s Nullstellensatz
or Gröbner bases must require exponential running time for
almost all such graphs.
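To fix notation, non-3-colourability is encoded as a system of polynomial equations along the following standard lines (a sketch; the paper's exact encoding may differ), with a variable x_{v,i} meaning that vertex v receives colour i:

    \[
      x_{v,i}^2 - x_{v,i} = 0
      \quad\text{and}\quad
      \sum_{i=1}^{3} x_{v,i} = 1 \quad \text{for every vertex } v,
      \qquad
      x_{u,i}\, x_{v,i} = 0 \quad \text{for every edge } \{u,v\} \text{ and colour } i .
    \]

A polynomial calculus refutation derives 1 from these axioms, and the degree lower bound applies to every such refutation.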
This is joint work with
Jonas Conneryd,
Susanna F. de Rezende,
Jakob Nordström,
and
Kilian Risse
that appeared at FOCS 2023.
-
Tuesday Dec 3 at 14:00
in auditorium Store UP1, Universitetsparken 3, University of Copenhagen,
and on Zoom
Relating implicit hitting set and core-guided MaxSAT solvers via
linear programming
(video)
(George Katsirelos, INRAE)
Many current complete MaxSAT algorithms fall into two
categories: core-guided or implicit hitting set. The two kinds
of algorithms seem to have complementary strengths in practice,
so that each kind of solver is better able to handle different
families of instances. This suggests that a hybrid might match
and outperform either, but the techniques used seem
incompatible. In this talk, I will focus on PMRES and OLL, two
core-guided algorithms based on max resolution and soft
cardinality constraints, respectively. I will show that these
algorithms implicitly discover cores of the original formula.
Moreover, in some cases they compute the optimum hitting set of
these cores at each iteration. More importantly, I will also
give compact integer linear programs for each which encode this
hitting set problem, and whose continuous relaxation has an
optimum that matches the bound computed by the respective
algorithms. This allows the design of hybrid algorithms, with
components from implicit hitting set, core-guided, and beyond,
with linear programs as the shared representation between the
various components.
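For reference, the optimum hitting set over a collection of discovered cores can be written as a generic integer linear program (a sketch only; the compact encodings in the talk are tailored to PMRES and OLL), with a 0-1 variable x_j for every soft clause j of weight w_j:

    \[
      \min \sum_j w_j x_j
      \quad\text{subject to}\quad
      \sum_{j \in C} x_j \ge 1 \ \text{ for every core } C,
      \qquad
      x_j \in \{0,1\} .
    \]

Relaxing x_j ∈ {0,1} to 0 ≤ x_j ≤ 1 yields the continuous relaxation whose optimum is related to the bounds computed by the algorithms.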
-
Wednesday Nov 27 at 14:00
in seminar room E:1124, Klas Anshelms väg 10, Lund University,
and on Zoom
Lower bounds beyond DNF of parities
(video)
(Anastasia Sofronova, EPFL)
We consider a subclass of AC^0[2] circuits that simultaneously
captures DNF-Xor and depth-3 AC^0 circuits. For this class we show a
technique for proving lower bounds inspired by the top-down approach. We
give lower bounds for the middle slice function, inner product
function, and affine dispersers. Joint work with Artur Riazanov and
Dmitry Sokolov.
-
Tuesday Nov 5 – Thursday Nov 7, 2024
1st International Workshop
on Solving Linear Optimization Problems for Pseudo-Booleans and Yonder
(SLOPPY '24)
at Lund University
-
Tuesday Sep 17 at 14:00
in Auditorium No. 9, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Large corner-free sets in high dimensions
(video)
(Morgan Shirley, University of Victoria)
A central question in additive combinatorics is to understand
how large arithmetic progression-free sets can be. In this talk,
I will focus on this question for a high-dimensional
generalization of arithmetic progressions (APs) known as
corners. A (2-dimensional) corner is a triple of the form
(x,y), (x+d,y), (x,y+d) in [N]x[N] for some d > 0. Extending this
definition to higher dimensions, a k-dimensional corner in [N]^k
is a (k+1)-tuple defined similarly for some d. While it is known
that corner-free sets have a vanishingly small density, the
precise bounds on their size remain unknown.
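To make the 2-dimensional definition concrete, here is a small illustrative Python check (brute force, so for toy values of N only):

    def is_corner_free(S, N):
        """Check that S, a set of points in [N]x[N], contains no corner
        (x, y), (x+d, y), (x, y+d) with d > 0."""
        S = set(S)
        for (x, y) in S:
            for d in range(1, N):
                if (x + d, y) in S and (x, y + d) in S:
                    return False
        return True

    # The diagonal is corner-free: (x+d, y) and (x, y+d) with d > 0
    # can never both lie on it together with (x, y).
    print(is_corner_free([(i, i) for i in range(10)], 10))  # True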
Until recently, the best-known corner-free sets were derived
from constructions of AP-free sets: a construction of a 3-term
AP-free set by Behrend from 1946, and a generalization by Rankin
for k-term APs in 1961. New results by Linial and
Shraibman (CCC 2021) and Green (New Zealand Journal of
Mathematics 2021) changed this picture; they improved the
lower bound for k=2 by adopting a communication
complexity point of view.
I will discuss recent work where the same perspective of
communication complexity has been employed to obtain the first
improvement on the lower bound on the size of high-dimensional
(k > 2) corner-free sets since the original construction of
Rankin.
Based on joint work with Lianna Hambardzumyan, Toniann Pitassi,
Suhail Sherif, and Adi Shraibman.
MIAO seminars spring 2024
-
Monday Jun 10 at 14:00
on Zoom
Toward better depth lower bounds: A KRW-like theorem for strong composition
(video)
(Or Meir, University of Haifa)
One of the major open problems in complexity theory is proving
super-logarithmic lower bounds on the depth of circuits.
Karchmer, Raz, and Wigderson (Computational Complexity 5(3/4), 1995)
suggested approaching this problem by proving that
depth complexity of a composition of two functions is roughly
the sum of their individual depth complexities. They showed that
the validity of this conjecture would imply the desired lower
bounds.
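In symbols, writing D(·) for depth complexity and f ∘ g for block-composition, the conjecture states (roughly) that

    \[
      \mathsf{D}(f \circ g) \approx \mathsf{D}(f) + \mathsf{D}(g),
      \qquad\text{where}\quad
      (f \circ g)(x_1, \dots, x_m) = f\bigl(g(x_1), \dots, g(x_m)\bigr) .
    \]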
The intuition that underlies the KRW conjecture is that
composition should behave like a "direct-sum problem", in a
certain sense, and therefore the depth complexity of the
composition should be the sum of the individual depth
complexities. Nevertheless, there are two obstacles toward
turning this intuition into a proof: first, we do not know how
to prove that the composition must behave like a direct-sum
problem; second, we do not know how to prove that the complexity
of the latter direct-sum problem is indeed the sum of the
individual complexities.
In this talk, we will describe a new work that tackles the
second obstacle. We will consider a notion of "strong
composition" that is forced to behave like a direct-sum problem,
and see that a variant of the KRW conjecture holds for this
notion. This result demonstrates that the first obstacle above
is the crucial barrier toward resolving the KRW
conjecture. Along the way, we will discuss some general
techniques that might be of independent interest.
-
Thursday May 23 – Friday May 24, 2024
1st International Workshop
on Highlights in Organizing and Optimizing Proof-logging Systems
(WHOOPS '24)
at the University of Copenhagen
-
Wednesday May 22 at 14:00
in seminar room N116A&B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Solving optimization problems by a sequence of decisions
(video)
(Jeremias Berg, University of Helsinki)
Computationally challenging optimization problems are common in
artificial intelligence and other real-world settings. Among the
several approaches to optimization problems that have been developed,
the declarative approach is arguably one of the most promising ones in
terms of allowing reasonable effectiveness and strong guarantees of
the trustworthiness of the solutions obtained, e.g., via
proof-logging. In the declarative approach, an instance of the
optimization problem of interest is first represented as a set of
mathematical constraints and an objective function. Afterward, an
automated reasoning procedure (a solver) is invoked in order to
compute an optimal solution to the constraints. Finally, a solution to
the original problem is obtained from the solution to the constraint
instance.
Most state-of-the-art approaches to computing optimal solutions
of constraint instances reduce the optimization problem into a
sequence of decision problems, which are in turn tackled with decision
procedures for the constraint language being employed. A natural
example is the so-called solution-improving approach, which
iteratively queries a decision procedure for solutions of increasing
quality until no such solutions can be found. Other central examples
are the core-guided and implicit-hitting-set-based approaches that
refine the optimization instance based on small sources of
inconsistency (known as unsatisfiable cores) until a solution that
relaxes all inconsistencies in a cost-optimal way is found. The
solution-improving, core-guided, and implicit-hitting-set-based
algorithms currently represent the state of the art
in Maximum Satisfiability and pseudo-Boolean Optimization. The
algorithms have also been successfully applied in Constraint
Programming and SMT.
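As an illustration of the solution-improving approach, here is a minimal Python sketch, with decide standing in for a hypothetical decision oracle (real solvers are vastly more involved):

    def solution_improving(decide, objective):
        """Find an optimal solution by repeatedly asking a decision oracle
        for a feasible solution strictly better than the incumbent.
        decide(bound) returns such a solution or None if none exists;
        decide(None) returns an arbitrary feasible solution."""
        best, bound = None, None
        while True:
            sol = decide(bound)
            if sol is None:           # no strictly better solution exists,
                return best           # so the incumbent is optimal
            best = sol
            bound = objective(sol)    # tighten the bound and iterate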
In this seminar, I will present our recent work on a framework
that allows us to view solution-improving, core-guided, and
implicit-hitting-set-based search in a unified way. The
framework provides new insight into the relationship between the
three types of algorithms and also allows for a unified way of
proving the correctness of virtually all instantiations of them
that we are aware of. The constraint-agnostic framework can be
used to analyze MaxSAT, PBO, and (some of the) CP algorithms
that exist today. I will also highlight some interesting ongoing
work, such as the potential of using the framework as the basis
for a unified way of proof logging for optimization algorithms in
various paradigms.
-
Tuesday May 21 at 10:00 (note the time!)
in seminar room N116B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Efficient algorithms for symmetry detection
(video)
(Markus Anders, Technische Universität Darmstadt)
Exploitation of symmetry has a dramatic impact on the efficiency
of algorithms in various fields. Before symmetries of a
structure can be exploited, one first has to have algorithmic
means to find the symmetries efficiently. Typically, this is
achieved through modeling said structure as a graph, followed by
an application of a practical graph isomorphism solver such as
nauty, saucy, bliss, Traces, or dejavu.
In this talk, I will give an overview of recent progress on
practical graph isomorphism solvers. I present a theoretical
model which captures the backtracking behavior of all
state-of-the-art solvers, within which we prove probabilistic
strategies to be optimal up to logarithmic factors. Then, I
briefly discuss the design of the practical solver dejavu, which
is based on a near-optimal probabilistic backtracking strategy.
The material presented is based on joint work with Pascal Schweitzer.
-
Thursday May 16 at 14:30 (note the time!)
in seminar room N116A, Universitetsparken 1, University of Copenhagen,
and on Zoom
PPP is not Turing-closed in the black-box setting
(video)
(Noah Fleming, Memorial University)
The TFNP class PPP contains total search problems which are
reducible to finding a collision in an exponential-size instance
of the pigeonhole principle. PPP is one of the "original five"
subclasses of TFNP, and has been extensively studied due to the
strong connections between its defining problem — the pigeonhole
principle — and problems in cryptography, extremal
combinatorics, proof complexity, and other fields. However,
despite its importance, PPP appears to be less robust than other
important TFNP subclasses. In particular, Buss and Johnson
conjectured that PPP is not closed under Turing reductions and
called for a black-box separation to provide evidence for this
conjecture, which was further highlighted by Daskalakis in his
recent IMU Abacus Medal Lecture. This is in contrast to all other
major TFNP subclasses, which are known to be closed under Turing
reductions.
In this talk we will discuss our recent work showing that PPP
is indeed not Turing-closed in the black-box setting,
affirmatively resolving the above question and providing
strong evidence for the conjecture. Our proof
requires developing new tools for PPP lower bounds, and creates
new connections between PPP and the theory of pseudo-expectation
operators used for Sherali-Adams and Sum-of-Squares lower bounds
in proof complexity.
This talk will be mostly self-contained, assuming only familiarity with NP.
Based on joint work with Stefan Grosser, Toniann Pitassi, and Robert Robere.
-
Friday Apr 26 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
A crash course in answer-set programming
(video)
(Markus Hecher, MIT)
In this talk we briefly showcase the answer set programming
(ASP) framework and demonstrate some use cases for rapid
prototyping. After introducing basic notation and extended
syntactic sugar, we will discuss the computational complexity of
different fragments of interest. In the propositional (ground)
case, ASP reaches up to the second level of the polynomial
hierarchy (third level for optimization). Since ASP enables the
use of first-order like variables, there are also fragments
reaching beyond NEXPTIME. In practice, this results in the
so-called ASP grounding bottleneck, as practical systems
typically comprise two phases: during grounding, first-order-like
variables are instantiated with every potential domain
value, which is then followed by a (ground) solving
phase. However, if predicate arities are bounded (a practically
well-motivated restriction), the complexity only increases by one
level in the polynomial hierarchy compared to the ground case. Based on
these complexity results, we will briefly discuss recent works
that propose alternative grounding techniques by reducing from
simpler non-ground fragments to harder ground fragments.
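To illustrate the grounding bottleneck, here is a tiny Python sketch of what naive grounding does to a single rule (a hypothetical toy representation; real grounders are much smarter):

    from itertools import product

    def ground(rule_vars, domain):
        """Naively instantiate every first-order-like variable of a rule
        with every domain element: |domain| ** len(rule_vars) instances."""
        return list(product(domain, repeat=len(rule_vars)))

    # A rule with 3 variables over a domain of 100 constants already
    # grounds to a million instances.
    print(len(ground(["X", "Y", "Z"], range(100))))  # 1000000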
-
Thursday Apr 25 at 16:00 (note the time!)
in seminar room E:1426, Ole Römers väg 3, Lund University,
and on Zoom
Verifying a SAT solver from ground up
(video)
(Mathias Fleury, Albert-Ludwigs-Universität Freiburg)
In this seminar, I will present my verified SAT solver, IsaSAT,
starting from an abstract CDCL calculus, through a more
pragmatic version PCDCL, down to reasonably efficient code using
a refinement chain. In the first part of the presentation, I
will present the general approach, and in the second part, I will
show in more detail how the refinement is done and where the
critical parts are in which performance is lost.
-
Tuesday Apr 23 at 14:00
on Zoom
Compressing CFI graphs and lower bounds for Weisfeiler-Leman refinements
(video)
(Daniel Neuen, Universität Bremen)
The k-dimensional Weisfeiler-Leman (k-WL) algorithm is a simple
combinatorial algorithm that was originally designed as a graph
isomorphism heuristic. It naturally finds applications in
Babai's quasipolynomial time isomorphism algorithm, practical
isomorphism solvers, and algebraic graph theory. However, it
also has surprising connections to other areas such as logic,
proof complexity, combinatorial optimization, and machine
learning.
The algorithm iteratively computes a coloring of the k-tuples of
vertices of a graph. Since Fürer's linear lower bound [ICALP 2001], it has been an open question whether there is a
super-linear lower bound for the iteration number for k-WL on
graphs. In this talk, I present a novel method of compressing
CFI graphs that leads to an Ω(n^{k/2})
lower bound for all k.
Joint work with Martin Grohe, Moritz Lichter and Pascal Schweitzer
(published at FOCS 2023).
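For readers new to the algorithm, here is a minimal Python sketch of the k = 1 case, plain colour refinement (k-WL applies the same refinement scheme to k-tuples of vertices):

    from collections import Counter

    def colour_refinement(adj):
        """Iteratively refine vertex colours of a graph given as an
        adjacency dict {v: iterable of neighbours} until stable."""
        colour = {v: 0 for v in adj}       # start with the uniform colouring
        for _ in range(len(adj)):          # stabilizes within |V| rounds
            # New colour = old colour plus multiset of neighbour colours.
            sig = {v: (colour[v],
                       tuple(sorted(Counter(colour[u] for u in adj[v]).items())))
                   for v in adj}
            relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new = {v: relabel[sig[v]] for v in adj}
            if new == colour:
                break
            colour = new
        return colour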
-
Friday Mar 1 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
On the power of interactive proofs for learning
(video)
(Ninad Rajgopal, University of Cambridge)
We study interactive proof systems for verifying the results of
agnostic learning tasks, a study recently initiated by
Goldwasser, Rothblum, Shafer, and Yehudayoff [GRSY21] as
PAC-verification. Here, for a given target hypothesis class H
and for any unknown n-variate Boolean function f, the verifier
who only gets random example access to f (over the uniform
distribution) interacts with an untrusted prover who has query
access to f, to output a hypothesis that is as close to f as an
optimal hypothesis from H, up to a small error. Such proof
systems enable a verifier to delegate an agnostic learning task
to an untrusted prover, with the goal of verifying the result
using far fewer random examples and/or less running time than
actually performing the learning (using queries) from scratch.
In this talk, I will present delegation proof systems for
learning certain Boolean hypothesis classes that are central to
the study of learning theory. Firstly, we show an interactive
proof for learning all the ε-heavy Fourier coefficients of a
function using poly(1/ε) many examples (i.e., Goldreich-Levin
theorem), whereas the prover uses poly(n/ε) many membership
queries, giving a verification speed-up that is independent of
the input length n. Next, we show such interactive proofs for
learning the class AC0[2] building on the work by
Carmosino-Impagliazzo-Kabanets-Kolokolova [CIKK17], where the
verifier uses quasi-polynomially many random examples. In
contrast, this class has notoriously resisted anything beyond
trivial brute-force learning, even for constructing realisable
learners (without a prover) from random examples. Finally, we
show that if we do not insist on an efficient honest prover,
then the model becomes trivial: even P/poly can be learnt using
O(1) many examples.
This is joint work with Tom Gur, Mohammad Mahdi Jahanara,
Mohammad Mahdi Khodabandeh, Bahar Salamatian, and Igor Shinkar,
and will appear at STOC 2024.
-
Tuesday Feb 13 at 16:00 (note the time!)
on Zoom
The weak pigeonhole principle and the complexity of explicit constructions
(video)
(Oliver Korten, Columbia University)
The probabilistic method is a ubiquitous tool used to establish
the existence of interesting and useful combinatorial objects,
for example strong error correcting codes and expander graphs. A
drawback of this method is that it is typically
non-constructive: it indicates no efficient deterministic means
by which to construct the object of interest, and in many cases
replacing the non-constructive argument by a so-called "explicit
construction" requires many years of hard-fought mathematical
advances. In this talk I will discuss recent work establishing a
structural theory for the complexity of such explicit
construction problems. This theory is based around a simple
search problem called "Range Avoidance," where a function is
given from a small set to a much larger one, and a value outside
the range is sought. We will establish that most of the
interesting and unsolved explicit construction problems fall
naturally into a complexity class defined by this problem, and
that one of the most famous among these is "complete" (roughly
in the sense of NP completeness). We will also explore some
subsequent results connecting Range Avoidance to classical
topics in complexity theory including derandomization and
boolean circuit lower bounds.
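To pin the problem down, here is a brute-force Python sketch of Range Avoidance, identifying bit strings with integers (the exponential scan is precisely what an explicit construction would have to avoid):

    def avoid(f, n):
        """Given f mapping n-bit strings into (n+1)-bit strings, return
        an (n+1)-bit string outside the range of f. Such a string exists
        by counting, but this search takes 2^n evaluations of f."""
        seen = {f(x) for x in range(2 ** n)}
        for y in range(2 ** (n + 1)):
            if y not in seen:
                return y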
-
Monday Jan 29 at 16:00 (note the time!)
on Zoom
Fourier growth of communication protocols for XOR functions
(video)
(Uma Girish, Princeton University)
Fourier growth of a Boolean function refers to the growth of the
sum of absolute values of the level-k Fourier
coefficients. Intuitively, functions with small Fourier growth
cannot aggregate many weak signals in the input to obtain a
considerable effect on the output. Upper bounds on the Fourier
growth, even for the first few levels, have important
applications in pseudorandomness, learning theory, circuit lower
bounds and quantum-versus-classical separations. Tight bounds on
the Fourier growth have been extensively studied for various
computational models, including AC0 circuits, branching
programs, decision trees and parity decision trees.
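In symbols, the level-k Fourier growth of f is L_{1,k}(f) = Σ_{|S|=k} |f̂(S)|. The following brute-force Python sketch (illustrative only, exponential in n) computes it for small examples:

    from itertools import product

    def level_k_growth(f, n, k):
        """Sum of |f_hat(S)| over all |S| = k, for f: {0,1}^n -> {-1,+1}."""
        points = list(product([0, 1], repeat=n))
        total = 0.0
        for S in points:
            if sum(S) != k:
                continue
            # f_hat(S) = E_x[ f(x) * (-1)^{<S,x>} ]
            coeff = sum(f(x) * (-1) ** sum(s * b for s, b in zip(S, x))
                        for x in points) / 2 ** n
            total += abs(coeff)
        return total

    parity = lambda x: (-1) ** sum(x)    # all Fourier mass at the top level
    print(level_k_growth(parity, 3, 3))  # 1.0
    print(level_k_growth(parity, 3, 1))  # 0.0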
We study the Fourier growth of functions associated with
communication protocols. In our setting, Alice gets a bitstring
x and Bob gets a bitstring y, and together they wish to compute
a Boolean function that only depends on x+y (the point-wise
XOR of Alice's and Bob's inputs) while minimizing the amount of
communication. Such communication problems are referred to as
XOR functions and are central in the field of communication
complexity. If a protocol C computes an XOR function, then the
output C(x,y) of the protocol is a function of x + y. This
motivates us to study the XOR-fiber H of a communication
protocol C defined by H(z) := E[ C(x,y) | x + y = z]. XOR-fibers
of communication protocols form a powerful class of functions,
which includes decision trees and parity decision trees. Proving
tight bounds on the Fourier growth of XOR-fibers has
applications to the Gap-Hamming problem and improved quantum
versus classical separations in communication complexity.
In this talk, we present improved Fourier growth bounds for the
XOR-fibers of randomized communication protocols that
communicate at most d bits. For the first level, we show a tight
O(sqrt{d}) bound. For the second level, we show an improved
O(d^{3/2}) bound. We conjecture that the optimal bound is
O(d·polylog(n)), but this remains an open question.
Our proof relies on viewing the protocol and its Fourier
spectrum as a martingale. One crucial ingredient we use to
control the step sizes is a spectral notion of k-wise
independence. Loosely speaking, this corresponds to sets such
that the k-th moments of the uniform distribution on the set are
well-behaved in all directions. We show how imposing spectral
k-wise independence on Alice's and Bob's sets allows us to prove
bounds on the level-k Fourier growth of XOR-fibers. We also
provide a way of adaptively partitioning a large set into a few
spectrally k-wise independent sets.
-
Friday Jan 19 at 14:00 on Zoom
TV distance estimation is as easy as probabilistic inference
(video)
(Arnab Bhattacharyya, National University of Singapore)
We discuss the algorithmic problem of computing the total
variation (TV) distance between two high-dimensional probability
distributions. Two highlights are:
-
The problem of exactly computing the TV distance between two
product distributions is #P-complete. This is in stark
contrast with other distance measures such as KL, chi-square,
and Hellinger, which tensorize over the marginals, leading to
efficient algorithms (see the sketch after this list).
-
We show a novel connection between TV distance estimation and
probabilistic inference. In particular, we present an
efficient, structure-preserving reduction from relative
approximation of TV distance to probabilistic inference over
directed graphical models. This reduction leads to a fully
polynomial randomized approximation scheme (FPRAS) for
estimating TV distances between distributions that are defined
by Bayes nets of bounded treewidth. Our approach employs a new
notion of partial couplings of high-dimensional distributions,
which might be of independent interest.
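The contrast in the first highlight can be seen in a few lines of Python, sketched here for product distributions on {0,1}^n given by their per-coordinate probabilities of a 1:

    import math
    from itertools import product

    def tv_bruteforce(p, q):
        """Exact TV distance by summing over all 2^n outcomes --
        exponential, in line with the #P-hardness of exact computation."""
        n = len(p)
        total = 0.0
        for x in product([0, 1], repeat=n):
            px = math.prod(p[i] if x[i] else 1 - p[i] for i in range(n))
            qx = math.prod(q[i] if x[i] else 1 - q[i] for i in range(n))
            total += abs(px - qx)
        return total / 2

    def hellinger_squared(p, q):
        """Squared Hellinger distance tensorizes over the marginals, so it
        takes linear time: H^2 = 1 minus the product of the per-coordinate
        Bhattacharyya coefficients."""
        bc = 1.0
        for pi, qi in zip(p, q):
            bc *= math.sqrt(pi * qi) + math.sqrt((1 - pi) * (1 - qi))
        return 1 - bc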
Joint work with Sutanu Gayen (IIT Kanpur), Kuldeep Meel (Toronto &
NUS), Dimitrios Myrisiotis (NUS), A. Pavan (Iowa State), and
N.V. Vinodchandran (U. Nebraska Lincoln).
-
Monday Jan 15 at 14:00
in Auditorium No. 3, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
The planted clique conjecture holds for unary Sherali-Adams
(video)
(Kilian Risse, EPFL)
We show that the planted clique conjecture holds for any algorithm
captured by the linear programming hierarchy Sherali-Adams, if the
hierarchy is restricted to represent numbers in unary instead of the
usual binary encoding. In a bit more detail, we show that unary
Sherali-Adams requires size n^{Ω(k)} to rule out the existence of
an n^{0.1}-clique in an Erdős-Rényi random graph whose maximum
clique is of size k ≤ 2 log n.
In the first hour of the seminar we survey results of a similar flavor,
give a high-level proof outline and introduce some useful combinatorial
concepts. In the second, more technical, part of the seminar we then
show how these concepts interact and, if time permits, try to flesh out
some details of the proof of key lemmas.
Based on joint work with Susanna de Rezende and Aaron Potechin that
appeared in FOCS 2023.
-
Friday Jan 12 at 14:00 on Zoom
Proving UNSAT in zero knowledge
(video)
(Ning Luo, Northwestern University)
Zero-Knowledge (ZK) protocols have been extensively researched
in recent years, with a focus on finding efficient protocols for
problems in NP, such as proving the satisfiability of a Boolean
formula. However, less attention has been given to proving the
unsatisfiability of a given formula without revealing any
related proofs or even the formula itself. In this talk, I will
talk about our work on a highly efficient zero-knowledge
protocol for proving the unsatisfiability of Boolean formulas in
propositional logic. Our protocol is based on polynomial
equivalence checking, which allows for the verification of a
resolution proof of unsatisfiability in a zero-knowledge
setting. We have implemented this protocol and have demonstrated
its effectiveness by checking the unsatisfiability of real-world
formulae used for verifying system drivers within just 300
seconds. This work was published in CCS 2022 and was awarded
a distinguished paper award. In addition, I will talk about our
recent progress that extended the ZKUNSAT framework to handle
SMT formulas.
MIAO seminars autumn 2023
-
Tuesday Dec 19
at 14:00 on Zoom
Top-down lower bounds for depth-4 circuits
(video)
(Anastasia Sofronova, EPFL)
We present a top-down lower-bound method for depth-4 Boolean
circuits. In particular, we give a new proof of the well-known result
that the parity function requires depth-4 circuits of size exponential
in n^{1/3}. Our proof is an application of robust sunflowers and block
unpredictability. This is joint work with Mika Göös, Artur Riazanov
and Dmitry Sokolov.
-
Thursday Dec 7 at 14:00
in seminar room N116B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Recursive small-step multi-agent A*: A faster exact solver for Dec-POMDPs
(video)
(Wietze Koops, Radboud University)
Decentralized partially observable Markov decision processes
(Dec-POMDPs) model decentralized multi-agent decision-making under
stochastic dynamics and partial observability. We present recursive
small-step multi-agent A* (RS-MAA*), an exact algorithm that optimizes
the expected reward in Dec-POMDPs. RS-MAA* builds on multi-agent A*
(MAA*), an algorithm that finds policies by exploring a search tree,
but tackles two major scalability concerns. First, we employ a
modified, small-step variant of the search tree that avoids the double
exponential outdegree of the classical formulation. Second, we use a
tight and recursive heuristic that we compute on-the-fly, thereby
avoiding an expensive precomputation. The resulting algorithm is
conceptually simple, yet it shows superior performance on a rich set
of standard benchmarks.
This talk is based on: Wietze Koops, Nils Jansen, Sebastian Junges,
and Thiago D. Simão. Recursive Small-Step Multi-Agent A* for
Dec-POMDPs. IJCAI 2023.
-
Thursday Nov 16 at 14:00
in seminar room E:1145, Ole Römers väg 3, Lund University,
and on Zoom
Constraint solvers, propagators and Galois connections
(video)
(Justin Pearson, Uppsala University)
Constraint Programming is an established paradigm for solving hard
real-world combinatorial problems with industrial-strength solvers.
Crucial for the efficient implementation of a solver is the notion
of propagation and propagators. In constraint programming, modellers
have access to high-level combinatorial abstractions, known as global
constraints. Global constraints come with a propagator that
efficiently removes potential non-solutions from the search tree.
Typically, constraint solvers work on an abstract representation of
the solution space, which means that propagation is not complete, and
must be interleaved with search. Further, specifying what a propagator
should do on an abstract representation is often non-trivial.
In this talk, I will give an introduction to constraint programming,
propagators, and talk about ongoing work using Galois connections to
understand propagation and abstract clause learning.
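As a toy illustration of what a propagator does, here is a Python sketch of one bounds-propagation step for the constraint x + y = z on interval domains (real global-constraint propagators are of course far more sophisticated):

    def propagate_sum(dx, dy, dz):
        """Shrink the interval domains (lo, hi) of x, y, z under x + y = z,
        removing values that cannot occur in any solution."""
        xlo, xhi = dx
        ylo, yhi = dy
        zlo, zhi = dz
        dz = (max(zlo, xlo + ylo), min(zhi, xhi + yhi))
        dx = (max(xlo, zlo - yhi), min(xhi, zhi - ylo))
        dy = (max(ylo, zlo - xhi), min(yhi, zhi - xlo))
        return dx, dy, dz

    # x, y in [0, 10] and z in [12, 14] forces x, y >= 2.
    print(propagate_sum((0, 10), (0, 10), (12, 14)))
    # ((2, 10), (2, 10), (12, 14))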
-
Wednesday Oct 25 at 14:00
in seminar room N116B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Randomness gives little advantage for decision tree size
(video)
(Yogesh Dahiya, Institute of Mathematical Sciences, Chennai)
Nisan's classic result [SICOMP '91] shows that the deterministic
decision tree depth complexity of a total Boolean function is at
most the cube of its randomized decision tree depth complexity.
However, the role of randomness in decision tree size complexity
has not been explored.
Adding to Nisan's result, we show that the logarithm of the
deterministic decision tree size complexity of any total Boolean
function is at most the fourth power of the logarithm of its
bounded-error randomized decision tree size complexity, up to a
polylogarithmic factor in the input dimension. Drawing an analogy, one
can associate the depth of a decision tree with time complexity and
its size with space complexity. Consequently, Nisan's result can be
viewed as derandomizing time, while our result can be seen as
derandomizing space.
Our result has several implications for
derandomizing query complexity in more generalized variants of
decision trees. Specifically, we show that for total Boolean
functions:
-
The deterministic AND-OR query complexity is at most the fourth
power of its randomized counterpart, ignoring a polylog n
factor.
-
The deterministic AND (OR) query complexity is at most the cube of
its randomized counterpart, ignoring a polylog n factor. This resolves
an open question by Knop, Lovett, McGuire, and Yuan [SIGACT
News '21].
-
Combining our result with Ehrenfeucht and Haussler's [Information and
Computation '89] work, we also get that the class of functions
computable by randomized decision trees of polynomial size is
PAC-learnable in quasi-polynomial time.
To obtain our main result on
decision tree size, we use as an intermediate measure the block number
of a Boolean function, studied first by Midrijanis [arXiv '05], which
can be thought of as a counting analogue of block sensitivity of a
Boolean function that played a central role in Nisan's aforementioned
result.
This is joint work with Arkadev Chattopadhyay, Nikhil Mande, Jaikumar
Radhakrishnan, and Swagato Sanyal that appeared in STOC 2023.
-
Thursday Oct 19 at 14:00
in seminar room N116A, Universitetsparken 1, University of Copenhagen,
and on Zoom
Constant-depth circuits vs. monotone circuits
(video)
(Bruno Pasqualotto Cavalar, University of Warwick)
We establish new separations between the power of monotone and general (non-monotone) Boolean circuits:
-
For every k ≥ 1, there is a monotone function in AC0
(constant-depth poly-size circuits) that requires monotone
circuits of depth Ω(log^k n). This significantly extends a
classical result of Okol’nishnikova (1982) and Ajtai and
Gurevich (1987).
-
For every k ≥ 1, there is a monotone function in AC0[⊕]
(constant-depth poly-size circuits extended with parity gates)
that requires monotone circuits of size exp(Ω(log^k n)). This
makes progress towards a question posed by Grigni and Sipser
(1992).
Finally, we revisit our separation result against monotone
circuit size and investigate the limits of our approach, which
is based on a monotone lower bound for constraint satisfaction
problems established by Göös et al. (2019) via lifting
techniques. Adapting results of Schaefer (1978) and Allender et
al. (2009), we obtain an unconditional classification of the
monotone circuit complexity of Boolean-valued CSPs via their
polymorphisms. This result and the consequences we derive from
it might be of independent interest.
-
Thursday Oct 12 at 14:00
in seminar room N116A, Universitetsparken 1, University of Copenhagen
Approximate lower bounds arguments
(Leonid Reyzin, Boston University)
Suppose a prover, in possession of a large body of valuable
evidence, wants to quickly convince a verifier by presenting
only a small portion of the evidence.
We define an Approximate Lower Bound Argument, or ALBA, which allows
the prover to do just that: to succinctly prove knowledge of a large
number of elements satisfying a predicate (or, more generally,
elements of a sufficient total weight in case of a weighted
predicate). The argument is approximate because there is a small gap
between what the prover actually knows and what the verifier is
convinced the prover knows. This gap enables very efficient schemes.
We present noninteractive constructions of ALBA in the random
oracle and uniform reference string models and show that our
proof sizes are nearly optimal. We also show how our
constructions can be made particularly communication-efficient
when the evidence is distributed among multiple provers, as in
the blockchain setting.
We demonstrate two very different applications of ALBAs: for
large-scale decentralized signatures and for proving universal
composability of succinct proofs.
Joint work with Pyrros Chaidos, Aggelos Kiayias, and Anatoliy Zinovyev.
MIAO seminars spring 2023
During the spring semester of 2023,
most members of the research group were away for extended visits at the
Simons Institute for the Theory of Computing
at UC Berkeley attending the
Meta-Complexity program
and/or the
extended reunion of the program
Satisfiability: Theory, Practice, and Beyond.
In addition to the seminar activities during these programs, we had some seminars of our own as listed below.
-
Tuesday Jun 6 at 14:00
in Auditorium No. 7, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Is your combinatorial search algorithm telling the truth?
(video)
(Ciaran McCreesh, University of Glasgow)
How do you know whether your combinatorial search algorithm is
implemented correctly? You could try testing it, but even if
you're convinced you've done a thorough job, will anyone else
believe you? Another possibility is "correct by construction"
software created using formal methods—but these methods are
far from being able to approach the complexity or performance
of modern satisfiability or constraint programming solvers. In
this talk I'll tell you about a third option, called
proof logging or certifying.
The idea is that, alongside a solution,
an algorithm must produce a mathematical proof in a standard
format that demonstrates that the solution is correct. This
proof can be verified by an independent proof checker, which
should be much simpler, and thus easier to trust. The key
challenge in getting this to work is to find a proof language
which is both simple to verify, and expressive enough to cover
a wide range of solving techniques with very low
overheads. It's not obvious that such a language should even
exist, but I'll argue that the cutting planes proof system with a
dominance-based extension rule might be exactly what we need:
even though this proof system has no notion of what vertices,
graphs, or even integers are, it is strong enough to verify
the reasoning used in state-of-the-art algorithms for problems
like subgraph isomorphism, clique, and maximum common
connected subgraph, and even in constraint programming
solvers.
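To give a small taste of the proof system, here is how cutting planes simulates resolving the clauses x ∨ y and x ∨ ¬y, written as 0-1 inequalities with ȳ = 1 − y:

    \[
      (x + y \ge 1) + (x + \bar{y} \ge 1)
      \;\Rightarrow\; 2x + y + \bar{y} \ge 2
      \;\Rightarrow\; 2x \ge 1
      \;\Rightarrow\; x \ge \lceil 1/2 \rceil = 1 ,
    \]

using y + ȳ = 1 in the middle step and division with rounding in the last.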
-
Thursday Jun 1 at 14:00
in Auditorium No. 3, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Lower bound functions for optimal classical planning
(video)
(Malte Helmert, University of Basel)
Optimal classical planning is the problem of finding shortest paths in
factored state spaces, i.e., directed graphs that are compactly
described by logic-based representations. The leading approaches are
based on deriving lower-bound distance estimators a.k.a. "admissible
heuristics" from these logic-based representations.
For a long time, the design of admissible heuristics was considered an
ad-hoc activity with very little supporting theory. Modern developments,
starting with Katz and Domshlak's seminal work on cost partitioning of
abstraction heuristics, have changed this perception dramatically,
shaping a comprehensive theory of admissible heuristics. In this talk we
study these developments and highlight some interesting connections to
linear and integer programming.
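The key fact behind cost partitioning fits on one line: if the operator cost function c is split into c_1, ..., c_k with c_1(o) + ... + c_k(o) ≤ c(o) for every operator o, and each h_i is admissible with respect to c_i, then

    \[
      h_1(s) + \dots + h_k(s) \;\le\; h^*(s) \quad \text{for every state } s ,
    \]

so the sum of the component heuristics is itself admissible.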
-
Tuesday Feb 28 at 14:00
on Zoom
Introducing Intel® SAT solver
(video)
(Alexander Nadel, Intel & Technion)
We introduce Intel® SAT Solver (IntelSAT) — an
open-source CDCL SAT solver, recently written from scratch and
released at SAT 2022 conference. IntelSAT is optimized for
applications which generate many mostly satisfiable incremental
SAT queries.
The first part of our talk will include a brief self-contained
introduction into CDCL SAT solving, required to understand the
algorithms in IntelSAT, followed by a high-level review of these
algorithms and experimental results. In the second part of our
talk, we will discuss more gory details along the following
lines.
IntelSAT applies the following Incremental Lazy Backtracking
(ILB) principle: in-between incremental SAT queries, backtrack
only when necessary and to the highest possible decision
level. ILB is enabled by a novel reimplication procedure, which
can reimply an assigned literal at a lower level without
backtracking. Reimplication also helped us to restore the
following two properties, lost in the modern solvers with the
introduction of chronological backtracking: no assigned literal
can be implied at a lower level, conflict analysis always starts
with a clause falsified at the lowest possible level. In
addition, we apply some new heuristics. Integrating IntelSAT
into the MaxSAT solver TT-Open-WBO-Inc resulted in a significant
performance boost on incomplete unweighted MaxSAT Evaluation
benchmarks and improved the state-of-the-art in anytime
unweighted MaxSAT solving.
-
Monday Feb 6 at 10:00 (note the time!)
in room A107, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
The bridge between proof complexity and TFNP
(video of first half of seminar)
(Siddhartha Jain, UT Austin)
It is well-known that Resolution proofs can be efficiently
simulated by Sherali-Adams (SA) proofs. We show, however, that
any such simulation needs to exploit huge coefficients:
Resolution cannot be efficiently simulated by SA when the
coefficients are written in unary. We also show that Reversible
Resolution (a variant of MaxSAT Resolution) cannot be
efficiently simulated by Nullstellensatz (NS).
These results have consequences for complexity classes of NP
search problems that seem surprising at first blush. In
particular, the result about Resolution vs Sherali Adams
directly implies that the class PLS is not contained in the
class PPADS with respect to an oracle. Together with a couple
more observations, we showed all remaining oracle separations
between classical TFNP classes introduced in the 1990s.
This is joint work with Mika Göös, Alexandros Hollender, Gilbert Maystre, William Pires, Robert Robere & Ran Tao.
The objective of this talk is to exhibit the tight connection
between these two areas, which was made more explicit by a
recent paper of Buss, Fleming & Impagliazzo. We will also see
how we can use tools from query complexity to answer questions
in proof complexity and potentially vice versa. We will also
list some open problems in both languages: Proof complexity and
TFNP.
-
Tuesday Jan 31 at 14:00
in Auditorium No. 8, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Combinatorial problems in algorithmic cheminformatics
(video)
(Jakob Lykke Andersen, University of Southern Denmark)
For the modelling and computational analysis of chemistry on the
level of systems we have developed a range of graph-based
methods. Molecules are modelled by labelled graphs, and graph
transformation rules model generalised reactions. This is used
as the basis for a chemical graph transformation system that can
be used to automatically generate reaction networks. For
analysing the behaviour of reaction networks we have developed a
comprehensive pathway model, based on a generalisation of
network flows from ordinary digraphs to directed hypergraphs.
In this presentation we will take a tour through the algorithmic
landscape underpinning this modelling methodology, and explore
the problems of graph morphism enumeration, graph
canonicalization, and integer hyperflow enumeration, all
problems where no theoretically efficient algorithms are known.
-
Monday Jan 9 at 14:00 on Zoom
Cut generation procedures via decision diagrams
(video)
(Margarita Paz Castro, Pontificia Universidad Católica de Chile)
Decision diagrams (DDs) are graphical structures that can
encode complex combinatorial problems as network flow
problems. This talk explains how to leverage this network flow
reformulation to create valid inequalities for integer
programming problems. We review the main components behind
several cutting plane algorithms based on DDs and present
recent advances in the field.
MIAO seminars autumn 2022
-
Wednesday Dec 7 at 15:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
Refined core relaxation for core-guided MaxSAT solving
(video)
(Hannes Ihalainen, University of Helsinki)
Maximum satisfiability (MaxSAT) is a viable approach to solving
NP-hard optimization problems. In the realm of core-guided
MaxSAT solving—one of the most effective MaxSAT solving
paradigms today—algorithmic variants employing so-called soft
cardinality constraints have proven very effective. In this
work, we propose to combine weight-aware core extraction (WCE)—a
recently proposed approach that enables relaxing multiple cores
instead of a single one during iterations of core-guided
search—with a novel form of structure sharing in the
cardinality-based core relaxation steps performed in core-guided
MaxSAT solvers. In particular, the proposed form of structure
sharing is enabled by WCE, which has so far not been widely
integrated into MaxSAT solvers, and allows for introducing fewer
variables and clauses during the MaxSAT solving process. Our
results show that the proposed techniques allow for avoiding
potential overheads in the context of soft cardinality
constraint based core-guided MaxSAT solving both in theory and
in practice. In particular, the combination of WCE and structure
sharing improves the runtime performance of a state-of-the-art
core-guided MaxSAT solver implementing the central OLL
algorithm.
The talk is based on:
-
Hannes Ihalainen, Jeremias Berg, Matti Järvisalo:
Refined Core Relaxation for Core-Guided MaxSAT Solving. CP 2021.
-
Wednesday Dec 7 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
MaxSAT-based bi-objective Boolean optimization
(video)
(Christoph Jabs, University of Helsinki)
We explore a maximum satisfiability (MaxSAT) based approach to
bi-objective optimization. Bi-objective optimization refers to
the task of finding so-called Pareto-optimal solutions in terms
of two objective functions. Bi-objective optimization problems
naturally arise in various real-world settings. For example, in
the context of learning interpretable representations, such as
decision rules, from data, one wishes to balance between two
objectives, the classification error and the size of the
representation. Our approach is generally applicable to
bi-objective optimization problems that allow for propositional
encodings. The approach makes heavy use of incremental Boolean
satisfiability (SAT) solving and draws inspiration from modern
MaxSAT solving approaches. In particular, we describe several
variants of the approach which arise from different approaches
to MaxSAT solving. In addition to computing a single
representative solution per each point of the Pareto front, the
approach allows for enumerating all Pareto-optimal solutions. We
empirically compare the efficiency of the approach to recent
competing approaches, showing practical benefits of our approach
in the contexts of learning interpretable classification rules
and bi-objective set covering.
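To fix the central notion, here is a tiny Python sketch of Pareto-optimality for two minimization objectives (illustrative only; the algorithms in the talk find the front without enumerating all solutions):

    def pareto_front(points):
        """Keep the points (o1, o2) that no other point dominates, i.e.
        no different point is at least as good in both objectives."""
        return [p for p in points
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                           for q in points)]

    print(pareto_front([(1, 5), (2, 2), (3, 3), (5, 1)]))
    # [(1, 5), (2, 2), (5, 1)] -- (3, 3) is dominated by (2, 2)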
The talk is based on:
-
Christoph Jabs, Jeremias Berg, Andreas Niskanen, Matti Järvisalo:
MaxSAT-Based Bi-Objective Boolean Optimization. SAT 2022.
-
Tuesday Dec 6 at 14:00
in seminar room N116A&B, Universitetsparken 1, University of Copenhagen,
and on Zoom
The implicit hitting set approach and its instantiation for pseudo-Boolean optimization
(video)
(Matti Järvisalo and Jeremias Berg, University of Helsinki)
The so-called implicit hitting set (IHS) approach offers a
generic framework for developing practical algorithms for
solving various types of optimization problems, the decision
problems counterparts of which are NP-complete or presumably
even harder. With firm fundamental basis in so-called hitting
set duality, successful instantiations for IHS have been
developed for a range of computationally hard problems,
including different declarative optimization paradigms
(including MaxSAT, MaxSMT, answer set optimization,
finite-domain constraint optimization) as well as for various
specific problems setting (including causal discovery,
inconsistency analysis in (quantified) propositional logic, and
propositional abduction among others), by making use of
state-of-the-art integer programming systems and declarative
approaches to inconsistency analysis.
The goals of this two-part talk are to (1) provide a generic
introduction to IHS with a high-level survey on its successful
applications, and to (2) describe in more detail a
recently-developed practical IHS-instantiation in the realm of
pseudo-Boolean optimization (PBO, aka 0-1 integer linear programming).
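For reference, the generic IHS loop of part (1) can be sketched in a few lines of Python, with extract_core and min_hitting_set standing in for hypothetical oracles (say, a SAT solver and an IP solver):

    def implicit_hitting_set(extract_core, min_hitting_set):
        """Alternate between extracting unsatisfiable cores and computing
        a minimum-cost hitting set of all cores found so far.
        extract_core(hs) returns a core disjoint from hs, or None if
        relaxing the constraints in hs yields a feasible solution."""
        cores, hs = [], set()
        while True:
            core = extract_core(hs)
            if core is None:      # hs hits every source of inconsistency;
                return hs         # by hitting set duality it is optimal
            cores.append(core)
            hs = min_hitting_set(cores)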
The second part of the talk is based on:
-
Pavel Smirnov, Jeremias Berg, Matti Järvisalo:
Improvements to the Implicit Hitting Set Approach to Pseudo-Boolean Optimization. SAT 2022.
-
Pavel Smirnov, Jeremias Berg, Matti Järvisalo:
Pseudo-Boolean Optimization by Implicit Hitting Sets. CP 2021.
-
Thursday Dec 1 at 14:00
in seminar room N116A&B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Lower bounds for algebraic formulas
(video)
(Srikanth Srinivasan, Aarhus University)
Say we have a multivariate polynomial P(x_1,..., x_n). An
algebraic formula for P is just an algebraic expression for P with
nested additions and multiplications. A small formula for P implies an
efficient algorithm for evaluating P, and so a lower bound on the size
of any such expression implies that P is possibly hard to evaluate.
How would you show that your favourite polynomial P has no small
formula? In this talk, we will see a technique (building on works of
Nisan, Wigderson and Raz) for doing this that combines some linear
algebra with random restrictions, which are a classical tool in
circuit complexity. This helps us prove lower bounds for special kinds
of algebraic formulas, called set-multilinear formulas.
Based on joint work with Nutan Limaye (ITU) and Sébastien Tavenas
(USMB, Univ Grenoble). The paper can be found at
eccc.weizmann.ac.il/report/2021/094/.
-
Monday Nov 28 at 14:00 on Zoom
Combinatorial solving with provably correct results
(video and
slides )
(Bart Bogaerts, Vrije Universiteit Brussel;
Ciaran McCreesh, University of Glasgow;
and
Jakob Nordström, University of Copenhagen and Lund University)
Modern combinatorial optimization has had a major impact in
science and industry, but there is quite a poor scientific
understanding of how state-of-the-art algorithms, so-called
combinatorial solvers, work. More importantly, even mature
commercial solvers are known to sometimes produce wrong
results, which can be fatal for some types of applications.
One way to address this problem is to enhance combinatorial
solvers with proof logging, meaning that they output
not only a result but also a proof of correctness. One can
then feed the problem, result, and proof to a dedicated
proof checker to verify that there are no errors. Crucially,
such proofs should require low overhead to generate and be
easy to check, but should supply 100% guarantees of
correctness.
In this tutorial, we will survey recent progress on proof
logging techniques for Boolean satisfiability (SAT) solving,
pseudo-Boolean optimization, and constraint programming. We
will argue that moving from the clausal format employed in
SAT proof logging to so-called pseudo-Boolean reasoning using
0-1 integer linear constraints seems to hit a sweet spot
between on the one hand making proofs simple and easy to
verify and on the other hand providing sufficient expressive
power to support the sophisticated reasoning in more general
combinatorial optimization paradigms.
-
Friday Nov 25 at 14:00
in Auditorium No. 5, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
SoS degree lower bound for exact clique
(video
of first half of seminar)
(Shuo Pang, University of Copenhagen)
Consider the claim that "the graph G is w-clique-free", where w is a
fixed parameter greater than, say, 2 log n. We know by simple
counting that the claim is true with high probability over
G ∼ G(n,1/2), but how about proving it for the given sample
G — is it almost always hard?
A closely related algorithmic problem is called
planted clique. It asks to devise an efficient algorithm whose
acceptance rates differ significantly with respect to samples from
G(n,1/2) and from G(n,1/2,w), where the latter distribution
draws a graph from G(n,1/2) and then independently "plants" a
random w-clique. The current state-of-the-art polynomial-time
algorithm only succeeds when w is as large as Ω(sqrt(n)),
and its correctness relies on degree 2 Sum-of-Squares proofs
(SoS). This connection, among others, motivated the study of a
cleanly formulated proof-theoretic question: Can higher degree
SoS do better on average, i.e., prove "G is w-clique-free" for
significantly smaller w and most G from G(n,1/2)?
After a long line of work, it turns out w=sqrt(n) is
essentially the optimal value that SoS proofs of a reasonable
degree can achieve. I will review the recent strongest version
of this result, where SoS could fully employ all the
constraints, including the one for the objective value.
-
Wednesday Nov 23 at 10:00 (note the time!)
on Zoom
(Weighted) Pacose: An iterative SAT-based MaxSAT solver
(video
of first half of seminar)
(Tobias Paxian, Albert-Ludwigs-Universität Freiburg)
In this seminar we present our MaxSAT solver Pacose and its
algorithms. It uses our own polynomial watchdog (PW) encoding
and a dedicated algorithm with three fundamental optimizations,
leading to significantly smaller encodings and faster solving
times. We further introduce two preprocessing methods to improve
the solving of weighted partial MaxSAT problems: Generalized Boolean
Multilevel Optimization (GBMO), which splits suitable weighted MaxSAT
instances into smaller ones, and trimming of the MaxSAT instance
(TrimMaxSAT) by excluding unsatisfiable soft clauses.
The talk is based on:
-
Tobias Paxian, Sven Reimer, Bernd Becker:
Dynamic polynomial watchdog encoding for solving weighted MaxSAT. SAT 2018
-
Tobias Paxian, Pascal Raiola, Bernd Becker:
On preprocessing for weighted MaxSAT. VMCAI 2021
-
Monday Nov 21 at 14:00
seminar room N116A&B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Bounded depth proof for Tseitin formulas on the grid; revisited
(video of first half of seminar)
(Kilian Risse, KTH Royal Institute of Technology)
In this seminar we are going to study the Frege proof system: a Frege
refutation of a CNF formula F is a sequence of Boolean formulas, where
each formula is either a clause from F or follows from two previously
derived formulas according to some derivation rule. The final formula
in the sequence should be the constant false formula so that if the
derivation rules are sound, then we can conclude that F has no
satisfying assignment. The length of a refutation is the number of
formulas in the sequence, the depth is the maximum (logical) depth of
any formula occurring and, similarly, the line-size is the maximum
size of any formula in the sequence.
We consider Frege refutations restricted to depth d and line-size
M of the Tseitin formula defined over the n × n grid and show
that such refutations are of length exponential in
n / (log M)^{O(d)}. This improves upon a recent result of
[Pitassi et al. '21]. The key technical step is a
multi-switching lemma extending the switching lemma of
[Håstad '17] for a space of restrictions related to the
Tseitin contradiction.
The first hour includes a gentle introduction to the Frege proof system
and covers the necessary background to prove our lower bounds. In the
second hour we are going to start with a sketch of the single switching
lemma and, if time permits, we then cover the proof of the
multi-switching lemma.
Based on joint work with Johan Håstad. Preprint available at
arxiv.org/abs/2209.05839.
-
Monday Nov 7 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
The shortest even cycle problem is tractable
(video)
(Thore Husfeldt, IT University of Copenhagen and Lund University)
Given a directed graph as input, we show how to efficiently find a
shortest (directed, simple) cycle on an even number of vertices. As
far as we know, no polynomial-time algorithm was previously known
for this problem. In fact, finding any even cycle in a directed
graph in polynomial time was open for more than two decades until
Robertson, Seymour, and Thomas (Ann. of Math. (2) 1999) and,
independently, McCuaig (Electron. J. Combin. 2004; announced jointly
at STOC 1997) gave an efficiently testable structural
characterisation of even-cycle-free directed graphs.
Almost no knowledge of graph algorithms is necessary to
appreciate the result. The second hour will be more about
algebra and combinatorics than algorithms — polynomials,
matrix functions, and generating functions.
This is joint work with Andreas Björklund (Lund) and Petteri Kaski (Helsinki).
-
Tuesday Oct 18 at 14:00 at
DIKU øv-3-0-25, Universitetsparken 1,
University of Copenhagen and on Zoom
Exponential separations using guarded extension variables
(video)
(Emre Yolcu, Carnegie Mellon University)
I will talk about the complexity of proof systems augmenting
resolution with inference rules that allow, given a formula F
in conjunctive normal form, deriving clauses that are not
necessarily logically implied by F but whose addition to F
preserves satisfiability. When the derived clauses are
allowed to introduce variables not occurring in F, the systems
we will consider become equivalent to extended resolution. We
are concerned with the versions of these systems "without new
variables." They are called BC-, RAT-, SBC-, and GER-,
denoting respectively blocked clauses, resolution asymmetric
tautologies, set-blocked clauses, and generalized extended
resolution. Each of these systems formalizes some restricted
version of the ability to make assumptions that hold "without
loss of generality," which is commonly used informally to
simplify or shorten proofs. Except for SBC-, these systems are
known to be exponentially weaker than extended
resolution. They are, however, all equivalent to it under a
relaxed notion of simulation that allows the translation of
the formula along with the proof when moving between proof
systems. I will show how to take advantage of this fact to
construct formulas that separate RAT- from GER- and vice
versa. With a similar strategy, we can also separate SBC- from
RAT-. Additionally, I will briefly describe polynomial-size
SBC- proofs of the pigeonhole principle, which separates SBC-
from GER- by a previously known lower bound. These results
also separate the three systems from BC- since they all
simulate it. We thus obtain an almost complete picture of
their relative strengths.
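As a small concrete anchor for the first of these notions, here is a checker for the standard definition of blocked clauses, sketched by us in Python (literals are nonzero integers, with -v denoting the negation of v):

    def is_blocked(clause, lit, formula):
        # A clause C is blocked on a literal l in C with respect to a
        # CNF F if every resolvent of C with a clause of F on l is a
        # tautology; adding such a clause preserves satisfiability.
        assert lit in clause
        for other in formula:
            if -lit in other:
                resolvent = (set(clause) - {lit}) | (set(other) - {-lit})
                if not any(-x in resolvent for x in resolvent):
                    return False  # a non-tautological resolvent exists
        return True

    # Example: (1 or 2) is blocked on 1 w.r.t. {(-1, -2)}, since the
    # only resolvent {2, -2} is a tautology.
    print(is_blocked((1, 2), 1, [(-1, -2)]))  # True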
-
Wednesday Oct 5 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
Theoretical barriers for efficient proof search
(video of first half of seminar)
(Susanna F. de Rezende, Lund University)
The proof search problem is a central question in automated
theorem proving and SAT solving. Clearly, if a propositional
tautology F does not have a short (polynomial size) proof in a
proof system P, any algorithm that searches for P-proofs of F
will necessarily take super-polynomial time. But can proofs of
"easy" formulas, i.e., those that have polynomial size proofs,
be found in polynomial time? This question motivates the study
of automatability of proof systems. In this talk, we give an
overview of known non-automatability results, focusing on the
more recent ones, and present some of the main ideas used to
obtain them.
-
Tuesday Oct 4 at 14:00
in seminar room N116A&B, Universitetsparken 1, University of Copenhagen,
and on Zoom
Matrix multiplication and polynomial identity testing
(video)
(Robert Andrews, University of Illinois Urbana-Champaign)
Determining the complexity of matrix multiplication is a
fundamental problem of theoretical computer science. It is
popularly conjectured that ω, the matrix multiplication
exponent, equals 2. If true, this conjecture would yield fast
algorithms for a wide array of problems in linear algebra and
beyond. But what if ω > 2? In this talk, I will
describe how lower bounds on ω can be used to make
progress on derandomizing polynomial identity testing.
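As background, the matrix multiplication exponent can be defined as follows (standard material, not specific to this talk):

    \omega \;=\; \inf \bigl\{\, \tau : \text{two } n \times n \text{
    matrices can be multiplied using } O(n^{\tau}) \text{ arithmetic
    operations} \,\bigr\} ,
    % so trivially 2 <= \omega <= 3; Strassen's algorithm already gives
    % \omega <= \log_2 7 \approx 2.81, and the best bounds known today
    % place \omega below 2.38.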
-
Friday Sep 30 at 14:00 on Zoom
Towards an algorithmic theory of proof complexity
(video of first half of seminar)
(Albert Atserias, Universitat Politècnica de Catalunya)
A possibly unexpected by-product of the mathematical study of the
lengths of proofs, as is done in the field of propositional proof
complexity, is, I claim, that it may lead to new algorithmic
insights. To explain this, I will first recall the origins of
proof complexity as a field. Then I will explain why our current
understanding of certain proof systems could lead to new
algorithms. The key to this is that, for several proof systems of
practical use, the class of tautologies with low-complexity proofs
comes with a tight semantic characterization. Concretely, the
characterization states that a tautology fails to have a
low-complexity proof precisely when it is locally falsifiable, in
a precise technical sense of the term. One immediate derivative of
this is that, for those proof systems that admit such semantic
characterizations, the statement that a formula has a
low-complexity proof has: (1) low-complexity certificates when
true, and (2) low-complexity refutations when false. We illustrate
the point by discussing the recently discovered
subexponential-time algorithm that distinguishes the graphs that
are 3-colorable from those that are not even n^ε-colorable in
time exp(n^{1-2ε}), which beats all previously known approaches.
-
Monday Sep 19 at 14:00 on Zoom
On vanishing sums of roots of unity in polynomial calculus and sum-of-squares
(video of first half of seminar)
(Ilario Bonacina, Universitat Politècnica de Catalunya)
Vanishing sums of roots of unity can be seen as a natural
generalization of knapsack from Boolean variables to variables taking
values over the roots of unity. We show that these sums are hard to
prove for polynomial calculus and for sum-of-squares, both in terms of
degree and size. This talk is based on joint work with M. Lauria
and N. Galesi.
MIAO seminars spring 2022
-
Wednesday Jun 8 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University
Short proofs in strong proof systems
(Marijn Heule, Carnegie Mellon University)
The success of satisfiability solving presents us with an
interesting peculiarity: modern solvers can frequently handle
gigantic formulas while failing miserably on supposedly easy
problems. Their poor performance is typically caused by the
weakness of their underlying proof system—resolution. To
overcome this obstacle, we need solvers that are based on
stronger proof systems. Unfortunately, existing strong proof
systems—such as extended resolution or Frege systems—do not
seem to lend themselves to mechanization.
In this talk, we discuss some recently proposed strong proof
systems that are more suitable for mechanization. These proof
systems are surprisingly strong, even without the introduction
of new variables—a key feature of short proofs presented in
the proof-complexity literature. We demonstrate the strength
of these proof systems on two famous problems: the pigeonhole
principle and mutilated chessboards. For both problems, we
present proofs of linear size without new variables. We also
present a novel technique to find such proofs automatically
and show its usefulness on SAT Competition benchmarks. We
conclude with some theoretical and practical challenges.
-
Wednesday Jun 1 at 14:00
in Auditorium No. 2, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Certifying correctness for combinatorial algorithms by using pseudo-Boolean reasoning
(video)
(Stephan Gocht, Lund University and University of Copenhagen)
Although solving NP-complete problems is widely believed to
require exponential time in the worst case, state-of-the-art
algorithms are amazingly efficient for many NP-hard
optimisation problems. However, this is achieved through
highly sophisticated algorithms that are prone to
implementation errors which can cause incorrect results, even
for the best commercial tools. A promising approach to address
this problem is to use certifying algorithms that produce not
only the desired output but also a simple, machine-verifiable
certificate or proof of correctness of the output. By
verifying this proof with an external tool, we can guarantee
that the given answer is valid.
This talk will give an introduction to a new proof system that can
certify answers of algorithms for various combinatorial
problems, such as Boolean satisfiability (SAT) solving and
optimisation, constraint programming, and graph solving. This
proof system operates on 0-1 integer linear constraints,
generalising the cutting planes proof system. As a running
example, we will show how to certify the correctness of a
maximum matching algorithm.
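For orientation, a textbook rendering of the core cutting planes rules that such a 0-1 linear proof system builds on (our summary, not the talk's exact formalism) is:

    % Literal axioms, addition of two constraints, and division with
    % rounding (for a positive integer c dividing all coefficients):
    x \ge 0 , \qquad 1 - x \ge 0 ,
    \qquad
    \frac{\sum_i a_i x_i \ge A \qquad \sum_i b_i x_i \ge B}
         {\sum_i (a_i + b_i) x_i \ge A + B} ,
    \qquad
    \frac{\sum_i c \, a_i x_i \ge A}
         {\sum_i a_i x_i \ge \lceil A / c \rceil} .
    % Division is where integrality enters: since the x_i are 0-1
    % valued, the right-hand side can be rounded up.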
After the official seminar, there will be an optional second
part with more technical discussions. This second part will
demonstrate how to generate such machine-verifiable proofs in
practice and how to check them using the verifier VeriPB,
which I developed during my PhD project.
-
Monday May 30 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
CDCL versus resolution
(video)
(Marc Vinyals)
The effectiveness of the CDCL algorithm, the one most used to solve SAT in
practice, is complicated to understand, and so far one of the most
successful tools for that has been proof complexity. CDCL is easily
seen to be limited by the resolution proof system, and furthermore it
has been shown to be equivalent to resolution, in the sense that CDCL can
reproduce a given resolution proof with only polynomial overhead.
But the question of the power of CDCL with respect to resolution is
far from closed. To begin with, the previous equivalence is subject to
some assumptions, only some of which are reasonable in practice. In
addition, in a world where algorithms are expected to run in linear
time, a polynomial overhead is too coarse a measure.
In this talk we will discuss what breaks when we try to make the
assumptions more realistic, and how much of an overhead CDCL needs in
order to simulate resolution.
-
Tuesday Mar 22 at 14:00
in seminar room A107,
Hans Christian Ørsted Building, Universitetsparken 5, University of Copenhagen,
and on Zoom
Solving large-scale instances of a geometric set cover problem with coloring conflicts
(video of first half of seminar)
(Allan Sapucaia, University of Campinas)
When designing wireless networks, one wants to install
antennas ensuring that all important regions are
covered. There are many different secondary goals to consider
such as minimizing the number of antennas, or ensuring that
the network can still work even when some of the antennas are
down.
In this talk, we will discuss a geometric wireless network
design problem where we want to minimize the number of
antennas, while constrained by frequency assignment
conflicts. We will study an integer linear programming model for
this problem, and discuss how it can be improved using both
its geometric and graph-theoretical aspects. We show that the
problem can be efficiently solved in practice using
decomposition and conclude by providing theoretical
explanations for its performance.
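As a rough sketch of what such a model can look like, a generic conflict-constrained set cover formulation reads as follows (our own illustration, with placeholder names cov and Conflicts; the talk's actual model may differ):

    \min \sum_{a \in A} x_a
    \quad \text{s.t.} \quad
    \sum_{a \,:\, p \in \mathrm{cov}(a)} x_a \ge 1 \quad \forall p \in P ,
    \qquad
    x_a + x_b \le 1 \quad \forall \{a, b\} \in \mathrm{Conflicts} ,
    \qquad
    x_a \in \{0, 1\} ,
    % where A is the set of candidate antennas, P the set of regions to
    % be covered, and Conflicts the antenna pairs excluded by the
    % frequency assignment.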
MIAO seminars autumn 2021
-
Monday Dec 6 at 17:00 (note the time!)
Core-guided minimal correction set and core enumeration
(video)
(Nina Narodytska, VMware Research)
A set of constraints is unsatisfiable if there is no solution
that satisfies these constraints. To analyze unsatisfiable
problems, the user needs to understand where inconsistencies
come from and how they can be repaired. Minimal unsatisfiable
cores and correction sets are important subsets of constraints
that enable such analysis. In the first part of the talk, we
analyze core-guided MaxSAT algorithms and discuss how the cores
found by these algorithms are related to the cores of the
original unsatisfiable formula. Based on these results, in the
second part, we discuss a new algorithm for extracting minimal
unsatisfiable cores and correction sets. Our new solver
significantly outperforms several state-of-the-art algorithms on
common benchmarks when it comes to extracting correction sets
and compares favorably on core extraction.
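For reference, the two central notions have the following standard definitions (not specific to this talk): for an unsatisfiable constraint set F,

    M \subseteq F \text{ is a minimal unsatisfiable core (MUS) if } M
    \text{ is unsatisfiable but } M \setminus \{c\} \text{ is
    satisfiable for every } c \in M ;
    \quad
    C \subseteq F \text{ is a minimal correction set (MCS) if }
    F \setminus C \text{ is satisfiable but } F \setminus C' \text{ is
    unsatisfiable for every } C' \subsetneq C .
    % The notions are hitting-set duals: the MCSes of F are exactly the
    % minimal hitting sets of the collection of MUSes of F, and vice
    % versa.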
-
Monday Nov 22 at 14:00
TFNP: Collapses, separations, and characterisations
(video)
(Mika Göös, EPFL)
We discuss a range of results for total NP search problem classes (TFNP)
that includes new collapses of classes, black-box separations, and
characterisations using propositional proof systems. We will focus on a
surprising collapse result, EoPL = PLS ∩ PPAD, and see its full proof.
Joint work with Alexandros Hollender, Siddhartha Jain, Gilbert
Maystre, William Pires, Robert Robere, and Ran Tao.
-
Wednesday Nov 17 at 14:00
in E:1408, Ole Römers väg 3, Lund University and on Zoom
Decomposition approaches for a large-scale scheduling problem
(video of first half of seminar)
(Elina Rönnberg, Linköping University)
Generic state-of-the-art solvers for discrete optimisation are
exceptionally powerful tools that efficiently solve a variety of
problems. However, the scale and complexity of practically
relevant problems can sometimes render these solvers incapable
of even finding feasible solutions. In such cases, one
possibility is to develop solution approaches that exploit
problem structure to decompose the problem and then use the
generic solvers for addressing the resulting subproblems. The
success of such approaches relies on the subproblems being
efficiently solvable and on the information gained from them being
sufficient to solve the original problem.
In an industry-academia collaboration between Linköping
University and Saab Aeronautics, we address the scheduling of a
class of electronic systems in future aircraft. Briefly, this
problem can be described as a rich multiprocessor scheduling
problem that also includes the scheduling of a communication
network. Finding feasible solutions to practically relevant
instances of this problem is challenging.
In this talk, I will present two different ways of decomposing
the problem. The first relies on making a strong relaxation of
the problem and then applying a constraint generation
procedure. In this approach, both the relaxed problem and the
subproblem are solved by a mixed-integer programming solver. The
performance of the method is enhanced by an integration with
adaptive large neighbourhood search. The second decomposition
strategy is based on logic-based Benders decomposition. In this
case, the master problem is solved as a mixed-integer program
and the subproblem is formulated as a constraint program. The
computational results from the two approaches are compared. To
conclude the talk, the different ways of exploiting the problem
structure and efficiency of solvers will be discussed and we
will compare the properties of the resulting decomposition
approaches.
-
Joint BARC/MIAO seminar:
Friday Nov 12 at 14:00
in Auditorium No. 10, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Superpolynomial lower bounds against low-depth algebraic circuits
(video)
(Nutan Limaye, IT University of Copenhagen)
Every multivariate polynomial P(X) can be written as a sum
of monomials, i.e. a sum of products of variables and field constants.
In general, the size of such an expression is the number of monomials
that have a non-zero coefficient in P.
What happens if we add another layer of complexity, and consider sums
of products of sums (of variables and field constants) expressions?
Now, it becomes unclear how to prove that a given polynomial
P(X) does not have small expressions. In this work, we
solve exactly this problem.
More precisely, we prove that certain explicit polynomials have no
polynomial-sized "Sigma-Pi-Sigma" (sums of products of sums)
representations. We can also show similar results for
Sigma-Pi-Sigma-Pi, Sigma-Pi-Sigma-Pi-Sigma and so on for all
"constant-depth" expressions.
In the first part of this two-part talk, I will present the statements of the main results
and some background. In the second part of the talk, I will give some proof
details.
This is a joint work with Srikanth Srinivasan and Sébastien Tavenas.
-
Monday Nov 8 at 14:00
Lifting with inner-product and other low discrepancy gadgets
(video)
(Sajin Koroth, Simon Fraser University)
Lifting theorems are a powerful technique that exports our
understanding of a weak model of computation (usually query
complexity) to a powerful model of computation (usually
communication complexity). They have been instrumental in
solving many open problems in diverse areas of theoretical
computer science like communication complexity, circuit
complexity, proof complexity, linear programming, game theory,
etc. A typical lifting theorem connects the complexity of
computing any function f in the weak model to computing an
associated function f ∘ g in the powerful model. The associated
function f ∘ g is obtained by composing f with a specific function
g known as the gadget. A key aspect of broad applicability and
success of the lifting theorems is that they work for any f. In
a typical lifting theorem, this universality is achieved by
designing the gadget g with specific properties. The
applicability of a lifting theorem is usually limited by the
choice of gadget g and the size of the gadget. Thus, it is
crucial to understand what functions g can be used as a gadget
and how small g can be. In this talk, we present a lifting theorem
that connects query complexity to communication complexity and
works for almost all gadgets g of "smallish" size.
This talk is based on joint work with Arkadev Chattopadhyay, Yuval Filmus, Or Meir, and Toniann Pitassi.
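In symbols, the lifted problem takes the standard two-party form

    (f \circ g^{n})(x^{1}, y^{1}, \dots, x^{n}, y^{n})
    \;=\; f\bigl(g(x^{1}, y^{1}), \dots, g(x^{n}, y^{n})\bigr) ,
    % where Alice holds the blocks x^1, ..., x^n and Bob holds the
    % blocks y^1, ..., y^n; a canonical low-discrepancy gadget is inner
    % product on b-bit blocks, g(x, y) = <x, y> mod 2.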
-
Monday Oct 25 at 14:00
Proof complexity lower bounds by composition
(video)
(Robert Robere, McGill University)
In recent years, there has been an explosion in the development of
so-called "lifting theorems" in proof complexity and other closely
related areas which have led to the resolution of many
long-standing open problems. The basic idea of a lifting theorem
is simple: in order to prove a lower bound in a "complicated"
system that we do not understand (for example, Cutting Planes
refutations), we will "escalate" the hardness from a "simple"
system that we do understand (for example, Resolution refutations)
by taking a hard formula F for the simple system and composing it
with a specially chosen "gadget function" that removes the power
of the complicated system. In many cases, it is possible to prove
that the "best" possible strategy in the complicated system for
refuting the composed formula is to simulate the proof of the
uncomposed formula in the simple system (and thus lower bounds for
the uncomposed formula are "escalated" or "lifted" to lower bounds
for the composed formula). While lifting theorems in circuit
complexity can be traced back to the work of Raz and McKenzie
[RM99], recent developments have pushed these techniques to many
new areas, and have shown how lifting is a flexible tool that
allows quite finely tuned tradeoffs between different parameters
for a wide variety of proof systems.
In this talk, we give an introduction to and survey of lifting
results in proof complexity and related fields, like
communication complexity and circuit complexity, and outline
some of the ways that lifting can be used to control various
"complexity parameters" of unsatisfiable CNF formulas (like
proof length, proof space, proof depth, etc.). We will highlight
several recent developments in the area and indicate some future
directions.
-
Friday Oct 22 at 14:00
Verified proof checkers
(video)
(Magnus Myreen, Chalmers University of Technology)
Solvers, such as SAT solvers, are often complicated pieces
of software. How can we trust their results? For highly
optimised state-of-the-art solvers, testing is insufficient and
proving the functional correctness of the solver's
implementation is not practically possible. Fortunately, solvers
can often be augmented to produce proof certificates that can be
checked by separate simpler proof checkers. In recent years,
there has been growing interest in formally proving that the
proof checkers will never accept a certificate that contains
flaws.
In this talk, I will describe work that makes it possible to
prove functional correctness of proof checkers down to the
machine code that runs them. I have worked on (and have
supervised work on) several checkers. In this talk, I'll focus
on (1) my work on proving end-to-end correctness of Jared
Davis's Milawa prover, and (2) recent work on an LPR/LRAT checker
for UNSAT proofs. My talk will include a description of the
CakeML project, which was the context of (2). I will use demos
to show what the tools look like when running.
-
Friday Oct 15 at 10:00
Logic-based explainable AI
(video)
(Alexey Ignatiev, Monash University)
Explainable artificial intelligence (XAI) represents arguably
one of the most crucial challenges being faced by the area of AI
these days. Although the majority of approaches to XAI are of
heuristic nature, recent work proposed the use of abductive
reasoning to computing provably correct explanations for machine
learning (ML) predictions. The proposed rigorous approach was
shown to be useful not only for computing trustable explanations
but also for reasoning about explanations computed
heuristically. It was also applied to uncover a close
relationship between XAI and verification of ML models. This
talk will overview the advances of the rigorous logic-based
approach to XAI as well as the use of reasoning in devising
interpretable rule-based ML models including decision trees,
decision sets, and decision lists.
-
Monday Oct 4 at 14:00
Monotone arithmetic lower bounds via communication complexity
(video)
(Arkadev Chattopadhyay, Tata Institute of Fundamental Research, Mumbai)
How much power does negation or cancellation provide to
computation? This is a fundamental question in theoretical
computer science that appears in various parts: in Boolean
circuits, arithmetic circuits and also in communication
complexity. I will talk about some new connections between the
latter two fields and their applications to extend two classical
results from four decades ago:
-
Valiant (1979) showed that monotone arithmetic circuits are
exponentially weaker than general circuits for computing
monotone polynomials. Our first result gives a qualitatively
more powerful separation by showing an exponential separation
between general monotone circuits and constant-depth
multi-linear formulas. Neither such a separation between general
formulas and monotone circuits, nor a separation between
multi-linear circuits and monotone circuits was known
before. Our result uses the recent counter-example to the
Log-Approximate-Rank Conjecture in communication complexity.
-
Jerrum and Snir (1982) also obtained a separation between the
powers of general circuits and monotone ones via a different
polynomial, namely the spanning tree polynomial (STP), a
polynomial that is well known to be in
VP, using non-multi-linear cancellations in determinantal
computation. We provide the first extension of this result to show
that the STP remains "robustly hard" for monotone circuits in the
sense of Hrubes' recent notion of epsilon-sensitivity. The latter
result is proved via formulating a discrepancy method for monotone
arithmetic circuits that seems independently interesting.
We will discuss several open problems arising from these results.
(The seminar is based on joint works with Rajit Datta, Utsab Ghosal and
Partha Mukhopadhyay.)
-
Monday Sep 20 at 14:00
KRW composition theorems via lifting
(video)
(Or Meir, University of Haifa)
One of the major open problems in complexity theory is proving
super-logarithmic lower bounds on the depth of circuits (i.e., separating
P from NC^1).
Karchmer, Raz, and Wigderson (Computational
Complexity 5(3/4), 1995) suggested approaching this problem by proving
that depth complexity behaves "as expected" with respect to
composition of functions. They showed that the validity of this
conjecture would separate
P from NC^1.
Several works have made
progress toward resolving this conjecture by proving special cases. In
particular, these works proved the KRW conjecture for every outer
function, but only for few inner functions. Thus, it is an important
challenge to prove the KRW conjecture for a wider range of inner
functions.
In this work, we extend significantly the range of inner
functions that can be handled. First, we consider the monotone
version of the KRW conjecture. We prove it for every monotone
inner function whose depth complexity can be lower bounded via a
query-to-communication lifting theorem. This allows us to handle
several new and well-studied functions such as the
s-t-connectivity, clique, and generation functions.
In order to carry this progress back to the non-monotone
setting, we introduce a new notion of semi-monotone composition,
which combines the non-monotone complexity of the outer function
with the monotone complexity of the inner function. In this
setting, we prove the KRW conjecture for a similar selection of
inner functions, but only for a specific choice of the outer
function.
-
Friday Sep 17 at 14:15
Estimating the size of unions of sets in the streaming model
(video)
(Kuldeep S. Meel, National University of Singapore)
Feeling tired of complicated theorems with complicated proofs?
Fret not; this week will be different: A simple theorem with a simpler
proof for a general problem. I will make a case for our PODS-21 paper
to be the simplest paper ever accepted at PODS.
A set is Delphic if it supports efficient membership, sampling, and
counting calls. The notion of Delphic sets captures several well-known
problems, such as Klee's measure, distinct elements, and model
counting of DNF formulas. We will discuss estimation of the size of
the union of Delphic sets wherein each set is implicitly (and
succinctly) represented, and comes in a streaming fashion.
The primary contribution of our work is a simple, elegant, and
efficient sampling-based algorithm for the estimation of the union in
the streaming setting. Our algorithm has worst case space
complexity and update time that is logarithmic in the size of the
universe and stream size. Consequently, our algorithm provides the
first algorithm with a linear dependence on d for Klee's measure
problem in the streaming setting for d>1, thereby settling the open
problem of Woodruff and Tirthapura (PODS-12). Klee's measure
problem corresponds to computing the volume of a union of
d-dimensional axis-aligned rectangles, where every such
rectangle can be written as [a_1,b_1] x [a_2,b_2] x … x [a_d,b_d].
(Joint work with N.V. Vinodchandran and Sourav Chakraborty.)
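To give a feel for why sampling suffices, here is a minimal offline sketch in the spirit of the classical Karp-Luby estimator, using only the three Delphic calls (counting, sampling, membership); this is our own illustration of the estimation principle, not the streaming algorithm of the paper:

    import random

    def estimate_union_size(sets, samples=100_000):
        # Unbiased estimator of |S_1 u ... u S_m|.  Each set is a
        # duplicate-free list, so counting (len), uniform sampling
        # (random.choice) and membership (in) are all available.
        total = sum(len(s) for s in sets)
        if total == 0:
            return 0.0
        weights = [len(s) for s in sets]
        hits = 0
        for _ in range(samples):
            i = random.choices(range(len(sets)), weights=weights)[0]
            x = random.choice(sets[i])
            # count the sample iff i is the first set containing x
            if all(x not in sets[j] for j in range(i)):
                hits += 1
        return total * hits / samples

    # Example: |{1,...,10} u {6,...,15}| = 15.
    print(estimate_union_size([list(range(1, 11)), list(range(6, 16))]))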
-
Monday Sep 6 at 14:00
Lifting with sunflowers
(video)
(Ian Mertz, University of Toronto)
Lifting theorems are an important class of techniques which
can transform functions which are hard for weak computation
models into functions which are hard for strong computation
models. Starting with Raz and McKenzie (1999), there has been
a flurry of work proving query-to-communication lifting
theorems, which translate lower bounds on query complexity to
lower bounds for the corresponding communication model. In
this talk I will present a simplified proof of deterministic
query-to-communication lifting. Our proof uses elementary
counting together with a novel connection to the sunflower
lemma.
-
Friday Aug 27 at 10:00
Who is Isabelle, and why is she so picky about my proofs?
(Dmitriy Traytel, University of Copenhagen)
Proof assistants are tools that mechanically check human-written formal proofs,
e.g., of an algorithm's correctness, and support their users in developing
these proofs. Landmark achievements in this area include realistic verified
compilers, operating system kernels, and distributed systems as well as formal
proofs of deep mathematical results such as the four color theorem, the
Feit–Thompson theorem, and the Kepler conjecture. In this talk, I will give a
brief introduction of the Isabelle/HOL proof assistant, of which I am both a user
and a developer. I will explain how Isabelle's foundational approach achieves
the highest level of trustworthiness and will exemplify the benefits of using a
proof assistant via three case studies: the formal verification of security
properties of authenticated data structures, the completeness of ordered
resolution, and the correctness of an efficient runtime monitoring algorithm.
MIAO seminars spring 2021
During the spring semester of 2021,
we had online virtual seminars running every weekday
at 17:30 CET from early February to mid-May
as part of the
Satisfiability: Theory, Practice, and Beyond
and
Theoretical Foundations of Computer Systems
programs
at the
Simons Institute
for the Theory of Computing.
On top of this, we had some additional seminars as listed
below.
-
Monday Jun 7 at 14:00
Proof complexity meets finite model theory
(video)
(Joanna Ochremiak, LaBRI, Université de Bordeaux and CNRS)
Finite model theory studies the power of logics on the class of finite
structures. One of its goals is to characterize symmetric computation,
that is, computation that abstracts away details which are not essential
for the given task, by respecting the symmetries of the input.
In this talk I will discuss connections between proof complexity and
finite model theory, focussing on lower bounds. For certain proof
systems, the existence of a succinct refutation can be decided in a
symmetry-preserving way. This allows us to transfer lower bounds from
finite model theory to proof complexity. I will introduce this approach
and explain its key technical ideas such as the method of folding for
dealing with symmetries in linear programs.
-
Friday Jun 4 at 14:00
Abstract cores in implicit hitting set based MaxSAT solving
(video)
(Jeremias Berg, University of Helsinki)
Maximum satisfiability (MaxSat) solving is an active area of research
motivated by numerous successful applications to solving NP-hard
combinatorial optimization problems. One of the most successful
approaches for solving MaxSat instances from real-world domains are the
so-called implicit hitting set (IHS) solvers. IHS solvers decouple
MaxSat solving into separate core-extraction and optimization steps,
which are tackled by a Boolean satisfiability (SAT) solver and an integer
linear programming (IP) solver, respectively. While the approach shows
state-of-the-art performance on many industrial instances, it is known
that there exist instances on which IHS solvers need to extract an
exponential number of cores before terminating. In this talk I will
present the simplest of these problematic instances
and talk about how abstract cores, a compact representation of a large
number of regular cores that we recently proposed, addresses perhaps the
main bottleneck of IHS solvers. I will show how to incorporate abstract
core reasoning into the IHS algorithm and demonstrate that
including abstract cores in a state-of-the-art IHS solver
improves its performance enough to surpass the best performing solvers
of the recent MaxSat Evaluations.
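In skeleton form, the IHS loop described above looks roughly as follows (a sketch of our own; solve and min_hitting_set are hypothetical stand-ins for the SAT and IP oracles):

    def ihs_maxsat(hard, soft, weight, solve, min_hitting_set):
        # solve(hard, enforced) is assumed to return (True, model) or
        # (False, core), where core is a set of indices of enforced
        # soft clauses that cannot jointly be satisfied together with
        # the hard clauses; min_hitting_set(cores, weight) is assumed
        # to return a minimum-weight set of indices hitting every core,
        # e.g. computed by an IP solver.
        cores = []
        while True:
            hs = min_hitting_set(cores, weight)  # softs to relax
            enforced = [i for i in range(len(soft)) if i not in hs]
            ok, result = solve(hard, enforced)
            if ok:
                # the hitting-set cost lower-bounds the optimum and is
                # attained by this model, so we are done
                return result, sum(weight[i] for i in hs)
            cores.append(result)  # record the new core and iterate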
-
Friday May 28 at 14:00
Feasible interpolation for algebraic proof systems
(video)
(Tuomas Hakoniemi,
Universitat Politècnica de Catalunya)
In this talk we present a form of feasible interpolation for two
algebraic proof systems, Polynomial Calculus and Sums-of-Squares. We
show that for both systems there is a poly-time algorithm that, given two
sets P(x,z) and Q(y,z) of polynomial equalities, a refutation of the union
of P(x,z) and Q(y,z), and an assignment a to the z-variables, outputs
either a refutation of P(x,a) or a refutation of Q(y,a).
Our proofs are fairly logical in nature, in that they rely heavily on
semantic characterizations of resource-bounded refutations to prove the
existence of suitable refutations. These semantic existence proofs
narrow down the search space for the refutations we are after so that we
can actually find them efficiently.
-
Monday May 17 at 14:00
Average-case perfect matching lower bounds
from hardness of Tseitin formulas
(video)
(Kilian Risse, KTH Royal Institute of Technology)
We study the complexity of proving that a sparse random regular
graph on an odd number of vertices does not have a perfect matching,
and related problems involving each vertex being matched some
pre-specified number of times.
We show that this requires proofs of degree Ω(n / log n) in
the Polynomial Calculus (over fields of characteristic ≠ 2) and
Sum-of-Squares proof systems, and exponential size in the
bounded-depth Frege proof system.
This resolves a question by Razborov asking whether
the Lovász-Schrijver proof system requires n^δ
rounds to refute these formulas for some δ > 0.
The results are obtained by a
worst-case to average-case reduction of these formulas relying on a
topological embedding theorem which may be of independent interest.
Joint work with Per Austrin.
-
Monday May 10 at 14:00
On the complexity of branch and cut
(video)
(Noah Fleming, University of Toronto)
The Stabbing Planes proof system was introduced to model practical
branch-and-cut integer programming solvers. As a proof system, Stabbing
Planes can be viewed as a simple generalization of DPLL to reason about
linear inequalities. It is powerful enough to simulate Cutting Planes
and produce short refutations of the Tseitin formulas — certain
unsatisfiable systems of linear equations mod 2 — which are canonical
hard examples for many algebraic proof systems. In a surprising recent
result, Dadush and Tiwari showed that these short Stabbing Planes
refutations of the Tseitin formulas could be translated into Cutting
Planes proofs. This raises the question of whether all Stabbing Planes
proofs can be efficiently translated into Cutting Planes. In recent work, we
give a partial answer to this question.
In this talk I will introduce and motivate the Stabbing Planes proof
system. I will then show how Stabbing Planes proofs with
quasi-polynomially bounded coefficients can be quasi-polynomially
translated into Cutting Planes. As a consequence of this translation, we
can show that Cutting Planes has quasi-polynomial size refutations of
any unsatisfiable system of linear equations over a finite field. A
remarkable property of our translation for systems of linear equations
over finite fields, and the translation of Dadush and Tiwari for the
Tseitin formulas, is that the resulting proofs are also
quasi-polynomially deep. A natural question is thus whether these depth
bounds can be improved. In the last part of the talk, I will discuss
progress towards answering this question in the form of a new depth
lower bound technique for Cutting Planes, which works also for the
stronger semantic Cutting Planes system, and which allows us to
establish the first linear lower bounds on the depth of semantic Cutting
Planes refutations of the Tseitin formulas.
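The basic inference step of Stabbing Planes, as usually presented, branches the current polytope P along an integer hyperplane:

    \frac{P}
         {P \cap \{x : ax \le b\} \qquad P \cap \{x : ax \ge b + 1\}}
    \qquad a \in \mathbb{Z}^n , \; b \in \mathbb{Z} ,
    % a refutation is a tree of such queries all of whose leaves are
    % empty polytopes; this is sound over the integer points, since no
    % integer point satisfies b < ax < b + 1.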
-
Monday May 3 at 14:00
SAT-encodings and scalability
(video)
(André Schidler, Technische Universität Wien)
Boolean satisfiability (SAT) solvers have reached stunning performance
over the last decade. This performance can be harnessed for other
problems by means of SAT encodings: translating the problem instance
into a SAT instance. SAT encodings have been used successfully for many
combinatorial problems and are established in industry. In this talk, I
discuss the general idea behind SAT encodings and two specific
application areas: graph decompositions and machine learning.
Graph decompositions allow for specialized algorithms that can solve
hard problems efficiently. SAT encodings not only provide the means to
compute these decompositions optimally, but can also easily be extended with
extra constraints specific to the hard problem to be solved.
For machine learning, SAT encodings gained increased attention in
recent years, as they allow the induction of very small AI models.
Learning small models has become more important in the context of
explainable AI: smaller models are usually easier to understand for
humans. We will focus specifically on decision tree induction.
Scalability is one of the main issues when using SAT encodings. We can
overcome this problem by embedding SAT-based solving into a heuristic
framework. This allows us to trade a slightly worse result for much broader
applicability. In the last part of our talk, we discuss our SAT-based
local improvement framework that implements this idea.
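As a toy example of what a SAT encoding is, here is the pairwise (binomial) encoding of an at-most-one constraint in Python (a generic textbook encoding, not one of the encodings from the talk):

    def at_most_one(literals):
        # One binary clause (-a or -b) per pair: quadratic in size, but
        # simple and often good enough for small groups of literals.
        return [(-a, -b) for i, a in enumerate(literals)
                         for b in literals[i + 1:]]

    # Example: at most one of the variables 1, 2, 3 may be true.
    print(at_most_one([1, 2, 3]))  # [(-1, -2), (-1, -3), (-2, -3)]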
-
Wednesday Apr 28 at 15:00
Slicing the hypercube is not easy
(video)
(Amir Yehudayoff, Technion – Israel Institute of Technology)
How many hyperplanes are needed to slice all edges of the hypercube?
This question has been studied in machine learning, geometry and
computational complexity since the 1970s. We shall describe (most of) an
argument showing that more than n^{0.57} hyperplanes are needed, for large n. We shall also see a couple of applications. This is joint work with Gal Yehuda.
-
Monday Apr 26 at 14:00
Recent lower bounds in algebraic complexity theory
(video)
(Ben Lee Volk, University of Texas at Austin)
Algebraic complexity theory studies the complexity of solving algebraic
computational tasks using algebraic models of computation. One major
problem in this area is to prove lower bounds on the number of
arithmetic operations required for computing explicit polynomials. This
natural mathematical problem is the algebraic analog of the famous P vs. NP problem. It also has tight connections to other classical mathematical areas and to fundamental questions in complexity theory.
In this talk I will provide background and then present some recent
progress on proving lower bounds for models of algebraic computation,
such as the algebraic analogs of NC^1 and NL.
Based on joint works with Prerona Chatterjee, Mrinal Kumar, and Adrian She.
-
Friday Apr 23 at 14:00
On the complexity of branching proofs
(video)
(Daniel Dadush, CWI)
We consider the task of proving integer infeasibility of a bounded
convex set K in R^n using a general branching
proof system. In a general branching proof, one constructs a branching
tree by adding an integer disjunction ax ⩽ b or
ax ⩾ b+1, for an integer vector a and an
integer b, at each node, such that the leaves of the tree
correspond to empty sets (i.e., K together with the
inequalities picked up from the root to leaf is empty). Recently,
Beame et al. (ITCS 2018) asked whether the bit size of the
coefficients in a branching proof, which they named stabbing planes
(SP) refutations, for the case of polytopes derived from SAT formulas,
can be assumed to be polynomial in n. We resolve this
question by showing that any branching proof can be recompiled so that
the integer disjunctions have coefficients of size at most
(nR)^{O(n^2)}, where R is the radius of an
l_1 ball containing K, while increasing
the number of nodes in the branching tree by at most a factor
O(n). As our second contribution, we show that Tseitin
formulas, an important class of infeasible SAT instances, have
quasi-polynomial sized cutting plane (CP) refutations, disproving the
conjecture that Tseitin formulas are (exponentially) hard for CP. As
our final contribution, we give a simple family of polytopes in
[0,1]^n requiring branching proofs of length
2^n/n.
Joint work with Samarth Tiwari.
-
Monday Apr 19 at 14:00
Kidney exchange programmes —
saving lives with optimisation algorithms
(video)
(William Pettersson, University of Glasgow)
Kidney Exchange Programmes (KEPs) increase the rate of living
donor kidney transplantation, in turn saving lives. In this
talk, I will explain exactly what KEPs are, how KEPs achieve
this goal, and the role of optimisation algorithms within
KEPs. This will include brief explanations of some of the
models used for kidney exchange programmes, some of the more
generic optimisation techniques that we use, as well as recent
advances and specific research directions for the field into
the future. This talk will very much be an overview of kidney
exchange, without going into complex details of optimisation
algorithms, making it suitable for a wider audience.
-
Thursday Apr 8 at 14:00
(Semi)Algebraic proofs over {±1} variables
(video)
(Dmitry Sokolov, St. Petersburg State University)
One of the major open problems in proof complexity is to prove
lower bounds on AC^0[p]-Frege proof systems. As a step toward
this goal Impagliazzo, Mouli and Pitassi in a recent paper
suggested proving lower bounds on the size for Polynomial
Calculus over the {±1} basis. In this talk we show a
technique for proving such lower bounds and moreover, we also
give lower bounds on the size for Sum-of-Squares over the
{±1} basis.
We discuss the difference between {0, 1} and {+1, -1} cases and
problems in further generalizations of the lower bounds.
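For orientation, the change of basis in question is the standard one:

    y_i \;=\; 1 - 2 x_i \;\in\; \{\pm 1\}
    \qquad \text{for } x_i \in \{0, 1\} ,
    % under which the Boolean axioms x_i^2 = x_i of polynomial calculus
    % become y_i^2 = 1; Fourier-analytic tools apply naturally over
    % {+1, -1}, but lower bounds for the {0, 1} basis do not transfer
    % automatically.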
-
Courtesy of Zuse Institute Berlin:
Friday Feb 12 at 14:00
Introduction to IP presolving techniques in PaPILO
(Alexander Hoen, Zuse Institute Berlin)
Presolving is an essential part contributing to the performance of
modern MIP solvers. PaPILO, a new C++ library, provides presolving
routines for MIP and LP problems. This talk will give an introduction
to basic IP-presolving techniques and provide insights into important
design choices enabling PaPILO's capabilities, in particular
regarding its use of parallel hardware. While presolving itself is
designed to be fast, this can come at the cost of failing to find
important reductions due to working limits and heuristic filtering.
Yet, even most commercial solvers do not use multi-threading for the
preprocessing step as of today. PaPILO's design facilitates use of
parallel hardware to allow for more aggressive presolving and
presolving of huge problems. The architecture of PaPILO generally allows
presolvers to run in parallel without requiring expensive
copies of the problem and without special synchronization in the
presolvers themselves. Additionally, the use of Intel's TBB library
helps PaPILO efficiently exploit recursive parallelism within
expensive presolving routines, such as probing, dominated columns, or
constraint sparsification. Despite PaPILO's use of parallelization,
its results are guaranteed to be deterministic independently of the
number of threads available.
MIAO seminars autumn 2020
-
Friday Oct 9 at 13:15
Model counting with probabilistic component caching
(Shubham Sharma and Kuldeep S. Meel,
National University of Singapore)
Given a Boolean formula F, the problem of model counting, also
referred to as #SAT, seeks to compute the number of solutions of F.
Model counting is a fundamental problem with a wide variety of
applications ranging from planning, quantified information flow to
probabilistic reasoning and the like. Modern #SAT solvers tend to
be based on static decomposition, dynamic decomposition, or a
hybrid of the two. Despite dynamic decomposition-based #SAT solvers
sharing much of their architecture with SAT solvers, the core design
and heuristics of dynamic decomposition-based #SAT solvers have
remained constant for over a decade. In this paper, we revisit the
architecture of the state-of-the-art dynamic decomposition-based #SAT
tool, sharpSAT, and demonstrate that introducing a new notion of
probabilistic component caching, using universal hashing
for exact model counting, and developing several new
heuristics can lead to significant performance
improvements over state-of-the-art model counters. In particular, we
develop GANAK, a new scalable probabilistic exact model counter that
outperforms state-of-the-art exact and approximate model counters for
a wide variety of instances.
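For intuition, a classic 2-universal hash family of the kind that hash-based probabilistic caching schemes rely on can be sketched in a few lines (a generic illustration of universal hashing, not GANAK's actual implementation):

    import random

    def make_hash(m, p=(1 << 61) - 1):
        # Carter-Wegman family h(x) = ((a*x + b) mod p) mod m with p a
        # Mersenne prime: for keys x < p, any two distinct keys collide
        # with probability roughly 1/m over the choice of a and b.
        a = random.randrange(1, p)
        b = random.randrange(p)
        return lambda x: ((a * x + b) % p) % m

    h = make_hash(1 << 20)  # hash component signatures to 2^20 buckets
    print(h(123456789))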
-
Friday Sep 25 at 13:15
Manthan: A data-driven approach for Boolean function synthesis
(Priyanka Golia and Kuldeep S. Meel,
National University of Singapore)
Boolean functional synthesis is a fundamental problem in computer
science with wide-ranging applications and has witnessed a surge of
interest resulting in progressively improved techniques over the past
decade. Despite intense algorithmic development, a large number of
problems remain beyond the reach of the current state-of-the-art
techniques. Motivated by the progress in machine learning, we propose
Manthan, a novel data-driven approach to Boolean functional
synthesis. Manthan views functional synthesis as a classification
problem, relying on advances in constrained sampling for data
generation, and advances in automated reasoning for a novel
proof-guided refinement and provable verification. On an extensive
and rigorous evaluation over 609 benchmarks, we demonstrate that
Manthan significantly improves upon the current state of the art,
solving 356 benchmarks in comparison to 280, which is the most solved
by a state-of-the-art technique; thereby, we demonstrate an increase
of 76 benchmarks over the current state of the art. The significant
performance improvements, along with our detailed analysis, highlight
several interesting avenues of future work at the intersection of
machine learning, constrained sampling, and automated reasoning.
-
Friday Aug 28 at 13:15
Tuning Sat4j PB solvers for decision problems
(Romain Wallon,
Université d'Artois)
During the last decades, many improvements in CDCL SAT solvers have
made it possible to efficiently solve large problems containing millions of
variables and clauses.
Despite this practical efficiency, some instances remain out of reach
for such solvers.
This is particularly true when the input formula requires an
exponential size refutation proof in the resolution proof system (as for
pigeonhole formulae).
This observation motivated the development of another kind of solvers,
known as pseudo-Boolean (PB) solvers.
These solvers take as input a conjunction of PB constraints (integer
linear inequalities over Boolean variables) and benefit from the cutting
planes proof system, which is (in theory) stronger than the resolution
proof system.
To implement this proof system, current PB solvers follow the direction
of modern SAT solvers, by implementing a conflict analysis procedure
relying on the application of cutting planes rules.
However, adapting the CDCL architecture to take into account PB
constraints is not as straightforward as it may look.
In particular, many CDCL invariants and properties do not hold anymore
as long as PB constraints are considered.
While some of them may be safely ignored (e.g., when they affect the
decision heuristic, the deletion strategy or the restart policy), some
others must be fixed to ensure the soundness of the solver (e.g., by
ensuring to preserve the conflict during its analysis).
In this talk, we will give an overview of the main differences between
PB solving and classical SAT solving, and present different approaches
that have been designed to take these differences into account to extend
the CDCL architecture to PB problems.
We will in particular discuss the pros and cons of these approaches,
and we will explain how they can be enhanced to improve the practical
performance of PB solvers.
-
Friday Aug 21 at 10:15
Using proofs to analyze SAT solvers
(Janne Kokkala,
Lund University and University of Copenhagen)
The main idea of this seminar is to give an overview of what
one can ask and what one can learn about SAT solvers when
analyzing the proof to estimate what part of the work was
useful. I will start by giving a 17-ish-minute alpha test run
of my presentation of our CP paper [1]. After that, I will
discuss earlier works that use the same or a similar idea for
general insights [2], analyzing VSIDS usefulness [3], and
clause exchange for parallel solvers [4,5]. To conclude, we
can discuss how this approach could be used for pseudo-Boolean
solvers.
[1] J. I. Kokkala, J. Nordström. Using resolution proofs to analyse CDCL SAT solvers.
[2] L. Simon. Post mortem analysis of SAT solver proofs.
[3] S. Malik, V. Ying. On the efficiency of the VSIDS decision heuristic. (Workshop presentation.)
[4] G. Audemard, L. Simon. Lazy clause exchange policy for parallel SAT solvers.
[5] G. Katsirelos, A. Sabharwal, H. Samulowitz, L. Simon. Resolution and parallelizability: Barriers to the efficient parallelization of SAT solvers.
-
Wednesday Aug 19 at 13:15
On computational aspects of the antibandwidth problem
(Markus Sinnl,
Johannes Kepler University Linz)
In this talk, we consider the antibandwidth problem, also
known as dual bandwidth problem, separation problem and
maximum differential coloring problem. Given a labeled graph
(i.e., a numbering of the vertices of a graph), the
antibandwidth of a node is defined as the minimum absolute
difference between its label and the labels of its adjacent
vertices. The goal in the antibandwidth problem is to find a
labeling maximizing the antibandwidth. The problem is NP-hard
in general graphs and has applications in diverse areas like
scheduling, radio frequency assignment, obnoxious facility
location and map-coloring.
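Under this definition, evaluating a given labeling is straightforward; the following one-liner of our own makes the objective concrete:

    def antibandwidth(label, edges):
        # The antibandwidth of a labeling: the minimum over all edges
        # of the absolute label difference of the endpoints (equal to
        # the minimum over all nodes of their antibandwidth).
        return min(abs(label[u] - label[v]) for u, v in edges)

    # Example: on the path 1-2-3-4, this labeling attains value 2,
    # which is optimal for this graph.
    print(antibandwidth({1: 2, 2: 4, 3: 1, 4: 3},
                        [(1, 2), (2, 3), (3, 4)]))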
There has been much work on deriving theoretical bounds for
the problem and also in the design of metaheuristics in recent
years. However, the optimality gaps between the best known
solution values and reported upper bounds for the
Harwell-Boeing matrix instances, which are the commonly used
benchmark instances for this problem, were often very large
(e.g., up to 577%).
We present new mixed-integer programming approaches for the
problem, including one approach, which does not directly
formulate the problem as optimization problem, but as a series
of feasibility problems. We also discuss how these feasibility
problems can be encoded with various SAT-encodings, including
a new and specialised encoding which exploits a certain
staircase structure occurring in the problem formulation. We
present computational results for all the algorithms,
including a comparison of the MIP and SAT approaches. Our
approaches allow us to find proven optimal solutions
for eight instances from the literature where the optimum
was previously unknown, and also provide reduced gaps for eleven
additional instances, including improved solution values for
seven instances; the largest remaining optimality gap is now
46%. Instances based on the problem were submitted to the SAT
Competition 2020.
MIAO seminars spring 2020
-
Wednesday Jul 15 at 15:00
Naïve algorithm selection for SAT solving
(Stephan Gocht,
Lund University and University of Copenhagen)
Although solving propositional formulas is an NP-complete
problem, state-of-the-art SAT solvers are able to solve formulas
with millions of variables. To obtain good performance it is
necessary to configure parameters for heuristic
decisions. However, there is no single parameter configuration
that is perfect for all formulas, and choosing the parameters is
a difficult task. The standard approach is to evaluate different
configurations on some formulas and to choose the single
configuration that performs best overall. This configuration,
which is called single best solver, is then used to solve new
unseen formulas. In this paper we demonstrate how random forests
can be used to choose a configuration dynamically based on
simple features of the formula. The evaluation shows that our
approach is able to outperform the single best solver on
formulas that are similar to the training set, but not on
formulas from completely new domains.
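A minimal sketch of such a pipeline (our own toy illustration with made-up features and configuration labels, using scikit-learn; not the evaluation setup of the paper):

    from sklearn.ensemble import RandomForestClassifier

    # One feature vector per training formula (e.g. clause/variable
    # ratio and fraction of binary clauses; placeholder features) and
    # the label of the configuration that solved it fastest.
    X_train = [[4.2, 0.30], [2.1, 0.55], [4.5, 0.28], [1.9, 0.60]]
    y_train = ["config_A", "config_B", "config_A", "config_B"]

    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    # For a new, unseen formula, pick a configuration from its features.
    print(model.predict([[4.0, 0.33]]))  # e.g. ['config_A']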
-
Friday Jun 26 at 10:00
Behind the scenes of chronological CDCL
(Sibylle Möhle and Armin Biere,
Johannes Kepler University Linz)
Combining conflict-driven clause learning (CDCL) with
chronological backtracking is challenging: Multiple invariants
considered crucial in modern SAT solvers are violated if
after conflict analysis the solver does not jump to the
assertion level but to a higher decision level instead. In
their SAT'18 paper "Chronological Backtracking", Alexander
Nadel and Vadim Ryvchin provide a fix to this issue. Moreover,
their SAT solver implementing chronological backtracking won
the main track of the SAT Competition 2018. In our SAT'19
paper, "Backing Backtracking", we present a formalization and
generalization of this method. We prove its correctness and
provide an independent implementation.
In this seminar, we demonstrate the working of chronological
CDCL by means of an example. In this example, after a conflict
the conflicting clause contains only one literal at conflict
level. It is therefore used as a reason for backtracking, thus
saving the effort of conflict analysis. We further show which
invariants are violated and present new ones followed by a
discussion of the rules of our formal framework. We also shed
light onto implementation details, including those which are
not mentioned in our SAT'19 paper.
-
Monday Jun 22 at 13:30
McSplit: A partitioning algorithm for maximum common subgraph
problems
(Ciaran McCreesh and James Trimble,
University of Glasgow)
We will give a short introduction to McSplit, an algorithm for
the maximum common (connected) subgraph problem coauthored
with Patrick Prosser and presented at IJCAI 2017. McSplit
resembles a forward-checking constraint programming algorithm,
but uses a partitioning data structure to store domains which
greatly reduces memory use and time per search node. We will
also present our recent work with Stephan Gocht and Jakob
Nordström on adding proof logging to the algorithm, turning
McSplit into a certifying algorithm whose outputs can be
independently verified.
-
Thursday Jun 18 at 20:30
A pseudo-Boolean approach to nonlinear verification
(Vincent Liew,
University of Washington)
We discuss some new experimental results showing the promise
of using pseudo-Boolean solvers, rather than SAT solvers, to
verify bit-vector problems containing multiplication. We use
this approach to efficiently verify the commutativity of a
multiplier output bit by output bit. We also give some examples
of simple bit-vector inequalities where the pseudo-Boolean
approach significantly outperformed SAT solvers and even
bit-vector solvers. Finally, we give some of our observations on
the strengths and weaknesses of different methods of conflict
analysis used by pseudo-Boolean solvers.
-
Friday May 8 at 13:15
Pseudo-Boolean solvers for answer set programming
(Bart Bogaerts, Vrije Universiteit Brussel)
Answer set programming (ASP) is a well-established knowledge
representation formalism. Most ASP solvers are based on
(extensions of) technology from Boolean satisfiability
solving. While these solvers have been shown to be very successful in
many practical applications, their strength is limited by their
underlying proof system, resolution. In this research, we
present a new tool LP2PB that translates ASP programs into
pseudo-Boolean theories, for which solvers based on the
(stronger) cutting plane proof system exist. We evaluate our
tool, and the potential of cutting-plane-based solving for ASP
on traditional ASP benchmarks as well as benchmarks from
pseudo-Boolean solving. Our results are mixed: overall,
traditional ASP solvers still outperform our translational
approach, but several benchmark families are identified where
the balance shifts the other way, thereby suggesting that
further investigation into a stronger proof system for ASP is
valuable.
|