Předchozí program semináře, letní semestr 2006 [Previous seminar program, summer semester 2006]
23. 5. 2006 (úterý [Tuesday], 11.00,
MÚ Žitná 25)
Saurabh Sanghvi and Salil Vadhan: The Round Complexity of Two-Party Random Selection. STOC'05
(presented by Marek Krčál)
Abstract: We study the round
complexity of two-party protocols for generating a random n-bit string
such that the output is guaranteed to have bounded bias (according to
some measure) even if one of the two parties deviates from the protocol
(even using unlimited computational resources). Specifically, we
require that the output's statistical difference from the uniform
distribution on {0,1}^n is bounded by a constant less than 1. We
present a protocol for the above problem that has 2 log* n + O(1)
rounds, improving a previous n-round protocol of Goldreich,
Goldwasser, and Linial (FOCS '91). We then prove a matching lower
bound, showing that any protocol guaranteeing bounded statistical
difference requires at least log* n - log* log* n - O(1) rounds. As far as
we know, this is the first nontrivial lower bound on the round
complexity of random selection protocols (of any type) that does not
impose additional constraints (e.g. on communication or
"simulatability").
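To make the bias measure concrete, here is a minimal Python sketch (not from the paper) that estimates the statistical difference of a protocol's output distribution from the uniform distribution on {0,1}^n for small n; the toy one-round protocol and the cheating strategy are illustrative assumptions only.

```python
import itertools
import random

def statistical_difference(samples, n):
    """Estimate the statistical (total variation) distance between the
    empirical distribution of `samples` (n-bit strings) and the uniform
    distribution on {0,1}^n.  Only feasible for small n."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    total = len(samples)
    uniform_prob = 1.0 / 2 ** n
    # Statistical difference = 1/2 * sum over all outcomes of |p(x) - u(x)|.
    return 0.5 * sum(
        abs(counts.get(x, 0) / total - uniform_prob)
        for x in map(''.join, itertools.product('01', repeat=n))
    )

def naive_one_round_protocol(n, bob_cheats=False):
    """Toy one-round protocol (illustrative, not from the paper):
    Alice picks a random string, Bob replies with a string, and the
    output is the bitwise XOR.  A cheating Bob who echoes Alice's own
    string forces the all-zero output, i.e. maximal bias."""
    alice = [random.randint(0, 1) for _ in range(n)]
    bob = alice if bob_cheats else [random.randint(0, 1) for _ in range(n)]
    return ''.join(str(a ^ b) for a, b in zip(alice, bob))

if __name__ == '__main__':
    n = 4
    honest = [naive_one_round_protocol(n) for _ in range(20000)]
    cheated = [naive_one_round_protocol(n, bob_cheats=True) for _ in range(20000)]
    print('honest run   :', round(statistical_difference(honest, n), 3))   # close to 0
    print('cheating Bob :', round(statistical_difference(cheated, n), 3))  # close to 1 - 2^-n
```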
E. Viola: On Probabilistic Time versus Alternating Time.
(presented by Martin Pergel)
Abstract: Sipser and Gács,
and independently Lautemann, proved in '83 that probabilistic
polynomial time is contained in the second level of the polynomial-time
hierarchy, i.e. BPP is in \Sigma_2 P. This is essentially the only
non-trivial upper bound that we have on the power of probabilistic
computation. More precisely, the Sipser-Gács-Lautemann simulation shows
that probabilistic time can be simulated deterministically, using two
quantifiers, with a quadratic blow-up in the running time. That is,
BPTime(t) is contained in \Sigma_2 Time(t^2).
We discuss whether this quadratic blow-up in the running time is
necessary. We show that the quadratic blow-up is indeed necessary for
black-box simulations that use two quantifiers, such as those of
Sipser, Gács, and Lautemann. To obtain this result, we prove a new
circuit lower bound for computing approximate majority, i.e.
computing the majority of a given bit-string whose fraction of 1's is
bounded away from 1/2 (by a constant): We show that small depth-3
circuits for approximate majority must have bottom fan-in Omega(log n).
On the positive side, we obtain that probabilistic time can be
simulated deterministically, using three quantifiers, in quasilinear
time. That is, BPTime(t) is contained in \Sigma_3 Time(t polylog t).
Along the way, we show that approximate majority can be computed by
uniform polynomial-size depth-3 circuits. This is a uniform version of
a striking result by Ajtai that gives non-uniform polynomial-size
depth-3 circuits for approximate majority.
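As a side note on the promise problem itself, the following is a minimal Python sketch (illustrative, unrelated to the circuit constructions above) of why approximate majority is easy with randomness: when the fraction of 1's is bounded away from 1/2 by a constant, a constant-size random sample already determines the majority with high probability.

```python
import random

def approximate_majority(bits, samples=200):
    """Estimate the majority bit of `bits` under the promise that the
    fraction of 1's is bounded away from 1/2 by a constant.
    Samples a few positions uniformly at random; by a Chernoff bound,
    a constant-size sample identifies the majority with high
    probability under the promise.  Illustrative sketch only."""
    hits = sum(bits[random.randrange(len(bits))] for _ in range(samples))
    return 1 if 2 * hits >= samples else 0

if __name__ == '__main__':
    n = 10 ** 6
    # Promise instance: a 0.6 fraction of 1's (bounded away from 1/2).
    bits = [1] * int(0.6 * n) + [0] * (n - int(0.6 * n))
    random.shuffle(bits)
    print(approximate_majority(bits))  # prints 1 with overwhelming probability
```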
2. 5. 2006 (úterý [Tuesday], 11.00,
MÚ Žitná 25)
G. Cormode, S. Muthukrishnan: What's hot and what's not: tracking most frequent items dynamically.
ACM Trans. Database Syst. 30(1): 249-278 (2005).
(presented by Ondřej Zajíček)
Abstract: Most database
management systems maintain statistics on the underlying relation. One
of the important statistics is that of the "hot items" in the
relation: those that appear many times (most frequently, or more than
some threshold). For example, end-biased histograms keep the hot items
as part of the histogram and are used in selectivity estimation. Hot
items are used as simple outliers in data mining, and in anomaly
detection in networking applications. We present a new algorithm for
dynamically determining the hot items at any time in the relation that
is undergoing deletion operations as well as inserts. Our algorithm
maintains a small space data structure that monitors the transactions
on the relation, and when required, quickly outputs all hot items,
without rescanning the relation in the database. With user specified
probability, it is able to report all hot items. Our algorithm relies
on the idea of "group testing", is simple to implement, and has
provable quality, space, and time guarantees. Previously known
algorithms for this problem that make similar quality and performance
guarantees cannot handle deletions, and those that handle deletions
cannot make similar guarantees without rescanning the database. Our
experiments with real and synthetic data show that our algorithm is
remarkably accurate in dynamically tracking the hot items, independent
of the rate of insertions and deletions.
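As a rough illustration of the group-testing idea, here is a minimal Python sketch of its simplest special case, assuming a single item occurs in more than half of the current transactions and that identifiers fit in a fixed number of bits: one counter per bit position, maintained under insertions and deletions, lets the hot item's identifier be read off bit by bit. This is a simplified sketch, not the paper's full data structure.

```python
class MajorityItemTracker:
    """Minimal sketch of the bit-counter ("group testing") idea for the
    special case of a single majority item, assuming item identifiers
    fit in `id_bits` bits.  Supports insertions and deletions; this is
    a simplified illustration, not the paper's full algorithm."""

    def __init__(self, id_bits=32):
        self.id_bits = id_bits
        self.total = 0                      # net number of items currently present
        self.bit_counts = [0] * id_bits     # net count of present items with bit j set

    def insert(self, item):
        self.total += 1
        for j in range(self.id_bits):
            if (item >> j) & 1:
                self.bit_counts[j] += 1

    def delete(self, item):
        self.total -= 1
        for j in range(self.id_bits):
            if (item >> j) & 1:
                self.bit_counts[j] -= 1

    def candidate(self):
        """If some item occurs in more than half of the current multiset,
        its identifier is reconstructed bit by bit: bit j is 1 exactly
        when a majority of the present items have bit j set."""
        item = 0
        for j in range(self.id_bits):
            if 2 * self.bit_counts[j] > self.total:
                item |= 1 << j
        return item

# Usage: after interleaved inserts and deletes, candidate() returns the
# majority item's identifier whenever one exists.
tracker = MajorityItemTracker()
for x in [5, 7, 5, 9, 5, 5]:
    tracker.insert(x)
tracker.delete(9)
print(tracker.candidate())  # 5
```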
28. 3., 4. 4., 11. 4. 2006 (úterý [Tuesday], 11.00,
MÚ Žitná 25, 1. patro zadní budovy [1st floor of the rear building])
N. Alon, Y. Matias, M. Szegedy: The space complexity of approximating the frequency moments
Abstract: The frequency moments of a sequence containing m_i elements
of type i are the numbers F_k = \sum_i m_i^k. We consider the space
complexity of randomized algorithms that approximate the numbers F_k,
when the elements of the sequence are given one by one and cannot be
stored. Surprisingly, it turns out that the numbers F_0, F_1 and F_2
can be approximated in logarithmic space, whereas the approximation of
F_k for k >= 6 requires n^{Omega(1)} space. Applications to databases
are mentioned as well.
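For flavor, here is a minimal Python sketch of the standard F_2 estimator in its simplest form: give each element type an independent random sign, maintain the signed running count, and square it; averaging independent copies concentrates around F_2. The sketch stores the random signs explicitly for readability, so it does not achieve the logarithmic space bound (which requires small families of 4-wise independent hash functions).

```python
import random
from collections import defaultdict

def f2_estimate(stream, copies=64):
    """Estimate F_2 = sum_i m_i^2 of a stream of elements.
    Each copy keeps a signed counter Z = sum_i s(i) * m_i with random
    signs s(i) in {-1, +1}; E[Z^2] = F_2.  Averaging independent copies
    reduces the variance.  Signs are stored explicitly here for clarity,
    so this sketch is not logarithmic-space."""
    signs = [defaultdict(lambda: random.choice((-1, 1))) for _ in range(copies)]
    z = [0] * copies
    for item in stream:          # elements arrive one by one and are not stored
        for c in range(copies):
            z[c] += signs[c][item]
    return sum(zc * zc for zc in z) / copies

if __name__ == '__main__':
    stream = ['a'] * 100 + ['b'] * 50 + ['c'] * 10
    random.shuffle(stream)
    print(f2_estimate(stream))   # concentrates around 100^2 + 50^2 + 10^2 = 12600
```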
21. 3. 2006 (úterý [Tuesday], 11.00,
MÚ Žitná 25, 1. patro zadní budovy [1st floor of the rear building])
Marek Krčál: Complexity of graph
planarity testing
Abstract:
We will give a second proof that the problem of planarity testing is
in \SL (symmetric nondeterministic LOGSPACE). This will be done by
reducing the problem to planarity of graphs with maximum degree three.
Note that the usual trick of replacing vertices of degree greater than
three by "little circles" can spoil planarity, so we need to be
smarter. Planarity of graphs with maximum degree three was already
solved in the paper "Symmetric complementation" by John Reif.
Our approach is very different and hopefully more intuitive than the
one in the first proof by Meena Mahajan and Eric Allender ("Complexity
of planarity testing"), which is a pure \SL implementation of a very
complicated parallel algorithm by John Reif.
This result, together with the recent breakthrough by Omer Reingold
that \SL = \L ("Undirected ST-connectivity in log-space"), completely
settles the complexity of the planarity problem, because planarity is
hard for \L (this is again shown in "Complexity of planarity testing").
7. 3. 2006 (úterý [Tuesday], 11.00, MÚ Žitná 25, 1. patro zadní budovy [1st floor of the rear building])
Michal Koucký: Sublinear time algorithms
Abstract: I will give an
introduction to the area of sublinear algorithms, which includes
sublinear approximation algorithms and property testing. In general,
sublinear algorithms are ones which, on an input of size n, run in time
o(n), e.g., poly-logarithmic time. Typically they are randomized, and
one cannot expect an exact solution from them, but rather an
approximate one.
I will show such algorithms for three problems: clustering data,
testing graph connectivity, and testing whether a sequence is sorted;
a sketch of the sortedness tester follows below.
The talk will loosely follow the survey paper of Kumar and Rubinfeld of
the same title.
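To make this concrete, here is a minimal Python sketch of one standard sortedness tester (the exact variant in the survey may differ): pick a random position, binary-search for its value, and reject if the search does not end at that position; under the assumption of distinct elements, sequences far from sorted are rejected with constant probability per trial.

```python
import random

def looks_sorted(a, trials=100):
    """Property tester for sortedness (sketch, assumes distinct elements):
    pick a random index i and binary-search for a[i]; if the search does
    not end at position i, the sequence cannot be sorted, so reject.
    Indices at which the search succeeds form an increasing subsequence,
    so a sequence far from sorted is rejected with constant probability
    after enough trials.  Runs in O(trials * log n) time."""
    n = len(a)
    for _ in range(trials):
        i = random.randrange(n)
        lo, hi = 0, n - 1
        while lo <= hi:                 # standard binary search for a[i]
            mid = (lo + hi) // 2
            if a[mid] == a[i]:
                break
            if a[mid] < a[i]:
                lo = mid + 1
            else:
                hi = mid - 1
        if mid != i:
            return False                # witness that the sequence is not sorted
    return True                         # probably close to sorted

if __name__ == '__main__':
    print(looks_sorted(list(range(1000))))            # True
    print(looks_sorted(list(reversed(range(1000)))))  # False with high probability
```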