Hora Informaticae Seminar

(Founded: 1994)

Previous lectures

  • 01.07.2019 14:00 - Deepak Kapur (University of New Mexico): Interpolant generation algorithms based on quantifier elimination

    In their 2006 paper, Kapur, Majumdar and Zarba observed a connection between quantifier elimination and interpolant generation which was probably well known but not explicitly reported in the literature on automated reasoning and formal methods. Since then I have been investigating how to develop heuristics for quantifier elimination to generate interpolants. In particular, unlike the proof-based methodology widely used in the formal methods community, this approach needs no access to a proof to generate interpolants. I will start with an interpolant generation algorithm for the quantifier-free theory of equality over uninterpreted symbols (EUF). This is followed by an interpolant generation algorithm for octagonal formulas of complexity O(n^3), where n is the number of variables; the interpolants generated are conjunctions of octagonal formulas. Then, I will move to nonlinear (quadratic) polynomial inequalities and their combination with EUF. The key idea is that nonlinear concave quadratic polynomials can be linearized, and a generalization of Motzkin's transposition theorem can be proved. Time permitting, I will discuss our recent work on using linear and nonlinear machine learning methods based on support vector machines, exploring a totally different approach for generating nonlinear interpolants from more complex formulas including transcendental functions.
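    The octagonal case can be illustrated in its difference-bounds fragment: closing a difference-bound matrix with Floyd-Warshall (an O(n^3) step, matching the complexity quoted above) derives the tightest implied bounds, and projecting the closure onto the shared variables yields an interpolant. A minimal sketch of this special case, not the algorithm from the talk:

```python
import math

def closure(n, w):
    # Floyd-Warshall tightening of a difference-bound matrix:
    # w[i][j] is the bound c in the constraint x_i - x_j <= c
    # (math.inf when no constraint is present). O(n^3).
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

inf = math.inf
# A: x - y <= 2, y - z <= 3, over the variable order [x, y, z]
w = [[0,   2,   inf],
     [inf, 0,   3],
     [inf, inf, 0]]
d = closure(3, w)
# Projecting the closure onto the shared variables {x, z} gives the
# interpolant x - z <= 5; it is implied by A, and a B-part such as
# z - x <= -6 contradicts it, witnessing the unsatisfiability of A and B.
assert d[0][2] == 5
```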

  • 04.03.2019 14:00 - Jérémie Cabessa (Université Paris 2): Automata and Turing Computation with Synfire Ring-Based Neural Networks

    Synfire rings are layered looping neuronal structures that allow for robust, temporally precise and self-sustained spiking activity. We show that finite state automata and Turing machines can be simulated by recurrent neural networks composed of synfire rings. While the original constructions involve simple Boolean neurons, we investigate how the results generalize to the more biological settings of Izhikevich and Hodgkin-Huxley neural nets. These considerations show that a neuro-inspired paradigm for abstract computation is possible and potentially exploitable, and might constitute a first step towards the realization of neuronal computers.
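    The self-sustained activity at the heart of these constructions can be sketched with Boolean (McCulloch-Pitts) neurons: an activity packet injected into one layer of a ring propagates around it indefinitely without external input. A toy illustration of this behavior, not the automaton construction from the talk:

```python
def step(state, threshold=2):
    # One synchronous update of a synfire ring: every Boolean neuron in
    # layer i fires iff the total activity of layer i-1 (cyclically the
    # preceding layer) reaches the firing threshold.
    L = len(state)
    return [[1 if sum(state[(i - 1) % L]) >= threshold else 0
             for _ in state[i]]
            for i in range(L)]

# a ring of 5 layers with 3 Boolean neurons each; ignite layer 0
ring = [[1, 1, 1]] + [[0, 0, 0] for _ in range(4)]
for t in range(10):
    ring = step(ring)
# after 10 steps the activity packet has circulated the 5-layer ring
# exactly twice and sits in layer 0 again, with no external drive
```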

  • 14.01.2019 14:00 - Rudolf Rosa (UFAL MFF UK): Deep Neural Networks in Natural Language Processing

    Like many other fields, natural language processing has recently been revolutionized by the introduction of deep learning. In my talk, I will focus on two specific properties of natural text as input to a machine learning system, and the ways in which current deep neural networks address them:

    • Representing massively multi-valued discrete data (words) by continuous low-dimensional vectors (word embeddings).
    • Handling variable-length input sequences with long-distance relations between elements (sentences) by fixed-sized neural units (attention mechanisms).
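    The second mechanism can be sketched in a few lines of NumPy: scaled dot-product attention lets a fixed-size unit summarize an arbitrarily long sequence by computing a softmax-weighted average of its elements. A generic sketch of the standard mechanism (shapes and the random inputs are illustrative assumptions, not the talk's models):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each query row yields a convex
    # combination of the value rows, so the output shape depends only
    # on the number of queries, not on the input sequence length.
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (q, n) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the sequence
    return weights @ V                             # (q, d) fixed-size output

# a "sentence" of 7 token vectors of dimension 4, and one query vector
rng = np.random.default_rng(0)
tokens = rng.standard_normal((7, 4))
query = rng.standard_normal((1, 4))
out = attention(query, tokens, tokens)  # shape (1, 4) for any sentence length
```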