In Chaos, volume 30, issue no. 1, 2020, ISSN 1054-1500.
Estimating causal interactions in complex dynamical systems is an important problem encountered in many fields of current science. While a theoretical solution for detecting causal interactions has been previously formulated in the framework of prediction improvement, it generally requires the computation of high-dimensional information functionals, a situation invoking the curse of dimensionality with increasing network size. Recently, several methods have been proposed to alleviate this problem, based on iterative procedures for the assessment of conditional (in)dependences. In the current work, we present a comparison of several such prominent approaches. This is done both by theoretical comparison of the algorithms, using a formulation in a common framework, and by numerical simulations including realistic complex coupling patterns. The theoretical analysis highlights the key similarities and differences between the algorithms, hinting at their comparative strengths and weaknesses. The method assumptions and specific properties such as false positive control and order-dependence are discussed. Numerical simulations suggest that while the accuracy of most of the algorithms is almost indistinguishable, there are substantial differences in their computational demands, ranging theoretically from polynomial to exponential complexity and leading to substantial differences in computation time in realistic scenarios, depending on the density and size of the networks. Based on the analysis of the algorithms and numerical simulations, we propose a hybrid approach providing competitive accuracy with improved computational efficiency.
Characterization of the structure of interactions in large heterogeneous systems based on observational data has become one of the dominant challenges across scientific fields.
In many cases, measurements of the dynamical behavior are available, allowing inference of causal interactions among the subsystems by exploiting the principle of temporal precedence of the cause before the effect. Half a century ago, Sir Clive Granger proposed a formal treatment of the problem of detecting such interactions, based on statistical testing of the improvement of the prediction of a target variable by a candidate source variable. This approach has been generalized to nonlinear processes using the framework of information theory. However, the practical applicability of this methodology has been hindered by the need to properly account for all other potential intervening variables in the system, bringing in both computational and accuracy issues growing with network size. In this work, we compare several prominent algorithms proposed recently for estimating causal structure in large networks. We introduce the algorithms within a common framework highlighting the similarities and differences and compare their accuracy and computational demands on both simulated random networks and realistic examples derived from real-world brain and climate dynamics datasets. Finally, we suggest an algorithm with competitive accuracy and faster performance.
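The prediction-improvement principle behind Granger causality can be illustrated with a minimal sketch (not the paper's algorithms, which operate on large networks with conditioning-set search): compare the residual variance of a target predicted from its own past against a model that also uses the past of a candidate source. The coupled AR(1) system, coefficient values, and function names below are illustrative assumptions.

```python
import math
import random

def simulate(n, seed=0):
    # Toy coupled pair of AR(1) processes: y drives x with lag 1,
    # x does not drive y (coefficients are illustrative)
    rng = random.Random(seed)
    x, y = [0.0], [0.0]
    for _ in range(n):
        y.append(0.6 * y[-1] + rng.gauss(0, 1))
        x.append(0.5 * x[-1] + 0.8 * y[-2] + rng.gauss(0, 1))
    return x, y

def resid_var(target, preds):
    # Residual variance of an OLS fit with one or two predictors,
    # solved directly from the normal equations
    n = len(target)
    if len(preds) == 1:
        u = preds[0]
        a = sum(t * p for t, p in zip(target, u)) / sum(p * p for p in u)
        res = [t - a * p for t, p in zip(target, u)]
    else:
        u, v = preds
        Suu = sum(p * p for p in u)
        Svv = sum(q * q for q in v)
        Suv = sum(p * q for p, q in zip(u, v))
        Sut = sum(p * t for p, t in zip(u, target))
        Svt = sum(q * t for q, t in zip(v, target))
        det = Suu * Svv - Suv * Suv
        a = (Sut * Svv - Svt * Suv) / det
        b = (Svt * Suu - Sut * Suv) / det
        res = [t - a * p - b * q for t, p, q in zip(target, u, v)]
    return sum(r * r for r in res) / n

x, y = simulate(2000)
xt, xlag, ylag = x[1:], x[:-1], y[:-1]
yt = y[1:]
# Granger index: log ratio of restricted to full residual variance
gc_y_to_x = math.log(resid_var(xt, [xlag]) / resid_var(xt, [xlag, ylag]))
gc_x_to_y = math.log(resid_var(yt, [ylag]) / resid_var(yt, [ylag, xlag]))
print(gc_y_to_x, gc_x_to_y)
```

The index is clearly positive only in the simulated y → x direction; the multivariate case discussed in the paper additionally requires conditioning on all other relevant variables, which is exactly where the curse of dimensionality arises.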
Springer, 2020.
This book aims to give an encyclopedic overview of the state-of-the-art of Krylov subspace iterative methods for solving nonsymmetric systems of algebraic linear equations and to study their mathematical properties. Solving systems of algebraic linear equations is among the most frequent problems in scientific computing: it is used in many disciplines such as physics, engineering, chemistry, biology, and several others. Krylov methods have progressively emerged as the iterative methods with the highest efficiency while being very robust for solving large linear systems. They may be expected to remain so, independent of progress in modern computer-related fields such as parallel and high performance computing. The mathematical properties of the methods are described and analyzed along with their behavior in finite precision arithmetic. A number of numerical examples demonstrate the properties and the behavior of the described methods. Also considered are the methods' implementations and coding as Matlab®-like functions. Methods which became popular recently are considered in the general framework of Q-OR (quasi-orthogonal residual)/Q-MR (quasi-minimal residual) methods. This book can be useful both for practitioners and for readers who are more interested in theory. Together with a review of the state-of-the-art, it presents a number of recent theoretical results of the authors, some of them unpublished, as well as a few original algorithms. Some of the derived formulas might be useful for the design of possible new methods or for future analysis. For the more applied user, the book gives an up-to-date overview of the majority of the available Krylov methods for nonsymmetric linear systems, including well-known convergence properties and, as we said above, template codes that can serve as the base for more individualized and elaborate implementations.
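The common building block of the Krylov methods covered by the book is the construction of an orthonormal basis of the Krylov subspace span{b, Ab, ..., A^{m-1}b}. A minimal pure-Python sketch of the Arnoldi process with modified Gram-Schmidt (the function name, example matrix, and list-based vectors are illustrative assumptions, not the book's Matlab®-like templates):

```python
def arnoldi(matvec, b, m):
    # Build an orthonormal basis V of the Krylov subspace
    # span{b, A b, ..., A^{m-1} b} and the (m+1) x m Hessenberg
    # matrix H satisfying A V_m = V_{m+1} H (modified Gram-Schmidt).
    norm = lambda v: sum(x * x for x in v) ** 0.5
    dot = lambda u, v: sum(a * c for a, c in zip(u, v))
    V = [[x / norm(b) for x in b]]
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(V[j])
        for i in range(j + 1):
            H[i][j] = dot(V[i], w)
            w = [a - H[i][j] * c for a, c in zip(w, V[i])]
        H[j + 1][j] = norm(w)
        V.append([x / H[j + 1][j] for x in w])
    return V, H

# Example: a small nonsymmetric matrix applied as a function
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 2.0], [0.0, 1.0, 2.0]]
matvec = lambda v: [sum(a * x for a, x in zip(row, v)) for row in A]
V, H = arnoldi(matvec, [1.0, 0.0, 0.0], 2)
```

Q-OR and Q-MR methods then differ in how they extract an approximate solution from this basis, e.g. by solving a small (least-squares) problem with H; a production implementation would also handle breakdown (H[j+1][j] = 0).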
In Neural Networks, volume 128, issue August 2020, pages 199-215, 2020, ISSN 0893-6080.
In order to refine the analysis of the computational power of discrete-time recurrent neural networks (NNs) between the binary-state NNs, which are equivalent to finite automata (level 3 in the Chomsky hierarchy), and the analog-state NNs with rational weights, which are Turing complete (Chomsky level 0), we study an intermediate model αANN of a binary-state NN that is extended with α ≥ 0 extra analog-state neurons. For rational weights, we establish an analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs and separate its first two levels. In particular, 0ANNs coincide with the binary-state NNs (Chomsky level 3), being a proper subset of 1ANNs, which accept at most context-sensitive languages (Chomsky level 1) including some non-context-free ones (above Chomsky level 2). We prove that the deterministic (context-free) language L# = { 0^n 1^n | n ≥ 1 } cannot be recognized by any 1ANN, even with real weights. In contrast, we show that deterministic pushdown automata accepting deterministic languages can be simulated by 2ANNs with rational weights, which thus constitute a proper superset of 1ANNs. Finally, we prove that the analog neuron hierarchy collapses to 3ANNs by showing that any Turing machine can be simulated by a 3ANN having rational weights, with linear-time overhead.
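To see why L# = { 0^n 1^n | n ≥ 1 } sits beyond finite automata, note that a single unbounded counter (a restricted pushdown, which the paper simulates with analog neurons) suffices to recognize it, while any finite-state device cannot track n. The recognizer below is an illustrative sketch, not the paper's 2ANN construction:

```python
def accepts(w):
    # Recognize L# = { 0^n 1^n : n >= 1 } with one unbounded counter,
    # the kind of memory a binary-state (finite-automaton) NN lacks.
    count, seen_one = 0, False
    for c in w:
        if c == '0':
            if seen_one:
                return False   # a 0 after the first 1: wrong shape
            count += 1
        elif c == '1':
            seen_one = True
            count -= 1
            if count < 0:
                return False   # more 1s than 0s seen so far
        else:
            return False       # alphabet is {0, 1}
    return seen_one and count == 0

print(accepts("000111"), accepts("0101"))
```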
Cham, 2020.
This book explores internet applications in which a crucial role is played by classification, such as spam filtering, recommender systems, malware detection, intrusion detection and sentiment analysis. It explains how such classification problems can be solved using various statistical and machine learning methods, including K nearest neighbours, Bayesian classifiers, the logit method, discriminant analysis, several kinds of artificial neural networks, support vector machines, classification trees and other kinds of rule-based methods, as well as random forests and other kinds of classifier ensembles. The book covers a wide range of available classification methods and their variants, not only those that have already been used in the considered kinds of applications, but also those that have the potential to be used in them in the future. The book is a valuable resource for post-graduate students and professionals alike.
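Among the methods the book covers, k-nearest neighbours is perhaps the simplest to sketch. The toy "spam vs. ham" feature space and function name below are illustrative assumptions, not an example from the book:

```python
from collections import Counter

def knn_predict(train, point, k=3):
    # Classify `point` by majority vote among its k nearest training
    # examples under squared Euclidean distance; `train` is a list
    # of ((x, y), label) pairs.
    nearest = sorted(
        train,
        key=lambda t: (t[0][0] - point[0]) ** 2 + (t[0][1] - point[1]) ** 2,
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical features: (number of links, number of exclamation marks)
train = [((0, 0), "ham"), ((1, 0), "ham"), ((0, 1), "ham"),
         ((5, 4), "spam"), ((6, 5), "spam"), ((4, 6), "spam")]
print(knn_predict(train, (5, 5)))   # spam
print(knn_predict(train, (1, 1)))   # ham
```

Real spam filters would of course use high-dimensional text features and one of the stronger ensemble methods the book discusses, but the decision rule is the same in spirit.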
In Computational Geometry-Theory and Applications, volume 89, issue August 2020, 2020, ISSN 0925-7721.
We study Hamiltonicity for some of the most general variants of Delaunay and Gabriel graphs. Instead of defining these proximity graphs using circles, we use an arbitrary convex shape C. Let S be a point set in the plane. The k-order Delaunay graph of S, denoted k-DG_C(S), has vertex set S, and edges defined as follows. Given p, q ∈ S, pq is an edge of k-DG_C(S) provided there exists some homothet of C with p and q on its boundary and containing at most k points of S different from p and q. The k-order Gabriel graph, denoted k-GG_C(S), is defined analogously, except that the homothets considered are restricted to be smallest homothets of C with p and q on the boundary. We provide upper bounds on the minimum value of k for which k-GG_C(S) is Hamiltonian. Since k-GG_C(S) ⊆ k-DG_C(S), all results carry over to k-DG_C(S). In particular, we give upper bounds of 24 for every C and 15 for every point-symmetric C. We also improve these bounds to 7 for squares, 11 for regular hexagons, 12 for regular octagons, and 11 for even-sided regular t-gons (for t ≥ 10). These constitute the first general results on Hamiltonicity for convex shape Delaunay and Gabriel graphs. In addition, we show lower bounds of k = 3 and k = 6 on the existence of a bottleneck Hamiltonian cycle in the k-order Gabriel graph for squares and hexagons, respectively. Finally, we construct a point set such that, for an infinite family of regular polygons P_t, the Delaunay graph DG_{P_t} does not contain a Hamiltonian cycle.
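For the classical case C = disk and k = 0, the Gabriel condition is easy to state concretely: pq is an edge iff the smallest disk through p and q (the one with diameter pq) contains no other point of S. A minimal sketch (function name and sample points are illustrative; the paper's convex-shape and k-order variants replace the disk with homothets of C):

```python
def gabriel_edges(points):
    # Classical Gabriel graph (C = disk, k = 0): pq is an edge iff the
    # disk with diameter pq contains no other point of S in its interior.
    edges = []
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2       # disk center
            r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4  # radius^2
            if all((s[0] - cx) ** 2 + (s[1] - cy) ** 2 >= r2
                   for s in points if s is not p and s is not q):
                edges.append((i, j))
    return edges

S = [(0, 0), (2, 0), (1, 0.2)]
edges = gabriel_edges(S)
print(edges)   # (0, 1) is blocked by the nearby third point
```

Raising k relaxes the "no other point" condition to "at most k other points", which is what eventually forces Hamiltonicity in the bounds above.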
In Geoscientific Model Development, volume 13, issue no. 3, pages 1335-1372, 2020, ISSN 1991-959X.
In this paper, we describe the PALM model system 6.0. PALM (formerly an abbreviation for Parallelized Large-eddy Simulation Model and now an independent name) is a Fortran-based code and has been applied for studying a variety of atmospheric and oceanic boundary layers for about 20 years. The model is optimized for use on massively parallel computer architectures. This is a follow-up paper to the PALM 4.0 model description in Maronga et al. (2015). In recent years, PALM has been significantly improved and now offers a variety of new components. In particular, much effort was made to enhance the model with components needed for applications in urban environments, like fully interactive land surface and radiation schemes, chemistry, and an indoor model. This paper gives an overview of the PALM 6.0 model system and describes its current model core. The individual components for urban applications, case studies, validation runs, and issues with suitable input data are presented and discussed in a series of companion papers in this special issue.
In Journal of Approximation Theory, volume 257, issue September 2020, 2020, ISSN 0021-9045.
Best approximation by the set of all n-fold linear combinations of a family of characteristic functions of measurable subsets is investigated. Such combinations generalize Heaviside-type neural networks. Existence of best approximation is studied in terms of approximative compactness, which requires convergence of distance-minimizing sequences.
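The link between the two classes in the abstract is a standard observation, sketched here for concreteness: a Heaviside perceptron unit is the characteristic function of a closed half-space, so one-hidden-layer Heaviside networks are a special case of the n-fold linear combinations studied.

```latex
% A Heaviside unit is the characteristic function of a half-space,
% so Heaviside-type networks are n-fold combinations of
% characteristic functions of measurable sets:
\[
  H(a \cdot x + b) = \chi_{\{x \,:\, a \cdot x + b \ge 0\}}(x),
  \qquad
  f(x) = \sum_{i=1}^{n} w_i \, \chi_{A_i}(x),
\]
% where H is the Heaviside function and the A_i are measurable sets;
% taking each A_i to be a half-space recovers a Heaviside network.
```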
In Extremes, volume 23, issue no. 3, pages 493-499, 2020, ISSN 1386-1999.
We acknowledge the priority on the introduction of the formula of the t-lgHill estimator for the positive extreme value index. We provide a novel motivation for this estimator based on ecologically driven dynamical systems. Another motivation is given directly by applying the general t-Hill procedure to the log-gamma distribution. We illustrate the good performance of the t-lgHill estimator in comparison to the classical Hill estimator on novel data on the concentration of arsenic in drinking water in the rural area of the Arica and Parinacota Region, Chile.
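The classical Hill estimator used as the baseline in the comparison has a well-known closed form: the average log-excess of the k largest order statistics over the (k+1)-th. A minimal sketch on a simulated Pareto sample (sample construction and function name are illustrative; the t-lgHill formula itself is not reproduced here):

```python
import math
import random

def hill(sample, k):
    # Classical Hill estimator of the positive extreme value index
    # (tail index) from the k largest order statistics:
    # (1/k) * sum_{i<k} log(X_(i+1) / X_(k+1)), X_(1) >= X_(2) >= ...
    xs = sorted(sample, reverse=True)
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

# Pareto sample with tail index 1/2 via inverse-CDF sampling:
# X = U^(-1/2) has P(X > x) = x^(-2), so the true index is 0.5
rng = random.Random(42)
sample = [rng.random() ** (-0.5) for _ in range(20000)]
print(hill(sample, 500))   # close to 0.5
```

The choice of k trades bias against variance; the paper's point is that the t-lgHill variant can behave better than this baseline on data such as the arsenic concentrations.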
In IEEE Transactions on Neural Networks and Learning Systems, issue Online 12 October 2020, 2021, ISSN 2162-237X.
Suitability of shallow (one-hidden-layer) networks with translation-invariant kernel units for function approximation and classification tasks is investigated. It is shown that a critical property influencing the capabilities of kernel networks is how the Fourier transforms of kernels converge to zero. The Fourier transforms of kernels suitable for multivariable approximation can have negative values but must be almost everywhere nonzero. In contrast, the Fourier transforms of kernels suitable for maximal margin classification must be everywhere nonnegative but can have large sets where they are equal to zero (e.g., they can be compactly supported). The behavior of the Fourier transforms of multivariable kernels is analyzed using the Hankel transform. The general results are illustrated by examples of both univariable and multivariable kernels (such as Gaussian, Laplace, rectangle, sinc, and cut power kernels).
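The sign behavior distinguishing the two kernel classes can be checked numerically from standard closed forms: the Fourier transform of the Gaussian is again a Gaussian (everywhere positive), while that of the rectangle kernel is a sinc, which oscillates through negative values. A small sketch using these textbook transforms (normalization constants dropped; function names are illustrative):

```python
import math

def ft_gauss(w):
    # Fourier transform of exp(-x^2/2), up to a positive constant:
    # exp(-w^2/2), strictly positive for every frequency w
    return math.exp(-w * w / 2)

def ft_rect(w):
    # Fourier transform of the indicator of [-1, 1], up to a constant:
    # 2 sin(w)/w, which changes sign (negative e.g. for pi < w < 2*pi)
    return 2.0 if w == 0 else 2 * math.sin(w) / w

ws = [0.5 * i for i in range(1, 40)]
gauss_all_positive = all(ft_gauss(w) > 0 for w in ws)
rect_goes_negative = any(ft_rect(w) < 0 for w in ws)
print(gauss_all_positive, rect_goes_negative)
```

In the paper's terms, the Gaussian thus qualifies for both approximation (almost everywhere nonzero transform) and maximal margin classification (nonnegative transform), whereas the rectangle kernel fails the classification-side condition.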