Neural Networks | Нейронные сети
Everything about machine learning

For all inquiries: @notxxx1

An Algorithmic Barrier to Neural Circuit Understanding
Venkatakrishnan Ramaswamy: https://www.biorxiv.org/content/10.1101/639724v1
#Algorithme #Neuroscience #innovation #technology

🔗 An Algorithmic Barrier to Neural Circuit Understanding
Neuroscience is witnessing extraordinary progress in experimental techniques, especially at the neural circuit level. These advances are largely aimed at enabling us to understand how neural circuit computations mechanistically cause behavior. Here, using techniques from Theoretical Computer Science, we examine how many experiments are needed to obtain such an empirical understanding. It is proved, mathematically, that establishing the most extensive notions of understanding requires exponentially many experiments in the number of neurons, in general, unless a widely posited hypothesis about computation is false. Worse still, the feasible experimental regime is one where the number of experiments scales sub-linearly in the number of neurons, suggesting a fundamental impediment to such an understanding. Determining which notions of understanding are algorithmically tractable thus becomes an important new endeavor in Neuroscience.
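To get a feel for the scaling gap the abstract describes, the toy comparison below contrasts an exponential experiment requirement with a sub-linear experimental budget. The specific growth functions (2^n and √n) are illustrative assumptions standing in for "exponentially many" and "sub-linear", not figures from the paper.

```python
# Illustrative only: the worst-case number of experiments grows
# exponentially in the neuron count n, while the feasible budget
# grows sub-linearly. Both functions are hypothetical placeholders.
def experiments_needed(n: int) -> int:
    return 2 ** n          # exponential worst-case requirement

def experiments_feasible(n: int) -> int:
    return int(n ** 0.5)   # a sub-linear budget, e.g. sqrt(n)

for n in (10, 20, 40):
    need, can = experiments_needed(n), experiments_feasible(n)
    print(f"n={n}: need {need}, can run {can}")
```

Even at 40 neurons the gap is astronomical, which is the sense in which the barrier is "algorithmic" rather than merely a matter of better instrumentation.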
Interrogating theoretical models of neural computation with deep inference
Bittner et al.: https://www.biorxiv.org/content/10.1101/837567v2
#Neuroscience

🔗 Interrogating theoretical models of neural computation with deep inference
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon – whether behavioral or in terms of neural activity – and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choices of model parameters. Historically, the gold standard has been to analytically derive the relationship between model parameters and computational properties. However, this enterprise quickly becomes infeasible as biologically realistic constraints are included in the model, increasing its complexity and often resulting in ad hoc approaches to understanding the relationship between model and computation. We bring recent machine learning techniques – the use of deep generative models for probabilistic inference – to bear on this problem, learning distributions of parameters that produce the specified properties.
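The core idea — fitting a distribution over circuit parameters that satisfies a specified emergent property while staying as spread out as possible — can be sketched in miniature. The toy below replaces the paper's deep generative model with a one-dimensional Gaussian over a single gain parameter, and the circuit with a scalar linear map; the model, targets, and entropy weight are all assumptions for illustration, not the authors' method.

```python
import numpy as np

# Toy "circuit": a single gain w maps input x to rate w*x.
# Emergent property: the mean rate should equal target t at x = 1.
# We fit q(w) = N(mu, sigma^2) so the property holds while keeping
# q entropic (spread out) -- a miniature of the max-entropy flavor
# of deep parameter inference, with a trivial model instead of a
# deep generative network.

x, t, beta = 1.0, 2.0, 0.1    # input, target rate, entropy weight
mu, log_sigma = 0.0, 0.0      # variational parameters of q(w)
lr = 0.05

for _ in range(2000):
    sigma = np.exp(log_sigma)
    # Closed form: E_q[(w*x - t)^2] = (mu*x - t)^2 + sigma^2 * x^2
    grad_mu = 2 * x * (mu * x - t)
    # Objective adds -beta * log(sigma), a Gaussian entropy bonus,
    # so sigma does not collapse to zero.
    grad_log_sigma = 2 * sigma**2 * x**2 - beta
    mu -= lr * grad_mu
    log_sigma -= lr * grad_log_sigma

print(round(mu, 3))                    # mean gain -> ~2.0 (property met)
print(round(np.exp(log_sigma)**2, 3))  # variance -> beta/(2*x**2) = 0.05
```

The learned mean satisfies the property exactly, and the variance settles at the largest value the entropy bonus can buy — exactly the trade-off that, at scale, requires the deep generative models the abstract describes.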