Neurodata Without Borders (NWB) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build analysis tools for neurophysiology data. NWB is designed to store a variety of neurophysiology data, including data from intracellular and extracellular electrophysiology experiments, data from optical physiology experiments, and tracking and stimulus data.
https://www.nwb.org
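For a concrete feel of the standard, here is a minimal sketch of writing a file with pynwb, the reference Python API for NWB. The session metadata and the fake voltage trace are made-up placeholders:

```python
from datetime import datetime, timezone

import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

# Required session metadata (placeholder values).
nwbfile = NWBFile(
    session_description="toy session",
    identifier="example-0001",
    session_start_time=datetime.now(timezone.utc),
)

# A fake 1 kHz voltage trace stored as an acquisition.
trace = TimeSeries(
    name="voltage",
    data=np.random.randn(1000),
    unit="volts",
    starting_time=0.0,
    rate=1000.0,
)
nwbfile.add_acquisition(trace)

# Write to disk; any NWB-aware tool can now read the file.
with NWBHDF5IO("example.nwb", "w") as io:
    io.write(nwbfile)
```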
👌2
CRCNS - Collaborative Research in Computational Neuroscience - Data sharing
To enable concerted efforts in understanding the brain, experimental data and other resources such as stimuli and analysis tools should be widely shared by researchers all over the world. To serve this purpose, this website provides a marketplace and discussion forum for sharing tools and data in neuroscience.
https://crcns.org
👌3
In Search of Invariance in Brains and Machines
Despite their seemingly impressive performance at image recognition and other perceptual tasks, deep convolutional neural networks are prone to be easily fooled, sensitive to adversarial attack, and have trouble generalizing to data outside the training domain that arise from everyday interactions with the real world. The premise of this talk is that these shortcomings stem from the lack of an appropriate mathematical framework for posing the problems at the core of deep learning - in particular, modeling hierarchical structure, and the ability to describe transformations, such as variations in pose, that occur when viewing objects in the real world. Here I will describe an approach that draws from a well-developed branch of mathematics for representing and computing these transformations: Lie theory. In particular, I shall describe a method for learning shapes and their transformations from images in an unsupervised manner using Lie Group Sparse Coding. Additionally, I will show how the generalized bispectrum can potentially be used to learn invariant representations that are complete and impossible to fool.
https://www.youtube.com/watch?v=GPMcJa88qaE
YouTube
Presented By: Bruno Olshausen | Professor; Helen Wills Neuroscience Institute & School of Optometry and Director; Redwood Center for Theoretical Neuroscience, U.C. Berkeley
Presented: May 15th | 11am - 12pm | Georgia Institute of Technology | IBB 1128
Talk…
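As an illustrative toy example (mine, not from the talk) of what "Lie theory for representing transformations" means: a continuous family of transformations is generated by exponentiating an element of the Lie algebra. For 2-D rotations:

```python
import numpy as np
from scipy.linalg import expm

# Generator (Lie algebra element) of planar rotations.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

theta = np.pi / 4
R = expm(theta * A)                      # group element: rotation by theta

expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R, expected)
# Lie Group Sparse Coding learns such generators from data, so pose changes
# become smooth paths exp(theta*A) rather than arbitrary pixel changes.
```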
What are the necessary steps toward open neuroscience? Below is a short note by Samuel Gershman on the matter.
[ read more ]
Follow for more: @theTuringMachine
The Transmitter: Neuroscience News and Perspectives
A README for open neuroscience
Making data (and code) useful for yourself automatically makes it useful for others.
What's so special about the human brain?
Torrents of data from cell atlases, brain organoids and other methods are finally delivering answers to an age-old question.
By Kerri Smith
Infographics by Nik Spencer
[ link ]
More: @theTuringMachine
❤4
Breaking Free from Neural Networks and Dynamical Systems
By Hessam Akhlaghpour
This blog post is written as a dialogue between two imaginary characters, arguing about the mainstream idea that the brain is a dynamical system.
[ link ]
Follow: @theTuringMachine
🔥2
I'm delighted to share that I published my first preprint, "From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework", on bioRxiv. In this study we introduce a framework for interpreting complex spiking neuronal network dynamics using nonlinear Hawkes process models and diffusion approximations. This approach allows us to extract tractable equations, offering dynamical insights into behavior and cognition... [ read more ]
Follow: @theTuringMachine
"From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework" in bioRxiv. In this study we introduced a framework to interpret complex spiking neuronal network dynamics using nonlinear Hawkes process models and diffusion approximations. This approach allows extracting tractable equations, offering dynamical insights into behavior and cognition.... [ read more ]
Follow: @theTuringMachine
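For readers curious what a nonlinear Hawkes process looks like in practice, here is a toy simulation sketch using Ogata's thinning algorithm. The softplus link and the excitatory exponential kernel are my illustrative choices, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(u):
    # Softplus link: keeps the intensity nonnegative.
    return np.log1p(np.exp(u))

def simulate_nl_hawkes(T=50.0, mu=-0.5, w=1.0, tau=1.0):
    """Spike times of a univariate nonlinear Hawkes process with an
    excitatory (w >= 0) exponential kernel, via Ogata's thinning."""
    t, h, spikes = 0.0, 0.0, []
    while True:
        lam_bar = phi(mu + h)          # upper bound: h only decays until the next spike
        dt = rng.exponential(1.0 / lam_bar)
        t += dt
        if t >= T:
            break
        h *= np.exp(-dt / tau)         # history decays over the waiting time
        if rng.uniform() * lam_bar <= phi(mu + h):   # accept with prob lam(t)/lam_bar
            spikes.append(t)
            h += w                     # each spike bumps the filtered history
    return np.array(spikes)

print(len(simulate_nl_hawkes()), "spikes in 50 time units")
```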
👏12❤5👍1
If you're looking for a multimedia way of learning differential equations, look no further: check out the online book below from Dr. Trefor Bazett.
Introduction to Differential Equations
This book consists of an introduction to Differential Equations, primarily focusing on Ordinary Differential Equations (ODEs). It is used specifically in Math 204 at the University of Victoria, but covers a fairly typical one-semester introductory course for students who have taken Calculus.
Follow for more: @theTuringMachine.
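As a quick taste of the subject (my example, not from the book): solving the logistic equation numerically with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 10.0

def logistic(t, y):
    # dy/dt = r*y*(1 - y/K): growth that saturates at the carrying capacity K.
    return r * y * (1.0 - y / K)

sol = solve_ivp(logistic, (0.0, 10.0), [0.5])
print(sol.y[0, -1])   # ≈ 10, the carrying capacity
```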
👍3❤1
A recipe for pain-free academic writing
A short guide to scientific writing.
https://gmarkkula.github.io/writing-recipe/
gmarkkula.github.io
A recipe for pain-free(?) academic writing
Judging by most reports on the subject, for most people most of the time, writing is a painful experience. However, if what you’re writing is something like a paper or a thesis, you’re in luck: I think academic writing can be an exception to the universal…
👍1
I'm going to give a talk at the Computational Machinery of Cognition (CMC) lab on my recent paper, today at 2pm CET.
You can join if you're interested.
From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework
Abstract:
Modeling and interpreting the complex recurrent dynamics of neuronal spiking activity is essential to understanding how networks implement behavior and cognition. Nonlinear Hawkes process models can capture a large range of spiking dynamics, but remain difficult to interpret, due to their discontinuous and stochastic nature. To address this challenge, we introduce a novel framework based on a piecewise deterministic Markov process representation of the nonlinear Hawkes process (NH-PDMP) followed by a diffusion approximation. We analytically derive stability conditions and dynamical properties of the obtained diffusion processes for single-neuron and network models. We established the accuracy of the diffusion approximation framework by comparing it with exact continuous-time simulations of the original neuronal NH-PDMP models. Our framework offers an analytical and geometric account of the neuronal dynamics repertoire captured by nonlinear Hawkes process models, both for the canonical responses of single neurons and neuronal-network dynamics, such as winner-take-all and traveling wave phenomena. Applied to human and nonhuman primate recordings of neuronal spiking activity during speech processing and motor tasks, respectively, our approach revealed that task features can be retrieved from the dynamical landscape of the fitted models. The combination of NH-PDMP representations and diffusion approximations thus provides a novel dynamical analysis framework to reveal single-neuron and neuronal-population dynamics directly from models fitted to spiking data.
[ meeting link ]
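To give a flavor of the diffusion-approximation step (a generic sketch of the idea, not the paper's exact NH-PDMP construction): the point-process increment dN(t) is replaced by its mean λ dt plus Gaussian fluctuations √λ dW, turning the filtered spike history into an SDE that can be integrated with Euler-Maruyama:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(u):
    return np.log1p(np.exp(u))   # softplus link, intensity lam(h) = phi(mu + h)

def diffusion_path(T=50.0, dt=1e-3, mu=-0.5, w=1.0, tau=1.0):
    """Euler-Maruyama path of dh = (-h/tau + w*lam(h)) dt + w*sqrt(lam(h)) dW."""
    n = int(T / dt)
    h = np.zeros(n)
    for k in range(n - 1):
        lam = phi(mu + h[k])
        drift = -h[k] / tau + w * lam
        h[k + 1] = h[k] + drift * dt + w * np.sqrt(lam * dt) * rng.standard_normal()
    return h

h = diffusion_path()
# Fixed points solve -h/tau + w*phi(mu + h) = 0, so the fitted model's
# dynamics can be read off geometrically (stability, bifurcations, ...).
```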
🔥5❤2👍2💔1
Neural ODEs
This web page is the companion website to our NeurIPS 2020 tutorial, created by Zico Kolter, David Duvenaud, and Matt Johnson. The page contains notes to accompany the tutorial... [ more ]
@theTuringMachine
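The core object of the tutorial is easy to sketch: a hidden state whose time derivative is a small neural network, integrated by an off-the-shelf solver. A minimal NumPy/SciPy version (my sketch, not the tutorial's code):

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
# Fixed (untrained) weights of a one-hidden-layer MLP vector field.
W1, b1 = 0.5 * rng.standard_normal((16, 2)), np.zeros(16)
W2, b2 = 0.5 * rng.standard_normal((2, 16)), np.zeros(2)

def f_theta(t, z):
    # dz/dt = f_theta(z): the "layer" is a continuous-time flow.
    return W2 @ np.tanh(W1 @ z + b1) + b2

z0 = np.array([1.0, 0.0])
sol = solve_ivp(f_theta, (0.0, 1.0), z0)
print(sol.y[:, -1])   # output of the neural-ODE "layer" at t = 1
# Training backpropagates through the solver (or uses the adjoint method),
# which is what the tutorial walks through.
```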
👍3
Forwarded from the last neural cell (Aleksejs Timčenko)
📚A collection of papers & tools on dynamical systems and deep learning
https://github.com/yantijin/dynamic-systems-DL
It's divided into the following topics:
- Differential Equations in Deep Learning
  - General Architectures
  - Neural ODEs (ordinary diff. eq.)
  - Neural SDEs, CSDEs, CDEs...
  - Normalizing Flows
  - Applications
  - Energy based models
    - Hamilton
      - Applications
    - Lagrange
- Deep Learning Methods for Differential Equations
  - Solving Differential Equations
  - Learning PDEs
  - Applications
  - Model Discovery
- Deep Control
  - Model-Predictive-Control
- Dynamical System View of Deep Learning
  - Recurrent Neural Networks
  - Theory and Perspectives
  - Optimization
- Signals and systems
- Software and Libraries
  - Python
  - Julia
- Websites and Blogs
----
GitHub
GitHub - yantijin/dynamic-systems-DL: Collection of resources that combine dynamic systems, control with deep learning.
❤4👍3
Low-rank RNNs in ten minutes
Artificial neural networks are super cool. They are known for all sorts of computational prowesses, and they also happen to model brain processes quite well. Among artificial neural networks, there are recurrent neural networks (RNNs), which contain a pool of interconnected neurons, whose activity evolves over time. These networks can be trained to perform all sorts of cognitive tasks, and they exhibit activity patterns that are quite similar to what is observed in many brain areas... [ Read more ]
A post by Adrian Valente
Follow: @theTuringMachine
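A minimal sketch of the idea (assuming the standard rate equations dx/dt = -x + J tanh(x) with rank-one connectivity J = m nᵀ/N; this is my toy code, not Valente's): the N-dimensional activity collapses onto the line spanned by m, leaving a single latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 500, 0.01, 2000
m = rng.standard_normal(N)
n = 2.0 * m + 0.5 * rng.standard_normal(N)   # overlap n·m/N ≈ 2 > 1 gives a nonzero fixed point
J = np.outer(m, n) / N                       # rank-one connectivity matrix

x = 0.1 * rng.standard_normal(N)
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))

# J @ tanh(x) always points along m, so activity collapses onto the line
# spanned by m; the scalar kappa = n·tanh(x)/N is the single latent variable.
kappa = n @ np.tanh(x) / N
cos = abs(x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
print(f"kappa = {kappa:.3f}, alignment with m: {cos:.3f}")   # alignment ≈ 1
```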
❤4👍1
Forwarded from the last neural cell (Aleksejs Timčenko)
✨ Continuing to share newsletters for inspiration and general perspective broadening
Neural manifolds: Latest buzzword or pathway to understand the brain?
Check out this insightful piece from The Transmitter on neural manifolds and population coding.
The article presents a case for viewing neural information processing through the lens of coordinated population activity rather than individual neuron responses. The neural manifold framework – where neural activity is represented as trajectories on a geometric surface in high-dimensional space – offers a powerful way to understand how the brain processes information.
🔥 Key insight: while neural activity involves billions of neurons, it appears to be constrained to lower-dimensional manifolds, similar to how individual words gain meaning only in the context of sentences. The author addresses common misconceptions, clarifying that manifolds aren't just about dimensionality reduction, but rather about understanding the geometry and topology of neural population dynamics.
@Altime: I am also starting my work in neural manifolds in visual processing, so going to share some cool stuff with you once it's done 👀
---
The Transmitter consistently delivers such perspectives on neuroscience, bridging theoretical frameworks with experimental insights. Worth following for anyone interested in systems neuroscience and neural computation!
They can send you weekly or monthly digests, whichever you prefer.
P.S. [For the Russian-speaking audience] Since we write in two languages, I suggest using Telegram's built-in translation if you run into any difficulties.
The Transmitter
Neural manifolds: Latest buzzword or pathway to understand the brain?
When you cut away the misconceptions, neural manifolds present a conceptually appropriate level at which systems neuroscientists can study the brain.
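A toy example (mine, not from the article) makes the point concrete: the responses of 100 "neurons" driven by a single latent angle trace out a ring in 100-dimensional space, and PCA reveals that nearly all the variance lives in just two dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)        # one latent variable (e.g. heading)
pref = rng.uniform(0, 2 * np.pi, 100)          # each neuron's preferred angle
X = np.cos(theta[:, None] - pref[None, :])     # cosine tuning-curve responses
X += 0.05 * rng.standard_normal(X.shape)       # measurement noise

X -= X.mean(axis=0)                            # center before PCA
var = np.linalg.svd(X, compute_uv=False) ** 2
print(np.round(var[:4] / var.sum(), 3))        # first two PCs carry ~all variance
```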
👍3❤1
Forwarded from Complex Systems Studies
Mathematical Methods in Computational Neuroscience - Application deadline
📆 Mar 31, 2025
https://www.compneuronrsn.org/
Mathematical Methods in Computational Neuroscience
Summer school in Eresfjord, Norway (July 8th - 26th, 2024)
👍2
The Elements of Style
Author: William Strunk, Jr.
Asserting that one must first know the rules to break them, this classic reference book is a must-have for any student and conscientious writer. Intended for courses in which the practice of composition is combined with the study of literature, it gives in brief space the principal requirements of plain English style and concentrates attention on the rules of usage and principles of composition most commonly violated.
[ more ]
@theTuringMachine
🙏2❤1
Forwarded from Scientific Programming (ziaee)
Global Brain Reconfiguration After Local Neural Manipulation.wav
37.2 MB
Our new research article in PNAS investigates how localized brain manipulations, such as lesions or silencing, impact the entire brain's functional connectivity in mice. Combining fMRI data with computational modeling, the study reveals that these targeted interventions lead to widespread network reconfigurations, sometimes decreasing and other times increasing connectivity. We used personalized brain simulations to explore the underlying mechanisms of this phenomenon, known as diaschisis, finding that alterations in local neuronal excitability drive these global changes. These findings offer insights into the broad effects of focal brain disruptions and could inform the development of more precise therapeutic strategies targeting brain dynamics. The data and analysis tools are publicly available.
https://www.pnas.org/doi/10.1073/pnas.2405706122
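A toy network sketch (my illustration, not the paper's personalized simulations) captures the core mechanism: reducing one node's excitability reshapes the correlation structure of the whole network, not just the connections of the silenced node.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 8, 0.01, 20000
W = rng.uniform(0.0, 0.4, (N, N))              # random excitatory coupling
np.fill_diagonal(W, 0.0)

def functional_connectivity(gain):
    """Correlation matrix of a noisy rate network; gain_i is node i's excitability."""
    x = np.zeros(N)
    traces = np.empty((steps, N))
    for k in range(steps):
        drive = gain * (W @ np.tanh(x))
        x = x + dt * (-x + drive) + np.sqrt(dt) * 0.2 * rng.standard_normal(N)
        traces[k] = x
    return np.corrcoef(traces.T)

fc_intact = functional_connectivity(np.ones(N))
gain = np.ones(N)
gain[0] = 0.0                                  # "silence" node 0's excitability
fc_lesioned = functional_connectivity(gain)

# The change is not confined to node 0's row/column -- connectivity
# between untouched nodes is reconfigured too:
print(np.abs(fc_intact - fc_lesioned)[1:, 1:].max())
```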
👍2