the last neural cell
We write about BCI, AI, and brain research.

authors:
@kovalev_alvi - visual neural interfaces - UMH, Spain | CEO of ALVI Labs
@Altime - comp neuro PhD @ GTC Tübingen

Our chat: @neural_cell_chat
#03 Review. Part 2.

Imagined speech can be decoded from low- and cross-frequency intracranial EEG features.

🔥Visual part
#04 Review.
Neurons learn by predicting future activity

📑 paper (Nature Machine Intelligence, 2022)

💡 "Neurons have intrinsic predictive learning rule, which is updating synaptic weights (strength of connections) based on minimizing “surprise”: difference between actual and predicted activity. This rule optimizes energy balance of a neuron"

What did the authors do to show that this could indeed be the case?

🔥 Read the full review using free Medium link:
https://medium.com/@timchenko.alexey/bdb51a7a00cf?source=friends_link&sk=97920dd2d602e9187bd8fabeb1b39a0b

Feel free to comment on anything that caught your attention or you didn't quite understand. I want these reviews to be concise and clear, so your feedback is highly appreciated ☺️
#05 Review.
#bci #deeplearning

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria → paper
Cool video about it → video

🧐 At a glance:

Anarthria (the inability to articulate speech) makes it hard for paralyzed people to interact with the world. The ability to decode words and sentences directly from cerebral activity (ECoG) could give such patients a way to communicate.

The authors built an AI model to predict words from neural activity, achieving 98% accuracy for speech detection and 47% accuracy for word classification across 50 classes (chance level: 2%).

🔥 Read the full review using free Medium link → medium
Hi everyone, we're really glad you're reading us. There are already 120 🔥🔥

We have great news. We have created a blog on Medium where all our reviews are collected in one place and categorized.
Telegram articles will also continue to be published.

If you have a Medium subscription, follow this link and subscribe.
Any feedback is welcome.
https://medium.com/the-last-neural-cell
#06 Summary.
Brain stimulation and imaging methods #2. Overview of optogenetics.
#neuroscience #neurostimulation

⚡️ Briefly
Optogenetics is a set of methods aimed at genetically modifying neurons to interact with light. Light-sensitive proteins allow for both stimulation and recording of neuronal activity with high spatial and temporal precision.

🔎 Contents:
- How optogenetics works
- Why is it useful: features & experimental insights
- Advantages & disadvantages
- Potential improvements

👉 Summary [ link ]

This is a method overview to set the ground for the upcoming summaries on brain-to-brain and closed-loop interfaces in mice. Stay tuned 😎
That's awesome 🤩

I think this benchmark can push development in Artificial General Intelligence (AGI).

It could be a next-gen Turing test.

For English docs, refer to the GitHub link in the post 😉

#interesting #ml
🛋 BIG-bench: meet the benchmark and the paper "BEYOND THE IMITATION GAME"

The paper's title, "BEYOND THE IMITATION GAME", alludes to Turing's work; the paper presents a benchmark, current as of 2022, designed to quantitatively evaluate large language models such as GPT-3 or PaLM.

The current mainstream paradigm in NLP looks like this: "doesn't work at 1.3B? try 12B; doesn't work at 12B? take 175B, and so on. No need to look for new approaches: attention and parameters are all you need, as they say..."

🤔 But how do we evaluate these huge models?

To solve this problem, 442 (WHAT?!) researchers from 132 organizations introduced the Beyond the Imitation Game benchmark (BIG-bench). Its tasks are diverse, covering linguistics, child development, mathematics, biology, physics, social biases, software engineering, and so on.

BIG-bench focuses on tasks believed to be beyond the capabilities of current language models.

🪑 BIG-bench
⚗️ Colab for evaluating T5
🧻 paper

@мишин лернинг
Stumbled upon G. Buzsáki's book "The Brain from Inside-Out" (2019) after reading "Rhythms of the Brain" (btw, hit 🤔 if you would like to see a summary of the book in this channel).

Some gentlemen have kindly created a thorough document in which each chapter of the 2019 book is summarized and discussed:

[book club link]

Check out if you enjoy such neuroscience topics as:
- neural code
- oscillations
- memory coding
- systems/network neuroscience
- relation of action and cognition

#interesting #neuroscience
#07 Summary. A fast intracortical brain–machine interface with patterned optogenetic feedback
#bci #optogenetics

[ paper ]

⚡️ Briefly
The neuroengineers provided a proof of concept of a fast closed-loop brain-computer interface (BCI) in mice. They used a control signal from the motor cortex to drive a virtual bar and stimulated the sensory cortex whenever the bar "touched" the mouse. The mouse successfully learned the behavioral task relying solely on artificial inputs and outputs.
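The closed loop itself is simple to state. A minimal sketch (our illustration; `read_motor_rate`, `stimulate_s1`, and `give_reward` are hypothetical stand-ins for the rig's recording, optogenetic stimulation, and reward hardware):

```python
import time

def closed_loop_trial(read_motor_rate, stimulate_s1, give_reward,
                      gain=0.05, touch_zone=0.9, target=1.0, dt=0.001):
    """One trial: motor-cortex activity drives a virtual bar; patterned
    optogenetic feedback is delivered when the bar 'touches' the mouse."""
    bar = 0.0
    while bar < target:
        rate = read_motor_rate()   # decoded firing rate (spikes/s)
        bar += gain * rate * dt    # map neural activity to bar movement
        if bar >= touch_zone:
            stimulate_s1()         # artificial sensory feedback
        time.sleep(dt)             # keep loop latency low and fixed
    give_reward()                  # reinforce the successful trial

# Toy demo with stand-in hardware:
closed_loop_trial(read_motor_rate=lambda: 20.0,
                  stimulate_s1=lambda: None,
                  give_reward=lambda: print("reward"))
```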

🔎 Contents:
- Research pipeline (recording, stimulation, processing, closed-loop setup, behavioral task)
- Achieved results and performance
- Limitations and future development
- Potential and final thoughts

👉 Summary [ link ]

Looking forward to your comments and suggestions! Next time - optogenetic brain-to-brain interface 😉
#08 Summary. Masked Autoencoder is all you need for any modality.
#deeplearning #ml

⚡️ Briefly
To solve complicated tasks, a machine learning algorithm must understand the data and extract good features from it. Training models that generalize usually requires a lot of annotated data, which is expensive and in some cases impossible to obtain.

The Masked Autoencoder (MAE) technique makes it possible to train a model on unlabeled data and obtain surprisingly good feature representations for all common modalities.
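A minimal sketch of the masking step (assuming ViT-style patch tokens; our toy illustration, not any paper's official code):

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens; the encoder sees only these,
    and the decoder must reconstruct the rest. tokens: (B, N, D)."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                     # one random score per patch
    keep_idx = noise.argsort(dim=1)[:, :n_keep]  # patches with lowest scores
    visible = torch.gather(
        tokens, 1, keep_idx.unsqueeze(-1).expand(B, n_keep, D))
    return visible, keep_idx

x = torch.randn(2, 196, 768)       # e.g. 14x14 image patches, ViT-B width
visible, keep_idx = random_masking(x)
print(visible.shape)               # torch.Size([2, 49, 768]): 75% masked out
```

The same recipe applies to any modality: swap image patches for text tokens, spectrogram patches, or video tubelets.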

🔎 Contents:
- Explanation of MAE approach
- Recipe for all domains
- Crazy experimental results for all types of data

👉 Summary [ link ]

Papers:
- BERT: text
- MAE: image
- M3MAE: image + text
- MAE that Listen: audio spectrograms
- VideoMAE: video

Looking forward to your comments and suggestions!

Next time: a mind-blowing paper from Meta AI about speech reconstruction from noninvasive brain signals. 🔥🔥🔥
Join the new DS/DL community on Telegram, an alternative to the Open Data Science community.

So, if you are interested in deep learning, feel free to jump in. 🥳

https://t.iss.one/betterdatacommunity
GPT-4

It was trained on both images and text, so very soon you will be able to use images as prompts in ChatGPT.

Bing uses GPT-4 under the hood.

Share your ideas about applications of a multimodal language model.

Link: https://openai.com/research/gpt-4
#09 Summary
🤖 RT-1: Robotics Transformer for Real-World Control at Scale

project page → link

⚡️Long story short:
RT-1 can efficiently perform everyday tasks using a hand manipulator based on text instructions.

🤓Methodology:
The research employs imitation learning to train the agent, which is composed of pre-trained language and image models, along with a decoder for predicting actions.

📌Key Components:
1. The model receives text instructions and derives sentence embeddings using a pre-trained T5 model.
2. It processes six images (the robot's environment) via EfficientNet, with text embedding integration as detailed in the paper.
3. Subsequently, RT-1 processes multimodal (text + images) features using a decoder-only model.

📊Training:
RT-1 was trained in a supervised setting with the aim of predicting the next action, as a human annotator would. The dataset consists of 130k demonstrations across 744 tasks. During training, RT-1 is given six frames, resulting in 48 tokens (6x8) from image and text instructions.
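A toy, shape-level sketch of where those 48 tokens come from (our reading of the pipeline; the TokenLearner below is a simplified placeholder, not the paper's exact module):

```python
import torch
from torch import nn

class TokenLearner(nn.Module):
    """Compresses many image tokens to a few via learned spatial attention."""
    def __init__(self, dim=512, n_out=8):
        super().__init__()
        self.scores = nn.Linear(dim, n_out)           # attention logits
    def forward(self, x):                             # x: (B, 81, dim)
        attn = self.scores(x).softmax(dim=1)          # normalize over patches
        return torch.einsum('bnd,bnk->bkd', x, attn)  # (B, 8, dim)

B, dim = 1, 512
# 6 frames, each already fused with the text embedding (FiLM-conditioned
# EfficientNet gives a 9x9 = 81-token feature map per frame)
frame_tokens = torch.randn(B, 6, 81, dim)
tl = TokenLearner(dim)
seq = torch.cat([tl(frame_tokens[:, t]) for t in range(6)], dim=1)
print(seq.shape)  # torch.Size([1, 48, 512]): the 6x8 tokens fed to the decoder
```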

💃Intriguing insights:
1. Auto-regressive methods tend to slow inference down and yield poorer performance.
2. Discretizing the action space turns the problem into classification rather than regression, and allows sampling from the predicted distribution (see the sketch below).
3. Continuous actions perform worse in comparison.
4. Input tokens are computed only once, with overlapped inference applied.
5. Data diversity proves to be more critical than data quantity.
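On point 2, a minimal sketch of action discretization (RT-1 reportedly uses 256 uniform bins per action dimension; the range and the sampling code below are our illustration):

```python
import numpy as np

n_bins = 256
low, high = -1.0, 1.0                      # assumed action range per dimension
edges = np.linspace(low, high, n_bins + 1)

def to_token(a):
    """Continuous action -> class index (a classification target)."""
    return int(np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1))

def from_token(idx):
    """Class index -> bin center (the action actually executed)."""
    return low + (idx + 0.5) * (high - low) / n_bins

# Sampling from the predicted distribution instead of regressing a mean:
logits = np.random.randn(n_bins)           # stand-in for the decoder's output
probs = np.exp(logits - logits.max()); probs /= probs.sum()
action = from_token(np.random.choice(n_bins, p=probs))
```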

😅My thoughts:
The RT-1 model demonstrates impressive results in accomplishing everyday tasks based on text instructions.
AI models like RT-1, with their increasing abilities in complex tasks, may soon deserve human names, such as "Robert" for RT-1, to highlight their advancements.
FingerFlex: Inferring Finger Trajectories from ECoG signals

We are happy to share our preprint, FingerFlex. We propose a new state-of-the-art model for predicting finger movements from brain activity (ECoG).
✍️ paper
🧑‍💻 github

Authors: Vlad Lomtev, @kovalev_alvi, Alex Timchenko

Architecture.
We use a convolutional encoder-decoder architecture adapted for finger-movement regression on electrocorticographic (ECoG) brain data. A simple U-Net-inspired architecture already shows great performance. As ECoG features we use wavelet transforms.
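For intuition, here is a minimal wavelet-feature sketch (an illustration with PyWavelets, not our exact pipeline; see the github for that. The sampling rate and frequency grid below are placeholders):

```python
import numpy as np
import pywt

fs = 1000                                  # ECoG sampling rate, Hz (example)
ecog = np.random.randn(10 * fs)            # one channel, 10 s of signal

freqs = np.geomspace(8, 200, num=40)       # 8-200 Hz, log-spaced grid
scales = pywt.frequency2scale('morl', freqs / fs)
coeffs, _ = pywt.cwt(ecog, scales, 'morl', sampling_period=1 / fs)

power = np.abs(coeffs)                     # (40, n_samples) time-frequency map
features = power[:, ::40]                  # downsample in time to match the
                                           # finger-trajectory rate (e.g. 25 Hz)
```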

Data.
We use open-source motor-brain datasets, BCI Competition IV and the Stanford dataset, which contain concurrently recorded brain activity and finger movements.

Results.
We beat all competitors on BCI Competition IV, with a correlation coefficient between true and predicted trajectories of up to 0.74.

🔥 Look at the video: a demonstration of our model on validation data.
ML papers | 01-13 June 2023

💎 Video + Text

Probabilistic Adaptation of Text-to-Video Models

What: fine-tune a large pretrained text-to-video model on a small set of domain-specific videos.

Complicated but interesting: you can fine-tune a pretrained diffusion model on your domain with a small additional block.

Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

What: fine-tune an LLM to understand video + audio.

A Q-Former extracts the audio and video features, which are then fed into a pretrained LLaMA model.

🧬Diffusion

Iterative α-(de)Blending: a Minimalist Deterministic Diffusion Model

What: a simple implementation of, and intuition for, diffusion models.

A good starting point to dive into the field and try it on your own data.

💎Audio Transformers

Simple and Controllable Music Generation

What: a decoder for text-to-audio generation based on latent audio features.

They use vector quantization (VQ); check it out if you haven't heard of it.
It represents data with a limited number of codebook vectors (toy sketch below).
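A minimal sketch of the VQ lookup itself (our toy illustration):

```python
import torch

codebook = torch.randn(1024, 64)   # 1024 learned code vectors of dimension 64

def quantize(z):
    """Replace each latent vector with its nearest codebook entry."""
    d = torch.cdist(z, codebook)   # (batch, 1024) distances to every code
    idx = d.argmin(dim=1)          # discrete tokens
    return codebook[idx], idx

z = torch.randn(8, 64)             # continuous latents from an audio encoder
z_q, tokens = quantize(z)          # z_q: latents snapped to the codebook
print(tokens)                      # 8 integer codes representing the audio
```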


💎 If you like this format, please say so in the comments.
#digest
🧬 Tasty papers | 13-20 June 2023

Multimodal

🟣LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

Adds visual information to an LLM using trainable adapters.

Expands LLaMA-Adapter V1 to vision:
+ applies early fusion for visual tokens;
+ adds calibration of the LLM's norm and bias parameters;
+ fine-tunes on an image-text dataset.

Audio

🟣High-Fidelity Audio Compression with Improved RVQGAN

Compresses natural audio to discrete tokens with the VQ technique.

Trains a universal compression model on all kinds of audio (speech, music, noise):
+ adds residual vector quantization (sketched below);
+ adds an adversarial (GAN) loss.
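Residual VQ extends the plain VQ lookup sketched earlier: each stage quantizes what the previous stages missed. A toy sketch (our illustration):

```python
import torch

def residual_vq(z, codebooks):
    """z: (batch, dim); codebooks: list of (n_codes, dim) tensors."""
    residual = z
    z_q = torch.zeros_like(z)
    tokens = []
    for cb in codebooks:
        idx = torch.cdist(residual, cb).argmin(dim=1)
        q = cb[idx]
        z_q = z_q + q              # running reconstruction gets finer
        residual = residual - q    # leftover error for the next stage
        tokens.append(idx)
    return z_q, tokens             # several coarse-to-fine token streams

codebooks = [torch.randn(1024, 64) for _ in range(4)]
z_q, tokens = residual_vq(torch.randn(8, 64), codebooks)
```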

🟣Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale

An audio generative "diffusion" model trained on 50k hours of data.

Uses flow matching, which is similar to diffusion but works better here (toy training step below).
Masked training setting with context information: the model can synthesize speech, remove noise, and edit content.
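Flow matching in one toy training step (our sketch on 2-D data, not Voicebox code): sample a straight path from noise to data and regress its constant velocity.

```python
import torch
from torch import nn

class VelocityNet(nn.Module):
    """Predicts the velocity field v(x_t, t)."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, xt, t):
        return self.net(torch.cat([xt, t], dim=1))

model = VelocityNet()
x1 = torch.randn(32, 2)              # stand-in for data (e.g. audio latents)
x0 = torch.randn_like(x1)            # pure noise
t = torch.rand(32, 1)                # random time in [0, 1]
xt = (1 - t) * x0 + t * x1           # point on the straight noise-to-data path
loss = ((model(xt, t) - (x1 - x0)) ** 2).mean()  # regress the path velocity
loss.backward()
```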

Neuro

🟢Decoding and synthesizing tonal language speech from brain activity

Decodes a tonal language from ECoG data with CNN-LSTM models.

Adapts a multi-stream model → looks unnecessarily complicated.
Records small datasets: overall, 10 minutes per patient for 8 different syllables.

#digest
Introducing a motor interface for amputees

This is the first AI model to decode precise finger movements for people with hand amputations. It uses only 8 surface EMG electrodes.

ALVI Interface can decode different types of movements in virtual reality:
🔘finger flexion
🔘finger extension
🟣typing
🟣and more

💎Full demo: YouTube link

Subscribe and follow the further progress of ALVI Labs:
Twitter: link
Instagram: link