Machine, are you learning?
857 subscribers
33 photos
5 videos
22 files
102 links
Insights in recent Machine Learning topics, approaches, models and papers.
Interested in collaboration, DM @infatum
Expectation: decided to go into ML so I wouldn't become a yml developer.
Reality: you shall not pass!!!
https://arxiv.org/abs/2406.11045

Kolmogorov-Arnold Networks outperform convolutional neural networks on a feature-extraction task for human activity recognition.

“Initial Investigation of Kolmogorov-Arnold Networks (KANs) as Feature Extractors for IMU Based Human Activity Recognition explores the use of a novel neural network architecture, the Kolmogorov-Arnold Networks (KANs) as feature extractors for sensor-based (specifically IMU) Human Activity Recognition (HAR).

We present an initial performance investigation of the KAN feature extractor on four public HAR datasets. It shows that the KAN-based feature extractor outperforms CNN-based extractors on all datasets while being more parameter efficient.”
MLP vs KAN kernels
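The core KAN idea behind the paper, for intuition: instead of fixed activations on nodes (as in an MLP/CNN), every edge carries its own learnable univariate function. A minimal sketch, assuming nothing from the paper's actual code; the class name is hypothetical and Gaussian RBFs stand in for the B-spline bases real KANs use:

```python
import numpy as np

class SimpleKANLayer:
    """Toy KAN layer: each edge (i, o) applies its own learnable
    univariate function, parameterised as a weighted sum of fixed
    radial basis functions (a simpler stand-in for B-splines)."""

    def __init__(self, in_dim, out_dim, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(-1.0, 1.0, n_basis)  # basis centres on [-1, 1]
        self.width = 2.0 / (n_basis - 1)
        # one coefficient vector per edge: shape (out_dim, in_dim, n_basis)
        self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

    def __call__(self, x):
        # x: (batch, in_dim) -> RBF activations (batch, in_dim, n_basis)
        phi = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # evaluate every edge's univariate function, then sum over inputs
        return np.einsum("bip,oip->bo", phi, self.coef)

layer = SimpleKANLayer(in_dim=6, out_dim=4)
out = layer(np.random.default_rng(1).uniform(-1, 1, size=(32, 6)))
print(out.shape)  # (32, 4)
```

The parameter efficiency the paper reports comes from exactly this structure: a few spline coefficients per edge can replace whole stacks of convolutional filters.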
So true
When I grow up, I want to be like him
Researchers at Flatiron Institute developed Simulation-Based Inference of Galaxies(SimBIG) with unprecedented precision to predict major cosmological parameters:

https://scitechdaily.com/rewriting-cosmic-calculations-new-ai-unlocks-the-universes-settings/
When you outplayed and outsmarted yourself
🔥 The code of DynOMo is out 🔥

👉DynOMo is a novel model that can track any point in a dynamic scene over time via 3D reconstruction from monocular video: 2D and 3D point tracking from unposed monocular camera input

👉Review https://t.ly/t5pCf
👉Paper https://lnkd.in/dwhzz4_t
👉Repo github.com/dvl-tum/DynOMo
👉Project https://lnkd.in/dMyku2HW
https://arxiv.org/pdf/2501.12948

DeepSeek-R1-Zero is a pure RL model, trained without any supervised data or fine-tuning, that achieved remarkable reasoning capabilities. It was trained on top of the DeepSeek-V3-Base model using the GRPO (Group Relative Policy Optimisation) approach. This is a truly amazing result that shows how undervalued RL's potential is. As I foresaw, the next big leap in AI will come from massive adoption of RL and its incorporation with pre-trained DL models.
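The core trick of GRPO, in one function: instead of training a separate critic/value network, sample a *group* of completions per prompt and use the group's own reward statistics as the baseline. A minimal sketch under that assumption; `grpo_advantages` is a hypothetical helper name, not from the paper's code:

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantage: normalise each completion's reward
    by the mean and std of its own sampling group, so no learned
    value function is needed as a baseline."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# e.g. 4 sampled answers to one prompt, scored by a rule-based verifier
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print(adv)  # above-mean answers get positive advantage, below-mean negative
```

These advantages then weight the usual clipped policy-gradient objective, exactly where PPO would plug in critic-based estimates.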

Is RL mass-adoption coming?


#DeepSeek #reinforcementlearning #LLM #GRPO #RL
Andrej Karpathy: “I don't have too too much to add on top of this earlier post on V3 and I think it applies to R1 too (which is the more recent, thinking equivalent).

I will say that Deep Learning has a legendary ravenous appetite for compute, like no other algorithm that has ever been developed in AI. You may not always be utilizing it fully but I would never bet against compute as the upper bound for achievable intelligence in the long run. Not just for an individual final training run, but also for the entire innovation / experimentation engine that silently underlies all the algorithmic innovations.

Data has historically been seen as a separate category from compute, but even data is downstream of compute to a large extent - you can spend compute to create data. Tons of it. You've heard this called synthetic data generation, but less obviously, there is a very deep connection (equivalence even) between "synthetic data generation" and "reinforcement learning". In the trial-and-error learning process in RL, the "trial" is model generating (synthetic) data, which it then learns from based on the "error" (/reward). Conversely, when you generate synthetic data and then rank or filter it in any way, your filter is straight up equivalent to a 0-1 advantage function - congrats you're doing crappy RL.

Last thought. Not sure if this is obvious. There are two major types of learning, in both children and in deep learning. There is 1) imitation learning (watch and repeat, i.e. pretraining, supervised finetuning), and 2) trial-and-error learning (reinforcement learning). My favorite simple example is AlphaGo - 1) is learning by imitating expert players, 2) is reinforcement learning to win the game. Almost every single shocking result of deep learning, and the source of all *magic* is always 2. 2 is significantly significantly more powerful. 2 is what surprises you. 2 is when the paddle learns to hit the ball behind the blocks in Breakout. 2 is when AlphaGo beats even Lee Sedol. And 2 is the "aha moment" when the DeepSeek (or o1 etc.) discovers that it works well to re-evaluate your assumptions, backtrack, try something else, etc. It's the solving strategies you see this model use in its chain of thought. It's how it goes back and forth thinking to itself. These thoughts are *emergent* (!!!) and this is actually seriously incredible, impressive and new (as in publicly available and documented etc.). The model could never learn this with 1 (by imitation), because the cognition of the model and the cognition of the human labeler is different. The human would never know to correctly annotate these kinds of solving strategies and what they should even look like. They have to be discovered during reinforcement learning as empirically and statistically useful towards a final outcome.

(Last last thought/reference this time for real is that RL is powerful but RLHF is not. RLHF is not RL. I have a separate rant on that in an earlier tweet.)”

https://x.com/karpathy/status/1883941452738355376
https://thehealthcareinsights.com/swedish-scientists-unveil-the-worlds-first-living-computer-built-from-human-brain-tissue/

Swedish scientists have created the world’s first ‘living computer’, and it is made of human brain tissue. It is composed of 16 organoids: tiny, self-organized three-dimensional tissue cultures (clumps of brain cells) made from stem cells.

This is an alternative, out-of-the-box solution to the statistical, algorithmic AI approach. And maybe, just maybe, it will one day surpass silicon-based technology with far better cost and energy efficiency. Harness a billion years of evolution to build a thinking machine, or harness its byproduct to imitate one? The choice is yours.

#biology #biologicalAI #ArtificialIntelligence #brain #tissue