Forwarded from Tomas
With me you will make money! I have made over $20,000 in the last week!
I don't care where you are and what you can do, I will help absolutely everyone earn money.
My name is Lisa and:
I will teach you trading for FREE in a short period of time
I will give you FREE signals every day
I will help you to get an income of $1,000 in a week
Sounds unbelievable?
You have 2 hours to join our channel.
But it's true - just look at the results in my channel and JOIN FOR FREE: https://t.iss.one/+fJ0XM3sZkaxkNjgx
SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree
Github: https://github.com/mark12ding/sam2long
Paper: https://arxiv.org/abs/2410.16268v1
HF: https://huggingface.co/papers/2410.16268
https://t.iss.one/DataScienceT
Don't sleep on Vision Language Models (VLMs).
With the releases of Llama 3.2 and ColQwen2, multimodal models are gaining more and more traction.
VLMs are multimodal models that can handle image and text modalities:
Input: Image and text
Output: Text
They can be used for many use cases, including visual question answering or document understanding (as in the case of ColQwen2).
How do they work under the hood?
The main challenge in VLMs is to unify the image and text representations.
For this, a typical VLM architecture consists of the following components:
• image encoder (e.g., CLIP, SigLIP)
• embedding projector to align image and text representations
• text decoder (e.g., Vicuna, Gemma)
huggingface.co/blog/vlms
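As a rough illustration, the three components above can be wired together like this. This is a minimal NumPy sketch with random weights; `image_encoder`, `W_proj`, and `text_decoder` are hypothetical stand-ins for a real CLIP/SigLIP encoder, the learned projector, and the LLM decoder, and the dimensions are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from any real model).
IMG_DIM, TXT_DIM, VOCAB = 512, 768, 1000

def image_encoder(image):
    # Stand-in for CLIP/SigLIP: map an image to a sequence of patch embeddings.
    n_patches = 16
    return rng.standard_normal((n_patches, IMG_DIM))

# Embedding projector: aligns image embeddings with the text embedding space.
W_proj = rng.standard_normal((IMG_DIM, TXT_DIM)) * 0.02

def text_decoder(token_embeds):
    # Stand-in for the LLM decoder: produce next-token logits per position.
    W_out = rng.standard_normal((TXT_DIM, VOCAB)) * 0.02
    return token_embeds @ W_out

image = None                                        # placeholder input
text_embeds = rng.standard_normal((8, TXT_DIM))     # 8 embedded prompt tokens

img_tokens = image_encoder(image) @ W_proj          # project into text space
sequence = np.concatenate([img_tokens, text_embeds])  # one unified sequence
logits = text_decoder(sequence)
print(logits.shape)  # (24, 1000): 16 image tokens + 8 text tokens
```

The key step is the projection: once image patches live in the same embedding space as text tokens, the decoder can attend over both modalities as one sequence.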
Forwarded from Python | Machine Learning | Coding | R
Biggest Sale Of The Year NOW ON - the Double 11 Shopping Festival Event is live! Check out your favorites for less.
Enjoy the SPOTO Double 11 Crazy Sale to join the lucky draw and win gifts worth up to $1000!
Sale link: https://www.spotoexam.com/snsdouble11sale2024/?id=snstxrbzhussein
Test Your IT Skills for Free: https://bit.ly/48q8Cb3
Contact for 1v1 IT Certs Exam Help: https://wa.link/k0vy3x
JOIN the IT Study GROUP to Get a Madness Discount: https://chat.whatsapp.com/HqzBlMaOPci0wYvkEtcCDa
Forwarded from Data Science Books
Your balance has been credited $4,000 - the owner of the channel wants to contact you!
Dear subscriber, thank you very much for supporting our channel. As a token of our gratitude, we would like to give you free access to Lisa's investor channel, with the help of which you can start earning today:
t.iss.one/Lisainvestor
Be sure to take advantage of our gift - admission is free. Don't miss the opportunity to change your life for the better.
You can follow the link :
https://t.iss.one/+j4-NLonPlWJmZDVh
Constrained Diffusion Implicit Models!
We use diffusion models to solve noisy inverse problems like inpainting, sparse recovery, and colorization - 10-50x faster than previous methods!
Paper: arxiv.org/pdf/2411.00359
Demo: https://t.co/m6o9GLnnZF
Forwarded from Python | Machine Learning | Coding | R
A promising digital wallet will distribute $40 for free to every user who creates an account on this wallet.
Terms for creating an account: subscribe to their channel only.
https://t.iss.one/TronKeeperBot/app?startapp=418788114
A Github repository with practical exercises and notebooks with code for developing, pretraining, and fine-tuning a GPT-style LLM, based on one of the best books on building an LLM from scratch.
In this book you will learn how large language models work from the inside, building your own LLM step by step, with a detailed explanation of each stage in clear language, diagrams, and examples.
The method described in the book demonstrates the approach used to create large foundation models such as those underlying ChatGPT.
In the repository, each chapter of the book has several (3-4) applied examples in ipynb format or as an executable Python script. The code is aimed at a wide audience, is designed to run on regular laptops, and does not require specialized hardware.
Setup
Chapter 2: Working with Text Data
Chapter 3: Coding Attention Mechanisms
Chapter 4: Implementing a GPT Model from Scratch
Chapter 5: Pretraining on Unlabeled Data
Chapter 6: Fine-tuning for Classification
Chapter 7: Fine-tuning to Follow Instructions
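To give a flavor of the attention chapter, here is a minimal NumPy sketch of causal scaled dot-product self-attention, the core operation inside a GPT block. The function name and toy dimensions are illustrative, not the book's actual code:

```python
import numpy as np

def causal_attention(Q, K, V):
    # Scaled dot-product attention with a causal mask.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Causal mask: each position may only attend to itself and earlier tokens.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf
    # Numerically stable softmax over the unmasked scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # 4 tokens, embedding dim 8
out = causal_attention(x, x, x)   # self-attention: Q = K = V = x
print(out.shape)  # (4, 8)
```

Note that the first token can only attend to itself, so its output equals its own value vector; real implementations add learned Q/K/V projections and multiple heads on top of this.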
Docling Technical Report
Paper: https://arxiv.org/pdf/2408.09869v3.pdf
Code 1: https://github.com/DS4SD/docling
Code 2: https://github.com/DS4SD/docling-core
OmniGen: Unified Image Generation
Paper: https://arxiv.org/pdf/2409.11340v1.pdf
Code: https://github.com/vectorspacelab/omnigen
Datasets: DreamBooth - MagicBrush
Forwarded from Tomas
EARN YOUR $100 TODAY! EASY!
Lisa Trader has launched a free marathon on her VIP channel.
Now absolutely everyone can earn from trading. It has become even easier to earn in the cryptocurrency market - you can start today!
WHAT DO YOU NEED TO START?
1. Subscribe to the channel SIGNALS BY LISA TRADER.
2. Write "MARATHON" in private messages. She will then tell you how to get on the VIP channel for absolutely FREE!
CLICK HERE
Most classical ML algorithms cannot be trained incrementally in mini-batches; they require the entire dataset at once.
This is concerning because enterprises typically deal with tabular data and classical ML algorithms, such as tree-based methods, are frequently used for modeling.
For instance, to train a random forest from sklearn, the entire dataset must be present in memory. This limits its usage to only small/intermediate datasets.
There are two ways to extend random forests to large datasets.
1) Use big-data frameworks like Spark MLlib to train them.
2) Use random patches, which I learned from the PhD thesis of Dr. Gilles Louppe β Understanding Random Forests.
> Here's what he proposed.
Note: This approach only works in an ensemble setting. So, you would have to train multiple models.
The idea is to sample random data patches (both rows and columns) and train a decision tree model on the patch.
Repeat this step multiple times to obtain the entire random forest model.
> Here's why it works.
The core objective of Bagging is to build trees that are as different as possible.
In this case, the dataset overlap between any two trees is NOT expected to be huge compared to the typical random forest. This aids in the Bagging objective.
His thesis presented benchmarks on 13 datasets:
- Random patches performed better than the random forest on 11 datasets.
- On the other two datasets, the difference was quite small (~0.05).
And this is how we can train a random forest model on large datasets that do not fit into memory.
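A minimal sketch of the idea, using sklearn's `DecisionTreeClassifier` on random patches. The patch fractions and tree count are illustrative assumptions, and the full dataset is kept in memory here only for demonstration; in the out-of-core setting you would load just one patch at a time:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Illustrative settings - in practice these fractions are tuned per problem.
ROW_FRAC, COL_FRAC, N_TREES = 0.2, 0.5, 25

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
rng = np.random.default_rng(0)

trees, feat_sets = [], []
for _ in range(N_TREES):
    # Sample a random patch: a subset of BOTH rows and columns.
    rows = rng.choice(len(X), size=int(ROW_FRAC * len(X)), replace=False)
    cols = rng.choice(X.shape[1], size=int(COL_FRAC * X.shape[1]), replace=False)
    # Only this patch needs to be in memory to fit one tree.
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X[np.ix_(rows, cols)], y[rows])
    trees.append(tree)
    feat_sets.append(cols)

# Majority vote over the ensemble; each tree sees only its own columns.
votes = np.stack([t.predict(X[:, c]) for t, c in zip(trees, feat_sets)])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (pred == y).mean())
```

Each iteration touches only ~20% of the rows and 50% of the columns, so peak memory per tree is a small fraction of the full dataset.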
OpenCoder doesn't get enough love
They open-sourced the entire pipeline to create QwenCoder-level code models.
This includes:
- Large datasets
- High-quality models
- Eval framework
Tons of great lessons and observations in the paper.
Paper: arxiv.org/abs/2411.04905
MOP+MiHo+NCC: Image Matching Filtering and Refinement by Planes and Beyond
Github: https://github.com/fb82/miho
Paper: https://arxiv.org/abs/2411.09484v1
Dataset: https://paperswithcode.com/dataset/scannet
Explore "Pretraining LLMs," a short course developed with upstageai.
The course covers pretraining from scratch, continuing pretraining on custom data, and how using smaller open-source models can reduce costs.
Take the course for free: https://hubs.la/Q02YFKyx0