ML Research Hub
32.6K subscribers
3.76K photos
185 videos
23 files
4.02K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Pandas ➡️ Polars ➡️ SQL ➡️ PySpark translations:

Is this useful to you?
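
For example, here is a minimal sketch of the same group-by aggregation written in all four tools (the table and column names are made up for illustration, not taken from the original cheat sheet):

```python
import pandas as pd
import polars as pl
# from pyspark.sql import SparkSession, functions as F  # optional, for the PySpark variant

data = {"city": ["Rome", "Rome", "Oslo"], "price": [10, 20, 30]}

# Pandas: average price per city
pd_result = pd.DataFrame(data).groupby("city", as_index=False)["price"].mean()

# Polars: the same aggregation with the expression API
pl_result = pl.DataFrame(data).group_by("city").agg(pl.col("price").mean())

# SQL: the equivalent query (run it through DuckDB, SQLite, or spark.sql)
sql_query = "SELECT city, AVG(price) AS price FROM sales GROUP BY city"

# PySpark: the same logic on a Spark DataFrame
# spark = SparkSession.builder.getOrCreate()
# spark_result = (spark.createDataFrame(pd.DataFrame(data))
#                      .groupBy("city")
#                      .agg(F.mean("price").alias("price")))
```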

📂 Tags: #pandas #Polars #sql #Pyspark

https://t.iss.one/codeprogrammer ⭐️
👍7
WiLoR: End-to-end 3D Hand Localization and Reconstruction in-the-wild

Paper: https://arxiv.org/pdf/2409.12259v1.pdf

Code: https://github.com/rolpotamias/WiLoR

Datasets: FreiHAND, HO-3D v2, COCO-WholeBody

https://t.iss.one/DataScienceT 🏵
👍3
📈 How to make $15,000 in a month in 2024?

Easy! Lisa is currently the hottest trader, posting impressive results in the market!

She was able to make over $15,000 in the last month! ❗️

She has just started a marathon on her channel and is running it absolutely free. 💡

To participate in the marathon, you will need to:

1. Subscribe to the channel SIGNALS BY LISA TRADER 📈
2. Send the word “Marathon” in a private message and start participating!

👉CLICK HERE👈
👍3
Generalizable and Animatable Gaussian Head Avatar

🖥 Github: https://github.com/xg-chu/gagavatar

📕 Paper: https://arxiv.org/abs/2410.07971v1

https://t.iss.one/DataScienceT 🏵
👍27
Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts

💻 Github: https://github.com/freedomintelligence/apollomoe

🔖 Paper: https://arxiv.org/abs/2410.10626v1

🤗 Dataset: https://paperswithcode.com/dataset/mmlu

https://t.iss.one/DataScienceT 🏵
👍21
LOOKING FOR A NEW SOURCE OF INCOME?
Average earnings of $100 a day

Lisa is looking for people who want to earn money. If you are responsible, motivated, and want to change your life, welcome to her channel.

WHAT YOU NEED TO WORK:
1. A phone or computer
2. 15-20 free minutes a day
3. A desire to earn

❗️ Only 20 people are needed ❗️
Access is available at the link below
👇

https://t.iss.one/+NhwYZAXFlT8yZDIx
👍1
Forwarded from Tomas
🚨 With me you will make money! I have made over $20,000 in the last week! 🔥

No matter where you are or what you can do, I will help absolutely everyone earn money.

My name is Lisa and:
✔️ I will teach you trading for FREE in a short period of time
✔️ I will give you FREE signals every day
✔️ I will help you earn an income of $1,000 in a week

Sounds unbelievable?

You have 2 hours to join our channel.

But it’s true - just look at the results in my channel and JOIN FOR FREE 👉🏻 https://t.iss.one/+fJ0XM3sZkaxkNjgx
👍1
SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree

🖥 Github: https://github.com/mark12ding/sam2long

📕 Paper: https://arxiv.org/abs/2410.16268v1

🤗 HF: https://huggingface.co/papers/2410.16268
👍4
📖 LLM-Agent-Paper-List is a repository of papers on agents based on large language models (LLMs). The papers are organized into categories such as LLM agent architectures, autonomous LLM agents, reinforcement learning (RL), natural language processing methods, multimodal approaches, tools for developing LLM agents, and more.

🖥 Github

https://t.iss.one/DataScienceT
👍4
Don’t sleep on Vision Language Models (VLMs).

With the releases of Llama 3.2 and ColQwen2, multimodal models are gaining more and more traction.

VLMs are multimodal models that can handle image and text modalities:

Input: Image and text
Output: Text

They can be used for many use cases, including visual question answering or document understanding (as in the case of ColQwen2).

How do they work under the hood?

The main challenge in VLMs is to unify the image and text representations.

For this, a typical VLM architecture consists of the following components:

• image encoder (e.g., CLIP, SigLIP)
• embedding projector to align image and text representations
• text decoder (e.g., Vicuna, Gemma)

huggingface.co/blog/vlms
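
To make the architecture more concrete, here is a minimal, purely illustrative sketch of how the three components can be wired together in PyTorch. The class and attribute names (including `embed_tokens`, written in the style of Hugging Face causal LMs) are assumptions for the example, not any specific library's API:

```python
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    """Toy VLM: image encoder -> embedding projector -> text decoder."""

    def __init__(self, image_encoder, text_decoder, vision_dim=768, text_dim=4096):
        super().__init__()
        self.image_encoder = image_encoder                 # e.g. a CLIP/SigLIP vision tower
        self.projector = nn.Linear(vision_dim, text_dim)   # aligns image features with the text embedding space
        self.text_decoder = text_decoder                   # e.g. a Vicuna/Gemma-style causal LM

    def forward(self, pixel_values, input_ids):
        # 1) Encode the image into a sequence of patch features: (B, num_patches, vision_dim)
        image_features = self.image_encoder(pixel_values)
        # 2) Project the patch features into the decoder's embedding space: (B, num_patches, text_dim)
        image_tokens = self.projector(image_features)
        # 3) Embed the text prompt and prepend the visual tokens
        text_tokens = self.text_decoder.embed_tokens(input_ids)
        inputs_embeds = torch.cat([image_tokens, text_tokens], dim=1)
        # 4) The decoder generates text conditioned on both modalities
        return self.text_decoder(inputs_embeds=inputs_embeds)
```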

https://t.iss.one/DataScienceT
👍62
🔦 Biggest Sale Of The Year NOW ON 🔦 The Double 11 Shopping Festival event is live! Check out your favorites for less. 🛍️

Enjoy the SPOTO Double 11 Crazy Sale: join the lucky draw and win gifts worth up to $1,000! 💸
🎁⏯️: https://www.spotoexam.com/snsdouble11sale2024/?id=snstxrbzhussein

🔗📝Test Your IT Skills for Free: https://bit.ly/48q8Cb3

🔗📲Contact for 1v1 IT Certs Exam Help: https://wa.link/k0vy3x
🌐📚 JOIN IT Study GROUP to Get Madness Discount 👇: https://chat.whatsapp.com/HqzBlMaOPci0wYvkEtcCDa
👍2
Forwarded from Data Science Library
🎁 Your balance has been credited with $4,000; the owner of the channel wants to contact you!

Dear subscriber, thank you very much for supporting our channel. As a token of our gratitude, we would like to give you free access to Lisa's investor channel, which you can use to start earning today.

t.iss.one/Lisainvestor

Be sure to take advantage of our gift; admission is free. Don't miss the opportunity to change your life for the better.

You can follow the link:
https://t.iss.one/+j4-NLonPlWJmZDVh
👍1