Data Science | Machine Learning with Python for Researchers
Admin: @HusseinSheikho

The Data Science and Python channel is for researchers and advanced programmers

Buy ads: https://telega.io/c/dataScienceT
πŸ’² FinGPT: Open-Source Financial Large Language Models

Unlike proprietary models, FinGPT takes a data-centric approach, providing researchers and practitioners with accessible and transparent resources to develop their FinLLMs.
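
As a hedged illustration of how such open FinLLMs are typically consumed: the paper describes fine-tuning open base models with lightweight adapters (LoRA), which the peft library can load on top of a base LLM. The model and adapter names below are placeholders, not confirmed FinGPT release names.

# Hedged sketch: base LLM + LoRA adapter, the usual FinGPT-style setup.
# Model/adapter IDs are placeholders, not confirmed release names.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"         # placeholder base model
adapter_id = "FinGPT/fingpt-sentiment-lora"  # hypothetical adapter name

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the financial adapter

prompt = "What is the sentiment of this headline: 'Shares jump after earnings beat'?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))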

πŸ–₯ Github: https://github.com/ai4finance-foundation/fingpt

⭐️ FinNLP: https://github.com/ai4finance-foundation/finnlp

πŸ“• Paper: https://arxiv.org/abs/2306.06031v1

πŸ”— Project: https://ai4finance-foundation.github.io/FinNLP/

https://t.iss.one/DataScienceT
❀‍πŸ”₯4πŸ‘4❀1
You can now download and watch all paid data science courses for free by subscribing to our new channel

https://t.iss.one/udemy13
πŸ‘2❀‍πŸ”₯1
πŸ§” 4DHumans: Reconstructing and Tracking Humans with Transformers

Fully "transformerized" version of a network for human mesh recovery.

πŸ–₯ Github: https://github.com/shubham-goel/4D-Humans

⭐️ Colab: https://colab.research.google.com/drive/1Ex4gE5v1bPR3evfhtG7sDHxQGsWwNwby?usp=sharing

πŸ“• Paper: https://arxiv.org/pdf/2305.20091.pdf

πŸ”— Project: https://shubham-goel.github.io/4dhumans/

https://t.iss.one/DataScienceT
πŸ”₯ Scalable Diffusion Models with Transformers (DiT)

git clone https://github.com/facebookresearch/DiT.git

πŸ–₯ Github: https://github.com/facebookresearch/DiT

πŸ–₯ Colab: https://colab.research.google.com/github/facebookresearch/DiT/blob/main/run_DiT.ipynb

⭐️ Project: https://www.wpeebles.com/DiT

⏩ Paper: https://arxiv.org/abs/2212.09748

βœ”οΈ Dataset: https://paperswithcode.com/dataset/imagenet

https://t.iss.one/DataScienceT
❀‍πŸ”₯3πŸ‘1
Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second

πŸ–₯ Github: https://github.com/facebookresearch/galactic

⏩ Paper: https://arxiv.org/pdf/2306.07552v1.pdf

πŸ’¨ Dataset: https://paperswithcode.com/dataset/vizdoom

https://t.iss.one/DataScienceT
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

Macaw-LLM is a multi-modal language model that brings together state-of-the-art models for processing visual, auditory, and textual information, namely CLIP, Whisper, and LLaMA.
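
A hedged sketch of that integration idea (illustrative only, not Macaw-LLM's actual code): each modality encoder produces features that a learned projection maps into the LLM's token-embedding space, and the projected tokens are concatenated with the text embeddings. Dimensions below are typical values, and random tensors stand in for real CLIP/Whisper outputs.

# Conceptual multi-modal fusion: project per-modality features into the LLM
# embedding space and prepend them to the text tokens.
import torch
import torch.nn as nn

llm_dim = 4096                     # e.g. LLaMA-7B hidden size
clip_dim, whisper_dim = 768, 512   # typical CLIP ViT / Whisper-base widths

# Learned alignment layers (assumption: one projection per modality).
img_proj = nn.Linear(clip_dim, llm_dim)
aud_proj = nn.Linear(whisper_dim, llm_dim)

# Stand-ins for encoder outputs, shape (batch, seq, dim).
img_feats = torch.randn(1, 50, clip_dim)       # CLIP patch features
aud_feats = torch.randn(1, 1500, whisper_dim)  # Whisper encoder states
txt_embeds = torch.randn(1, 32, llm_dim)       # LLaMA token embeddings

# Fuse: multimodal prefix + text, fed to the LLM as inputs_embeds.
prefix = torch.cat([img_proj(img_feats), aud_proj(aud_feats)], dim=1)
llm_inputs = torch.cat([prefix, txt_embeds], dim=1)
print(llm_inputs.shape)  # torch.Size([1, 1582, 4096])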

πŸ–₯ Github: https://github.com/lyuchenyang/macaw-llm

⭐️ Model: https://tinyurl.com/yem9m4nf

πŸ“• Paper: https://tinyurl.com/4rsexudv

πŸ”— Dataset: https://github.com/lyuchenyang/Macaw-LLM/blob/main/data

https://t.iss.one/DataScienceT
πŸ‘4❀2❀‍πŸ”₯1
Semi-supervised learning made simple with self-supervised clustering [CVPR 2023]

πŸ–₯ Github: https://github.com/pietroastolfi/suave-daino

⏩ Paper: https://arxiv.org/pdf/2306.07483v1.pdf

πŸ’¨ Dataset: https://paperswithcode.com/dataset/imagenet

https://t.iss.one/DataScienceT
❀‍πŸ”₯2❀1πŸ‘1
🌐 WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings

πŸ–₯ Github: https://github.com/poloclub/wizmap

⭐️ Colab: https://colab.research.google.com/drive/1GNdmBnc5UA7OYBZPtHu244eiAN-0IMZA?usp=sharing

πŸ“• Paper: https://arxiv.org/abs/2306.09328v1

πŸ”— Web demo: https://poloclub.github.io/wizmap/
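
Note that WizMap renders large point sets in the browser, but it consumes embeddings that have already been projected to 2D. A minimal sketch of that preprocessing step, using umap-learn on stand-in vectors (this is not WizMap's own API, just typical input preparation):

# Produce 2D coordinates that a tool like WizMap can then render at scale.
import numpy as np
import umap  # pip install umap-learn

embeddings = np.random.rand(10_000, 384).astype("float32")  # stand-in embeddings
xy = umap.UMAP(n_components=2, metric="cosine").fit_transform(embeddings)
print(xy.shape)  # (10000, 2)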

https://t.iss.one/DataScienceT
❀1❀‍πŸ”₯1πŸ‘1πŸ†1
How do Transformers work?

All of these Transformer models (GPT, BERT, BART, T5, etc.) have been trained as language models. This means they have been trained on large amounts of raw text in a self-supervised fashion. Self-supervised learning is a type of training in which the objective is computed automatically from the inputs of the model. That means humans are not needed to label the data!

This type of model develops a statistical understanding of the language it has been trained on, but it is not very useful for specific practical tasks. Because of this, the general pretrained model then goes through a process called transfer learning: the model is fine-tuned in a supervised way, that is, using human-annotated labels, on a given task.
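
A minimal sketch of that fine-tuning step with the Hugging Face Trainer, assuming a BERT base model and the IMDb sentiment dataset as the human-labeled task (all choices here are illustrative):

# Supervised fine-tuning of a pretrained language model on labeled data.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # human-annotated sentiment labels

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    # Small subset to keep the demo quick; use the full split in practice.
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()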

πŸ”— Read More

🌸 https://t.iss.one/DataScienceT
πŸ‘3❀2❀‍πŸ”₯2
Data Science With Python Workflow Cheat Sheet

Creator: Business Science
Stars ⭐️: 75
Forks: 38

https://github.com/business-science/cheatsheets/blob/master/Data_Science_With_Python_Workflow.pdf

https://t.iss.one/DataScienceT
πŸ‘5❀3
80+ Jupyter Notebook tutorials on image classification, object detection and image segmentation in various domains
πŸ“Œ Agriculture and Food
πŸ“Œ Medical and Healthcare
πŸ“Œ Satellite
πŸ“Œ Security and Surveillance
πŸ“Œ ADAS and Self Driving Cars
πŸ“Œ Retail and E-Commerce
πŸ“Œ Wildlife

Classification library
https://github.com/Tessellate-Imaging/monk_v1

Notebooks: https://github.com/Tessellate-Imaging/monk_v1/tree/master/study_roadmaps/4_image_classification_zoo

Detection and Segmentation library
https://github.com/Tessellate-Imaging/Monk_Object_Detection

Notebooks: https://github.com/Tessellate-Imaging/Monk_Object_Detection/tree/master/application_model_zoo

https://t.iss.one/DataScienceT
πŸ‘7❀‍πŸ”₯3
Choose JOBITT and receive +10% of your first salary as a bonus!
Find your dream job with JOBITT and get more, starting with your first paycheck. Browse job openings on our Telegram channel: https://t.iss.one/ujobit
❀2❀‍πŸ”₯2πŸ‘2
πŸ“Œ LOMO: LOw-Memory Optimization

A new optimizer, LOw-Memory Optimization (LOMO), enables full-parameter fine-tuning of a 7B model on a single RTX 3090, or a 65B model on a single machine with 8Γ—RTX 3090s, each with 24 GB of memory.
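
Conceptually, LOMO fuses the gradient computation and the parameter update: as soon as a parameter's gradient is produced during the backward pass, the update is applied and no optimizer state is kept. A simplified PyTorch sketch of that idea (not the repo's actual implementation, which also handles gradient freeing, fp16 loss scaling, and gradient normalization):

# Fused backward-and-update: SGD step applied inside per-parameter hooks,
# so no optimizer state (momentum, variance, etc.) is ever allocated.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
lr = 1e-3

def sgd_hook(p):
    def hook(grad):
        with torch.no_grad():
            p.add_(grad, alpha=-lr)  # update the moment the grad exists
        return grad  # LOMO also frees each grad right away; omitted here
    return hook

for p in model.parameters():
    p.register_hook(sgd_hook(p))

x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # parameters are updated during this call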

πŸ–₯ Github: https://github.com/OpenLMLab/LOMO/tree/main

πŸ“• Paper: https://arxiv.org/pdf/2306.09782.pdf

πŸ”— Dataset: https://paperswithcode.com/dataset/superglue

https://t.iss.one/DataScienceT