ML Research Hub
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
Executable Code Actions Elicit Better LLM Agents

1 Feb 2024 · Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji

Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking tools and controlling robots, show great potential in tackling real-world challenges. LLM agents are typically prompted to produce actions by generating #JSON or text in a pre-defined format, which is usually limited by constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., inability to compose multiple tools). This work proposes to use executable Python code to consolidate LLM agents' actions into a unified action space (CodeAct). Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations through multi-turn interactions. Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark shows that CodeAct outperforms widely used alternatives (up to 20% higher success rate). The encouraging performance of CodeAct motivates us to build an open-source #LLM agent that interacts with environments by executing interpretable code and collaborates with users using natural language. To this end, we collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. We show that it can be used with existing data to improve models in agent-oriented tasks without compromising their general capability. CodeActAgent, finetuned from Llama2 and Mistral, is integrated with #Python interpreter and uniquely tailored to perform sophisticated tasks (e.g., model training) using existing libraries and autonomously self-debug.
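
To make the CodeAct idea concrete, here is a minimal sketch of a code-as-action agent loop: the model's action is a snippet of Python, the snippet is executed, and its output (or error) is returned as the next observation. This is an illustration only, not the authors' implementation; query_llm is a hypothetical placeholder for any chat-completion client.

import io
import contextlib

def query_llm(messages):
    """Hypothetical placeholder: return a string of Python code produced by an LLM."""
    raise NotImplementedError("plug in your LLM client here")

def execute_code_action(code, namespace):
    """Run a generated code action and capture stdout as the observation."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, namespace)  # unified action space: arbitrary Python, including tool composition
        return buffer.getvalue() or "(no output)"
    except Exception as exc:
        return f"Error: {exc!r}"  # errors become observations the agent can self-debug from

def codeact_loop(task, max_turns=5):
    namespace = {}  # state persists across turns, so later actions can revise earlier ones
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        code = query_llm(messages)                 # the agent's action is executable code
        observation = execute_code_action(code, namespace)
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": f"Observation:\n{observation}"})
    return messages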


Paper: https://arxiv.org/pdf/2402.01030v4.pdf

Codes:
https://github.com/epfllm/megatron-llm
https://github.com/xingyaoww/code-act

Datasets: MMLU - GSM8K - HumanEval - MATH

https://t.iss.one/DataScienceT
📚 Become a professional data scientist with these 17 resources!

1️⃣ Python libraries for machine learning
◀️ Introducing the best Python tools and packages for building ML models.

➖➖➖

2️⃣ Deep Learning Interactive Book
◀️ Learn deep learning concepts by combining text, math, code, and images.

➖➖➖

3️⃣ Anthology of Data Science Learning Resources
◀️ The best courses, books, and tools for learning data science.

➖➖➖

4️⃣ Implementing algorithms from scratch
◀️ Coding popular ML algorithms from scratch.

➖➖➖

5️⃣ Machine Learning Interview Guide
◀️ Get fully prepared for job interviews.

➖➖➖

6️⃣ Real-world machine learning projects
◀️ Learn how to build and deploy models.

➖➖➖

7️⃣ Designing machine learning systems
◀️ How to design a scalable and stable ML system.

➖➖➖

8️⃣ Machine Learning Mathematics
◀️ The basic mathematical concepts needed to understand machine learning.

➖➖➖

9️⃣ Introduction to Statistical Learning
◀️ Learn algorithms with practical examples.

➖➖➖

1️⃣0️⃣ Machine learning with a probabilistic approach
◀️ Better understand modeling and uncertainty from a statistical perspective.

➖➖➖

1️⃣1️⃣ UBC Machine Learning
◀️ Deep understanding of machine learning concepts, taught conceptually by one of the leading professors in the field of ML.

➖➖➖

1️⃣2️⃣ Deep Learning with Andrew Ng
◀️ A strong start in the world of neural networks, CNNs, and RNNs.

➖➖➖

1️⃣3️⃣ Linear Algebra with 3Blue1Brown
◀️ Intuitive and visual teaching of linear algebra concepts.

➖➖➖

1️⃣4️⃣ Machine Learning Course
◀️ A combination of theory and practical training to strengthen ML skills.

➖➖➖

1️⃣5️⃣ Mathematical Optimization with Python
◀️ Learn the basic concepts of optimization with Python code.

➖➖➖

1️⃣6️⃣ Explainable models in machine learning
◀️ Making complex models understandable.

➖➖➖

1️⃣7️⃣ Data Analysis with Python
◀️ Data analysis skills using the Pandas and NumPy libraries.


#DataScience #MachineLearning #DeepLearning #Python #AI #MLProjects #DataAnalysis #ExplainableAI #100DaysOfCode #TechEducation #MLInterviewPrep #NeuralNetworks #MathForML #Statistics #Coding #AIForEveryone #PythonForDataScience



โšก๏ธ BEST DATA SCIENCE CHANNELS ON TELEGRAM ๐ŸŒŸ
🎓 2025 Top IT Certification – Free Study Materials Are Here!

🔥 Whether you're preparing for #Cisco #AWS #PMP #Python #Excel #Google #Microsoft #AI or any other in-demand certification – SPOTO has got you covered!

📘 Download the FREE IT Certs Exam E-book:
👉 https://bit.ly/4lNVItV
🧠 Test Your IT Skills for FREE:
👉 https://bit.ly/4imEjW5
☁️ Download Free AI Materials:
👉 https://bit.ly/3F3lc5B

📞 Need 1-on-1 IT Exam Help? Contact Now:
👉 https://wa.link/k0vy3x
🌐 Join Our IT Study Group for Daily Updates & Tips:
👉 https://chat.whatsapp.com/E3Vkxa19HPO9ZVkWslBO8s
โค3
NVIDIA introduces Describe Anything Model (DAM)

DAM is a new state-of-the-art model designed to generate rich, detailed descriptions for specific regions in images and videos. Users can mark these regions using points, boxes, scribbles, or masks.
DAM sets a new benchmark in multimodal understanding, with open-source code under the Apache license, a dedicated dataset, and a live demo available on Hugging Face.

Explore more below:
Paper: https://lnkd.in/dZh82xtV
Project Page: https://lnkd.in/dcv9V2ZF
GitHub Repo: https://lnkd.in/dJB9Ehtb
Hugging Face Demo: https://lnkd.in/dXDb2MWU
Review: https://t.ly/la4JD

#NVIDIA #DescribeAnything #ComputerVision #MultimodalAI #DeepLearning #ArtificialIntelligence #MachineLearning #OpenSource #HuggingFace #GenerativeAI #VisualUnderstanding #Python #AIresearch

https://t.iss.one/DataScienceT ✅
🎯 Start your professional programming journey with
#Python_Mastery_Course 🐍
Do you want to learn the most in-demand programming language in the world?
Do you dream of breaking into fields such as artificial intelligence, data analysis, or interface design?
📢 This course was designed to be your launch point toward the future!
________________________________________
🚀 What will you learn in this course?
🔹 Module 1: Python basics (variables, data types, operators, code fundamentals)
🔹 Module 2: Program flow control (conditions, loops, control statements)
🔹 Module 3: Data structures (lists, dictionaries, sets, tuples)
🔹 Module 4: Functions (definition, parameters, scope, recursion)
🔹 Module 5: Modules
🔹 Module 6: Working with files and CSV files
🔹 Module 7: Professional exception handling
🔹 Module 8: Object-oriented programming (OOP)
🔹 Module 9: Advanced concepts:
  ✅ Generators
  ✅ Iterators
  ✅ Decorators
💡 By the end of the course you will be able to:
✔️ Build real projects in Python
✔️ Move confidently into advanced fields such as artificial intelligence and data analysis
✔️ Automate tasks and work with data professionally

🎥 Course format:
• Live sessions with the instructor, Dr. Mohammed Emad Arafa
• All lectures are uploaded to the website so you can watch them whenever it suits you
🕒 Course duration: 25 training hours
📅 Start date: June 15
💰 Early-booking discount
Contact us now and mention course code "001"
https://t.iss.one/Agartha_Support
🚀 FREE IT Study Kits for 2025 – Grab Yours Now!

Just found these zero-cost resources from SPOTO 👇
Perfect if you're prepping for #Cisco, #AWS, #PMP, #AI, #Python, #Excel, or #Cybersecurity!
✅ 100% Free
✅ No signup traps
✅ Instantly downloadable

📘 IT Certs E-book: https://bit.ly/4fJSoLP
☁️ Cloud & AI Kits: https://bit.ly/3F3lc5B
📊 Cybersecurity, Python & Excel: https://bit.ly/4mFrA4g
🧠 Skill Test (Free!): https://bit.ly/3PoKH39
Tag a friend & level up together 💪

🌐 Join the IT Study Group: https://chat.whatsapp.com/E3Vkxa19HPO9ZVkWslBO8s
📲 1-on-1 Exam Help: https://wa.link/k0vy3x
👑 Last 24 HOURS to grab Mid-Year Mega Sale prices! Don't miss the Lucky Draw 👇
https://bit.ly/43VgcbT
🚀 2025 FREE Study Resources from SPOTO for y'all – Don't Miss Out!
✅ 100% Free Downloads
✅ No signup / spam

📘 #Python, Cybersecurity & Excel: https://bit.ly/4lYeVYp
📊 #Cloud Computing: https://bit.ly/45Rj1gm
☁️ #AI Kits: https://bit.ly/4m4bHTc
🔍 #CCNA Courses: https://bit.ly/45TL7rm
🧠 Free Online Practice – Test Now: https://bit.ly/41Kurjr

From September 8th to 21st, SPOTO launches the Lowest Price Ever on ALL products! 🔥
Amazing Discounts for 📌 CCNA 200-301 📌 CCNP 400-007 and more…
📲 Contact admin to grab them: https://wa.link/uxde01
โค1
💡 ViT for Fashion MNIST Classification

This lesson demonstrates how to use a pre-trained Vision Transformer (ViT) to classify an image from the Fashion MNIST dataset. ViT treats an image as a sequence of patches, similar to how language models treat sentences, making it a powerful architecture for computer vision tasks. We will use a model from the Hugging Face Hub that is already fine-tuned for this specific dataset.

from transformers import ViTImageProcessor, ViTForImageClassification
from datasets import load_dataset
import torch

# 1. Load a model fine-tuned on Fashion MNIST and its processor
model_name = "abhishek/autotrain-fashion-mnist-283834433"
processor = ViTImageProcessor.from_pretrained(model_name)
model = ViTForImageClassification.from_pretrained(model_name)

# 2. Load the dataset and get a sample image
dataset = load_dataset("fashion_mnist", split="test")
image = dataset[100]['image'].convert("RGB")  # Get a sample image (index 100); ViT checkpoints typically expect 3-channel RGB input

# 3. Preprocess the image and prepare it for the model
inputs = processor(images=image, return_tensors="pt")

# 4. Perform inference to get the classification logits
with torch.no_grad():
    outputs = model(**inputs)  # forward pass without gradient tracking
logits = outputs.logits

# 5. Get the predicted class and its label
predicted_class_idx = logits.argmax(-1).item()
predicted_class = model.config.id2label[predicted_class_idx]

print(f"Image is a: {dataset[100]['label']}")
print(f"Model predicted: {predicted_class}")


Code explanation: This script uses the transformers library to load a ViT model specifically fine-tuned for Fashion MNIST classification. It then loads the dataset, selects a single sample image, and uses the model's processor to convert it into the correct input format. The model performs inference, and the script identifies the most likely class from the output logits, printing the final human-readable prediction.
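
Note that dataset[100]['label'] is an integer class id, not a class name. Assuming the standard ClassLabel feature that ships with the fashion_mnist dataset, a short follow-up maps the ground truth to a readable name so it can be compared with the prediction:

# Map the integer ground-truth label to its class name (e.g. "Ankle boot")
label_feature = dataset.features['label']              # a datasets.ClassLabel
true_name = label_feature.int2str(dataset[100]['label'])

print(f"Ground truth: {true_name}")
print(f"Model predicted: {predicted_class}")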

#Python #MachineLearning #ViT #ComputerVision #HuggingFace

โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
By: @DataScienceT โœจ
🤖🧠 Reflex: Build Full-Stack Web Apps in Pure Python – Fast, Flexible and Powerful

🗓️ 29 Oct 2025
📚 AI News & Trends

Building modern web applications has traditionally required mastering multiple languages and frameworks, from JavaScript for the frontend to Python, Java, or Node.js for the backend. For many developers, switching between different technologies slows down productivity and increases complexity. Reflex eliminates that problem. It is an innovative open-source full-stack web framework that allows developers to ...
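
As a rough sketch of what "full-stack in pure Python" looks like in practice, here is a minimal counter app in the style of Reflex's quick-start. The names follow the public reflex API (rx.State, rx.App, add_page, component functions); treat the exact signatures as assumptions and check the official docs before relying on them.

import reflex as rx

class CounterState(rx.State):
    """Backend state; Reflex keeps it in sync with the browser."""
    count: int = 0

    def increment(self):
        self.count += 1

def index() -> rx.Component:
    # The UI is declared in Python and compiled to a React frontend by Reflex.
    return rx.vstack(
        rx.heading(CounterState.count),
        rx.button("Increment", on_click=CounterState.increment),
    )

app = rx.App()
app.add_page(index)
# Start the dev server with:  reflex run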

#Reflex #FullStack #WebDevelopment #Python #OpenSource #WebApps
Top 100 Data Analyst Interview Questions & Answers

#DataAnalysis #InterviewQuestions #SQL #Python #Statistics #CaseStudy #DataScience

Part 1: SQL Questions (Q1-30)

#1. What is the difference between DELETE, TRUNCATE, and DROP?
A:
• DELETE is a DML command that removes rows from a table based on a WHERE clause. It is slower as it logs each row deletion and can be rolled back.
• TRUNCATE is a DDL command that quickly removes all rows from a table. It is faster, cannot be rolled back, and resets table identity.
• DROP is a DDL command that removes the entire table, including its structure, data, and indexes.

#2. Select all unique departments from the employees table.
A: Use the DISTINCT keyword.

SELECT DISTINCT department
FROM employees;


#3. Find the top 5 highest-paid employees.
A: Use ORDER BY and LIMIT.

SELECT name, salary
FROM employees
ORDER BY salary DESC
LIMIT 5;


#4. What is the difference between WHERE and HAVING?
A:
• WHERE is used to filter records before any groupings are made (i.e., it operates on individual rows).
• HAVING is used to filter groups after aggregations (GROUP BY) have been performed.

-- Find departments with more than 10 employees
SELECT department, COUNT(employee_id)
FROM employees
GROUP BY department
HAVING COUNT(employee_id) > 10;


#5. What are the different types of SQL joins?
A:
• (INNER) JOIN: Returns records that have matching values in both tables.
• LEFT (OUTER) JOIN: Returns all records from the left table, and the matched records from the right table.
• RIGHT (OUTER) JOIN: Returns all records from the right table, and the matched records from the left table.
• FULL (OUTER) JOIN: Returns all records when there is a match in either the left or right table.
• SELF JOIN: A regular join, but the table is joined with itself.

#6. Write a query to find the second-highest salary.
A: Use OFFSET or a subquery.

-- Method 1: Using OFFSET
SELECT salary
FROM employees
ORDER BY salary DESC
LIMIT 1 OFFSET 1;

-- Method 2: Using a Subquery
SELECT MAX(salary)
FROM employees
WHERE salary < (SELECT MAX(salary) FROM employees);


#7. Find duplicate emails in a customers table.
A: Group by the email column and use HAVING to find groups with a count greater than 1.

SELECT email, COUNT(email)
FROM customers
GROUP BY email
HAVING COUNT(email) > 1;


#8. What is a primary key vs. a foreign key?
A:
• A Primary Key is a constraint that uniquely identifies each record in a table. It must contain unique values and cannot contain NULL values.
• A Foreign Key is a key used to link two tables together. It is a field (or collection of fields) in one table that refers to the Primary Key in another table.

#9. Explain Window Functions. Give an example.
A: Window functions perform a calculation across a set of table rows that are somehow related to the current row. Unlike aggregate functions, they do not collapse rows.

-- Rank employees by salary within each department
SELECT
    name,
    department,
    salary,
    RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
FROM employees;


#10. What is a CTE (Common Table Expression)?
A: A CTE is a temporary, named result set that you can reference within a SELECT, INSERT, UPDATE, or DELETE statement. It helps improve readability and break down complex queries.
โค2
✨ Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

📝 Summary:
Gradio is an open-source Python package that creates visual interfaces for ML models, making them accessible to non-specialized users via a URL. This improves collaboration by allowing easy interaction, feedback, and trust-building in interdisciplinary settings.
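
The workflow the paper describes is a few lines of Python. A minimal sketch is below; the classify function is a hypothetical placeholder for any model's prediction call, and share=True is what produces the shareable URL mentioned in the summary.

import gradio as gr

def classify(text):
    """Hypothetical placeholder for a real model's prediction function."""
    return {"positive": 0.7, "negative": 0.3}

demo = gr.Interface(
    fn=classify,        # the Python function to wrap
    inputs="text",      # shorthand for a textbox component
    outputs="label",    # renders a dict of class probabilities
    title="Demo classifier",
)

demo.launch(share=True)  # share=True creates a temporary public URL for collaborators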

🔹 Publication Date: Jun 6, 2019

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/1906.02569
• PDF: https://arxiv.org/pdf/1906.02569
• GitHub: https://github.com/gradio-app/gradio

🔹 Models citing this paper:
• https://huggingface.co/CxECHO/CE

✨ Datasets citing this paper:
• https://huggingface.co/datasets/society-ethics/papers

✨ Spaces citing this paper:
• https://huggingface.co/spaces/orYx-models/Nudge_Generator
• https://huggingface.co/spaces/society-ethics/about
• https://huggingface.co/spaces/mindmime/gradio

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#Gradio #MachineLearning #MLOps #Python #DataScience