Large Language Models as Tool Makers
In this work, we take an initial step towards removing the dependence on human-crafted tools by proposing a closed-loop framework, referred to as LLMs As Tool Makers (LATM), where LLMs create their own reusable tools for problem-solving.
🖥 Github: https://github.com/ctlllll/llm-toolmaker
⏩ Paper: https://arxiv.org/pdf/2305.17126v1.pdf
🔗 Dataset: https://paperswithcode.com/dataset/big-bench
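A minimal sketch of the LATM loop, assuming a generic llm() completion helper (hypothetical, not part of the released code): a strong "tool maker" model writes a reusable Python function once, and a cheaper "tool user" then executes it on new instances of the task.

# Illustrative LATM-style loop; `llm` is a hypothetical completion helper,
# not part of the released repository.
def llm(prompt: str) -> str:
    """Stand-in for any chat/completion API call."""
    raise NotImplementedError

def make_tool(task_examples: list[str]) -> str:
    """Tool making: a strong model writes a reusable Python function."""
    prompt = (
        "Write a Python function `solve(instance)` that solves tasks "
        "like these:\n" + "\n".join(task_examples)
    )
    return llm(prompt)  # returns the source code of `solve`

def use_tool(tool_code: str, instance: str):
    """Tool using: execute the cached tool on a new task instance."""
    namespace: dict = {}
    exec(tool_code, namespace)  # assumes the generated code is trusted
    return namespace["solve"](instance)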
https://t.iss.one/DataScienceT
🔥 A Practical Toolkit for Multilingual Question and Answer Generation
Multilingual/multidomain question-generation datasets, models, and a Python library for question generation.
🖥 Github: https://github.com/asahi417/lm-question-generation
⏩ Paper: https://arxiv.org/abs/2305.17416v1
🔗 Dataset: https://paperswithcode.com/dataset/squad
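A short usage sketch following the TransformersQG interface documented in the repository (model names and defaults may differ between versions):

# pip install lmqg
from lmqg import TransformersQG

# Load an English question-answer generation model (weights download on first use).
model = TransformersQG(language="en")

context = (
    "William Turner was an English painter who specialised in watercolour "
    "landscapes. One of his best known pictures is a view of the city of Oxford."
)

# Generate (question, answer) pairs grounded in the context.
qa_pairs = model.generate_qa(context)
print(qa_pairs)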
https://t.iss.one/DataScienceT
🦙 BigTrans
BigTrans adapts LLaMA, which covers only 20 languages, and enhances it with multilingual translation capability on more than 100 languages.
🖥 Github: https://github.com/ZNLP/BigTrans/tree/main
⏩ Paper: https://arxiv.org/abs/2305.18098v1
🔗 Dataset: https://paperswithcode.com/dataset/flores-200
https://t.iss.one/DataScienceT
🔥 GPT4Tools: Teaching LLM to Use Tools via Self-instruction
GPT4Tools is a centralized system that can control multiple visual foundation models. It is based on Vicuna (LLaMA) and 71K self-built instruction data.
🖥 Github: https://github.com/stevengrove/gpt4tools
⏩ Paper: https://arxiv.org/abs/2305.18752v1
🔗 Project: https://gpt4tools.github.io/
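At its core, the system parses the model's output for tool invocations and routes them to visual foundation models. A toy dispatcher in that spirit (the tool names and Action:/Input: format below are hypothetical, not GPT4Tools' exact protocol):

import re

# Hypothetical tool registry; the real system wraps visual foundation models.
TOOLS = {
    "caption": lambda arg: f"[caption for {arg}]",
    "segment": lambda arg: f"[mask for {arg}]",
}

def dispatch(model_output: str) -> str:
    """Route an 'Action: <tool> Input: <arg>' line to the matching tool."""
    match = re.search(r"Action:\s*(\w+)\s*Input:\s*(.+)", model_output)
    if match is None:
        return model_output  # plain text answer, no tool call
    tool, arg = match.group(1), match.group(2).strip()
    return TOOLS[tool](arg)

print(dispatch("Action: caption Input: street.png"))  # -> [caption for street.png]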
https://t.iss.one/DataScienceT
Introducing BERTopic Integration with the Hugging Face Hub
BERTopic provides a powerful tool for uncovering significant topics within text collections, giving users valuable insights.
pip install bertopic
🤗 Hugging Face: https://huggingface.co/blog/bertopic
🖥 Github: https://github.com/MaartenGr/BERTopic
⏩ Colab: https://colab.research.google.com/#fileId=https://huggingface.co/spaces/davanstrien/blog_notebooks/blob/main/BERTopic_hub_starter.ipynb
🔗 Docs: https://maartengr.github.io/BERTopic/getting_started/quickstart/quickstart.html
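A minimal quickstart plus the Hub round-trip described in the blog post (the repo id below is a placeholder, and pushing requires a Hugging Face login):

from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Fit a topic model on a standard text corpus.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())

# Push the fitted model to the Hugging Face Hub, then reload it anywhere.
topic_model.push_to_hf_hub(repo_id="my-username/my-bertopic-model")  # placeholder id
loaded_model = BERTopic.load("my-username/my-bertopic-model")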
https://t.iss.one/DataScienceT
Dynamic Sparse Training with Structured Sparsity
🖥 Github: https://github.com/calgaryml/condensed-sparsity
⏩ Paper: https://arxiv.org/pdf/2305.02299v1.pdf
🔗 Dataset: https://paperswithcode.com/dataset/cifar-10
https://t.iss.one/DataScienceT
SSSegmentation
🖥 Github: https://github.com/segmentationblwx/sssegmentation
⏩ Paper: https://arxiv.org/pdf/2305.17091v1.pdf
🔗 Dataset: https://paperswithcode.com/dataset/cityscapes
https://t.iss.one/DataScienceT
🔥 10 Free Machine Learning Courses from Top Universities
1. Introduction to Machine Learning - UC Berkeley
2. Introduction to Machine Learning - Carnegie Mellon University
3. Machine Learning - Stanford University
4. Machine Learning & Data Mining - Caltech
5. Learning from Data - Caltech
6. Machine Learning for Intelligent Systems - Cornell University
7. Large Scale Machine Learning - University of Toronto
8. Machine Learning with Large Datasets - Carnegie Mellon University
9. Foundations of Machine Learning and Statistical Inference - Caltech
10. Algorithmic Aspects of Machine Learning - MIT
https://t.iss.one/DataScienceT
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
Hiera is a hierarchical vision transformer that is fast, powerful, and, above all, simple. It outperforms the state-of-the-art across a wide array of image and video tasks while being much faster.
pip install hiera-transformer
🖥 Github: https://github.com/facebookresearch/hiera
⏩ Paper: https://arxiv.org/abs/2306.00989v1
🔗 Dataset: https://paperswithcode.com/dataset/inaturalist
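After pip install hiera-transformer, loading a pretrained backbone follows the repository README (the checkpoint name below is taken from the README and may change between releases):

import torch
import hiera

# Hiera-Base pretrained with MAE and finetuned on ImageNet-1k.
model = hiera.hiera_base_224(pretrained=True, checkpoint="mae_in1k_ft_in1k")
model.eval()

# Dummy forward pass: a 224x224 image batch -> 1000-way ImageNet logits.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])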
https://t.iss.one/DataScienceT
Wuerstchen: Efficient Pretraining of Text-to-Image Models
A novel technique for text-to-image synthesis that unites competitive performance with unprecedented cost-effectiveness and ease of training on constrained hardware.
🖥 Github: https://github.com/dome272/wuerstchen
⏩ Paper: https://arxiv.org/abs/2306.00637v1
🔗 Colab: https://colab.research.google.com/drive/1UTP9Xn2UIrVbAXyL-SKEvyLmgVWdw-Vy
https://t.iss.one/DataScienceT
If you're a developer wanting to use large language model tools, our new course is for you.
You'll learn how to use different prompts at various stages in the system-building process, strategies for parsing long documents, and much more!
Join for free:
https://learn.deeplearning.ai/chatgpt-building-system
⭐ More reactions = more posts
@CodeProgrammer ♥️
TabEAE
🖥 Github: https://github.com/stardust-hyx/tabeae
⏩ Paper: https://arxiv.org/pdf/2306.00502v1.pdf
🔗 Dataset: https://paperswithcode.com/dataset/wikievents
https://t.iss.one/DataScienceT
GRES: Generalized Referring Expression Segmentation
A new benchmark (GRES) that extends the classic RES task to allow expressions referring to an arbitrary number of target objects.
🖥 Github: https://github.com/henghuiding/ReLA
⏩ Paper: https://arxiv.org/abs/2306.00968
🔗 Project: https://henghuiding.github.io/GRES/
🔗 New dataset: https://github.com/henghuiding/gRefCOCO
https://t.iss.one/DataScienceT
🦍 Gorilla: Large Language Model Connected with Massive APIs
Gorilla is a finetuned LLaMA-based model that surpasses GPT-4 at writing API calls.
🖥 Github: https://github.com/ShishirPatil/gorilla
📄 Paper: https://arxiv.org/abs/2305.15334
🔗 Demo: https://drive.google.com/file/d/1E0k5mG1mTiaz0kukyK1PdeohJipTFh6j/view?usp=share_link
🔗 Project: https://shishirpatil.github.io/gorilla/
✏️ Colab: https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing
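A hedged sketch of prompting a Gorilla checkpoint with Hugging Face Transformers (the model path is a placeholder: the project released delta weights that must first be merged with LLaMA; see the repo and Colab for the official inference path):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path to a merged Gorilla checkpoint (delta weights + LLaMA).
checkpoint = "path/to/merged-gorilla-7b"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Gorilla maps a natural-language intent to a concrete API call.
prompt = "I would like to translate English text to French."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))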
https://t.iss.one/DataScienceT
Segment Anything 3D
SAM-3D: a toolbox that transfers 2D SAM segments into 3D scene-level point clouds.
🖥 Github: https://github.com/pointcept/segmentanything3d
⏩ Paper: https://arxiv.org/abs/2306.03908v1
🔗 Dataset: https://paperswithcode.com/dataset/scannet
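The core lifting step can be sketched as pinhole back-projection of masked pixels through a depth map (illustrative only; the toolbox's actual pipeline fuses masks across many ScanNet views):

import numpy as np

def backproject_mask(mask: np.ndarray, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift the pixels of a 2D SAM mask into a camera-frame 3D point cloud.

    mask:  (H, W) boolean segmentation mask
    depth: (H, W) depth map in metres, aligned with the mask
    K:     (3, 3) camera intrinsics
    """
    v, u = np.nonzero(mask)              # pixel rows/columns inside the mask
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]      # X = (u - cx) * Z / fx
    y = (v - K[1, 2]) * z / K[1, 1]      # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) points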
https://t.iss.one/DataScienceT
Support us
PayPal address:
https://www.paypal.me/HusseinSheikho
TRC-20 Address:
TMzAr8AFcZ1n5RXZa3BHPXHBRqugx9Skr7
PayPal and MasterCard:
https://boosty.to/datascienceteam/donate
🐼 PandaLM: Reproducible and Automated Language Model Assessment
PandaLM is a judge large language model trained to distinguish the superior model among several LLMs. Its focus extends beyond the objective correctness of responses, which is the main focus of traditional evaluation datasets.
🖥 Github: https://github.com/weopenml/pandalm
📄 Paper: https://arxiv.org/abs/2306.05087v1
🔗 Dataset: https://github.com/tatsu-lab/stanford_alpaca#data-release
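Conceptually, the judge takes an instruction plus two candidate responses and states a preference. A prompt-level sketch of that comparison (hypothetical formatting, not the repo's actual API):

# Hypothetical pairwise-judging prompt; PandaLM's own pipeline lives in the repo.
JUDGE_TEMPLATE = """Below are two responses to the same instruction.

Instruction: {instruction}
Response 1: {response_1}
Response 2: {response_2}

Which response is better? Answer "1", "2", or "Tie", then briefly explain."""

def build_judge_prompt(instruction: str, response_1: str, response_2: str) -> str:
    """Format a comparison prompt for a judge model such as PandaLM."""
    return JUDGE_TEMPLATE.format(
        instruction=instruction, response_1=response_1, response_2=response_2
    )

print(build_judge_prompt("Name a prime number.", "4", "7"))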
https://t.iss.one/DataScienceT