OpenAssistant Conversations -- Democratizing Large Language Model Alignment
📝https://github.com/laion-ai/open-assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge
📝https://github.com/scir-hi/huatuo-llama-med-chinese
Repo for BenTsao (original name: HuaTuo, 华驼), instruction-tuning large language models with Chinese medical knowledge.
Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text
📝https://github.com/allenai/mmc4
MultimodalC4 is a multimodal extension of c4 that interleaves millions of images with text.
Inpaint Anything: Segment Anything Meets Image Inpainting
📝https://github.com/geekyutao/inpaint-anything
Inpaint anything using Segment Anything and inpainting models.
Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models
📝https://github.com/lupantech/chameleon-llm
DINOv2: Learning Robust Visual Features without Supervision
📝https://github.com/facebookresearch/dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
Transformer-Based Visual Segmentation: A Survey
📝https://github.com/lxtgh/awesome-segmenation-with-transformer
Anything-3D: Towards Single-view Anything Reconstruction in the Wild
📝https://github.com/anything-of-anything/anything-3d
Segment-Anything + 3D. Let's lift anything to 3D.
Unleashing Infinite-Length Input Capacity for Large-scale Language Models with Self-Controlled Memory System
📝https://github.com/wbbeyourself/scm4llms
Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
📝https://github.com/booydar/t5-experiments
[NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture (the repo now lives at booydar/recurrent-memory-transformer).
WizardLM: Empowering Large Language Models to Follow Complex Instructions
📝https://github.com/nlpxucan/wizardlm
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath.
AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
📝https://github.com/aigc-audio/audiogpt
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
📝https://github.com/declare-lab/tango
A family of diffusion models for text-to-audio generation.
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
📝https://github.com/mooler0410/llmspracticalguide
A curated list of practical guide resources for LLMs (LLM tree, examples, papers).
Hidet: Task-Mapping Programming Paradigm for Deep Learning Tensor Programs
📝https://github.com/hidet-org/hidet
An open-source efficient deep learning framework/compiler, written in Python.
DataComp: In search of the next generation of multimodal datasets
📝https://github.com/mlfoundations/datacomp