Machine learning books and papers
Admin: @Raminmousa
WhatsApp: +989333900804
ID: @Machine_learn
link: https://t.iss.one/Machine_learn
๐Ÿ’ก SAM2Long, a training-free enhancement to SAM 2 for long-term video segmentation


๐ŸŸกTechnical Report: https://huggingface.co/papers/2410.16268
๐ŸŸกGithub: https://github.com/Mark12Ding/SAM2Long
๐ŸŸกHomepage: https://mark12ding.github.io/project/SAM2Long/
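
Since SAM2Long is training-free, inference should look like SAM 2's video predictor. A minimal sketch, assuming the SAM2Long fork preserves that interface (the config and checkpoint names below are placeholders; use the files shipped with the repo):

import torch
from sam2.build_sam import build_sam2_video_predictor  # from the SAM2Long fork

# Placeholder config/checkpoint names: substitute the ones from the SAM2Long repo.
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="frames/")  # directory of JPEG frames
    # Prompt object 1 with a single positive click on the first frame.
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1, points=[[420, 260]], labels=[1]
    )
    # Propagate masks through the whole (long) video.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu()  # boolean mask per tracked object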



@Machine_learn
๐Ÿ“‘ A guide to RNA sequencing and functional analysis


๐Ÿ“Ž Study the paper

@Machine_learn
๐Ÿ‘4โค1
The State of AI Report

๐Ÿ“š Report

@Machine_learn
๐Ÿ‘2
NotebookLlama: An Open Source version of NotebookLM

📚 Repo

@Machine_learn
โค5
Tutorial on Diffusion Models for Imaging and Vision

๐Ÿ“š Book

@Machine_learn
โค5๐Ÿ‘2
An Infinite Descent into Pure Mathematics

๐Ÿ“š Book

@Machine_learn
๐Ÿ‘3โค1
Forwarded from Github LLMs
๐ŸŒŸ Zamba2-Instruct

ะ’ ัะตะผะตะนัั‚ะฒะต 2 ะผะพะดะตะปะธ:

๐ŸŸขZamba2-1.2B-instruct;
๐ŸŸ Zamba2-2.7B-instruct.



# Clone repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16
)

# Build a multi-turn chat and render it with the model's chat template.
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [
    {"role": "user", "content": user_turn_1},
    {"role": "assistant", "content": assistant_turn_1},
    {"role": "user", "content": user_turn_2},
]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize the rendered chat and generate greedily (no sampling).
input_ids = tokenizer(chat_sample, return_tensors="pt", add_special_tokens=False).to("cuda")
outputs = model.generate(
    **input_ids,
    max_new_tokens=150,
    return_dict_in_generate=False,
    output_scores=False,
    use_cache=True,
    num_beams=1,
    do_sample=False,
)
print(tokenizer.decode(outputs[0]))





๐Ÿ–ฅGitHub

https://t.iss.one/deep_learning_proj
๐Ÿ“• Applied Causal #Inference Powered by #MachineLearning

๐Ÿ“ŒBook

@Machine_learn
๐Ÿ‘2
THINKING LLMS: GENERAL INSTRUCTION FOLLOWING WITH THOUGHT GENERATION

📚 Read

@Machine_learn
๐Ÿ‘1
Hello everyone, today is the last chance to participate in this paper...!
๐Ÿ‘1
โšก๏ธ Stable Diffusion 3.5 Large.

# install Diffusers
pip install -U diffusers


# Inference
import torch
from diffusers import StableDiffusion3Pipeline

# Load the SD 3.5 Large weights in bfloat16 and move the pipeline to the GPU.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")

image = pipe(
    "A happy woman lying on the grass",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("woman.png")
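
If the full bf16 pipeline does not fit in VRAM, replacing pipe.to("cuda") with pipe.enable_model_cpu_offload() is the usual diffusers way to trade generation speed for memory.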





๐ŸŸกArxiv



@Machine_learn
๐ŸŒŸ Aya Expanse


๐ŸŸขAya Expanse 32B
๐ŸŸขAya Expanse 8B


๐ŸŸ Aya Expanse 32B-GGUF
๐ŸŸ Aya Expanse 8B-GGUF

Running Aya Expanse 8B with Transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the chat template
messages = [{"role": "user", "content": " %prompt% "}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>%prompt%<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

# Sample up to 100 new tokens at a low temperature.
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)





๐ŸŸกGGUF 32B
๐ŸŸกGGUF 8B
๐ŸŸกDemo
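
For the GGUF builds, a minimal llama-cpp-python sketch; the local filename below is a placeholder for whichever quant you download:

# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point at a downloaded Aya Expanse 8B GGUF quant.
llm = Llama(
    model_path="aya-expanse-8b-Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if built with CUDA support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate 'good morning' into French."}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])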


@Machine_learn
Intermediate Python

๐Ÿ“– Book

@Machine_learn
SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree

๐Ÿ–ฅ Github: https://github.com/mark12ding/sam2long

๐Ÿ“• Paper: https://arxiv.org/abs/2410.16268v1

๐Ÿค— HF: https://huggingface.co/papers/2410.16268

@Machine_learn
Forwarded from Papers
💍 Title: BERTCaps: BERT Capsule for Persian Multi-domain Sentiment Analysis.

๐Ÿ”บAbstract:
Sentiment classification is widely known to be a domain-dependent problem. Learning an accurate domain-specific sentiment classifier requires a large number of labeled samples, which are expensive and time-consuming to annotate. Multi-domain sentiment analysis based on multi-task learning can leverage the labeled samples in each individual domain, alleviating the need for large amounts of labeled data across all domains. In this article, we propose BERTCaps, a multi-domain classifier in which BERT is used for instance representation and a capsule network for instance learning. On the evaluation dataset, the model achieved an accuracy of 0.9712 in polarity classification and 0.8509 in domain classification.

Journal: https://www.sciencedirect.com/journal/array
IF: 2.3
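
The paper's code isn't linked here, so below is a hypothetical PyTorch sketch of the architecture the abstract describes: BERT for instance representation, a capsule layer on top, and separate polarity and domain heads. The backbone name, capsule sizes, and pooling are assumptions, not the authors' configuration.

import torch
import torch.nn as nn
from transformers import AutoModel

def squash(v, dim=-1, eps=1e-8):
    # Capsule squashing nonlinearity: preserves direction, bounds length to [0, 1).
    norm2 = (v ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * v / torch.sqrt(norm2 + eps)

class BERTCapsSketch(nn.Module):
    # Hypothetical reconstruction; hyperparameters are illustrative only.
    def __init__(self, backbone="bert-base-multilingual-cased",
                 num_caps=16, caps_dim=32, num_domains=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(backbone)
        hidden = self.bert.config.hidden_size
        # Project the pooled sentence vector into `num_caps` capsules of `caps_dim` dims.
        self.to_caps = nn.Linear(hidden, num_caps * caps_dim)
        self.num_caps, self.caps_dim = num_caps, caps_dim
        self.polarity_head = nn.Linear(num_caps * caps_dim, 2)  # positive / negative
        self.domain_head = nn.Linear(num_caps * caps_dim, num_domains)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        # Masked mean-pool over tokens, then form capsules from the sentence vector.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (h * mask).sum(1) / mask.sum(1)
        caps = squash(self.to_caps(pooled).view(-1, self.num_caps, self.caps_dim))
        flat = caps.flatten(1)
        return self.polarity_head(flat), self.domain_head(flat)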

Author positions 2 and 4 on this paper are still available.
Anyone interested in participating can message me directly.
@Raminmousa
@Paper4money
@Machine_learn
SmolLM2 1.7B beats Qwen 2.5 1.5B & Llama 3.2 1B, Apache 2.0 licensed, trained on 11 trillion tokens 🔥

> 135M, 360M & 1.7B parameter models
> Trained on FineWeb-Edu, DCLM and The Stack, along with new mathematics and coding datasets
> Specialises in text rewriting, summarization & function calling
> Integrated with transformers & models on the Hub!

You can run the 1.7B in less than 2GB VRAM with a Q4 quant 👑

Fine-tune, run inference, test, train, repeat - intelligence is just 5 lines of code away!

https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9
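
As a taste of the "few lines of code" claim, a minimal Transformers sketch (the instruct checkpoint name is taken from the collection above; verify it on the Hub):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

# One of the model's advertised strengths: text rewriting.
messages = [{"role": "user", "content": "Rewrite this politely: give me the report now."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))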

@Machine_learn
๐Ÿ‘1
๐Ÿ“‘A Survey of Deep Learning Methods for Estimating the Accuracy of Protein Quaternary Structure Models



๐Ÿ“Ž Study the paper

@Machine_learn
โค1
Data Pipelines with Apache Airflow

๐Ÿ“˜ book

@Machine_learn
โค5
Forwarded from Github LLMs
📖 LLM-Agent-Paper-List is a curated repository of papers on agents built on large language models (LLMs). The papers are organized into categories such as LLM agent architectures, autonomous LLM agents, reinforcement learning (RL), natural language processing methods, multimodal approaches, tools for developing LLM agents, and more.

๐Ÿ–ฅ Github

https://t.iss.one/deep_learning_proj