Machine learning books and papers pinned «Hello, we would like to start a new paper under the title: Comparative survey on Transfer Learning for multi-modal wound image classification. Our previous papers on this topic are, in order: Team 1: [1] published in Expert Systems with Applications…»
Signatures of unconventional superconductivity near reentrant and fractional quantum anomalous Hall insulators
📚 Paper
@Machine_learn
BioPars: Persian biomedical data
Model: BioPars
Dataset: ParsMed
Benchmark: BioParsQa
Submitting next week
@Machine_learn
The goal of this channel is to solve the problem of getting citations for papers. Those who need to cover citation costs can also earn part of the fee by citing any of the papers in this channel.
https://t.iss.one/papercite
@Raminmousa
Crystal Generation with Space Group Informed Transformer
🖥 Github: https://github.com/deepmodeling/crystalformer
📕 Paper: https://arxiv.org/abs/2504.02367v1
🔗 Dataset: https://paperswithcode.com/dataset/alex-20
@Machine_learn
4 advanced attention mechanisms you should know:
• Slim attention — 8× less memory, 5× faster generation by storing only K from KV pairs and recomputing V.
• XAttention — 13.5× speedup on long sequences via "looking" at the sum of values along diagonal lines in the attention matrix.
• Kolmogorov-Arnold Attention, KArAt — Adaptable attention with learnable activation functions using KANs instead of softmax.
• Multi-token attention (MTA) — Lets the model consider groups of nearby words together for smarter long-context handling.
Read our free overview article at https://huggingface.co/blog/Kseniase/attentions
@Machine_learn
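The "store only K, recompute V" idea behind Slim attention can be illustrated in a few lines: when the key projection W_k is square and invertible, V is a fixed linear function of K, so the V half of the KV cache is redundant. A minimal NumPy sketch of that identity (dimensions and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # head dimension (W_k must be square and invertible for this trick)
n = 5   # sequence length

X = rng.normal(size=(n, d))      # token hidden states
W_k = rng.normal(size=(d, d))    # key projection (square, almost surely invertible)
W_v = rng.normal(size=(d, d))    # value projection

K = X @ W_k
V = X @ W_v

# Keep only K in the cache and recompute V on the fly via the fixed matrix
# W_kv = W_k^{-1} W_v, since V = X W_v = (K W_k^{-1}) W_v.
W_kv = np.linalg.inv(W_k) @ W_v
V_recomputed = K @ W_kv

assert np.allclose(V, V_recomputed)
```

Halving the cache this way trades memory for one extra matmul per step, which is where the reported speed/memory gains come from.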
Forwarded from Papers
Hello, for one of our papers we need a first author, who will also be a co-author of the paper.
The target journal is Nature Scientific Reports:
https://www.nature.com/srep/
If interested, you can arrange the transfer terms with me via my ID.
@Raminmousa
@Machine_learn
@Paper4money
Machine learning books and papers pinned «Hello, for one of our papers we need a first author, who will also be a co-author of the paper. The target journal is Nature Scientific Reports https://www.nature.com/srep/. If interested, you can arrange the transfer terms with me via my ID. @Raminmousa @Machine_learn…»
CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models
28 Mar 2025 · Zhihang Lin, Mingbao Lin, Yuan Xie, Rongrong Ji
Paper: https://arxiv.org/pdf/2503.22342v1.pdf
Code: https://github.com/lzhxmu/cppo
Datasets: GSM8K, MATH
@Machine_learn
Hello, we are running the private SYFA course, whose goal is to introduce the process of writing and publishing papers. Sessions are one hour long and private, with two sessions per person each week. To register and schedule a time, contact me via my ID.
@Raminmousa
Data-engineer-handbook
This is a repo with links to everything you'd ever want to learn about data engineering
Creator: DataExpert-io
Stars ⭐️: 24.9k
Forks: 4.9k
Github Repo:
https://github.com/DataExpert-io/data-engineer-handbook
#github
➖➖➖➖➖➖➖➖➖➖➖➖➖➖
@Machine_learn
⛽ VoRA: Vision as LoRA ⛽
#ByteDance introduces #VoRA (Vision as #LoRA) — a novel framework that transforms #LLMs into Multimodal Large Language Models (MLLMs) by integrating vision-specific LoRA layers.
All training data, source code, and model weights are openly available!
Key Resources:
Overview: https://t.ly/guNVN
Paper: arxiv.org/pdf/2503.20680
GitHub Repo: github.com/Hon-Wong/VoRA
Project Page: georgeluimmortal.github.io/vora-homepage.github.io
@Machine_learn
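The building block VoRA attaches to the LLM is a LoRA update on a frozen weight: the pretrained matrix stays fixed and only a low-rank correction is trained. A generic sketch of that standard LoRA linear layer (illustrative names and dimensions; this is not VoRA's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen linear layer plus a trainable low-rank update:
    y = x W + (x A) B * (alpha / r)."""
    def __init__(self, d_in, d_out, r=4, alpha=8):
        self.W = rng.normal(size=(d_in, d_out))      # frozen pretrained weight
        self.A = rng.normal(size=(d_in, r)) * 0.01   # trainable down-projection
        self.B = np.zeros((r, d_out))                # trainable up-projection, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        return x @ self.W + (x @ self.A) @ self.B * self.scale

layer = LoRALinear(d_in=16, d_out=16)
x = rng.normal(size=(2, 16))

# With B zero-initialized, the LoRA branch contributes nothing at first,
# so the adapted layer initially matches the frozen base layer exactly.
assert np.allclose(layer(x), x @ layer.W)
```

Zero-initializing B is the usual LoRA convention: training starts from the pretrained model's behavior and the vision-specific correction grows from there.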