ML Research Hub
32.8K subscribers
4.37K photos
268 videos
23 files
4.72K links
Advancing research in Machine Learning – practical insights, tools, and techniques for researchers.

Admin: @HusseinSheikho || @Hussein_Sheikho
I want to share a tool that I genuinely believe can make a real difference for anyone building apps: PacketSDK. Many developers have strong active-user bases but still struggle to grow revenue. That's exactly where this solution stands out: it adds extra income without disrupting users or interfering with your existing monetization methods.

Why I strongly recommend it:

* It turns your active users into immediate profit without showing ads.
* Integration is fast and straightforward, taking around 30 minutes.
* It works on all platforms: mobile, desktop, TV, Unity, and more.

As a channel owner, I recommend trying this service; you have nothing to lose.

I have used it myself, and the earnings were impressive.
✨Harmony: Harmonizing Audio and Video Generation through Cross-Task Synergy

πŸ“ Summary:
Harmony improves audio-visual synchronization in generative AI. It introduces a Cross-Task Synergy training paradigm, a Global-Local Decoupled Interaction Module, and Synchronization-Enhanced CFG. This significantly enhances generation fidelity and fine-grained audio-visual alignment, achieving s...
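Synchronization-Enhanced CFG builds on classifier-free guidance (CFG), the standard technique for steering diffusion sampling toward a condition. As a point of reference, here is a minimal sketch of the plain CFG rule; the synchronization-specific enhancement from the paper is not reproduced, and the toy tensors are illustrative only.

```python
import numpy as np

def cfg_denoise(eps_cond, eps_uncond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one by the guidance scale.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise predictions for a 4-dimensional latent.
eps_c = np.array([1.0, 0.5, -0.2, 0.0])   # conditioned on the prompt
eps_u = np.array([0.8, 0.4, -0.1, 0.1])   # unconditional branch
guided = cfg_denoise(eps_c, eps_u, guidance_scale=3.0)
```

At `guidance_scale=1.0` the rule reduces to the conditional prediction; larger scales trade sample diversity for adherence to the condition.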

🔹 Publication Date: Published on Nov 26

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.21579
• PDF: https://arxiv.org/pdf/2511.21579
• Project Page: https://sjtuplayer.github.io/projects/Harmony/
• Github: https://github.com/sjtuplayer/Harmony

==================================

For more data science resources:
✓ https://t.iss.one/DataScienceT

#GenerativeAI #AudioVisual #DeepLearning #AISynchronization #AIResearch
✨Block Cascading: Training Free Acceleration of Block-Causal Video Models

πŸ“ Summary:
Block Cascading accelerates block-causal video generation via training-free parallelization. It starts future blocks with partially denoised predecessors, transforming sequential pipelines into parallel cascades for a 2x speedup without quality loss.
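The claimed speedup can be seen with a toy wall-clock model. Assuming each block runs `steps_per_block` denoising steps and a successor may start once its predecessor has completed `lag` steps (both names are illustrative, not from the paper), the pipelined schedule looks like:

```python
def sequential_steps(num_blocks, steps_per_block):
    # Baseline: each block waits for its predecessor to fully denoise.
    return num_blocks * steps_per_block

def cascaded_steps(num_blocks, steps_per_block, lag):
    # Cascade: block i starts after block i-1 has done `lag` steps,
    # so blocks denoise concurrently, each offset by `lag`.
    return (num_blocks - 1) * lag + steps_per_block

seq = sequential_steps(num_blocks=8, steps_per_block=50)        # 400
par = cascaded_steps(num_blocks=8, steps_per_block=50, lag=25)  # 225
```

With the lag at half the step count, wall-clock cost approaches a 2x reduction as the number of blocks grows, consistent with the ~2x speedup reported, assuming enough parallel compute to denoise several blocks at once.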

🔹 Publication Date: Published on Nov 25

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.20426
• PDF: https://arxiv.org/pdf/2511.20426
• Project Page: https://hmrishavbandy.github.io/block_cascading_page/
• Github: https://hmrishavbandy.github.io/block_cascading_page/

==================================


#VideoGeneration #AIAcceleration #ParallelProcessing #DeepLearning #ComputerVision
✨Image-Free Timestep Distillation via Continuous-Time Consistency with Trajectory-Sampled Pairs

πŸ“ Summary:
TBCM is a self-contained method that distills diffusion models by extracting latent representations directly from the teacher model trajectory. This eliminates external data, greatly improving efficiency and quality for few-step generation with reduced resources.

πŸ”Ή Publication Date: Published on Nov 25

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.20410
β€’ PDF: https://arxiv.org/pdf/2511.20410
β€’ Github: https://github.com/hustvl/TBCM

==================================


#DiffusionModels #ModelDistillation #GenerativeAI #AIResearch #MachineLearning
✨RAISECity: A Multimodal Agent Framework for Reality-Aligned 3D World Generation at City-Scale

πŸ“ Summary:
RAISECity uses an agentic framework with multimodal tools for reality-aligned, high-quality, city-scale 3D world generation. It iteratively refines scenes, achieving superior precision and fidelity compared to existing methods.

πŸ”Ή Publication Date: Published on Nov 22

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.18005
β€’ PDF: https://arxiv.org/pdf/2511.18005
β€’ Github: https://github.com/tsinghua-fib-lab/RAISECity

==================================


#3DGeneration #GenerativeAI #MultimodalAI #VirtualWorlds #ComputerGraphics
✨Multimodal Evaluation of Russian-language Architectures

πŸ“ Summary:
Mera Multi is the first open multimodal evaluation framework for Russian-language AI, addressing a lack of such benchmarks. It introduces 18 new instruction-based tasks across text, image, audio, and video, created with Russian cultural specificity and a leakage prevention methodology.

πŸ”Ή Publication Date: Published on Nov 19

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.15552
β€’ PDF: https://arxiv.org/pdf/2511.15552
β€’ Project Page: https://mera.a-ai.ru/en/multi
β€’ Github: https://github.com/MERA-Evaluation/MERA_MULTIMODAL/tree/main

==================================


#MultimodalAI #RussianAI #AIEvaluation #Benchmarks #AIResearch
✨WizardCoder: Empowering Code Large Language Models with Evol-Instruct

πŸ“ Summary:
WizardCoder is a Code LLM fine-tuned using Evol-Instruct for complex instructions. It significantly outperforms open-source and major closed LLMs on code generation benchmarks.
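The core of Evol-Instruct is a loop that repeatedly asks an LLM to rewrite an instruction into a harder variant, growing a pool of increasingly complex training tasks. A minimal sketch of that loop, where `llm`, `EVOLVE_TEMPLATE`, and the stub rewrite heuristic are all illustrative stand-ins, not the paper's actual prompts or model:

```python
EVOLVE_TEMPLATE = (
    "Rewrite the following coding task to make it more challenging, "
    "for example by adding constraints or complexity requirements:\n{task}"
)

def llm(prompt):
    # Stub: a real implementation would call a code LLM here.
    return prompt.rsplit("\n", 1)[-1] + " Use O(n log n) time and O(1) extra space."

def evolve(task, rounds=2):
    # Each round feeds the current task back through the model,
    # collecting the whole evolution trajectory as candidate training data.
    history = [task]
    for _ in range(rounds):
        task = llm(EVOLVE_TEMPLATE.format(task=task))
        history.append(task)
    return history

variants = evolve("Sort a list of integers.")
```

In Evol-Instruct, the evolved instructions are paired with model responses to fine-tune the base code model, with degenerate or failed evolutions filtered out before training.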

🔹 Publication Date: Published on Jun 14, 2023

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2306.08568
• PDF: https://arxiv.org/pdf/2306.08568
• Github: https://github.com/nlpxucan/WizardLM

🔹 Models citing this paper:
• https://huggingface.co/WizardLMTeam/WizardCoder-Python-34B-V1.0
• https://huggingface.co/WizardLMTeam/WizardCoder-15B-V1.0
• https://huggingface.co/alpindale/WizardLM-2-8x22B

✨ Datasets citing this paper:
• https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k
• https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1
• https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k

✨ Spaces citing this paper:
• https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard
• https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard
• https://huggingface.co/spaces/FallnAI/Quantize-HF-Models

==================================


#CodeLLM #LLM #AIE #CodeGeneration #EvolInstruct
✨UniGame: Turning a Unified Multimodal Model Into Its Own Adversary

πŸ“ Summary:
UniGame is a self-adversarial post-training framework that improves unified multimodal models. It resolves inconsistencies between understanding and generation by using a lightweight perturber to make the model its own adversary. This boosts consistency, understanding, generation, and robustness.

πŸ”Ή Publication Date: Published on Nov 24

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.19413
β€’ PDF: https://arxiv.org/pdf/2511.19413
β€’ Github: https://github.com/AIFrontierLab/UniGame

==================================


#MultimodalAI #AdversarialLearning #AIResearch #MachineLearning #ModelRobustness
✨Reinforcing Action Policies by Prophesying

πŸ“ Summary:
ProphRL improves Vision-Language-Action policies by overcoming imitation learning limits. It uses Prophet, a learned world model simulator, with tailored reinforcement learning FA-GRPO and FlowScale for data-efficient and stable post-training. This yields significant success gains on benchmarks a...

πŸ”Ή Publication Date: Published on Nov 25

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.20633
β€’ PDF: https://arxiv.org/pdf/2511.20633
β€’ Project Page: https://logosroboticsgroup.github.io/ProphRL/
β€’ Github: https://github.com/LogosRoboticsGroup/ProphRL

==================================


#ReinforcementLearning #ProphRL #WorldModels #Robotics #DeepLearning
✨Position: The Complexity of Perfect AI Alignment -- Formalizing the RLHF Trilemma

πŸ“ Summary:
RLHF faces an Alignment Trilemma: representativeness, tractability, and robustness are proven intractable to achieve simultaneously. Current RLHF sacrifices representativeness globally, causing biases and pathologies.

πŸ”Ή Publication Date: Published on Nov 23

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.19504
β€’ PDF: https://arxiv.org/pdf/2511.19504

==================================


#AIAlignment #RLHF #AISafety #MachineLearning #AIResearch
✨Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

πŸ“ Summary:
Gradio is an open-source Python package that creates visual interfaces for ML models, making them accessible to non-specialized users via a URL. This improves collaboration by allowing easy interaction, feedback, and trust-building in interdisciplinary settings.

🔹 Publication Date: Published on Jun 6, 2019

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/1906.02569
• PDF: https://arxiv.org/pdf/1906.02569
• Github: https://github.com/gradio-app/gradio

🔹 Models citing this paper:
• https://huggingface.co/CxECHO/CE

✨ Datasets citing this paper:
• https://huggingface.co/datasets/society-ethics/papers

✨ Spaces citing this paper:
• https://huggingface.co/spaces/orYx-models/Nudge_Generator
• https://huggingface.co/spaces/society-ethics/about
• https://huggingface.co/spaces/mindmime/gradio

==================================


#Gradio #MachineLearning #MLOps #Python #DataScience
✨NAF: Zero-Shot Feature Upsampling via Neighborhood Attention Filtering

πŸ“ Summary:
NAF upsamples Vision Foundation Model features zero-shot by learning adaptive spatial-and-content weights. It outperforms VFM-specific upsamplers without retraining, achieving state-of-the-art performance across various tasks efficiently.
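The idea of content-adaptive upsampling weights can be illustrated with a toy 1-D version: each high-resolution position attends over a small window of low-resolution features, with weights driven by similarity in a high-resolution guide signal. The windowing scheme, temperature `tau`, and squared-difference similarity below are assumptions for illustration, not NAF's exact formulation:

```python
import numpy as np

def neighborhood_attention_upsample(feat_lr, guide_hr, scale, radius=1, tau=0.1):
    # Toy 1-D sketch: high-res position i attends to low-res features in a
    # window around its nearest low-res index, weighted by guide similarity.
    n_hr = len(guide_hr)
    out = np.zeros((n_hr, feat_lr.shape[1]))
    guide_lr = guide_hr[::scale]                       # guide at low resolution
    for i in range(n_hr):
        c = i // scale                                 # nearest low-res index
        idx = np.clip(np.arange(c - radius, c + radius + 1), 0, len(feat_lr) - 1)
        sim = -(guide_hr[i] - guide_lr[idx]) ** 2 / tau
        w = np.exp(sim - sim.max())
        w /= w.sum()                                   # softmax over the window
        out[i] = w @ feat_lr[idx]
    return out

feat_lr = np.array([[0.0], [1.0]])          # 2 low-res positions, 1 channel
guide_hr = np.array([0.0, 0.0, 1.0, 1.0])   # 4 high-res guide values
up = neighborhood_attention_upsample(feat_lr, guide_hr, scale=2)
```

Because the weights come from the guide signal rather than from extractor-specific training, this style of filter can transfer across feature extractors, which is the zero-shot property NAF exploits.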

🔹 Publication Date: Published on Nov 23

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2511.18452
• PDF: https://arxiv.org/pdf/2511.18452
• Github: https://github.com/valeoai/NAF?tab=readme-ov-file

==================================


#ZeroShotLearning #ComputerVision #FeatureUpsampling #DeepLearning #AIResearch
✨G^2VLM: Geometry Grounded Vision Language Model with Unified 3D Reconstruction and Spatial Reasoning

πŸ“ Summary:
G^2VLM integrates 3D geometry learning into vision-language models to overcome their spatial intelligence deficits. It unifies 3D reconstruction and spatial reasoning, leveraging learned 3D features to achieve strong performance in both tasks.

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21688
β€’ PDF: https://arxiv.org/pdf/2511.21688
β€’ Project Page: https://gordonhu608.github.io/g2vlm.github.io/
β€’ Github: https://github.com/InternRobotics/G2VLM

πŸ”Ή Models citing this paper:
β€’ https://huggingface.co/InternRobotics/G2VLM-2B-MoT

==================================


#VisionLanguageModels #3DReconstruction #SpatialReasoning #ComputerVision #ArtificialIntelligence
✨MIRA: Multimodal Iterative Reasoning Agent for Image Editing

πŸ“ Summary:
MIRA is a multimodal iterative reasoning agent that enhances diffusion-based image editing. It tackles complex instructions by breaking them into atomic edits via a perception-reasoning-action loop with visual feedback. This improves semantic consistency and perceptual quality, outperforming othe...

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21087
β€’ PDF: https://arxiv.org/pdf/2511.21087

==================================


#AI #ImageEditing #MultimodalAI #DiffusionModels #ComputerVision
✨Multi-Crit: Benchmarking Multimodal Judges on Pluralistic Criteria-Following

πŸ“ Summary:
Multi-Crit evaluates multimodal models as judges on following diverse criteria using novel metrics. Findings reveal current models struggle with consistent adherence and flexibility to pluralistic criteria. This highlights gaps in capabilities and lays a foundation for building reliable AI evalua...

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21662
β€’ PDF: https://arxiv.org/pdf/2511.21662
β€’ Project Page: https://multi-crit.github.io/
β€’ Github: https://multi-crit.github.io/

==================================


#MultimodalAI #AIEvaluation #BenchmarkingAI #AIJudges #MachineLearning
✨Agentic Learner with Grow-and-Refine Multimodal Semantic Memory

πŸ“ Summary:
MLLMs often repeat errors due to insufficient multimodal memory. ViLoMem is a dual-stream memory framework that builds schema-based knowledge by separately encoding visual distractions and logical errors. This method significantly improves accuracy and reduces repeated errors across multiple benc...

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21678
β€’ PDF: https://arxiv.org/pdf/2511.21678

==================================


#MLLMs #MultimodalAI #AIMemory #DeepLearning #AIResearch
✨Canvas-to-Image: Compositional Image Generation with Multimodal Controls

πŸ“ Summary:
Canvas-to-Image unifies diverse controls like text, poses, and layouts into a single canvas image for high-fidelity compositional image generation. Its multi-task training helps it understand and integrate these controls, outperforming existing methods in adherence and identity.

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21691
β€’ PDF: https://arxiv.org/pdf/2511.21691
β€’ Project Page: https://snap-research.github.io/canvas-to-image/

==================================


#ImageGeneration #GenerativeAI #MultimodalAI #ComputerVision #DeepLearning
✨Video Generation Models Are Good Latent Reward Models

πŸ“ Summary:
Traditional video reward models are inefficient, operating in pixel space. PRFL uses pre-trained video generation models as latent reward models, optimizing preferences entirely in latent space. This significantly improves human alignment and reduces memory and training time.

πŸ”Ή Publication Date: Published on Nov 26

πŸ”Ή Paper Links:
β€’ arXiv Page: https://arxiv.org/abs/2511.21541
β€’ PDF: https://arxiv.org/pdf/2511.21541
β€’ Project Page: https://kululumi.github.io/PRFL/
β€’ Github: https://kululumi.github.io/PRFL/

==================================


#VideoGeneration #ReinforcementLearning #LatentSpace #AIResearch #MachineLearning
✨Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free

πŸ“ Summary:
Applying a head-specific sigmoid gate after Scaled Dot-Product Attention in large language models significantly improves performance, stability, and scaling. This simple modification mitigates attention sink and enhances long-context extrapolation by introducing non-linearity and sparse gating.
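The modification is small enough to sketch directly: compute scaled dot-product attention as usual, then multiply the output elementwise by a sigmoid gate. In this sketch the gate is a linear function of the query; the exact placement and parameterization in the paper (e.g. per-head gate weights) may differ, and the shapes are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(q, k, v, w_gate):
    # Standard scaled dot-product attention...
    d = q.shape[-1]
    attn = softmax(q @ k.swapaxes(-2, -1) / np.sqrt(d))
    out = attn @ v
    # ...followed by an elementwise sigmoid gate computed from the query.
    gate = 1.0 / (1.0 + np.exp(-(q @ w_gate)))
    return gate * out

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(2, 4, 8)) for _ in range(3))  # (heads, seq, dim)
w_gate = rng.normal(size=(8, 8))
out = gated_attention(q, k, v, w_gate)
```

Because the gate lies in (0, 1), a head can smoothly suppress its output at some positions instead of dumping attention mass onto early tokens, which the paper links to removing the attention-sink pattern.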

🔹 Publication Date: Published on May 10

🔹 Paper Links:
• arXiv Page: https://arxiv.org/abs/2505.06708
• PDF: https://arxiv.org/pdf/2505.06708
• Github: https://github.com/qiuzh20/gated_attention

==================================


#LLM #AttentionMechanism #DeepLearning #NLP #AIResearch