Machine learning books and papers
22.7K subscribers
972 photos
54 videos
928 files
1.31K links
Admin: @Raminmousa
WhatsApp: +989333900804
ID: @Machine_learn
link: https://t.iss.one/Machine_learn
ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents

25 Feb 2025 · Qiuchen Wang, Ruixue Ding, Zehui Chen, Weiqi Wu, Shihang Wang, Pengjun Xie, Feng Zhao ·

Understanding information from visually rich documents remains a significant challenge for traditional Retrieval-Augmented Generation (RAG) methods. Existing benchmarks predominantly focus on image-based question answering (QA), overlooking the fundamental challenges of efficient retrieval, comprehension, and reasoning within dense visual documents. To bridge this gap, we introduce ViDoSeek, a novel dataset designed to evaluate RAG performance on visually rich documents requiring complex reasoning. Based on it, we identify key limitations in current RAG approaches: (i) purely visual retrieval methods struggle to effectively integrate both textual and visual features, and (ii) previous approaches often allocate insufficient reasoning tokens, limiting their effectiveness. To address these challenges, we propose #ViDoRAG, a novel multi-agent RAG framework tailored for complex reasoning across visual documents. ViDoRAG employs a Gaussian Mixture Model (GMM)-based hybrid strategy to effectively handle multi-modal retrieval. To further elicit the model's reasoning capabilities, we introduce an iterative agent workflow incorporating exploration, summarization, and reflection, providing a framework for investigating test-time scaling in RAG domains. Extensive experiments on ViDoSeek validate the effectiveness and generalization of our approach. Notably, ViDoRAG outperforms existing methods by over 10% on the competitive #ViDoSeek benchmark.

Paper: https://arxiv.org/pdf/2502.18017v1.pdf

Code: https://github.com/Alibaba-NLP/ViDoRAG
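
The GMM-based hybrid retrieval can be pictured as fitting a mixture over the fused text+visual similarity scores and keeping only candidates in the high-similarity component, giving a per-query dynamic cutoff instead of a fixed top-k. Below is a minimal sketch of that idea; the fusion weight alpha and all function names are illustrative assumptions, not taken from the ViDoRAG codebase.

```python
# Hypothetical sketch of a GMM-based dynamic retrieval cutoff (assumed names).
import numpy as np
from sklearn.mixture import GaussianMixture

def hybrid_scores(text_sims, visual_sims, alpha=0.5):
    """Fuse per-candidate text and visual similarities (alpha is an assumed weight)."""
    return alpha * text_sims + (1.0 - alpha) * visual_sims

def dynamic_topk(scores):
    """Keep candidates assigned to the higher-mean of two Gaussian components."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(scores.reshape(-1, 1))
    keep = labels == int(np.argmax(gmm.means_.ravel()))
    return np.nonzero(keep)[0]

rng = np.random.default_rng(0)
text_sims = rng.normal(0.2, 0.05, 100); text_sims[:8] += 0.6    # 8 relevant pages
visual_sims = rng.normal(0.2, 0.05, 100); visual_sims[:8] += 0.5
print(dynamic_topk(hybrid_scores(text_sims, visual_sims)))      # -> roughly ids 0..7
```

Because the cutoff comes from the score distribution itself, easy queries retrieve few pages and ambiguous ones retrieve more, which is the point of the dynamic strategy.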

@Machine_learn
CS229 Lecture Notes
Andrew Ng and Tengyu Ma


📚 Link


@Machine_learn
Forwarded from Papers
Hello. For the paper above, we are looking for a third author.
Suggested journals for submission:

🔺🔺🔺🔸🔸🔸🔺🔺🔺
-Soft computing
- Computational Economics
- Multimedia Tools and Applications
To register your name, please contact me at my ID:
@Raminmousa
@Machine_learn
@paper4money
Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation

🖥 Github: https://github.com/EnVision-Research/Kiss3DGen

📕 Paper: https://arxiv.org/abs/2503.01370v1

🌟 Dataset: https://paperswithcode.com/dataset/nerf

@Machine_learn
Know You First and Be You Better: Modeling Human-Like User Simulators via Implicit Profiles

26 Feb 2025 · Kuang Wang, Xianfei Li, Shenghao Yang, Li Zhou, Feng Jiang, Haizhou Li ·

User simulators are crucial for replicating human interactions with dialogue systems, supporting both collaborative training and automatic evaluation, especially for large language models (LLMs). However, existing simulators often rely solely on text utterances, missing implicit user traits such as personality, speaking style, and goals. In contrast, persona-based methods lack generalizability, as they depend on predefined profiles of famous individuals or archetypes. To address these challenges, we propose User Simulator with implicit Profiles (#USP), a framework that infers implicit user profiles from human-machine conversations and uses them to generate more personalized and realistic dialogues. We first develop an LLM-driven extractor with a comprehensive profile schema. Then, we refine the simulation through conditional supervised fine-tuning and reinforcement learning with cycle consistency, optimizing it at both the utterance and conversation levels. Finally, we adopt a diverse profile sampler to capture the distribution of real-world user profiles. Experimental results demonstrate that USP outperforms strong baselines in terms of authenticity and diversity while achieving comparable performance in consistency. Furthermore, dynamic multi-turn evaluations based on USP strongly align with mainstream benchmarks, demonstrating its effectiveness in real-world applications.

Paper: https://arxiv.org/pdf/2502.18968v1.pdf

Code: https://github.com/wangkevin02/USP

Dataset: LMSYS-USP
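
The cycle-consistency signal can be read as: the profile that conditions the simulator should be recoverable from the dialogue the simulator produces. Below is a minimal, hypothetical sketch of that reward shape; the stub extractor and the Jaccard overlap are illustrative stand-ins for the paper's LLM-driven extractor and RL objective.

```python
# Hypothetical cycle-consistency reward: condition on a profile, generate a
# dialogue, re-extract a profile, and reward agreement between the two.
def extract_profile(dialogue):
    """Stub extractor; in USP this would be an LLM with a profile schema."""
    return {w.lower().strip(".,!?") for turn in dialogue for w in turn.split() if len(w) > 3}

def cycle_consistency_reward(profile, dialogue):
    """Jaccard overlap between the conditioning profile and the re-extracted one."""
    recovered = extract_profile(dialogue)
    if not profile and not recovered:
        return 1.0
    return len(profile & recovered) / len(profile | recovered)

profile = {"concise", "skeptical", "finance"}
dialogue = ["Keep it concise please.", "I stay skeptical about finance tips."]
print(round(cycle_consistency_reward(profile, dialogue), 3))  # higher = more consistent
```

A reward like this pushes the simulator to actually express the latent traits rather than ignore them, which is what the conversation-level optimization targets.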


@Machine_learn
Forwarded from Papers
Hello. For the paper above, we are looking for a first or third author.
Suggested journal for submission:

https://www.springerprofessional.de/financial-innovation/50101254
IF: 6.5
The fee for the first author is $500 and for the third author $300.

🔺🔺🔺🔸🔸🔸🔺🔺🔺

To register your name, please contact me at my ID:
@Raminmousa
@Machine_learn
@paper4money
Attention from Beginners Point of View

📚 Read


@Machine_learn
A Survey on Post-Training of Large Language Models

📚 Read

@Machine_learn
🔥 Exercises in Machine Learning

Book

@Machine_learn
Hello. For the paper below, we need someone to share the server costs with us.

Multi-modal wound classification using wound image and location by ViT-Wavelet and Transformer
🔸🔸🔸🔸🔸🔸🔸
Journal: Scientific Reports (Nature)
The participation fee for the fifth author is $300.
🔻@Raminmousa
The Matrix Cookbook

📚 Link

@Machine_learn
Controlling Latent Diffusion Using Latent CLIP

📚 Read

@Machine_learn
Forwarded from Papers
Hello. For the paper above, we are looking for a third author.
Suggested journal for submission:

https://www.springerprofessional.de/financial-innovation/50101254
IF: 6.5
The fee for the third author is 15 million.

🔺🔺🔺🔸🔸🔸🔺🔺🔺

To register your name, please contact me at my ID:
@Raminmousa
@Machine_learn
@paper4money
Everything You Always Wanted To Know About Mathematics*

📓 Book

@Machine_learn
MonSter: Marry Monodepth to Stereo Unleashes Power

15 Jan 2025 · Junda Cheng, Longliang Liu, Gangwei Xu, Xianqi Wang, Zhaoxing Zhang, Yong Deng, Jinliang Zang, Yurui Chen, Zhipeng Cai, Xin Yang ·

Stereo matching recovers depth from image correspondences. Existing methods struggle to handle ill-posed regions with limited matching cues, such as occlusions and textureless areas. To address this, we propose MonSter, a novel method that leverages the complementary strengths of monocular depth estimation and stereo matching. MonSter integrates monocular depth and stereo matching into a dual-branch architecture in which the two branches iteratively improve each other. Confidence-based guidance adaptively selects reliable stereo cues for monodepth scale-shift recovery. The refined monodepth in turn guides stereo matching effectively in ill-posed regions. Such iterative mutual enhancement enables MonSter to evolve monodepth priors from coarse object-level structures to pixel-level geometry, fully unlocking the potential of stereo matching. As shown in Fig. 1, MonSter ranks 1st across the five most commonly used leaderboards -- SceneFlow, KITTI 2012, KITTI 2015, Middlebury, and ETH3D -- achieving up to 49.5% improvement (Bad 1.0 on ETH3D) over the previous best method. Comprehensive analysis verifies the effectiveness of MonSter in ill-posed regions. In terms of zero-shot generalization, MonSter significantly and consistently outperforms the state of the art across the board. The code is publicly available at: https://github.com/Junda24/MonSter.

Paper: https://arxiv.org/pdf/2501.08643v1.pdf

Code: https://github.com/junda24/monster

Datasets: KITTI - TartanAir
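
The scale-shift recovery step can be sketched as a confidence-gated least-squares alignment: fit a scale and shift so the relative monocular prediction matches stereo estimates on pixels the stereo branch is confident about. The closed-form alignment below is a standard formulation of this idea; the threshold tau and all names are assumptions, not MonSter's actual code.

```python
# Hedged sketch of confidence-gated scale-shift recovery (assumed names).
import numpy as np

def align_monodepth(mono, stereo, confidence, tau=0.9):
    """Fit scale s and shift t so s*mono + t matches stereo values on
    pixels whose stereo confidence exceeds tau (tau is an assumption)."""
    mask = confidence > tau
    A = np.stack([mono[mask], np.ones(mask.sum())], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, stereo[mask], rcond=None)
    return s * mono + t  # monodepth now on the stereo branch's scale

rng = np.random.default_rng(1)
mono = rng.uniform(0.1, 1.0, 1000)                      # relative prediction
stereo = 8.0 * mono + 2.0 + rng.normal(0, 0.05, 1000)   # noisy metric cues
conf = rng.uniform(0, 1, 1000)
aligned = align_monodepth(mono, stereo, conf)
print(np.abs(aligned - stereo).mean())                  # small residual
```

Gating on confidence matters because fitting the scale and shift on occluded or textureless pixels would propagate exactly the stereo errors the monocular prior is meant to correct.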

@Machine_learn
Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement

🖥 Github: https://github.com/yunncheng/MMRL

📕 Paper: https://arxiv.org/abs/2503.08497v1

🌟 Dataset: https://paperswithcode.com/dataset/imagenet-s

@Machine_learn