Segment & Track Any Cell
RWTH unveils a novel zero-shot cell tracking framework that integrates Segment Anything 2 (SAM2) into the tracking pipeline. Source code released; a minimal prompting sketch follows the links.
Review: https://t.ly/n_srg
Paper: https://arxiv.org/pdf/2509.09943
Repo: https://github.com/zhuchen96/sam4celltracking
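Not the paper's full pipeline, just a minimal sketch of the core ingredient: prompting SAM2's video predictor with one point per cell and propagating the masks through the sequence. It uses the public facebookresearch/sam2 API; the config/checkpoint paths, frame directory, and seed centroids are placeholders.

```python
# Minimal sketch: seed SAM2's video predictor with one point prompt per
# detected cell and propagate masks through the sequence.
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml",        # model config (assumed path)
    "./checkpoints/sam2.1_hiera_large.pt",       # checkpoint (assumed path)
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="frames/")  # directory of JPEG frames

    # One prompt per cell on the first frame; (x, y) centroids from any detector.
    seeds = [(120.0, 85.0), (240.5, 190.0)]              # hypothetical centroids
    for obj_id, (x, y) in enumerate(seeds, start=1):
        predictor.add_new_points_or_box(
            inference_state=state,
            frame_idx=0,
            obj_id=obj_id,
            points=[[x, y]],
            labels=[1],                                  # 1 = foreground click
        )

    # Propagate: yields per-frame masks for every prompted cell.
    tracks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        for i, obj_id in enumerate(obj_ids):
            tracks.setdefault(obj_id, {})[frame_idx] = (mask_logits[i] > 0.0).cpu()
```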
How We Use ChatGPT
By July 2025, ChatGPT had 700M+ users sending more than 2.5B messages per day, about 29,000 messages per second (quick check below). This paper documents eight important facts about ChatGPT usage over the last three years. 63 pages of impressive statistics. Worth a read.
Review: https://t.ly/QYHSi
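For context, the per-second figure quoted above follows directly from the daily volume:

```python
# Quick sanity check of the rate quoted in the summary above.
messages_per_day = 2.5e9
per_second = messages_per_day / (24 * 60 * 60)
print(f"{per_second:,.0f} messages/second")  # ~28,935, i.e. the ~29,000/s figure
```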
3D Prompted Vision-LLM
#Nvidia unveils SR-3D, a novel 3D-aware vision-language model that connects single-view 2D images and multi-view 3D data through a shared visual token space. Flexible region prompting lets users annotate regions with bounding boxes, segmentation masks on any frame, or directly in 3D, with no need for exhaustive multi-frame labeling (illustrative sketch below). Code & dataset announced.
Review: https://t.ly/5Y2c5
Paper: https://arxiv.org/pdf/2509.13317
Project: https://www.anjiecheng.me/sr3d
Repo: TBA
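Since the SR-3D code is not out yet, here is a purely hypothetical sketch of what the flexible region prompting described above could look like; every name below (the prompt dicts, encode_region, generate) is an assumption, not the authors' API.

```python
# Hypothetical illustration only: the SR-3D repo is TBA, so none of these
# names come from the actual codebase.
import numpy as np

# A region prompt can reference a single frame (2D box or mask) ...
box_prompt   = {"type": "box2d",  "frame": 12, "xyxy": [40, 60, 180, 220]}
mask_prompt  = {"type": "mask2d", "frame": 30, "mask": np.zeros((480, 640), bool)}
# ... or live directly in 3D, so no exhaustive multi-frame labeling is needed.
box3d_prompt = {"type": "box3d", "center": [1.2, 0.4, 2.0], "size": [0.5, 0.5, 0.9]}

def ask_about_region(model, frames, prompt, question):
    """Assumed interface: encode the prompted region into the shared 2D/3D
    visual token space, then let the VLM answer a question about it."""
    region_tokens = model.encode_region(frames, prompt)           # assumed method
    return model.generate(question, extra_tokens=region_tokens)   # assumed method
```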
Superpixel Anything (SOTA)
SuperPixel Anything Model (SPAM), a versatile framework for segmenting images: image features drive superpixel generation and are blended with a large-scale pretrained model for semantic-agnostic segmentation, ensuring superpixels stay aligned with masks (toy alignment sketch below). Damn romantic. Repo & dataset available.
Review: https://t.ly/rpxRh
Paper: arxiv.org/pdf/2509.12791
Repo: github.com/waldo-j/spam
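The repo holds the real model; below is only a toy sketch of the alignment constraint, with plain SLIC standing in for the learned superpixels and a precomputed semantic-agnostic mask map standing in for the pretrained model's output.

```python
# Toy illustration of superpixel/mask alignment, not the SPAM code:
# split every superpixel along mask boundaries so that no superpixel
# straddles two masks.
import numpy as np
from skimage.segmentation import slic

def align_superpixels_to_masks(image, masks, n_segments=300):
    """image: HxWx3 float array; masks: HxW integer ids from any mask generator."""
    sp = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    aligned = np.zeros_like(sp)
    next_id = 0
    for sp_id in np.unique(sp):
        region = sp == sp_id
        for m in np.unique(masks[region]):      # masks overlapping this superpixel
            part = region & (masks == m)
            aligned[part] = next_id             # one aligned superpixel per overlap
            next_id += 1
    return aligned
```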
I'm keeping the main channel closed to interaction to avoid spam. The only way to interact is to comment on posts after being accepted into the subchannel. Do you like this setting?
Anonymous Poll
YES, keep this configuration: 92%
NO, open the main channel to comment for everyone: 8%
DAM for SAM2 Tracking
From the University of Ljubljana, a novel distractor-aware drop-in memory module for SAM2. It reduces tracking drift toward distractors and improves redetection after object occlusion (policy sketch below). DAM4SAM outperforms SAM2.1 and is SOTA on 10 benchmarks. Repo released.
Review: https://t.ly/8aR59
Paper: https://arxiv.org/pdf/2509.13864
Project: jovanavidenovic.github.io/dam-4-sam/
Repo: github.com/jovanavidenovic/DAM4SAM
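The DAM4SAM repo has the real module; the snippet below only sketches the policy idea in plain Python, with all names and thresholds invented for illustration: admit a frame into memory only when the target is confident and clearly separated from distractors, and keep a few trusted anchors for redetection.

```python
# Illustrative distractor-aware memory policy (not the DAM4SAM implementation).
from collections import deque

class DistractorAwareMemory:
    def __init__(self, size=7, conf_thr=0.8, margin=0.2):
        self.recent = deque(maxlen=size)   # rolling recent-frame memory
        self.anchors = []                  # long-term, high-confidence anchors
        self.conf_thr = conf_thr
        self.margin = margin

    def update(self, frame_feat, target_score, best_distractor_score):
        """target_score: predicted target mask quality; best_distractor_score:
        score of the most target-like competing object in the frame."""
        confident = target_score > self.conf_thr
        unambiguous = (target_score - best_distractor_score) > self.margin
        if confident and unambiguous:
            self.recent.append(frame_feat)          # safe to condition on
            if len(self.anchors) < 3:
                self.anchors.append(frame_feat)     # keep a few trusted anchors
        # Otherwise skip the frame: drift toward a distractor is never written
        # into memory, and redetection after occlusion falls back on anchors.
        return list(self.anchors) + list(self.recent)
```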
It's time to decide whether you want to give LinkedIn your data for AI training or not.
Poll: https://lnkd.in/p/ddnenZgH
Set here: https://linkedin.com/mypreferences/d/settings/data-for-ai-improvement
Invariant Saliency Detection
SI-SOD: size-invariant salient object detection for scenarios where multiple salient objects of significantly different sizes appear within a single image (illustrative metric sketch below). Repo released.
Review: https://lnkd.in/p/dZBfbSsf
Paper: https://arxiv.org/pdf/2509.15573
Project: https://ferry-li.github.io/SI_SOD/
Repo: https://github.com/Ferry-Li/SI-SOD
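For a feel of what size-invariance means here, a toy metric is sketched below (my illustration, not the paper's exact formulation): score each ground-truth salient object separately inside its own bounding box and average, so a missed tiny object costs as much as a missed huge one.

```python
# Toy size-invariant score, for illustration only.
import numpy as np
from scipy import ndimage

def size_invariant_iou(pred, gt):
    """pred, gt: HxW boolean saliency maps. Mean per-object IoU."""
    labels, n = ndimage.label(gt)              # split GT into salient objects
    if n == 0:
        return float(not pred.any())           # empty GT: perfect only if pred empty
    scores = []
    for k, sl in enumerate(ndimage.find_objects(labels), start=1):
        obj = labels[sl] == k                  # this object, inside its bbox
        p = pred[sl]
        inter = np.logical_and(p, obj).sum()
        union = np.logical_or(p, obj).sum()
        scores.append(inter / union)
    return float(np.mean(scores))              # every object weighs the same
```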
WINNER of LSVOS Challenge
SaSaSa2VA introduces Segmentation Augmentation to improve global video understanding while remaining efficient, and employs Selective Averaging at inference to robustly fuse complementary predictions (fusion sketch below). The approach achieves SOTA on the 7th LSVOS Challenge (RVOS track). A practical solution with the full repo under Apache.
Review: https://t.ly/aH4mB
Paper: https://arxiv.org/pdf/2509.16972
Repo: https://github.com/magic-research/Sa2VA
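A rough sketch of the Selective Averaging step as described above (names and thresholds are mine, not the repo's): average several complementary mask predictions, but only the ones that agree with the consensus.

```python
# Illustrative selective averaging of complementary mask predictions.
import numpy as np

def selective_average(prob_maps, agree_thr=0.6):
    """prob_maps: list of HxW arrays in [0, 1] from different runs/keyframes."""
    stack = np.stack(prob_maps)                       # (N, H, W)
    consensus = stack.mean(axis=0) > 0.5              # rough majority mask
    kept = []
    for p in stack:
        m = p > 0.5
        inter = np.logical_and(m, consensus).sum()
        union = np.logical_or(m, consensus).sum() + 1e-6
        if inter / union >= agree_thr:                # selective: drop outliers
            kept.append(p)
    fused = np.mean(kept if kept else stack, axis=0)  # average the survivors
    return fused > 0.5
```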
MOSEv2 Challenge Winner
A practical solution for complex segmentation based on Segment Concept (SeC), a concept-driven segmentation framework that shifts from conventional feature matching to the progressive construction and use of high-level, object-centric representations (toy sketch below). Repo under Apache 2.0.
Review: https://t.ly/2MjNm
Paper: arxiv.org/pdf/2509.19183
Paper (SeC): arxiv.org/pdf/2507.15852
Repo: github.com/OpenIXCLab/SeC
Project: rookiexiong7.github.io/projects/SeC/
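To make "concept-driven vs. feature matching" concrete, here is a toy sketch (my own, not the SeC code): keep an object-centric concept vector, refresh it progressively from confidently segmented frames, and segment new frames by similarity to that concept instead of matching against the previous frame.

```python
# Toy concept-driven tracker, for illustration only.
import torch
import torch.nn.functional as F

class ConceptTracker:
    def __init__(self, momentum=0.9, sim_thr=0.6):
        self.concept = None            # running object-centric representation
        self.momentum = momentum
        self.sim_thr = sim_thr

    def update(self, feat_map, mask):
        """feat_map: (C, H, W) frame features; mask: (H, W) bool object mask.
        Call first with the initial prompt mask, then with confident predictions."""
        obj = F.normalize(feat_map[:, mask].mean(dim=1), dim=0)   # pooled object feature
        if self.concept is None:
            self.concept = obj
        else:                                                     # progressive build-up
            self.concept = F.normalize(
                self.momentum * self.concept + (1 - self.momentum) * obj, dim=0)

    def segment(self, feat_map):
        """Segment by similarity to the concept, not to the last frame."""
        sim = F.cosine_similarity(feat_map, self.concept[:, None, None], dim=0)
        return sim > self.sim_thr
```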
CLOPS: Vision-Driven Avatar
CLOPS is the first human avatar that relies solely on egocentric vision to perceive its surroundings and navigate. It moves realistically through a scene, using egocentric vision to find a goal in a closed loop of visual perception and motion (pseudocode sketch below). Code announced.
Review: https://t.ly/RXp64
Paper: https://arxiv.org/pdf/2509.19259
Project: markos-diomataris.github.io/projects/clops/
Repo: TBA
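Since the code is still unreleased, the perception-motion loop can only be sketched as pseudocode; avatar, scene, and policy below are stand-in names, not the authors' API.

```python
# Hypothetical egocentric perception-motion loop (CLOPS repo is TBA).
def navigate_to_goal(avatar, scene, policy, max_steps=500):
    for _ in range(max_steps):
        view = scene.render_egocentric(avatar.head_pose())   # what the avatar sees
        motion = policy(view, avatar.body_state())           # vision -> next motion
        avatar.step(motion)                                   # advance the avatar
        if policy.goal_reached(view):                         # goal found by sight alone
            return True
    return False
```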