Depth Anything 3 is out

ByteDance unveils Depth Anything 3 (DA3), a model that predicts spatially consistent geometry from arbitrary visual inputs, with or without known camera poses. Repo under Apache 2.0.
Review: https://t.ly/AOPu7
Paper: arxiv.org/pdf/2511.10647
Project: https://lnkd.in/dnByyn2z
Repo: https://lnkd.in/daCVz_4a
Demo: https://lnkd.in/dKUZiJt
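DA3's own API is not shown here. As a hedged sketch of a common downstream step for models that predict per-pixel geometry, the snippet below back-projects a depth map into a 3D point cloud with the pinhole camera model; the depth values and intrinsics are made up for illustration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into an (H*W, 3) point cloud
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a constant-depth plane with made-up intrinsics.
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

With real DA3 output you would feed the predicted depth map (and, if available, the estimated camera intrinsics) in place of the toy arrays.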
It's "Time-to-Move"

Technion + NVIDIA present Time-to-Move (TTM), a training-free, plug-and-play framework for motion- and appearance-controlled video generation with I2V diffusion models (Wan 2.2, CogVideoX & Stable Video Diffusion). Impressive results!
Review: https://t.ly/0pwXm
Paper: https://lnkd.in/dxD3uHYb
Project: https://lnkd.in/dcE5juyM
Repo: https://lnkd.in/dMMUjybJ
Multi-Shot Video Segmentation

Fudan tackles the underexplored task of multi-shot video object segmentation (MVOS). Benchmark and repo (an extension of SAM) available under Apache 2.0.
Review: https://t.ly/WBW00
Paper: https://arxiv.org/pdf/2511.13715
Project: https://henghuiding.com/SAAS/
Repo: https://github.com/FudanCVL/SAAS
SAM 3 / SAM 3D are out!

Meta releases SAM 3, a unified model for detection, segmentation, and tracking of objects in images and video using text, exemplar, and visual prompts. Repo/models under a proprietary license.
Review: https://t.ly/lnRZN
Paper: https://t.ly/5tq9N
Project: https://ai.meta.com/sam3/
Demo: https://segment-anything.com
Repo: https://github.com/facebookresearch/sam3
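SAM 3's own API is not reproduced here. Whatever model produces the masks, segmentation quality is conventionally scored with intersection-over-union; a minimal NumPy sketch on toy boolean masks (all arrays made up for illustration):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # two empty masks count as a match

a = np.zeros((8, 8), bool); a[:4, :4] = True   # 16 px
b = np.zeros((8, 8), bool); b[2:6, :4] = True  # 16 px, 8 px overlap
print(mask_iou(a, b))  # 8 / 24, i.e. about 0.333
```

The same function applies unchanged to masks predicted from text, exemplar, or visual prompts, since they all reduce to per-pixel booleans.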
Unwrapping of 3D Meshes

PartUV is a novel part-based UV unwrapping method for 3D meshes; it combines learned part priors with geometric cues to generate a compact set of part-aligned charts. Repo released.
Review: https://t.ly/8dNIY
Paper: arxiv.org/pdf/2511.16659
Project: www.zhaoningwang.com/PartUV/
Repo: github.com/EricWang12/PartUV
Upsample Anything

Upsample Anything is a novel universal, training-free upsampler based on lightweight test-time optimization. No code yet, but it's a relevant paper.
Review: https://t.ly/7LE6G
Paper: https://lnkd.in/dsUfdtih
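Since no code has been released, the paper's method is not reproduced here. A conceptual neighbor of guidance-based upsampling is joint bilateral upsampling, where low-resolution values are lifted to the guide image's resolution using spatial and range kernels; the sketch below is a slow, illustrative NumPy version with made-up parameters, not the paper's algorithm.

```python
import numpy as np

def joint_bilateral_upsample(lowres, guide, sigma_s=1.0, sigma_r=0.1):
    """Upsample `lowres` (h, w) to the guide's resolution (H, W), weighting
    nearby low-res samples by a spatial Gaussian and a range Gaussian
    computed on the high-res guide image."""
    H, W = guide.shape
    h, w = lowres.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i / sy, j / sx  # position in low-res coordinates
            acc = wacc = 0.0
            for di in range(int(ci) - 1, int(ci) + 2):
                for dj in range(int(cj) - 1, int(cj) + 2):
                    if 0 <= di < h and 0 <= dj < w:
                        ws = np.exp(-((ci - di) ** 2 + (cj - dj) ** 2)
                                    / (2 * sigma_s ** 2))
                        g_lr = guide[min(int(di * sy), H - 1),
                                     min(int(dj * sx), W - 1)]
                        wr = np.exp(-(guide[i, j] - g_lr) ** 2
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * lowres[di, dj]
                        wacc += ws * wr
            out[i, j] = acc / wacc
    return out

up = joint_bilateral_upsample(np.ones((2, 2)), np.zeros((4, 4)))
print(up.shape)  # (4, 4)
```

The test-time-optimization angle of the paper would additionally fit the kernel parameters per image; here `sigma_s` and `sigma_r` are simply fixed.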
Single Synthetic Image per Class

MIT unveils Linear Gradient Matching (H/T Torralba), a novel distillation method that uses a single synthetic image per class to train linear classifiers (and more). Repo available.
Review: https://t.ly/dD3un
Paper: arxiv.org/pdf/2511.16674
Project: linear-gradient-matching.github.io/
Repo: github.com/GeorgeCazenavette/linear-gradient-matching
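As a toy sketch of the gradient-matching idea behind this line of work (not the paper's actual pipeline): make the loss gradient induced by one synthetic example align with the gradient induced by a real batch. All data below is random and every name is illustrative.

```python
import numpy as np

def linear_grad(w, X, y):
    """Gradient of the mean logistic loss for linear classifier w on (X, y)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

def grad_match_loss(w, X_real, y_real, x_syn, y_syn):
    """Cosine distance between the real-batch gradient and the gradient of a
    single synthetic example: the quantity gradient-matching distillation
    would drive toward zero by optimizing the synthetic image."""
    g_real = linear_grad(w, X_real, y_real)
    g_syn = linear_grad(w, x_syn[None, :], np.array([y_syn]))
    denom = np.linalg.norm(g_real) * np.linalg.norm(g_syn) + 1e-12
    return 1.0 - (g_real @ g_syn) / denom

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
y = rng.integers(0, 2, 32).astype(float)
w = rng.normal(size=8)
loss = grad_match_loss(w, X, y, X[0], y[0])  # cosine distance, in [0, 2]
```

A full distillation loop would minimize this loss over the synthetic pixels across many sampled `w`; the paper works with deep feature extractors rather than raw-pixel logistic regression.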
EfficientSAM3 is out

Bristol announces EfficientSAM3, a family of efficient models built on Progressive Hierarchical Distillation, which transfers capability from SAM 3 to lightweight students. Code coming (in sync with the SAM 3 release).
Review: https://t.ly/bfXP2
Paper: arxiv.org/pdf/2511.15833
Project: simonzeng7108.github.io/efficientsam3/
Repo: github.com/SimonZeng7108/efficientsam3