🎢 Robo-quadruped Parkour🎢
👉LAAS-CNRS unveils a novel RL approach for agile, parkour-like skills: walking, climbing high steps, leaping over gaps, and crawling under obstacles. Data and Code available💙
👉Review https://t.ly/-6VRm
👉Paper arxiv.org/pdf/2409.13678
👉Project gepetto.github.io/SoloParkour/
👉Code github.com/Gepetto/SoloParkour
🩰 Dressed Humans in the wild 🩰
👉ETH (+ #Microsoft) ReLoo: novel HQ 3D reconstruction of humans dressed in loose garments from monocular in-the-wild clips. No prior assumptions about the garments. Source Code announced, coming 💙
👉Review https://t.ly/evgmN
👉Paper arxiv.org/pdf/2409.15269
👉Project moygcc.github.io/ReLoo/
👉Code github.com/eth-ait/ReLoo
🌾 New SOTA Edge Detection 🌾
👉CUP (+ ESPOCH) unveils NBED, the new SOTA in edge detection: consistently superior performance across multiple benchmarks, even against models with huge computational cost and complex training. Source Code released💙
👉Review https://t.ly/zUMcS
👉Paper arxiv.org/pdf/2409.14976
👉Code github.com/Li-yachuan/NBED
👩🦰 SOTA Gaussian Haircut 👩🦰
👉ETH et al. unveil Gaussian Haircut, the new SOTA in hair reconstruction via a dual representation (classic + 3D Gaussian). Code and Model announced💙
👉Review https://t.ly/aiOjq
👉Paper arxiv.org/pdf/2409.14778
👉Project https://lnkd.in/dFRm2ycb
👉Repo https://lnkd.in/d5NWNkb5
🍇SPARK: Real-time Face Capture🍇
👉Technicolor Group unveils SPARK, a novel high-precision 3D face capture that uses a collection of unconstrained videos of a subject as prior information. New SOTA, able to handle unseen poses, expressions and lighting. Impressive results. Code & Model announced💙
👉Review https://t.ly/rZOgp
👉Paper arxiv.org/pdf/2409.07984
👉Project kelianb.github.io/SPARK/
👉Repo github.com/KelianB/SPARK/
🦴 One-Image Object Detection 🦴
👉Delft University (+ Hensoldt Optronics) introduces OSSA, a novel unsupervised domain adaptation method for object detection that uses a single, unlabeled target image to approximate the target-domain style (see the sketch after the links). Code released💙
👉Review https://t.ly/-li2G
👉Paper arxiv.org/pdf/2410.00900
👉Code github.com/RobinGerster7/OSSA
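👉The gist, as a hedged sketch: estimate the target-domain "style" from the one unlabeled target image via per-channel feature statistics, then re-normalize source features to match (AdaIN-style stats alignment). Everything below is illustrative, not the authors' actual OSSA code:
```python
import torch

def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    # feat: (B, C, H, W) -> per-channel mean/std over the spatial dims
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std

def stylize(source_feat: torch.Tensor, target_feat: torch.Tensor) -> torch.Tensor:
    # Re-normalize source features to carry the target image's style stats.
    s_mean, s_std = channel_stats(source_feat)
    t_mean, t_std = channel_stats(target_feat)
    return (source_feat - s_mean) / s_std * t_std + t_mean

src = torch.randn(4, 256, 64, 64)  # features of a labeled source batch
tgt = torch.randn(1, 256, 64, 64)  # features of THE single target image
adapted = stylize(src, tgt)        # detector then trains on these
```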
🛳️ EVER Ellipsoid Rendering 🛳️
👉UCSD & Google present EVER, a novel method for real-time differentiable emission-only volume rendering. Unlike 3DGS, it does not suffer from popping artifacts or view-dependent density, achieving ∼30 FPS at 720p on an #NVIDIA RTX 4090 (background sketch after the links).
👉Review https://t.ly/zAfGU
👉Paper arxiv.org/pdf/2410.01804
👉Project half-potato.gitlab.io/posts/ever/
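👉For context, this is the classic emission-only (emission-absorption) volume rendering model EVER builds on. EVER itself ray-traces constant-density ellipsoids exactly; the numeric quadrature below is only an illustrative sketch of the underlying model:
```python
import torch

def render_ray(sigma: torch.Tensor, color: torch.Tensor, deltas: torch.Tensor):
    # sigma: (N,) densities, color: (N, 3) emitted radiance,
    # deltas: (N,) segment lengths along one ray.
    alpha = 1.0 - torch.exp(-sigma * deltas)           # absorption per segment
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)  # transmittance after each segment
    trans = torch.cat([torch.ones(1), trans[:-1]])     # transmittance BEFORE each segment
    weights = alpha * trans
    return (weights[:, None] * color).sum(dim=0)       # composited RGB, differentiable

rgb = render_ray(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.05))
```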
🔥 "Deep Gen-AI" Full Course 🔥
👉A fresh course from Stanford on the probabilistic foundations and algorithms for deep generative models, with an overview of the evolution of genAI in #computervision, language and more...
👉Review https://t.ly/ylBxq
👉Course https://lnkd.in/dMKH9gNe
👉Lectures https://lnkd.in/d_uwDvT6
🐏 EFM3D: 3D Ego-Foundation 🐏
👉#META presents EFM3D, the first benchmark for 3D object detection and surface regression on HQ-annotated egocentric data from Project Aria. Datasets & Code released💙
👉Review https://t.ly/cDJv6
👉Paper arxiv.org/pdf/2406.10224
👉Project www.projectaria.com/datasets/aeo/
👉Repo github.com/facebookresearch/efm3d
🥦Gaussian Splatting VTON🥦
👉GS-VTON is a novel image-prompted 3D-VTON which, by leveraging 3DGS as the 3D representation, enables the transfer of pre-trained knowledge from 2D VTON models to 3D while improving cross-view consistency. Code announced💙
👉Review https://t.ly/sTPbW
👉Paper arxiv.org/pdf/2410.05259
👉Project yukangcao.github.io/GS-VTON/
👉Repo github.com/yukangcao/GS-VTON
💡Diffusion Models Relighting💡
👉#Netflix unveils DifFRelight, a novel free-viewpoint facial relighting via diffusion model. Precise lighting control, high-fidelity relit facial images from flat-lit inputs.
👉Review https://t.ly/fliXU
👉Paper arxiv.org/pdf/2410.08188
👉Project www.eyelinestudios.com/research/diffrelight.html
🥎POKEFLEX: Soft Object Dataset🥎
👉PokeFlex from ETH is a dataset of deformable objects with 3D textured meshes, point clouds, and RGB & depth maps. Pretrained models & dataset announced💙
👉Review https://t.ly/GXggP
👉Paper arxiv.org/pdf/2410.07688
👉Project https://lnkd.in/duv-jS7a
👉Repo
🔥 DEPTH ANY VIDEO is out! 🔥
👉DAV is a novel foundation model for image/video depth estimation. The new SOTA for accuracy & consistency, up to 150 FPS!
👉Review https://t.ly/CjSz2
👉Paper arxiv.org/pdf/2410.10815
👉Project depthanyvideo.github.io/
👉Code github.com/Nightmare-n/DepthAnyVideo
🪞Robo-Emulation via Video Imitation🪞
👉OKAMI (UT & #Nvidia) is a novel method that generates a manipulation plan from a single RGB-D video and derives a policy for execution.
👉Review https://t.ly/_N29-
👉Paper arxiv.org/pdf/2410.11792
👉Project https://lnkd.in/d6bHF_-s
🔥 CoTracker3 by #META is out! 🔥
👉#Meta (+VGG Oxford) unveils CoTracker3, a new tracker that outperforms the previous SOTA by a large margin using only 0.1% of the training data 🤯🤯🤯 Quick-start sketch after the links.
👉Review https://t.ly/TcRIv
👉Paper arxiv.org/pdf/2410.11831
👉Project cotracker3.github.io/
👉Code github.com/facebookresearch/co-tracker
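👉Minimal quick-start via torch.hub. The entrypoint name below is assumed from the repo README, so verify it at github.com/facebookresearch/co-tracker before relying on it:
```python
import torch

# Hub entrypoint name assumed from the README; check the repo if it changed.
cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker3_offline")

video = torch.randn(1, 24, 3, 384, 512)  # (B, T, C, H, W) float video tensor
pred_tracks, pred_visibility = cotracker(video, grid_size=10)
print(pred_tracks.shape)  # (B, T, N, 2) trajectories for N grid points
```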
🦠 Neural Metamorphosis 🦠
👉NU Singapore unveils NeuMeta, which transforms neural nets by letting a single model adapt on the fly to different sizes, generating the right weights when needed (toy sketch after the links).
👉Review https://t.ly/DJab3
👉Paper arxiv.org/pdf/2410.11878
👉Project adamdad.github.io/neumeta
👉Code github.com/Adamdad/neumeta
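👉A toy sketch of the core idea: an implicit network over normalized weight coordinates emits a weight value per coordinate, so one model can be sampled at any requested layer width. Purely illustrative; the actual NeuMeta method is more involved:
```python
import torch
import torch.nn as nn

class WeightINR(nn.Module):
    """Implicit net: normalized (row, col) coordinate -> one weight value."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords).squeeze(-1)

def sample_linear(inr: WeightINR, out_dim: int, in_dim: int) -> nn.Linear:
    # Query the learned weight manifold on a grid matching the requested size.
    rows = torch.linspace(0, 1, out_dim)
    cols = torch.linspace(0, 1, in_dim)
    grid = torch.stack(torch.meshgrid(rows, cols, indexing="ij"), dim=-1)
    layer = nn.Linear(in_dim, out_dim, bias=False)
    with torch.no_grad():
        layer.weight.copy_(inr(grid.reshape(-1, 2)).reshape(out_dim, in_dim))
    return layer

inr = WeightINR()
small = sample_linear(inr, 32, 16)   # one implicit model...
large = sample_linear(inr, 128, 64)  # ...many sizes on demand
```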
☀️ GS + Depth = SOTA ☀️
👉DepthSplat, the new SOTA in depth estimation & novel view synthesis. The key feature is the cross-task interaction between Gaussian Splatting & depth estimation. Source Code to be released soon💙
👉Review https://t.ly/87HuH
👉Paper arxiv.org/abs/2410.13862
👉Project haofeixu.github.io/depthsplat/
👉Code github.com/cvg/depthsplat
🔥BitNet: code of 1-bit LLM released🔥
👉BitNet by #Microsoft, announced in late 2023, is a 1-bit Transformer architecture designed for LLMs: BitLinear is a drop-in replacement for the nn.Linear layer that trains 1-bit weights from scratch (hedged sketch after the links). Source Code just released 💙
👉Review https://t.ly/3G2LA
👉Paper arxiv.org/pdf/2310.11453
👉Code https://lnkd.in/duPADJVb
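👉A hedged, minimal BitLinear-style layer following the recipe described in the paper (sign-binarized weights, absmax 8-bit activation quantization, straight-through estimator). The released repo is the reference implementation; this is only a sketch:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Drop-in nn.Linear with 1-bit weights trained from scratch (sketch)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        w_bin = torch.sign(w - w.mean())   # binarize zero-centered weights
        w_bin = w + (w_bin - w).detach()   # straight-through estimator (STE)
        beta = w.abs().mean()              # per-tensor weight scale

        Qb = 128.0                         # 8-bit absmax activation quant
        gamma = x.abs().max().clamp(min=1e-5)
        x_q = (x * Qb / gamma).clamp(-Qb + 1e-3, Qb - 1e-3)
        x_q = x + (x_q - x).detach()       # STE for activations too

        # 1-bit matmul (emulated in float here), then undo the scales.
        out = F.linear(x_q, w_bin) * (beta * gamma / Qb)
        return out + self.bias if self.bias is not None else out

layer = BitLinear(512, 512)                # swap in for nn.Linear
y = layer(torch.randn(4, 512))
```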
🧿 Look Ma, no markers 🧿
👉#Microsoft unveils the first technique for marker-free, HQ reconstruction of the COMPLETE human body, including eyes and tongue, without requiring any calibration, manual intervention or custom hardware. Impressive results! Training repo & Dataset released💙
👉Review https://t.ly/5fN0g
👉Paper arxiv.org/pdf/2410.11520
👉Project microsoft.github.io/SynthMoCap/
👉Repo github.com/microsoft/SynthMoCap
🪁 PL2Map: efficient neural 2D-3D 🪁
👉PL2Map is a novel neural network tailored for efficient representation of complex point & line maps, providing a natural representation of 2D-3D correspondences.
👉Review https://t.ly/D-bVD
👉Paper arxiv.org/pdf/2402.18011
👉Project https://thpjp.github.io/pl2map
👉Code https://github.com/ais-lab/pl2map