Neural Networks | Нейронные сети
11.6K subscribers
801 photos
183 videos
170 files
9.45K links
Everything about machine learning

For all inquiries: @notxxx1

№ 4959169263
The #NeurIPS AI for Social Good workshop is live. Check out the live stream if you can't make it in person.

https://slideslive.com/38922118/joint-workshop-on-ai-for-social-good-1

🔗 Joint Workshop on AI for Social Good 1
The accelerating pace of intelligent systems research and real world deployment presents three clear challenges for producing "good" intelligent systems: (1) the research community lacks incentives...
🎥 Building IMU-based Gesture Recognition! (awesome project!)
👁 1 view · 556 sec.
An inertial measurement unit (IMU) is a device that can sense motion and orientation. If you combine IMUs with machine learning, you can detect gestures! For last Halloween, I built a magic wand that combines IMUs, machine learning, and DIY electronics to detect different gestures when waving the wand. In this talk, we’ll go over the steps I took to build the wand and how you can do it too! Hardware prototyping is becoming more accessible than ever for people without a traditional hardware engineering background…
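As a rough illustration of the idea described above (not the speaker's actual wand code), here is a minimal sketch of training a gesture classifier on fixed-length IMU windows. The sampling rate, window length, number of gestures, and the synthetic stand-in data are all assumptions:

```python
# Hypothetical sketch: classifying wand gestures from raw IMU windows.
# Assumes a 3-axis accelerometer + 3-axis gyroscope sampled at ~100 Hz,
# cut into 1-second windows; the data here is synthetic stand-in noise.
import numpy as np
import tensorflow as tf

NUM_GESTURES = 3   # e.g. three wand motions -- placeholder labels
WINDOW = 100       # samples per window (1 s at 100 Hz, an assumption)
CHANNELS = 6       # ax, ay, az, gx, gy, gz

# Stand-in data: replace with real recordings from the wand's IMU.
x = np.random.randn(600, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_GESTURES, size=600)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 5, activation="relu", input_shape=(WINDOW, CHANNELS)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)

# A model this small could later be converted with the TF Lite converter
# for deployment on a microcontroller inside the wand.
```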
🎥 Biological and Artificial Reinforcement Learning 1 | NeurIPS 2019
👁 1 view · 2294 sec.
Continue to support the channel: https://paypal.me/aipursuit
Subscribe ⇢ https://www.youtube.com/channel/UCe_QLqna7cFtTCfZ0a8pycg?sub_confirmation=1
This video is reposted for educational purposes.
🎥 How Machine Learning Drives the Deceptive World of Deepfakes
👁 1 view · 452 sec.
Deepfakes are spreading fast, and while some have playful intentions, others can cause serious harm. We stepped inside this deceptive new world to see what experts are doing to catch this altered content.
»Subscribe to Seeker! https://bit.ly/subscribeseeker
»Watch more Focal Point | https://bit.ly/2s0cf7w

Chances are you’ve seen a deepfake; Donald Trump, Barack Obama, and Mark Zuckerberg have all been targets of these computer-generated replications.

A deepfake is a video or an audio clip where deep learning…
Colleagues, sorry to bother you, but I've honestly been searching for a week for the electrical schematic for OpenBCI. Could you please tell me where to find it?
Deep Learning for Computer Vision with Python, by Dr. Adrian Rosebrock

Our Telegram channel: tglink.me/ai_machinelearning_big_data

📝 Deep_Learning_for_Computer_Vision_with_Python_Dr_Adrian_Rosebrock_2017_PDF_ENG.pdf - 💾 27,660,308 bytes
Few-shot Video-to-Video Synthesis

[Paper] 👉 https://arxiv.org/abs/1910.12713

[Video] 👉 https://www.youtube.com/watch?v=8AZBuyEuDqc&feature=youtu.be

[Code] 👉 https://github.com/NVlabs/few-shot-vid2vid

🔗 Few-shot Video-to-Video Synthesis
Video-to-video synthesis (vid2vid) aims at converting an input semantic video, such as videos of human poses or segmentation masks, to an output photorealistic video. While the state-of-the-art of vid2vid has advanced significantly, existing approaches share two major limitations. First, they are data-hungry. Numerous images of a target human subject or a scene are required for training. Second, a learned model has limited generalization capability. A pose-to-human vid2vid model can only synthesize poses of the single person in the training set. It does not generalize to other humans that are not in the training set. To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging few example images of the target at test time. Our model achieves this few-shot generalization capability via a novel network weight generation module utilizing an attention mechanism. We conduct extensive experimental validations with comparisons…


🎥 Few-Shot Video-to-Video Synthesis (NeurIPS 2019)
👁 1 view · 128 sec.
Few-shot photorealistic video-to-video translation. It can be used for generating human motions from poses, synthesizing people talking from edge maps, or turning semantic label maps into photo-realistic videos. For more details, please visit https://nvlabs.github.io/few-shot-vid2vid/.
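The core mechanism in the abstract above is a network weight generation module driven by attention over the few example images. As a rough, hypothetical sketch of that idea (not the authors' architecture; the real implementation is in the NVlabs repo linked above), the PyTorch module below attends over K example images per pixel and emits modulation parameters that could condition a generator. The layer sizes, the gamma/beta modulation style, and all names are assumptions:

```python
# Hypothetical sketch: generate per-pixel modulation weights for a synthesis
# network from a few example images via attention. Not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveWeightGenerator(nn.Module):
    """Attend over K example images and emit modulation parameters."""
    def __init__(self, feat_dim=64, out_channels=32):
        super().__init__()
        self.embed_example = nn.Conv2d(3, feat_dim, 3, padding=1)  # keys/values from examples
        self.embed_query = nn.Conv2d(3, feat_dim, 3, padding=1)    # query from current semantic/pose frame
        self.to_gamma = nn.Conv2d(feat_dim, out_channels, 3, padding=1)
        self.to_beta = nn.Conv2d(feat_dim, out_channels, 3, padding=1)

    def forward(self, examples, query):
        # examples: (B, K, 3, H, W) few example images of the target
        # query:    (B, 3, H, W)    current pose / segmentation frame
        B, K, C, H, W = examples.shape
        kv = self.embed_example(examples.reshape(B * K, C, H, W)).reshape(B, K, -1, H, W)
        q = self.embed_query(query)                                # (B, D, H, W)
        # Per-pixel similarity between the query and each example.
        scores = (q.unsqueeze(1) * kv).sum(dim=2)                  # (B, K, H, W)
        attn = F.softmax(scores, dim=1).unsqueeze(2)               # (B, K, 1, H, W)
        mixed = (attn * kv).sum(dim=1)                             # (B, D, H, W)
        # The attention-mixed features parameterize a modulation of the generator.
        return self.to_gamma(mixed), self.to_beta(mixed)

# Tiny usage example with random tensors standing in for real frames:
gen = AttentiveWeightGenerator()
gamma, beta = gen(torch.randn(2, 3, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(gamma.shape, beta.shape)  # torch.Size([2, 32, 64, 64]) each
```

With real data, `examples` would hold the few target images supplied at test time and `query` the current pose or label-map frame that the generator is asked to render.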