Neural Network Compression Framework (NNCF)
This module contains a PyTorch-based framework and samples for neural network compression. The framework is organized as a Python module that can be built and used standalone. The framework architecture is unified to make it easy to add different compression methods. The samples demonstrate the usage of compression algorithms for three different use cases on public models and datasets: Image Classification, Object Detection, and Semantic Segmentation.
https://github.com/opencv/openvino_training_extensions/tree/develop/pytorch_toolkit/nncf
🔗 opencv/openvino_training_extensions
Trainable models and NN optimization tools.
Double Deep Q Learning Is Simple with Keras
https://www.youtube.com/watch?v=UCgsv6tMReY
🎥 Double Deep Q Learning Is Simple with Keras
👁 1 view ⏳ 2,820 sec.
In this tutorial you are going to code a double deep Q learning agent in Keras and beat the lunar lander environment. Double Q learning resolves the overestimation bias inherent in Q learning by decoupling action selection from action-value estimation.
van Hasselt, Guez, and Silver showed in 2015 that this gets significantly better results than vanilla deep Q learning in the Atari environments.
Simple Deep Q Network w/Pytorch: https://youtu.be/UlJzzLYgYoE
Reinforcement Learning Crash Course: https://youtu.be/sOiNMW8k4T0
Policy Gradi…
YouTube
Double Deep Q Learning Is Simple with Keras
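For context, here is a minimal sketch of the double DQN target computation the video builds toward, written with tf.keras and illustrative sizes for LunarLander (8 state dimensions, 4 actions); the tutorial's own architecture, replay buffer, and hyperparameters are not reproduced here.

```python
import numpy as np
import tensorflow as tf

def make_q_network(n_states=8, n_actions=4):
    # Small fully connected Q-network; sizes are illustrative, not the video's.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_states,)),
        tf.keras.layers.Dense(n_actions),
    ])

q_online, q_target = make_q_network(), make_q_network()

def double_dqn_targets(rewards, next_states, dones, gamma=0.99):
    # Action selection uses the online network...
    best_actions = np.argmax(q_online.predict(next_states, verbose=0), axis=1)
    # ...but the chosen action is evaluated with the target network, which is
    # the decoupling that removes most of vanilla DQN's overestimation bias.
    next_q = q_target.predict(next_states, verbose=0)
    evaluated = next_q[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * evaluated * (1.0 - dones)

# usage with a dummy batch of transitions
targets = double_dqn_targets(np.zeros(32), np.random.randn(32, 8).astype("float32"),
                             np.zeros(32))
```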
How To Use Data Science For Social Impact
Transparency • Corruption • Collaboration • Trust
https://towardsdatascience.com/how-to-use-data-science-for-social-impact-e9b272b1a4b3?source=collection_home---4------2-----------------------
🔗 How To Use Data Science For Social Impact
Transparency • Corruption • Collaboration • Trust
Federated Learning: A New AI Business Model
Federated learning is not only a promising technology but also a possible brand new AI business model. Indeed, as a consultant, I have…
https://towardsdatascience.com/federated-learning-a-new-ai-business-model-ec6b4141b1bf?source=collection_home---4------0-----------------------
🔗 Federated Learning: A New AI Business Model
Federated learning is not only a promising technology but also a possible brand new AI business model. Indeed, as a consultant, I have…
Bayesian Modeling Airlines Customer Service Twitter Response Time
Student’s t-distribution, Poisson distribution, Negative Binomial distribution, Hierarchical modeling and Regression
https://towardsdatascience.com/bayesian-modeling-airlines-customer-service-twitter-response-time-74af893f02c0?source=collection_home---4------0-----------------------
🔗 Bayesian Modeling Airlines Customer Service Twitter Response Time
Student’s t-distribution, Poisson distribution, Negative Binomial distribution, Hierarchical modeling and Regression
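As a rough illustration of the modeling ingredients listed above, here is a minimal hierarchical Negative Binomial sketch in PyMC3 with made-up response times and airline indices; the article's actual data, priors, and parameterization will differ.

```python
import numpy as np
import pymc3 as pm

# Hypothetical data: response time in minutes and an airline index per tweet.
response_minutes = np.array([5, 12, 3, 45, 7, 60, 2, 18])
airline_idx = np.array([0, 0, 1, 1, 2, 2, 0, 1])
n_airlines = 3

with pm.Model() as model:
    # Hierarchical structure: each airline's mean response time is drawn
    # around a shared global scale.
    mu_global = pm.Exponential("mu_global", 1.0 / 10.0)
    mu_airline = pm.Exponential("mu_airline", 1.0 / mu_global, shape=n_airlines)
    alpha = pm.Gamma("alpha", 2.0, 0.5)  # Negative Binomial over-dispersion
    pm.NegativeBinomial("obs", mu=mu_airline[airline_idx], alpha=alpha,
                        observed=response_minutes)
    trace = pm.sample(1000, tune=1000)
```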
YOLO, YOLOv2 and YOLOv3: All You want to know
🔗 YOLO, YOLOv2 and YOLOv3: All You want to know
Over the last few years, object detection has become one of the hottest areas of computer vision, and many researchers are racing to get the…
Why it's so hard to choose a movie to watch (and neural networks won't solve it)
Does this sound familiar: you decide to spend an evening at home and watch some movie in good company, but while trying to settle on which one, you spend so much time choosing that there is no time left for the movie itself, or the desire is gone, or you do finally start watching something, but the mood is already off?
Most people blame this problem on not knowing the world of cinema well enough and try to solve it with various curated lists and ratings, or by asking for advice; businesses, in turn, try to do the same thing by offering users lists and ratings or by building recommender systems. Nevertheless, the problem refuses to go away, and the rise of recommender systems has only repainted it in different colors: instead of asking friends and strangers on the internet for advice, users now endlessly scroll through rows of bright posters on Netflix (the problem is global, after all) or some local service like ivi. Businesses, meanwhile, for lack of better ideas, keep trying to push a cube through a keyhole, hoping they will eventually build a recommender system that manages to guess what a user wants when the user does not know it themselves. The developers' hopes for collective intelligence have not panned out either: calling other users in to help did not work, since review catalogs and Q&A services only dull the pain without removing it, so now all bets are on artificial intelligence: surely AI will crack this nut!
It won't. First of all, neural networks are not AI.
https://habr.com/ru/post/462615/
🔗 Why it's so hard to choose a movie to watch (and neural networks won't solve it)
Does this sound familiar: you decide to spend an evening at home and watch some movie in good company, but while trying to settle on which one, you spend so much time choosing...
Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
A free book covering the mathematics of deep learning
https://www.deeplearningbook.org/
🔗 Deep Learning
Writing tests for the Albumentations library
How to write unit tests in Data Science pipelines using pytest without pain.
https://towardsdatascience.com/writing-test-for-the-image-augmentation-albumentation-library-a73d7bc1caa7?source=topic_page---------------------------20
🔗 Writing tests for the Albumentations library
How to write unit tests in Data Science pipelines using pytest without pain.
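In the spirit of the article, a minimal sketch of such pytest tests, assuming the standard albumentations call convention (transform(image=...)["image"]); the article's own test suite covers far more transforms and properties.

```python
import numpy as np
import pytest
import albumentations as A

@pytest.mark.parametrize("transform", [A.HorizontalFlip(p=1), A.VerticalFlip(p=1)])
def test_flip_preserves_shape_and_dtype(transform):
    # A deterministic flip should never change the image shape or dtype.
    image = np.random.randint(0, 256, size=(100, 80, 3), dtype=np.uint8)
    result = transform(image=image)["image"]
    assert result.shape == image.shape
    assert result.dtype == image.dtype

def test_double_horizontal_flip_is_identity():
    # Flipping twice with p=1 must return the original pixels exactly.
    image = np.random.randint(0, 256, size=(100, 80, 3), dtype=np.uint8)
    flip = A.HorizontalFlip(p=1)
    flipped_twice = flip(image=flip(image=image)["image"])["image"]
    assert np.array_equal(flipped_twice, image)
```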
George Hotz on hacking and crime: "You only have to mess up once" | AI Podcast Clips
🔗 George Hotz on hacking and crime: "You only have to mess up once" | AI Podcast Clips
This is a clip from a conversation with George Hotz on the Artificial Intelligence podcast. You can watch the full conversation here: https://bit.ly/2YLIPom or watch other AI clips here: https://bit.ly/2JYkbfZ George Hotz is the founder of Comma.ai, a machine learning based vehicle automation company. He is an outspoken personality in the field of AI and technology in general. He first gained recognition for being the first person to carrier-unlock an iPhone, and since then has done quite a few interesting t
How to build a working navigation device using a microcontroller and a set of inexpensive sensors.
🔗 A micro-navigator on the STM32F100
How to build a working navigation device using a microcontroller and a set of inexpensive sensors. A fair question is why you'd even need one, when…
Weight Agnostic Neural Networks
https://weightagnostic.github.io/
git: https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease
https://arxiv.org/pdf/1906.04358.pdf
🔗 Weight Agnostic Neural Networks
Networks that can already (sort of) perform tasks with random weights.
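To make the "random weights" idea concrete, here is a toy numpy sketch of the weight-agnostic evaluation trick: a fixed topology is scored by averaging its performance over several values of a single shared weight instead of training per-connection weights. The topology, task, and scoring below are made up; the paper evolves architectures with a NEAT-style search.

```python
import numpy as np

def shared_weight_network(x, w):
    # A tiny fixed topology (2 -> 3 -> 1) in which every connection carries
    # the same shared weight value w.
    h = np.tanh(w * (x @ np.ones((2, 3))))
    return np.tanh(w * (h @ np.ones((3, 1))))

def score_topology(xs, targets, weight_samples=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
    # Weight-agnostic scoring: average performance over several shared weight
    # values; here "performance" is negative mean squared error on a toy task.
    errors = [np.mean((shared_weight_network(xs, w) - targets) ** 2)
              for w in weight_samples]
    return -np.mean(errors)

xs = np.random.randn(32, 2)
targets = np.tanh(xs.sum(axis=1, keepdims=True))
print("topology score:", score_topology(xs, targets))
```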
Data Science and Machine Learning for Non Programmers | Data Science for Beginners | Edureka
https://www.youtube.com/watch?v=jgPChUZP57I
🎥 Data Science and Machine Learning for Non Programmers | Data Science for Beginners | Edureka
👁 1 view ⏳ 2,301 sec.
** Machine Learning Engineer Masters Program: https://www.edureka.co/masters-program/machine-learning-engineer-training **
This Edureka video on "Data Science and Machine Learning for Non-programmers" is specifically dedicated to non-IT professionals who are trying to make a career in Data Science and Machine Learning without the experience of working on programming languages. Here’s a list of topics that are covered in this Data Science for Beginners session:
Introduction to Data Science and Machine Lea…
YouTube
Data Science and Machine Learning for Non Programmers | Data Science for Beginners | Edureka
🔥Edureka Data Science Masters Program: https://www.edureka.co/masters-program/data-scientist-certification
This Edureka video on "Data Science and Machine Learning for Non-programmers" is a part of the Data Science for Beginners Tutorial Series which is specifically…
Most People Screw Up Multiple Percent Changes. Here’s How to Get Them Right.
Solving a Common Math Problem with Everyday Applications
🔗 Most People Screw Up Multiple Percent Changes. Here’s How to Get Them Right.
Solving a Common Math Problem with Everyday Applications
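The core point in one short example with made-up numbers: successive percent changes compose multiplicatively, so simply adding them misstates the true combined change.

```python
changes = [0.10, -0.10, 0.25]   # +10%, then -10%, then +25%

naive_total = sum(changes)      # 0.25 -> "25%", which is wrong

true_factor = 1.0
for c in changes:
    true_factor *= (1.0 + c)    # 1.10 * 0.90 * 1.25 = 1.2375
true_total = true_factor - 1.0  # 0.2375 -> 23.75%, the actual combined change

print(f"naive sum: {naive_total:.2%}, actual combined change: {true_total:.2%}")
```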
SqueezeNAS: Fast neural architecture search for faster semantic segmentation
Our Telegram channel: tglink.me/ai_machinelearning_big_data
Authors: Albert Shaw, Daniel Hunter, Forrest Iandola, Sammy Sidhu
Abstract: For real time applications utilizing Deep Neural Networks (DNNs), it is critical that the models achieve high-accuracy on the target task and low-latency inference on the target computing platform. While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort
https://arxiv.org/abs/1908.01748
🔗 SqueezeNAS: Fast neural architecture search for faster semantic segmentation
For real time applications utilizing Deep Neural Networks (DNNs), it is critical that the models achieve high-accuracy on the target task and low-latency inference on the target computing platform. While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort to use NAS to optimize DNN architectures for other vision tasks. In this work, we present what we believe to be the first proxyless hardware-aware search targeted for dense semantic segmentation. With this approach, we advance the state-of-the-art accuracy for latency-optimized networks on the Cityscapes semantic segmentation dataset. Our latency-optimized small SqueezeNAS network achieves 68.02% validation class mIOU with less than 35 ms inference times on the NVIDIA AGX Xavier. Our latency-optimized large SqueezeNAS network achieves 73.62% class mIOU with less than 100 ms inference times. We demonstrate that significant performance gains are possible by
How to Implement CycleGAN Models From Scratch With Keras
https://machinelearningmastery.com/how-to-develop-cyclegan-models-from-scratch-with-keras/
🔗 How to Implement CycleGAN Models From Scratch With Keras
The Cycle Generative Adversarial Network, or CycleGAN for short, is a generator model for converting images from one domain to another domain. For example, the model can be used to translate images of horses to images of zebras, or photographs of city landscapes at night to city landscapes during the day. The benefit of the …
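As a small illustration of the cycle-consistency idea at the heart of CycleGAN, here is a sketch using tf.keras rather than standalone Keras; the single-layer "generators" are placeholders, not the tutorial's encoder-decoder models.

```python
import tensorflow as tf

def cycle_consistency_loss(real_images, reconstructed_images, weight=10.0):
    # CycleGAN's cycle loss: after mapping A -> B -> A (or B -> A -> B), the
    # reconstruction should match the original image pixel-wise (L1 distance).
    return weight * tf.reduce_mean(tf.abs(real_images - reconstructed_images))

# Placeholder generators standing in for the real encoder-decoder networks.
g_AB = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding="same", activation="tanh")])
g_BA = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding="same", activation="tanh")])

real_A = tf.random.uniform((1, 64, 64, 3), minval=-1.0, maxval=1.0)
reconstructed_A = g_BA(g_AB(real_A))          # A -> B -> A
print(float(cycle_consistency_loss(real_A, reconstructed_A)))
```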
The Inspection Paradox is Everywhere
The inspection paradox is a statistical illusion you’ve probably never heard of. But once you learn about it, you see it everywhere.
https://towardsdatascience.com/the-inspection-paradox-is-everywhere-2ef1c2e9d709?source=collection_home---4------0-----------------------
🔗 The Inspection Paradox is Everywhere
The inspection paradox is a statistical illusion you’ve probably never heard of. But once you learn about it, you see it everywhere.
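A classic instance from the family of examples the article walks through, with made-up numbers: the average class size looks very different depending on whether you sample classes or students.

```python
import numpy as np

# Hypothetical class sizes: many small seminars, a couple of huge lectures.
class_sizes = np.array([10] * 18 + [150] * 2)

# Average size per class (the registrar's view): one sample per class.
per_class = class_sizes.mean()                               # 24.0

# Average size experienced per student: sampling by student oversamples
# large classes in proportion to their size, which is the inspection paradox.
per_student = (class_sizes ** 2).sum() / class_sizes.sum()   # 97.5

print(f"per-class average: {per_class:.1f}, per-student average: {per_student:.1f}")
```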
https://github.com/facebookresearch/FixRes
https://arxiv.org/abs/1906.06423
🔗 facebookresearch/FixRes
This repository reproduces the results of the paper: "Fixing the train-test resolution discrepancy" https://arxiv.org/abs/1906.06423 - facebookresearch/FixRes
TuneNet: One-Shot Residual Tuning for System Identification and Sim-to-Real Robot Task Transfer
https://arxiv.org/abs/1907.11200
🔗 TuneNet: One-Shot Residual Tuning for System Identification and Sim-to-Real Robot Task Transfer
As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an $\textit{iterative residual tuning}$ technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation. The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting
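A toy sketch of the iterative residual tuning loop sketched in the abstract: a stand-in estimator predicts a parameter residual from a pair of observations, and the simulator's parameter is nudged by it a few times. Everything here (the 1-D simulator, the hand-written residual estimator) is hypothetical; the paper learns the estimator from auto-generated simulation data.

```python
import numpy as np

def simulate(param):
    # Hypothetical 1-D simulator: the observation depends on one unknown parameter.
    return np.array([param, param ** 2])

def residual_estimator(obs_target, obs_sim):
    # Stand-in for the learned network that predicts the parameter difference
    # between the target system and the current simulation.
    return 0.5 * (obs_target[0] - obs_sim[0])

target_param = 1.7
obs_target = simulate(target_param)

param = 0.2                      # initial guess for the simulator parameter
for _ in range(10):              # iterative residual tuning
    param += residual_estimator(obs_target, simulate(param))
print(round(param, 3))           # converges toward 1.7
```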
Online Machine Learning with Tensorflow.js
An end to end guide on how to create, train and test a Machine Learning model in your browser using Tensorflow.js.
https://towardsdatascience.com/online-machine-learning-with-tensorflow-js-2ae232352901?source=collection_home---4------1-----------------------
🔗 Online Machine Learning with Tensorflow.js
An end to end guide on how to create, train and test a Machine Learning model in your browser using Tensorflow.js.
Why Real Neurons Learn Faster
A closer look into differences between natural nervous systems & artificial #NeuralNetworks
https://www.codeproject.com/Articles/1275031/Why-Real-Neurons-Learn-Faster
🔗 Why Real Neurons Learn Faster
A closer look into differences between natural nervous systems and artificial neural networks