Most People Screw Up Multiple Percent Changes. Here’s How to Get Them Right.
Solving a Common Math Problem with Everyday Applications
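The usual pitfall, for the record: successive percent changes compound multiplicatively, they do not add. A quick illustrative sketch of the rule (my own example, not code from the article):

```python
# Successive percent changes multiply; they do not add.
changes = [0.10, -0.10]            # +10% followed by -10%

naive_total = sum(changes)         # the common shortcut: 0.00 -> "no change" (wrong)

factor = 1.0
for c in changes:
    factor *= 1.0 + c              # 1.10 * 0.90 = 0.99
true_total = factor - 1.0          # -1% overall, not 0%

print(f"naive: {naive_total:+.2%}  actual: {true_total:+.2%}")
```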
SqueezeNAS: Fast neural architecture search for faster semantic segmentation
Our Telegram channel - tglink.me/ai_machinelearning_big_data
Authors: Albert Shaw, Daniel Hunter, Forrest Iandola, Sammy Sidhu
Abstract: For real time applications utilizing Deep Neural Networks (DNNs), it is critical that the models achieve high-accuracy on the target task and low-latency inference on the target computing platform. While Neural Architecture Search (NAS) has been effectively used to develop low-latency networks for image classification, there has been relatively little effort to use NAS to optimize DNN architectures for other vision tasks. In this work, we present what we believe to be the first proxyless hardware-aware search targeted for dense semantic segmentation. With this approach, we advance the state-of-the-art accuracy for latency-optimized networks on the Cityscapes semantic segmentation dataset. Our latency-optimized small SqueezeNAS network achieves 68.02% validation class mIOU with less than 35 ms inference times on the NVIDIA AGX Xavier. Our latency-optimized large SqueezeNAS network achieves 73.62% class mIOU with less than 100 ms inference times. We demonstrate that significant performance gains are possible by…
https://arxiv.org/abs/1908.01748
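The abstract doesn't spell out the search mechanics, but hardware-aware NAS of this kind typically adds a differentiable latency penalty, driven by per-op latencies measured on the target device, to the task loss. A rough sketch of that generic idea (illustrative only, not SqueezeNAS's actual formulation; the names are made up):

```python
import torch

def hardware_aware_objective(task_loss, arch_weights, latency_table, alpha=0.1):
    """Task loss plus an expected-latency penalty over candidate ops.

    arch_weights: per-layer softmax over candidate ops (list of (num_ops,) tensors)
    latency_table: measured per-op latencies on the target device (same shapes)
    """
    expected_latency = sum(torch.dot(w, t) for w, t in zip(arch_weights, latency_table))
    return task_loss + alpha * expected_latency
```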
How to Implement CycleGAN Models From Scratch With Keras
https://machinelearningmastery.com/how-to-develop-cyclegan-models-from-scratch-with-keras/
The Cycle Generative Adversarial Network, or CycleGAN for short, is a generator model for converting images from one domain to another domain. For example, the model can be used to translate images of horses to images of zebras, or photographs of city landscapes at night to city landscapes during the day. The benefit of the …
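The tutorial builds the full models in Keras; as a taste of the core idea, here is a minimal sketch (not the tutorial's code) of the cycle-consistency loss that couples the two generators, assuming generator models `g_ab` and `g_ba` already exist:

```python
import tensorflow as tf

def cycle_consistency_loss(real_a, real_b, g_ab, g_ba, lam=10.0):
    """L1 penalty for translating an image to the other domain and back."""
    recon_a = g_ba(g_ab(real_a))    # A -> B -> A
    recon_b = g_ab(g_ba(real_b))    # B -> A -> B
    loss_a = tf.reduce_mean(tf.abs(real_a - recon_a))
    loss_b = tf.reduce_mean(tf.abs(real_b - recon_b))
    return lam * (loss_a + loss_b)
```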
The Inspection Paradox is Everywhere
The inspection paradox is a statistical illusion you’ve probably never heard of. But once you learn about it, you see it everywhere.
https://towardsdatascience.com/the-inspection-paradox-is-everywhere-2ef1c2e9d709?source=collection_home---4------0-----------------------
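A standard instance (and probably the one most readers meet first) is class size: the average class size computed over classes differs from the average experienced by students, because big classes contain more students to ask. A tiny illustration, with made-up numbers:

```python
# Hypothetical class sizes offered by one department.
class_sizes = [10, 10, 20, 150]

# Average over classes (the registrar's view).
per_class_avg = sum(class_sizes) / len(class_sizes)            # 47.5

# Average over students: each student reports the size of their own class,
# so a class of 150 gets counted 150 times.
students = [size for size in class_sizes for _ in range(size)]
per_student_avg = sum(students) / len(students)                # ~121.6

print(per_class_avg, per_student_avg)
```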
https://github.com/facebookresearch/FixRes
https://arxiv.org/abs/1906.06423
🔗 facebookresearch/FixRes
This repository reproduces the results of the paper: "Fixing the train-test resolution discrepancy" https://arxiv.org/abs/1906.06423 - facebookresearch/FixRes
TuneNet: One-Shot Residual Tuning for System Identification and Sim-to-Real Robot Task Transfer
https://arxiv.org/abs/1907.11200
As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an $\textit{iterative residual tuning}$ technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation. The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting
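Going only by the abstract, the inference-time loop might look roughly like the sketch below; `tune_net` (a trained network that predicts the parameter residual from a pair of observations) and `simulate` are hypothetical placeholders, not the authors' API:

```python
import numpy as np

def iterative_residual_tuning(tune_net, simulate, obs_target, theta, n_iters=10):
    """Nudge simulator parameters toward the target system, one residual at a time.

    Assumes tune_net(obs_target, obs_sim) estimates (theta_target - theta_sim)
    and simulate(theta) returns an observation from the simulator.
    """
    theta = np.asarray(theta, dtype=float)
    for _ in range(n_iters):
        obs_sim = simulate(theta)
        delta = tune_net(obs_target, obs_sim)   # predicted parameter residual
        theta = theta + delta                   # apply the residual update
    return theta
```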
Online Machine Learning with Tensorflow.js
An end-to-end guide on how to create, train and test a Machine Learning model in your browser using Tensorflow.js.
https://towardsdatascience.com/online-machine-learning-with-tensorflow-js-2ae232352901?source=collection_home---4------1-----------------------
Why Real Neurons Learn Faster
A closer look into the differences between natural nervous systems and artificial neural networks
https://www.codeproject.com/Articles/1275031/Why-Real-Neurons-Learn-Faster
Reimplementation of the HDRNet model ("Deep Bilateral Learning for Real-Time Image Enhancement") in PyTorch
https://github.com/creotiv/hdrnet-pytorch
Unofficial PyTorch implementation of 'Deep Bilateral Learning for Real-Time Image Enhancement', SIGGRAPH 2017 https://groups.csail.mit.edu/graphics/hdrnet/ - creotiv/hdrnet-pytorch
BlurNet: Defense by Filtering the Feature Maps
Authors: Ravi Raju, Mikko Lipasti
Abstract: Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary by obtaining access to the model parameters, such as gradient information, to alter the input or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations ($RP_2$), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the $RP_2$ attack. First, we motivate the defense with a frequency analysis of the first layer feature maps of the network on the LISA dataset by demonstrating high frequency noise…
https://arxiv.org/abs/1908.02256
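The defense boils down to low-pass filtering early feature maps so the high-frequency adversarial perturbation is suppressed. A minimal PyTorch sketch of that idea (a simple depthwise box blur, not necessarily the exact filter the paper uses):

```python
import torch
import torch.nn.functional as F

def lowpass_feature_maps(feats: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Apply a depthwise k x k box blur to an (N, C, H, W) feature tensor."""
    channels = feats.shape[1]
    kernel = torch.full((channels, 1, k, k), 1.0 / (k * k),
                        dtype=feats.dtype, device=feats.device)
    return F.conv2d(feats, kernel, padding=k // 2, groups=channels)
```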
10 Speed Learning Techniques
I'm going to show you a day in my life in this episode. These are the daily habits that I practice to optimize my ability to learn and thus serve you better. I consider my body an input/output machine, so in order to optimize my output (educational content), I've got to optimize the input (my physical/mental health). My job is to educate the public on how relatively complex technologies work, and this requires me to learn a lot really fast. I've learned that large gains can be made in my ability to learn ju
Deep Learning & Reinforcement Learning Summer School 2019 Recap
👁 1 view ⏳ 3909 sec.
Deep Learning and Reinforcement Learning Summer School 2019
A recap by Lucas Souza, Numenta Research Engineer.
Numenta Research Meeting - Aug 7 2019
Discuss at https://discourse.numenta.org/t/deep-learning-reinforcement-learning-summer-school-2019-recap/6434/2
Navigating intelligent automation
The hyping and over-hyping of new technologies is certainly not a new phenomenon, but the rate of rise and fall has been accelerated to a…
https://medium.com/luminovo/navigating-intelligent-automation-c0d0b2fb3e67?source=topic_page---------0------------------1
Must-read papers on GNN
https://github.com/thunlp/GNNPapers
Must-read papers on graph neural networks (GNN). Contribute to thunlp/GNNPapers development by creating an account on GitHub.
Visual Product Search for Smart Retail Checkout
Doing cool things with data!
https://towardsdatascience.com/visual-product-search-for-smart-retail-checkout-eb7e1f34a351?source=collection_home---4------1-----------------------
Experimentation in Data Science
When AB testing doesn’t cut it
https://towardsdatascience.com/experimentation-in-data-science-90521e74ee4c?source=collection_home---4------0-----------------------
The Main Highlights From CVPR 2019, Assaf Mushinsky, Chief Scientist and Co-founder.
👁 1 view ⏳ 1781 sec.
The talk covers the following topics:
* 2D & 3D Object Detection
* Instance and Panoptic Segmentation
* Efficient Deep Learning
Top 10 Books to Learn Machine Learning
Here is the list of Top 10 Books
Book #1: Incognito: The Secret Lives of the Brain by David Eagleman https://fatimekerimli.files.wordpress...
Book #2 - How Smart Machines Think by Sean Gerrish (sign up to Scribd for free, download the book, then cancel your trial so it's free) https://www.scribd.com/document/40421...
Book #3 - The Hundred-Page Machine Learning Book by Andriy Burkov https://github.com/ZakiaSalod/The-Hun...
Book #4 - Python Machine Learning, 2nd Edition by Sebastian Raschka https://github.com/rasbt/python-machi...
Book #5 - Grokking Deep Learning by Andrew Trask https://github.com/ontiyonke/Free-Dee...
Book #6 - Probabilistic Programming and Bayesian Methods for Hackers by Cameron Davidson-Pilon https://github.com/CamDavidsonPilon/P...
Book #7 - Doing Data Science: Straight Talk from the Frontline by Rachel Schutt https://github.com/SayantanMitra87/Da...
Book #8 - Reinforcement Learning by Sutton and Barto https://incompleteideas.net/book/bookd...
Book #9 - The Book of Why by Judea Pearl https://www.academia.edu/36682718/_Ju...
Book #10 - Quantum Machine Learning by Peter Wittek https://doc.lagout.org/Others/Data%20...
Data Science
✅ How data analyst, data engineer and data scientist differ
✅ A career in data science: typical interview mistakes
✅ Panel discussion "Data science trends"
✅ From research to production: TDD, CRISP-DM, version control
✅ How machine learning rolls into production at YouDo
✅ Crafting artifacts: on reproducibility and dependency tracking
✅ Kaggle approaches for CV in production: deploy or rip out
✅ Applying machine learning in insurance
✅ Deep learning in recommender systems
✅ Practical RL: carrots and sticks
https://vk.com/video-101965347_456274202?list=83f4c0a18a25a67533
🎥 074. How data analyst, data engineer and data scientist differ – Alexey Natekin
👁 1 view ⏳ 1205 sec.
- How do you get into the data science community?
- On the differences between data scientist, data analyst and data engineer: who does what?
- What are the differences between …
🎥 075. A career in data science: typical interview mistakes – Valery Babushkin
👁 3 views ⏳ 1103 sec.
- How do you find a job in Data Science if you don't have any work experience yet?
- Is it worth spending time on Kaggle?
- What path should a data scien…
🎥 076. Panel discussion "Data science trends"
👁 1 view ⏳ 3548 sec.
- Who sets the trends in data science, and how?
- Which applied machine learning problems are the most relevant right now? Which do not…
🎥 077. From research to production: TDD, CRISP-DM, version control – Arseniy Anisimovich
👁 1 view ⏳ 958 sec.
- How do you organize effective collaboration between business, developers and DS?
- How do you version data, and is it even possible?
- Is there test-driv…
🎥 078. How machine learning rolls into production at YouDo – Adam Eldarov
👁 1 view ⏳ 1471 sec.
- How do you deploy, scale and manage the lifecycle of ML models?
- How do you set up a process for fine-tuning and retraining a model?
- How do you build scal…
🎥 079. Crafting artifacts: on reproducibility and dependency tracking – Mikhail Trofimov
👁 1 view ⏳ 874 sec.
- How do you organize a methodology for data experiments?
- Why is reproducibility of experiments and models needed?
- How do you achieve it?
* October 21 …
🎥 080. Kaggle approaches for CV in production: deploy or rip out – Arseniy Kravchenko vs Artur Kuzin
👁 1 view ⏳ 1494 sec.
The discussion examines the strengths and weaknesses of Kagglers from the standpoint of transferring their skills to production. A comparison will also be made…
🎥 081. Applying machine learning in insurance – Frank Shikhaliev
👁 1 view ⏳ 1291 sec.
The insurance industry has always been fairly conservative, and in Russia's financial sector banks have outpaced insurance in development by a decad…
🎥 082. Deep learning in recommender systems – Andrey Zimovnov
👁 1 view ⏳ 1757 sec.
- Deep learning in recommender systems
- Collaborative Filtering (CF) in a large recommender system
* On October 21, 2018, at the Moscow off…
🎥 083. Practical RL: carrots and sticks – Sergey Kolesnikov
👁 1 view ⏳ 1421 sec.
- How do you start learning RL?
- Is there RL without DL?
- RL competitions: is it useful to participate?
- DRL practice in production: are there any successful cases?
- …
🎥 All about Data Science / Big data and augmented reality / Interview with a Data Scientist
👁 1 view ⏳ 7740 sec.
In today's episode my guest is Vyacheslav Arkhipov, a Data Scientist at Banuba.
Slava gave a full tour of the world of data science and data analysis. We talked about neural networks, genetic algorithms, data sets, big data, machine learning, deep learning, stock trading, augmented reality, and much more.
A powerful technical interview with a mathematician!
So brew yourself some tea and enjoy watching! 😎
Slava's channel: https://bit.ly/2YUhJiM
🎥 Nikita Krichko. A methodology for using machine learning in load testing. QA Fest 2018
👁 1 view ⏳ 2325 sec.
A talk from the QA Fest conference in Kyiv, Ukraine.
Presentation: https://bit.ly/2xBtJ9Z
Fb: https://www.facebook.com/QAFest/
Website: https://qafest.com/
These days no one is surprised by high-load systems, and few people in our industry are surprised by a dedicated person who does load testing. Most people think they can automate everything and the tests will run automatically. Few know, however, that the lion's share of the time goes into analyzing the results (logs…