📋 Introducing TensorBoard.dev: a new way to share your ML experiment results
https://blog.tensorflow.org/2019/12/introducing-tensorboarddev-new-way-to.html
article: https://arxiv.org/abs/1910.10683
example: https://tensorboard.dev/experiment/EvNO346lT0iYbmeaWmoNCQ/#scalars
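A minimal sketch of the workflow, with a toy Keras model just to produce some logs; the upload command is the one described in the blog post:

import numpy as np
import tensorflow as tf

# toy data and model, only to generate TensorBoard logs
x = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# write standard TensorBoard logs to ./logs
model.fit(x, y, epochs=5,
          callbacks=[tf.keras.callbacks.TensorBoard(log_dir="logs")])

# then upload and share the experiment:
#   tensorboard dev upload --logdir logs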
Check out the Data Science channel, where you can find plenty of articles, links, and commentary on them.
Join @opendatascience to keep up with hot topics in data science.
AR-Net: A simple autoregressive neural network for time series
https://ai.facebook.com/blog/ar-net-a-simple-autoregressive-neural-network-for-time-series/
full paper:
https://arxiv.org/abs/1911.12436
AR-Net is a new framework that combines the best of both traditional statistical models and neural network models for time series modeling. The feed-forward model is not only as interpretable as AR models but is also scalable and easier to use.
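The core idea fits in a few lines of PyTorch: an AR(p) model is just a single linear layer trained by gradient descent. A toy sketch with synthetic AR(2) data (illustrative only, not Facebook's released code):

import torch
import torch.nn as nn

torch.manual_seed(0)

# synthetic AR(2) series: x_t = 0.6*x_{t-1} - 0.2*x_{t-2} + noise
n, p = 500, 3
noise = 0.1 * torch.randn(n)
series = torch.zeros(n)
for t in range(2, n):
    series[t] = 0.6 * series[t - 1] - 0.2 * series[t - 2] + noise[t]

# lagged inputs and targets
X = torch.stack([series[i:i + p] for i in range(n - p)])
y = series[p:].unsqueeze(1)

# a single linear layer: its weights play the role of the AR coefficients
model = nn.Linear(p, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for epoch in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("learned AR coefficients:", model.weight.data)  # roughly [0.0, -0.2, 0.6]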
A Gentle Introduction to the Bayes Optimal Classifier
https://machinelearningmastery.com/bayes-optimal-classifier/
https://svivek.com/teaching/machine-learning/lectures/slides/prob-learning/bayes-optimal-classifier.pdf
The Bayes Optimal Classifier is a probabilistic model that makes the most probable prediction for a new example. It is described using the Bayes Theorem that provides a principled way for calculating a conditional probability. It is also closely related to…
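In code, the classifier is just a weighted vote of all hypotheses by their posterior. A tiny numeric sketch (the classic three-hypothesis example; numbers are illustrative):

# posterior probabilities of three hypotheses given the data D
p_h_given_d = {"h1": 0.4, "h2": 0.3, "h3": 0.3}

# each hypothesis' prediction for the new example: P(class | h)
p_class_given_h = {
    "h1": {"pos": 1.0, "neg": 0.0},
    "h2": {"pos": 0.0, "neg": 1.0},
    "h3": {"pos": 0.0, "neg": 1.0},
}

# Bayes optimal prediction: argmax over classes of sum_h P(class | h) * P(h | D)
scores = {cls: sum(p_class_given_h[h][cls] * p_h_given_d[h] for h in p_h_given_d)
          for cls in ("pos", "neg")}

print(scores)                        # {'pos': 0.4, 'neg': 0.6}
print(max(scores, key=scores.get))   # 'neg', even though the MAP hypothesis h1 says 'pos'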
Training multi-agent AI systems to solve complex tasks through cooperation
https://ai.facebook.com/blog/using-multi-agent-reinforcement-learning-to-improve-collaboration/
full paper: https://arxiv.org/abs/1910.08809
code: https://github.com/TorchCraft/TorchCraftAI/tree/targeting
Facebook AI is releasing a novel approach to cooperative multi-agent reinforcement learning that assigns tasks to individual agents, making them better at generalizing to more complex situations.
Netflix open-sources its Python Framework ‘Metaflow’ for building and managing data science projects
https://www.marktechpost.com/2019/12/04/netflix-open-sources-its-python-framework-metaflow-for-building-and-managing-data-science-projects/
Github: https://github.com/Netflix/metaflow
Documentation: https://docs.metaflow.org/
Netflix open-sources its human-friendly Python Framework 'Metaflow' to build and manage real-life data science projects with ease. Metaflow was originally developed at Netflix for addressing the needs of its data scientists who work on demanding real-life…
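A "hello world" flow in Metaflow looks roughly like this (based on the project's documented FlowSpec/@step API):

from metaflow import FlowSpec, step

class HelloFlow(FlowSpec):
    """A minimal Metaflow flow: steps form a DAG, and instance
    attributes are persisted as artifacts between steps."""

    @step
    def start(self):
        self.message = "hello from Metaflow"
        self.next(self.end)

    @step
    def end(self):
        print(self.message)

if __name__ == "__main__":
    HelloFlow()

# run with:  python hello_flow.py run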
Deep Double Descent
https://openai.com/blog/deep-double-descent/
Deep Double Descent: Where Bigger Models and More Data Hurt
https://arxiv.org/abs/1912.02292
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful…
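The same qualitative curve shows up in a much smaller setting than the paper's CNN/ResNet/transformer experiments. A toy illustration, assuming ridgeless random-feature regression: test error typically dips, spikes near the interpolation threshold at n_features ≈ n_train, then falls again.

import numpy as np

rng = np.random.default_rng(0)

# toy double-descent illustration (not the paper's deep-learning experiments)
n_train, n_test, d = 100, 1000, 20
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)   # noisy linear target
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

for n_features in [10, 50, 90, 100, 110, 200, 500, 1000]:
    # random ReLU features shared by train and test
    W = rng.normal(size=(d, n_features))
    F_tr = np.maximum(X_tr @ W, 0)
    F_te = np.maximum(X_te @ W, 0)
    # minimum-norm least squares (interpolates once n_features >= n_train)
    beta = np.linalg.pinv(F_tr) @ y_tr
    test_mse = np.mean((F_te @ beta - y_te) ** 2)
    print(f"{n_features:5d} features: test MSE = {test_mse:.3f}")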
Understanding Transfer Learning for Medical Imaging
https://ai.googleblog.com/2019/12/understanding-transfer-learning-for.html
Article: https://arxiv.org/abs/1902.07208
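The transfer setup the paper analyses is the standard one: reuse an ImageNet-pretrained backbone and retrain the head on the medical task. A minimal PyTorch sketch (hypothetical 5-class task; training loop omitted):

import torch.nn as nn
import torchvision.models as models

num_classes = 5                                           # hypothetical medical imaging task

model = models.resnet50(pretrained=True)                  # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classification head

# either fine-tune everything, or freeze the backbone and train only the head:
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True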
A new framework for large-scale training of state-of-the-art visual classification models
https://ai.facebook.com/blog/a-new-framework-for-large-scale-training-of-state-of-the-art-visual-classification-models/
Github: https://github.com/facebookresearch/ClassyVision
Multi-modal Research to Production with PyTorch and Facebook
https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16
https://classyvision.ai/
Facebook AI is open-sourcing a new, easy-to-use, production-ready end-to-end framework for large-scale, state-of-the-art image and video classification tasks. It enables anyone to train models on top of PyTorch using very simple abstractions.
PyTorch adds new tools and libraries, welcomes Preferred Networks to its community
https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/
NeurIPS 2019 Expo Workshop
https://nips.cc/ExpoConferences/2019/schedule?workshop_id=16
PyTorch Elastic: https://github.com/pytorch/elastic
@ai_machinelearning_big_data
PyTorch continues to be used for the latest state-of-the-art research on display at the NeurIPS conference next week, making up nearly 70% of papers that cite a framework. In addition, we’re excited to welcome Preferred Networks, the maintainers of the Chainer…
Develop an Intuition for Bayes Theorem With Worked Examples
https://machinelearningmastery.com/intuition-for-bayes-theorem-with-worked-examples/
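A worked example in the same spirit (illustrative numbers: a diagnostic test with 90% sensitivity, 95% specificity, and a 1% base rate):

p_condition = 0.01
p_pos_given_condition = 0.90          # sensitivity
p_pos_given_no_condition = 0.05       # false positive rate (1 - specificity)

# total probability of a positive test
p_pos = (p_pos_given_condition * p_condition
         + p_pos_given_no_condition * (1 - p_condition))

# Bayes theorem: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_condition_given_pos = p_pos_given_condition * p_condition / p_pos
print(round(p_condition_given_pos, 3))   # ~0.154, despite the "positive" test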
KingSoft WPS: document image dewarping based on TensorFlow
https://blog.tensorflow.org/2019/12/kingsoft-wps-document-image-dewarping.html
Fairness Indicators: Scalable Infrastructure for Fair ML Systems
https://ai.googleblog.com/2019/12/fairness-indicators-scalable.html
Github: https://github.com/tensorflow/fairness-indicators
Blog: https://blog.tensorflow.org/2019/12/fairness-indicators-fair-ML-systems.html
Model-Based Reinforcement Learning: Theory and Practice
https://bair.berkeley.edu/blog/2019/12/12/mbpo/
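For intuition, here is a toy model-based loop: fit a dynamics model on observed transitions, then plan by rolling the model forward. This is generic random-shooting MPC on a made-up 1-D task, not the post's MBPO algorithm, which is considerably more sophisticated.

import numpy as np

rng = np.random.default_rng(0)

# hypothetical environment: 1-D point, s' = s + 0.1*a + noise, goal is s = 1.0
def env_step(s, a):
    return s + 0.1 * a + 0.01 * rng.normal()

# 1) collect random transitions and fit a linear dynamics model s' ~ [s, a] @ theta
S, A, S_next = [], [], []
s = 0.0
for _ in range(500):
    a = rng.uniform(-1, 1)
    s_next = env_step(s, a)
    S.append(s); A.append(a); S_next.append(s_next)
    s = s_next
X = np.column_stack([S, A])
theta, *_ = np.linalg.lstsq(X, np.array(S_next), rcond=None)

def model_step(s, a):
    return np.array([s, a]) @ theta

# 2) plan with the learned model: random-shooting MPC toward the goal
def plan(s, horizon=5, n_candidates=200):
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1, 1, size=horizon)
        sim_s, cost = s, 0.0
        for a in actions:
            sim_s = model_step(sim_s, a)
            cost += (sim_s - 1.0) ** 2
        if cost < best_cost:
            best_a, best_cost = actions[0], cost
    return best_a

s = 0.0
for t in range(30):
    s = env_step(s, plan(s))
print("final state (target 1.0):", round(s, 3))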
Tune Hyperparameters for Classification Machine Learning Algorithms
https://machinelearningmastery.com/hyperparameters-for-classification-machine-learning-algorithms/
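The article's recipe is essentially grid search with repeated stratified cross-validation. A sketch for one of the models it covers (logistic regression; the other classifiers follow the same pattern):

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# synthetic binary classification problem for illustration
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

grid = {
    "solver": ["liblinear"],
    "penalty": ["l1", "l2"],
    "C": [0.01, 0.1, 1.0, 10.0, 100.0],
}
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
search = GridSearchCV(LogisticRegression(), grid, scoring="accuracy", cv=cv, n_jobs=-1)
result = search.fit(X, y)
print(result.best_score_, result.best_params_)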
Develop Smaller Speech Recognition Models with NVIDIA’s NeMo Framework
https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/
Github: https://github.com/NVIDIA/NeMo
https://ngc.nvidia.com/catalog/models/nvidia:quartznet15x5
As computers and other personal devices have become increasingly prevalent, interest in conversational AI has grown due to its multitude of potential applications in a variety of situations.
Quality-Diversity optimisation algorithms
https://quality-diversity.github.io/
Code: https://gitlab.com/leo.cazenille/qdpy
This webpage intends to list papers related to QD algorithms, links to tutorials and workshops, and pointers to existing implementations of QD algorithms.
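The canonical QD algorithm, MAP-Elites, is simple enough to sketch directly (toy problem and names are illustrative; qdpy provides full implementations of this and related algorithms):

import numpy as np

rng = np.random.default_rng(0)

# maximise fitness f(x) while covering a 1-D behaviour descriptor b(x)
def fitness(x):
    return -np.sum(x ** 2)               # higher is better

def behaviour(x):
    return float(np.clip(x[0], -1, 1))   # descriptor in [-1, 1]

n_bins = 20
archive = {}                             # bin index -> (descriptor, fitness, solution)

def bin_index(b):
    return min(int((b + 1) / 2 * n_bins), n_bins - 1)

# iterate: sample or mutate an elite, then keep the candidate if its cell is
# empty or it beats the current elite of that cell
for it in range(5000):
    if archive and it > 100:
        parent = archive[rng.choice(list(archive))][2]
        x = parent + 0.1 * rng.normal(size=2)
    else:
        x = rng.uniform(-1, 1, size=2)
    f, b = fitness(x), behaviour(x)
    k = bin_index(b)
    if k not in archive or f > archive[k][1]:
        archive[k] = (b, f, x)

print("cells filled:", len(archive), "of", n_bins)
print("best fitness per cell:", {k: round(v[1], 3) for k, v in sorted(archive.items())})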