PyTorch Examples
A repository showcasing examples of using PyTorch
https://github.com/pytorch/examples
🚀 Introducing TF-GAN: A lightweight GAN library for TensorFlow 2.0
TensorFlow blog: https://medium.com/tensorflow/introducing-tf-gan-a-lightweight-gan-library-for-tensorflow-2-0-36d767e1abae
Code: https://github.com/tensorflow/gan
Free course: https://developers.google.com/machine-learning/gan/
Paper: https://arxiv.org/abs/1805.08318
🔥Finally, AI-Based Painting is here!
#GANPaint
video: https://www.youtube.com/watch?v=IqHs_DkmDVo
Semantic Photo Manipulation with a Generative Image Prior
paper: https://ganpaint.io/
A Gentle Introduction to Generative Adversarial Network Loss Functions
https://machinelearningmastery.com/generative-adversarial-network-loss-functions/
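As a rough companion to the article, here is a minimal PyTorch sketch of the standard discriminator loss and the non-saturating generator loss it discusses; the function and variable names are illustrative, not taken from the post.
```python
# Minimal sketch of standard GAN losses (non-saturating generator variant).
# Assumes the discriminator outputs raw logits; names are illustrative.
import torch
import torch.nn.functional as F

def discriminator_loss(real_logits, fake_logits):
    # Push real logits toward the "real" label (1) and fake logits toward 0.
    real_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    fake_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return real_loss + fake_loss

def generator_loss(fake_logits):
    # Non-saturating loss: the generator maximizes log D(G(z)),
    # i.e. it labels its fakes as "real" for its own update.
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```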
Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification
https://arxiv.org/abs/1908.11860
Rules of Machine Learning by Google
Best Practices for ML Engineering
https://developers.google.com/machine-learning/guides/rules-of-ml/
A Gentle Introduction to Jensen’s Inequality
https://machinelearningmastery.com/a-gentle-introduction-to-jensens-inequality/
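As a quick numerical illustration of the inequality the article introduces, f(E[X]) <= E[f(X)] for convex f, here is a small NumPy check; the distribution and the choice f(x) = x^2 are arbitrary examples, not from the post.
```python
# Numerical check of Jensen's inequality for a convex function.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # any non-degenerate sample works

f = np.square            # convex example
lhs = f(x.mean())        # f(E[X])
rhs = f(x).mean()        # E[f(X)]
print(f"f(E[X]) = {lhs:.3f} <= E[f(X)] = {rhs:.3f}: {lhs <= rhs}")
```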
Giving Lens New Reading Capabilities in Google Go
https://ai.googleblog.com/2019/09/giving-lens-new-reading-capabilities-in.html
Introducing Neural Structured Learning in TensorFlow
https://medium.com/tensorflow/introducing-neural-structured-learning-in-tensorflow-5a802efd7afd
Neural Structured Learning: Training with Structured Signals
Article: https://www.tensorflow.org/neural_structured_learning
Code: https://github.com/tensorflow/neural-structured-learning
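For a sense of what the library looks like in practice, below is a rough sketch of wrapping a Keras model with NSL's adversarial regularization, following the linked tutorial; the exact class and argument names (make_adv_reg_config, AdversarialRegularization, label_keys) should be checked against the current docs, and the toy architecture is purely illustrative.
```python
# Rough sketch: wrap a Keras model with NSL adversarial regularization.
import tensorflow as tf
import neural_structured_learning as nsl

# Plain Keras base model (illustrative architecture, e.g. for MNIST-sized input).
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Add an adversarial-perturbation regularizer around the base model.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'], adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# The wrapped model expects dict inputs keyed by feature/label names, e.g.
# adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)
```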
PyTorch implementation of the paper "Class-Balanced Loss Based on Effective Number of Samples"
https://github.com/vandit15/Class-balanced-loss-pytorch
Class-Balanced Loss Based on Effective Number of Samples
https://github.com/richardaecn/class-balanced-loss
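The core of the paper is a simple reweighting rule: the effective number of samples for a class with n examples is E_n = (1 - beta^n) / (1 - beta), and each class's loss term is scaled by 1/E_n. A minimal NumPy sketch of the weight computation (the class counts and beta below are illustrative, not from the repos):
```python
# Class-balanced weights from effective numbers of samples.
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    samples_per_class = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes.
    return weights * len(samples_per_class) / weights.sum()

print(class_balanced_weights([5000, 500, 50]))  # rarer classes get larger weights
```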
💬 Announcing Two New Natural Language Dialog Datasets
https://ai.googleblog.com/2019/09/announcing-two-new-natural-language.html
Coached Conversational Preference Elicitation
A dataset consisting of 502 dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language.
https://ai.google/tools/datasets/coached-conversational-preference-elicitation
Accessing the Taskmaster-1 dataset
The full Taskmaster-1 dialog dataset contains 13,215 dialogs in total: 7,708 written and 5,507 spoken.
https://storage.googleapis.com/dialog-data-corpus/TASKMASTER-1-2019/landing_page.html
How to Develop and Evaluate Naive Classifier Strategies Using Probability
https://machinelearningmastery.com/how-to-develop-and-evaluate-naive-classifier-strategies-using-probability/
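As a quick companion to the article, here is a sketch of the usual naive baselines (majority class, stratified random, uniform random) using scikit-learn's DummyClassifier; the synthetic imbalanced labels are only an illustration, not the article's data.
```python
# Compare naive baseline strategies on a synthetic imbalanced label set.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
y = rng.choice([0, 1], size=10_000, p=[0.75, 0.25])  # imbalanced labels
X = np.zeros((y.size, 1))                            # features are ignored by dummies

for strategy in ("most_frequent", "stratified", "uniform"):
    clf = DummyClassifier(strategy=strategy, random_state=1).fit(X, y)
    print(f"{strategy:>13}: accuracy = {accuracy_score(y, clf.predict(X)):.3f}")
```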
DeepMind's OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
code: https://github.com/deepmind/open_spiel
article: https://arxiv.org/abs/1908.09453
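For a feel of the API, here is a minimal random-rollout sketch using the Python bindings (pyspiel); the method names follow the repo's examples, so double-check them against the current code.
```python
# Load a game in OpenSpiel and play one random rollout.
import random
import pyspiel

game = pyspiel.load_game("tic_tac_toe")
state = game.new_initial_state()
while not state.is_terminal():
    action = random.choice(state.legal_actions())
    state.apply_action(action)

print(state)            # final board
print(state.returns())  # per-player returns, e.g. [1.0, -1.0] or [0.0, 0.0]
```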
Assessing the Quality of Long-Form Synthesized Speech
https://ai.googleblog.com/2019/09/assessing-quality-of-long-form.html
Forwarded from Artificial Intelligence
📝 The paper: Adversarial Examples Are Not Bugs, They Are Features
video: https://www.youtube.com/watch?v=AOZw1tgD8dA
available here: https://gradientscience.org/adv/
article: https://distill.pub/2019/advex-bugs-discussion/
Recursive Sketches for Modular Deep Learning
https://ai.googleblog.com/2019/09/recursive-sketches-for-modular-deep.html
Learning Cross-Modal Temporal Representations from Unlabeled Videos
https://ai.googleblog.com/2019/09/learning-cross-modal-temporal.html
On 17 September, the MegaFon office in Moscow will host another meetup. Speakers from Mail.Ru, Altinity, Couchbase, and MegaFon will talk about stateful workloads in Kubernetes. Admission is free.
For details and registration: https://pao-megafon--org.timepad.ru/event/1056036/
Machine Learning for Physics and the Physics of Learning Tutorials
overview: https://www.ipam.ucla.edu/programs/workshops/machine-learning-for-physics-and-the-physics-of-learning-tutorials/
videos: https://www.ipam.ucla.edu/videos/
5 Reasons to Learn Probability for Machine Learning
https://machinelearningmastery.com/why-learn-probability-for-machine-learning/