Self-training with Noisy Student improves ImageNet classification
New state-of-the-art supervised+unsupervised algorithm on ImageNet
https://arxiv.org/abs/1911.04252
#machine_learning #neural_networks #meta_learning
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which...
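The teacher–student loop behind Noisy Student can be sketched in a few lines (a toy 1-D sketch, not the paper's EfficientNet setup: a threshold classifier and input jitter stand in for deep models, data augmentation, and dropout):

```python
import random

def train_threshold(points, labels):
    """Fit a 1-D threshold classifier: midpoint between class means."""
    m0 = sum(x for x, y in zip(points, labels) if y == 0) / max(1, labels.count(0))
    m1 = sum(x for x, y in zip(points, labels) if y == 1) / max(1, labels.count(1))
    t = (m0 + m1) / 2
    return lambda x: int(x > t)

random.seed(0)
# A few labeled points and many unlabeled ones drawn from two clusters.
labeled_x = [-1.2, -0.8, 0.9, 1.1]
labeled_y = [0, 0, 1, 1]
unlabeled = [random.gauss(-1, 0.3) for _ in range(50)] + \
            [random.gauss(1, 0.3) for _ in range(50)]

teacher = train_threshold(labeled_x, labeled_y)
for _ in range(3):  # iterate: teacher pseudo-labels, a noised student retrains
    pseudo = [teacher(x) for x in unlabeled]
    # Jitter the student's inputs (a stand-in for augmentation/dropout noise).
    noisy_x = labeled_x + [x + random.gauss(0, 0.1) for x in unlabeled]
    all_y = labeled_y + pseudo
    teacher = train_threshold(noisy_x, all_y)  # the student becomes the next teacher

print(teacher(-1.0), teacher(1.0))
```

The key ingredients from the paper survive even in this toy: the student sees more (pseudo-labeled) data than the teacher, and only the student is noised.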
A Comprehensive Survey on Graph Neural Networks
Prerequisite concepts: Graph Signal Processing | Functional Analysis | Deep Learning Architectures
https://arxiv.org/abs/1901.00596
#geometric_deep_learning #graph_neural_networks
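The core operation the survey keeps returning to — aggregate neighbor features, then update each node — can be illustrated with a toy scalar example (hypothetical graph and features; real GNN layers use learned weight matrices and nonlinearities):

```python
# One round of mean-neighbor message passing, the core of many GNN layers.
graph = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}   # adjacency list
feats = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}         # scalar node features

def message_pass(graph, feats):
    out = {}
    for node, nbrs in graph.items():
        agg = sum(feats[n] for n in nbrs) / len(nbrs)  # AGGREGATE neighbors
        out[node] = 0.5 * feats[node] + 0.5 * agg      # UPDATE (toy combine)
    return out

new_feats = message_pass(graph, feats)
print(new_feats)
```

Stacking k such rounds lets information flow k hops across the graph, which is the basic mechanism behind the convolutional GNNs the survey taxonomizes.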
Distill and Transfer Learning for Robust Multitask Reinforcement Learning
"Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (DIStill & TRAnsfer Learning). Instead of sharing parameters between the different workers, we propose to share a distilled policy that captures common behavior across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning."
https://www.youtube.com/watch?v=scf7Przmh7c
#reinforcement_learning #multi_task_learning #transfer_learning
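The "distilled centroid policy" idea can be made concrete with a toy calculation (pure Python, hypothetical numbers; the normalized geometric mean is one natural centroid under KL — this is an illustration, not the paper's exact update rule):

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

# Two task policies over three actions (hypothetical numbers).
pi_1 = [0.7, 0.2, 0.1]
pi_2 = [0.6, 0.3, 0.1]

# Shared policy distilled as a centroid of the task policies
# (normalized geometric mean).
pi_0 = normalize([math.sqrt(a * b) for a, b in zip(pi_1, pi_2)])

# Each worker's objective would penalize KL(pi_i || pi_0),
# keeping task policies close to the shared one.
print(pi_0, kl(pi_1, pi_0), kl(pi_2, pi_0))
```

In Distral both directions are optimized jointly: workers stay close to the shared policy via the KL penalty, and the shared policy is fit by distillation to the workers.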
Machine Learning for Combinatorial Optimization: a Methodological Tour d’Horizon
"This paper surveys the recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art methodologies involve algorithmic decisions that either require too much computing time or are not mathematically well defined. Thus, machine learning looks like a promising candidate to effectively deal with those decisions. We advocate for pushing further the integration of machine learning and combinatorial optimization and detail methodology to do so. A main point of the paper is seeing generic optimization problems as data points and inquiring what is the relevant distribution of problems to use for learning on a given task."
https://arxiv.org/pdf/1811.06128.pdf
How Relevant is the Turing Test in the Age of Sophisbots?
Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now: we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?
https://arxiv.org/pdf/1909.00056.pdf
#machine_learning #technology #ethics
Noam Chomsky: Language, Cognition, and Deep Learning | Artificial Intelligence
Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=cMscNuSUy0I
#natural_language_processing #deep_learning
Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53
Quantum Computer Programming
A practical and applied introduction to quantum computer programming, using IBM's free cloud-based quantum machines and Qiskit.
https://youtu.be/aPCZcv-5qfA
#quantum_programming
Part 2: https://www.youtube.com/watch?v=lB_5pC1MkGg
Text-based tutorials and sample code: https://pythonprogramming.net/quantum…
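For a feel of what such tutorials build up to, here is a dependency-free sketch of the canonical first circuit (Hadamard then CNOT, producing a Bell state), simulated directly on a 4-amplitude state vector rather than through Qiskit:

```python
import math

# State vector over basis |00>, |01>, |10>, |11> (first qubit on the left).
state = [1.0, 0.0, 0.0, 0.0]          # start in |00>

def apply_h_on_first(s):
    """Hadamard on the first qubit: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2."""
    r = 1 / math.sqrt(2)
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def apply_cnot(s):
    """CNOT with the first qubit as control: swaps the |10> and |11> amplitudes."""
    return [s[0], s[1], s[3], s[2]]

bell = apply_cnot(apply_h_on_first(state))
probs = [a * a for a in bell]
print(probs)  # |00> and |11> each with probability 0.5
```

Measuring then yields 00 or 11 with equal probability and never 01 or 10 — the entanglement that the Qiskit tutorials demonstrate on IBM's real hardware.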
Programming a quantum computer with Cirq (QuantumCasts)
Want to learn how to program a quantum computer using Cirq? In this episode of QuantumCasts, Dave Bacon (Twitter: @dabacon) teaches you what a quantum program looks like via a simple “hello qubit” program. You’ll also learn about some of the exciting challenges facing quantum programmers today, such as whether Noisy Intermediate-Scale Quantum (NISQ) processors have the ability to solve important practical problems. We’ll also delve a little into how the open source Python framework Cirq was designed to help answer that question.
https://www.youtube.com/watch?v=16ZfkPRVf2w
#quantum_programming
Adaptive_Computation_and_Machine.pdf
3.4 MB
Foundations Of Machine Learning
✅ A must-read book for machine learning researchers
It mainly covers the mathematical foundations of machine learning algorithms.
No.Starch.Python.Oct_.2015.ISBN_.1593276036.pdf
5.4 MB
Python Crash Course
A comprehensive approach to programming
🐍 with Python
✅ For Beginners
Dive into Deep Learning (D2L Book)
Dive into Deep Learning: an interactive deep learning book with code, math, and discussions, based on the NumPy interface
https://github.com/d2l-ai/d2l-en
#deep_learning
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries, including Stanford, MIT, Harvard, and Cambridge.
Necessity of complex numbers in Quantum Mechanics
https://www.youtube.com/watch?v=f079K1f2WQk
#mathematics #quantum_physics
MIT 8.04 Quantum Physics I, Spring 2016
View the complete course: https://ocw.mit.edu/8-04S16
Instructor: Barton Zwiebach
License: Creative Commons BY-NC-SA
More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu
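The lecture's punchline fits in one line: the Schrödinger equation contains an explicit factor of $i$, so energy eigenstates evolve by complex phases:

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi
\qquad\Longrightarrow\qquad
\psi_E(t) = e^{-iEt/\hbar}\,\psi_E(0)
```

A purely real wavefunction cannot carry this phase, which is (in rough outline) why complex numbers are unavoidable in quantum mechanics rather than a mere convenience.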
A great discussion with Sebastian Thrun covering flying cars, autonomous vehicles, and education
https://www.youtube.com/watch?v=ZPPAOakITeQ
#self_driving_cars #education #artificial_intelligence #machine_learning
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Lex Fridman Podcast #59
An Overview of Recent State of the Art Deep Learning Algorithms/Architectures
Lecture on most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.
https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence
Deep Learning State of the Art (2020) | MIT Deep Learning Series
A fruitful relationship between neuroscience and AI
https://deepmind.com/blog/article/Dopamine-and-temporal-difference-learning-A-fruitful-relationship-between-neuroscience-and-AI
#reinforcement_learning #machine_learning #neuroscience #artificial_intelligence
Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI
Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, the outcomes of our actions.
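The temporal-difference error at the heart of the dopamine analogy is easy to write down — a minimal TD(0) sketch on a one-transition chain (toy numbers; the article's subject is DeepMind's distributional variant, which this does not cover):

```python
# TD(0) on a two-state chain: S -> G, reward 1 on reaching the terminal state G.
# The TD error (r + gamma * V[s'] - V[s]) is the reward-prediction-error signal
# that the article links to dopamine neuron firing.
gamma, alpha = 0.9, 0.1
V = {"S": 0.0, "G": 0.0}          # G is terminal, so V["G"] stays 0
for _ in range(200):              # repeat the same deterministic episode
    td_error = 1.0 + gamma * V["G"] - V["S"]
    V["S"] += alpha * td_error
print(round(V["S"], 3))
```

As the prediction V["S"] converges to the true return, the TD error shrinks toward zero — mirroring how dopamine responses fade once a reward becomes fully predicted.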
Neural Architecture Search for Transformers
In summary, they employed an evolutionary algorithm, with a novel encoding scheme, to search for an optimal transformer architecture.
https://www.youtube.com/watch?v=khA-fiC1Wa0&feature=youtu.be
The Evolved Transformer
This video explains the Evolved Transformer model. The Evolved Transformer has been applied to the Meena bot, one of the most impressive chatbots to date.
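A generic evolutionary search of the kind used here fits in a few lines (a toy bit-string fitness stands in for the paper's architecture encoding and its expensive training-based evaluation):

```python
import random
random.seed(1)

# Toy evolutionary search: evolve an 8-bit string toward all ones.
def fitness(bits):
    return sum(bits)                   # stand-in for "train and evaluate the architecture"

def mutate(bits):
    child = list(bits)
    child[random.randrange(len(child))] ^= 1   # flip one "architecture gene"
    return child

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)        # select the fittest half
    pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(5)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

The paper's contribution sits in the parts this sketch abstracts away: the encoding of transformer components as genes and the tournament-selection scheme that makes evaluation affordable.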
A Fascinating Philosophical Discussion about the Nature of Consciousness
https://www.youtube.com/watch?v=LW59lMvxmY4
#philosophy #consciousness
David Chalmers: The Hard Problem of Consciousness | Lex Fridman Podcast #69
David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness.
Book: The SOAR Cognitive Architecture
Introduction: In development for thirty years, Soar is a general cognitive architecture that integrates knowledge-intensive reasoning, reactive execution, hierarchical reasoning, planning, and learning from experience, with the goal of creating a general computational system that has the same cognitive abilities as humans. In contrast, most AI systems are designed to solve only one type of problem, such as playing chess, searching the Internet, or scheduling aircraft departures. Soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents. Over the years, both software system and theory have evolved. This book offers the definitive presentation of Soar from theoretical and practical perspectives, providing comprehensive descriptions of fundamental aspects and new components. The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion. This book describes details of Soar's component memories and processes and offers demonstrations of individual components, components working in combination, and real-world applications. Beyond these functional considerations, the book also proposes requirements for general cognitive architectures and explicitly evaluates how well Soar meets those requirements.
https://dl.acm.org/doi/book/10.5555/2222503
#cognitive_science #neuroscience #reinforcement_learning #artificial_intelligence
Model Predictive Control: Powerful Optimization Strategy for Feedback Control
https://www.youtube.com/watch?v=YwodGM2eoy4
#optimization
This lecture provides an overview of model predictive control (MPC), which is one of the most powerful and general control frameworks. MPC is used extensively in industrial control settings, and can be used with nonlinear systems and systems with constraints.
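The receding-horizon idea at the core of MPC can be sketched in a few lines (toy 1-D double integrator, brute-force search over a tiny discrete action set — real MPC solvers use structured optimization, not enumeration):

```python
import itertools

# Receding-horizon control of a 1-D double integrator (x' = x + v, v' = v + u),
# driving the state to the origin under a quadratic cost.
def mpc_action(x, v, horizon=4, actions=(-1.0, 0.0, 1.0)):
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        cx, cv, cost = x, v, 0.0
        for u in seq:                    # simulate the model over the horizon
            cv += u
            cx += cv
            cost += cx * cx + cv * cv + 0.1 * u * u
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u                        # apply only the first action, then replan

x, v = 5.0, 0.0
for _ in range(30):
    u = mpc_action(x, v)                 # optimize over the horizon...
    v += u                               # ...but commit to just one step
    x += v
print(x, v)
```

The defining MPC pattern is in the last loop: solve a finite-horizon problem at every step, apply only the first action, and re-solve from the new state — which is what makes the scheme a feedback controller rather than open-loop planning.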
Complete Statistical Theory of Learning
https://www.youtube.com/watch?v=Ow25mjFjSmg
#statistics #machine_learning #theory
Complete Statistical Theory of Learning (Vladimir Vapnik) | MIT Deep Learning Series
Lecture by Vladimir Vapnik in January 2020, part of the MIT Deep Learning Lecture Series.
Slides: https://bit.ly/2ORVofC
Associated podcast conversation: https://www.youtube.com/watch?v=bQa7hpUpMzM
Series website: https://deeplearning.mit.edu