Forwarded from Machine Learning World
  
  Michio Kaku: Future of Humans, Aliens, Space Travel & Physics | Artificial Intelligence (AI) Podcast
Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=kD5yc1LQrpQ
#artificial_intelligence #physics #cosmology
  
The fastest way for experienced programmers to learn a new language
If you already know how to program, this website helps you pick up a new programming language as quickly as possible:
https://learnxinyminutes.com/
#programming
A must-read document for deep learning & machine learning practitioners
https://www.deeplearningbook.org/contents/guidelines.html
#deep_learning #machine_learning
A fascinating research paper at the intersection of Graph Neural Networks and Reinforcement Learning, tackling challenges in robotics
https://openreview.net/pdf?id=S1sqHMZCb
#robotics #deep_learning #geometric_deep_learning
Self-training with Noisy Student improves ImageNet classification
A new state-of-the-art semi-supervised (supervised + unsupervised) result on ImageNet. From the abstract: "We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which..."
https://arxiv.org/abs/1911.04252
#machine_learning #neural_networks #meta_learning
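The core recipe is short enough to sketch. Below is a minimal sketch of the self-training loop, with hypothetical train/predict callables standing in for the paper's actual EfficientNet training; the paper's "noise" is RandAugment data augmentation, dropout, and stochastic depth.

# Minimal sketch of the Noisy Student loop (hypothetical interfaces).
def noisy_student(labeled, unlabeled, train, predict, rounds=3):
    teacher = train(labeled, noise=False)          # 1. train teacher on labeled data
    for _ in range(rounds):
        # 2. pseudo-label the unlabeled images with the current teacher
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]
        # 3. train an equal-or-larger student on both sets, with noise injected
        student = train(labeled + pseudo, noise=True)
        teacher = student                          # 4. the student becomes the teacher
    return teacher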
  A Comprehensive Survey on Graph Neural Networks
Prerequisite concepts: Graph Signal Processing | Functional Analysis | Deep Learning Architectures
https://arxiv.org/abs/1901.00596
#geometric_deep_learning #graph_neural_networks
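The survey assumes familiarity with the basic propagation rule. As illustrative background (not code from the paper), here is one common building block, a GCN-style layer, in plain NumPy:

import numpy as np

def gcn_layer(A, H, W):
    # One graph-convolution layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
    # A: (n, n) adjacency, H: (n, d_in) node features, W: (d_in, d_out) weights.
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # degree normalization
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # aggregate, transform, ReLU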
  Distill and Transfer Learning for Robust Multitask Reinforcement Learning
"Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (DIStill & TRAnsfer Learning). Instead of sharing parameters between the different workers, we propose to share a distilled policy that captures common behavior across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning."
https://www.youtube.com/watch?v=scf7Przmh7c
#reinforcement_learning #multi_task_learning #transfer_learning
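In code terms, each worker's objective combines a policy-gradient term with a KL penalty that keeps the task policy close to the shared distilled policy, plus an entropy bonus. A hedged PyTorch sketch; tensor shapes and coefficient names are illustrative, not the paper's:

import torch.nn.functional as F

def distral_task_loss(task_logits, shared_logits, actions, advantages,
                      c_kl=0.5, c_ent=0.01):
    # task_logits, shared_logits: (batch, n_actions); actions: (batch,) long;
    # advantages: (batch,) return estimates minus a baseline.
    log_pi = F.log_softmax(task_logits, dim=-1)        # task policy pi_i
    log_pi0 = F.log_softmax(shared_logits, dim=-1)     # shared distilled policy pi_0
    chosen = log_pi.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    pg = -(advantages * chosen).mean()                 # policy-gradient term
    kl = (log_pi.exp() * (log_pi - log_pi0)).sum(-1).mean()  # KL(pi_i || pi_0)
    entropy = -(log_pi.exp() * log_pi).sum(-1).mean()  # exploration bonus
    return pg + c_kl * kl - c_ent * entropy            # per-task joint objective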
  Machine Learning for Combinatorial Optimization: a Methodological Tour d’Horizon
"This paper surveys the recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art methodologies involve algorithmic decisions that either require too much computing time or are not mathematically well defined. Thus, machine learning looks like a promising candidate to effectively deal with those decisions. We advocate for pushing further the integration of machine learning and combinatorial optimization and detail methodology to do so. A main point of the paper is seeing generic optimization problems as data points and inquiring what is the relevant distribution of problems to use for learning on a given task."
https://arxiv.org/pdf/1811.06128.pdf
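The "optimization problems as data points" idea is easy to make concrete: fix a distribution over instances, then collect (instance, decision) pairs for a model to imitate. A toy, hypothetical sketch (random knapsack instances labeled by a greedy heuristic; all names are my own):

import random

def sample_knapsack(n=10):
    # One "data point": a random knapsack instance of (value, weight) pairs.
    items = [(random.random(), random.random() + 0.1) for _ in range(n)]
    capacity = 0.5 * sum(w for _, w in items)
    return items, capacity

def greedy_first_pick(items):
    # Imitation target: the item a value/weight greedy rule would pick first.
    return max(range(len(items)), key=lambda i: items[i][0] / items[i][1])

# A learned policy would be trained to predict the greedy (or exact-solver)
# decision from instance features, over this chosen instance distribution.
dataset = [(items, cap, greedy_first_pick(items))
           for items, cap in (sample_knapsack() for _ in range(1000))]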
How Relevant is the Turing Test in the Age of Sophisbots?
Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now; we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?
https://arxiv.org/pdf/1909.00056.pdf
#machine_learning #technology #ethics
Noam Chomsky: Language, Cognition, and Deep Learning | Artificial Intelligence
Noam Chomsky is one of the greatest minds of our time and is one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.
https://www.youtube.com/watch?v=cMscNuSUy0I
#natural_language_processing #deep_learning
  Quantum Computer Programming
A practical and applied introduction to quantum computer programming, using IBM's free cloud-based quantum machines and Qiskit.
https://youtu.be/aPCZcv-5qfA
Part 2: https://www.youtube.com/watch?v=lB_5pC1MkGg
Text-based tutorials and sample code: https://pythonprogramming.net/quantum…
#quantum_programming
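As a taste of what the series covers, here is a minimal Bell-state circuit in Qiskit. This is a standard introductory example rather than code from the videos, and exact APIs vary across Qiskit versions:

# Assumes qiskit and qiskit-aer are installed.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into an equal superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                # roughly half '00' and half '11'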
Programming a quantum computer with Cirq (QuantumCasts)
Want to learn how to program a quantum computer using Cirq? In this episode of QuantumCasts, Dave Bacon (Twitter: @dabacon) teaches you what a quantum program looks like via a simple “hello qubit” program. You’ll also learn about some of the exciting challenges facing quantum programmers today, such as whether Noisy Intermediate-Scale Quantum (NISQ) processors have the ability to solve important practical problems. We’ll also delve a little into how the open source Python framework Cirq was designed to help answer that question.
https://www.youtube.com/watch?v=16ZfkPRVf2w
#quantum_programming
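For reference, Cirq's standard "hello qubit" program looks like this (close to the example shown in the video; details may differ between Cirq versions):

import cirq

qubit = cirq.GridQubit(0, 0)
circuit = cirq.Circuit(
    cirq.X(qubit) ** 0.5,          # square root of NOT
    cirq.measure(qubit, key='m'),  # measure into key 'm'
)
print(circuit)

result = cirq.Simulator().run(circuit, repetitions=20)
print(result)                      # prints measurement outcomes, e.g. m=0110...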
  Adaptive_Computation_and_Machine.pdf
    3.4 MB
Foundations of Machine Learning
✅ A must-read book for machine learning researchers
It focuses on the mathematical foundations of machine learning algorithms.
No.Starch.Python.Oct_.2015.ISBN_.1593276036.pdf
    5.4 MB
  Python Crash Course
A comprehensive approach to programming
🐍 with Python
✅ For Beginners
Dive into Deep Learning (D2L Book)
Dive into Deep Learning: an interactive deep learning book with code, math, and discussions, based on the NumPy interface
https://github.com/d2l-ai/d2l-en
#deep_learning
  Necessity of complex numbers in Quantum Mechanics
https://www.youtube.com/watch?v=f079K1f2WQk
From MIT 8.04 Quantum Physics I (Spring 2016), taught by Barton Zwiebach. Full course: https://ocw.mit.edu/8-04S16
#mathematics #quantum_physics
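For context on why the question arises (my gloss, not from the lecture post): the time-dependent Schrödinger equation carries an explicit factor of i, so its solutions are in general complex-valued:

% Time-dependent Schrodinger equation; the factor i rules out
% purely real-valued solutions in general.
i\hbar \frac{\partial \Psi(x,t)}{\partial t}
  = -\frac{\hbar^2}{2m} \frac{\partial^2 \Psi(x,t)}{\partial x^2}
  + V(x)\,\Psi(x,t)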
A great discussion with Sebastian Thrun on flying cars, autonomous vehicles, and education
https://www.youtube.com/watch?v=ZPPAOakITeQ
#self_driving_cars #education #artificial_intelligence #machine_learning
An Overview of Recent State-of-the-Art Deep Learning Algorithms and Architectures
A lecture on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.
https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence