Necessity of complex numbers in Quantum Mechanics
https://www.youtube.com/watch?v=f079K1f2WQk
#mathematics #quantum_physics
YouTube
Necessity of complex numbers
MIT 8.04 Quantum Physics I, Spring 2016
View the complete course: https://ocw.mit.edu/8-04S16
Instructor: Barton Zwiebach
License: Creative Commons BY-NC-SA
More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu
A great discussion with Sebastian Thrun covering flying cars, autonomous vehicles, and education
https://www.youtube.com/watch?v=ZPPAOakITeQ
#self_driving_cars #education #artificial_intelligence #machine_learning
YouTube
Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education | Lex Fridman Podcast #59
An Overview of Recent State-of-the-Art Deep Learning Algorithms and Architectures
Lecture on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.
https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence
YouTube
Deep Learning State of the Art (2020) | MIT Deep Learning Series
A fruitful relationship between neuroscience and AI
https://deepmind.com/blog/article/Dopamine-and-temporal-difference-learning-A-fruitful-relationship-between-neuroscience-and-AI
#reinforcement_learning #machine_learning #neuroscience #artificial_intelligence
Google DeepMind
Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI
Learning and motivation are driven by internal and external rewards. Many of our day-to-day behaviours are guided by predicting, or anticipating, whether a given action will result in a positive outcome.
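For readers who want the mechanics behind the article, the core object is the temporal-difference error. Below is a minimal sketch of tabular TD(0) value learning in Python, assuming a toy chain of states with a single terminal reward (the environment, states, and parameters are illustrative, not from the article):

import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    # The TD error (r + gamma * V[s_next] - V[s]) is the prediction-error
    # signal that, per the article, dopamine responses appear to track.
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

# Toy usage: a 5-state chain with a reward of 1 on the last transition.
V = np.zeros(5)
trajectory = [(0, 0.0, 1), (1, 0.0, 2), (2, 0.0, 3), (3, 1.0, 4)]
for episode in range(200):
    for s, r, s_next in trajectory:
        td0_update(V, s, r, s_next)
print(V)  # values of early states rise toward the discounted future reward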
Neural Architecture Search for Transformers
In summary, the authors employed an evolutionary algorithm with a novel encoding scheme to search for an optimal Transformer architecture.
https://www.youtube.com/watch?v=khA-fiC1Wa0&feature=youtu.be
YouTube
The Evolved Transformer
This video explains the Evolved Transformer model! The Evolved Transformer has been applied to the Meena bot, one of the most impressive chatbots to date.
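As a rough illustration of the search procedure (not the paper's actual encoding, mutation operators, or fitness function), here is a sketch of tournament-based evolutionary architecture search over a toy discrete encoding; evaluate is a hypothetical stand-in for training and scoring a candidate architecture:

import random

# Toy search space: each gene picks one option per block.
SEARCH_SPACE = [
    ["self_attention", "conv_3x1", "gated_linear"],  # layer type
    [4, 8, 16],                                      # attention heads
    [256, 512, 1024],                                # hidden size
]

def random_arch():
    return [random.choice(options) for options in SEARCH_SPACE]

def mutate(arch):
    child = list(arch)
    i = random.randrange(len(SEARCH_SPACE))
    child[i] = random.choice(SEARCH_SPACE[i])  # re-sample one gene
    return child

def evaluate(arch):
    # Hypothetical fitness; in the real method this is the validation
    # performance of the trained candidate model.
    return random.random()

population = [random_arch() for _ in range(20)]
scores = [evaluate(a) for a in population]
for step in range(100):
    idx = random.sample(range(len(population)), 5)        # tournament
    parent = population[max(idx, key=lambda i: scores[i])]
    child = mutate(parent)
    population.append(child)
    scores.append(evaluate(child))
    population.pop(0)
    scores.pop(0)  # aging-style evolution: discard the oldest individual
best = population[max(range(len(population)), key=lambda i: scores[i])]
print(best)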
A Fascinating Philosophical Discussion about the Nature of Consciousness
https://www.youtube.com/watch?v=LW59lMvxmY4
#philosophy #consciousness
YouTube
David Chalmers: The Hard Problem of Consciousness | Lex Fridman Podcast #69
David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness.
Book: The SOAR Cognitive Architecture
Introduction: in development for thirty years, Soar is a general cognitive architecture that integrates knowledge-intensive reasoning, reactive execution, hierarchical reasoning, planning, and learning from experience, with the goal of creating a general computational system that has the same cognitive abilities as humans. In contrast, most AI systems are designed to solve only one type of problem, such as playing chess, searching the Internet, or scheduling aircraft departures. Soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents. Over the years, both software system and theory have evolved. This book offers the definitive presentation of Soar from theoretical and practical perspectives, providing comprehensive descriptions of fundamental aspects and new components. The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion. This book describes details of Soar's component memories and processes and offers demonstrations of individual components, components working in combination, and real-world applications. Beyond these functional considerations, the book also proposes requirements for general cognitive architectures and explicitly evaluates how well Soar meets those requirements.
https://dl.acm.org/doi/book/10.5555/2222503
#cognitive_science #neuroscience #reinforcement_learning #artificial_intelligence
Model Predictive Control: Powerful Optimization Strategy for Feedback Control
https://www.youtube.com/watch?v=YwodGM2eoy4
#optimization
YouTube
Model Predictive Control
This lecture provides an overview of model predictive control (MPC), which is one of the most powerful and general control frameworks. MPC is used extensively in industrial control settings, and can be used with nonlinear systems and systems with constraints.
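To make the receding-horizon idea concrete, here is a minimal MPC sketch for a scalar linear system with a quadratic cost, using scipy's general-purpose optimizer. This is a sketch under simplifying assumptions; a production MPC stack would use a dedicated QP solver and handle constraints explicitly:

import numpy as np
from scipy.optimize import minimize

A, B = 1.0, 0.5   # scalar linear dynamics: x_next = A*x + B*u
H = 10            # prediction horizon

def horizon_cost(u_seq, x0):
    # Simulate the model forward and accumulate a quadratic cost.
    x, total = x0, 0.0
    for u in u_seq:
        x = A * x + B * u
        total += x**2 + 0.1 * u**2
    return total

x = 5.0
for t in range(30):
    res = minimize(horizon_cost, np.zeros(H), args=(x,))
    u = res.x[0]       # apply only the first planned input...
    x = A * x + B * u  # ...then re-plan from the new measured state
print(x)  # the state is driven toward the origin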
Complete Statistical Theory of Learning
https://www.youtube.com/watch?v=Ow25mjFjSmg
#statistics #machine_learning #theory
YouTube
Complete Statistical Theory of Learning (Vladimir Vapnik) | MIT Deep Learning Series
Lecture by Vladimir Vapnik in January 2020, part of the MIT Deep Learning Lecture Series.
Slides: https://bit.ly/2ORVofC
Associated podcast conversation: https://www.youtube.com/watch?v=bQa7hpUpMzM
Series website: https://deeplearning.mit.edu
Artificial Intelligence from the Perspective of Philosophers
https://plato.stanford.edu/entries/artificial-intelligence/
#artificial_intelligence #philosophy #history
Reinforcement Learning and Optimal Control.pdf
2.7 MB
Reinforcement Learning and Optimal Control (draft version)
Desperately looking for the original version of this book; if you find it, please let me know.
#reinforcement_learning #optimal_control
Deep Reasoning Papers
A repository of recent papers on neural-symbolic reasoning, logical reasoning, visual reasoning, natural-language reasoning, and other topics connecting deep learning and reasoning.
https://github.com/floodsung/Deep-Reasoning-Papers
#reasoning #deep_learning #artificial_intelligence
GitHub
floodsung/Deep-Reasoning-Papers: Recent papers including neural symbolic reasoning, logical reasoning, visual reasoning, planning, and other topics connecting deep learning and reasoning.
A Collection of Definitions of Intelligence
https://arxiv.org/pdf/0706.3639.pdf
#artificial_intelligence
TensorFlow Quantum: An Open Source Library for Quantum Machine Learning
https://ai.googleblog.com/2020/03/announcing-tensorflow-quantum-open.html
#quantum_computing #machine_learning #quantum_machine_learning
research.google
Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning
Posted by Alan Ho, Product Lead, and Masoud Mohseni, Technical Lead, Google Research.
The Underlying Mathematics of Novel Coronavirus (COVID-19) Growth
https://www.youtube.com/watch?v=Kas0tIxDvrg
#math #statistics
YouTube
Exponential growth and epidemics
A primer on exponential and logistic growth
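A quick sketch of the two models the video contrasts, exponential versus logistic growth, where the logistic growth rate falls as case counts approach a population cap (parameter values below are arbitrary illustrations):

import numpy as np

r, K, N0 = 0.2, 1_000_000, 100  # growth rate, carrying capacity, initial cases
t = np.arange(0, 120)           # days

# Exponential: dN/dt = r*N  =>  N(t) = N0 * exp(r*t)
exponential = N0 * np.exp(r * t)

# Logistic: dN/dt = r*N*(1 - N/K), with the standard closed-form solution
logistic = K / (1 + (K / N0 - 1) * np.exp(-r * t))

print(exponential[-1])  # grows without bound
print(logistic[-1])     # saturates near the cap K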
An overview of gradient descent optimization algorithms
Abstract: Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.
https://arxiv.org/pdf/1609.04747.pdf
#deep_learning #optimization
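For reference, minimal numpy sketches of three of the update rules the article surveys (vanilla SGD, momentum, and Adam), with the usual default hyperparameters:

import numpy as np

def sgd(w, grad, lr=0.01):
    return w - lr * grad

def momentum(w, grad, v, lr=0.01, beta=0.9):
    v = beta * v + lr * grad  # exponentially decaying velocity term
    return w - v, v

def adam(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad      # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2   # second-moment estimate
    m_hat = m / (1 - b1**t)           # bias corrections for zero init
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(w) = w^2, whose gradient is 2*w.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    w, m, v = adam(w, 2 * w, m, v, t)
print(w)  # approaches 0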
What's Wrong with Artificial Intelligence: The Perspective of Prof. Richard Sutton
I hold that AI has gone astray by neglecting its essential objective --- the turning over of responsibility for the decision-making and organization of the AI system to the AI system itself. It has become an accepted, indeed lauded, form of success in the field to exhibit a complex system that works well primarily because of some insight the designers have had into solving a particular problem. This is part of an anti-theoretic, or "engineering stance", that considers itself open to any way of solving a problem. But whatever the merits of this approach as engineering, it is not really addressing the objective of AI. For AI it is not enough merely to achieve a better system; it matters how the system was made. The reason it matters can ultimately be considered a practical one, one of scaling. An AI system too reliant on manual tuning, for example, will not be able to scale past what can be held in the heads of a few programmers. This, it seems to me, is essentially the situation we are in today in AI. Our AI systems are limited because we have failed to turn over responsibility for them to them.
Please forgive me for this which must seem a rather broad and vague criticism of AI. One way to proceed would be to detail the criticism with regard to more specific subfields or subparts of AI. But rather than narrowing the scope, let us first try to go the other way. Let us try to talk in general about the longer-term goals of AI which we can share and agree on. In broadest outlines, I think we all envision systems which can ultimately incorporate large amounts of world knowledge. This means knowing things like how to move around, what a bagel looks like, that people have feet, etc. And knowing these things just means that they can be combined flexibly, in a variety of combinations, to achieve whatever are the goals of the AI. If hungry, for example, perhaps the AI can combine its bagel recognizer with its movement knowledge, in some sense, so as to approach and consume the bagel. This is a cartoon view of AI -- as knowledge plus its flexible combination -- but it suffices as a good place to start. Note that it already places us beyond the goals of a pure performance system. We seek knowledge that can be used flexibly, i.e., in several different ways, and at least somewhat independently of its expected initial use.
With respect to this cartoon view of AI, my concern is simply with ensuring the correctness of the AI's knowledge. There is a lot of knowledge, and inevitably some of it will be incorrect. Who is responsible for maintaining correctness, people or the machine? I think we would all agree that, as much as possible, we would like the AI system to somehow maintain its own knowledge, thus relieving us of a major burden. But it is hard to see how this might be done; easier to simply fix the knowledge ourselves. This is where we are today.
Date: November 12, 2001
https://incompleteideas.net/IncIdeas/WrongWithAI.html
#artificial_intelligence
Crafting Papers on Machine Learning
This guide provides useful hints and advice for preparing machine learning papers. Note that it is not meant to cover all types of papers.
https://icml.cc/Conferences/2002/craft.html
#machine_learning #writing
Model-based evolutionary algorithms: a short survey
Abstract: The evolutionary algorithms (EAs) are a family of nature-inspired algorithms widely used for solving complex optimization problems. Since the operators (e.g. crossover, mutation, selection) in most traditional EAs are developed on the basis of fixed heuristic rules or strategies, they are unable to learn the structures or properties of the problems to be optimized. To equip the EAs with learning abilities, recently, various model-based evolutionary algorithms (MBEAs) have been proposed. This survey briefly reviews some representative MBEAs by considering three different motivations of using models. First, the most commonly seen motivation of using models is to estimate the distribution of the candidate solutions. Second, in evolutionary multi-objective optimization, one motivation of using models is to build the inverse models from the objective space to the decision space. Third, when solving computationally expensive problems, models can be used as surrogates of the fitness functions. Based on the review, some further discussions are also given.
https://link.springer.com/article/10.1007/s40747-018-0080-1
#evolutionary_algorithm #machine_learning
Complex & Intelligent Systems
Model-based evolutionary algorithms: a short survey
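A minimal sketch of the first motivation above (estimating the distribution of promising candidates), in the style of a univariate estimation-of-distribution algorithm on a binary OneMax problem; the problem, population sizes, and learning rate are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n, pop_size, n_elite = 20, 100, 20

def fitness(pop):
    return pop.sum(axis=1)  # OneMax: count the ones in each row

p = np.full(n, 0.5)  # the model: independent per-bit probability of a 1
for gen in range(50):
    pop = (rng.random((pop_size, n)) < p).astype(int)  # sample the model
    elite = pop[np.argsort(fitness(pop))[-n_elite:]]   # select the best
    p = 0.9 * p + 0.1 * elite.mean(axis=0)             # re-estimate the model
print(p.round(2))  # probabilities drift toward the all-ones optimum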
At the Interface of Algebra and Statistics
Abstract: This thesis takes inspiration from quantum physics to investigate mathematical structure that lies at the interface of algebra and statistics. The starting point is a passage from classical probability theory to quantum probability theory. The quantum version of a probability distribution is a density operator, the quantum version of marginalizing is an operation called the partial trace, and the quantum version of a marginal probability distribution is a reduced density operator. Every joint probability distribution on a finite set can be modeled as a rank one density operator. By applying the partial trace, we obtain reduced density operators whose diagonals recover classical marginal probabilities. In general, these reduced densities will have rank higher than one, and their eigenvalues and eigenvectors will contain extra information that encodes subsystem interactions governed by statistics. We decode this information, and show it is akin to conditional probability, and then investigate the extent to which the eigenvectors capture "concepts" inherent in the original joint distribution. The theory is then illustrated with an experiment that exploits these ideas. Turning to a more theoretical application, we also discuss a preliminary framework for modeling entailment and concept hierarchy in natural language, namely, by representing expressions in the language as densities. Finally, initial inspiration for this thesis comes from formal concept analysis, which finds many striking parallels with linear algebra. The parallels are not coincidental, and a common blueprint is found in category theory. We close with an exposition on free (co)completions and how the free-forgetful adjunctions in which they arise strongly suggest that in certain categorical contexts, the "fixed points" of a morphism with its adjoint encode interesting information.
Introductory Video: https://youtu.be/wiadG3ywJIs
Thesis: https://arxiv.org/abs/2004.05631
#statistics #machine_learning #algebra #quantum_physics
YouTube
At the Interface of Algebra and Statistics
This video is a nontechnical introduction to my PhD thesis, which uses basic tools from quantum physics to investigate algebraic and statistical mathematical structure.
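The abstract's first claims can be checked numerically: model a joint distribution on a product of two finite sets as a rank-one density, apply the partial trace, and read the classical marginal off the diagonal of the reduced density. A small numpy sketch (the joint distribution is an arbitrary example):

import numpy as np

# A joint distribution p(x, y) on a 2 x 3 set, stored as a matrix.
p = np.array([[0.10, 0.25, 0.05],
              [0.20, 0.15, 0.25]])

# Rank-one density rho = |psi><psi| with psi the entrywise square root of p.
psi = np.sqrt(p).reshape(-1)  # a unit vector, since p sums to 1
rho = np.outer(psi, psi)

# Partial trace over the second factor yields the reduced density on x.
rho_x = np.trace(rho.reshape(2, 3, 2, 3), axis1=1, axis2=3)

print(np.diag(rho_x))                # [0.4 0.6] = classical marginal p(x)
print(p.sum(axis=1))                 # the same marginal, computed directly
print(np.linalg.matrix_rank(rho_x))  # 2: the off-diagonal carries extra info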
"At the Interface of Algebra and Statistics"
available on the arXiv at https://arxi…
"At the Interface of Algebra and Statistics"
available on the arXiv at https://arxi…