On Artificial Intelligence
If you want to know more about science, especially Artificial Intelligence, this is the right place for you.
Admin Contact:
@Oriea
Distill and Transfer Learning for Robust Multitask Reinforcement Learning

"Most deep reinforcement learning algorithms are data inefficient in complex and rich environments, limiting their applicability to many scenarios. One direction for improving data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In practice, however, this is not usually observed, because gradients from different tasks can interfere negatively, making learning unstable and sometimes even less data efficient. Another issue is the different reward schemes between tasks, which can easily lead to one task dominating the learning of a shared model. We propose a new approach for joint training of multiple tasks, which we refer to as Distral (DIStill & TRAnsfer Learning). Instead of sharing parameters between the different workers, we propose to share a distilled policy that captures common behavior across tasks. Each worker is trained to solve its own task while constrained to stay close to the shared policy, while the shared policy is trained by distillation to be the centroid of all task policies. Both aspects of the learning process are derived by optimizing a joint objective function. We show that our approach supports efficient transfer on complex 3D environments, outperforming several related methods. Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning."


https://www.youtube.com/watch?v=scf7Przmh7c
#reinforcement_learning #multi_task_learning #transfer_learning
Machine Learning for Combinatorial Optimization: a Methodological Tour d’Horizon

"This paper surveys the recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art methodologies involve algorithmic decisions that either require too much computing time or are not mathematically well defined. Thus, machine learning looks like a promising candidate to effectively deal with those decisions. We advocate for pushing further the integration of machine learning and combinatorial optimization and detail methodology to do so. A main point of the paper is seeing generic optimization problems as data points and inquiring what is the relevant distribution of problems to use for learning on a given task."


https://arxiv.org/pdf/1811.06128.pdf
How Relevant is the Turing Test in the Age of Sophisbots?

Popular culture has contemplated societies of thinking machines for generations, envisioning futures from utopian to dystopian. These futures are, arguably, here now: we find ourselves at the doorstep of technology that can at least simulate the appearance of thinking, acting, and feeling. The real question is: now what?

https://arxiv.org/pdf/1909.00056.pdf
#machine_learning #technology #ethics
Noam Chomsky: Language, Cognition, and Deep Learning | Artificial Intelligence

Noam Chomsky is one of the greatest minds of our time and one of the most cited scholars in history. He is a linguist, philosopher, cognitive scientist, historian, social critic, and political activist. He has spent over 60 years at MIT and recently also joined the University of Arizona. This conversation is part of the Artificial Intelligence podcast.

https://www.youtube.com/watch?v=cMscNuSUy0I
#natural_language_processing #deep_learning
Programming a quantum computer with Cirq (QuantumCasts)

Want to learn how to program a quantum computer using Cirq? In this episode of QuantumCasts, Dave Bacon (Twitter: @dabacon) teaches you what a quantum program looks like via a simple “hello qubit” program. You’ll also learn about some of the exciting challenges facing quantum programmers today, such as whether Noisy Intermediate-Scale Quantum (NISQ) processors have the ability to solve important practical problems. We’ll also delve a little into how the open source Python framework Cirq was designed to help answer that question.
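For reference, a minimal "hello qubit" program along the lines of the public Cirq tutorial (not a transcript of the episode's code):

```python
# Minimal "hello qubit" example in Cirq, following the public Cirq tutorial.
import cirq

qubit = cirq.GridQubit(0, 0)             # pick a qubit on a 2D grid
circuit = cirq.Circuit(
    cirq.X(qubit) ** 0.5,                # square root of NOT gate
    cirq.measure(qubit, key='m'),        # measure the qubit
)
print("Circuit:")
print(circuit)

simulator = cirq.Simulator()             # simulate locally, no NISQ hardware required
result = simulator.run(circuit, repetitions=20)
print("Results:")
print(result)
```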


https://www.youtube.com/watch?v=16ZfkPRVf2w
#quantum_programming
Adaptive_Computation_and_Machine.pdf
3.4 MB
Foundations of Machine Learning
A must-read book for machine learning researchers.

It mainly discusses the mathematical background of machine learning algorithms.
No.Starch.Python.Oct_.2015.ISBN_.1593276036.pdf
5.4 MB
Python Crash Course
A comprehensive approach to programming with Python 🐍, for beginners.
An Overview of Recent State of the Art Deep Learning Algorithms/Architectures

Lecture on the most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

https://www.youtube.com/watch?v=0VH1Lim8gL8&t=999s
#deep_learning #artificial_intelligence
Neural Architecture Search for Transformers

In summary, the authors employ an evolutionary algorithm with a novel encoding scheme to search for an optimal Transformer architecture.
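As a rough illustration of that idea (not the paper's encoding, search space, or fitness function), an evolutionary architecture search loop might look like this:

```python
# Illustrative sketch of evolutionary architecture search: keep a population of
# encoded architectures, score them, and evolve by tournament selection and
# mutation. The search space and fitness below are placeholders.
import random

SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "num_heads": [4, 8, 16],
    "ffn_dim": [512, 1024, 2048],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome):
    child = dict(genome)
    gene = random.choice(list(SEARCH_SPACE))
    child[gene] = random.choice(SEARCH_SPACE[gene])
    return child

def fitness(genome):
    # Placeholder: a real search would train (at least partially) the encoded
    # architecture and return a validation score, which dominates the cost.
    return -abs(genome["num_layers"] - 6) - abs(genome["num_heads"] - 8) / 8

population = [random_genome() for _ in range(20)]
for step in range(100):
    tournament = random.sample(population, 5)
    parent = max(tournament, key=fitness)   # tournament selection
    population.append(mutate(parent))       # add a mutated child
    population.pop(0)                       # age out the oldest member

print("best architecture found:", max(population, key=fitness))
```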

https://www.youtube.com/watch?v=khA-fiC1Wa0&feature=youtu.be
Book: The Soar Cognitive Architecture

Introduction:
In development for thirty years, Soar is a general cognitive architecture that integrates knowledge-intensive reasoning, reactive execution, hierarchical reasoning, planning, and learning from experience, with the goal of creating a general computational system that has the same cognitive abilities as humans. In contrast, most AI systems are designed to solve only one type of problem, such as playing chess, searching the Internet, or scheduling aircraft departures. Soar is both a software system for agent development and a theory of what computational structures are necessary to support human-level agents. Over the years, both software system and theory have evolved. This book offers the definitive presentation of Soar from theoretical and practical perspectives, providing comprehensive descriptions of fundamental aspects and new components. The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion. This book describes details of Soar's component memories and processes and offers demonstrations of individual components, components working in combination, and real-world applications. Beyond these functional considerations, the book also proposes requirements for general cognitive architectures and explicitly evaluates how well Soar meets those requirements.

https://dl.acm.org/doi/book/10.5555/2222503
#cognitive_science #neuroscience #reinforcement_learning #artificial_intelligence