Deep Gravity
Starting with the basics is a difficult (and frustrating) approach for learning programming. Instead, find a program that does something cool - a game or a website - and tinker with the code. Experiment and try to get it to do what you want.

Over time, you'll pick up the basics you need by solving problems and modifying/writing useful programs. It's tedious to learn the elements of code by themselves, but much more enjoyable when you're using them in the context of solving a problem.

Strings, lists, and functions are not inspiring. Code that helps to photograph a black hole or discover gravitational waves is definitely inspiring.

Here's a Python notebook you can run right now that shows how to process the data used to observe gravitational waves.
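As a taste of what that kind of notebook does, here is a minimal sketch of one standard step in gravitational-wave data processing: band-pass filtering a noisy strain series to isolate the signal band. The synthetic chirp and the specific band edges are illustrative assumptions, not taken from the linked notebook; only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4096  # sample rate in Hz, typical for LIGO open data
t = np.arange(0, 4, 1 / fs)

# Synthetic "chirp" buried in noise, standing in for real detector strain
rng = np.random.default_rng(0)
chirp = np.sin(2 * np.pi * (30 + 50 * t) * t) * np.exp(-((t - 2) ** 2))
strain = chirp + 5 * rng.standard_normal(t.size)

# Band-pass around the signal band (20-300 Hz here), a standard first step
b, a = butter(4, [20, 300], btype="band", fs=fs)
filtered = filtfilt(b, a, strain)

# Out-of-band noise power is removed, so the filtered series is much quieter
print(np.var(filtered) < np.var(strain))
```

Real analyses add whitening and matched filtering on top of this, but the band-pass alone already makes the chirp visible in a plot.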

Here's some of the Python code used to photograph a black hole for the first time.

In other words, learn to code from the top down. First get a look at the big picture (what can I do with code?), then learn the fundamentals as you need them.

We learn code to solve problems and build things, not for the exercise itself!

Tweets by Will Koehrsen, a data scientist at Cortex Intel.

#Programming
#Learning

🔭 @DeepGravity
Algorithmic Improvements for #DeepReinforcement #Learning applied to Interactive Fiction

Text-based games are a natural challenge domain for deep reinforcement learning algorithms. Their state and action spaces are combinatorially large, their reward function is sparse, and they are partially observable: the agent is informed of the consequences of its actions through textual feedback. In this paper we emphasize this latter point and consider the design of a deep reinforcement learning agent that can play from feedback alone. Our design recognizes and takes advantage of the structural characteristics of text-based games. We first propose a contextualisation mechanism, based on accumulated reward, which simplifies the learning problem and mitigates partial observability. We then study different methods that rely on the notion that most actions are ineffectual in any given situation, following Zahavy et al.'s idea of an admissible action. We evaluate these techniques in a series of text-based games of increasing difficulty based on the TextWorld framework, as well as the iconic game Zork. Empirically, we find that these techniques improve the performance of a baseline deep reinforcement learning agent applied to text-based games.
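The admissible-action idea the abstract mentions can be sketched very simply: given an estimate of which actions are admissible in the current state, mask out the rest before acting greedily over Q-values. The function name, the toy Q-values, and the boolean mask below are all hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def masked_greedy_action(q_values, admissible):
    # Hypothetical helper: restrict the greedy choice to admissible actions,
    # following the notion that most actions are ineffectual in a given state.
    q = np.where(admissible, q_values, -np.inf)
    return int(np.argmax(q))

# Toy example: action 1 has the highest Q-value but is not admissible,
# so the agent picks the best admissible action instead (index 3).
q = np.array([0.2, 1.5, -0.3, 0.9])
admissible = np.array([True, False, True, True])
print(masked_greedy_action(q, admissible))  # -> 3
```

In practice the admissibility mask itself has to be learned from textual feedback, which is where most of the difficulty lies.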

Link

🔭 @DeepGravity