[PYTHON] Reinforcement learning 20 Colaboratory + Pendulum + ChainerRL

It is assumed that you have completed reinforcement learning 19.

Tinkering by hand had reached its limit, so I turned chainerrl/examples/gym/train_dqn_gym.py into a Colaboratory notebook almost as-is. The one point I devised is


args = parser.parse_args('')

and that is about the only change I made. The run took about 50 minutes, but it should be fine. I don't really understand the finer details.
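
The reason the empty string is needed: parse_args() with no arguments reads sys.argv, and inside a Colab/Jupyter kernel sys.argv carries the kernel's own options, so argparse errors out immediately. Passing an empty argument list simply takes every default. A minimal sketch (demo_parser and the --steps flag are purely for illustration):

import argparse

demo_parser = argparse.ArgumentParser()
demo_parser.add_argument('--steps', type=int, default=100)  # illustrative flag only
# demo_parser.parse_args() with no arguments would read sys.argv, which inside a
# notebook kernel holds the kernel's own options and makes argparse exit with an error.
demo_args = demo_parser.parse_args('')  # empty argument list -> every default is used
print(demo_args.steps)  # 100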

# Mount Google Drive so that results are kept outside the Colab session
import google.colab.drive
google.colab.drive.mount('gdrive')
!ln -s gdrive/My\ Drive mydrive
# Headless display and video tools needed to render Gym environments on Colab
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
!pip install pyvirtualdisplay > /dev/null 2>&1
!pip -q install JSAnimation
!pip -q install chainerrl

from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from builtins import *  # NOQA
from future import standard_library
standard_library.install_aliases()  # NOQA

import argparse
import os
import sys

from chainer import optimizers
import gym
from gym import spaces
import numpy as np

import chainerrl
from chainerrl.agents.dqn import DQN
from chainerrl import experiments
from chainerrl import explorers
from chainerrl import links
from chainerrl import misc
from chainerrl import q_functions
from chainerrl import replay_buffer

import logging
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format='')

parser = argparse.ArgumentParser()
parser.add_argument('--outdir', type=str, default='mydrive/OpenAI/Pendulum/result')
parser.add_argument('--env', type=str, default='Pendulum-v0')
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--gpu', type=int, default=0)
parser.add_argument('--final-exploration-steps', type=int, default=10 ** 4)
parser.add_argument('--start-epsilon', type=float, default=1.0)
parser.add_argument('--end-epsilon', type=float, default=0.1)
parser.add_argument('--noisy-net-sigma', type=float, default=None)
parser.add_argument('--demo', action='store_true', default=False)
parser.add_argument('--load', type=str, default=None)
parser.add_argument('--steps', type=int, default=10 ** 5)
parser.add_argument('--prioritized-replay', action='store_true')
parser.add_argument('--replay-start-size', type=int, default=1000)
parser.add_argument('--target-update-interval', type=int, default=10 ** 2)
parser.add_argument('--target-update-method', type=str, default='hard')
parser.add_argument('--soft-update-tau', type=float, default=1e-2)
parser.add_argument('--update-interval', type=int, default=1)
parser.add_argument('--eval-n-runs', type=int, default=100)
parser.add_argument('--eval-interval', type=int, default=10 ** 4)
parser.add_argument('--n-hidden-channels', type=int, default=100)
parser.add_argument('--n-hidden-layers', type=int, default=2)
parser.add_argument('--gamma', type=float, default=0.99)
parser.add_argument('--minibatch-size', type=int, default=None)
parser.add_argument('--render-train', action='store_true')
parser.add_argument('--render-eval', action='store_true')
parser.add_argument('--monitor', action='store_true')
parser.add_argument('--reward-scale-factor', type=float, default=1e-3)
args = parser.parse_args('')

# Set a random seed used in ChainerRL
misc.set_random_seed(args.seed, gpus=(args.gpu,))

if os.path.exists(args.outdir):
    raise RuntimeError('{} exists'.format(args.outdir))
else:
    os.makedirs(args.outdir)

print('Output files are saved in {}'.format(args.outdir))

def clip_action_filter(a):
    return np.clip(a, action_space.low, action_space.high)

def make_env(test):
    env = gym.make(args.env)
    # Use different random seeds for train and test envs
    env_seed = 2 ** 32 - 1 - args.seed if test else args.seed
    env.seed(env_seed)
    # Cast observations to float32 because our model uses float32
    env = chainerrl.wrappers.CastObservationToFloat32(env)
    if args.monitor:
        env = chainerrl.wrappers.Monitor(env, args.outdir)
    if isinstance(env.action_space, spaces.Box):
        misc.env_modifiers.make_action_filtered(env, clip_action_filter)
    if not test:
        # Scale rewards (and thus returns) to a reasonable range so that
        # training is easier
        env = chainerrl.wrappers.ScaleReward(env, args.reward_scale_factor)
    if ((args.render_eval and test) or
            (args.render_train and not test)):
        env = chainerrl.wrappers.Render(env)
    return env

env = make_env(test=False)
timestep_limit = env.spec.tags.get(
    'wrapper_config.TimeLimit.max_episode_steps')
obs_space = env.observation_space
obs_size = obs_space.low.size
action_space = env.action_space

action_size = action_space.low.size
# Use NAF to apply DQN to continuous action spaces
q_func = q_functions.FCQuadraticStateQFunction(
    obs_size, action_size,
    n_hidden_channels=args.n_hidden_channels,
    n_hidden_layers=args.n_hidden_layers,
    action_space=action_space)
# Use the Ornstein-Uhlenbeck process for exploration
ou_sigma = (action_space.high - action_space.low) * 0.2
explorer = explorers.AdditiveOU(sigma=ou_sigma)
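
(As I understand it, FCQuadraticStateQFunction is ChainerRL's NAF-style (Normalized Advantage Functions) Q-function: Q(s, a) is modelled as V(s) minus a quadratic form in (a - mu(s)), so the greedy action is simply mu(s). That is what lets plain DQN handle Pendulum's continuous action space.)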

if args.noisy_net_sigma is not None:
    links.to_factorized_noisy(q_func, sigma_scale=args.noisy_net_sigma)
    # Turn off explorer
    explorer = explorers.Greedy()

# Draw the computational graph and save it in the output directory.
chainerrl.misc.draw_computational_graph(
    [q_func(np.zeros_like(obs_space.low, dtype=np.float32)[None])],
    os.path.join(args.outdir, 'model'))

opt = optimizers.Adam()
opt.setup(q_func)

rbuf_capacity = 5 * 10 ** 5
if args.minibatch_size is None:
    args.minibatch_size = 32
if args.prioritized_replay:
    betasteps = (args.steps - args.replay_start_size) \
        // args.update_interval
    rbuf = replay_buffer.PrioritizedReplayBuffer(
        rbuf_capacity, betasteps=betasteps)
else:
    rbuf = replay_buffer.ReplayBuffer(rbuf_capacity)

agent = DQN(q_func, opt, rbuf, gpu=args.gpu, gamma=args.gamma,
            explorer=explorer, replay_start_size=args.replay_start_size,
            target_update_interval=args.target_update_interval,
            update_interval=args.update_interval,
            minibatch_size=args.minibatch_size,
            target_update_method=args.target_update_method,
            soft_update_tau=args.soft_update_tau,
            )

if args.load:
    agent.load(args.load)

eval_env = make_env(test=True)

experiments.train_agent_with_evaluation(
    agent=agent, env=env, steps=args.steps,
    eval_n_steps=None,
    eval_n_episodes=args.eval_n_runs, eval_interval=args.eval_interval,
    outdir=args.outdir, eval_env=eval_env,
    train_max_episode_len=timestep_limit)

agent.save(args.outdir+'/agent')
# Read the evaluation log written during training
import pandas as pd
import glob
import os
score_files = glob.glob(args.outdir+'/scores.txt')
score_files.sort(key=os.path.getmtime)
score_file = score_files[-1]
df = pd.read_csv(score_file, delimiter='\t')
df

df.plot(x='steps',y='average_q')
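
scores.txt is tab-separated. Assuming it has the usual ChainerRL evaluation columns (the exact set depends on the ChainerRL version, so take the column name below as an assumption), the mean evaluation return can be plotted the same way:

df.plot(x='steps', y='mean')  # mean return over the evaluation episodes (column name assumed)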

# Start a virtual display so that env.render(mode='rgb_array') works on Colab
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1024, 768))
display.start()

from JSAnimation.IPython_display import display_animation
from matplotlib import animation
import matplotlib.pyplot as plt
%matplotlib inline

frames = []
# Rebuild the environment with the same wrappers used during training
env = gym.make(args.env)
env_seed = args.seed
env.seed(env_seed)
# Cast observations to float32 because our model uses float32
env = chainerrl.wrappers.CastObservationToFloat32(env)
misc.env_modifiers.make_action_filtered(env, clip_action_filter)
# Keep the reward scaling consistent with training
env = chainerrl.wrappers.ScaleReward(env, args.reward_scale_factor)

# Record frames with the Gym Monitor wrapper
envw = gym.wrappers.Monitor(env, args.outdir, force=True)

for i in range(3):
    obs = envw.reset()
    done = False
    R = 0
    t = 0
    while not done and t < 200:
        frames.append(envw.render(mode='rgb_array'))
        action = agent.act(obs)
        obs, r, done, _ = envw.step(action)
        R += r
        t += 1
    print('test episode:', i, 'R:', R)
    agent.stop_episode()
#envw.render()
envw.close()

# Stitch the recorded frames into an animation, save it to Drive, and show it inline
from IPython.display import HTML
plt.figure(figsize=(frames[0].shape[1]/72.0, frames[0].shape[0]/72.0), dpi=72)
patch = plt.imshow(frames[0])
plt.axis('off')
def animate(i):
    patch.set_data(frames[i])
anim = animation.FuncAnimation(plt.gcf(), animate, frames=len(frames), interval=50)
anim.save(args.outdir+'/test.mp4')
HTML(anim.to_jshtml())
