OpenAI is a research organization that promotes friendly artificial intelligence. Here, "friendly" means AI that is beneficial to humankind.

OpenAI was founded by Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman. Elon Musk has voiced concerns about the dangers of AI. [1] Some university students have also marched in May Day demonstrations over fears of robots taking over human labor.

Let's jump straight to the useful resources. You can get them here:


There is a tool named Gym, a toolkit for developing and comparing reinforcement learning algorithms. Reinforcement learning is a branch of machine learning in which an agent takes actions in an environment in order to maximize its cumulative reward.

I will try Gym now; we will use Baselines later, since I am still a newbie with Baselines. Click on Gym, and it will redirect you to

Let's try the introductory model!

To install Gym, open your terminal and type:

pip install gym


Now we will try the sample source code shown on the main page:

import gym
env = gym.make("CartPole-v1")
observation = env.reset()
for _ in range(1000):
  action = env.action_space.sample()  # your agent here (this takes random actions)
  observation, reward, done, info = env.step(action)
  if done:
    observation = env.reset()
env.close()
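The sample above picks actions at random. As a sketch of where a real agent would plug in, here is a minimal hand-written policy of my own (not from the Gym docs): push the cart toward the side the pole is leaning to. The names `policy` and `run_episode` are made up for this illustration, and it assumes an older Gym release where `env.step` returns a 4-tuple, as in the sample above.

```python
def policy(observation):
    """Heuristic: push the cart toward the side the pole leans to."""
    pole_angle = observation[2]  # observation: [cart pos, cart vel, pole angle, pole tip vel]
    return 1 if pole_angle > 0 else 0  # 1 = push right, 0 = push left

def run_episode(env, max_steps=1000):
    """Run one episode with the heuristic policy and return its total reward."""
    observation = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        observation, reward, done, info = env.step(policy(observation))
        total_reward += reward
        if done:
            break
    return total_reward

if __name__ == "__main__":
    # Gym is imported here so the helpers above can be reused without it.
    import gym
    env = gym.make("CartPole-v1")
    print("episode reward:", run_episode(env))
    env.close()
```

Even this one-line heuristic usually keeps the pole up noticeably longer than random actions, which is a nice way to see the reward signal respond to a better agent.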

This is a game controlled by an agent. The Gym documentation describes it as follows:

A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.

This environment corresponds to the version of the cart-pole problem described by Barto, Sutton, and Anderson [Barto83].

[Barto83] A. G. Barto, R. S. Sutton, and C. W. Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems", IEEE Transactions on Systems, Man, and Cybernetics, 1983.
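To make the termination rule in the description above concrete, here is a small sketch of the check it implies. The observation layout (cart position, cart velocity, pole angle, pole angular velocity) and the helper name `is_episode_over` are my own assumptions for illustration; in practice Gym performs this check internally and you simply read the `done` flag.

```python
import math

def is_episode_over(observation):
    """Episode-end check implied by the CartPole description:
    pole more than 15 degrees from vertical, or cart more than
    2.4 units from the center."""
    cart_position, _, pole_angle, _ = observation
    return abs(pole_angle) > math.radians(15) or abs(cart_position) > 2.4
```

For example, an observation with the pole tilted 20 degrees, or with the cart at position 3.0, would end the episode, while the upright starting state would not.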
You can also visit the source code here:

There is a menu named Environments, where you can try the other models. It links to:


