Reinforcement Learning

* [https://gym.openai.com/ OpenAI Gym]
* [https://rise.cs.berkeley.edu/blog/scaling-multi-agent-rl-with-rllib/ RLlib] [https://ray.readthedocs.io/en/latest/rllib.html docs]
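Both libraries build on the same environment loop Gym standardized: `reset()` returns a state, `step(action)` returns `(state, reward, done, info)`. A minimal, self-contained sketch of tabular Q-learning against that interface (the `ChainEnv` toy environment below is an illustration of the API shape, not part of Gym):

```python
import random

class ChainEnv:
    """Toy 5-state chain with a Gym-style reset/step interface:
    moving right toward the last state eventually pays reward 1."""
    def __init__(self, n=5):
        self.n = n

    def reset(self):
        self.s = 0
        return self.s

    def step(self, action):  # action: 0 = left, 1 = right
        self.s = min(self.s + 1, self.n - 1) if action == 1 else max(self.s - 1, 0)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else 0.0), done, {}

def greedy(Q, s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, 0)], Q[(s, 1)])
    return random.choice([a for a in (0, 1) if Q[(s, a)] == best])

def q_learning(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in range(env.n) for a in (0, 1)}
    for _ in range(episodes):
        s = env.reset()
        for _ in range(100):  # cap episode length
            a = random.randrange(2) if random.random() < eps else greedy(Q, s)
            s2, r, done, _ = env.step(a)
            # tabular Q-learning update toward the bootstrapped target
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

random.seed(0)
env = ChainEnv()
Q = q_learning(env)
policy = [greedy(Q, s) for s in range(env.n)]
print(policy)  # in non-terminal states the learned policy moves right
```

The same loop runs unchanged against a real `gym.make(...)` environment with a small discrete state space.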
  
 
==Multi-Armed Bandit Examples==
 
 
* [https://medium.com/idealo-tech-blog/using-deep-learning-to-automatically-rank-millions-of-hotel-images-c7e2d2e5cae2 Hotel Image Ranking] (aesthetic & technical quality of images)
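The core bandit trade-off behind examples like this is exploration vs. exploitation. A minimal epsilon-greedy sketch on simulated Bernoulli arms (the arm probabilities below are made up for illustration):

```python
import random

def epsilon_greedy_bandit(probs, steps=10000, eps=0.1):
    """Epsilon-greedy on Bernoulli arms with the given win probabilities."""
    n = len(probs)
    counts = [0] * n    # pulls per arm
    values = [0.0] * n  # running mean reward per arm
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(n)                     # explore
        else:
            arm = max(range(n), key=lambda a: values[a])  # exploit best estimate
        reward = 1.0 if random.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

random.seed(1)
values, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(counts)  # the highest-probability arm dominates the pull counts
```

In an image-ranking setting each "arm" would instead be a candidate image, with clicks as the Bernoulli reward.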
  
==Multi-Agent Learning==

* Stochastic games, Nash-Q, gradient ascent, WoLF, mean-field Q-learning, particle swarm intelligence, Ant Colony Optimization (Colorni et al., 1991)

* [https://towardsdatascience.com/smart-incentives-and-game-theory-in-decentralized-multi-agent-reinforcement-learning-systems-58442e508378 Game Theory in Smart Decentralised Multi-agent RL]

* As above: the Prowler architecture combines multi-agent reinforcement learning (MARL) with Bayesian optimization in a simulated environment. MARL simulates the agents' actions and produces their Nash-equilibrium behaviour for a given choice of parameters by the meta-agent; Bayesian optimization then selects the game parameters (the incentives) that lead to more desirable outcomes. Because Bayesian optimization searches efficiently under uncertainty, it suits the stochastic dynamics of the system.

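The two-level loop described above can be sketched on a toy problem. Everything here is an illustrative assumption: a stag-hunt-style 2×2 game, independent stateless Q-learners standing in for full MARL, and a small grid search standing in for Bayesian optimization over the meta-agent's incentive parameter `b`:

```python
import random

# Toy 2-player, 2-action game: both playing action 1 is socially best but
# risky, so learners tend to settle on the safe (0, 0) equilibrium.
# A meta-agent pays a bonus b to any agent that plays action 1.
def payoff(a1, a2, b):
    base = {(0, 0): (2, 2), (0, 1): (2, 1), (1, 0): (1, 2), (1, 1): (4, 4)}
    r1, r2 = base[(a1, a2)]
    return r1 + (b if a1 == 1 else 0), r2 + (b if a2 == 1 else 0)

def equilibrium_play(b, rounds=2000, eps=0.1, alpha=0.1):
    """Inner loop: independent eps-greedy Q-learners converge to an equilibrium."""
    Q = [[0.0, 0.0], [0.0, 0.0]]  # per-agent action values
    for _ in range(rounds):
        acts = [random.randrange(2) if random.random() < eps
                else max((0, 1), key=lambda a: Q[i][a]) for i in range(2)]
        rs = payoff(acts[0], acts[1], b)
        for i in range(2):
            Q[i][acts[i]] += alpha * (rs[i] - Q[i][acts[i]])
    return [max((0, 1), key=lambda a: Q[i][a]) for i in range(2)]

def meta_objective(b):
    """Outer loop's score: social welfare at the induced equilibrium,
    net of the incentives actually paid out."""
    a1, a2 = equilibrium_play(b)
    r1, r2 = payoff(a1, a2, 0.0)
    return r1 + r2 - b * ((a1 == 1) + (a2 == 1))

# Grid search here stands in for Bayesian optimization over b.
random.seed(0)
best_b = max([0.0, 1.5, 3.0], key=meta_objective)
print(best_b)
```

The meta-agent learns that a moderate bonus tips the learners into the good (1, 1) equilibrium, while a larger bonus buys the same behaviour at unnecessary cost.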
==Extra==
  
 
===Git Repos===
 
 
* [https://github.com/tensorflow/agents TF-Agents] (TF-Agents is a library for Reinforcement Learning in TensorFlow)
 
* [https://github.com/kengz/SLM-Lab SLM-Lab] (Modular Deep Reinforcement Learning framework in PyTorch)
* [https://github.com/samshipengs/Coordinated-Multi-Agent-Imitation-Learning Coordinated-Multi-Agent-Imitation-Learning]
  
 
===Literature===
 
 
* [https://arxiv.org/pdf/1706.06978.pdf Zhou et al. 2018] (Alibaba Group, Deep Interest Network, Click Through Rate Prediction)
 
* [https://medium.com/@vermashresth/a-primer-on-deep-reinforcement-learning-frameworks-part-1-6c9ab6a0f555 RL Frameworks]
* [https://arxiv.org/pdf/1802.09756.pdf Real Time Bidding] (Distributed Coordinated Multi-agent reinforcement learning)
** [https://chemoinformatician.co.uk/images/RTB_multi-agent.png RTB image]
* [https://rise.cs.berkeley.edu/blog/scaling-multi-agent-rl-with-rllib/ Berkeley Multi-agent RL Scaling OpenSource]
* [https://arxiv.org/pdf/1901.10923.pdf Coordinating the Crowd: Inducing Desirable Equilibria in Non-Cooperative Systems] (Multi-agent RL, 2019)
* [https://arxiv.org/pdf/1902.01554 Kim et al. 2019] (Learning to Schedule Communication in Multi-agent Reinforcement Learning)

Latest revision as of 17:48, 10 July 2019
