"has three advantages. first, we may need fewer text/image pairs to learn this mapping now that the inputs and outputs are lower dimensional. second, we are more likely to generate a plausible-looking image; any sensible values of the latent variables should produce something that looks like a plausible example. third, if we introduce randomnesstoeitherthemappingbetweenthetwosetsoflatentvariablesorthemapping fromthelatentvariablestotheimage,thenwecangeneratemultipleimagesthatareall described well by the caption (figure 1.12). 1.3 reinforcement learning the final area of machine learning is reinforcement learning. this paradigm introduces the idea of an agent which lives in a world and can perform certain actions at each time step. the actions change the state of the system but not necessarily in a deterministic way. taking an action can also produce rewards, and the goal of reinforcement learning draft: please send errata to [email protected] 1 introduction is for the agent to learn to choose actions that lead to high rewards on average. one complication is that the reward may occur some time after the action is taken, so associating a reward with an action is not straightforward. this is known as the temporal credit assignment problem. as the agent learns, it must trade off exploration andexploitationofwhatitalreadyknows; perhapstheagenthasalreadylearnedhowto receive modest rewards; should it follow this strategy (exploit what it knows), or should it try different actions to see if it can improve (explore other opportunities)? 1.3.1 two examples consider teaching a humanoid robot to locomote. the robot can perform a limited number of actions at a given time (moving various joints), and these change the state of the world (its pose). we might reward the robot for reaching checkpoints in an obstacle course. to reach each checkpoint, it must perform many actions, and it’s unclear which ones contributed to the reward when it is received and which were irrelevant. this is an example of the temporal credit assignment problem. asecondexampleislearningtoplaychess. again,theagenthasasetofvalidactions (chess moves) at any given time. however, these actions change the state of the system in a non-deterministic way; for any choice of action, the opposing player might respond withmanydifferentmoves. here,wemightsetuparewardstructurebasedoncapturing piecesorjusthaveasinglerewardattheendofthegameforwinning. inthelattercase, the temporal credit assignment problem is extreme; the system must learn which of the many moves it made were instrumental to success or failure. the exploration-exploitation trade-off is also apparent in these two examples. the robot may have discovered that it can make progress by lying on its side and pushing withoneleg. thisstrategywillmovetherobotandyieldsrewards,butmuchmoreslowly than the optimal solution: to balance on its legs and walk. so, it faces a choice between exploitingwhatitalreadyknows(howtoslidealongthefloorawkwardly)andexploring the space of actions (which might result in much faster locomotion). similarly, in the chess example, the agent may learn a reasonable sequence of opening moves. should it exploit this knowledge or explore different opening sequences? itisperhapsnotobvioushowdeeplearningfitsintothereinforcementlearningframe- work. there are several possible approaches, but one technique is to use deep networks to build a mapping from the observed world state to an action. this is known as a policy network. 
1.4 Ethics

It would be irresponsible to write this book without discussing the ethical implications of artificial intelligence. This potent technology will change the world to at least the same extent as electricity, the internal combustion engine, the transistor, or the internet.