
Motor Experience Alters Action Perception Through Predictive Learning of Sensorimotor Information

Jimmy Baraglia, Jorge L. Copete, Yukie Nagai, and Minoru Asada
Graduate School of Engineering, Osaka University
2-1 Yamada-oka, Suita, Osaka, 565-0871 Japan

Email: {jimmy.baraglia,jorge.copete,yukie,asada}@ams.eng.osaka-u.ac.jp

Abstract: Recent studies have revealed that infants' goal-directed action execution strongly alters their perception of similar actions performed by other individuals. Such an ability to recognize correspondences between self-experience and others' actions may be crucial for the development of higher cognitive social skills. However, there is not yet a computational model or constructive explanation accounting for the role of action generation in the perception of others' actions. We hypothesize that sensory and motor information are integrated at a neural level through a predictive learning process. Thus, the experience of motor actions alters the representation of the sensorimotor integration, which causes changes in the perception of others' actions. To test this hypothesis, we built a computational model that integrates visual and motor (hereafter, visuomotor) information using a Recurrent Neural Network (RNN), which is capable of learning temporal sequences of data. We modeled the visual attention of the system based on a prediction error, calculated as the difference between the predicted sensory values and the actual sensory values, which maximizes attention toward sensory information that is neither too predictable nor too unpredictable. We performed a series of experiments with a simulated humanoid robot. The experiments showed that motor activation during self-generated actions biased the robot's perception of others' actions. These results highlight the important role of modality integration in humans, which accounts for a biased perception of our environment based on a restricted repertoire of our own experienced actions.

I. INTRODUCTION

In early infancy, humans are not yet able to detect the goal of others' actions. Later on, infants undergo a developmental process that allows them to perceive others' actions as goal-directed. Several studies have been carried out to reveal when and how infants start understanding goal-directed actions. A remarkable work conducted by Woodward [1] shows that infants (5, 6 and 9 months old) with goal-directed action experience react differently to actors reaching for and grasping objects. For the experiments, four actors were presented to the infants: a human arm, a rod, a flat occluder and a mechanical grasping tool. The experiment indicated that when infants were habituated to goal-directed actions (i.e., the human arm condition), they showed a stronger novelty response to test events that varied the goal of the action (e.g., the grasped object) than to test events that varied the physical properties of the action (e.g., the motion path). On the other hand, if the actions were not goal-directed (i.e., the rod and the flat occluder conditions), or were goal-directed but the agency of the actor was difficult to infer (i.e., the mechanical grasping tool condition), infants did not prefer one type of response to the other (i.e., the goal of the action versus the properties of the action).

Later, Sommerville et al. [2] studied the impact of infants' action production on their perception of others' actions. The participants of the experiment were 3-month-old infants, divided into two groups. The first group of infants participated in an action task which consisted in interacting with two objects in order to acquire action experience. Because 3-month-old infants are not yet able to coordinate gaze and manual contact with objects, a pair of mittens was put on the infants' hands so that they could easily make contact with the objects. The second group did not participate in the action task. Then, during the habituation phase, both groups of infants saw an actor reaching for and grasping one of two objects. Finally, during the test phase, the position of the objects was reversed and the infants saw new test events: a new goal event (i.e., the actor reached the same position as during habituation but grasped a different goal), and a new path event (i.e., the actor reached a different position but grasped the same goal). Infants' looking time was measured during the experiment. The results showed that the looking time of the first group of infants was longer than that of the second group in the first trial of the habituation phase. Also, the first group of infants looked significantly longer at the new goal event than at the new path event, whereas the second group of infants looked equally at both events. According to Sommerville et al. [2], these findings reflect infants' ability to detect the goal structure of an action after experiencing object-directed behavior, and to apply this knowledge when perceiving others' actions. Therefore, in order to understand this phenomenon, we find it important to clarify the underlying mechanism that accounts for the influence of the motor system on the perceptual system and enables infants to detect the goal in others' actions. Further, we need to explain the connection of this mechanism to visual attention, in accordance with [2], which found that self-action production leads to an increase of visual attention to action goals. In regard to studies on visual attention, experiments conducted by Kidd et al. [3] showed that, when varying the complexity of a sequence of visual events, the probability that infants (7 and 8 months old) look away from those events became higher when the complexity of the stimulus was very low or very high. Their findings suggested that infants allocate their attention in order to maintain an intermediate level of complexity.


In this study we build a computational model to explain how action production alters the perception of others' actions as reported by Sommerville et al. [2]. Specifically, we focus on the relation between the visual and motor systems and the effect of action experience on the perception of others' actions. Our hypothesis is that motor activity biases visual perception through the joint representation of visuomotor experiences. As an extension of the aforementioned hypothesis, we also propose that the allocation of visual attention is modulated by the prediction error that results from making correspondences between own experience and others' actions. We carried out experiments using the iCub Simulator to validate our hypothesis.
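To illustrate the proposed attention modulation, the sketch below implements one plausible reading of it: attention peaks at an intermediate prediction error, so that neither highly predictable nor highly unpredictable stimuli attract maximal attention, consistent with the Goldilocks-like allocation reported by Kidd et al. [3]. The Gaussian form and the parameters e_opt and sigma are our illustrative assumptions, not the model's actual implementation.

```python
import numpy as np

def attention_weight(prediction_error, e_opt=0.5, sigma=0.2):
    """Inverted-U attention over the prediction error.

    Attention is maximal when the prediction error is close to the
    intermediate value e_opt and falls off for stimuli that are too
    predictable (low error) or too unpredictable (high error).
    The Gaussian shape and both parameters are illustrative assumptions.
    """
    return np.exp(-((prediction_error - e_opt) ** 2) / (2.0 * sigma ** 2))

# Low, intermediate, and high prediction errors
for e in (0.05, 0.5, 1.5):
    print(f"error={e:.2f} -> attention={attention_weight(e):.3f}")
```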

II. HYPOTHESIS

A. Findings in Infant Study

Sommerville et al. [2] reported that infants' action experience alters their perception when observing others' actions. Here we focused on two main findings:

1) Experience apprehending objects initially increases infants' attention to similar reaching events performed by another person (Figure 2 in [2]);

2) Infants with apprehension experience looked significantly longer at new goal events than at new path events, whereas infants without that experience looked equally at both events (Figure 3 in [2]).

Sommerville et al. [2] suggested that the first finding may reflect infants' ability to recognize correspondences across executed and observed actions and/or an increased motivation to attend to action after acting. Regarding the second finding, they indicated that experience apprehending objects directly prior to viewing the visual habituation event enabled infants to detect the goal-directed structure of the event in another person's actions.

B. Our Hypothesis

We argue that action experience is acquired through the process of integrating motor and sensory information, and that the joint representation derived from this integration is used to learn to predict others' actions. Further, the motor information contains a representation of the action goal that alters the sensorimotor representation in terms of the goal [4], and that is how motor experience allows infants to detect the goal in others' actions. Here, the motor information accounts for the set of signals that control the motion of the body, and the signals that encode the final target of the motion. In the case of reaching for an object with the hand, the motor signals control the arm's movement from its current position to the next one, and indicate the final target of the motion. Based on this argumentation, our hypothesis is that when perceiving others' actions infants make predictions based on the sensory information (Fig. 1-a), whereas when generating actions infants learn to predict motor and sensory information, and consequently to encode the sensorimotor representation of experiences in terms of goals (Fig. 1-b). Therefore, since both action perception and action production share the same predictor, action production influences action observation.

Fig. 1. Our hypothesis. (a) During action observation, infants receive sensory information and predict sensory information. (b) During action execution, infants receive sensory and motor information and predict sensory and motor information.
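As a minimal sketch of this shared-predictor idea, the code below assumes an Elman-style recurrent network trained to predict the next concatenated visuomotor vector; the architecture, the layer sizes, and the delta-rule update on the output weights only are our simplifications, not the authors' exact model. During observation the motor channels are simply zero-filled, so the very same predictor serves both action production and action perception.

```python
import numpy as np

class SensorimotorRNN:
    """Elman-style recurrent predictor over a concatenated visuomotor
    vector (a simplified sketch, not the authors' exact architecture)."""

    def __init__(self, n_visual, n_motor, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        n_in = n_visual + n_motor
        self.n_visual = n_visual
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.h = np.zeros(n_hidden)
        self.lr = lr

    def step(self, x):
        """Update the recurrent context and predict the next visuomotor vector."""
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

    def train_step(self, x_t, x_next):
        """One step of predictive learning (delta rule on the output weights
        only, for brevity); returns the sensory part of the prediction error."""
        pred = self.step(x_t)
        err = pred - x_next
        self.W_out -= self.lr * np.outer(err, self.h)
        return float(np.linalg.norm(err[:self.n_visual]))

# Production: real motor commands accompany the visual input.
# Observation: motor channels are zero-filled, so the same predictor is
# reused, which is how motor experience can bias perception.
rnn = SensorimotorRNN(n_visual=4, n_motor=2, n_hidden=16)
vision, motor = 0.3 * np.ones(4), 0.8 * np.ones(2)
produced = np.concatenate([vision, motor])        # self-generated action
observed = np.concatenate([vision, np.zeros(2)])  # another agent's action
print(rnn.train_step(produced, produced))         # learn from own action
print(rnn.step(observed)[:4])                     # predicted sensory values
```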

During this learning process, a prediction error arises between the predicted sensory information and the actual one. The magnitude of the prediction error depends on the action experience. We argue that the prediction error plays a fundamental role in the development of several cognitive abilities [5]. Several robotic studies propose that predictive learning could account for the development of imitation [6], helping behavior [7] and goal-directed prediction in early infancy [8]. Kilner et al. [9] showed that prediction error could account for the development of the mirror neuron system. Their hypothesis is that minimizing the prediction error could help infer the cause of an observed act
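For concreteness, the prediction error referred to above can be written down following the abstract's definition, i.e., the difference between the predicted and the actual sensory values; the choice of the Euclidean norm is our assumption:

```latex
% Prediction error at time t: predicted sensory values minus actual
% sensory values, as defined in the abstract; the Euclidean norm is
% our assumption, as the paper states only "the difference".
e(t) = \bigl\lVert \hat{s}(t) - s(t) \bigr\rVert_2
```

Under our hypothesis, the magnitude of e(t) decreases as action experience accumulates, and it is this quantity that modulates visual attention.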