Mar 20, 2020 · Posted by Yen-Chen Lin, Research Intern and Andy Zeng, Research Scientist, Robotics at Google

The idea that robots can learn to directly perceive the affordances of actions on objects (i.e., what the robot can or cannot do with an object) is called affordance-based manipulation. It has been explored in research on learning complex vision-based manipulation skills, including grasping, pushing, and throwing.
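To make the idea concrete, here is a minimal sketch of how an affordance-based manipulation pipeline is often structured: a model maps a visual observation to a dense per-pixel map of predicted action success, and the robot executes the action at the highest-scoring pixel. The `predict_affordances` function below is a hypothetical stand-in for a learned network (the real systems cited use trained fully convolutional networks), so the scores here are random placeholders.

```python
import numpy as np

def predict_affordances(heightmap, num_rotations=16):
    """Hypothetical stand-in for a learned affordance model.

    In affordance-based manipulation, a fully convolutional network
    maps the visual observation (e.g., a top-down heightmap) to one
    dense map of predicted action-success scores per discrete gripper
    rotation. Here we return random scores as a placeholder.
    """
    h, w = heightmap.shape
    rng = np.random.default_rng(0)
    return rng.random((num_rotations, h, w))

def select_action(affordance_maps):
    """Pick the action parameterized by the highest-scoring pixel.

    The (rotation, row, col) index maps back to a gripper angle and a
    3D position via the camera calibration of the heightmap.
    """
    return np.unravel_index(np.argmax(affordance_maps),
                            affordance_maps.shape)

heightmap = np.zeros((64, 64))          # placeholder observation
maps = predict_affordances(heightmap)   # (rotations, H, W) scores
rotation, y, x = select_action(maps)    # best action to execute
```

The appeal of this formulation is that perception and action selection collapse into a single dense prediction problem, which convolutional networks handle well.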