@KYLE_L_WIGGERS
Source: venturebeat.com, May 2019


Our world might someday look like something out of an Isaac Asimov novel, and not for the worse. In one popular depiction of the far-flung future, robot butlers will attend to our whims and perform menial chores like washing dishes, folding laundry, and walking pets. They’ll look after our children, stand in for nurses and physician assistants at outpatient clinics and hospitals, and personalize meal plans in restaurants for every conceivable diet.

It’s an attractive vision to be sure, but here’s the hard truth: Logistical challenges stand in the way of Rosie-from-the-Jetsons-like self-sufficiency. The bulk of robots today lean heavily on heuristics, or handcrafted rules, to perform tasks. Consider Flippy, the burger-flipping bot from Miso Robotics: Its arms are more dexterous than your average industrial robot, but the motions they’re required to make are relatively few compared with those that would be expected of a hypothetical home robot. Flippy wouldn’t “know” the first thing about tucking a kid into bed, just as it’d be ill-equipped to make a soufflé or crudité.

Models like Nvidia AI's SimOpt represent promising steps toward truly plug-and-play robots, meaning robots capable of learning skills with limited prior knowledge or instruction. SimOpt leverages reinforcement learning, a training approach that uses rewards to drive AI agents toward goals, to transfer behaviors learned in simulation into real-world action. In related research, scientists at Facebook AI and the University of California, Berkeley, employed reinforcement models to imbue robots with a "sense" of touch that extended their ability to move and manipulate objects.
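To make the reward-driven loop concrete, here is a minimal tabular Q-learning sketch. It is a toy one-dimensional world, not SimOpt or any production system; the states, actions, and reward values are invented purely for illustration.

```python
import random

# Toy 1-D corridor: the agent starts at position 0, the goal is at 4.
# A reward of 1.0 arrives only when the goal is reached; that sparse
# signal is all that drives learning, as in reinforcement learning generally.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the best next action.
            best_next = max(q[(s2, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy moves right from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
print(policy)
```

The same reward-driven structure underlies far larger systems; they simply swap the lookup table for a neural network and the corridor for a physics simulator.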

Even cutting-edge reinforcement techniques aren’t particularly efficient — training SimOpt, for instance, requires about 9,600 two-hour simulations in Nvidia’s FleX physics simulation engine — but with the ubiquity of distributed computing, it’s not difficult to imagine how they might be scaled. Software running on powerful cloud AI accelerators might synthesize scenarios targeting a domain — for example, slicing vegetables — until a baseline level of accuracy is achieved, and then transfer the new knowledge to the real-world machine. In this way, robots might literally learn overnight.
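The train-in-simulation-until-a-baseline loop described above might be sketched as follows. Everything here is a stand-in, not Nvidia's actual pipeline: the thread pool stands in for a fleet of cloud workers, and the batch function fakes a success rate that improves with each round.

```python
import random
from concurrent.futures import ThreadPoolExecutor

TARGET = 0.90  # baseline accuracy required before transferring to hardware

def simulate_batch(round_no, trials=500):
    """Stand-in for a batch of simulated trials (e.g. slicing vegetables).

    A real system would roll out a learned policy in a physics engine;
    here the policy's "skill" is faked to improve with each round.
    """
    rng = random.Random(round_no)
    skill = min(0.95, 0.50 + 0.05 * round_no)
    return sum(rng.random() < skill for _ in range(trials)) / trials

def train_until_baseline(workers=4):
    round_no = 0
    while True:
        # Threads stand in for distributed cloud simulation workers.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            accs = list(pool.map(simulate_batch,
                                 range(round_no, round_no + workers)))
        round_no += workers
        mean_acc = sum(accs) / len(accs)
        if mean_acc >= TARGET:
            return round_no, mean_acc  # ready for sim-to-real transfer

rounds, acc = train_until_baseline()
print(f"hit {acc:.2%} after {rounds} simulated rounds")
```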

Self-supervision is inexorably intertwined with reinforcement learning: it uses proxy tasks that let AI agents learn autonomously, reaching accuracy on par with (or better than) supervised models in fewer steps. So too is transfer learning, in which an AI system architected for one task is reused as the starting point for a second task.
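As a rough illustration of transfer learning, the sketch below trains a perceptron on one toy classification task, then reuses its weights as the starting point for a related task. The tasks, data, and margin trick are invented for the example.

```python
import random

def make_data(rng, shift, n=200, margin=0.1):
    # Points labeled by whether y > x + shift, keeping a margin around
    # the boundary so the perceptron converges quickly.
    data = []
    while len(data) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if abs(y - (x + shift)) > margin:
            data.append(((x, y, 1.0), 1 if y > x + shift else -1))
    return data

def train(data, w, epochs=50):
    updates = 0
    for _ in range(epochs):
        for feats, label in data:
            score = sum(wi * f for wi, f in zip(w, feats))
            if (1 if score > 0 else -1) != label:
                # Perceptron rule: update weights only on mistakes.
                w = [wi + label * f for wi, f in zip(w, feats)]
                updates += 1
    return w, updates

def accuracy(w, data):
    correct = sum(
        (1 if sum(wi * f for wi, f in zip(w, feats)) > 0 else -1) == label
        for feats, label in data
    )
    return correct / len(data)

rng = random.Random(0)
task_a = make_data(rng, shift=0.0)   # first task
task_b = make_data(rng, shift=0.2)   # related second task

w_a, _ = train(task_a, [0.0, 0.0, 0.0])
w_scratch, updates_scratch = train(task_b, [0.0, 0.0, 0.0])
w_warm, updates_warm = train(task_b, list(w_a))  # warm start: transfer
print(accuracy(w_warm, task_b), updates_scratch, updates_warm)
```

The warm-started model begins with weights already roughly aligned with the new boundary, which is the whole appeal of transfer learning: less to relearn from scratch.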

The techniques are powerful when combined, as a team at Princeton, Columbia, and Google recently demonstrated. They developed TossingBot, a picker robot that learns to grasp and throw objects into boxes in never-before-seen locations. After 10,000 grasp-and-throw attempts over the course of about 14 hours, TossingBot can firmly grasp an object in a cluttered pile about 87% of the time.

To be clear, even the most sophisticated robot systems today — those that use a combination of reinforcement, transfer, and semi-supervised learning — are nowhere near as capable as even human infants. Mechanical limitations aside, they’re task-oriented and not particularly versatile (as alluded to earlier).

A growing body of research investigates unsupervised learning, which some experts believe might be the key to achieving true autonomy, for tasks like object sorting; some groups adopt a hybrid approach that pairs unsupervised data collection with guided planning. For its part, Facebook is leveraging partially unsupervised reinforcement learning to train AI through repeated simulations that don't require task-specific training. But it's early days.
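For a flavor of how unsupervised "sorting" works, here is a small k-means sketch. The (size, weight) features and the three object types are synthetic, invented for the example; real systems would cluster learned visual representations rather than hand-picked numbers.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_first(points, k):
    # Deterministic initialization: repeatedly pick the point farthest
    # from all centers chosen so far.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points,
                           key=lambda p: min(dist2(p, c) for c in centers)))
    return centers

def kmeans(points, k, iters=20):
    centers = farthest_first(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest center wins
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        for i, c in enumerate(clusters):  # update step: move to cluster mean
            if c:
                centers[i] = tuple(sum(vals) / len(c) for vals in zip(*c))
    return centers, clusters

rng = random.Random(1)
# Three synthetic object types with distinct size/weight profiles,
# presented to the algorithm with no labels at all.
points = (
    [(rng.gauss(1, 0.1), rng.gauss(1, 0.1)) for _ in range(30)]
    + [(rng.gauss(5, 0.1), rng.gauss(1, 0.1)) for _ in range(30)]
    + [(rng.gauss(3, 0.1), rng.gauss(4, 0.1)) for _ in range(30)]
)
centers, clusters = kmeans(points, k=3)
print(sorted(len(c) for c in clusters))  # the three object types re-emerge
```

No labels ever enter the loop; the grouping falls out of the structure of the data, which is the core promise that makes unsupervised learning attractive for autonomy.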

That’s all to say that real-world robots aren’t likely to catch up to their sci-fi betters anytime soon. Setting aside the question of unit economics, enormous barriers stand in the way of humanlike machines. Dogged researchers soldier onward undeterred, and their work will no doubt bear fruit in production systems. In the near term, though, don’t expect the robot deliveryman who drops off your next package to carry on much of a conversation.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer