Latest Artificial Agent perceives surroundings like a Human

University of Texas

The latest innovation in artificial intelligence is an AI agent that perceives its surrounding environment much as humans do. Computer scientists at the University of Texas at Austin created the design, which could help carry out dangerous missions and may serve as a stepping stone toward search-and-rescue robots. The team developed a general-purpose robot that gathers visual information usable across a wide range of tasks. This sets the new agent apart from previous robotic designs, which were built for specific jobs, such as recognizing an object, estimating its volume, or working in a factory.

“We want an agent that’s generally equipped to enter environments and be ready for new perception tasks as they arise,” said Kristen Grauman, one of the leaders of the research. “It behaves in a way that’s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.”

The scientists trained the AI agents through deep learning, a type of machine learning inspired by the brain’s neural networks, teaching them to perceive 360-degree images of different environments. The team reportedly used the university’s supercomputers to train the agents with an AI approach called reinforcement learning. When presented with a new scene, the agent draws on its previous experience. The team compared the process to that of a tourist standing in the middle of a cathedral, taking snapshots of the unfamiliar scene around him. Unlike a tourist, however, the AI agent does not simply take pictures; it observes selectively. After each glimpse, it adds the new information about the scene to its internal model. “Just as you bring in prior information about the regularities that exist in previously experienced environments … this agent searches in a nonexhaustive way,” Grauman said. “It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks.”
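The glimpse-and-update loop described above can be sketched as a toy simulation. Everything here is illustrative, not the researchers’ actual system: the eight-direction “panorama” stands in for a 360-degree scene, and a simple spread-your-glimpses heuristic stands in for the learned reinforcement-learning policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": a panorama of 8 viewing directions with hidden values.
panorama = rng.normal(size=8)

def take_glimpse(belief, seen, direction):
    """Observe one direction and record it in the agent's belief."""
    belief[direction] = panorama[direction]
    seen[direction] = True

def choose_direction(seen):
    """Pick the unseen direction farthest (on the circle) from any observed
    one -- a hand-coded stand-in for a learned look-around policy."""
    unseen = np.flatnonzero(~seen)
    observed = np.flatnonzero(seen)
    if observed.size == 0:
        return int(unseen[0])
    n = len(seen)
    dists = [min(min(abs(u - o), n - abs(u - o)) for o in observed)
             for u in unseen]
    return int(unseen[int(np.argmax(dists))])

def look_around(budget=3):
    """Take a small, nonexhaustive budget of glimpses of the scene."""
    belief = np.zeros_like(panorama)
    seen = np.zeros(len(panorama), dtype=bool)
    for _ in range(budget):
        take_glimpse(belief, seen, choose_direction(seen))
    return belief, seen

belief, seen = look_around()
print(int(seen.sum()))  # prints 3: only 3 of 8 directions were ever observed
```

With a budget of three glimpses, the heuristic looks at directions 0, 4, and 2, spreading its observations around the circle rather than scanning exhaustively, which mirrors the selective-observation idea in the article.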

The robot reportedly infers what it would have seen had it looked in other directions, which is how it reconstructs a full 360-degree image of its surroundings. The scientists also focused on making the agent work under tight time constraints, an essential capability if the robot is to take part in search-and-rescue operations. For instance, the agent must be able to quickly locate people, hazardous objects, and flames in a burning building and relay that information to firefighters.
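The inference step — filling in directions the agent never looked at — can be illustrated with a crude completion routine. Here a nearest-observed-neighbor fill stands in for the learned inference the researchers describe; the values and the eight-direction layout are hypothetical.

```python
def complete_panorama(glimpses):
    """Fill unobserved viewing directions (None) by copying the value from
    the nearest observed direction on the circle -- a crude stand-in for
    the learned 360-degree completion described in the article."""
    n = len(glimpses)
    observed = [i for i, v in enumerate(glimpses) if v is not None]
    full = []
    for i in range(n):
        if glimpses[i] is not None:
            full.append(glimpses[i])
        else:
            # circular distance to each observed direction; take the closest
            nearest = min(observed,
                          key=lambda o: min(abs(i - o), n - abs(i - o)))
            full.append(glimpses[nearest])
    return full

# Three glimpses taken, five directions never observed.
partial = [0.5, None, None, 0.9, None, None, 0.2, None]
print(complete_panorama(partial))
# prints [0.5, 0.5, 0.9, 0.9, 0.9, 0.2, 0.2, 0.5]
```

The point of the sketch is only that a few well-placed glimpses plus a prior over scene regularities can yield a complete (if approximate) 360-degree picture, which is what lets the agent stay within a tight observation budget.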

Currently, the agent can only stand in one spot and point a camera in any direction; it cannot move to a new position. It can, however, examine an object it holds and flip it over to inspect the back side. The scientists are now working on making the robot fully mobile.