Someday, when a storm downs trees and power lines on campus or elsewhere, emergency workers may turn to autonomous robots for help with immediate surveillance.
“Maybe you want a robot to roam around campus, because it’s safer for them than for a human,” says Anthony Clark, assistant professor of computer science. “Maybe you have 10 robots that can take pictures and report back, ‘Hey, there’s a tree down here, a limb fallen there, this looks like a power line that’s down,’” he says, and technicians can be dispatched immediately to the correct location.
That day may not be too far off, thanks to research being conducted by Clark and three Pomona computer science majors. Right now they are working on computer simulations, exploring how to train autonomous robots to navigate the campus using machine learning. By spring, they hope to test their methods in actual robots, prototypes of which are already under construction elsewhere in Clark’s lab.
The group scoured the campus this summer to find a building whose interior would present challenges to the autonomous robots. They settled on the Oldenborg Center because it “was potentially confusing enough for a robot trying to drive around,” with one hallway, for instance, leading to stairs in one direction and a ramp in the other.
Machine learning, Clark explains, is a subset of artificial intelligence. “It is basically an automated system that makes some decisions, and those automated decisions are based on a bunch of training data.” To generate the data, the team created an exquisitely detailed schematic of the Oldenborg interior, down to a water fountain in a hallway. Kenneth Gonzalez ’24 took 2,000 photos and used photogrammetry software to determine how many images the robot actually would need for correct decision-making. Liz Johnson ’24 created another model with the flexibility to change various elements, from carpet to wood or even grass on the floors, for example, or even rocks on the ceiling. Simon Heck ’22 worked on the back-end coding.
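For readers curious what that training step might look like in practice, here is a minimal sketch, not the team’s actual code, of turning a folder of rendered Oldenborg images into a navigation classifier with PyTorch. The directory name `oldenborg_sim` and the idea of labeling each image with a driving action are assumptions made for illustration.

```python
# Minimal sketch: train a classifier that maps a camera image to a driving action.
# Assumes a hypothetical folder layout oldenborg_sim/{forward,left,right}/*.png,
# where each rendered image is filed under the action the robot should take there.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("oldenborg_sim", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A small off-the-shelf backbone with a new head for the driving actions.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, actions in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), actions)
        loss.backward()
        optimizer.step()
```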
“The reason why we want to modify the environment, like having different lighting and changing textures, is so the robot is able to generalize,” says Clark. “The dataset will have larger amounts of diverse environments. We don’t want it to get confused if it’s going down a hallway and all of a sudden there’s a new painting on the wall.”
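One common way to get that kind of variety into a dataset is to randomize appearance, either by re-rendering the 3D model with new materials and lights, as the team describes, or by perturbing the images themselves. The sketch below shows the simpler image-space version using torchvision transforms; the specific jitter values are assumptions for illustration, not the project’s settings.

```python
from torchvision import transforms

# Image-space randomization: every training frame gets a random change in
# lighting, color, and sharpness, plus an occasional blanked-out patch that
# stands in for an unexpected object (a new painting, a moved chair).
randomize = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ColorJitter(brightness=0.5, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.RandomGrayscale(p=0.1),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.3),
])
```

Passing `randomize` as the transform in the earlier `ImageFolder` sketch would give the classifier a slightly different-looking Oldenborg on every pass through the data, which is the point Clark is making about generalization.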
Clark says that once the group has models that work in virtual environments and transfer well to the physical world, they will make the tasks more challenging. One idea is to create autonomous robots that fly rather than roll. “It’s pretty much the same process,” Clark says, “but it’s a lot more complicated.”
The goal, Clark says, “is a better way to make machine learning models transfer to a real-world device. To me, that means it’s less likely to bump into walls, and it’s a lot safer and more energy-efficient.” What keeps him up at night is the possibility that a trained machine will encounter something its dataset never covered: a person taller than anyone it has seen, for example, steps into view, and the robot mischaracterizes what it is looking at and runs into them. “I’m hoping the big takeaway from this work is how do you automatically find things that you weren’t necessarily looking for?”
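One standard guard against that kind of surprise, in the spirit of Clark’s question, is to have the robot notice when its own confidence drops and stop rather than guess. The sketch below illustrates the idea with a simple softmax-confidence threshold; `model` is the hypothetical classifier from the earlier sketch, and the 0.8 cutoff is an assumed value that would need tuning.

```python
import torch
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.8  # assumed value; would be tuned on held-out data

def safe_action(model, image_batch):
    """Return the predicted action for a single-image batch,
    or None if the frame looks too unfamiliar to act on."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image_batch), dim=1)
    confidence, action = probs.max(dim=1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        return None  # stop and flag the frame for a human instead of guessing
    return action.item()
```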