Chowdhuri codes for driverless cars, researches at Cal

Business magazine Forbes estimated that 10 million driverless cars will be on the road by 2020, and that by 2030, one in four cars will be driverless. Self-driving cars are quickly becoming a part of our lives, and will soon be a common way of traveling from place to place. Large companies like Tesla have already rolled out cars with self-driving capabilities, whose sensors provide a 360-degree view and can see up to 250 meters ahead.

But full autonomy still hasn’t been achieved, and cars are still crashing in test drives. In full autonomy, we expect cars to be able to maneuver themselves without mistakes. Engineers are slowly working toward this goal by researching different methods to improve perception and behavior. Sauhaarda Chowdhuri (10) is part of the effort to find a way for cars to make the best decisions on the road while retaining human-like judgment.

University of California, Berkeley’s DeepDrive lab is devoted to advancing driverless cars. Chowdhuri was hired to do research after reformatting the lab’s code on GitHub, an online code storage website where programmers can collaborate easily. Other engineers can view the code and request certain changes to it. They can also copy the code and modify it for their own uses without affecting the original.

Chowdhuri sent his own version of the code back to the lab, and it greatly outperformed Berkeley’s own models. He was hired by Karl Zipser, Principal Investigator and Research Professor, and upon joining the lab, he was allowed to research anything he wanted in the field.

“I was given this dataset, and these model RC cars, and he’d [his professor] say, ‘Do whatever you want,’ so I kind of had to think of some ideas. I stumbled into a salamander spinal cord and brain, which was really fascinating and completely outside of the computer science field, so I studied the animal. I actually created a neural network inspired by that spinal cord,” Chowdhuri said. The network’s behaviors and actions can be trained on datasets, essentially learning from them, so that it can then perform the tasks it was trained to do.
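The idea of training on a dataset can be sketched in a few lines. The example below is purely illustrative, not the lab’s actual code: the sensor readings, the linear model, and all the numbers are invented. A model starts out knowing nothing and, by repeatedly comparing its predictions against recorded examples, learns a rule for steering.

```python
import numpy as np

# Illustrative only: a model learns a steering command from two
# hypothetical sensor readings by gradient descent on example data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))   # invented [left_gap, right_gap] readings
y = 0.8 * X[:, 0] - 0.8 * X[:, 1]      # the "correct" steering in the dataset

w = np.zeros(2)                        # model weights, initially untrained
for _ in range(500):                   # training loop: learn from the data
    pred = X @ w                       # model's current steering guesses
    grad = X.T @ (pred - y) / len(y)   # gradient of the mean squared error
    w -= 0.5 * grad                    # adjust weights to reduce the error

print(np.round(w, 2))                  # approximately the true rule [0.8, -0.8]
```

After training, the weights recover the rule hidden in the data, which is all that “learning from a dataset” means at this small scale.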

Chowdhuri paid a two-week visit to Berkeley while he was working on publishing a research paper, as well as giving demos to raise funds from Nvidia, a technology company that manufactures graphics chips for mobile computing and the automotive market. He also helped his professor format and process data.

“[My professor] was collecting a whole new dataset with these cars in an arena, like a small room, rather than outdoors driving on paths. So it was a very small, condensed arena with six cars driving around at the same time and avoiding each other, and it’s a really complex environment; cars are randomly crashing into each other,” Chowdhuri said. His professor said that the cars would learn by themselves and gather a good dataset for training.

While at Berkeley, Chowdhuri also needed to perform more control trials with multiple cars to validate his findings. He submitted his paper on the salamander neural net to the International Conference on Robotics and Automation (ICRA) in Brisbane on Sept. 15. Results for the conference will be released at the beginning of next year.

However, the salamander neural net was not Chowdhuri’s first idea. He had previously considered three other potential projects, but kept running into obstacles he couldn’t fix, which prevented him from pursuing them. Chowdhuri’s neural network features several behavioral modes that the car performs on its own, such as direct mode, in which it drives straight and avoids obstacles, and follow mode, in which it follows the car in front. But his modes exhibited strange behaviors that he couldn’t explain.
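To make the two modes concrete, here is a rough sketch of the behaviors they describe. Everything in it is hypothetical: the function names, distances, and throttle values are invented for illustration, and Chowdhuri’s network learns these behaviors from data rather than hard-coding rules like these.

```python
# Hypothetical sketch of the two behavioral modes described above.
def direct_mode(obstacle_dist):
    """Drive straight; slow down only when an obstacle is close."""
    return 0.2 if obstacle_dist < 1.0 else 1.0   # throttle (0 = stop, 1 = full)

def follow_mode(lead_car_dist):
    """Track the car ahead: stop-and-go to keep a safe gap."""
    if lead_car_dist < 0.5:
        return 0.0      # too close: stop
    if lead_car_dist < 1.5:
        return 0.5      # closing in: slow down
    return 1.0          # safe gap: full speed

MODES = {"direct": direct_mode, "follow": follow_mode}

def throttle(mode, distance):
    """Dispatch to whichever behavioral mode is active."""
    return MODES[mode](distance)
```

In this simplified picture, stopping and slowing belong to follow mode, which is why seeing the same start-stop behavior in direct mode was so puzzling.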

“In follow mode, there’s usually quite a bit of stopping and slowing down, but in direct mode, there’s none of that,” Chowdhuri said. “The weird thing we noticed is that in direct mode, it suddenly started slowing down, speeding up and stopping when there was no obstacle. And we said, ‘Why is this happening?’ Our hypothesis was that it was learning from the other behavioral modes, because that’s the whole point of my paper; by training on multiple modes, it’ll train better than on a single mode. But when training it only with direct mode data, the car still exhibited the start-stop behavior.”

So Chowdhuri graphed the speed changes against every other quantity in his network, but he still couldn’t figure out why it was happening. More confusingly, his colleagues at Berkeley would encounter the problem when running his code, while at home in San Diego, Chowdhuri rarely did. Unable to fix it, and facing a tight deadline for the paper, he decided to research a different topic: his salamander-inspired neural net.

Although he has just finished his paper, Chowdhuri is already starting a new project using a process called reinforcement learning, in which a system learns from its past mistakes and improves on them. Advancements in driverless cars won’t stop, and neither will he.
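Reinforcement learning can be illustrated with a toy problem far simpler than driving. The sketch below is an invented stand-in, not the lab’s code: an agent tries two actions, is rewarded only for the good one, and its value estimates improve from that experience, so its mistakes become rarer over time.

```python
import random

# Illustrative reinforcement learning on a made-up two-action problem.
random.seed(0)
Q = [0.0, 0.0]                  # the agent's estimate of each action's value

def reward(action):             # hypothetical environment: action 1 is correct
    return 1.0 if action == 1 else 0.0

for _ in range(200):
    # explore 20% of the time; otherwise exploit the best-known action
    if random.random() < 0.2:
        a = random.randrange(2)
    else:
        a = Q.index(max(Q))
    Q[a] += 0.1 * (reward(a) - Q[a])   # update the estimate from experience

print(Q.index(max(Q)))          # index of the action the agent now prefers
```

Early on, the agent picks badly and gets nothing; each mistake nudges its estimates, until the rewarded action dominates. Scaled up, the same trial-and-error loop is what lets a car improve its driving from its own past behavior.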