
Milind Tambe | Harvard Paulson School of Engineering and Applied Sciences

In 2007, Milind Tambe got a call from the chief of security at Los Angeles International Airport (LAX). He had a counter-terrorism security issue that he thought Tambe, then a professor of engineering at the University of Southern California, could help solve. “It was a very specific issue, not a general idea of improving security,” recalls Tambe of why his work with artificial intelligence (AI) could be a good fit. “He had several roads into the airport and a limited number of checkpoints and canine patrols, so he needed to figure out where to put them.” Tambe got to work with his students, designing models that used computer science methods rooted in game theory to work out the most favorable, yet randomized, spots for checkpoints, solving the security chief’s conundrum.
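To give a flavor of that kind of model, here is a minimal sketch of a highly simplified, zero-sum checkpoint-placement game in Python. The road values, the number of checkpoints, and the zero-sum payoff structure are illustrative assumptions, not the actual LAX system: the defender chooses a coverage probability for each road, the attacker picks the road with the best expected payoff, and a small linear program finds the randomized coverage that minimizes that payoff.

```python
# Hedged sketch: a simplified, zero-sum version of the security-games idea,
# NOT the deployed LAX model. Road values and checkpoint count are made up.
import numpy as np
from scipy.optimize import linprog

values = np.array([10.0, 8.0, 5.0, 3.0, 1.0])  # hypothetical attacker value per road
n, k = len(values), 2                           # 5 roads, 2 checkpoints (assumed)

# Variables: c_1..c_n (coverage probabilities) and z (attacker's best expected payoff).
obj = np.r_[np.zeros(n), 1.0]                   # minimize z

# Constraint (1 - c_i) * v_i <= z, rewritten as -v_i * c_i - z <= -v_i
A_ub = np.hstack([-np.diag(values), -np.ones((n, 1))])
b_ub = -values
# Limited checkpoints: sum(c_i) <= k
A_ub = np.vstack([A_ub, np.r_[np.ones(n), 0.0]])
b_ub = np.r_[b_ub, k]

bounds = [(0.0, 1.0)] * n + [(0.0, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

print("Checkpoint coverage probabilities per road:", np.round(res.x[:n], 2))
print("Attacker's best expected payoff:", round(res.x[-1], 2))
```

Because the optimum spreads coverage across the most attractive roads rather than concentrating on one, the resulting checkpoint schedule looks unpredictable to an observer, which is the point of the randomization.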

For the professor with a social vision, the project was a game-changer. When Tambe left his home country of India to pursue a Ph.D. in computer science at Carnegie Mellon, he never expected to be tackling issues of homeland security or homelessness. He was comfortable behind the scenes, designing algorithms and code. Still, Tambe always had a nagging desire to do good in the world and began thinking more about how to put his expertise into action. He was energized and excited by the new direction of his work: “It just seemed like a different way of doing research. I really thought, this is a much better fit for me; I was enjoying it much more.”


He was soon getting calls from other agencies, including the US Coast Guard and the Federal Air Marshals Service, to help them tackle the allocation of limited resources using his pioneering security games methods. The research is credited with more than $100 million in savings to US agencies. “It was really exciting being out there and seeing the methods that have been developed in the lab being used in the field, and then, conversely, from the field, we were designing new approaches and methods to solve these problems, because the methods we had didn’t quite always fit.”

Tambe was now doing more than good deeds; he was pioneering an entirely new academic field as he put AI to work for social impact.

Tackling the world’s biggest issues

In 2016, Tambe co-founded the Center for Artificial Intelligence in Society at USC, expanding the interdisciplinary nature of his work and the broad-ranging impact it could have. The center was a joint venture between the schools of social work and engineering and set out to focus on research in AI that would help solve major global challenges.

“It just seemed like one by one, I could see more and more areas where AI could be applied,” says Tambe.

One of those areas was wildlife conservation. He partnered with an alliance of non-profits working on wildlife conservation to see if he could help with rampant poaching in wildlife parks in places like Cambodia and Uganda. He traveled to game parks to understand the issues, talked with rangers, mapped the terrain, and then went back to his lab to see what could be done. Tambe says the problem called for the same underlying algorithms as the one at LAX because the challenges had similar underpinnings: how do you station a limited number of rangers to have the maximum impact on poachers sneaking in and laying traps? He and his team were the first to apply AI models to the problem; their Protection Assistant for Wildlife Security (PAWS) helped rangers remove tens of thousands of traps used to kill endangered wildlife in national parks, and the solutions became models for parks around the globe.

Milind Tambe, left, works with Protection Assistant for Wildlife Security (PAWS).
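One common way to sketch a tool like this is to score how risky each patch of a park is and then send the limited ranger patrols to the riskiest patches. The sketch below shows only that prediction-and-ranking idea on synthetic data; every feature name, number, and modeling choice is an assumption for illustration, not the published PAWS algorithm.

```python
# Hedged sketch: rank park grid cells by predicted poaching risk and patrol the top ones.
# Features, labels, and the random-forest model are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_cells = 500                      # hypothetical grid cells covering the park

# Synthetic per-cell features: distance to road (km), distance to water (km),
# an animal-density index, and snares found on past patrols.
X = np.column_stack([
    rng.uniform(0, 20, n_cells),
    rng.uniform(0, 10, n_cells),
    rng.uniform(0, 1, n_cells),
    rng.poisson(1.0, n_cells),
])
# Synthetic label: whether a snare was found on the most recent patrol.
y = (rng.random(n_cells) < 0.1 + 0.3 * X[:, 2]).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = model.predict_proba(X)[:, 1]            # predicted poaching risk per cell

patrol_capacity = 25                            # cells the rangers can cover today (assumed)
patrol_cells = np.argsort(risk)[::-1][:patrol_capacity]
print("Highest-risk cells to patrol:", patrol_cells[:10], "...")
```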

Tambe started to see the impact of intervening in problems already underway, but he wondered if there was a way to steer people away from harmful behaviors from the start. Instead of tackling terrorism, he thought, what if we could use social networks to stop people from getting radicalized in the first place? That idea, he says, caught the attention of his social work colleagues, who thought it could help in the areas of suicide prevention and HIV. Tambe and his Ph.D. student Bryan Wilder created the first large-scale application of social network algorithms for HIV prevention, which spread HIV-prevention information among youth experiencing homelessness in Los Angeles. The AI-guided interventions had remarkable results, significantly reducing HIV-risk behaviors in the target population.
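The textbook framing of this kind of problem is influence maximization: choose a handful of “peer leaders” so that information spreads as widely as possible through a friendship network. Below is a minimal sketch of that framing with a greedy algorithm and an independent-cascade spread model; the synthetic graph, spread probability, and budget are all assumptions, and the deployed system handled real-world complications (uncertain networks, recruitment constraints) that this sketch ignores.

```python
# Hedged sketch: greedy influence maximization under an independent-cascade model.
# The graph, spread probability, and budget are made up for illustration.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(60, 0.08, seed=0)   # stand-in for a youth friendship network
P_SPREAD = 0.2                                # assumed chance info passes along an edge
BUDGET = 4                                    # number of peer leaders we can train

def simulate_spread(graph, seeds, trials=300):
    """Average number of people reached, estimated by Monte Carlo cascades."""
    total = 0
    for _ in range(trials):
        informed = set(seeds)
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.neighbors(node):
                if nbr not in informed and random.random() < P_SPREAD:
                    informed.add(nbr)
                    frontier.append(nbr)
        total += len(informed)
    return total / trials

# Greedy selection: repeatedly add the node with the largest marginal spread.
seeds = []
for _ in range(BUDGET):
    best = max((n for n in G.nodes if n not in seeds),
               key=lambda n: simulate_spread(G, seeds + [n]))
    seeds.append(best)

print("Chosen peer leaders:", seeds)
print("Expected reach:", round(simulate_spread(G, seeds), 1), "of", G.number_of_nodes())
```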

“It felt really great for me,” says Tambe of the work of the center. “It was very satisfying to see research directly being used and to see people benefiting very directly from the work.”

A focus on India

Tambe had been using AI to solve challenges across the US and even in far-flung areas of the world, but he was finally able to return home to India to use his expertise in ways he had always hoped. He noticed more researchers were getting Ph.D.s in AI in India, and he wanted to respond to a growing need to connect their expertise with needs on the ground. To accelerate this process, Tambe joined Google India to start the AI for Social Good initiative, which runs a “matchmaking” service that connects AI researchers globally with non-profits that need their help.

In 2020, AI for Social Good matched up six projects, and in 2021 that number jumped to 30. One of the projects is with the Mumbai-based non-profit ARMMAN, which uses mHealth to reduce maternal and infant mortality. A key challenge for the organization was that women were dropping out of ARMMAN’s healthcare programs and putting themselves at risk. When pregnant women enroll in the program, they receive life-saving information on their phones to stay safe and healthy through their pregnancy. But 30% were not picking up the calls or were not listening to them in full. ARMMAN called on Tambe to help.

ARMMAN had lots of data on the expecting mothers, so Tambe and his team used that information to build AI models that could predict who among the low-listening groups was at high risk of dropping out. They also figured out who would likely need a service call to re-engage with the program and who might re-engage on their own. The health workers on the ground had limited capacity, but now they had a plan to target those most in need. Tambe and his team helped the organization achieve a 30% reduction in the dropout rate.
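As a rough illustration of that idea, and only an illustration, with made-up features and numbers rather than ARMMAN’s data or the collaboration’s actual models, the sketch below scores dropout risk with a simple classifier and assigns the health workers’ limited re-engagement calls to the highest-risk beneficiaries.

```python
# Hedged sketch: predict dropout risk, then spend limited service calls on the riskiest cases.
# All features, labels, and the logistic-regression model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000                                       # hypothetical enrolled mothers

# Synthetic features: fraction of recent calls answered, average seconds listened,
# weeks enrolled, and gestational age in weeks.
X = np.column_stack([
    rng.uniform(0, 1, n),
    rng.uniform(0, 120, n),
    rng.integers(1, 30, n),
    rng.integers(5, 40, n),
])
# Synthetic label: 1 if the mother later dropped out (more likely when engagement is low).
y = (rng.random(n) < 0.6 - 0.5 * X[:, 0]).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

call_capacity = 100                            # service calls the staff can make this week (assumed)
to_call = np.argsort(risk)[::-1][:call_capacity]
print(f"Scheduling re-engagement calls for {len(to_call)} highest-risk beneficiaries")
```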

It’s impacts like these that continue to fuel the professor’s excitement for his work in the world.

AI for Social Good in partnership with ARMMAN (video).

Harvard and beyond

At Harvard since 2019, Tambe is the Gordon McKay Professor of Computer Science and Director of the Center for Research on Computation and Society, which has prioritized public health and wildlife conservation as two key areas of focus. And he sees endless opportunities at the University to connect and create new ways of doing good in the world.

“It’s just massive in terms of the number of people I could collaborate with,” says Tambe of Harvard. “These are things that are just awesome in terms of being here, finding all these interdisciplinary experts and there are just so many people to go and collaborate with in other disciplines.”

In his short time at the university, he’s already partnered on issues of tuberculosis and rapid testing, and it’s only a matter of time before the man with a mission to make an impact finds myriad more ways to make that happen.