Causal Agents and Reinforcement Learning

Build agents that make decisions using causal reasoning.

Why is it called "reinforcement learning"? Because the agents learn like Pavlov's dog: do something good and get a treat, do something bad and get punished.

But learning in humans and other intelligent species is far more complex. They explain (why didn't that work?), imagine (what would happen if I tried...?), and reflect (how might things have turned out differently?).

Sequential decision-making systems are common in practice. Reinforcement learning, the family of algorithms that optimizes sequential decision-making agents, promises to extend these systems in ways that will reshape society. But we'll need causal reasoning to do it.
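To make the idea concrete, here is a minimal sketch of the reward-driven learning loop that reinforcement learning formalizes: tabular Q-learning on a toy chain environment. The environment, hyperparameters, and function names are illustrative assumptions for this sketch, not material from the course.

```python
import random
from collections import defaultdict

# Toy chain environment (an illustrative assumption, not course material):
# states 0..4, actions 0 = left and 1 = right; reaching state 4 pays +1.
GOAL = 4

def step(state, action):
    """Move one cell left or right; reward 1 only on reaching the goal."""
    next_state = min(max(state + (1 if action == 1 else -1), 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

q = defaultdict(float)              # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def greedy(state):
    """Pick the highest-value action, breaking ties at random."""
    values = {a: q[(state, a)] for a in (0, 1)}
    best = max(values.values())
    return random.choice([a for a, v in values.items() if v == best])

for episode in range(500):
    state, done = 0, False
    for _ in range(200):            # step cap keeps early random episodes short
        action = random.choice((0, 1)) if random.random() < epsilon else greedy(state)
        next_state, reward, done = step(state, action)
        # Reinforce: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

print("learned policy:", {s: greedy(s) for s in range(GOAL + 1)})
```

The agent never asks why an action worked; it only reinforces whatever got rewarded. That gap between reward-chasing and explanation is where causal reasoning comes in.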

Remember how deep learning skyrocketed the careers of those who started early? This course will get you in on the ground floor of the next major milestone in machine learning.



Your Instructor


Robert Osazuwa Ness

Robert didn't start in machine learning. He began his career by becoming fluent in Mandarin Chinese and moving to Tibet to do development economics fieldwork. He later obtained a graduate degree from the Johns Hopkins School of Advanced International Studies.
After he switched to the tech industry, Robert's interests shifted to modeling data. He earned his Ph.D. in mathematical statistics from Purdue University and then worked as a research engineer at various AI startups. He has published in journals and venues across these fields, including RECOMB and NeurIPS, on topics including causal inference, probabilistic modeling, sequential decision processes, and dynamic models of complex systems. In addition to his startup work, he is a machine learning professor at Northeastern University.


This course is closed for enrollment.