By Ben Lansdell
Doing a PhD is a challenge in many ways: late nights, lots of coursework, a certain pervasive "academic guilt" – the feeling that there are always more papers to read, simulations to run, and code to write. And, perhaps most challenging of all, dealing with the uncertainties, frustrations and doubts that come with trying to advance novel research questions. I thoroughly enjoyed my time as a PhD student at UW Applied Mathematics. But I also struggled with some of these challenges. It's only looking back now, some years later, that I think I really understand the value I got out of having gone through the program, and indeed out of having struggled in the ways that I did.
I joined AMATH in 2010 to study mathematical biology. I was the first Australian to join the PhD program. Having prior research experience in bioinformatics from the University of Melbourne, I'd come to appreciate the unique challenges biology presents when trying to identify structure, patterns or, dare we say, laws. The problem is that biology is messy – there are very few statements you can make that apply with total generality (it's not even true that all cells contain DNA, for instance) – and its complexity spans many interconnected spatial and temporal scales, not really bottoming out until what you're looking at is no longer biology at all, but chemistry. What simplifying assumptions can one make when studying a biological system? What multi-scale model can you build that captures the right properties and behaviors?
Soon after joining, I realized that the quantitative study of the brain, rather than being 'just' a subfield of mathematical/computational biology, is in fact a quite distinct discipline of its own: a unique intersection of numerous areas of mathematics, neuroscience, computer science, statistics, and cognitive science. What sets it apart is that neural activity in the brain has layers of meaning that are not quite as apparent in other biological systems: neural activity represents things – neurons' firing patterns relate to our thoughts and perceptions. Describing how the brain creates and manipulates these representations, it turns out, is somewhat akin to describing how the brain implements different algorithms to perform certain computations. I was fascinated by this picture of the brain, and wanted to make progress on developing it further in some way.
But computational neuroscience is filled with compelling and important problems – which to work on? There are really two parts to this: identifying an interesting and worthwhile problem, and making it your own. To begin, I worked with Nathan Kutz, and we made good progress on a model of neural activity that occurs during retinal development – a dynamical systems model of traveling wave-like behavior in neurons in the retina, a phenomenon called retinal waves. Wanting to get closer to experiments, however, I then joined Adrienne Fairhall's lab in the department of Physiology and Biophysics, and worked on two quite distinct projects: one analyzing how non-human primates (i.e. monkeys) learn to use a prototype brain-computer interface (BCI); and one, later on, on neuron tracking in the hydra, a jellyfish-like organism (a mostly transparent animal with only a distributed nerve net for a nervous system). With the support of Adrienne and our collaborators, I made progress on these too. But ultimately, I never fully committed to any of these projects or made them feel like mine. My trajectory changed direction a few times, which meant I didn't obtain as deep an understanding, or as many results, on any one of these problems as I might have. What expertise was I developing to build into a career?
Eventually, later in my PhD, I stumbled on a problem that I did come to view as my own, in the field of causal inference: very broadly, if two neurons firing in concert doesn't necessarily indicate a causal relationship, then what does? I found causal inference to be a conceptually rich field, with many insights and methods that could be useful in neuroscience. These approaches informed the analysis I did in the BCI study I was a part of. They then became the basis of my work as a postdoc at the University of Pennsylvania – applying observational causal inference techniques to models of learning in neural networks. And causal inference remains an important tool in my toolkit today as a machine learning engineer, one that I've used to build experimentation platforms.
The ability to identify research problems, take ownership of them, and run with them is an important skill, both in research and in industry. This became most clear to me a few years ago when I joined the startup I'm at now (incidentally, I was recruited for this position by a former AMATH classmate, Natalie Sheils). Startups tend to place high value on people who take strong ownership – who can identify a problem and figure out some way to solve it without waiting for someone to tell them what to do. A PhD is actually great training for this: you must do, present, and sell your research, write your own papers, make your own figures, apply for grants, conferences, and summer schools, and, by the end of the program, be able to identify the worthwhile problems you want to work on. Some people join a PhD program with a clear idea of what they want to research; others find it as they go. I appreciate all the people in AMATH who gave me guidance and support as I found my way.