By Matthew Farrell
Disclaimer: I am not really an expert in the ideas expressed below and so my descriptions may be flawed. Anyone who finds issues is welcome to send me an email.
I've found that quantitative reasoning has helped me solidify, clarify, and organize my view of the world. To pay homage to my PhD years, here are ideas I've encountered on my mathematical journey that I've found useful.
1. Axiomatization of worldviews.
Modern mathematics is built upon axioms, most commonly the Zermelo-Fraenkel axioms together with the axiom of choice (ZFC). These are the "rules" by which sets of objects can be built and modified. Interestingly, these axioms are not self-evident. In fact, an early, informal approach (so-called naive set theory) failed -- its unrestricted rules let one construct sets that contradict themselves.
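The classic failure is Russell's paradox: naive set theory lets you form the set of all sets that are not members of themselves, which immediately contradicts itself:

```latex
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R
```

ZFC avoids this by only allowing sets to be carved out of sets that have already been constructed.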
Axioms set down a common language for mathematicians to talk to each other. In this way we have some hope of agreeing on whether something is true or not (within the agreed-upon system). For instance, whether we accept the axiom of choice determines which proofs we consider valid.
I've found it useful to think of worldviews as being built on a set of axioms, or "fundamental assumptions".
In discussing issues, I've found that it is productive to keep in mind the underlying assumptions someone is making when expressing a worldview, and to ask whether those are consistent with my own. If not, a discussion about these assumptions can be a more direct way to come to a common understanding or to find the precise points of divergence.
Axioms aren't purely subjective: as with naive set theory, it is quite possible to be using a set of axioms that grossly contradict themselves. We should probably be cautious though, as "viewpoint axioms" are not going to be as clean and consistent as the axioms of math, and it would be dangerous to expect them to be.
2. Bayesian probability theory and choice of prior.
When modeling variability in the world, a so-called Bayesian perspective requires one to make clear, explicit assumptions about this variability. These assumptions are encoded in what is called a prior.
When I write error bars on a plot, I may be assuming that the data are samples from a normal distribution (the "bell-shaped curve").
When I was plotting accuracies of models, generating error bars in the standard way resulted in error bars extending beyond values the data could possibly take. Accuracy is bounded between 0 and 1, and by instead modeling the data with a distribution that respects those bounds, I was able to get error bars that looked much more reasonable.
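The post doesn't name the bounded distribution, but a natural choice for data on [0, 1] is the beta distribution. Here is a minimal sketch of the idea under that assumption: fit a beta to the accuracy samples by the method of moments and take a central 95% interval as the error bar.

```python
import numpy as np
from scipy import stats

def beta_error_bar(accs, level=0.95):
    """Central interval from a beta distribution fit by the method of moments.

    Hypothetical illustration: the beta is supported on [0, 1], so the
    resulting interval can never spill outside the range of valid accuracies.
    """
    m, v = np.mean(accs), np.var(accs)
    common = m * (1 - m) / v - 1  # method-of-moments factor
    a, b = m * common, (1 - m) * common
    lo, hi = stats.beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return lo, hi

accs = [0.99, 1.0, 0.98, 1.0, 0.99]
lo, hi = beta_error_bar(accs)
# For these samples, mean + 2*std exceeds 1.0 (an impossible accuracy),
# while the beta interval stays inside [0, 1] by construction.
```

The normal-based bars fail here precisely because the normal has unbounded support; swapping the prior assumption for one that matches the data's constraints fixes the plot.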
The point here is similar to 1 above: it can be helpful to be explicit and clear about assumptions that are being made.
3. A milder butterfly effect.
This point I learned through presentations by a former postdoc in my lab, Guillaume Lajoie. Many have the intuitive notion of chaos described by the butterfly effect: a single flap of a butterfly's wings can significantly alter the course of events. However, chaotic systems can still carry signal information over a long time, which at first seems counter-intuitive. This is because the chaos in some systems is low-dimensional -- only a handful of variables are being impacted by the chaotic divergence at any given moment in time. This can result in a loss of information that is much slower than, say, injecting noise into the system.
This helps resolve a puzzle in my mind of how the world can be chaotic but still hold so much structure.
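The butterfly effect itself is easy to see numerically. Here is a small illustration of my own (not from Guillaume's work) using the logistic map, a standard one-dimensional chaotic system: two trajectories starting a hair's breadth apart end up far apart.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x_{t+1} = r * x * (1 - x) and return the path.

    At r = 4 the map is chaotic: nearby starting points diverge exponentially.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by one part in ten billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {divergence[0]:.1e}, largest gap: {max(divergence):.2f}")
```

The gap roughly doubles each step, so within a few dozen iterations the trajectories are effectively unrelated. The point of the section is that in higher-dimensional systems this divergence can be confined to a few directions at a time, so information in the other directions survives much longer.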
Thanks to everyone at the department and beyond -- you have made my time here very exciting and fun and fascinating!