Hey all. I’m a machine learning researcher and engineer.
There are many questions, unknowns, and beliefs out there in the world. Each person has their own beliefs,
and these are the output of a unique function that is constantly updating:
experience comes in, the brain filters it and acts, updating itself in the process. Millions of years of
natural selection have given us incredible brains, capable of processing massive amounts of information.
Of course, our brains evolved to mesa-optimize things that helped us survive and reproduce back in the day,
not necessarily to accurately analyze certain types of information. So when it comes to processing and analyzing, say,
thousands of numbers on a page, people are both overwhelmed and prone to bias.
I see statistics as a search for the optimal way to evaluate evidence about the world: to estimate what to believe and how confident to be.
This isn't to say that the data you're looking at won't be biased, but statistics gives us the tools to attempt to make
better predictions and decisions.
Humans have been wildly successful as a species. We use tools, we're creative, but crucially: we're incredibly social, cooperative,
and we have the ability to communicate complex ideas to each other. Because of this, we've been able to build knowledge and skills very,
very quickly.
Very recently, we've started building machines that are far more capable than we are at performing tasks we find useful.
Deep neural networks trained on billions of samples are opaque in their decision-making (and will fall prey to whatever subtle biases - in the non-ML sense - exist in
their training data), but yield stunning task accuracy.
That's amazing! The potential for good is incredible. Enter the era of Star Trek?! It's also a bit dangerous.
In the short term: economic instability amid growth, and the expansion of impersonation,
malware, fake news, and spam. In the medium term: mass unemployment, automated war machines, and the generation of new bioweapons.
In the long term: the potential for AI agents with goals other than the ones we intend to disempower or destroy humanity.
I love humans. We're great. I care about humans alive today, and I also care about people who haven't been born yet - those who could be. (Note: I'm very pro-choice,
and I don't think these two viewpoints actually conflict if you have a low discount rate!)
I think it's unfair to apply high discount rates to human life, and the potential of humanity is so vast that it's worth putting
in a lot of effort to minimize whatever existential risks we can.
For some people, this means working in very important areas like climate change and nuclear war. For me, as someone who
loves statistics and has been working in machine learning for over a decade, this means focusing more on AI Safety.
I'm particularly interested in mechanistic interpretability, gated network architectures, and thinking about how to apply common
microeconomic concepts to encourage AI agents to act in ways that benefit us in the medium term.
More about me
In my spare time, I enjoy making art, adventuring, woodworking, kayaking, playing board games, and listening to audiobooks.