Book Report: Fear of a Black Universe by Stephon Alexander

"Many of us working on anything even tangentially-related to ethical AI are already deviants."

Book Report: Fear of a Black Universe by Stephon Alexander
"You are an illustrator for a professional magazine. create a line and watercolor drawing of a professor playing jazz on a saxophone outside of a space observatory. This image is for a piece about the connection between jazz and physics" -Dall-E

Every once in a while here at MoP, we’ll recommend some reading that captures the state of play in AI, and helps us think more deeply about problem solving in this space. 

We’ll start with Stephon Alexander’s Fear of a Black Universe: An Outsider’s Guide to the Future of Physics. Much like AI, physics finds itself in the throes of conflict between hype and reality. [See: the first link under "Stuff Bassey Reads"]

I can't imagine a better read for anyone who wants to use new technology to build something that just might make the world a better place. Stephon Alexander (who also wrote “The Jazz of Physics” to give you an idea of his brain-space) is a Black professor of Physics at Brown University who received some formative advice early in his career: “Stop reading those physics books.”

Alexander's core idea is that innovation and invention so often happen when atypical thinkers find a way to combine the collective knowledge of apparently unrelated disciplines. In his career, innovation came not just from applying the ethos of free-form jazz to physics, but also from applying the experience of being Black in America to physics. 

Honestly, reading Fear of a Black Universe felt a bit like Alexander teaching me something about myself. In my career, I’ve found that journalism R&D challenges aren’t too different from those of composing a hip-hop track or writing long-form fiction: How can I tell a story in shorthand language that transcends cultural interpretation? How might I imagine a stable structure for this freestyle that still offers room to riff in the moment? 

“Let him finish! No one ever died from theorizing!” – a distinguished Indian physicist who quieted hecklers at Alexander’s first professional physics talk.

The book is not at all about AI, but it somehow manages to speak incredibly directly to the present tension in ML, which I truly hope is just a phase. Put simply, there are too many STEM grads in every room. (Sorry, Erica.) 

Without deeper collaboration between, well, machines and paper, the LLM revolution may yield the empty husks of “content,” but little of the networked creativity that facilitates its evolution. To be clear: Yes, I know that no one wants to produce endlessly generic content, but it takes an unusual skill to access and instrumentalize the parts of your brain that evolve human culture. 

In one of many brilliant little surprises, Alexander zeros in on the power of thought experiments — like Einstein's as he worked toward his theory on the nature of light. Again, I’m reminded of my own path. This time, I'm in the meeting rooms at national publications, asking a team to set aside long-standing processes and new industry research for just a moment, and to instead imagine more ideal futures, and how we might get there. 

The lessons jazz offers Alexander are similar to those of improv: Learn to say “yes, and…” when you’re working with someone, and avoid imposing your own way of thinking about the problem.

Stepping back to my meeting room, I see too many smart people beset by caution, struggling to produce useful insights that are any different from the test cases we study. 

The next big thing in Machine Learning WILL sound counterintuitive, or unworthy of the effort, or just plain crazy. If you decide to build that great big thing, or even just a small AI widget, the criticism Alexander faced in his early years will soon be yours. 

“Social and cultural orders are maintained by penalizing deviant behavior.” - Stephon Alexander

Alexander, unsurprisingly, considers himself a deviant. His very presence and his experiences in theoretical physics violate the “social conventions that restrict creativity.”

Many of us working on anything even tangentially related to ethical AI are already deviants. For those of you on the business side, you’ll be asked why you couldn’t exclude more humans from the loop, pressed to test dangerous, unproven tech on vulnerable societies, urged to eliminate more jobs. From the creative side, you will be told that you’re constructing Frankenstein’s monster, destroying human art, feeding an insatiable corporate machine. 

And yet, we have no choice but to invent a better path forward. To succeed, we’ll need to find not just the strength to ignore the hecklers, but the wisdom to recognize when someone else is being set upon by a braying crowd, and stand up for them. 
