Welcome to the Unknown Unknowns

Where do we go from here?


Welcome back, Shit Givers.

Sent this a day early to get my thoughts out into the open. I'd be forever thankful for your feedback, and for sharing this widely.

Members received an exclusive Top of Mind post about forever chemicals this week. Don't miss out on the next one.

⚡️ Last week's most popular Action Step was Redwood Energy's pocket guide to retrofitting single-family homes for electrification.

👩‍💻 You can read this on the website

🎧 Or once it's live, you can listen to it (Apple Podcasts, Spotify)

📺️ Like to watch? Check out our YouTube channel

📧 New here? Sign up to join 12,000+ other Shit Givers

THIS WEEK

What we know — and more importantly, what we don't — about what AI is capable of, and how much change we're capable of absorbing.

Plus: the Willow Project, H-1B visas, blueberries, honeybee vaccines, climate disclosure rules, next-gen bed nets, and health insurance in North Carolina

TOGETHER WITH AVOCADO MATTRESS


Introducing season two of A Little Green, a short podcast series featuring stories of lives transformed by the power of nature, brought to you by our pals at Avocado Mattress.

Avocado's award-winning, certified organic mattresses are also inspired by the power of nature. Organic latex and wool from Avocado's own farms deliver luxurious support and breathability.

And Avocado is Climate Neutral Certified for net-zero emissions and donates 1% of all revenue to 1% For the Planet (like we do!).

Learn more at AvocadoMattress.com.

What We Can Do

āš”ļø Just a few bucks buys some life-saving bed nets with Against Malaria, maybe the most effective NGO on the planet

āš”ļø The only thing dumber than cancer is rare cancers. Good news: you can help fund research against them (and work up a sweat at the same time) with our friends at Cycle for Survival

āš”ļøĀ This is the best electric vehicle you can buy

āš”ļø Get educated and follow the Black Maternal Health Caucus on Twitter

āš”ļø Understand your homeā€™s exposure to flooding, fires, heat and wind with Risk Factor

Welcome to the Unknown Unknowns

Take a look at your calendar. Note the date. Today is the day the world changed forever.

Let's step back into the human past for a moment, to 1992.

Even if you've never seen Jurassic Park, you know the quote, the one by a soon-to-be-gloriously-shirtless Dr. Ian Malcolm.

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

— David Koepp, screenwriter, Jurassic Park

Dr. Ian Malcolm's quote has stayed relevant because it has applied to many of the technological advancements we've made since 1992, and it applies very much to what's happening in "artificial intelligence" — but not in the way you think.

To be clear: I'm not here to bash innovation, not by a long shot, friend.

The world is better in almost every measurable way since 1992 not just because of environmental protections, lawsuits, anti-smoking campaigns, and the Sustainable Development Goals, but also because of advancements in genome science, big data, medical devices, targeted cancer therapies, pharmacology, ART, heart disease treatment, and so many more.

To answer Dr. Malcolm, we absolutely should have done these, because we could. We were finally, marvelously, technologically able, and millions of lives could have been — and were subsequently — improved by them.

Today, we can do many, many fruitful things, and we should do those, as well, because we understand them, and they are necessary.

For example, we have virtually every technology we need to build a world powered by renewable energy sources. And with climate change here, of course we "should" build them, to bring meaningful relief to billions of people, animals, and ecosystems. The only "can" holding us back is the political will to overcome trillions in fossil-fuel subsidies and industry lobbying to build what the hell we need to build.

Once we overcome those, we can, should, and will spend the next few decades building an abundant, incredible world, relieving the devastating burdens we've put upon the planet's ecosystems and most marginalized people, and honestly accounting and paying for new tradeoffs along the way.

We can do those things, now, and simultaneously, because we have the information we need to understand them, and (some) time to most ethically execute them. These are the known knowns.

With AI, "can" is no longer a question of technical ability — we're well past that — but a much more urgent question of how much change we can possibly absorb. It requires us to shift Malcolm's question from "Should we do X?" to "Can we afford to do X, all things considered?"

This new AI era supersedes everything before it, where the abundance from its utter ubiquity will rapidly compound into known unknowns — which we have some experience with, but don't handle very well — and soon, unknown unknowns, where fundamental assumptions of how society works evaporate and future shock becomes the status quo.

Why time matters.

One of the driving mechanisms of both the pandemic and climate crisis has been a refusal to calculate — much less pay for — the costs of a couple hundred years of progress, of building what we want to build, where we want to build it, and of whatever materials we please.

After decades of lies and lobbying, companies, governments, and rich people are finally doing the math on their daily emissions, and they're throwing millions of dollars at carbon offsets that aren't real, in a dishonest attempt to forestall the future.

Editor's note: It shouldn't surprise you that I don't support carbon offsets. In almost every version so far, they simply aren't real. And worse, their existence sets decarbonization back even further, providing those same companies, governments, and the wealthy with a "Get out of jail free" card for continued emissions.

This isn't that essay, but I do support the continued testing and scaling of carbon removal, so long as it's executed in concert with all available efforts to get to "real zero" new emissions.

The point is: The Industrial Era built the world we take for granted every minute of every day. It lifted billions out of poverty, but with perspective — with time — we understand just how much damage we've caused along the way.

We understand now what carbon does to the atmosphere, and that we can remove carbon — and that our ocean and trees have been doing it for us all along. But there are many species we cannot bring back, and we cannot put sea level rise back in the box. It will march on for the rest of our lives, and our descendants', too.

Progress comes with tradeoffs. It can take time to understand what they are, and the first- and second-order effects of those. We've built some mechanisms to speed these up — massive, randomized clinical trials, for example. But as humans, we can only know so much, and project so far into the future, much less travel to it.

There's a reason we pull ice cores from the Antarctic and Greenland — to give us a better understanding of what happened the last time things were like this.

But with AI, there are no ice cores to pull. There is no precedent in the geological record. The only factor that is relevant about the past is an understanding of how we as humans have made decisions, given enormous change, given time to adjust.

With AI, we have no time to adjust, to assess what is necessary to who we are, and how the pieces of our society and economy fit together, what's worth preserving and what is not, and what our descendants lose when we let options be taken away.

To be clear: it's not like AI overlords are going to tell people they can't write fiction anymore. In the best case scenario, we'll have even more time to write fiction. The question is who will pay us for it.

With AI, we can do wonderful, imaginative, and soon, impossible to imagine things. But the biodiversity of human contributions as we know it is at risk, and we don't know what will happen when those go away.

In Thinking In Systems, Donella Meadows described what happens when we willingly sacrifice biodiversity:

"When you understand the power of system self-organization, you begin to understand why biologists worship biodiversity even more than economists worship technology.

The wildly varied stock of DNA, evolved and accumulated over billions of years, is the source of evolutionary potential, just as science libraries and labs and universities where scientists are trained are the source of technological potential.

Allowing species to go extinct is a systems crime, just as randomly eliminating all copies of particular science journals or particular kinds of scientists would be."

To say that we don't all agree on this framing yet would be an understatement: we've normalized industrial meat to the tune of one soccer field of rainforest lost a minute, every minute, and air pollution that kills eight million people a year, every year, because they are convenient.

The climate clock has ticks remaining, but it is ticking faster, now.

Barring an asteroid or supervolcano explosion or both, climates usually change over millennia, or longer. We have sped ours up, and in the wrong direction, but we still have some time, some room for error, to do as much as we can.

We've begun to course correct, like Captain Jack Aubrey in the Southern Ocean, chased by some massive Dutch ship-of-the-line, freezing cold water rising in the hold, our masts splitting down the middle, who knows how many midshipmen already flung over the side or exploded by cannon balls or grape shot, but somehow we push on with the belief that we'll make it out of this, that clear skies and calm waters are just around the corner.

We've (barely) enough time to turn the proverbial ship around — knowing, of course, that millions have already suffered and many more will during the transition.

There are plentiful known unknowns when it comes to the climate crisis — heat, drought, flooding, storms, and of course, what we can build with unlimited renewable energy, and what the contributions might be from eight million people a year who would have otherwise died from air pollution.

We don't know, but we're sure as hell going to find out. These are incredibly complex systems we've fucked with, and there are real tipping points with inevitable outcomes we can't understand yet.

But we've triangulated the information we do have, and many of us are operating at maximum warp to build a radically better future and atone for the past, to multisolve with shit like solar panels over dwindling reservoirs, to shut off the gas, to map the ocean floor, to protect it and the waters and creatures above it.

We can use AI to move faster on those. But what will it cost to access the power we want?

I got into sci-fi writing because I wanted to help imagine what's just beyond our reach — not too close, not too far — and to question how we get there.

Years later, I find myself here, trying to help tens of thousands of readers more effectively put a dent in the universe.

Used ethically, AI can help us put one hell of a dent in the universe.

The future-positive known unknowns of AI are abundant:

New medicines, but for which diseases?

New ways of learning languages, but what might be the most effective way to do so?

New ways for less educated workers to compete (and contribute) alongside more educated ones? But where? Will it require Microsoft Office, Google Docs, or something else?

More productivity and more free time (for some), but more time to do…what?

To find meaning?

Is it, as Viktor Frankl wrote, a new opportunity to "transcend subjective pleasures by doing something that points, and is directed, to something, or someone, other than oneself … by giving himself to a cause to serve or another person to love"?

As it stands, the current pace of AI doesn't give us much time. Time to react, much less to plan.

But like the climate crisis on fast forward, AI is only going to compound on itself until the clock is ticking so fast that time doesn't mean what it used to.

Recognizing we cannot slow AI's progress now, it is essential we ask of ourselves, our money, our tools, and our time:

What's it all for?

In A Wizard of Earthsea, series protagonist Ged is one of a few special wizards, a teen who feels his potential and powers are criminally unappreciated. So one day at school he lashes out at a rival, showing off in front of peers and mentors, never stopping to question what may come of it.

It goes poorly.

Ged spends the rest of the series tempering his mighty powers, and atoning for what he wrought, because he increasingly understands the wide-ranging implications of that one decision, and because he has the time to make recompense.

We're not going to temper anything — with notable, muddied exceptions for nuclear weapons, cloning, and germ line editing, we don't temper progress.

Even with those, we are mostly dealing in known unknowns.

In The Three-Body Problem, Ken Liu translated Cixin Liu's brief summary of human progress into English:

"Humans took more than a hundred thousand Earth years to progress from the Hunter-Gatherer Age to the Agricultural Age. To get from the Agricultural Age to the Industrial Age took a few thousand Earth years. But to go from the Industrial Age to the Atomic Age took only two hundred Earth years. Thereafter, in only a few Earth decades, they entered the Information Age. This civilization possesses the terrifying ability to accelerate their progress."

I don't say this lightly: Today's AI copilots might be obvious, but we have manifested a future of unknown unknowns.

Describing the scope of real AI as anything but everything, everyone, everywhere, all at once, would be a disservice, and we are not prepared for the transition.

It's one thing to adapt to a sea that is rising slowly but surely over decades and centuries.

It's another to adapt to a novel coronavirus for which we have no natural immunity, or AI tools that have literally just this week unlocked vast educational and productivity improvements, but which could quickly overturn our understanding of education and productivity, of employment, of inequality, of biological research, and a million other building blocks of society that we can't possibly foresee.

AI — or really just fancy machine learning — has been a part of your life for a decade now, from social media to online advertising to Siri to mortgages and policing.

But compared to just this week, those tools were primitive at best, with results that have been, well, decidedly mixed. Known knowns, our most obvious instincts and biases at work, more connected and made faster.

I cannot believe I am saying this, but as relatively limited as these new tools are compared to what we've always imagined artificial general intelligence, or AGI, would be, we are not far off from a version of it.

There is a very long way to go, but time-space does not mean the same thing to AI as it does to us, and LLMs that can inhabit different personalities on demand, all while somewhat accurately posing as a law student, a radiologist, a musical historian, a micro-economist, and an action-movie screenwriter, are a paradigm shift we are not ready for.

As of today we have entered a world and empowered a technology that we simply do not understand, much less are able to control or rein in. We barely know how to handle a late-pandemic, early-climate crisis economy, with known, measurable inputs and outputs.

Many things about AI are out of our control, but knowing what we can control and operating with purpose can upvote fantastical opportunities, and alleviate some of the inevitable and unimaginable losses.

We have to ask all of the hard questions right now.

I firmly believe we can celebrate a new era like this one while simultaneously questioning the ethics of who makes the underlying technology, what (and for something like face-scanning, who) it's made from, who profits from it, and who will suffer from it.

Which is already getting more difficult to answer.

I intended this to be a more timeless piece — if that's possible in the AI era — but it's important to understand for a moment how one of the primary players, OpenAI, has evolved from a well-funded open research non-profit to, in part, a closed for-profit.

Google has always been for-profit, so OpenAI's pivot and feedback loop aren't difficult to understand, or even a new idea: they've said becoming a for-profit entity enables them to compete for talent, who can use access to increased funding for more research, the subsequent intellectual property from which becomes a further profit mechanism, enabling them to hire even more talent, and so on.

But they've also refused to share any more research and said sharing in the past was a mistake, because doing so alongside their effort to bring about AGI would give bad actors too many pieces to put together on their own. That is, it's not because it would torpedo their partner and sugar daddy Microsoft's new business model, the way Microsoft torpedoed their ethics and safety team just this week.

These moves require more pointed questions from us: is this backtracking their way to win the arms race vs Google and Meta and others? Is it for safety? Is it to preemptively eliminate opportunities for regulation and enforcement?

Without more context, I think we can safely assume all of these are true. But AI doesn't operate in isolation — the exact opposite — so without answers to these, asking broader questions becomes more difficult, too:

What are the raw mineral and climate impacts of NVIDIA's chips?

What are the power requirements for a day of use even now, at the beginning?

How much should our precious water cost to cool the data centers we'll somehow become even more reliant on?

Who should regulate these things? States, countries, the UN? No one? The "self-regulating" market?

How will they self-regulate for ethics and safety without an ethics and safety team? Google set the pace by laying off their ethical AI team years ago. We can only assume this is the way forward.

And to paraphrase The Mandalorian himself, this is not the way, and certainly not when weā€™re dealing with unknown unknowns.

Maybe governments will step up? Before legislation and regulation comes understanding — not simply how something works, but what its potential may be and who it could affect, to protect the vulnerable and still leave room for innovation and to maximize the universal good it can do.

A compromised, octogenarian Congress isn't the answer. But that's the obvious dig, and if it isn't clear, I don't think anyone has the answer. And willful ignorance DEFINITELY isn't the fucking answer.

Which is why it's so vital we ask better questions. Big questions. Hard questions.

One analogous climate-era example would be, "How can tens of millions of people continue to live in the American West knowing it's well into desertification?"

A more future-positive AI question would be, "If the cost for pharmaceutical companies to research new medicines drops 90%, how can we cap consumer costs for new medicines (or devices) to provide for universal access (especially if AI's going to make so many jobs suddenly expendable)?"

Here's what we know. Here's where we start.

The known knowns: Training these foundational models requires very specific chips, and enormous amounts of power, both of which are enormously tenuous geopolitical questions right now. Derivative versions of the models, from the API or public research, require far fewer chips and far less power — they can be trained more specifically and run right on your phone — because the broader work is already done. For those who seek to profit from them, the work will never be done.
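
For the technically curious, here's a minimal sketch of what "the broader work is already done" looks like in practice. It assumes Python, the open-source Hugging Face transformers library, and the small, publicly available distilgpt2 checkpoint (a distilled derivative of GPT-2); the point is only that a derivative model runs on everyday hardware once the foundation-scale training has been paid for upstream.

```python
# Illustrative sketch only: loading and running a small, distilled derivative model
# locally. Assumes `pip install transformers torch`; the expensive, foundation-scale
# pretraining (and the distillation into distilgpt2) was already done by someone else.
from transformers import pipeline

# Downloading and loading the derivative model takes seconds on an ordinary laptop.
generator = pipeline("text-generation", model="distilgpt2")

# Generate a short continuation; no specialized chips or data-center power required.
result = generator("The climate clock has ticks remaining, but", max_new_tokens=25)
print(result[0]["generated_text"])
```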

These tools will struggle at times to live up to all of the hype, including what I've posited here. But they will eventually, technically make jobs and entire industries like graphic design, screenwriting, editing, non-fiction writing, accounting, architecture, software development, data science, market research, legal, customer service, and many, many others expendable, and soon. Do not delude yourself. The copilots of today will become the pilots of tomorrow. There is no going back.

These tools, like the workers they are replacing, are very imperfect, often inaccurate, and biased. We'd like to believe they know more than they do, when in reality they are incapable so far of making decisions on their own. But they will inevitably grow, and change. As they grow, they surprise us, so we expect more from them than they are capable of. This is what we do.

But this is also what we do:

We are a species that finds enormous meaning in work, in creativity, in expression. We are most happy when we are connected live and in person, and when we have a purpose to work towards, even if a rare few of us get to actually choose our work and that purpose, and on the other hand, even if many of us could stand to work a little less.

Progress will never slow, but we can manage the transition by knowing ourselves as best we can.

We can electrify cars, but what about the tens of thousands of people who service combustion engines because they love working on machines?

We can automate checkout, customer service, or food prep, but what is the cost of less human interaction?

How do we accommodate both versions?

If there are tools that let us spend more meaningful time with young people and the elderly, to rebuild our relationships with nature, to make it easier to converse with one another in whatever language, to personalize learning, to increase crop yields, to distribute our clean energy more efficiently, to increase access to financial services and essential infrastructure services, to provide for a more robust safety net, to predict natural disasters and speed recovery, and to make wellness more universal with an increased emphasis on preventative health — we should use those.

Those tools are here, or coming, and that's wonderful. But we have to try to understand the known tradeoffs as best we can, and steel ourselves for the rest, considering our most basic needs.

So now is the time to ask big questions about social safety nets, about reinvigorating hands-on-work industries, improving labor standards, economic diversification, trade schools, re-training, and more, to support one another, to make for a soft landing.

Look. After all of this, time might not actually be real — long story, I'll call you later — but until we make some serious advances in theoretical physics, the past has already happened, and tomorrow is always right around the corner.

There's no going back.

So we have no choice but to go into tomorrow with our eyes wide open, to make sure we don't automate what gives us life, or take away the livelihoods of those who rely on the act of creation to find meaning for themselves, and whose creations often provide it for the rest of us.

— Quinn

Support Our Work

INI is 100% independent and mostly reader-supported.

This weekly newsletter is free, but to support our work, get twice-weekly Member posts, my bi-monthly popular "Not Important" book, music, and tool recommendations, connect with other Shit Givers, and attend exclusive quarterly live events, please consider becoming a paid Member.

News From My Notebook


Beep Boop

Health & medicine

Climate

  • Joe Biden approved the Willow Project in Alaska and it sucks, but on the other hand, the US is on track for a major clean energy milestone

  • The world's first honeybee vaccine is like "magic"

  • Use this tool to see how early spring came to your neighborhood

  • In the future, it might still cost $1 trillion per ppm of CO2 removed, but we should pay it

  • The SEC's climate disclosure rules are coming. 85% of business execs said they're not ready.

  • Why did insurers slash Hurricane Ian payouts?

Food & water

COVID

  • COVID made maternal health outcomes much worse (and especially among Black people)

  • There doesn't seem to be an association between Paxlovid and a COVID rebound, which is great

Got YouTube?
