What Kind of Wake Will AI Leave?
Behold, our very first guest post! To kick us off: Evan Scherrer. Evan Scherrer is a rising senior at Drake University studying Computer Science and Artificial Intelligence. He is spending this summer working for the US Department of Energy in Oak Ridge, TN. In his free time, he enjoys cooking, playing trumpet, and petting his dog Aayla. Thank you so much, Evan!
In their discussion of Leaving a Clean Wake so far, Michael and Vienna have focused largely on the direct ethical impacts of one's decisions. That is, in essence, what the Leaving a Clean Wake philosophy is about: think about the effects that your decisions and actions will have on the world and do your best to ensure that those effects are positive. In this post, however, I want to talk a little bit more about the indirect effects of your actions—that is, how your actions shape the way other people interact with the world, and how that relates to Leaving a Clean Wake.
Tools
Since I don't have any experience sailing (although if Michael and Vienna want to invite me on their next trip, I'd love to go with—hint, hint, Uncle!), I want to instead talk about this through the lens of my primary area of study: software tools and artificial intelligence (AI).
It's no secret that AI has become a very big deal over the past few years. Since OpenAI's first release of ChatGPT in 2022, there has been an explosion of competition. Other models, from Google's Gemini and Anthropic's Claude to open-source models such as DeepSeek's R1, have made AI far more accessible to the general public than ever before. This comes with some wonderful potential benefits, but also some pretty alarming downsides (which we'll talk more about later).
In the meantime, however, I just want you to think of AI as a tool. Like a pen, a hammer, or a fire, AI isn't inherently good or bad—it depends on how it's used. AI is, in other words, amoral. While the technology is still early in its evolution, now is the time for serious discussion of how to best ensure that it aligns with human morality and flourishing. It is fair—urgent even—to ask how this should influence the people developing it. How should they proceed when they're working on new AI technologies?
What Does Leaving a Clean Wake Say?
I want to take another look at the ethical framework Vienna and Michael have been talking about over the past several posts.
At its core, Leaving a Clean Wake calls us to carefully consider the impacts of our decisions and to act on whichever choice will ultimately leave the people and the world around us better off. With some things, that's relatively easy: don't throw trash overboard; don't cheat on a take-home exam—even if nobody would know; etc.
However, many tools—such as a pen, a hammer, or AI—can be used in a variety of ways. A pen can be used to record important information for later, but it can also be used to anonymously leave a cruel note. A hammer can be used to build a house, but it can also be used to cave someone's skull in. AI can be used to help you study for a test, but it can also be used to cheat on that same test.
If we evaluate those actions through the clean wake lens, each falls somewhere between morally abhorrent and morally "required". However, I don't want to examine the actions themselves—I want to examine the tools. Under Leaving a Clean Wake, was the person who designed the pen acting ethically? What about the hammer? What about the people developing AI systems?
More About AI
The largest focus of AI in the past few years has been on generative AI—AI models designed to generate text or images based on some input from a user. The most prominent example of this is a Large Language Model (LLM) such as ChatGPT, which specializes in generating text. However, there are other models that work with different forms of media, such as Midjourney for images or Sora for video (among others).
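To make "some input from a user" a bit more concrete, here is roughly what talking to an LLM looks like from inside a program. This is only a minimal sketch using OpenAI's Python library; the model name and the prompt are placeholder examples, and you'd need your own API key set in the environment:

```python
# Minimal sketch of prompting an LLM via OpenAI's Python library (v1.x).
# Assumes the OPENAI_API_KEY environment variable is set; the model name
# below is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; swap in whichever one you have access to
    messages=[
        {"role": "user", "content": "Explain how a sailboat tacks upwind, in two sentences."},
    ],
)

# The model's generated text comes back as part of the response object.
print(response.choices[0].message.content)
```

Every chatbot, study aid, and AI "article" mill discussed below is, at its core, some variation on this loop: a prompt goes in, generated text comes out.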
This kind of content generation isn't inherently problematic—there are great uses for it; I personally use LLMs often to help me learn new things, because the interactivity and real-time feedback aren't something I can get from watching videos. However, it also opens the door for some pretty scary things. Text-based generative AI is already degrading the quality of information available on the web—over the past few years there's been a large increase in AI-generated "articles" with very little substance and no credible sources of verifiable information[1]. Image- and video-based generative AI—in addition to threatening the viability of artists' careers—is also threatening the reliability of photo and video evidence.
You may have seen this image of Pope Francis circulating on the internet; the event it depicts never happened—it's an AI-generated image.
With today’s technology, it's still relatively easy to tell the image is a fake—the Pope's hand is phasing into his water bottle, and his necklace only has a chain on one side. As the technology improves, however, these discrepancies will become more subtle and fakes will get much harder to spot. In addition, the uses of technology like this can get far more nefarious. Dis- and misinformation already tear at the fabric of civil society; revenge porn is already a serious issue that can play havoc with an innocent person’s life… it is not hard to imagine how AI-generated content could make these problems dramatically worse and much harder to detect and rectify.
If you're looking for a good book about the different ways that AI is likely to shape the next few decades, I'd highly recommend AI 2041 by Kai-Fu Lee and Chen Qiufan—it's really well-written and not too technical. It does a phenomenal job of describing the potential dangers we as a society face as this new technology emerges.
It’s Not All Bad
I want to bring up one more example of AI that I think deserves a little bit more credit, but first I need to talk about a little bit of biology.
In order for your body to function, it needs building blocks called amino acids. These amino acids link together in long chains like LEGO bricks to form another type of molecule called a protein. These proteins come in a wide variety of shapes and are used for pretty much everything we do—from repairing tissue to carrying signals between cells to helping extract energy from food.
The most important thing to note is that with proteins, structure determines function—if we can figure out the structure of a protein (the shape it folds into), we can make strong predictions about what it does and how it might be used. This has a number of potential applications: fighting a variety of diseases, helping tackle climate change, and plenty more[2].
Back in 2020, DeepMind—a subsidiary of Google—unveiled an AI model called AlphaFold to help solve the protein folding problem. With that model, they were able to predict the structures of hundreds of thousands of proteins whose shapes had never been experimentally determined. To make things even better, they released everything about the project—the database of predicted structures, the AI model, and the code to run it—to the public, for free.
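And the database really is open to anyone; you don't even need to run the model yourself. Here's a minimal sketch (assuming the AlphaFold Protein Structure Database's public REST endpoint at alphafold.ebi.ac.uk; the UniProt accession below is just an example) of looking up one predicted structure:

```python
# Minimal sketch: query the AlphaFold Protein Structure Database for the
# predicted structure of a single protein. The endpoint and accession here
# are illustrative assumptions, not anything specific to this post.
import requests

ACCESSION = "P69905"  # example UniProt accession (human hemoglobin subunit alpha)

url = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"
response = requests.get(url, timeout=30)
response.raise_for_status()

# The JSON response describes the predicted structure(s) for this protein;
# print it to see what metadata and download links are available.
print(response.json())
```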
From a Leaving a Clean Wake perspective, Google’s decision seems to me like an unequivocally good action—they don't make money directly off of this, they're stretching the benefits of it about as far as they can, and it's spurring on breakthroughs in environmental science and medicine[3].
A Student’s Perspective
Lots has been written, and will continue to be written, about some of the obvious pitfalls of AI: the environmental consequences of its energy consumption, its appropriation of others' work as part of its training, likely business models that continue to hack human attention through algorithmically driven content and advertising, its impact on the future of work... I will let others sound the alarms about those potential pitfalls.
Where I feel like I do have a unique perspective, as a student of AI and future practitioner, is how I see the current generation of AIs already being used on campus: more and more often, people are using AI to do their thinking for them. The most common example that I see of this is people using it to cheat on their homework—I know several people who have been caught turning in a paper written entirely by ChatGPT, and submitting AI-generated code is now a frequent occurrence in introductory programming classes.
Leaving aside the fact that this kind of behavior is antithetical both to the core principles of Leaving a Clean Wake and to basic academic integrity, I see a number of other issues with it that generalize beyond college campuses.
Primarily, as you cede your decision-making to AI, the things that AI creates lose the essence of what you put into them—there's less and less humanity left in that essay, poem, or even email as it becomes more and more the creation of an algorithm. Every decision that you make while writing is a reflection of you, the writer—and it's an opportunity to express something about yourself and your experiences. When you instead let an AI model make those decisions for you, you're giving up that opportunity to express yourself.
I could dedicate an entire post to this topic, and I think it can generalize to a lot more than AI, but the argument I'm making is this: as we hand over more of our lives to algorithms, we gradually lose some of what I believe makes us as humans special—the ability to draw on our previous experiences to express ourselves through what we create.
Another thing I'll note here is that while it may not be happening yet, I think AI will likely be used in the future as a medium for targeted advertising and data collection. AI companies need a consistent revenue stream to become profitable (most aren't yet), and companies like Facebook and Google have spent years refining the targeted advertising business model. This pattern has already played out with search engines, content distribution platforms, and social media—what's to stop it from coming to AI chatbots next? For a great book on this topic (mostly through the lens of social media), check out Ten Arguments for Deleting Your Social Media Accounts Right Now by Jaron Lanier.
Closing Thoughts
I want to make it very clear that I don’t think there are straightforward answers to the questions I raised in this post.
Instead, my goal in writing about AI from the perspective of someone poised to enter the field is to bring more attention to some of the issues we will have to confront as individuals, as an industry, and as a society. AI is here, and it's only going to get more prevalent, capable, and powerful in the future.
I believe that considering AI through the lens of Leaving a Clean Wake can provide some guidance: we need to ensure that AI is developed with integrity, transparency, and a lot of discussion. Because of the massive societal ramifications of AI (the massive wake that it'll leave, if you will), we need to talk a lot more about what its development should look like. How do we steer it to create more successes like AlphaFold that improve human health and well-being rather than just new advertising revenue from hacking human attention? What kind of wake will we leave with the tools we build?
[1] These articles I'm talking about are often targeted towards people who don't know much about the subject of the article, and are usually written to optimize placement in search engines and increase traffic to the site. Higher placement in search results leads to more visitors to the site, which leads to more advertising clicks and (sometimes) more data about those visitors that the website can sell. The fact that this is a viable business model is, in my opinion, a problem much larger than AI itself; generative AI just makes the cost of running such a business much lower.
[2] https://deepmind.google/science/alphafold/impact-stories/