Post-Scarcity Achievement
This is my fourth essay for my Ethics of AI philosophy tutorial (under the tutelage of Benjamin Lang). In it, I stumble my way through arguing that not all jobs should be replaced by machines. This was not my finest work. During the tutorial we discussed the weakness of some of my arguments (like the claim that control over offspring is connected to autonomy), how I stated conjecture as truth, and how I shifted between definitions of ‘job’ at times. Relatedly, the last paragraph of the paper took the wind out of the sails of the whole argument: I was essentially saying that safety, not autonomy and values, is the real reason we shouldn’t automate all jobs. Ben’s feedback to me was to “see if you can pick out the strongest (defensible) version of whatever argument you’re making. It will be rhetorically more compelling and philosophically more interesting”.
Imagine a world where no one needs to work for ‘a living’. Goods and services necessary for survival, and for satisfying a significant share of our desires, are cheap or even free. Unfortunately, the technological progress required to bring about this world threatens values associated with meaningful work, such as a sense of purpose, mastery of a skill, social contribution, and social status (Danaher 2022). Given this issue, should we aim to replace all jobs with AI and machine automation? This paper argues that there are some jobs - specifically those tied to autonomy and the pursuit of achievement - where humans ought to remain in the driver’s seat. These jobs fall into two spheres: the internal, focused on ourselves, our society, and our relationships; and the external, focused on our understanding and exploration of the physical world. In both spheres there exist jobs that should not be automated, because doing so risks our autonomy and our values.
Achievement
Work, as commonly defined in the literature, is “any activity that is performed in return for, or in the reasonable expectation of, an economic reward” (Danaher 2022, 750). Jobs are collections of work-related tasks associated with a workplace identity, and they may be redefined or altered over time (Danaher and Nyholm 2020, 228). In a post-scarcity society, work is not a necessity, so neither are jobs. This does not, however, imply that we should automate all jobs: doing so would cost us values like autonomy and the goods of meaningful work.
Achievement is a “positive manifestation of responsibility”: instead of deserving blame, one deserves praise (Danaher and Nyholm 2020, 230). In a world where work has no instrumental necessity, achievement can rise to take its place and preserve the values associated with meaningful work. Achievements can be assessed along four conditions: the value of the output produced, the causal connection between the agent and the output, the cost of the agent’s commitment to producing the outcome, and the voluntariness of the agent’s actions (Ibid., 231). Achievement can yield meaning-related goods similar to those of work, because the value of an output we voluntarily and causally create can give us a sense of purpose and a self-worth derived from contributing to society.
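To make these four conditions concrete, here is a minimal sketch in Python. It is purely illustrative: the class name, the 0-to-1 scales, and the min-based aggregation are my own assumptions for exposition, not anything proposed by Danaher and Nyholm.

```python
# Illustrative only: encoding the four conditions for assessing an
# achievement as a small data structure. The scales and aggregation
# rule are my own assumptions, not from Danaher and Nyholm (2020).
from dataclasses import dataclass

@dataclass
class Achievement:
    output_value: float       # value of the output produced (0..1)
    causal_connection: float  # how directly the agent caused the output (0..1)
    commitment_cost: float    # cost of the agent's commitment (0..1)
    voluntariness: float      # how freely chosen the actions were (0..1)

    def degree(self) -> float:
        """Toy aggregate: an achievement is only as strong as its
        weakest condition, so take the minimum."""
        return min(self.output_value, self.causal_connection,
                   self.commitment_cost, self.voluntariness)

# A math-explainer video in a post-scarcity world: the output's value drops
# (AI can generate rivals on demand) while the other conditions hold.
video = Achievement(output_value=0.4, causal_connection=0.9,
                    commitment_cost=0.8, voluntariness=1.0)
print(video.degree())  # 0.4: the diminished output value caps the achievement
```

The min encodes one intuition behind the conditions: if any one of them collapses (say, an entirely involuntary act), the achievement collapses with it, however valuable the output.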
There are jobs in a post-scarcity world which, if automated, would endanger human autonomy. Autonomy is a foundational value in many eminent philosophical theories, so we should seek to protect it (Christman 2020). Some of these jobs also let us enjoy the values currently associated with meaningful work. There are therefore strong reasons to avoid automating them. The following sections examine specific examples in the external and internal spheres.
The External Sphere
“We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard” - JFK. A post-scarcity scenario invites us to explore our physical world with the aid of intelligent, high-agency machines, expanding the scope and scale of consciousness. It is very plausible that people will plan and execute journeys into space: once they have the means, adventurous spirits have shown throughout history their desire to explore. Which job in exploration could compromise human autonomy if automated? That of the principal investigator: the leadership role of determining where humans should explore. If this job were automated, we would have a scenario of order-following collaboration, like an Amazon warehouse worker following instructions from an algorithm (Danaher and Nyholm 2020, 233). If humans are instructed by machines to explore a certain place, and they blindly follow those instructions, we have lost the ability to question and reason about decisions that deeply affect us. We need people in leadership positions to question, debate, explain, and explore the options given to us by intelligent systems so that we can maintain our autonomy as a species.
Similarly, the jobs of principal investigators in the sciences and other branches of human knowledge should not be automated. The number of paths research can take grows exponentially, branching at every decision, so for research that could influence us, humans should guide AI systems and choose which questions to ask in order to maintain our autonomy. Humans will shift from doing science to guiding, interpreting, and governing it, with these core roles: purpose setting (deciding what questions to ask), ethical context (navigating legal issues and defining boundaries), contextual synthesis (interpreting machine-made discoveries and communicating them to other humans), and orchestration (coordinating agent swarms) (Weisser 2025).
The job of communicating science is especially interesting: fully automating it would hurt our autonomy. If we as a society cannot understand what new discoveries reveal, or the trade-offs between the options that discovery-enabled technologies open up, our autonomy is threatened. One could argue that we are seeing this now with students (and teachers) using AI in ways that are detrimental to learning. Crucially, human-made explanations have a valuable property that AI-generated ones do not. When a human reads a human-made explanation, they can think “if they can understand it, so can I!”, because reader and author share very similar biological hardware. A machine-generated explanation lacks this property, which is one reason to value human-made explanations even in an age of inexpensive AI explanations. Another property AI explanations lack relates to ethos: the character and credibility of the speaker. One could argue that a model’s personality and its performance on benchmarks amount to character and credibility. However, the current paradigm of LLMs is built around mostly one-off conversations, which feel lacking in credibility and continuity compared with humans.
Through this job of communication, we can also see opportunities for achievement that let us realize the values associated with meaningful work. One specific example involving scientific communication is the YouTube channel 3Blue1Brown by Grant Sanderson, a prolific creator of visually intuitive, engaging, and helpful videos on topics in math. The post-scarcity age would reduce the value of these videos, because AI models could generate them on demand. However, the other properties of achievement would stay mostly intact: Grant Sanderson’s causal contribution, the cost of his commitment, and the voluntariness of his actions would remain much as they are now. The job would also still allow him to enjoy the values associated with meaningful work. His sense of purpose, creating the best explanations of math concepts, would remain. He would still enjoy the journey of mastering the skills of explanation. He would still contribute to society by providing another perspective on how to understand math, although his contribution could be said to diminish if a hundred great AI-generated videos on the same topic were already out there. Similarly, his social status would benefit from this achievement, though perhaps less than if other videos had not been available beforehand. Sanderson’s job, and others like it, should not be automated away: they help protect human autonomy, have valuable properties AI-generated explanations lack, and allow meaning-related goods to be enjoyed.
The Internal Sphere
Which jobs internal to society should avoid automation? Like the role of principal investigator, the job of politicians should not be automated, though their role and responsibilities may change. A post-scarcity world could contain superintelligent AI agents that ensure you understand the consequences of your vote and help you vote in a way that aligns expected outcomes with your preferences. In this scenario, politicians wouldn’t have to spend time convincing people of the merits of their platform; instead they would focus solely on crafting policies on how best to use collective resources to further the interests of society. If the core part of this job, leadership, were automated, humanity would risk its autonomy in the same way that Amazon warehouse workers lose autonomy when following algorithmic instructions. Handing control of humanity’s direction to non-human systems endangers our autonomy because we can no longer be said to be self-governing. Ensuring that AI systems remain aligned with humans requires humans in the loop to act as feedback mechanisms. The job of politicians also contains meaning-related goods: leading society in addressing its problems and future prospects instills a sense of purpose and involves significant social contribution.
Another job that should avoid automation is child care. Like a parent who cedes their responsibilities to an iPad, ceding responsibilities to AI systems may bring a short-term benefit (the child stops misbehaving) at a long-term cost (the child doesn’t learn vital social skills). Our offspring are, in a sense, biological continuations of ourselves, so a lack of influence over them could be thought of as harming our ability to self-govern. This becomes worse when we consider our future reliance on them (less material in a post-scarcity world, perhaps, but still emotional): neglecting their upbringing now has direct consequences for us later. Child care is also a source of meaning-related goods. Empowering children to live good lives instills a sense of purpose, and well-educated, well-rounded children are profoundly important social contributions: they are the next generation of society. Automating this job therefore both harms our autonomy and deprives people of meaning-related goods.
The job of professional game players should not be replaced by machines. The issue is not creating machines that play games like chess extremely well; it is the integrity of games played by humans and the maintenance of an environment in which achievements can be accurately assessed. Professional game players represent a pinnacle of achievement in a post-scarcity world, because game-playing has the property that if “all instrumental goods are provided, it would be everyone’s primary pursuit” (Hurka 2006, 220). If professional game players were replaced by machines, games would lose the environment that allows us to pursue and assess the achievements that yield meaning-related goods in a post-scarcity world. People at the high end of the achievement spectrum act as landmarks for others to aspire towards, and they redefine the limits of human performance. In this way, they provide a platform through which everyone can enjoy meaning-related goods: mastery of the skills associated with the game, contribution to society by pushing those limits further, and social status for performing at a level known to be impressive.
One way to formalize the notion of a game is as three elements: a prelusory goal (an aim describable independently of the game), constitutive rules (rules that forbid the most efficient means to the prelusory goal), and a lusory attitude (an acceptance of the rules in order to make the game possible) (Ibid., 219). Cheating clearly violates the constitutive rules of a game played under the assumption of no outside assistance. If we were to replace professional players with machines, we would be violating the usually implicit rule that players are human; essentially, we would be playing a different game. We have different competitions for men and women in sports, and we don’t ‘replace’ the best woman with a man whose performance is higher. Analogously, we should not replace human gamers with machines, but instead hold separate machine-assisted and human-only competitions. In this way we maintain an environment in which human achievement can still be assessed and the meaning-related values associated with achievement can be enjoyed.
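As a compact illustration of this formalization (the names below are my own, chosen purely for exposition, not taken from Hurka), the three elements and the implicit human-player rule can be written down as a small data structure:

```python
# Illustrative only: Suits-style game elements as data. The names are
# my own assumptions for exposition, not from Hurka (2006).
from dataclasses import dataclass

@dataclass
class Game:
    prelusory_goal: str            # an aim describable independently of the game
    constitutive_rules: list[str]  # rules forbidding the most efficient means
    lusory_attitude: bool = True   # players accept the rules to make play possible

chess = Game(
    prelusory_goal="reach a position where the opposing king cannot escape capture",
    constitutive_rules=[
        "move pieces only according to their legal moves",
        "no outside assistance",
        "players are human",  # usually implicit; automation violates it
    ],
)

# Dropping the implicit human-player rule yields a different game, not a
# better-played version of the same one.
machine_chess = Game(
    prelusory_goal=chess.prelusory_goal,
    constitutive_rules=[r for r in chess.constitutive_rules
                        if r != "players are human"],
)
print(machine_chess == chess)  # False: strictly, a different game
```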
Alignment vs. Efficiency
If AI is faster and better than the best humans at leading exploration and research, proposing plans for society’s future, caring for our children, and playing games - as measured by benchmarks we create - doesn’t it make sense to trade some of our autonomy for the efficiency gains and benefits we would enjoy? Shouldn’t every job that AI can do better than humans be automated? This trade is a short-term gain but a long-term risk. While AI may be aligned with our values at the moment the trade occurs, values shift over time, and we need ways to ensure those changes are reflected in systems with immense influence over our lives. We also need to ensure people still have access to ways of achieving well-being. We can do both by keeping humans in the driver’s seat of jobs that are crucial to our autonomy, like leading researchers and politicians, and by preserving jobs whose automation would damage the environment from which we derive meaning-related goods, like professional gaming.
References
- Christman, John. 2020. “Autonomy in Moral and Political Philosophy.” Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Last modified January 9, 2020. https://plato.stanford.edu/entries/autonomy-moral/.
- Danaher, John. 2022. “Automation and the Future of Work.” In The Oxford Handbook of Digital Ethics, edited by Carissa Véliz. Oxford: Oxford University Press. https://academic.oup.com/edited-volume/37078/chapter/337810502.
- Danaher, John, and Sven Nyholm. 2020. “Automation, Work and the Achievement Gap.” AI and Ethics 1 (3): 227–237. https://doi.org/10.1007/s43681-020-00028-x.
- Hurka, Thomas, and John Tasioulas. 2006. “Games and the Good.” Proceedings of the Aristotelian Society, Supplementary Volumes 80: 217–264. https://www.jstor.org/stable/4107044.
- James, Aaron. 2020. “Planning for Mass Unemployment: Precautionary Basic Income.” In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 154–183. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780190905033.003.0007.
- Weisser, Vincent. 2025. Presentation on decentralized science, AI, Philosophy, and Innovation Seminar, University of Oxford, May 20, 2025. Prime Intellect.