1 General Education Core Curriculum Division, Seigakuin University, 362-8585 Saitama, Japan
Abstract
Knowledge Organization (KO) is changing rapidly in the age of artificial intelligence (AI). AI now plays a growing role in organizing and classifying documents, while KO principles are being explored for potential application to make AI more fair, transparent, and effective. These changes raise major ethical questions. KO has never been neutral: the way knowledge is labeled and arranged shapes what is seen as legitimate, influences inclusion or exclusion, and reflects cultural values. AI can intensify these effects by creating and applying categories without clear human oversight. While KO ethics research has addressed issues such as bias, intellectual freedom, and cultural representation, it has rarely applied utilitarian approaches, especially in modern forms that combine multiple values and protect rights. This gap matters because AI-based KO systems face complex governance challenges. They need to coordinate the actions of human and machine agents, balance competing values, and plan for long-term impacts. Modern, pluralist utilitarianism offers tools for integrating multiple goals, adapting to change, and maintaining coherence across institutions. This paper argues that such utilitarian coordination is structurally necessary for large-scale, multi-agent KO systems but must be combined with rights-based, virtue, and care ethics to protect core values and ensure fairness, legitimacy, and inclusion in an AI-mediated knowledge environment.
Keywords
- ethical theory
- artificial intelligence
- utilitarianism
- deontology
- virtue ethics
- ethics of care
Knowledge Organization (KO) as a field is undergoing profound transformation in the algorithmic age. This transformation operates in two interconnected directions: artificial intelligence (AI) is increasingly deployed to enhance KO systems and processes (Golub, 2021); at the same time, KO principles are being applied to improve the performance, fairness, and transparency of AI (Greenberg et al., 2021). Both trajectories remain in flux (Jaillant et al., 2025), and their long-term implications for cultural memory, epistemic authority, and public knowledge ecosystems are still uncertain.
KO systems and processes have never been ethically neutral. The ways in which knowledge is named, categorized, and arranged shape what is recognized as legitimate knowledge, reflect cultural assumptions, and reinforce or challenge existing power structures (Olson, 2002). Decisions about vocabulary design, conceptual boundaries, and subject assignment have consequences for inclusion, exclusion, and the visibility of particular perspectives. These ethical dimensions are amplified by AI integration. As Berry (2025) argues, we are entering an “algorithmic condition”, in which AI does not merely follow human-designed rules but autonomously generates patterns of meaning. These systems can create categories, assign labels, and reframe our perception of reality, often without clear human authorship or oversight. This gives rise to pressing ethical questions: Who is accountable for these processes? How can we ensure transparency and fairness? And what does it mean to place epistemic trust in knowledge that is produced and structured by machines?
While KO ethics scholarship has grown significantly (Beghtol, 2005), it exhibits important theoretical limitations in addressing AI-era challenges. Most studies focus on specific ethical problems, such as cultural representation (Bair, 2005), rather than systematic frameworks for multi-agent governance. When ethical theories are employed, they typically draw on deontological rights-protection or care ethics approaches (Zhitomirsky-Geffet and Hajibayova, 2020). Systematic engagement with utilitarianism, especially in its more sophisticated, pluralist forms, is notably absent from this literature.
This theoretical gap is particularly significant given the apparent conceptual fit between AI-era KO challenges and utilitarian approaches. The complexity of AI-based KO systems creates distinctive governance challenges: systems must coordinate the actions of multiple human and machine agents, operate in dynamic and uncertain information environments, and plan for diverse agents with varying needs across extended temporal horizons. These conditions structurally require coordination mechanisms capable of weighing competing values, allocating resources effectively, and maintaining coherence across institutional levels. In this respect, the coordination logic of utilitarianism is particularly well-suited: it offers a structured decision-making framework that can integrate multiple goods while remaining adaptable to changing circumstances.
Utilitarianism itself has evolved far beyond its classical formulations (de Lazari-Radek and Singer, 2017). Contemporary developments include pluralist accounts of well-being that incorporate non-hedonic goods, rule-based approaches that safeguard rights and trust, and reasoning at the level of institutional design, which focuses on creating and maintaining decision-making procedures rather than on constant calculation in individual cases (Goodin, 1995). These refinements address many of the limitations identified in other ethical traditions, and they align closely with the institutional needs of AI-based KO systems.
This paper addresses this theoretical gap through an institutional analysis showing why modern utilitarian coordination mechanisms, which link institutional design and operational practice, are structurally necessary for ethical governance in AI-based KO systems. Our contribution is threefold: (1) to demonstrate the conceptual adequacy of utilitarian approaches for multi-agent coordination challenges, (2) to show how utilitarian frameworks can integrate insights from other ethical traditions, and (3) to provide theoretical foundations for institutional design in complex knowledge ecosystems.
The aim is not to prescribe a normative model for immediate adoption, nor to present an empirical evaluation of existing systems, nor to provide a comprehensive review of contemporary philosophical debates in utilitarian theory. Rather, it is to clarify the structural conditions under which such mechanisms are necessary for coherent and ethically responsive decision-making in large-scale human–machine knowledge ecosystems, and to develop theoretical foundations rigorous enough to guide institutional design without philosophical elaboration beyond what such governance requires. This analysis proceeds from the premise that the increasing integration of AI into KO is a trajectory whose scope and governance may be shaped, but which is unlikely to be reversed in the foreseeable future. If this trajectory can be influenced but not fundamentally avoided, the central question becomes how to ensure that such systems remain ethically coherent and socially legitimate at scale.
Over the past three decades, KO communities have become increasingly aware of the ethical dimensions of their work (Smiraglia, 2015). This awareness has generated substantial scholarship addressing issues such as cultural bias in classification systems, the marginalization of minority perspectives, and the need for more inclusive vocabulary development (Fox and Olson, 2012). These contributions have established the now widely accepted view that KO is never merely technical, but always ethically and culturally situated (Hjørland, 2020).
Recent research shows that many KO systems and processes continue to marginalize underrepresented communities through biased structures and outdated terminology. One common issue is the scattering or erasure of topics related to Indigenous communities; to lesbian, gay, bisexual, transgender, queer (or questioning), intersex, asexual, and other sexual and gender identities (LGBTQIA+); and to other minoritized groups, making these resources harder to locate and less visible (Buente et al., 2020; Howard and Knowlton, 2018). Some scholars have emphasized the importance of process and participation in ethical KO work, arguing for more inclusive, iterative, and community-based approaches (Beghtol, 2005). Ethical challenges are also evident in professional practice: catalogers have expressed the need for a shared code of ethics and for greater attention to the social impact of classification decisions (Snow and Shoemaker, 2020). Real-world examples illustrate these challenges. The Library of Congress (LC) revised its subject heading “Illegal aliens” after public protests (Wikipedia, 2025). These cases show that KO must move beyond narrow and outdated worldviews, embracing diverse epistemologies and ethical commitments.
Despite growing ethical attention in KO, much existing work (e.g., Berman, 1971; Choi, 2022) focuses on identifying problems such as bias and marginalization without drawing on formal ethical theories to ground institutional responses. Some scholars, however, have begun to address this gap. Mai (2013) proposes a practice-based framework grounded in MacIntyre’s virtue ethics, situating ethical decisions within professional contexts rather than abstract ideals. Bair (2005) draws on just-consequentialism and information ethics to propose a cataloging-specific code that emphasizes intellectual freedom, cultural inclusivity, and accountability. Zhitomirsky-Geffet and Hajibayova (2020) introduce more pluralistic and context-sensitive ethical KO models grounded in the ethics of care. Ridi (2013) outlines core values (intellectual freedom, professionalism, and social responsibility) as a normative base for KO ethics, and suggests adding privacy and intellectual property.
Fox and Reece (2012) are unusual in explicitly discussing multiple ethical theories in the KO context, including Kantian deontology, Rawlsian justice (contractualism), feminist care ethics, Derridean ethics, pragmatism, and utilitarianism. Their treatment of utilitarianism emphasizes its consequence-oriented character, noting that “In a practical setting like IO [information organization], results matter, and a functioning system that promotes user satisfaction is a priority. Thus, consequences of justice, care, hospitality, practical efficacy, and so forth must be regularly monitored and maintained through iterative feedback and testing mechanisms” (pp. 381–382).
They conclude with a hybrid model that integrates the strengths of these perspectives, aiming to design systems that ensure access without harm through a balance of care, inclusiveness, rights, and harm-avoidance, alongside attention to consequences. However, their discussion does not engage with contemporary developments in utilitarian theory, nor does it consider whether utilitarianism might offer distinctive structural advantages for coordinating the complex, multi-agent, and temporally extended decision-making processes characteristic of AI-based KO systems. This gap motivates the following analysis of utilitarian coordination mechanisms.
AI-based KO systems operate in complex environments where multiple agents, diverse values, and long-term impacts intersect. Governance in such contexts requires more than efficient day-to-day operations: it entails creating and maintaining institutional rules, allocating resources transparently, and designing processes that function consistently across local, national, and global infrastructures. These conditions generate coordination challenges that demand systematic methods for balancing competing claims, adapting to change, and ensuring coherent decision-making at scale. Among the available approaches, utilitarian structures emerge not as arbitrary preferences but as structurally necessary responses to the multi-agent, multi-value, and temporally extended nature of AI-based KO systems, because they provide coordinated methods for evaluating actions and policies according to their overall contribution to well-being.
AI-based KO systems require coordination mechanisms that operate across multiple organizational levels. Building on Hare’s (1981) distinction between critical and intuitive moral reasoning, institutional design decisions generate operational protocols, while operational experience informs institutional policy revision. At the design level, policies are shaped by systematic evaluation of long-term impacts, stakeholder needs, and cross-value integration. At the operational level, professionals apply these policies through standardized procedures that embed prior institutional reasoning, while also maintaining review mechanisms for exceptional cases. In this way, operational practice not only ensures efficiency and predictability but also serves as a feedback channel that signals when institutional rules require reassessment. This structured interaction between levels allows governance systems to balance stability with adaptability, ensuring that complex ethical judgments are addressed at the level where they can be handled most appropriately.
Goodin’s (1995) account of utilitarianism as a public philosophy explains why utilitarian patterns emerge in institutional contexts even if they seem ill-suited to personal relationships. While utilitarian calculation may appear impersonal in individual moral decisions, the systematic evaluation of policies, resource allocation, and institutional design requires the impartial, evidence-based reasoning that utilitarian approaches provide. As Goodin (1995, p. 8) observes, “The strength of utilitarianism, the problem to which it is a truly compelling solution, is as a guide to public rather than private conduct. There, virtually all its vices—all the things that make us wince in recommending it as a code of personal morality—loom instead as considerable virtues”.
These coordination mechanisms link institutional design and operational practice through bidirectional processes in which operational issues that reveal systemic bias or harmful outcomes initiate institutional-level review, and policy changes at the institutional level cascade into revised operational protocols. They encompass the generation of operational protocols based on systematic value evaluation, the identification of cases in operational practice that require institutional-level judgment, and the evolution of institutional policies in response to operational experience and stakeholder feedback. A common example is contested terminology in KO systems. At the operational level, catalogers apply established subject headings according to standard protocols. When community stakeholders raise concerns about harmful or outdated terms, or when evidence of systematic bias emerges, these cases are directed into the coordination mechanism for review. Broader institutional reviews involve multiple stakeholders, including professional associations, affected communities, and governance bodies, and employ impact assessment, stakeholder consultation, and deliberation on long-term consequences. The resulting policy revisions cascade back to operational practice, updating classification rules and professional training.
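A minimal sketch may make this feedback loop concrete. The following illustrative Python fragment is not drawn from any existing system; the rule store, escalation trigger, and revision step are hypothetical stand-ins for the institutional processes described above:

```python
from dataclasses import dataclass, field

@dataclass
class InstitutionalPolicy:
    """Institutional level: approved headings and terms flagged as contested."""
    headings: dict[str, str]                      # concept -> approved heading
    contested: set[str] = field(default_factory=set)
    review_queue: list[str] = field(default_factory=list)

    def revise(self, concept: str, new_heading: str) -> None:
        """Institutional review has concluded; the revision cascades to operations."""
        self.headings[concept] = new_heading
        self.contested.discard(concept)

def assign_heading(policy: InstitutionalPolicy, concept: str) -> str | None:
    """Operational level: apply standing rules without fresh calculation."""
    if concept in policy.contested:
        policy.review_queue.append(concept)       # exceptional case: escalate
        return None
    return policy.headings.get(concept)

# Hypothetical walk-through: stakeholder feedback flags a heading as harmful,
# routine assignment escalates, and deliberation yields a revised rule.
policy = InstitutionalPolicy(headings={"topic X": "Outdated term"})
policy.contested.add("topic X")
assert assign_heading(policy, "topic X") is None           # routed to review
policy.revise("topic X", "Community-preferred term")
assert assign_heading(policy, "topic X") == "Community-preferred term"
```

The point is not the code itself but the separation it enforces: routine assignments never recalculate consequences, while contested cases are routed upward and revisions flow systematically back down.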
This two-level structure preserves rule stability under normal conditions while enabling adaptive change in response to evidence and stakeholder input. It maintains operational efficiency, provides systematic mechanisms for addressing harmful outcomes, ensures responsiveness to unintended consequences, and incorporates diverse stakeholder perspectives into fundamental policy decisions.
Modern utilitarian theory, when adapted to institutional contexts, has evolved in ways that make it particularly relevant to KO governance at the level of policy design and resource allocation, moving far beyond the simplified forms often associated with classical figures. Contemporary versions debate the nature of the good, whether pleasure, preference satisfaction, or capabilities; the kinds of outcomes that matter; the scope of ethical concern across present and future generations; and the appropriate level of ethical evaluation, whether acts, rules, or institutions (Sinnott-Armstrong, 2023).
The pluralistic conception of utility addresses the classical criticism that utilitarianism reduces all values to a single dimension, neglecting goods such as knowledge, freedom, dignity, or authentic relationships. As Nozick’s (1974, pp. 42–45) famous “experience machine” thought experiment suggests, many people would reject a life of perfect simulated pleasure if it meant losing contact with reality, indicating that we value more than just pleasurable experiences. Modern utilitarianism can respond with a pluralist account of utility in which such goods are treated as intrinsic components of well-being. By recognizing qualitative differences among pleasures and incorporating non-hedonic goods into welfare or preference-based frameworks, it becomes possible to treat knowledge, cultural recognition, and justice as integral to what is maximized. This pluralistic utilitarianism explains how KO systems naturally evolve to pursue multiple objectives including retrieval accuracy, cultural authenticity, explainability, bias reduction, and community empowerment as co-constitutive of overall utility rather than as competing goals forced into a single metric.
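One schematic way to express this pluralist conception, offered purely as an illustration rather than as a formal proposal of this paper, is a weighted combination of distinct intrinsic goods:

```latex
U(x) = \sum_{i=1}^{n} w_i \, v_i(x), \qquad \sum_{i=1}^{n} w_i = 1, \quad w_i > 0
```

Here each \(v_i\) measures a distinct good, such as retrieval accuracy, cultural authenticity, explainability, bias reduction, or community empowerment, and the weights \(w_i\) are set through stakeholder deliberation rather than fixed a priori. The formula clarifies what it means for these goods to be co-constitutive of overall utility, though genuinely incommensurable values may resist placement on even a weighted common scale.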
The distinction between act and rule utilitarianism is fundamental to understanding how utilitarian reasoning can be adapted to institutional contexts. Act utilitarianism evaluates each individual action directly according to its expected consequences, requiring fresh calculation in every case. Rule utilitarianism, by contrast, evaluates general rules according to the utility of following them in the long run, allowing routine decisions to be made by reference to these rules without recalculating consequences each time (Hooker, 2023).
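Stated schematically, the two approaches locate the maximization at different levels:

```latex
\text{Act utilitarianism:} \quad a^{*} = \arg\max_{a \in A} \, \mathbb{E}[U(a)]
\qquad
\text{Rule utilitarianism:} \quad R^{*} = \arg\max_{R \in \mathcal{R}} \, \mathbb{E}[U(\text{general compliance with } R)]
```

Under the rule view, individual decisions then simply follow \(R^{*}\); the expected-utility comparison is performed once, at the level of selecting the code of rules, rather than anew for every action.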
However, as Hare (1981) argued in developing his two-level model of moral thinking, this distinction should not be treated as a rigid separation. At the critical level, reasoning resembles act utilitarianism in its capacity for case-specific evaluation, but it can also take the form of specific rule utilitarianism with rules of unlimited specificity, in which case the two approaches are effectively indistinguishable. On this basis, the critical thinker identifies general prima facie principles for use at the intuitive level, where rule-following guides everyday decisions without fresh calculation. As Hare (1981, p. 43) puts it:
Much of the controversy about act-utilitarianism and rule-utilitarianism has been conducted in terms which ignore the difference between the critical and intuitive levels of moral thinking. Once the levels are distinguished, a form of utilitarianism becomes available which combines the merits of both varieties … The two kinds of utilitarianism, therefore, can coexist at their respective levels; the critical thinker considers cases in an act-utilitarian or specific rule-utilitarian way, and on the basis of these he selects … general prima facie principles for use, in a general rule-utilitarian way, at the intuitive level.
This two-level structure allows the strengths of both varieties to be combined: the flexibility and context-sensitivity of act utilitarian reasoning at the critical level, and the stability and predictability of rule utilitarianism at the intuitive level. In institutional contexts such as AI-based KO systems, this division of moral labor enables systematic rule design informed by long-term utility, while preserving operational efficiency through established protocols.
Temporal sophistication in modern utilitarianism addresses the criticism that it demands constant and exhaustive calculation, which is unrealistic for human agents. As noted earlier in connection with Hare’s (1981) two-level utilitarianism, most everyday decisions can and should follow established moral rules, while critical calculation is reserved for exceptional or complex cases. While this time-saving function is less relevant for AI systems, which can perform act utilitarian calculations instantly, the challenge in AI-mediated KO lies rather in incorporating temporally extended consequences and evolving preferences into decision frameworks. In a similar vein, utilitarian coordination mechanisms address this by allowing most decisions to follow well-designed rules, while institutional feedback processes enable revision over time. In this way, the framework can accommodate not only exceptional cases but also the gradual evolution of collective preferences, which neither fixed rules nor static calculations can adequately capture. In KO systems, routine metadata assignments increasingly follow established protocols, while politically sensitive classifications or contested cultural terminology are handled through coordination mechanisms that enable special deliberation.
Although deontology, virtue ethics, and care ethics each contribute important insights, they face structural limitations when adapted to the scalable coordination challenges required for governing AI-based KO systems at the institutional level. Comparing utilitarianism with other major ethical theories clarifies both its distinctive strengths and how it can be complemented in KO governance.
In the Kantian tradition, deontology bases ethical judgment on following duties, rights, and rules rather than on evaluating outcomes (Alexander and Moore, 2024). An action is considered right if it follows moral laws that can apply universally, and wrong if it breaks these duties, regardless of the results. This approach strongly supports KO values such as intellectual freedom, non-discrimination, and the protection of cultural heritage. However, while deontology offers strong protection for rights, it does not provide a clear way to resolve conflicts between duties that are equally binding. For example, protecting one group’s cultural rights might limit another group’s access. In such situations, decision-making often depends on additional principles drawn from consequentialist reasoning.
Virtue ethics, drawing on Aristotle, Mencius, and modern interpreters, emphasizes the cultivation of ethical character and the exercise of practical wisdom (Hursthouse and Pettigrove, 2023). It enriches KO ethics by highlighting ethical sensitivity, professional virtues, and the shaping of institutional culture over time. However, virtue ethics lacks a general criterion for consistent decision-making across diverse contexts because institutions lack a unified ethical character, and operational decisions often require explicit trade-off criteria that virtue ethics does not fully specify. As Annas (2011, p. 50) observes, “Clearly it cannot appeal to any ‘criterion of right action’ to guide the person uncertain of what to do, since it denies that there is anything ethically substantial in common to all right actions, just as right actions, and so no informative criterion that could pick them out just as right actions”. This lack of a clear and informative criterion makes virtue ethics difficult to apply as a governance framework that must coordinate decisions at scale.
Care ethics, grounded in the work of Gilligan (1982), prioritizes attentiveness, empathy, and maintaining relationships, especially with vulnerable or marginalized stakeholders. It corrects the impersonality of large-scale governance by emphasizing trust, responsiveness, and context-specific understanding. However, care ethics is difficult to operationalize across globally distributed, culturally diverse, and temporally extended knowledge networks that often include non-human agents. In KO systems, stakeholders are often diffuse, temporally distant, culturally diverse, or non-human, as in the case of AI agents, making it difficult for care ethics alone to provide a scalable framework for weighing competing claims.
These limitations clarify the comparative advantage of utilitarianism as a primary coordination framework, as the governance structures discussed above integrate the strengths of these approaches while addressing their structural limitations.
The most effective governance model for AI-based KO systems integrates these ethical approaches hierarchically rather than competitively. From deontology, governance systems can embed rights and duties as stable institutional rules whose long-term benefits justify their protection, operating as inviolable constraints within which decision-making must occur. From virtue ethics, ethical sensitivity and professional wisdom can be translated into operational guidelines that inform the implementation of rules through the cultivation of professional culture and practical wisdom. From care ethics, attentiveness to vulnerability and relational integrity can be incorporated as weighted components of utility, ensuring these values remain central to decision-making.
Within this structure, utilitarianism provides the overarching coordination mechanism that enables systematic trade-off management, temporal optimization, and multi-agent value integration while retaining a coherent decision procedure focused on maximizing overall well-being, broadened to include fairness, recognition, and epistemic justice. This combination appears to produce transparent and adaptable coordination mechanisms in the complex, multi-agent, and multi-value environment of AI-based KO systems, explaining why institutions naturally develop utilitarian-style reasoning.
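As a purely illustrative sketch, this hierarchical integration can be modeled as constrained optimization: candidate policies that violate deontological constraints are excluded outright, and a pluralist utility, with care-related goods among its weighted components, is maximized only over what remains. The constraint checks, value names, and weights below are hypothetical placeholders, not a proposed specification:

```python
from typing import Callable

Policy = dict[str, float | bool]   # a candidate policy described by its features

def admissible(policy: Policy, constraints: list[Callable[[Policy], bool]]) -> bool:
    """Deontological layer: every rights constraint must hold; no trade-offs."""
    return all(check(policy) for check in constraints)

def utility(policy: Policy, weights: dict[str, float]) -> float:
    """Utilitarian layer: weighted sum of plural goods, care values included."""
    return sum(w * float(policy.get(good, 0.0)) for good, w in weights.items())

def choose(candidates: list[Policy],
           constraints: list[Callable[[Policy], bool]],
           weights: dict[str, float]) -> Policy:
    feasible = [p for p in candidates if admissible(p, constraints)]
    if not feasible:
        raise ValueError("No rights-respecting option; escalate to institutional review.")
    return max(feasible, key=lambda p: utility(p, weights))

# Hypothetical constraints and weights, for illustration only.
constraints = [lambda p: not p.get("dignity_violated", False),
               lambda p: not p.get("sovereignty_violated", False)]
weights = {"access_gain": 0.4, "accuracy": 0.3, "vulnerable_group_benefit": 0.3}
```

Crucially, the constraints are checked before, and independently of, the utility comparison: no accumulation of utility can buy a violation, which is what distinguishes embedded rights from merely heavily weighted ones.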
This functional complementarity balances adaptability to technological change with the ethical stability required for large-scale, human–machine knowledge ecosystems, yielding frameworks in which efficiency, inclusivity, recognition, and epistemic justice may be integrated into coordinated yet adaptable governance structures suited to the demands of contemporary KO. These patterns suggest that institutional reasoning in AI-based KO systems may not develop as pure act or rule utilitarianism, but rather as a hybrid form in which deontological constraints are embedded as stable rules while policy design and revision follow a rule utilitarian logic grounded in long-term utility. This hierarchical rather than competitive integration resonates with Fox and Reece’s conclusion (2012, pp. 381–382) that the most robust governance frameworks emerge when ethical theories are combined in complementary rather than adversarial ways.
This section examines how the increasing integration of AI into KO creates coordination challenges that push institutional governance toward utilitarian logic. By analyzing both emerging contemporary issues and hypothetical future scenarios, we demonstrate how AI’s expanding role makes utilitarian coordination not merely useful, but structurally necessary for maintaining coherent and ethically responsive KO systems. However, these same scenarios reveal why such utilitarian frameworks must operate within essential constraints and guidance provided by deontological, virtue, and care ethics.
Current developments in AI-based KO systems reveal how institutions naturally gravitate toward utilitarian reasoning when facing complex coordination challenges. Libraries are beginning to experiment with large language models (LLMs) and similar tools for generating subject headings and abstracts (Balnaves et al., 2025, Part III), and the National Library of Medicine has introduced automated indexing for Medical Subject Headings (Fernandez-Llimos et al., 2024). These implementations create ethical coordination challenges: human catalogers may find themselves reviewing thousands of AI-generated suggestions daily, requiring consistent policies for AI oversight that traditional professional ethics frameworks struggle to provide.
Institutions naturally develop coordination mechanisms in response to these challenges. At the institutional design level, libraries establish policies for AI oversight, accuracy standards, and cultural sensitivity requirements. At the operational level, catalogers apply these policies to daily AI-generated suggestions. When operational practice reveals systematic problems (such as consistent cultural missteps or accuracy issues) coordination mechanisms trigger institutional-level policy review and revision.
Institutions then face questions such as how much accuracy can be sacrificed for speed, and how many cultural missteps justify slowing the workflow. These ad hoc calculations reveal implicit utilitarian-style reasoning that a systematic utilitarian coordination mechanism would make explicit and subject to democratic oversight, rather than leaving it to individual professional intuition. Deontological professional codes offer principles like “provide accurate information” and “respect cultural differences” but give no guidance for weighing these against each other when they conflict, while care ethics emphasizes individual attention to each cataloging decision in ways that become impractical when processing thousands of AI suggestions daily.
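To illustrate how such implicit thresholds might be made explicit and auditable, consider a hypothetical triage rule for AI-generated heading suggestions; the confidence threshold and contested-term list are stand-ins for institutional parameters, not features of any actual system:

```python
def triage(suggestion: str, confidence: float,
           contested_terms: set[str], threshold: float = 0.95) -> str:
    """Route an AI-generated heading to auto-accept, human review, or escalation.

    `threshold` and `contested_terms` encode institutional policy choices
    (speed versus accuracy, cultural sensitivity) that would otherwise be
    made implicitly and inconsistently by individual catalogers.
    """
    if any(term in suggestion.lower() for term in contested_terms):
        return "escalate"        # institutional-level deliberation required
    if confidence >= threshold:
        return "auto-accept"     # routine case governed by standing rules
    return "human-review"        # professional judgment at operational level
```

Once the threshold is an explicit, documented parameter, the accuracy-for-speed trade-off becomes an object of policy review and democratic oversight rather than of individual intuition.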
Similar concerns about algorithmic bias have been extensively documented in AI systems used for information retrieval and related technologies; a recent scoping review outlines how such bias affects library and information science practices as well as information retrieval systems (Igbinovia and Danquah, 2025). While comprehensive empirical studies specifically focusing on bias within library discovery interfaces remain limited, it is plausible that analogous dynamics, such as the systematic under-ranking of works by authors from underrepresented backgrounds, could emerge within library search environments. Addressing such potential biases would necessitate continuous algorithm refinement, alongside institutional priority-setting in which objectives like bias mitigation may compete with other performance metrics, including response speed and relevance accuracy. In these circumstances, institutions may find themselves weighing trade-offs, for example marginal reductions in overall user satisfaction in exchange for meaningful improvements in representational equity and system trustworthiness. These implicit decisions invoke utilitarian logic even in the absence of explicitly articulated utilitarian frameworks.
The climate impact of AI-based KO systems presents another coordination challenge where utilitarian logic emerges naturally. The carbon footprint of training and running LLMs is becoming a significant concern for libraries committed to environmental sustainability (Houghton et al., 2024). Enhanced AI services improve user experience and research capabilities but increase environmental impact, requiring libraries to balance service quality against carbon emissions, often without clear institutional guidelines. This temporal and spatial coordination challenge naturally requires utilitarian calculation to weigh distributed harms against concentrated benefits, as environmental costs affect global populations across time while service benefits affect local users in the present.
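Stated schematically, and only to make the structure of the trade-off visible, such a deployment decision weighs a concentrated local benefit against a distributed global harm:

```latex
\text{deploy the enhanced service only if} \quad \Delta B_{\text{local}} > c \cdot \Delta E
```

where \(\Delta B_{\text{local}}\) is the expected gain in service quality for present users, \(\Delta E\) the additional emissions, and \(c\) a social cost of carbon that prices harms to distant and future populations. Every term in this inequality is contestable, and choosing \(c\) is itself a policy decision, which is precisely why such calculations belong at the institutional rather than the individual level.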
To understand how these coordination challenges might intensify, consider several hypothetical scenarios that illustrate the structural logic driving utilitarian necessity while revealing the essential role of other ethical frameworks. Although such thought experiments may seem speculative and even implausible, and debates will naturally arise over the plausibility or desirability of these future visions, they can be partly justified by the fact that current developments are already making them seem less far-fetched than they once did.
In a scenario where autonomous knowledge agents discover, classify, and preserve digital materials without human oversight for routine decisions, these systems might make thousands of daily choices about what constitutes valuable knowledge, how to classify ambiguous materials, and which preservation formats to use. When AI systems become knowledge creators rather than just tools, traditional human-centered ethics becomes inadequate as the system must coordinate the actions of multiple autonomous agents, each optimizing different objectives including preservation, access, efficiency, and cultural sensitivity. No individual human could review these agents’ decisions, making virtue ethics and care ethics impractical as primary governance frameworks. Deontological rules would help set boundaries, but systematic coordination across thousands of autonomous agents would require utilitarian optimization algorithms that can balance multiple objectives automatically. However, pure utilitarian optimization might sacrifice minority voices for majority satisfaction, requiring strong deontological constraints to prevent such outcomes. These constraints must be built into the utilitarian framework rather than competing with it, demonstrating how utilitarian coordination becomes necessary while other frameworks provide essential protective functions.
Consider a hypothetical scenario involving interplanetary knowledge networks where Earth-Mars knowledge sharing faces communication delays that make traditional collaborative decision-making impossible. Such systems would require AI agents on different planets to make independent decisions about knowledge classification and access policies while maintaining system coherence across space and time. Each planetary system would need to optimize for local needs while contributing to interplanetary knowledge coherence, requiring optimization algorithms that can balance different populations’ needs across extended time horizons without real-time consultation. Virtue ethics and care ethics could not operate effectively across such spatial and temporal distances, while the framework would need to weigh preferences of different populations across planetary systems, account for resource scarcity in space colonies, and optimize for species-wide document preservation. This scenario pushes utilitarian theory toward unprecedented complexity while highlighting how alternative ethical frameworks must provide essential constraints about what values cannot be traded off regardless of utilitarian calculations.
In a speculative post-scarcity knowledge economy where advanced AI makes document production, organization, and dissemination nearly cost-free, the primary ethical questions would become what knowledge is worth creating and how infinite documents should be organized for finite human attention. When technical constraints disappear, ethical choices become paramount as the system must coordinate across potentially billions of human and artificial agents, each with different conceptions of valuable knowledge. Traditional professional judgment could not scale to coordinate selection of knowledge resources for billions of agents, while democratic processes would become unwieldy at global scale. Utilitarian algorithms would provide the only feasible method for aggregating preferences and optimizing outcomes across massive populations, though classical utilitarianism assumes scarcity and trade-offs, requiring new utilitarian theories that can optimize for human flourishing when material constraints disappear but attention and meaning remain limited. Yet even in such scenarios, certain values like cultural identity, personal dignity, and indigenous knowledge sovereignty would require absolute protection rather than optimization, demonstrating the continuing necessity of deontological constraints within utilitarian frameworks.
These scenarios reveal clear patterns in how utilitarian coordination becomes increasingly necessary as AI systems grow more capable and autonomous, while simultaneously showing why such coordination cannot operate without essential constraints and guidance from other ethical frameworks.
Scale effects show that human judgment-based approaches work for small-scale professional decisions but become inadequate for systems making millions of daily choices affecting global populations. The progression moves from hundreds of professional librarians making thousands of daily decisions in current systems, to hundreds of AI systems making millions of daily decisions in near-term scenarios, to millions of AI agents making billions of daily decisions in more speculative cases. At each scale increase, utilitarian coordination mechanisms become more necessary while alternative approaches become less feasible as primary governance frameworks, yet they become more essential as sources of constraints and values that utilitarian optimization must respect.
Temporal extension creates similar necessities as AI systems operate across extended time horizons, making decisions whose consequences may unfold over decades or centuries. Traditional professional ethics focuses on immediate responsibilities spanning individual career spans, while AI systems should consider institutional continuity across extended periods and species-level civilization preservation across much longer timescales. Utilitarian frameworks provide conceptual tools for intergenerational coordination that other approaches cannot match at these temporal scales (Shiozaki, 2025), but the values that guide such coordination can only be coherently grounded in deontological commitments to future generations’ rights and care ethical attention to vulnerable populations across time.
The complexity of value integration grows exponentially as KO systems serve increasingly diverse global populations. The number and complexity of values requiring integration creates challenges that utilitarian frameworks can handle through mathematical methods for managing this complexity, while other approaches struggle with value incommensurability at scale. However, this structural necessity for utilitarian coordination paradoxically increases rather than decreases the importance of alternative ethical frameworks for defining which values can be traded off and which must be protected absolutely.
The increasing necessity of utilitarian coordination intensifies rather than eliminates its limitations and risks, making other ethical frameworks more rather than less important as the complexity of coordination challenges grows.
Even in highly AI-based KO systems, some values resist utilitarian calculation. Indigenous knowledge sovereignty, personal dignity, and cultural identity may require absolute protection rather than optimization. These deontological constraints must operate as non-negotiable boundaries within utilitarian optimization rather than competing alternatives, ensuring that efficiency gains do not override fundamental rights and cultural protections. As coordination challenges intensify, these constraints become more rather than less crucial for preventing utilitarian optimization from producing ethically unacceptable outcomes.
Utilitarian optimization remains ethically justified only when utility functions reflect genuine democratic consensus rather than technocratic imposition. This requirement becomes more challenging as systems scale and operate across diverse cultural contexts, requiring sophisticated participatory value-setting processes, transparent algorithm auditing, and ongoing stakeholder consultation. The democratic legitimacy of utilitarian frameworks depends on processes that draw heavily on care ethical attention to stakeholder relationships and virtue ethical cultivation of institutional cultures that prioritize genuine participation over procedural compliance.
Utilitarian aggregation risks sacrificing minority interests for majority benefit, especially when AI systems amplify existing biases. This necessitates anti-majoritarian constitutional constraints, minority veto powers over certain decisions, and bias monitoring systems that operate outside utilitarian calculation. As AI systems become more powerful and autonomous, these safeguards become more rather than less essential for preventing utilitarian coordination from undermining the very values it aims to serve.
The framework requires increasingly sophisticated safeguards to protect sacred values, ensure democratic legitimacy, and prevent minority oppression as coordination challenges intensify. These safeguards demonstrate that utilitarian coordination cannot operate alone but must function within broader ethical frameworks that preserve the distinctive contributions of deontological, virtue, and care ethical traditions.
The preceding analysis has shown how the structural conditions of AI-based KO systems, involving large-scale multi-agent environments operating across extended time horizons, naturally push governance systems toward utilitarian coordination. In both present-day and speculative scenarios, utilitarian frameworks provide the most coherent and adaptable method for integrating diverse values, managing trade-offs, and sustaining institutional consistency. Yet this apparent fit raises a further question: if utilitarian coordination mechanisms emerge so readily under AI-era conditions, might this be not only because they solve genuine coordination problems but also because they mirror the internal logic of the AI systems themselves?
This possibility complicates the picture. It suggests that utilitarian coordination mechanisms may not simply be the most suitable tool for the job; rather, the “job” itself, meaning the way AI systems define problems, process documents, and measure success, may already be framed in utilitarian terms. If this is the case, the alignment we have observed could be as much a product of shared design assumptions as of genuine normative adequacy. This recognition opens the door to a critical counter-argument.
The conceptual fit between utilitarian coordination and AI-based KO systems may appear so natural as to be self-evident. This naturalness reflects a deeper structural alignment between the dominant logic of AI development and the evaluative principles of utilitarian reasoning. Contemporary AI engineering, particularly in machine learning, is shaped by optimization paradigms in which designers define an objective function, assess performance against it, and iteratively adjust actions to maximize outcomes in specific contexts. This procedural structure more closely resembles act utilitarianism, where each decision is assessed directly in terms of expected utility, than the rule utilitarian approaches advocated for institutional governance. In both AI optimization and act utilitarian reasoning, diverse goods are aggregated into a single evaluative scale, decisions are optimized under constraints, and policies are revised based on direct outcome assessment. As the AI ethics literature on value alignment notes, these parallels reveal how technical architectures and normative frameworks can become mutually reinforcing, shaping not only how decisions are made but also how the range of possible values is framed (Gabriel, 2020). Schroth (2025) similarly argues that while utilitarianism aligns structurally with AI’s optimization capacities, it cannot serve as a universally sufficient ethical framework and must be integrated with other approaches.
The structural homology between AI’s act utilitarian-style optimization and institutional decision-making explains why utilitarian coordination patterns emerge so readily in AI-based systems. Rather than undermining the case for utilitarian coordination, this circularity strengthens the argument for rule utilitarian institutional frameworks. The challenge is not that AI systems operate through case-by-case optimization, but that this logic, if left unconstrained, can permeate institutional governance in ways that undermine democratic legitimacy and stable rule-based protections. In reinforcement learning, agents are trained to maximize expected cumulative reward through direct calculation in each situation, resembling act utilitarian reasoning at the operational level. In supervised learning, accuracy metrics, loss functions, and precision–recall trade-offs evaluate each prediction directly rather than by applying established institutional rules. When such systems are embedded into KO governance without appropriate mediation, they tend toward patterns that prioritize case-by-case optimization over rule-based stability.
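The analogy can be made precise. A reinforcement learning agent selects a policy \(\pi\) to maximize expected cumulative discounted reward,

```latex
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, r(s_t, a_t) \right], \qquad 0 < \gamma < 1
```

a structure that mirrors act utilitarian expected-utility maximization at each decision point, with the reward function \(r\) playing the role of a fixed, single-scale utility. Nothing in this objective encodes stable institutional rules or side constraints; whatever the reward function omits, the optimization ignores.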
This dynamic creates a risk of technical colonization in which AI’s optimization logic gradually erodes the institutional frameworks that protect minority rights, maintain cultural sovereignty, and ensure democratic accountability. Without safeguards, efficiency gains can crowd out the slower, more deliberative processes that confer legitimacy. Recognizing AI’s act utilitarian tendencies therefore makes rule utilitarian institutional governance more, not less, necessary. By operating at a different logical level, such governance can harness AI’s computational advantages while preventing its optimization logic from undermining fundamental values.
Rule utilitarian coordination provides the institutional architecture needed to channel AI’s operational logic toward broader social coordination goals. Stable institutional rules can embed long-term reasoning about rights protection, cultural recognition, and democratic participation, ensuring that AI’s case-by-case optimization serves purposes defined through collective deliberation. AI’s optimization capabilities then become a tool within these frameworks rather than a force that dictates them.
Effective governance requires a clear separation between levels of decision-making authority. Institutional policies establish adaptable frameworks that integrate multiple values, constraining AI behavior through rule utilitarian design that recognizes the contribution of rights, recognition, and participation to overall well-being. Within these parameters, AI systems can employ act utilitarian-style optimization to deliver capabilities beyond human deliberation while remaining accountable to democratically established priorities.
The circularity between AI optimization and utilitarian reasoning cannot be eliminated, but it can be managed through institutional design. The aim is not to exclude utilitarian reasoning from AI systems, but to ensure that it operates within democratically legitimate frameworks that protect essential values from being reduced to mere optimization variables. Rule utilitarian coordination mechanisms achieve this by treating AI’s optimization as a valuable yet bounded resource, functioning within constraints established through social deliberation. This hierarchical separation enables the benefits of AI to be realized without sacrificing the stability and accountability that democratic governance requires. Far from indicating a weakness, the circularity between AI optimization and utilitarian reasoning demonstrates why sophisticated rule utilitarian frameworks are structurally necessary for the ethical and democratic governance of AI-based systems.
Given the premise that AI’s influence on KO will continue to expand and cannot realistically be rolled back, the question is not whether to govern such systems, but how to govern them so that they remain aligned with plural ethical commitments while retaining coordination capacity. This paper has not sought to propose a new governance model for immediate adoption. Rather, it has aimed to clarify the conditions under which particular forms of coordination become both possible and normatively defensible in AI-based KO systems.
The analysis has clarified that the large-scale, multi-agent, and temporally extended conditions specific to AI-based KO systems strongly require a utilitarian coordination logic adapted from individual ethical theory to institutional contexts. Extending Hare’s (1981) two-level model of moral thinking from the scale of individual moral reasoning to the institutional domain, in which act utilitarian and rule utilitarian reasoning coexist at their respective levels, we describe a structure where systematic evaluation at the institutional design level functions like the critical level, applying case-specific assessment or specific rule utilitarianism to develop general prima facie principles. These principles then guide operational practice at the intuitive level, where established rules are followed without fresh calculation. This division of moral labor combines the adaptability and context-sensitivity of act utilitarian reasoning with the stability of rule utilitarianism, enabling the integration of diverse values, the management of trade-offs, and the maintenance of institutional coherence at scales where neither individual professional judgment nor purely rights-based or care-ethical approaches can operate effectively.
At the same time, the very naturalness of the fit between AI’s optimization paradigms and utilitarian reasoning carries significant risks. As section 5 argued, this structural homology can narrow the ethical field to what can be measured and optimized. To prevent utilitarian coordination from becoming a closed, self-justifying system, it should operate within explicit constraints drawn from deontological rights-protection, virtue-ethical cultivation of professional integrity, and care-ethical attentiveness to vulnerable communities. These complementary traditions ensure that certain values, such as cultural sovereignty, human dignity, and epistemic diversity, are protected from trade-off and that utility functions themselves remain subject to democratic revision.
What emerges from this analysis is an integrated, pluralistic framework that treats rule utilitarianism not as a comprehensive ethical theory intended to replace all others, but as the primary coordination mechanism within a governance architecture that also embeds absolute constraints and relational commitments. In this configuration, AI’s act utilitarian-style optimization becomes a bounded tool operating within democratically established institutional frameworks, harnessing computational capacities without compromising stability, legitimacy, or inclusivity. This functional integration resonates with Parfit’s (2011) convergence hypothesis, which suggests that the best-developed forms of consequentialism, deontology, and contractualism may ultimately converge on similar practical prescriptions when fully specified. In AI-based KO governance, such convergence appears through a moral division of labor: deontological values are embedded as inviolable boundaries within rule utilitarian optimization, translated into professional codes and training programs guiding intuitive-level operational decision-making; virtue ethics and care ethics operate at an attention function level, identifying overlooked stakeholders, emerging vulnerabilities, and relational dynamics that are insufficiently captured by either critical-level calculation or intuitive rule-following.
These ethical traditions do not displace one another but instead form a multi-layered, dynamic ethical architecture. While this paper does not offer a definitive normative model, it provides a theoretical foundation for understanding how institutional design in AI-mediated knowledge environments can reconcile adaptability with stability, and responsiveness with the maintenance of public trust.
Not applicable.
RS made substantial contributions to the conception and design of the study. RS drafted the article and revised it critically for important intellectual content. RS approved the final version to be published and agrees to be accountable for all aspects of the work, ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
I gratefully acknowledge the assistance and guidance of Professor Emeritus Akira Nemoto (The University of Tokyo), Associate Professor Akiko Hashizume (Jissen Women’s University), and Associate Professor Tahee Onuma (Yonezawa Women’s Junior College), whose thoughtful advice helped shape the research direction of this paper.
This work was supported by Japan Society for the Promotion of Science (KAKENHI Grant Number JP 25K15817).
The author declares no conflict of interest.
During the preparation of this work the author used the free version of ChatGPT (GPT-4o) to check spelling and grammar. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.
References
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
