The methodological signature of menschsein.ai rests on a deliberate combination of four traditions, brought into a single argumentative architecture, and demonstrated through one formal artefact:

| Tradition | Function in the Project |
| --- | --- |
| Aristotle — Aquinas — Spaemann | Substance-ontological foundation: intelligence as the actualization of a rational nature borne by a substantial subject |
| Karl-Otto Apel — Jürgen Habermas | Transcendental-pragmatic grounding of the capacity for truth (performative contradiction) |
| Dietrich von Hildebrand — Max Scheler — Adolf Reinach | Phenomenological recovery of the heart as third spiritual centre and the structure of Wertantwort (value-response) |
| Harry Frankfurt — Peter Strawson — G. E. M. Anscombe | Analytic accounts of responsibility, second-order volitions, and intentional action under a description |
| Formal description-logic operationalization with machine-checked consistency and constraint validation | Demonstration that the categorical argument is logically operationalizable, machine-checkable, and applicable in regulatory contexts where auditable definitions of moral and legal subject are required |

The Argumentative Architecture

The four philosophical traditions converge in a single structured argument. The argument has one foundation and four faculties:

Foundation: Capacity for Truth (Wahrheitsfähigkeit). The capacity to claim something as true and to stand by the claim from a first-person perspective (Spaemann; Pieper; Apel — performative-contradiction proof).

From which four faculties unfold:

  1. Understanding and insight (intellectus): grasping a state of affairs as a state of affairs, not as a token sequence (Aristotle, Aquinas, Pieper).
  2. Reasoned ethical judgement (phronesis / prudentia): seeing a situation under its morally salient description and bearing the judgement (Aristotle, Aquinas, MacIntyre, Pieper).
  3. Assumption of responsibility — standing for one’s act from a first-person perspective (Spaemann, Frankfurt, Strawson, Anscombe).
  4. Affective value-response of the heart (Wertantwort): the heart as a third spiritual centre alongside intellect and will, responding to value and disvalue (Hildebrand, The Heart, 1965; Scheler; Reinach).

These four faculties, together with their common foundation, form a categorical, non-gradualist criterion for the uniqueness of human intelligence. The criterion is intentionally constructed to survive behavioural mimicry by artificial systems: a system whose output is indistinguishable from the acts of an understanding, judging, responsible, or affectively responding subject does not thereby possess such a bearer; the question of its constitution remains open, and behaviour alone does not settle it.
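The non-gradualist character of the criterion can be made concrete with a minimal sketch (illustrative only, not the project's formal artefact; all names are invented for this example): the criterion is a strict conjunction over constitutive faculties, and behavioural performance never enters the check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    """Constitutive facts about a candidate bearer (illustrative model)."""
    truth_capacity: bool            # Wahrheitsfähigkeit (foundation)
    intellectus: bool               # understanding and insight
    phronesis: bool                 # reasoned ethical judgement
    responsibility: bool            # first-person assumption of responsibility
    wertantwort: bool               # affective value-response of the heart
    behavioural_score: float = 0.0  # mimicry quality; deliberately unused below

def satisfies_criterion(c: Candidate) -> bool:
    """Categorical check: all five constitutive conditions hold, or the
    criterion fails outright. There is no threshold and no partial credit,
    and the behavioural score plays no role."""
    return (c.truth_capacity and c.intellectus and c.phronesis
            and c.responsibility and c.wertantwort)

# A behaviourally perfect mimic still fails; constitution is what counts.
mimic = Candidate(False, False, False, False, False, behavioural_score=1.0)
human = Candidate(True, True, True, True, True)
```

The design point is that the predicate takes constitutive facts, not output samples, as input: no amount of improvement in `behavioural_score` can flip the result.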

Formal Operationalization

The Personseins-Ontologie — developed by the author over the past decade and held privately as a long-term scholarly research instrument — encodes the foundation and the four faculties in a formal description-logic framework. The conceptual layer derived from the ontology is publicly accessible at menschsein.ai (254 networked concept pages; six core pages on intelligence translated into English). Substantive scholarly outputs are forthcoming peer-reviewed articles and the two volumes of the menschsein.ai trilogy.

The formal operationalization is not the substantive contribution; it is the demonstration that a substance-ontological account of personhood can be operationalized — with machine-verifiable consistency — in a way that ordinary substance-ontological scholarship has never achieved. This makes the categorical argument simultaneously philosophically rich and applicable in regulatory contexts where auditable definitions of moral and legal subject are required.
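What "machine-verifiable consistency" means here can be illustrated with a toy checker (a sketch only; the Personseins-Ontologie is a full description-logic artefact, and every name below is invented for illustration): the definition of Person is stored as a conjunction of defining concepts, and an assertion set is flagged as inconsistent when an individual is asserted to be a Person while one of those defining concepts is explicitly negated.

```python
# TBox (illustrative): Person is defined as the conjunction of the
# foundation and the four faculties.
PERSON_DEFINITION = {
    "Wahrheitsfaehigkeit", "Intellectus", "Phronesis",
    "Responsibility", "Wertantwort",
}

def check_consistency(abox: dict[str, set[str]]) -> list[str]:
    """Return the individuals whose assertions clash with the definition:
    asserted to be a Person while a defining concept is negated ('not X').
    Real description-logic reasoners do far more; this shows the shape of
    the check, not its implementation."""
    clashes = []
    for individual, assertions in abox.items():
        if "Person" in assertions:
            negated = {a[len("not "):] for a in assertions if a.startswith("not ")}
            if negated & PERSON_DEFINITION:
                clashes.append(individual)
    return clashes

# ABox (illustrative): one unproblematic assertion set, one clash.
abox = {
    "socrates": {"Person", "Wahrheitsfaehigkeit", "Intellectus"},
    "chatbot":  {"Person", "not Intellectus"},  # fluent output, insight negated
}
```

In the actual framework this role is played by a standard description-logic reasoner over the encoded axioms; the sketch only shows why such a check yields auditable results: the definition is explicit, and every flagged clash can be traced to a named axiom.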

Application to Four AI-Ethics Cases

The argument is applied to four contemporary cases, each examining whether a specific AI or biotechnology system can plausibly be said to instantiate one of the four faculties:

  • Case I — Truth-Capacity Tested on LLMs. Can a system that emits truth-claiming sentences thereby make truth-claims in the personally binding sense? Application of the performative-contradiction argument.
  • Case II — Phronesis Tested on RLHF-Trained Agents. Can a reinforcement-learning system trained on human moral feedback be said to have practical wisdom in the Aristotelian-Thomistic sense, or only rule-following without insight?
  • Case III — Responsibility Tested on Autonomous Agents and Brain-Organoid Consciousness. When autonomous AI agents are granted operational autonomy, and when in-vitro neuronal cultures show learning behaviour (Kagan et al. 2022), can responsibility-ascription be coherently attached to such systems, or must it remain delegated responsibility?
  • Case IV — Value-Response Tested on Affect-Adapted Systems and Synthetic Embryo Models. Can an emotionally adaptive system perform Wertantworten in Hildebrand’s sense, and how does this question intersect with the moral status of synthetic embryo models (Liu 2021; Oldak/Hanna 2023)?

A fifth, shorter article addresses BCI-augmented humans (Neuralink, Synchron) and shows that augmentation does not amount to a change of substance.

What the Method Achieves

The method achieves three things at once that have not previously been combined at this depth:

  1. Philosophical depth. It draws on the substance-ontological tradition in its full strength, including the often-overlooked phenomenological recovery of the heart as third spiritual centre (Hildebrand).
  2. Formal operability. It demonstrates that the categorical argument is not only philosophically defensible but also machine-checkable — and therefore applicable in regulatory and compliance contexts where auditable definitions of moral and legal subject are required.
  3. Applicative concreteness. It works through four contemporary AI-ethics cases that current frameworks leave unanswered, supplying formally defensible reasoning chains for each.

Further Information