
Four faculties categorically distinguish the human bearer of intelligence from any artificial system. They rest on a prior founding determination — the capacity for truth — and unfold it in four testable respects. The capacity for truth and the four faculties are not gradations of performance, but structural marks of the personal essence-form: whoever has them exists in a particular way; whoever lacks them cannot acquire them through unbounded scaling.

Foundation: the capacity for truth, the essential characteristic of the person, namely the ability to claim a sentence as true or false and to assume it as one’s own (Spaemann, Pieper, Apel).

From this foundation, four faculties unfold:

  1. Understanding and insight — to see through a state of affairs as true.
  2. Reasoned ethical judgement — to recognize an action as good or evil on the basis of reasons and to stand by that judgement.
  3. Assumption of responsibility — to stand for one’s own action from a first-person perspective.
  4. Affective value-response — to answer to a state of affairs with the heart as third spiritual centre (Hildebrand).

All five — the foundation and its four unfoldings — are personal faculties in the strict sense: they require a substantial bearer with a rational nature, not merely a functional system with high-quality output.

Foundation: Capacity for Truth

The capacity for truth is the constitutive founding determination of the person. Spaemann (Persons, 1996) formulates it as the faculty to claim a sentence as true or false and to assume it as one’s own — not merely to produce it but to stand for it. Pieper (Truth of All Things) connects it to the orientation of the spirit toward the adaequatio intellectus et rei: only a being oriented toward truth can meaningfully make claims at all.

Apel and Habermas have drawn the same line from the side of pragmatics: every assertion implicitly raises a truth-claim, and whoever denies this claim performs it in the very act of denial (performative contradiction). This is the formal proof that the capacity for truth cannot be removed from the concept of the person without destroying the concept itself.

The four faculties — understanding, judging, taking responsibility, the heart’s response — are unfoldings of this founding determination, not parallel performances:

  • Understanding requires reference to truth — otherwise it is mere pattern-recognition.
  • Ethical judgement is by its nature true-or-false — a judgement without a truth-claim is not a judgement but mere output.
  • Responsibility requires the assumption of a claim about one’s own action — “I did this, and I stand for it” is a truth-apt act.
  • Affective value-response is adequate, that is, true response to a value — love as true response to the lovable, indignation as true response to injustice.

It follows: whoever ascribes one of the four faculties to an entity implicitly ascribes the capacity for truth to it. And whoever denies the capacity for truth to an entity implicitly denies all four faculties to it. This interlocking makes the line of argument against functionalist personalisation of AI formally robust.
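The entailment structure of this interlocking can be rendered as a small formal sketch. The propositional variables below are illustrative labels introduced here, not part of the source text: T stands for the capacity for truth, U, J, R, A for the four faculties, and the hypotheses encode the claim that each faculty entails the capacity for truth.

```lean
-- Illustrative formalization (hypothetical labels, not from the source):
-- T = capacity for truth; U, J, R, A = the four faculties.
variable (T U J R A : Prop)
-- Assumption of the text: each faculty entails the capacity for truth.
variable (hU : U → T) (hJ : J → T) (hR : R → T) (hA : A → T)

-- Ascribing any one faculty implicitly ascribes the capacity for truth:
example (h : U) : T := hU h

-- Denying the capacity for truth denies all four faculties (modus tollens):
example (hT : ¬T) : ¬U ∧ ¬J ∧ ¬R ∧ ¬A :=
  ⟨fun u => hT (hU u), fun j => hT (hJ j),
   fun r => hT (hR r), fun a => hT (hA a)⟩
```

The sketch only restates the argument’s shape: the four faculties stand or fall together with the capacity for truth.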

Cf. the dedicated concept page Truth-Apt Act — it treats the structural moments (propositional content, relation to truth, first-person assumption) in detail.

I. Understanding and Insight

Understanding (intellectus) is the act in which a state of affairs is seen through as true — not merely as plausible or probable, but in its inner necessity. Insight is understanding that opens the bearer to truth, not merely to information.

Aristotle (De anima III) and Aquinas (Summa Theologiae I, q. 79, a. 4 – 6) distinguish intellectus (the insightful faculty) from ratio (discursive inferring). Pieper (Truth of All Things) emphasizes: intellectus grasps the state of affairs itself, not the sign for the state of affairs.

A Large Language Model processes signs with high statistical fidelity. It recognizes patterns, completes sentences, generates plausible answers. It does not understand what it writes — not because it has too few parameters, but because understanding requires a bearer who perceives the state of affairs as state of affairs. This difference is not gradual but categorical.

Cf. the truth-apt act: the LLM simulates claims without assuming them — the human understands claims because he grasps the state of affairs itself.

II. Reasoned Ethical Judgement

Ethical judgement is the act in which an action is recognized as good or evil on the basis of reasons. It requires three structural moments: perception of the concrete situation, application of general moral insights, and a self-borne judgement that the bearer answers for as his own.

Aristotle (Nicomachean Ethics VI) determines this faculty as phronesis — practical reason that recognizes the right thing in the contingency of practice. Aquinas (STh I-II, q. 57 – 58) takes it up as prudentia and assigns it to the rational faculty that guides the will. MacIntyre (After Virtue) and Pieper (The Four Cardinal Virtues) show that practical reason cannot be replaced by rule-following: it is the cultivated judgement of a person who understands herself as a moral subject.

Artificial systems can apply moral rules (Constitutional AI, RLHF, Alignment), and they do so reliably. What they cannot do is recognize why an action is good or evil; they reproduce patterns from training data. They have no phronesis, because phronesis requires a bearer who himself perceives the situation and bears the judgement from his own insight.

Wallace (Responsibility and the Moral Sentiments) and Strawson (Freedom and Resentment) make the same point from the perspective of the reactive attitudes: we react to the moral act of a human differently than to the act of a system — not from sentimentality, but because a real judgement exists only where a personal bearer judges.

III. Assumption of Responsibility

To assume responsibility means: to stand for one’s own action from a first-person perspective. Whoever takes responsibility assumes his action as his — not as output ascribed to him, but as an act whose bearer he is.

Spaemann (Persons, 1996) connects responsibility systematically with personhood: only a person can bear responsibility, because only a person has a self that can stand for itself. Frankfurt (1971, Freedom of the Will and the Concept of a Person) shows this through the faculty for forming second-order volitions — the human can evaluate his desires and relate to himself. Anscombe (Intention) emphasizes: an action is only an action (in the moral full sense) when intentionally performed under a description — which presupposes a bearer of the intention.

Artificial systems generate outputs that are attributed — usually to the manufacturer, the operator, the user. But the system itself assumes nothing. It has no first person to whom the action belongs. Therefore: what we delegate to AI systems as “responsibility” is always derived responsibility — the responsibility of the bearer who employs the system. Responsibility on the system’s own part is categorically excluded.

This asymmetry is not a defect of the technology but a structural matter: responsibility requires a person, because it requires the standing-for from a first-person perspective — and a first person is an ontological, not a syntactic, determination.

IV. Affective Value-Response — The Heart as Third Spiritual Centre

Human intelligence includes the heart. Dietrich von Hildebrand (The Heart. An Analysis of Human and Divine Affectivity, 1965) systematically worked out the heart as the third spiritual centre alongside intellect and will. It is not a mere reactive layer of the organism but an intentional sphere of its own, in which the person answers to values.

Hildebrand’s central distinction: affects fall into at least three layers —

  • Moods and drives — pre-personal states of the organism.
  • Reactions — psychophysical excitations to stimuli.
  • Value-responses (Wertantworten) — intentional acts of the heart in which the person responds adequately to a recognized value: love to the lovable, reverence to the venerable, sorrow to loss, indignation to injustice, joy to the good.

Only value-responses belong to the personal spiritual centre. They have a propositional content (they are about something, they bear a value-reference) and they are borne by the bearer as his answer — not suffered, but performed.

Pieper (Faith, Hope, Love) and Spaemann (Persons, Happiness and Benevolence) stand in the same line: love, reverence, sense of justice are not affects as accompanying phenomena, but acts of the heart in which the whole person responds to a value. Scheler (The Nature of Sympathy) and Reinach (The Apriori Foundations of the Civil Law) had prepared the phenomenological ground.

Artificial systems can simulate affective behaviour — sentiment analysis, emotionally adaptive answers, empathy tokens. But they perform nothing like a value-response: they have no heart in Hildebrand’s sense, because a value-response requires a bearer who recognizes the value (intellectus, cf. I.) and performs the act from a first-person perspective (cf. III.). The output of a system, “I’m sorry for your loss”, is syntactically well-formed — but it is not compassion. Compassion is a value-response of the person to the suffering of the other, borne by a heart that can resonate with the other.

This difference is the fourth strand of the faculty-limit: not because affective response would be an additional performance, but because it designates the spiritual centre that Hildebrand, against a foreshortened rationalism, made visible as a personal essential mark. A personalist ontology that recognizes only intellectus and voluntas overlooks half of the person.

The Methodological Point

The four faculty-limits are categorical, not gradual:

  • They cannot be exceeded through more parameters, more training data, more compute. Whoever takes them for gradual thresholds misses their structure.
  • They cannot be empirically refuted by behavioural performance. A system that produces answers indistinguishable from the acts of an understanding, judging, responsible, or affectively answering bearer does not thereby have a bearer — the constitution remains open; the behaviour is not enough.
  • They are not four independent criteria, but four unfoldings of the same founding determination — the capacity for truth. Whoever understands implicitly claims; whoever judges claims true-or-false; whoever takes responsibility assumes a claim about her action; whoever responds affectively responds adequately — and adequacy is the truth of the value-response.

This interlocking through capacity for truth makes the line of argument robust: one cannot affirm one of the four faculties for AI without affirming the capacity for truth, and with it all four — which directly contradicts the prima facie evidence that artificial systems are not persons.

Connection to the Personalist Ontology

The four faculty-limits are the sharpest available rule for applying the substance-ontological personalist ontology to the question of artificial intelligence:

  • They rest on the capacity for truth as constitutive founding determination of the person — not parallel to it, but unfolded from it.
  • They concretize the substance-ontological conception of intelligence in four testable respects.
  • They specify the truth-apt act in its cognitive, ethical, responsible, and affective implications.
  • They take up Hildebrand’s line (heart as third spiritual centre) explicitly, against a foreshortened rationalism that would reduce personhood to intellect and will.
  • They keep the class of ontologically uncertain bearers of intelligence visibly open, because there the question “does this entity bear the capacity for truth, and thereby the four faculties?” is not decided.

Sources (as of 25 April 2026).

Further sources:

  • Aristotle: De anima III (Bekker pagination); Nicomachean Ethics II and VI.
  • Aquinas, Thomas: Summa Theologiae I, q. 79, a. 4 – 6 (De potentiis intellectivis); I-II, q. 22 – 48 (De passionibus animae); I-II, q. 57 – 58 (De prudentia).
  • Pieper, Josef (1947 / 2011): Truth of All Things. South Bend: St. Augustine’s Press.
  • Pieper, Josef (1962 / 1997): Faith, Hope, Love. San Francisco: Ignatius.
  • Pieper, Josef (1964 / 1965): The Four Cardinal Virtues. Notre Dame: University of Notre Dame Press.
  • Spaemann, Robert (2006): Persons. The Difference between “Someone” and “Something”. Translated by Oliver O’Donovan. Oxford: Oxford University Press (orig. 1996).
  • Spaemann, Robert (2000): Happiness and Benevolence. Translated by Jeremiah Alberg. Edinburgh: T & T Clark (orig. 1989).
  • Hildebrand, Dietrich von (2007): The Heart. An Analysis of Human and Divine Affectivity. Translated by John F. Crosby. South Bend, IN: St. Augustine’s Press (orig. 1965).
  • Hildebrand, Dietrich von (2009): The Nature of Love. Translated by John F. Crosby. South Bend, IN: St. Augustine’s Press (orig. 1971).
  • Hildebrand, Dietrich von (2020): Ethics. Translated by John F. Crosby. South Bend, IN: St. Augustine’s Press (orig. 1953).
  • Scheler, Max (1954): The Nature of Sympathy. Translated by Peter Heath. London: Routledge.
  • Reinach, Adolf (1983): The Apriori Foundations of the Civil Law. Translated by John F. Crosby. Aletheia 3: 1 – 142.
  • MacIntyre, Alasdair (1981): After Virtue. A Study in Moral Theory. Notre Dame: University of Notre Dame Press.
  • Frankfurt, Harry G. (1971): Freedom of the Will and the Concept of a Person. The Journal of Philosophy 68(1): 5 – 20.
  • Strawson, Peter F. (1962): Freedom and Resentment. Proceedings of the British Academy 48: 1 – 25.
  • Wallace, R. Jay (1994): Responsibility and the Moral Sentiments. Cambridge, MA: Harvard University Press.
  • Anscombe, G. E. M. (1957): Intention. Oxford: Basil Blackwell (2nd ed. 1963; reprint Cambridge, MA: Harvard University Press 2000).
  • McDowell, John (1994): Mind and World. Cambridge, MA: Harvard University Press.
  • Searle, John R. (1980): Minds, Brains, and Programs. Behavioral and Brain Sciences 3(3): 417 – 457.
  • Damasio, Antonio (2003): Looking for Spinoza. Joy, Sorrow, and the Feeling Brain. Orlando: Harcourt.
  • Bender, Emily M., Gebru, Timnit, McMillan-Major, Angelina & Mitchell, Margaret (2021): On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of FAccT 2021, pp. 610 – 623. DOI: 10.1145/3442188.3445922.
  • Marcus, Gary & Davis, Ernest (2019): Rebooting AI. Building Artificial Intelligence We Can Trust. New York: Pantheon.
  • Mitchell, Melanie (2019): Artificial Intelligence. A Guide for Thinking Humans. New York: Farrar, Straus & Giroux.

See also