Why “Consciousness?” Is Typically Undecidable

Under a standard computationalist framing, “Is this system conscious?” is modeled as a non‑trivial property of a computation’s extensional behavior. By Rice’s Theorem, every non‑trivial extensional (semantic) property of the partial functions programs compute is undecidable; consequently, no total algorithm can decide consciousness for all programs or implementations.

Formal backdrop: Rice’s Theorem

Let ⟦·⟧ map programs to their (partial) extensional behaviors/functions over some domain 𝒟 and codomain ℛ. A property P ⊆ (𝒟 ⇀ ℛ) is extensional if it depends only on behavior (i.e., on ⟦·⟧), not on source representation.

$$ \textbf{Rice (informal).}\;\; \text{If } P \subseteq (\mathcal D \rightharpoonup \mathcal R) \text{ is non-trivial and extensional, then } \{\, e \mid \llbracket e \rrbracket \in P \,\} \text{ is undecidable.} $$

“Non‑trivial” means ∃f,g with f∈P and g∉P. “Undecidable” means no total computable decider exists for the membership set above.
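Non‑triviality is easy to witness concretely. A minimal sketch, using the illustrative property P = “computes the constant-zero function,” which is extensional and non‑trivial:

```python
# P = "computes the constant-zero function": extensional (depends only on
# input/output behavior) and non-trivial (some behaviors are in P, some not).

def zero(n):
    return 0          # behavior of zero is in P

def ident(n):
    return n          # behavior of ident is not in P, since ident(1) = 1 != 0
```

Rice’s Theorem applies to exactly such properties: witnessing f ∈ P and g ∉ P is easy, while deciding membership for arbitrary programs is not.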

Modeling “conscious?” as an extensional property

Under computationalism, define a predicate C ⊆ (𝒟 ⇀ ℛ) such that C(⟦e⟧) holds iff the realized behavior meets a stipulated notion of consciousness. If C is neither empty nor universal, then by Rice’s Theorem {e | C(⟦e⟧)} is undecidable.

$$ C \text{ non-trivial} \;\Rightarrow\; \neg\exists\, D : \text{Prog} \to \{0,1\}.\;\; \forall e,\; D(e) = 1 \iff C(\llbracket e \rrbracket). $$

If one instead adopts an intensional definition (e.g., “the source contains token X”), decidability may return, but such tests ignore behavioral equivalence and are typically poor scientific proxies.
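To make the contrast concrete, here is a hedged sketch (the token and program names are hypothetical) of an intensional test that is trivially decidable yet distinguishes two behaviorally identical programs:

```python
# Programs represented as source strings; the intensional test only reads text.
SRC_F = "def f(n):\n    return n + 1\n"
SRC_G = "def g(n):\n    marker = None  # dead code; behavior unchanged\n    return n + 1\n"

def contains_token(src, token="marker"):
    # Decidable: a plain substring check on the source, ignoring semantics.
    return token in src

def load(src, name):
    ns = {}
    exec(src, ns)
    return ns[name]

f, g = load(SRC_F, "f"), load(SRC_G, "g")
assert all(f(i) == g(i) for i in range(100))           # extensionally equal
assert contains_token(SRC_F) != contains_token(SRC_G)  # yet the test splits them
```

The test decides a syntactic property in constant time, but it answers the wrong question: two programs with the same ⟦·⟧ receive different verdicts.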

Reduction sketch (why typical undecidability holds)

Suppose, for contradiction, that a total decider D for C existed. Given any other non‑trivial extensional property Q, we can build a computable transformation T(e) that embeds Q into “conscious?” so that Q(⟦e⟧) iff C(⟦T(e)⟧).

Sketch:
1. Construct two behavior gadgets G_c and G_¬c with C(⟦G_c⟧) = true and C(⟦G_¬c⟧) = false.
2. Given e, define T(e) so that ⟦T(e)⟧ = ⟦G_c⟧ when Q(⟦e⟧) holds and ⟦T(e)⟧ = ⟦G_¬c⟧ otherwise; the standard construction achieves this computably by having T(e) simulate e as part of its own behavior, not by testing Q directly.
3. If D decides C, then e ↦ D(T(e)) decides Q, contradicting Rice’s Theorem.

The gadgets are theoretical “witness” programs used only to carry the reduction; we do not assume an engineer can actually generate or recognize consciousness.
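The gadget construction can be sketched in code. Everything here is hypothetical scaffolding: `G_c` stands in for a behavior with C(⟦G_c⟧) = true, and the everywhere-diverging behavior is assumed (WLOG, after complementing C if needed) to fall outside C:

```python
def diverge(x):
    while True:       # the everywhere-undefined behavior; assumed outside C
        pass

def T(e, G_c):
    """Computable transformation: the behavior of T(e) equals that of G_c
    iff e halts on input 0, and is everywhere-undefined otherwise."""
    def transformed(x):
        e(0)              # loops forever when e diverges on input 0 ...
        return G_c(x)     # ... otherwise behaves exactly like G_c
    return transformed

# If a total decider D for C existed, then `lambda e: D(T(e, G_c))` would
# decide "does e halt on 0?", contradicting the undecidability of halting.
```

Note that `T` never inspects whether `e` halts; it merely composes behaviors, which is why the reduction is computable even though the question it answers is not.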

When can it be decidable? (boundary regimes)

Rice’s Theorem needs its hypotheses: a Turing‑powerful programming system and a non‑trivial extensional property. Decidability can return when either hypothesis is dropped: in finite‑state or otherwise bounded models, under resource‑limited execution, or for trivial or intensional (syntactic) definitions.
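One boundary regime can be made concrete: for finite‑state models, extensional questions reduce to graph search and are decidable. A minimal sketch (the DFA encoding is illustrative, not canonical):

```python
# "Does this DFA accept any string?" -- decidable by reachability over its
# finite state graph, unlike the analogous question for Turing-powerful programs.

def accepts_something(start, accepting, delta):
    """delta: dict mapping state -> dict mapping symbol -> next state."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        if s in accepting:
            return True           # some reachable state accepts
        for nxt in delta.get(s, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False                  # no accepting state is reachable

assert accepts_something(0, {2}, {0: {"a": 1}, 1: {"b": 2}})      # 2 reachable
assert not accepts_something(0, {3}, {0: {"a": 1}, 1: {"b": 2}})  # 3 unreachable
```

The search terminates because the state space is finite; it is exactly this finiteness that Rice’s Theorem assumes away.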

Practical posture for engineering

In Tau Translator workflows, these ideas manifest as testable specs, mutation/property tests, and guided refinement — never as a claimed universal decider.
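As an illustration of that posture, a property test samples behavior rather than deciding it, yielding evidence instead of proof. The harness below is a hypothetical sketch, not Tau Translator’s actual tooling:

```python
import random

def property_test(fn, prop, trials=1000, seed=0):
    """Check prop(fn, x) on sampled inputs; a pass is evidence, not a decision."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10**6, 10**6)
        if not prop(fn, x):
            return False   # concrete counterexample: the spec is violated
    return True            # no counterexample found in the sample

# Example spec: abs is idempotent, i.e. abs(abs(x)) == abs(x).
assert property_test(abs, lambda f, x: f(f(x)) == f(x))
```

A failing run is definitive (it exhibits a counterexample); a passing run is not, which is precisely the asymmetry the undecidability result forces on any total, terminating check.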

FAQ (extended)

What does “typically” mean?
It refers to general Turing‑powerful settings with non‑trivial extensional properties. Special‑case, bounded, or intensional definitions can be decidable.

Is this a claim about metaphysics?
No. It constrains algorithms that aim to decide a broad behavioral predicate, not whether consciousness exists.

Does pancomputationalism change this?
No — see the formal note on self‑implementation and its non‑trivial extensions: CTM ⇒ Pancomputationalism.

What This Does (and Doesn’t) Say