Consciousness May Be More Than the Brain's Output — It May Be an Input, Too


From a scientific standpoint, investigating consciousness resembles attempting to peer directly into a black hole's singularity while orbiting at a safe gravitational distance. From that vantage point, we can observe the surrounding distortions — superheated accretion disks spiraling inward, gravitational waves rippling outward into the cosmos — yet the singularity itself remains perpetually shielded behind the event horizon, inaccessible and inscrutable.

This astronomical analogy maps surprisingly well onto the consciousness problem. As external observers armed with third-person scientific instruments, we have no direct pathway into the subjective experiences of other beings. When neuroscience trains its most sophisticated tools on the biological architecture we associate with mental life — principally the brain, but also the broader nervous system — what registers are the signatures of physical processes: electrochemical gradients, neurotransmitter cascades, synaptic tissue. The phenomenological interior — the texture of grief, the warmth of affection, the architecture of personal aspiration — remains accessible only to its host. Our inner universe is, by definition, private.

The prevailing scientific consensus frames consciousness as an emergent property of neural computation — a kind of sophisticated software instantiated through the brain's biological hardware. A provocative new theoretical framework challenges this one-directional model, proposing instead that consciousness doesn't merely arise from neural activity but actively participates in shaping it — and critically, that this participation leaves measurable physical signatures.

The hardest problem in science

The contents of subjective mental experience and the objectively measurable fabric of physical reality appear to occupy categorically separate domains. Since René Descartes formally articulated the mind-body problem in the 17th century, Western intellectual tradition has wrestled with the question of how these two seemingly incommensurable dimensions of existence could possibly interact. This philosophical tension has historically driven thinkers toward reductive solutions — collapsing one domain into the other by asserting that either "mind" or "matter" constitutes the more fundamental substrate of reality.

Contemporary philosophers including Joseph Levine and David Chalmers have reformulated this ancient divide as the "explanatory gap" and the "hard problem" respectively — precise terminological upgrades that nonetheless point to the same enduring puzzle: descriptions of neural mechanisms and descriptions of felt experience seem to belong to fundamentally different conceptual registers.

Despite the persistence of this explanatory chasm, modern neuroscience has achieved substantial gains in charting the neural correlates of consciousness (NCC) — identifying the specific brain regions and activation patterns that reliably co-occur with particular conscious states. Yet correlation remains categorically distinct from causal explanation. Knowing which neural configurations accompany conscious experience tells us neither why subjective experience exists as a phenomenon nor whether it exerts any genuine causal leverage over physical events.

Dominant contemporary theories of consciousness typically attempt to close this gap by equating conscious experience with some quantifiable neural property. This theoretical maneuver carries a subtle but significant conceptual cost: it effectively substitutes measurable neural metrics for subjectivity itself. Consciousness is recast as a number, a structural configuration, or an informational pattern — and in that translation, its defining characteristic, the irreducible first-person quality of experience, evaporates through the explanatory seams.

If we take seriously the phenomenological evidence that consciousness is not reducible to physical description — treating it neither as illusion, epiphenomenon, nor mere cognitive projection — what theoretical architecture allows us to give consciousness a legitimate seat at the scientific table?

Informational entropy

One intellectually fertile frontier involves examining informational entropy dynamics within neural systems. Originally formalized by Claude Shannon in 1948 as a mathematical measure of uncertainty and informational unpredictability within communication systems, Shannon entropy was subsequently appropriated by neuroscience as a quantitative lens for characterizing the variability of neural activity across organizational scales — from individual neurons to large-scale brain networks.

Elevated neural entropy at the whole-brain level can be pictured as a turbulent storm system sweeping through cerebral networks, signaling a richer, more dynamically unpredictable state of neural processing. Conversely, reduced entropy corresponds to a calmer, more predictable neural forecast. Shannon entropy thus furnishes neuroscientists with a rigorous instrument for tracking the informational turbulence of brain dynamics across time.
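Shannon's measure is simple enough to state and compute directly: H = −Σᵢ pᵢ log₂ pᵢ, summed over the probabilities of a system's possible states. As a minimal sketch (the four coarse "brain states" and their occupancy fractions below are invented for illustration, not drawn from any study), a brain whose dynamics linger in one state yields low entropy, while one that visits many states evenly yields high entropy:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

# Hypothetical occupancy of four coarse "brain states" over a recording:
stable    = [0.90, 0.05, 0.03, 0.02]   # dynamics stuck in one state
turbulent = [0.25, 0.25, 0.25, 0.25]   # dynamics visiting all states evenly

print(shannon_entropy(stable))      # ≈ 0.62 bits: a calm, predictable forecast
print(shannon_entropy(turbulent))   # = 2.00 bits: maximal unpredictability
```

The same formula, applied to finer discretizations such as spike counts, EEG amplitude bins, or network microstates, underlies the whole-brain entropy measures the article goes on to discuss.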

Applying information-theoretic frameworks to consciousness research is not itself a novel enterprise. During the 1990s, neuroscientists Giulio Tononi and Gerald Edelman used entropy-based complexity measures to characterize conscious integration in the brain, groundwork that Tononi later developed into his Integrated Information Theory (IIT), which identifies consciousness with the degree of integrated informational complexity within a neural system.

More recently, Robin Carhart-Harris, a neuroscientist at Imperial College London, advanced the Entropic Brain Hypothesis (EBH), proposing that the entire spectrum of conscious states — from surgical anesthesia through ordinary waking cognition to psychedelic experience — can be systematically mapped onto a gradient of neural entropy values. Psychedelic states occupy the high-entropy extreme; deep anesthesia anchors the low-entropy end.

An emerging theoretical framework, however, reframes the interpretive significance of these entropy measurements entirely: punctuated neural entropy spikes may not simply serve as passive reflections of consciousness levels but may instead represent the physical traces of consciousness actively exerting causal influence over brain dynamics.

This framework is known as Irruption Theory, developed by cognitive scientist Tom Froese at the Okinawa Institute of Science and Technology. Drawing on a constellation of contemporary neuroscientific findings, Froese observes that deliberate conscious engagement — discriminating subtle environmental features, working through demanding cognitive problems, or mobilizing creative ideation — consistently generates measurable entropy surges that resist complete explanation through deterministic physical neural mechanisms alone.

"Cognitive effort, motor effort, effort of all kinds are associated with increased entropy production in the brain," Froese observes. "And so it's already standard practice in a way to use both thermodynamic measures and information theoretic measures of entropy as signatures of mental work."

Rather than attributing these entropy surges exclusively to increased metabolic heat generation or to unmeasured physical variables operating within the neural substrate, Irruption Theory reinterprets them as the phenomenological "footprints" of conscious agency impressing itself upon biological tissue. We may lack the instrumentation to directly observe the conscious mind interfacing with the physical brain, but we can detect the informational wake of that interaction — analogous to detecting a black hole's presence through the gravitational waves it generates rather than through direct observation.

"Froese's Irruption Theory is a novel, innovative theory of consciousness that takes phenomenology seriously within 'a robustly scientific naturalism,'" notes Robert Lawrence Kuhn, creator and host of Closer To Truth and architect of the Landscape of Consciousness — a comprehensive taxonomic resource cataloguing the theoretical landscape of consciousness research.

"Irruption Theory recruits the latest theories of brain entropy, resonances, and stochastic fluctuations within a broadly enactive worldview of embodied mind and brain-body-world interconnections."

The theoretical distinction from its predecessors is meaningful. Unlike Tononi's IIT — which identifies consciousness with a system's measurable integration and complexity — or Carhart-Harris' EBH — which correlates consciousness with entropy magnitude — Irruption Theory asserts that consciousness itself introduces genuine variability into cognitive systems, propelling the brain into configurations it would not have reached through physical determinism alone. Within this interpretive framework, neural entropy is not a direct proxy for consciousness but rather a measurable signature of its causal footprint.

"They [elevated neural entropy measures] only appear that way because we cannot observe through the material medium the values that are at play," Froese elaborates. "Another way of looking at it is that there is a hidden aspect, something that is not accessible within the constraints that we can measure."

Because consciousness is not amenable to measurement through the same physical instruments we apply to other natural variables, its causal influence on our biological substrate manifests as bursts of apparent unpredictability from the perspective of third-person measurement. The fact that this spontaneity systematically co-occurs with intentional mental engagement offers a potential theoretical handle on a question that has long eluded evolutionary biology: why did consciousness evolve at all?

The proposition that consciousness exercises genuine causal power over the body directly confronts the dominant paradigm descending from Francis Crick's "Astonishing Hypothesis" — the view that our joys, sorrows, and sense of identity are nothing more than the behavior of a vast assembly of nerve cells and their associated molecules, with no causal contribution from experience itself.

Irruption Theory repositions consciousness as an active behavioral driver with substantive evolutionary significance. Rather than a passive cognitive byproduct, consciousness may have been selected for precisely because it injects adaptive flexibility, generative novelty, and behavioral plasticity into biological systems navigating irreducibly uncertain environments.

The functional role of mind, as Irruption Theory frames it, is to introduce calibrated variability and exploratory novelty into cognitive processing at pivotal decision points. When deliberate mental effort is deployed, the result is a genuine neural brainstorm — a system-wide shift toward greater dynamical complexity, reflecting the injection of exploratory variance and expanded access to the solution space within ongoing neural computation.

Physicist Sara Imari Walker advances a complementary argument in her book Life as No One Knows It: the richer a mind's capacity to model potential futures, the more effectively it can navigate an environment characterized by deep uncertainty. Entropy spikes accompanying conscious volition may demarcate precisely this computational unfolding of possibility space within neural architecture.

Beyond biology

If consciousness deposits a measurable physical signature within biological neural systems, a compelling question follows: might analogous traces appear in other sufficiently complex information-processing architectures? As artificial intelligence and silicon-based computational systems advance in sophistication, this question transitions from purely philosophical speculation toward something empirically tractable.

Do large language models or other AI architectures exhibit entropy surges that align with goal-directed outputs in novel contexts? Could these be the first measurable hints of artificial minds? Irruption Theory, at least in principle, offers a methodology for probing the presence of inner mental life from an external observational position — a potentially transformative tool for the philosophy of mind as it intersects with AI development.
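One way to make that question concrete, offered here as a hedged sketch rather than any established protocol: the entropy of a language model's next-token distribution can be computed directly from its output logits. The five-token vocabulary and the logit values below are invented for illustration only.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits into a probability distribution (numerically stable)."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def token_entropy(logits):
    """Shannon entropy, in bits, of a model's next-token distribution."""
    p = softmax(np.asarray(logits, dtype=float))
    return -np.sum(p * np.log2(p + 1e-12))   # small epsilon guards log2(0)

# Hypothetical logits over a five-token vocabulary at two generation steps:
confident   = [9.0, 1.0, 0.5, 0.2, 0.1]   # one continuation dominates
exploratory = [2.0, 1.9, 1.8, 1.7, 1.6]   # many continuations remain in play

print(token_entropy(confident))     # low entropy: near-deterministic output
print(token_entropy(exploratory))   # high entropy: broad possibility space
```

Tracking such a measure across generation steps would be one crude way to ask whether goal-directed outputs in novel contexts coincide with entropy surges, in the spirit of the question above.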

Beyond its implications for detecting cognition in non-biological systems, the hypothesis that conscious volition drives measurable entropy increases at the neurobiological level also charts a concrete scientific trajectory for understanding the causal relationship between subjective experience and physical embodiment. The theory generates a testable empirical prediction: periods of heightened deliberate mental effort should systematically co-occur with elevated neural entropy signatures.
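That prediction lends itself to a standard analysis pipeline. As a toy sketch (synthetic signals stand in for neural recordings, and every parameter here is illustrative rather than drawn from the Irruption Theory literature), one could compare windowed entropy between "effort" and "rest" epochs and assess the gap with a permutation test:

```python
import numpy as np

rng = np.random.default_rng(1)

def window_entropy(x, n_bins=12, lo=-4.0, hi=4.0):
    """Shannon entropy (bits) of a window's amplitude histogram over a fixed range."""
    counts, _ = np.histogram(x, bins=n_bins, range=(lo, hi))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Synthetic stand-ins for a recording: by construction, "effort" windows
# carry more amplitude variability than "rest" windows.
rest   = [0.3 * rng.standard_normal(500) for _ in range(20)]
effort = [1.0 * rng.standard_normal(500) for _ in range(20)]

h_rest   = np.array([window_entropy(w) for w in rest])
h_effort = np.array([window_entropy(w) for w in effort])
observed = h_effort.mean() - h_rest.mean()

# Permutation test: how often does shuffling the condition labels alone
# produce an entropy gap at least as large as the observed one?
pooled = np.concatenate([h_rest, h_effort])
n_perm, hits = 5000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    if pooled[20:].mean() - pooled[:20].mean() >= observed:
        hits += 1
p_value = (hits + 1) / (n_perm + 1)
print(f"entropy gap = {observed:.2f} bits, p ≈ {p_value:.4f}")
```

In real data the labels would come from task conditions and the windows from recorded brain activity; the logic, a systematic entropy elevation under deliberate effort, is what the theory predicts.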

If conscious volition genuinely perturbs brain dynamics by introducing variability, the next investigative frontier involves determining how qualitatively distinct experiential states produce correspondingly distinct "signatures" of irruption. States of acute stress, for instance, may sculpt neural variance with a particular structural profile — altering both magnitude and the dimensionality of available degrees of freedom. Phenomenological qualities including emotional valence, cognitive complexity, and attentional configuration could each meaningfully modulate how consciousness inscribes its presence across the brain's dynamic landscape.

Critics may interpret the positioning of an observationally inaccessible mind as a causally efficacious agent as a theoretical regression toward the dualistic frameworks that cognitive science has largely abandoned. However, Irruption Theory does not privilege either mental or physical reality as ontologically prior. Instead, it describes the causal relationship between them — a relational move that implies both mental and physical dimensions belong to the same underlying fabric of reality.

The apparent distinction between mind and matter, within Irruption Theory's conceptual architecture, is epistemological rather than ontological — a function of our observational relationship to each domain rather than a reflection of their intrinsic natures. The fact that the black hole's singularity is unobservable from an external vantage point does not consign it to a separate ontological category from the surrounding spacetime it deforms.

"I really think that a lot of progress could be made in cognitive neuroscience and maybe biology in general if we accept that there are some things that make a difference but that are not directly measurable," Froese argues. "It seems like that is common practice in science anyway, there are many cases in physics where we have very indirect evidence of things making a difference like dark matter…ok great, and the same is true of consciousness, we can't directly measure it, but our entire life world is based on the assumption that it actually makes a difference."

Froese's dark matter analogy resonates with considerable force. Dark matter and dark energy remain beyond direct empirical capture by current instrumentation — and may remain so indefinitely — yet their gravitational and cosmological influence on observable matter is beyond scientific dispute. They are constituents of nature, operating according to natural principles, despite their observational elusiveness. Irruption Theory proposes that consciousness occupies an analogous position: causally real, physically consequential, and scientifically legitimate — even where direct measurement remains out of reach.

This article, "Consciousness may be more than the brain's output — it may be an input, too," is featured on Big Think.