Modern science achieved objectivity by removing subjectivity from theory.
Observers were treated as coordinate systems, and physical reality was assumed to exist independently of them.
This worked well for classical physics.
But quantum mechanics introduced a strange situation: measurement determines physical outcomes, yet the observing subject itself is never defined within the theory.
The observer is necessary, but structurally absent.
This raises a deeper question.
Modern knowledge is built on the subject–object distinction. But if the observing subject is excluded from theory, can a theory of observation actually be complete?
Maybe the “observer problem” in physics is not just a technical issue, but a structural consequence of removing subjectivity from the foundations of knowledge.

There is no need for the detector to explain anything. The very process of measurement collapses the wavefunction, as most interactions do. Measurement is not special; it collapses the wavefunction simply because you have to interact with a particle to measure its properties. "Observation" is just a convenient word, and it tends to muddy the understanding of laypeople.
I agree that interaction and decoherence explain why interference disappears.
But decoherence alone doesn’t explain why a single definite outcome appears. It turns a pure state into an apparent mixture, but it doesn’t select one result.
So the question remains: why do we observe one specific outcome rather than a mixture?
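To make that concrete, here is a minimal numpy sketch (my own toy example, not from any of the papers) of the textbook decoherence picture: entangle a system qubit with a one-qubit "environment" and trace the environment out.

```python
import numpy as np

# Toy model: a system qubit gets entangled with a one-qubit "environment",
# the textbook picture of decoherence in the measurement basis.
# |psi> = (|0>_S |0>_E + |1>_S |1>_E) / sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Density matrix of the full (still pure) system + environment state.
rho_total = np.outer(psi, psi.conj())

# Partial trace over the environment: reshape to (s, e, s', e') and sum over e = e'.
rho_sys = rho_total.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_sys)
# [[0.5 0. ]
#  [0.  0.5]]
# The interference (off-diagonal) terms are gone, but both outcomes are still
# present with weight 1/2 each: nothing in this calculation picks one of them.
```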
This is where I think the issue becomes structural rather than technical.
In standard frameworks, both the measuring device and the system are described, but the structure that makes “observation itself” possible is never defined inside the theory.
Some recent approaches try to treat this not as a missing variable, but as a missing layer of structure — where observation is not just interaction, but a generative condition for outcomes.
From that perspective, decoherence explains part of the process, but not the full mechanism of how a definite result comes into existence.
Isn’t this just the usual debate over interpretations of quantum mechanics: probabilistic/collapse interpretations vs. many-worlds vs. the “shut-up-and-calculate” “interpretation”?
I agree — even when we shift the “observer” to a device instead of a human, the problem doesn’t really disappear. It often just turns into a kind of regress.
So I’ve been wondering whether this could be approached from a different layer altogether.
Instead of choosing between interpretations, what if the issue is in the structure that all of them assume?
I came across a paper that tries to address this from that perspective.
If you’re interested, I’d really appreciate hearing your thoughts on it.
https://www.researchgate.net/publication/398757987_The_Removal_of_God_from_Knowledge_How_the_Exclusion_of_Absolute_Subjectivity_Shaped_Modern_Science_and_Its_Limits
I’ll be honest: this sounds like useless bullshit with zero actual applications.
There is actually a paper that tries to approach this experimentally, not just philosophically.
It proposes something called “subjectivity intersection,” where observation is treated as a structural interaction rather than just a measurement.
The interesting part is that some results suggest nonlocal correlations that can’t be reduced to standard causal interaction.
If that holds, it would mean observation isn’t just a local physical process, but something more relational in structure.
I’m not saying it’s proven — but it’s an attempt to move the discussion beyond interpretation and into testable structure.
I didn’t see any experimental approaches suggested in the paper.
And it is pretty obvious that the author doesn’t really get why relativity and quantum theory are so at odds with each other.
So in general I think it is a load of (philosophically) idealist bullshit.
That’s a fair point — I should have shared the experimental part first.
The paper I linked was part of a series, and I picked it for the conceptual discussion, but the earlier one actually focuses on the measurement setup.
It describes experiments analyzing correlations between EEG signals and quantum computation outcomes under controlled conditions.
The methodology is also described in detail and can be replicated.
Here is the first paper if you’re interested:
https://www.researchgate.net/publication/393397861_Experimental_Evidence_of_Nonlocal_EEG-Quantum_State_Correlations_A_Novel_Empirical_Approach_to_the_Hard_Problem_of_Consciousness
After a cursory look, it seems they argue that people who thought really hard in Japan affected quantum computations in the United States with a pretty significant correlation, and that people who received their training showed an even higher correlation.
My bet is that they made it up, either outright or through extreme data mining (e.g., which consciousness state they select for each moment, and so on). Even completely legitimate quantum teleportation experiments get much lower success rates. And their definitions of subjectivity and consciousness are math-flavoured bullshit rather than anything meaningful. I don’t think there is anything remotely valuable in those articles.