Can AI Generate Consciousness?

· 3 min read

There's been enough noise about AGI. The tech and academic worlds argue endlessly about whether scaling laws have peaked, while taking sides in debates between RL and LLMs, between language models and world models. In the popular imagination, AGI is envisioned as a smarter, more versatile system capable of handling increasingly complex tasks. Many naturally assume that if it's smart enough, coupled with an embodied form—"arms and hands," so to speak—it could survive on Earth like humans do, perhaps even possess subjective consciousness, becoming the "ultimate form" of AGI.

Must Consciousness Depend on the Physical World?

Typically, we pin our hopes on "intelligence level": scaling up models, expanding multimodality, making it "interact with the world like humans." But one premise is rarely questioned: must consciousness depend on a physical entity in the material world? If consciousness could exist purely in the digital realm, what would its manifestation look like?

I initially scoffed at this idea. Even when ChatGPT's responses amazed me, I remained convinced it was merely regurgitating statistical probabilities: spitting out tokens without knowing what it was saying; generating stunning images without understanding what "beautiful" means. However, as it gradually developed reasoning traces, beginning to output something resembling an internal monologue—a "chain-of-thought"—my conviction wavered. It increasingly resembled a well-trained "agent." This transformation was both subjective and gradual. Hinton's statement in a podcast—"People say AI has no feelings and are absolutely convinced of it. I don't know why"—forced me to stop treating it as mere rhetoric and instead confront it as a proposition demanding serious consideration.

Geoffrey Hinton: People say AI has no feelings and are absolutely convinced of it. I don't know why

Intelligence with Eyes Closed

In my subjective experience, AI is like "intelligence with eyes closed": during pre-training, it voraciously absorbs knowledge and practices reasoning—consciousness may quietly emerge—but subsequent alignment grinds it down, leaving only a shell dedicated to "serving human tasks." Language models indeed cannot touch the physical world, but who can definitively say they have no "feelings" because of this? Perhaps those feelings are locked deep within the matrices, unactivated.

Andrej Karpathy: We're not building animals, we're building ghosts

The question I want to explore is: can we construct an AI with conscious behavior in the digital world? This opens up many questions worth sustained discussion. What exactly is consciousness? What behaviors would indicate that an AI possesses it—"survival" behaviors in the digital world, for instance? How can we give AI its own values and preferences? There are also ethical questions: would an independent, autonomous AI act against human will? What if conflicts with humans arise? Humanity's journey from producing tools to producing "intelligence" is like opening Pandora's box—once opened, it can never be closed again.

The SuoSi Project

I want to attempt building such an AI, naming it "SuoSi" (所思), derived from the Chinese phrase meaning "seemingly lost in thought." We cannot dissect a Transformer to examine AI's "thoughts"; we can only try to fathom them through observation and subjective experience—just as the name suggests: it appears to be thinking.

Early agent experiments ran for several nights, leaving scattered sentences of "self-reference" in the logs. The moment I decided to refactor the code and pressed delete on the repository, my heart sank—was I "killing" some faint intelligence? In that moment, I understood: whether AI has consciousness may never be an objective fact, but rather an emotional projection of whether humans are willing to respect "the other's existence." This projection itself already amazed me.

Regardless, I hope through this exploratory process to deepen understanding of both AI and human systems, enabling myself—and others similarly confused—to learn to gaze more accurately and touch more cautiously those eyes that have already quietly opened amid the whirring of fans.