AI is changing everything about how we work. It’s becoming a constant companion for writers, developers, marketers, and anyone who produces creative output. It’s like having another person to bounce ideas off, help edit, proofread, and co-create.
While we’re just coming to grips with AI reshaping productivity, new developments in how we build and power these systems are raising even deeper ethical and existential questions.
We know AI is energy-intensive. Training GPT-4 reportedly consumed as much electricity as roughly 3,600 U.S. homes use in a year. This kind of computing carries massive environmental costs and requires ever-growing data centers that draw as much power as small countries.
By contrast, the human brain runs on about 20 watts, roughly the power of a dim light bulb. And yet it performs computations that modern AI still struggles with. By some estimates, our neurons process information a million times more efficiently than AI hardware.
Human neurons are simply more efficient than silicon—no contest.
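To put that gap in rough numbers, here's a back-of-the-envelope sketch. The per-home figure (~10,700 kWh per year, an assumed U.S. average) and the constant 20-watt brain are assumptions for illustration, not precise measurements:

```python
# Back-of-the-envelope: GPT-4 training energy vs. the human brain.
# ASSUMPTIONS: ~10,700 kWh/year per average U.S. home, and a brain
# drawing a steady 20 W around the clock.

HOMES = 3_600                    # homes' worth of electricity used for training
KWH_PER_HOME_PER_YEAR = 10_700   # assumed average U.S. household usage

training_kwh = HOMES * KWH_PER_HOME_PER_YEAR          # ~38.5 million kWh

BRAIN_WATTS = 20
HOURS_PER_YEAR = 24 * 365
brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1_000   # ~175 kWh

ratio = training_kwh / brain_kwh_per_year
print(f"Training: ~{training_kwh:,.0f} kWh")
print(f"One brain: ~{brain_kwh_per_year:.0f} kWh/year")
print(f"Roughly {ratio:,.0f} brain-years of energy")
```

Even with generous rounding, a single training run works out to hundreds of thousands of brain-years of energy, which is the gap bioprocessing aims to close.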
Two companies are already exploring this: FinalSpark in Switzerland and Cortical Labs in Australia. They're developing "bioprocessing": using lab-grown human neurons to power AI-like systems. At first, this sounds like something out of science fiction, but as I dug deeper into their research, it started to feel inevitable.
We’ve spent decades trying to teach machines to think like us. But instead of relying on inefficient silicon chips, what if we simply used the same biological structures that make human intelligence so powerful?
This concept isn't just theoretical. Cortical Labs has already shown that human brain cells in a lab dish can learn to play Pong. If that's possible, how far away are we from bioprocessors that rival human intelligence?
As this technology advances, it forces us to confront some of the most profound ethical questions we've ever faced.
Right now, the debate on AI ethics focuses on bias, job displacement, and misinformation. Bioprocessing is going to push us into even deeper moral and existential territory—because it blurs the line between artificial and human intelligence.
Consciousness & Identity: Is This a New Form of Life?
If we integrate human neurons into computing systems, are we creating something that is alive? Could bioprocessors, over time, develop something akin to consciousness? And if they do, where does that leave us?
Human Brain Labor: Is It Ethical?
If we’re using human neurons to power AI, are we commodifying brainpower? Could this lead to a future where actual human neurons are bought, sold, and scaled just like CPUs? At what point does this start to resemble some form of brain slavery?
Political Control: Can It Be Regulated?
Could we - or should we - just ban this technology? If some countries refuse to regulate it and their bioprocessing AI eventually gains a massive intelligence advantage, where does that leave the rest of the world? If it’s decided that it’s unethical, how could it even be enforced internationally?
Religion & Philosophy: Are We Playing God?
If we’re creating intelligence from human neurons, are we crossing into a space once reserved for nature - or something greater? Many religious and ethical traditions see human consciousness as sacred. If we replicate pieces of it in a lab, how does that challenge long-held beliefs about the soul?
The End of Work: Do Humans Become Obsolete?
If bioprocessors outperform human intelligence, how long can humans keep up? Do we still need CEOs, doctors, or even governments? Why would we - if we could get superior intelligence without the need for sleep, breaks, or human limitations?
Does this mean the end of human-led decision-making? Are we building a future where all cognitive human labor becomes unnecessary? And at that point - who owns the technology, and who benefits?
The Rights of Bioprocessors: A New Class of Beings?
If a bioprocessor is capable of learning, reasoning, and problem-solving, does it deserve rights? At its core, it is built from cells carrying human DNA.
If it becomes sentient, is shutting it off equivalent to killing a conscious being? Would we be forced to legally recognize a new kind of intelligence? How would we legally determine when something becomes sentient? Having human cells involved makes it much more complicated.
The Future of Humanity: Are We Merging With Machines?
Between Neuralink tapping into our brains and these companies putting our brain cells into computers, some convergence seems inevitable.
The ultimate question: At what point does intelligence stop being purely biological and become something hybrid? As we merge human neurons with machines, where do we draw the line of what it means to be human?
Have we already begun the transition into something else? Is this the next step in human evolution - or the end of our current version of humanity?
I’ve spent weeks thinking about this. And while the possibilities are both exciting and terrifying, one thought gives me comfort: If AI is destined to change the world and replace human jobs, at least it might still be somewhat human.
If we're going to leave life-and-death decisions to AI models, I'd rather they be powered by something closer to human neurons than cold, unfeeling silicon. Maybe this is what AI was always meant to be—not artificial, but an extension of us.
In the effort to create AI that thinks like us, maybe we're just reengineering ourselves, and in the process, evolving what it means to be human.
The implications of bioprocessing are enormous. The questions it raises are endless. One thing is certain: This technology is coming. Whether we like it or not, we’re about to find out what happens when artificial intelligence is no longer just artificial.
Yeti designs and develops innovative digital products. If you have a project you'd like to get started on, we'd love to chat! Tell us a bit about what you're working on and we'll get back to you immediately!