
Does intelligence come from motion? Image: Gerd Altmann, on Pixabay.
By Mariana Meneses
A new wave of scientific breakthroughs is reshaping how researchers probe complexity. In late 2025, engineers demonstrated a paper-thin brain implant capable of streaming neural activity wirelessly in real time, while neurobiologists showed that the brain can learn to interpret artificial light patterns as meaningful sensations.
Meanwhile, artificial intelligence uncovered laws governing complex systems, and physicists revealed that even ordinary foam reorganizes itself according to the same mathematical logic used to train AI.
In a peer-reviewed scientific paper published in December 2025 in the journal Nature Electronics, titled “A wireless subdural-contained brain–computer interface with 65,536 electrodes and 1,024 channels”, researchers led by Taesung Jung, from the Department of Electrical Engineering at Columbia University, present an ultra-thin brain–computer interface designed to create a high-bandwidth, wireless connection between the human brain’s outer cortex layer and external computers. The device, known as BISC (Biological Interface System to Cortex), is built around a single silicon chip as thin as a human hair, capable of recording and stimulating neural activity at unprecedented scale while remaining minimally invasive.
Science Daily explains that, unlike conventional medical-grade BCIs that rely on bulky implanted canisters and wired connections, BISC integrates all essential components – signal amplification, data conversion, wireless power, radio transmission, and digital control – onto one flexible chip. Thinned to just 50 micrometers and occupying less than 1/1000th the volume of standard implants, the device can slide into the space between the skull and the brain without penetrating brain tissue. This design reduces surgical invasiveness and minimizes long-term tissue reaction while preserving signal quality.
The New Generation of Brain-Computer Interface | Columbia Engineering
Technically, BISC represents a major leap in neural data capacity.
The chip contains 65,536 electrodes, supporting 1,024 simultaneous recording channels and 16,384 stimulation channels. Through an external wearable relay station, it communicates wirelessly at data rates up to 100 Mbps, roughly two orders of magnitude higher than existing wireless BCIs. This bandwidth enables real-time processing of rich neural signals using advanced machine-learning and deep-learning models, allowing researchers to decode movement, perception, and intent with far greater fidelity than before.
The project brings together engineering, neuroscience, and clinical expertise from NewYork-Presbyterian Hospital, Stanford University, and the University of Pennsylvania. Initial clinical applications are focused on drug-resistant epilepsy, with longer-term potential to support recovery of motor, speech, and visual function after neurological injury.
In sum, BISC demonstrates how advances in semiconductor fabrication, wireless communication, and AI-driven signal decoding are converging to transform brain–computer interfaces from bulky experimental systems into scalable clinical technologies. Now moving toward commercialization through the startup Kampto Neurotech, the technology may point toward a future in which high-resolution, bidirectional communication between brains and machines becomes not only feasible, but medically practical, raising profound possibilities for treatment, human–AI interaction, and the governance of neural data.

What is your opinion about BCIs? Image: Gerd Altmann, on Pixabay.
While BISC focuses on reading and stimulating neural activity at unprecedented scale, other researchers are exploring a complementary challenge: how to deliver entirely new kinds of information to the brain. In December 2025, an unrelated study demonstrated that the cortex can adapt to receiving artificial signals that do not correspond to any natural sense, opening a new frontier in how perception itself can be engineered.
In a peer-reviewed scientific paper published in the journal Nature Neuroscience, titled “Patterned wireless transcranial optogenetics generates artificial perception”, researchers led by Mingzheng Wu, from the Department of Neurobiology at Northwestern University, describe a soft, fully implantable device that sends patterned light signals directly to the brain. Rather than stimulating traditional sensory pathways such as touch, vision, or hearing, the system delivers artificial signals straight to cortical neurons, demonstrating that the brain can learn to interpret entirely new forms of input as meaningful information.
As Science Daily explains, the device sits beneath the scalp and rests on the skull, where it emits precisely controlled patterns of red light that penetrate the bone and activate genetically light-sensitive neurons in the cortex. Using arrays of up to 64 independently programmable micro-LEDs, each roughly the width of a human hair, the implant can generate complex spatial and temporal stimulation patterns. This distributed approach mirrors how real sensory experiences engage broad neural networks, rather than isolated points in the brain.
In behavioural experiments, mice were trained to associate specific light patterns with rewards. Even in the absence of sound, sight, or touch, the mice learned to recognize these artificial signals and used them to guide decisions and actions. Their performance showed that the patterned stimulation was not merely activating neurons but creating perceptual cues the brain could reliably decode and act upon.
Technologically, the work builds on earlier breakthroughs in optogenetics, which traditionally relied on fiber-optic cables tethered to external light sources. By contrast, this system is wireless, battery-free, and fully implantable, allowing animals to move and behave naturally.
The study shows that the brain can learn to perceive and use synthetic, light-based signals delivered directly to the cortex, effectively expanding the repertoire of possible sensory experiences. Beyond its implications for basic neuroscience by probing how perception is constructed, the technology points toward future clinical applications, from advanced prosthetics and rehabilitation after stroke to new, non-pharmacological approaches for modulating pain or restoring lost sensory function.

What is complexity? Image: Gerd Altmann, on Pixabay.
Together, these neuro-technologies dramatically increase our ability to record from and intervene in complex neural systems. But they also raise a parallel challenge: how to make sense of the vast, high-dimensional data that such systems generate. In late 2025, a separate line of research addressed this problem directly, introducing an artificial intelligence framework designed not to control complex systems, but to uncover the simple mathematical rules that govern their behavior over time.
In a scientific paper published in October 2025 in the journal PNAS, entitled “Automated global analysis of experimental dynamics through low-dimensional linear embeddings”, researchers led by Samuel A. Moore, from the Department of Mechanical Engineering and Materials Science at Duke University, introduce an artificial intelligence framework designed to uncover simple, interpretable laws hidden within highly complex systems. The central goal of the work is not merely prediction but understanding: extracting compact mathematical descriptions that capture how real-world systems evolve over time, even when those systems involve thousands of interacting variables.
As Science Daily explains, the method draws inspiration from classical dynamical systems theory, particularly the idea that complex, nonlinear behavior can sometimes be represented using linear models with the right mathematical coordinates. In practice, however, finding such representations has been difficult because it often requires tracking enormous numbers of variables simultaneously. The team at Duke addresses this challenge by combining deep learning with physics-informed constraints, allowing the AI to sift through large time-series datasets and identify a much smaller set of hidden variables that govern the system’s behavior.
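The core idea, that a nonlinear system can look exactly linear once you measure it in the right coordinates, can be illustrated with a standard textbook example. The sketch below is purely illustrative and is not the Duke team's code: the two-variable system and the lifted coordinate x² are a classic demonstration, and the linear map is fitted with ordinary least squares rather than a neural network.

```python
import numpy as np

# A simple nonlinear system (a textbook example, not the paper's model):
#   x_{n+1} = a * x_n
#   y_{n+1} = b * y_n + c * x_n**2
a, b, c = 0.9, 0.5, 0.3

# Simulate a trajectory of the nonlinear system
T = 100
traj = np.zeros((2, T))
traj[:, 0] = [1.0, -0.5]
for n in range(T - 1):
    x, y = traj[:, n]
    traj[:, n + 1] = [a * x, b * y + c * x**2]

# Lift to three coordinates (x, y, x^2). In this space the dynamics are
# exactly linear, because (x^2)_{n+1} = a**2 * (x^2)_n.
Z = np.vstack([traj, traj[0] ** 2])          # shape (3, T)

# Fit a one-step linear map A by least squares: Z[:, 1:] ≈ A @ Z[:, :-1]
A = np.linalg.lstsq(Z[:, :-1].T, Z[:, 1:].T, rcond=None)[0].T

# The fitted linear model reproduces the nonlinear dynamics almost exactly
err = np.abs(A @ Z[:, :-1] - Z[:, 1:]).max()
print(f"max one-step prediction error: {err:.1e}")
```

Here the "right" extra coordinate (x²) is known in advance; the contribution of frameworks like the Duke one is to discover such coordinates automatically from data, using deep learning instead of hand-picked functions.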
Applied across a wide range of domains, including mechanical systems like pendulums, electrical circuits, climate models, and neural activity, the framework consistently reduced complex dynamics to compact linear models without sacrificing accuracy. In many cases, the resulting descriptions were more compressed, by over an order of magnitude, than those produced by previous machine-learning approaches, while still preserving long-term predictive power. Crucially, these reduced models remain readable and mathematically transparent, making them usable within existing scientific theories rather than functioning as opaque black boxes.
Beyond simplification, the approach also reveals key structural features of dynamic systems, such as attractors, which are stable states toward which systems naturally evolve. Identifying these landmarks allows researchers to distinguish between normal operation, gradual drift, and looming instability. As Science Daily reports, locating such structures is akin to mapping the geography of an unfamiliar landscape: once the stable regions are known, the system’s broader behavior becomes far easier to interpret.
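The notion of an attractor can be made concrete with a tiny, hypothetical one-dimensional system (the dynamics below are invented for illustration): no matter where a trajectory starts, repeated application of the update rule pulls it toward the same stable state.

```python
# A toy 1-D system x_{n+1} = f(x_n). Its attractor is the stable state
# that trajectories approach regardless of where they start.
def f(x):
    return 0.5 * x + 1.0   # illustrative dynamics; fixed point at x* = 2

finals = []
for x0 in [-10.0, 0.0, 7.5]:   # three very different starting points
    x = x0
    for _ in range(100):
        x = f(x)              # iterate the dynamics
    finals.append(x)

# All trajectories settle on the same attractor, x* = 2
print(finals)
```

Locating such fixed points in real, high-dimensional data is far harder than in this toy case, which is why mapping them automatically, as the Duke framework does, is useful for telling stable operation apart from drift toward instability.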
By translating massive, high-dimensional data into low-dimensional, interpretable equations, the framework extends the reach of classical dynamical analysis to systems that were previously too complex to formalize. The work points toward a future in which AI-assisted scientists help uncover the simple rules underlying apparent chaos.

What forms of foam do you find in your daily life? Image: Pixabay.
What stands out about these AI tools is that they are doing more than making better predictions. They revive a much older scientific goal: finding ways to reason about systems that change in complicated, hard-to-track ways. That same way of thinking now seems to be reshaping questions far beyond AI and neuroscience. In late 2025, researchers studying a deceptively ordinary material, foam, used similar mathematical intuition to show that what looks stable at the human scale can still be dynamically restless underneath, in a way that echoes the optimization dynamics behind modern deep learning.
In a scientific paper published in November 2025 in the journal PNAS, entitled “Slow relaxation and landscape-driven dynamics in viscous ripening foams”, researchers led by Amruthesh Thirumalaiswamy, from the Department of Chemical and Biomolecular Engineering at the University of Pennsylvania, revisit a long-standing assumption in soft-matter physics: that foams behave like glass, with their microscopic components eventually freezing into stable configurations. Using advanced simulations, the team shows that this picture is incomplete. Even when a foam appears static at the human scale, its bubbles are in constant motion at the microscopic level.
Foams, which are found in everyday materials such as soap suds, shaving cream, and food emulsions, are two-phase systems composed of bubbles embedded in a liquid or solid matrix. Traditionally, physicists modeled bubble motion as a simple descent into lower-energy states, much like rocks rolling downhill into a valley and coming to rest. This framework helped explain why foams maintain their shape over time, but it failed to account for puzzling experimental observations that had accumulated for nearly two decades.
Science Daily explains that by closely tracking bubble rearrangements in viscous, “wet” foams, the researchers discovered that bubbles never truly settle. Instead of locking into a single minimum-energy configuration, the bubbles wander continuously through a broad range of nearly equivalent states. Mathematically, this behavior mirrors how modern artificial intelligence systems learn: not by converging on a single solution, but by remaining in relatively flat regions where many configurations perform similarly well.
Contemporary AI systems are trained using optimization techniques related to gradient descent, which iteratively reduce error rather than forcing models into their lowest possible state. As Science Daily explains, pushing a system too far into a narrow optimum can often make it highly efficient at one very specific task or condition but fragile in changing circumstances, when it becomes more prone to sudden failure rather than gradual adaptation. Generalization, which provides the ability to perform well in new situations, emerges precisely because learning unfolds within broad, flexible regions of the landscape. The foam, it turns out, has been following this logic.
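The trade-off between narrow and broad optima can be sketched in one dimension. The example below is a hypothetical illustration, not the paper's foam simulation: two loss landscapes have equally deep minima, but when "circumstances change" (the optimum shifts slightly), the solution found in the narrow valley degrades far more than the one in the broad, flat valley.

```python
# Two loss landscapes with minima of equal depth but different curvature:
sharp = lambda w: 50.0 * (w - 2.0) ** 2   # narrow, steep valley
flat  = lambda w: 0.5  * (w - 2.0) ** 2   # broad, flat valley

def gradient_descent(loss, w, lr=0.01, steps=2000):
    for _ in range(steps):
        # central-difference numerical gradient
        grad = (loss(w + 1e-6) - loss(w - 1e-6)) / 2e-6
        w -= lr * grad
    return w

w_sharp = gradient_descent(sharp, 0.0)
w_flat  = gradient_descent(flat, 0.0)

# Simulate changed conditions: the true optimum shifts by delta.
delta = 0.3
print(f"loss after shift, sharp minimum: {sharp(w_sharp + delta):.3f}")
print(f"loss after shift, flat minimum:  {flat(w_flat + delta):.3f}")
```

Both runs converge to essentially the same point, w ≈ 2, yet the shifted loss in the sharp valley is roughly a hundred times larger: high curvature means small perturbations are punished severely, which is the one-dimensional analogue of the fragility described above.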
In sum, the study reveals that learning-like dynamics are not exclusive to brains or machines but can arise naturally in physical matter. By showing that the same mathematics underlie both deep learning algorithms and the restless bubbles in foams, the work invites a rethinking of how adaptation and stability coexist across materials, living cells, and computational systems. What began as a puzzle in foam physics now points toward a more general principle: that continuous reorganization within constraints may be a fundamental feature of complex systems everywhere.

How much do humans understand from the complexity in everyday life? Image: Gerd Altmann, on Pixabay.
Taken together, these four studies point to a subtle but consequential shift in how researchers across fields are approaching complexity. Instead of trying to fully enumerate every component of a system, whether neurons, parameters, or particles, they focus on how systems evolve over time, where they remain flexible, and which patterns persist across many possible configurations. From neural implants that read and write activity at scale, to AI systems that compress dynamics into interpretable models, to materials that remain stable precisely because they never fully settle, the common thread is not raw computational power but a renewed emphasis on structure, constraints, and motion.
In brains, machines, and even ordinary matter, intelligence increasingly appears less like a stored property and more like a dynamic process, maintained through continual movement, shaped by constraints, and made understandable through carefully chosen reductions. The significance of this shift is not that foam “thinks,” or that machines “understand,” but that the same mathematical ideas are helping scientists to reason across domains once treated as fundamentally separate.
As tools for recording, intervening, and modelling complex systems continue to improve, the deeper challenge may be learning how to recognize when different systems are, in fact, solving variations of the same problem.
Want to learn more? Here are some TQR articles we think you’ll enjoy:
- Thinking in the Age of Machines: Global IQ Decline and the Rise of AI-Assisted Thinking
- Can Science Break Free from Paywalls? Technologies for Open Science Are Transforming Academic Publishing
- COP-30 in Belém: What Emerging Technologies Can and Can’t Deliver for Planetary Health
- The Science of the Paranormal: Could New Technologies Help Resolve Some of the Oldest Questions in Parapsychology?
- Digital Sovereignty: Cutting Dependence on Dominant Tech Companies
Have we made any errors? Please contact us at info@thequantumrecord.com so we can learn more and correct any unintended publication errors. Additionally, if you are an expert on this subject and would like to contribute to future content, please contact us. Our goal is to engage an ever-growing community of researchers to communicate—and reflect—on scientific and technological developments worldwide in plain language.

