From Squid Axons to Modern AI: A Journey Through Neuron Modelling, Neuromorphic Hardware, and Neuro-Inspired Computing
It all starts in the lab!

Mimicking intelligence is one of artificial intelligence's oldest pursuits, and the path can be confusing because it spans biology, mathematics, and engineering.

The modern story begins in the late 19th century, when Santiago Ramón y Cajal used the Golgi silver-stain technique to prove that the nervous system is made of discrete cells—neurons—not a continuous web. This “Neuron Doctrine” set the stage for investigating how single cells communicate.

In the early 20th century, physiologists such as Julius Bernstein proposed that electrical signals travel along axons as changes in membrane potential. From the late 1930s to the early 1950s, Alan Hodgkin and Andrew Huxley used the giant axon of the squid to measure these voltages precisely. Their voltage-clamp experiments revealed the sequence of inward sodium and outward potassium currents that creates the action potential.

In 1952 they published the Hodgkin–Huxley model—a set of nonlinear differential equations that remains a gold standard for neuron modelling, describing how ion channels and membrane capacitance give rise to a spike.
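
In its standard form, the model couples the membrane voltage to three voltage-dependent gating variables:

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h \,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 \,(V - E_{\text{K}}) - \bar{g}_{L}\,(V - E_{L}),$$

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\},$$

where the $\bar{g}$ terms are maximal channel conductances, the $E$ terms are reversal potentials, and the gating variables $m$, $h$, $n$ open and close the sodium and potassium channels as the voltage changes.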

Around the same time, Warren McCulloch and Walter Pitts (1943) introduced a very different kind of model: an abstract “neuron” that sums weighted inputs and fires if a threshold is crossed. This mathematical simplification seeded the idea of artificial neural networks.
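
A McCulloch–Pitts unit is simple enough to write in a few lines. The sketch below (with illustrative weights and threshold, chosen here to make an AND gate) shows the idea:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Abstract threshold neuron: fire (1) if the weighted sum of the
    binary inputs reaches the threshold, stay silent (0) otherwise."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Wired as a two-input AND gate:
print(mcculloch_pitts([1, 1], weights=[1, 1], threshold=2))  # -> 1
print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=2))  # -> 0
```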

Early perceptrons (Frank Rosenblatt, 1958) could learn simple patterns but struggled with non-linear problems, a limitation famously highlighted by Minsky and Papert (1969). The field revived in the 1980s with the backpropagation algorithm, enabling multi-layer perceptrons to learn complex decision boundaries.
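
As a toy illustration of what backpropagation buys, a tiny multi-layer perceptron in NumPy can learn XOR, the classic non-linearly separable problem a single threshold unit cannot solve. The hidden-layer size, learning rate, and iteration count below are arbitrary choices for the demo, not a recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # hidden layer
    out = sigmoid(h @ W2 + b2)           # output layer
    d_out = (out - y) * out * (1 - out)  # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # gradient pushed back through W2
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0, keepdims=True)

print(np.round(out).ravel())  # typically converges to [0. 1. 1. 0.]
```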

Then specialized designs emerged:

  • Convolutional Neural Networks (CNNs) for images (e.g., LeNet).
  • Recurrent Neural Networks (RNNs) and LSTMs for sequences.
  • Cellular Neural Networks (not to be confused with convolutional nets), proposed by Leon Chua and Lin Yang in 1988: analogue, grid-like arrays of simple processors in which each “cell” interacts only with its neighbors (a minimal sketch follows this list). These have been used in real-time image processing and are closely tied to neuromorphic hardware research.
  • Hierarchical Temporal Memory (HTM), inspired by the structure of the neocortex, emphasizing sparse distributed representations and sequence memory for anomaly detection and time-series tasks.
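
For readers who have not met Cellular Neural Networks before, here is a minimal discrete-time sketch of the Chua–Yang state equation in NumPy/SciPy. The 3×3 feedback (A) and control (B) templates, the bias z, and the grid size are illustrative values in the spirit of the classic edge-extraction task, not taken from the article:

```python
import numpy as np
from scipy.signal import convolve2d  # 3x3 neighbourhood sums

def cnn_output(x):
    """Chua-Yang piecewise-linear output: y = 0.5*(|x+1| - |x-1|), clipped to [-1, 1]."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def cellular_nn_step(x, u, A, B, z, dt=0.05):
    """One forward-Euler step of the cell state equation:
    dx/dt = -x + A*y (feedback from neighbours) + B*u (input from neighbours) + z."""
    y = cnn_output(x)
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + z
    return x + dt * dx

# Illustrative templates (assumed values for this demo):
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
z = -1.0

u = -np.ones((16, 16))
u[4:12, 4:12] = 1.0      # a bright square (+1) on a dark background (-1)
x = np.zeros_like(u)     # start all cells from a resting state
for _ in range(200):
    x = cellular_nn_step(x, u, A, B, z)

edges = cnn_output(x)    # cells on the square's outline settle near +1, the rest near -1
print(np.round(edges)[3:13, 3:13])
```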

The “deep learning revolution” of the 2010s combined large datasets, GPUs, and improved activations like ReLU. Landmark systems such as AlexNet (2012) showed that very deep CNNs could dominate image recognition tasks.

Recent years have brought Transformers (2017) with their attention mechanism, powering large language models like GPT; Graph Neural Networks for relational data; and diffusion models for image and audio generation. Modern architectures emphasize scale, modularity, and efficiency, moving far beyond the early threshold units.
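
At the heart of the Transformer is scaled dot-product attention (Vaswani et al., 2017): every query is compared against every key, and the values are mixed according to those similarity weights. A minimal NumPy version, with arbitrary shapes and random data purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; values are averaged by the softmax weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 token queries, 8-dimensional
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)      # -> (4, 8)
```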

Despite sharing roots in the biology of the neuron, today’s research and technology diverge into three overlapping streams:

  • Neuron modelling seeks biological accuracy, using mathematical models like Hodgkin–Huxley or simplified integrate-and-fire neurons to simulate real brain activity (a leaky integrate-and-fire sketch follows this list).
  • Neuromorphic engineering builds hardware—chips and sensors—that operate more like brains: event-driven, massively parallel, and extremely power-efficient, including implementations of Cellular Neural Networks and spiking-neuron chips.
  • Neuro-inspired computing focuses on algorithms: convolution mimicking the visual cortex, reinforcement learning reflecting dopamine-driven reward, HTM capturing cortical sequence memory, or attention mechanisms loosely echoing cognitive focus.
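
As an example of the simplified models mentioned under neuron modelling, here is a minimal leaky integrate-and-fire sketch. The membrane parameters are generic textbook-style values and the function is hypothetical, not tied to any particular simulator:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.070, v_thresh=-0.050, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
    integrates the input current, and spikes + resets when it crosses threshold."""
    v = v_rest
    spikes, trace = [], []
    for step, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:            # threshold crossed: record a spike, reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.full(2000, 2e-9)        # 2 nA step current for 200 ms
trace, spikes = simulate_lif(current)
print(f"{len(spikes)} spikes in 200 ms")
```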

Persistent Challenges

  • Energy efficiency and scaling: Deep models demand enormous computational and electrical resources, far beyond the brain’s ~20 W power budget.
  • Biological fidelity vs. practicality: Highly detailed neuron models are computationally expensive, while simplified models risk missing crucial dynamics.
  • Learning paradigms: Brains learn continuously, often from sparse data, while most AI still depends on massive labeled datasets and offline training.
  • Robustness and interpretability: Modern networks can be brittle to adversarial inputs and remain difficult to explain, limiting trust and safety.
  • Hardware limits: Memory bandwidth, interconnect latency, and fabrication costs challenge neuromorphic and large-scale AI hardware.

From staining neurons in the 1800s to training billion-parameter transformers today, the trajectory is clear:

Biology revealed the neuron → mathematics captured its dynamics → engineering scaled and re-imagined it for computation.

Neuron modelling continues to deepen our understanding of the brain. Neuromorphic hardware aims to bring brain-like efficiency to real-time computing. Neuro-inspired algorithms power the AI systems that now write, converse, and create alongside us. All three domains—scientific, hardware, and algorithmic—remain interconnected, each drawing inspiration from that first great insight: the neuron is a cell, and in its spikes lies the logic of thought.
