Cover page
Eurema hecabe, also popularly known as the Common Grass Yellow butterfly, is often seen in Asia, Africa, and
Australia. It is a small, vibrant species often flying low to the ground in grassy or scrubby areas.
Photo Credit: Vidyamol M. V., DCII Zoology, project student at airis4D from Christian College, Chenganoor
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.4, No.5, 2026
www.airis4d.com
Our journal is delighted to welcome Prof. K. Babu
Joseph, who launches a new monthly column titled
Vijnanam—a Sanskrit term meaning “knowledge.” A
former faculty member of CUSAT and current advisor to
the journal, Prof. Joseph is a distinguished physicist and
an accomplished writer of popular science, philosophy,
and poetry in both English and Malayalam. Through this
column, he will offer rich interdisciplinary perspectives,
primarily focusing on science while also occasionally
engaging with other fields of thought.
The journal starts with the first article, “Scientific
Knowledge in Evolution: Empiricism and Logical
Positivism” by K. Babu Joseph, which examines how
scientific knowledge develops through philosophical
inquiry. It begins by questioning the nature
of knowledge—traditionally seen as “justified true
belief”—and highlights its limitations and the need
for continuous validation. The discussion then focuses
on empiricism, which emphasises sensory experience as
the foundation of knowledge, while also acknowledging
Immanuel Kant’s view that reason plays a vital role.
The article further explores logical positivism, which
sought to define science through logic and verifiability,
but ultimately declined due to its inability to address
universal and abstract scientific statements, leading to
the emergence of more robust theories of scientific
progress.
The article “From Imagination to Innovation:
How Science Fiction Inspires Real Science” by Dr.
Arun Aniyan highlights how science fiction serves
as a powerful driver of real-world scientific and
technological progress. It explains that many modern
innovations—such as artificial intelligence, space
exploration, and biotechnology—were first imagined
in sci-fi narratives, which provide both inspiration and
ethical foresight. By visualising future possibilities,
simplifying complex ideas for the public, and offering
both blueprints and warnings, science fiction motivates
scientists and engineers to turn imagination into reality,
ultimately shaping the direction of modern science and
innovation.
Blesson George’s article “Towards Efficient
Learning in Neuromorphic Computing: A Hybrid
Probabilistic–Spike Approach” presents an innovative
framework to improve learning in neuromorphic
computing systems. It explains how traditional
spiking neural networks, though energy-efficient, face
challenges in training due to their complex dynamics.
To address this, the author proposes a hybrid model
that combines probabilistic reasoning with spike-
based computation, enabling more adaptive, efficient,
and scalable learning. By introducing feature-aware
connections, adaptive priors, and local learning rules,
the framework reduces computational complexity,
avoids the limitations of backpropagation, and supports
real-time, low-power applications, making it highly
suitable for advanced AI and neuromorphic hardware
systems.
The article “Can Machines Create, or Only
Rearrange Ideas?” by Jinsu Ann Mathew explores the
evolving debate on whether artificial intelligence can
truly be creative or simply recombine existing ideas. It
argues that while AI generates novel and often surprising
outputs by learning patterns and combining them in
new ways, this process closely resembles a fundamental
aspect of human creativity itself—recombination with
insight. However, the article highlights that human
creativity is also shaped by intention, emotion, and lived
experience, which AI lacks. Ultimately, it suggests that
AI does not just challenge the idea of machine creativity
but also deepens our understanding of what creativity
truly means, revealing it as a blend of recombination,
meaning, and human experience.
Abishek P.S.’s article “Plasma in the Interstellar
Medium” explains the crucial role of plasma in
shaping the structure and evolution of galaxies. It
describes the interstellar medium as a dynamic, multi-
phase environment where ionised plasma interacts
with magnetic fields, radiation, and cosmic events
like supernovae. The article highlights how plasma
governs key processes such as star formation, energy
transfer, turbulence, and chemical enrichment, while
also emphasising its dominance as the primary state of
visible matter in the universe. Through observational
evidence and discussion of challenges, it shows
that understanding interstellar plasma is essential for
explaining the life cycle of stars, the behaviour of
galaxies, and the broader evolution of the cosmos.
“Astronomy at Scale” by Ajay Vibhute discusses
how modern astronomy has transformed into a
data-driven science due to the massive growth of
observational datasets. It highlights how large-scale
data enables deeper insights into the universe, including
galaxy evolution and transient phenomena, while
requiring advanced computational tools like machine
learning, distributed computing, and automated
pipelines. The article also addresses challenges related
to data storage, access, movement, and the risk of
systematic errors, emphasising the need for robust,
transparent, and scalable methods. Overall, it shows
that integrating computation with astronomy is essential
for managing complexity and driving future scientific
discoveries.
The article “Black Hole Stories-26: Some Black
Hole Mergers From LIGO-Virgo-KAGRA Observing
Run O4” by Ajit Kembhavi presents an overview of
significant black hole merger events detected during
the O4 observing run (2023–2025). It highlights
discoveries such as extremely massive and high-spin
mergers like GW231123, precise high signal-to-noise
detections like GW250104 that enable tests of general
relativity, and unusual systems including those formed
through hierarchical mergers or lying in mass gap
regions. The article emphasizes how improved detector
sensitivity has increased both the number and diversity
of observed GW detections, offering deeper insights
into black hole formation, evolution, and fundamental
physics while also raising new questions about their
origins.
The article by Aromal P, “X-ray Astronomy:
Theory”, explains the origin and physics of
thermonuclear X-ray bursts in neutron star low-mass
X-ray binaries. It describes how accreted hydrogen and
helium on a neutron star’s surface undergo unstable
nuclear reactions, leading to rapid thermal runaway and
intense bursts of X-ray radiation. The article highlights
how different accretion rates influence the nature of
these bursts, including processes like the CNO cycle
and rapid-proton (rp) process, and explains their impact
on the surrounding accretion disk and corona. It also
emphasises that studying these bursts provides valuable
insights into extreme physics, such as neutron star
structure, dense matter equations of state, and nuclear
processes under extreme conditions.
Aengela Grace Jacob’s article “Gene Knockout
Strategies: The Scientific Wrestle of Gene
Therapeutics” explains the principles and importance
of gene knockout techniques in genetic engineering
and cancer research. It describes how methods such
as homologous recombination, ZFNs, TALENs, and
especially CRISPR-Cas9 enable precise and permanent
disruption of genes to study their function and identify
therapeutic targets. The article highlights how these
strategies help in understanding disease mechanisms,
developing targeted cancer treatments like CAR T-cell
therapy, and discovering critical genetic vulnerabilities
through approaches such as CRISPR screening. It
also discusses emerging insights like isoform switching
in cancers, emphasising the potential of gene-based
precision medicine for more effective and selective
treatments.
The article “The eDNA Metabarcoding Model:
Next-Generation Biodiversity Assessment” by
Geetha Paul explains how environmental DNA
(eDNA) metabarcoding is revolutionising biodiversity
monitoring by detecting genetic traces that organisms
leave in their environment. It describes the scientific
workflow from DNA collection and sequencing
to bioinformatics analysis, highlighting tools like
universal primers and high-throughput sequencing
to identify entire ecosystems from a single sample.
The article emphasises the method’s advantages,
including high sensitivity, non-invasive sampling,
and efficiency in detecting rare or invasive species,
while also addressing challenges such as database
gaps and technical biases. Overall, it presents eDNA
metabarcoding as a transformative approach for
conservation, ecological research, and environmental
management.
News Desk
Photo courtesy: Prof. G. Ambika, his long-term research collaborator.
Dr. K P Harikrishnan is no more
The airis4D family and the global scientific fraternity are deeply grieved by the sudden and tragic passing
of Dr. K. P. Harikrishnan, former faculty at the Department of Physics, Cochin College. He was a Research
Associate of IUCAA, Pune, for more than 20 years and, through this collaboration, published around 30 research
papers on the data-driven dynamics of astrophysical sources.
Beyond his brilliant contributions as a researcher, ’Hari’ was a beacon of kindness—a soul whose trademark
was a gentle smile and words that healed. In a world of chaos, he remained an island of serenity, never once
known to be angry or unkind. We bow our heads in homage to his exceptional life and his restless scientific
curiosity. Rest in eternal peace, dear Hari. You will be missed beyond measure.
Contents
Editorial ii
I Vijnana 1
1 Scientific Knowledge in Evolution :
Empiricism and Logical Positivism 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Empiricism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Logical Positivism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
II Artificial Intelligence and Machine Learning 4
1 From Imagination to Innovation: How Science Fiction Inspires Real Science 5
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 The Power of the First Look . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Fictional Inventions That Became Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 A Look at Modern Scientific Fields and Their Sci-Fi Roots . . . . . . . . . . . . . . . . . . . . . 6
1.5 How You Can Contribute to the Next Big Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Towards Efficient Learning in Neuromorphic Computing:
A Hybrid Probabilistic–Spike Approach 9
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Proposed Hybrid Learning Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Proposed Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 Computational Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Can Machines Create, or Only Rearrange Ideas? 13
3.1 Why Creativity Often Looks Like Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 How AI Generates “New” Ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Where AI Seems Creative and Where It Still Falls Short . . . . . . . . . . . . . . . . . . . . . 15
3.4 Have We Misunderstood Creativity All Along? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
III Astronomy and Astrophysics 18
1 Plasma in the Interstellar Medium 19
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.2 Plasma in the Interstellar Medium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 Multi-phase Plasma Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4 Physical Processes in Interstellar Medium Plasma . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.5 Observational Evidences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Relevance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2 Astronomy at Scale 24
2.1 Growth of Astronomical Data Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Why Manual Analysis Failed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Data-Intensive Astronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 Storage, Access, and Data Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 Scientific Implications of Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3 Black Hole Stories-26
Some Black Hole Mergers From LIGO-Virgo-KAGRA Observing Run O4 27
3.1 The Fourth Observing Run O4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Compact Object Masses Found in the Four Observing Runs: . . . . . . . . . . . . . . . . . . . . 31
4 X-ray Astronomy: Theory 33
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Thermonuclear X-ray burst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
IV Biosciences 36
1 Gene Knockout Strategies The Scientific Wrestle Of Gene Therapeutics 37
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.2 Fundamental Logic Of Knockout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.3 Traditional Homologous Recombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.4 The Programmable Nucleases (ZFNs and TALENs) . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.5 CRISPR-Cas9: The Genetic Revolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.6 Why We Need Gene Knockout? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.7 Assisting Therapy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.8 The Great Genetic Shuffle: A Symphony of Survival in Esophageal Cancer . . . . . . . . . . . . 39
1.9 The Shifting Genetic Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.10 The TTLL12 Gambit: Evading the Quality Control . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.11 The HM13 Maneuver: Stressing the System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1.12 A New Horizon for Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2 The eDNA Metabarcoding Model: Next-Generation Biodiversity Assessment - Part 2 41
2.1 Introduction: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.2 The Theoretical Framework: From Cells to Sequences . . . . . . . . . . . . . . . . . . . . . . . . 41
2.3 The Technical Pipeline: Field to Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4 Strategic Advantages and the Biotic Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.5 Challenges: The Road to Standardisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Part I
Vijnana
Scientific Knowledge in Evolution:
Empiricism and Logical Positivism
by K. Babu Joseph
airis4D, Vol.4, No.5, 2026
www.airis4d.com
1.1 Introduction
Vijnana is the Sanskrit equivalent of knowledge
in English, which represents understanding developed
through experience and reasoning. It spans a wide
range of categories like facts, skills, and capacity of
independent thinking. Plato (1) quotes a Socratic
dialogue in which knowledge is tentatively defined
as justified true belief. He knew that this definition was
insufficient to capture cases like dreams, inspirations,
and intuitions, all of which can sometimes be
sources of knowledge. But how can their genuineness be
justified? No way! The epithet true is also unclear. In
1963, the American philosopher Gettier (2) challenged
the conventional definition of knowledge by means of
counterexamples in which the beliefs become justified
by accident or coincidence but cannot be treated as
knowledge. The emerging consensus is that there is no
universally accepted definition of knowledge! We leave
this foundational terrain to philosophers of science for
further grinding.
Turning to science, some regard it as systematic
knowledge in any branch of learning, empirical
knowledge being only a part. Should science be
restricted to experiential knowledge, vast domains
of higher mathematics would be left out, a catastrophe
that should be avoided at all costs! In the present
article, we examine the implications of each of the
three words: justified, true, and belief, and proceed
to consider some major theories about how scientific
knowledge evolves. If empirical justification alone is
available in a particular case, then it can be considered
true, postponing its theoretical justification to the
future. Belief is a matter of psychology such that
sometimes even false things happen to be believed for
long periods. The geocentric theory of the universe is
a classic example of erroneous belief that gave rise to
anthropocentrism in thought and action. It is necessary
to check and recheck, from time to time, the validity of
any knowledge. With these introductory remarks, we
turn to some important movements that took place in
the last century to characterise the growth pattern of
science.
1.2 Empiricism
That knowledge arises primarily from sensations or
sensory experience is the principal thesis of empiricism
(3). Ideas and reason are secondary things that develop
on the basis of sensations. This school was in existence
in India in the form of the Vaiseshika philosophy, which
stressed perception, and the Charvaka school’s trust
in sensory experience (4). Both Ayurveda and Greek
medicine followed the empirical approach in medicine.
Philosophers have tried to distinguish between different
types of empiricism, like concept-empiricism and
belief-empiricism. The difficulty with such a level of
abstraction is that it is the individual and not a common
experience that forms its basis. For the purposes of
understanding science, we will ignore such subtleties
and define empiricism to be the school that gives
primacy to experience in general. Immanuel Kant (5), a
great German philosopher of the 18th century, believed
that reason is above experience. The essential point he
wanted to argue was that experience must cohere
with reason. Kant posed the following questions: If
experience is the foundation of all knowledge, how is
pure math possible, and how is natural science possible?
In answering these, he developed his theory of synthetic
a priori knowledge, which precedes experience, and
therefore, must validate the latter.
1.3 Logical Positivism
Logical Positivism (6) also known as logical
empiricism or neopositivism, took birth in the early 20th
century among philosophers of science in Vienna under
the leadership of Moritz Schlick, and in Berlin under
the leadership of Hans Reichenbach. Both Karl Popper
and Wittgenstein used to unofficially participate in the
group’s discussions. Ernst Mach suggested an empirical
origin of mathematical concepts. Logical positivists,
on the other hand, believed that mathematical truth
is the same as logical consistency. But it became
clear that logical rules are fixed only by convention.
There arose a classification of sentences into two types:
analytic and synthetic. Analytic statements have no
empirical reference, but synthetic ones combine formal
and empirical aspects, and therefore, are suitable for
natural science. Here we give an example of each:
“All bachelors are unmarried” (analytic); “Politicians are
unselfish servants of the country” (synthetic). Besides
this classification, the positivists proposed a verifiability
criterion too, which defines the meaning of a sentence
to be as given by the method of its verification. Science
should, therefore, consist only of verifiable sentences.
The verifiability criterion obviously fails in the case
of universal statements like “All metals expand when
heated”, because of the sheer impossibility of verification
in all cases. We know that logical positivism is not
useful for evaluating the truth in a branch like cosmology
which deals not only with heavenly objects but also
with esoteric problems such as the beginning and end
(if any) of the universe. Logical Positivism declined,
and finally met with an ungraceful collapse when more
realistic theories, such as Popper’s and Kuhn’s, were put
forward.
References
1. Plato, Theaetetus, a Socratic dialogue.
2. Edmund Gettier, “Is Justified True Belief Knowledge?”, Analysis, 23, no. 6, 121 (1963).
3. Edward Craig, Editor, The Shorter Routledge Encyclopaedia of Philosophy (2005).
4. M. Hiriyanna, The Essentials of Indian Philosophy, 1949, Reprint 2024.
5. Immanuel Kant, The Critique of Pure Reason (1781, 1787), The Critique of Practical Reason (1788), and The Critique of the Power of Judgement (1790); or Edward Craig, ibid.
6. Edward Craig, ibid.
About the Author
Prof. K. Babu Joseph, who was formerly
the Vice Chancellor of Cochin University of Science
and Technology (CUSAT), is a well-known physicist
and writer. Besides papers in scientific journals, he has written
several books spanning a wide spectrum of topics, from
popular science and philosophy to poetry, in both English and
Malayalam.
Part II
Artificial Intelligence and Machine Learning
From Imagination to Innovation: How Science
Fiction Inspires Real Science
by Arun Aniyan
airis4D, Vol.4, No.5, 2026
www.airis4d.com
1.1 Introduction
Have you ever stopped to truly consider the genesis
of a smartphone, the blueprint for a robot vacuum that
autonomously cleans your home, or the foundational
vision for space travel itself? It’s a fascinating pattern:
the seeds of today’s most revolutionary scientific and
technological achievements were often sown not within
the sterile confines of a research laboratory, but within
the boundless, vibrant narratives of a science fiction
book or projected onto a cinematic screen.
Science fiction, universally known as ”sci-fi,”
transcends its reputation as mere escapism featuring
extraterrestrials and futuristic dystopias. It is, in fact,
a deeply profound and powerful engine for creative
inspiration and technological foresight. It functions as
an unconstrained playground for the human imagination,
granting us the liberty to meticulously explore the
”what if” scenarios of science and technology entirely
unburdened by the practical, physical, and financial
limitations of current reality. This imaginative freedom
is crucial; it doesn’t just entertain, but actively fuels
the ambitions, determination, and innovative spirit of
scientists, engineers, inventors, and policymakers. By
consistently portraying the technologically impossible
as merely “not yet possible”, science fiction provides a
critical long-term vision, catalyzing the effort required
to make the most ambitious dreams—like interstellar
travel or sentient artificial intelligence—an eventual,
tangible reality. The genre effectively functions as
a cultural precursor, laying the psychological and
conceptual groundwork long before the technology
can physically exist.
1.2 The Power of the First Look
Science fiction’s most significant contribution is
its ability to visualize and popularize concepts before
they are technically feasible. By doing this, it achieves
several critical things:
1.2.1 It Provides a Goal to Strive For
Science fiction often presents a destination. For
example, the fictional starship Enterprise from the
Star Trek series, with its ability to cross vast distances
instantaneously, established ”warp drive” as the ultimate
travel goal, even if the current technology is light-years
away. These grand, ambitious ideas inspire generations
of researchers to dedicate their lives to solving the
complex problems that stand in the way of achieving
these fictional dreams.
1.2.2 It Makes Complex Ideas Accessible
Through compelling narratives, sci-fi introduces
non-technical audiences to advanced concepts like
artificial intelligence, genetic engineering, and parallel
universes. This widespread familiarity can garner
public support and funding for research that might
otherwise seem too obscure or abstract.
1.2.3 It Offers a Blueprint (or a Warning)
While not literal instructions, the detailed
descriptions of fictional devices often give engineers
a starting point. More importantly, sci-fi explores
the ethical and social consequences of technology.
Stories about robots turning against humans (like in
The Terminator) or gene-editing creating a two-tiered
society encourage real scientists to think deeply about
safety, ethics, and the long-term impact of their work.
1.3 Fictional Inventions That Became
Reality
The history of technology is littered with examples
of scientific breakthroughs that were first envisioned in
science fiction. This is shown in Table 1.1
1.4 A Look at Modern Scientific
Fields and Their Sci-Fi Roots
Science fiction is not merely escapism; it serves as
a powerful and indispensable wellspring of ideas that
actively shapes the future by posing profound questions
and imagining revolutionary technologies. This
visionary role is evident in three major contemporary
fields where the line between fiction and reality
continues to blur.
1.4.1 Artificial Intelligence (AI) and Robotics: Defining the Intelligent Machine
Long before the development of commercial
products like Google’s Assistant, Amazon’s Alexa, or
advanced industrial automation, science fiction authors
were grappling with the complex philosophical and
engineering challenges of creating intelligent machines.
The foundational concepts of what AI could be—from
a helpful, empathetic automaton like C-3PO from Star
Wars to the menacing, self-aware computer HAL 9000
from 2001: A Space Odyssey—were meticulously
explored.
Pioneering authors like Isaac Asimov established
seminal frameworks, such as the famous Three Laws of
Robotics, which were intended to regulate the behavior
of complex machines and ensure human safety. These
fictional laws laid the groundwork for contemporary
ethical discussions and research into AI alignment and
safe AI design. The ultimate measure of machine
intelligence, the “Turing Test”—where a machine’s
ability to exhibit intelligent behavior indistinguishable
from a human’s is tested—was a concept extensively
popularized and dissected in these narratives. The core,
enduring questions first posed in these stories—Can a
machine achieve genuine sentience? What are the
moral responsibilities of its creator? How do we
prevent unintended consequences?—remain central
drivers for researchers, ethicists, and policymakers
in the burgeoning field of AI today.
1.4.2 Space Exploration and Propulsion:
Igniting the Cosmic Ambition
Humanity’s relentless ambition to break the
bounds of Earth and venture to Mars, establish
permanent space habitats, or explore exoplanets is a
direct and powerful inheritance from the works of early
20th-century sci-fi authors. Luminaries such as H.G.
Wells, Jules Verne, and Robert Heinlein didn’t just write
about space travel; they meticulously envisioned the
necessary engineering and logistical solutions.
Ideas considered radical at the time—including
the concept of multi-stage chemical rockets (as detailed
by Arthur C. Clarke), the necessity of space stations
for long-duration missions (a common setting in many
Golden Age stories), and the possibility of sustained
human presence in extraterrestrial environments—were
normalized and popularized decades before the first
actual rocket launch. This fictional foresight served
as an inspirational blueprint. Space agencies globally,
most notably NASA and the European Space Agency
(ESA), have frequently and openly acknowledged
the critical, inspirational role that science fiction
plays. These narratives often serve as the first spark,
encouraging countless young readers to pursue careers
in STEM (Science, Technology, Engineering, and
Mathematics), ultimately supplying the next generation of astronauts, rocket scientists, and aerospace engineers necessary to turn fictional dreams into concrete reality.

Table 1.1: Fictional inventions that became reality

Fictional Concept | Sci-Fi Origin | Real-World Application
Submarine | Twenty Thousand Leagues Under the Sea by Jules Verne (1870) | Modern military and research submarines
Earbuds / Hands-Free Comms | Fahrenheit 451 by Ray Bradbury (1953) | Bluetooth earpieces and earbuds
Credit Cards / Electronic Banking | Looking Backward by Edward Bellamy (1888) | Modern debit and credit card systems
Robots | R.U.R. (Rossum’s Universal Robots) by Karel Čapek (1920) | Industrial robots, robotic surgery, and advanced AI
Communications Satellites | “The Arthur C. Clarke Orbit” (1945) | Geostationary communication satellites
1.4.3 Biotechnology and Genetic Editing:
Re-engineering Life Itself
The field of biotechnology, particularly the
revolutionary advancements in genetic editing, owes
a significant debt to science fiction for exploring
its ultimate potential and inherent risks. Fiction
has long delved into the profound ethical and
societal implications of modifying the human form.
Narratives range from the utopian vision of eliminating
disease, engineering perfect, ”designer” children, and
significantly extending the human lifespan, to dystopian
warnings of unintended biological mutations and a
future defined by genetic stratification.
Contemporary genetic editing tools, such as
CRISPR (Clustered Regularly Interspaced Short
Palindromic Repeats), represent the real-world
fulfillment of a capability first imagined in fiction.
While the day-to-day reality of laboratory genetic work
is far more complex and incremental than a dramatic
sci-fi plot, the genre provides the ultimate thought
experiment. It presents researchers, bioethicists, and
the public with the most extreme and compelling
scenarios—both the incredible benefits of curing
inherited diseases and the potential pitfalls associated
with tampering with the fundamental blueprint of life.
Sci-fi thus acts as a crucial ethical laboratory, preparing
society for technologies that can fundamentally change
what it means to be human.
1.5 How You Can Contribute to the
Next Big Idea
You don’t have to be a scientist to be inspired
by science fiction. The next wave of innovation often
comes from thinking outside the box, which is exactly
what sci-fi encourages.
1. Read and Watch Critically: When you read or
watch science fiction, don’t just enjoy the story.
Ask yourself: What is the piece of technology
here? How does it work? What problems does it
solve? Could this really be built in the next ten
years?
2. Encourage Imagination: Support educational
programs that foster creative thinking and
storytelling alongside scientific study. The
combination of imagination and technical skill is
the formula for future breakthroughs.
3. Support Foundational Research: Remember
that every fictional device, from invisibility
cloaks to instant teleportation, requires basic,
often unfunded, scientific research to even begin
to be realized.
Science fiction provides the visionary spark, the
audacious dream that challenges the status quo. It is
the compass pointing scientists toward the next great
frontier, reminding us that the only real limits are those
of our own imagination.
The next time you enjoy a piece of sci-fi, remember
that you might just be looking at the world of tomorrow.
About the Author
Dr. Arun Aniyan is leading the R&D for
artificial intelligence at DeepAlert Ltd, UK. He comes
from an academic background and has experience
in designing machine learning products for different
domains. His major interest is knowledge representation
and computer vision.
Towards Efficient Learning in Neuromorphic
Computing:
A Hybrid Probabilistic–Spike Approach
by Blesson George
airis4D, Vol.4, No.5, 2026
www.airis4d.com
2.1 Introduction
Neuromorphic computing has emerged as a
powerful paradigm aimed at replicating the efficiency
and adaptability of the human brain. Unlike
traditional computing systems, which rely on sequential
processing and separate memory and computation units,
neuromorphic systems operate using distributed, event-
driven architectures.
Spiking neural networks (SNNs) form the
computational backbone of neuromorphic systems.
These networks process information through discrete
spike events, enabling low-power and real-time
computation. However, despite their advantages,
training SNNs remains a challenging problem due to
their non-differentiable nature and temporal dynamics.
Existing approaches to learning in SNNs often rely
on biologically inspired rules such as Hebbian learning
and spike-timing dependent plasticity (STDP). While
these methods provide local learning capabilities, they
lack the flexibility and efficiency of modern machine
learning techniques.
In this paper, we propose a hybrid probabilistic–
spike learning framework that integrates probabilistic
reasoning with spike-based neural computation. The
proposed model overcomes key limitations of existing
approaches by introducing feature-aware connections,
adaptive prior updating, and efficient learning dynamics.
2.2 Background and Motivation
Biological neural systems learn through synaptic
plasticity, where connections between neurons are
strengthened or weakened based on activity. Hebbian
learning captures this principle through correlation-
based updates, while STDP introduces temporal
sensitivity by considering the timing of spikes.
Although these mechanisms are biologically
plausible, they are not sufficient for solving complex
computational tasks efficiently. On the other hand,
probabilistic models such as Bayesian learning provide
a strong mathematical framework for inference and
uncertainty handling but are not directly compatible
with spike-based computation.
This gap motivates the development of hybrid
approaches that combine the strengths of probabilistic
modeling and neuromorphic computation.
2.3 Proposed Hybrid Learning
Framework
2.3.1 Model Representation
Let the input feature vector be defined as:
X = \{ x_1, x_2, \ldots, x_n \}    (2.1)
Each feature is connected to the output neuron
through a synaptic weight interpreted probabilistically:
w_i = P(y \mid x_i)    (2.2)
Unlike naive probabilistic models, the proposed
framework accounts for feature interactions by
introducing adaptive importance factors.
2.3.2 Feature Interaction Modeling
The conditional probability of the output is
expressed as:
P(y \mid X) \propto \prod_{i=1}^{n} P(y \mid x_i)^{\alpha_i}    (2.3)

where \alpha_i represents the importance of each feature and is learned dynamically.
This formulation relaxes the independence
assumption and enables more expressive
representations.
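To make the feature-weighted formulation of Eq. (2.3) concrete, the short Python sketch below evaluates the weighted product of per-feature conditionals and normalises it over the candidate classes. The function name, the toy probability values, and the two-class setting are illustrative assumptions for this example rather than part of the proposed framework.

```python
import numpy as np

def weighted_posterior(p_y_given_xi, alpha):
    """Evaluate P(y|X) proportional to prod_i P(y|x_i)^alpha_i (cf. Eq. 2.3),
    normalised over the candidate classes.

    p_y_given_xi : array of shape (n_classes, n_features), per-feature conditionals
    alpha        : array of shape (n_features,), adaptive importance factors
    """
    log_post = (alpha * np.log(p_y_given_xi)).sum(axis=1)  # log of the weighted product
    post = np.exp(log_post - log_post.max())                # shift by the max for stability
    return post / post.sum()

# Toy example: two classes and three features (values are illustrative).
p = np.array([[0.7, 0.4, 0.6],     # P(y=0 | x_i) for each feature
              [0.3, 0.6, 0.4]])    # P(y=1 | x_i) for each feature
alpha = np.array([1.0, 0.5, 2.0])  # learned importance factors
print(weighted_posterior(p, alpha))   # approximately [0.81, 0.19]
```

Working in log space avoids numerical underflow when the number of features grows, and setting every \alpha_i = 1 recovers the plain independence-style product that the adaptive importance factors are meant to relax.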
2.3.3 Spike-Based Computation
The membrane potential of a neuron is computed
as:
V(t) = \sum_{i} w_i \, x_i(t)    (2.4)

A spike is generated when:

V(t) \geq V_{th}    (2.5)
This event-driven mechanism ensures efficient
computation.
2.3.4 Adaptive Prior Updating
The prior probability is updated incrementally as:
P_{t+1}(y) = (1 - \eta) \, P_t(y) + \eta \, \hat{P}(y \mid X)    (2.6)
where η is the learning rate.
This allows the system to adapt continuously to
new data.
2.3.5 Weight Learning Rule
The synaptic weights are updated using a
probabilistic learning rule:
w_i^{\mathrm{new}} = w_i^{\mathrm{old}} + \eta \, (x_i \, y - w_i^{\mathrm{old}})    (2.7)
This update balances stability and adaptability.
2.4 Proposed Algorithm
The complete training procedure is outlined below:
Algorithm 1: Hybrid Probabilistic–Spike Learning
1. Initialize weights w_i and prior probabilities P(y).
2. For each input sample X:
   (a) Compute the membrane potential: V = \sum_i w_i x_i
   (b) Generate a spike if V \geq V_{th}
   (c) Estimate the posterior probability P(y \mid X)
   (d) Update the weights: w_i \leftarrow w_i + \eta (x_i y - w_i)
   (e) Update the prior: P(y) \leftarrow (1 - \eta) P(y) + \eta P(y \mid X)
3. Repeat until convergence.
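As a concrete illustration of Algorithm 1, the following Python sketch implements the per-sample loop for a single output neuron with binary input features. The class name, the threshold and learning-rate values, and the logistic form used to estimate the posterior P(y \mid X) are simplifying assumptions made for this example, not specifications taken from the framework itself.

```python
import numpy as np

class HybridProbSpikeNeuron:
    """Minimal single-neuron sketch of Algorithm 1 (illustrative, not a reference design)."""

    def __init__(self, n_features, eta=0.05, v_th=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 0.1, size=n_features)  # step 1: initialise weights w_i
        self.prior = 0.5                                  # step 1: initialise prior P(y)
        self.eta = eta                                    # learning rate
        self.v_th = v_th                                  # spike threshold V_th

    def step(self, x, y):
        """Process one input sample x with label y and apply the local updates."""
        v = float(np.dot(self.w, x))           # (a) membrane potential V = sum_i w_i x_i
        spike = v >= self.v_th                 # (b) event-driven spike generation
        # (c) posterior estimate P(y|X); a logistic squash of the potential is assumed here
        posterior = 1.0 / (1.0 + np.exp(-(v - self.v_th)))
        self.w += self.eta * (x * y - self.w)  # (d) local weight update, Eq. (2.7)
        self.prior = (1.0 - self.eta) * self.prior + self.eta * posterior  # (e) prior update, Eq. (2.6)
        return spike, posterior

# Toy usage: the neuron learns to fire when the first feature is active.
neuron = HybridProbSpikeNeuron(n_features=4)
rng = np.random.default_rng(1)
for _ in range(200):                           # step 2, repeated until convergence (step 3)
    x = rng.integers(0, 2, size=4).astype(float)
    neuron.step(x, y=x[0])
print(neuron.w.round(2))                       # the weight on the first feature ends up largest
```

Every quantity used inside step is locally available to the neuron, so no global gradient or backpropagation pass is required, which is the property emphasised in Section 2.5.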
2.5 Computational Advantages
The proposed hybrid probabilistic–spike learning
framework offers several computational advantages
that make it well suited for efficient and scalable
intelligent systems. By combining probabilistic
reasoning with event-driven neural computation, the
framework achieves a balance between expressiveness
and efficiency.
2.5.1 Event-Driven Efficiency
Unlike conventional neural networks that perform
continuous computations, neuromorphic systems
operate in an event-driven manner. Computation occurs
only when spikes are generated. This significantly
reduces unnecessary processing when inputs are
inactive, leading to lower energy consumption and
improved efficiency. Such behavior is particularly
advantageous for sparse and real-time data streams.
2.5.2 Reduced Computational Complexity
Traditional probabilistic models often
require either strong independence assumptions
or computationally expensive joint probability
estimation. The proposed framework avoids both
extremes by introducing feature-weighted probabilistic
contributions. By assigning adaptive importance
factors to features, the model captures relevant
interactions without requiring full joint distributions.
This results in a substantial reduction in computational
complexity while maintaining expressive power.
2.5.3 Parallel Processing Capability
Neuromorphic systems naturally support parallel
computation, as neurons operate independently and
simultaneously. In the proposed model, each feature
contributes independently to the neuron’s activation, and
weight updates are performed locally. This eliminates
the need for centralized computation and makes the
framework highly compatible with parallel architectures
such as GPUs and neuromorphic hardware.
2.5.4 Incremental and Online Learning
The framework supports continuous and
incremental learning through adaptive updates of both
synaptic weights and prior probabilities. Instead
of requiring batch training over large datasets, the
system updates its parameters on a per-sample basis.
This reduces memory requirements, shortens training
time, and enables real-time learning in dynamic
environments.
2.5.5 Avoidance of Backpropagation
Bottleneck
Conventional deep learning methods rely
on backpropagation, which requires global error
propagation and high computational cost. In contrast,
the proposed approach employs local update rules based
on spike activity and probabilistic adjustments. This
eliminates the need for global gradient computation,
making the model more efficient and suitable for
hardware implementation.
2.5.6 Sparse Computation
Spike-based processing inherently leads to sparse
activity, as only a subset of neurons is active at any given
time. This sparsity reduces the number of computations
required and improves overall efficiency. Sparse
representations also contribute to better scalability in
large networks.
2.5.7 Hardware Compatibility
The proposed framework is well aligned with the
constraints of neuromorphic hardware. It supports
local memory usage, low-precision computation, and
event-driven processing. These characteristics make
it suitable for deployment in low-power devices,
embedded systems, and edge computing environments.
2.5.8 Scalability
Due to its reliance on local learning rules,
parallel processing, and reduced dependence on global
information, the framework scales efficiently to larger
systems. As network size increases, computational cost
grows in a manageable manner, making the approach
practical for real-world applications that benefit from
low power consumption and efficient learning.
2.6 Conclusion
This paper presented a hybrid probabilistic–spike
learning framework for neuromorphic computing
systems. By integrating probabilistic inference
with spike-based neural dynamics, the proposed
model addresses key limitations of existing learning
approaches. The framework introduces feature-aware
connections, adaptive priors, and efficient learning rules,
enabling scalable and energy-efficient computation.
Future work may focus on experimental validation,
hardware implementation, and integration with deep
learning systems to further enhance performance.
About the Author
Dr. Blesson George presently serves as
an Assistant Professor of Physics at CMS College
Kottayam, Kerala. His research pursuits encompass
the development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Can Machines Create, or Only Rearrange
Ideas?
by Jinsu Ann Mathew
airis4D, Vol.4, No.5, 2026
www.airis4d.com
When an AI writes a poem, composes music,
designs an image, or suggests an unexpected solution to
a problem, the reaction is often a mixture of fascination
and discomfort. There is a sense that something
remarkable is happening—but also uncertainty about
how to interpret it. Are we witnessing creativity in a
genuine sense, or simply the output of a system that has
become extraordinarily skilled at combining patterns?
This question has moved from philosophy into
everyday conversation. Generative AI systems can
now produce stories, paintings, songs, code, and even
research ideas that sometimes appear original and
imaginative. A prompt as simple as “write a detective
story set on Mars” can produce something coherent and
surprising within seconds. For many, this feels creative.
However, others remain skeptical, arguing that these
systems are not inventing anything truly new—they are
only remixing what they have already seen.
At the heart of this debate lies a deeper question
about creativity itself. What do we actually mean when
we call something creative? Does creativity involve
conscious intention, imagination, and lived experience?
Or can it emerge whenever existing ideas are combined
in novel and meaningful ways?
Traditionally, creativity has often been treated
as one of the most distinctive human capacities. We
associate it with originality, inspiration, and the ability
to produce something that did not exist before. It
belongs to artists, scientists, writers, and inventors. It
feels tied to insight and imagination in a way that seems
difficult to reduce to rules or algorithms.
Yet, when we look closely, even human creativity
rarely starts from nothing. New ideas often grow
out of old ones. Innovations emerge by connecting
concepts that were previously separate. A scientific
breakthrough may combine ideas from two fields. A
novel may reshape familiar themes into something fresh.
A piece of music may transform existing influences into
a new expression. In this sense, creativity often involves
not invention from emptiness, but recombination with
insight.
This is where AI makes the question so intriguing.
If creativity frequently involves recombining patterns
in unexpected ways, then what exactly separates human
creativity from what generative models do? Is AI
merely imitating creativity, or is it participating in one
of its fundamental mechanisms?
The question becomes even more interesting
because it does not only challenge our understanding of
machines—it challenges our understanding of ourselves.
Asking whether AI can be creative forces us to ask what
creativity actually is.
Is creativity something uniquely tied to
consciousness? Or could it, at least in part, emerge from
the surprising rearrangement of ideas? Exploring that
possibility may tell us as much about human imagination
as it does about artificial intelligence.
3.1 Why Creativity Often Looks Like
Recombination
We often think of creativity as the act of producing
something completely original—something that appears
almost out of nowhere. A brilliant scientific theory, a
timeless painting, a new musical style, or a powerful
story can feel like a sudden spark of invention. Because
of this, creativity is often associated with novelty in
its purest form, as though the creative mind generates
ideas from nothing.
But when we look more closely, creativity often
works differently. Many of the ideas we call original
are not created in isolation. They emerge by reshaping,
connecting, or extending ideas that already exist. What
appears revolutionary is often a new arrangement of
familiar pieces.
Scientific progress offers many examples. Major
discoveries are frequently born when ideas from
different domains are brought together in unexpected
ways. Breakthroughs often arise not from abandoning
existing knowledge, but from seeing connections others
had missed. Creativity here lies less in inventing entirely
new building blocks and more in combining known
ones in transformative ways.
Art follows a similar pattern. Writers draw
on myths, histories, and earlier literary traditions,
yet produce stories that feel original. Musicians
borrow rhythms, scales, or styles and transform them
into something new. Painters absorb influences but
reinterpret them through a different vision. What
makes the work creative is not the absence of prior
influence, but the originality of how those influences
are reassembled.
Even everyday creativity often works through this
process. A clever metaphor joins two unrelated ideas.
A joke works because it connects expectations in an
unexpected way. A novel solution to a practical problem
often comes from applying an old idea in a new setting.
In this sense, creativity frequently involves
recombination, but not mere rearrangement. It is not
random mixing. It is the meaningful joining of existing
elements in ways that produce novelty.
A simple example makes this clearer. Imagine
combining the detective logic of Sherlock Holmes with
the futuristic world of science fiction. Neither idea is
new on its own, yet their combination may produce
an entirely fresh narrative world. Much creative work
operates in exactly this way.
This does not reduce creativity to mechanical
mixing. The value lies in which connections are made,
how they are transformed, and whether something
genuinely unexpected emerges. Recombination
becomes creative when it produces insight, surprise, or
meaning.
This perspective changes the debate about machine
creativity in an important way. If even human creativity
often depends on connecting and transforming existing
ideas, then recombination may not be the opposite of
creativity at all. It may be one of its central mechanisms.
And if that is true, then the question is no longer whether
creativity involves recombination. It becomes whether
machines can engage in that process in a way that
deserves to be called creative.
3.2 How AI Generates “New” Ideas
Once we recognize that creativity often involves
combining existing ideas in unexpected ways, the
discussion about AI becomes much more interesting.
The question is no longer whether machines start from
existing material—they clearly do, just as humans often
do. The real question is whether the way they recombine
ideas can lead to something genuinely new.
To understand this, it helps to look at how
generative AI produces output.
A common misconception is that AI simply stores
vast amounts of information and copies pieces of it
when prompted. But modern generative models do
not work like giant databases retrieving ready-made
answers. Instead, they learn patterns, relationships, and
structures from enormous amounts of data, and use
those learned patterns to generate new responses.
This is an important distinction. When asked to
write a poem about the moon in the style of classical
literature, the model is not pulling a hidden poem from
storage. It is generating one word at a time based on
patterns it has learned about poetry, imagery, language,
rhythm, and style. What emerges is not a direct copy,
but a new construction shaped by those patterns.
Consider a prompt such as: “Imagine a conversation
between a medieval philosopher and a modern robot
about consciousness.”
There may be no exact example of this in the
model’s training data. Yet the system can generate
something plausible—even surprising—by combining
patterns associated with philosophy, dialogue, history,
and science fiction. This is where AI begins to look
creative. It can produce unusual combinations that may
not have existed before. It can blend concepts from
distant domains, generate analogies, suggest design
ideas, or offer unexpected turns in a story. Sometimes
the result may even surprise the person who gave the
prompt.
That sense of surprise is important, because
surprise is often part of what we associate with
creativity.
Of course, one might argue that this is still just
recombination at scale. But perhaps scale changes what
recombination can do. When patterns become rich
enough and combinations become sufficiently complex,
novelty can emerge in ways that begin to resemble
creative behavior.
A useful comparison is with music. A composer
works within patterns of harmony and rhythm, yet
can still create something original. Constraints do not
eliminate creativity; they often make it possible. In a
different way, generative AI operates through learned
constraints and patterns, yet can still produce outputs
that feel new.
This does not necessarily mean machines possess
imagination in the human sense. But it does suggest
that novelty can arise from pattern generation in ways
that are more sophisticated than simple imitation. And
this complicates the old assumption that machines can
only repeat what they have seen. Sometimes they do
something closer to synthesis. And once synthesis
enters the picture, the line between recombination and
creativity begins to blur.
3.3 Where AI Seems Creative and
Where It Still Falls Short
There are many situations where AI appears
creative.
For example, suppose you ask an AI to suggest a
logo for a coffee shop inspired by astronomy. It might
combine stars, coffee cups, and constellations in ways
you had not thought of. Or imagine asking it to write a
fairy tale set in a future where robots and humans live
together. It may produce a story that feels original and
imaginative.
In cases like these, AI can seem more than a
tool—it can feel like a creative partner. This is one
reason people increasingly use AI for brainstorming. A
writer may use it to generate plot ideas, an artist may
explore design variations, or a researcher may use it to
test unusual perspectives. Sometimes the value is not
in the final output itself, but in the unexpected ideas it
helps trigger. That certainly looks like creativity. But
there is also a limit.
Consider a human poet writing about grief. The
poem may come from personal loss, memory, and
emotion. An AI can generate lines about grief, but
it has never felt loss. It can imitate the language of
emotion, but not the experience behind it. That seems
like an important difference.
The same applies to imagination. A human child
imagining a dragon kingdom may do so from curiosity,
play, and wonder. An AI can generate a description of
such a world, but it does not dream it in the same way.
This is why some people say AI can produce creative
outputs, but may not possess creativity itself.
A simple way to think about it is this: AI can
generate novel combinations; humans often create with
novel combinations plus intention and experience. The
first may look creative. The second may be what we
usually call creativity. Still, even this distinction can
become blurry.
Suppose an AI-generated melody inspires a
musician to compose a new song. Or an AI-generated
concept helps an architect design a building. Was the AI
merely assisting, or did it contribute creatively? There
may not be a sharp boundary. Perhaps AI is not creative
in exactly the human sense, but it may participate in
part of the creative process—especially in generating
unexpected possibilities. And that may be why the
debate remains so interesting.
The question may not be whether AI is creative in
the same way humans are. It may be whether creativity
itself has more than one form.
3.4 Have We Misunderstood
Creativity All Along?
At this point, the question begins to turn around.
Instead of asking whether AI is truly creative, we might
ask something more surprising: What if creativity itself
has always involved recombination?
The more we look at human creativity, the more
this seems plausible.
A new recipe may come from combining familiar
ingredients in an unusual way. A scientific idea may
emerge by connecting concepts from two different
fields. A novelist may take old themes—love, conflict,
mystery—and arrange them into a story no one has told
before.
In each case, something new appears, but it often
grows from what already existed.
Take a simple example. Imagine someone
combines classical Indian music with jazz improvisation
and creates a new musical style. The elements are not
entirely new, but the combination may be original and
creative.
This happens everywhere. Creativity often lies not
in inventing from nothing, but in seeing connections
others did not see. If that is true, then recombination
is not the opposite of creativity—it may be part of its
foundation. And this is where AI becomes interesting
again.
Generative models also combine patterns in
unexpected ways. They can link ideas from different
domains, generate unusual associations, and produce
novel outputs. In that sense, they are doing something
that resembles at least one aspect of creativity.
Of course, human creativity may include things
machines lack—intention, emotion, lived experience,
purpose. Those still matter. But perhaps those are
layers built upon a more basic process: forming new
ideas through unexpected combinations. If so, the gap
between human creativity and machine generation may
not be as absolute as it first appears. Maybe AI is
not replacing creativity. Maybe it is helping reveal
something about what creativity has always been. And
perhaps that is the most interesting possibility of all.
Because the question may not be: Can machines be
creative? It may be: Have we misunderstood creativity
all along?
3.5 Conclusion
The question of whether machines can be creative
may not have a simple yes-or-no answer. But perhaps
that is because the question itself is too narrow.
AI has pushed us to look more carefully at
something we often take for granted: what creativity
actually means. We tend to associate it with originality,
imagination, and uniquely human insight. Yet the
process of creation is often less mysterious and more
relational—it frequently grows through connections,
transformations, and experimentation.
Seen this way, generative AI is interesting not
because it settles the question of machine creativity,
but because it complicates it. Its ability to produce
novel ideas, unexpected combinations, and useful
inspiration challenges the assumption that creativity
belongs entirely outside computation.
At the same time, AI also reminds us that creativity
is more than novelty alone. Meaning, purpose, and
human experience still shape much of what we value in
creative work.
Perhaps, then, the real significance of AI is not
that it has become creative in the human sense, but that
it has become a mirror—forcing us to examine what
originality, imagination, and invention have always
involved.
And maybe that is the deeper insight: The rise
of creative machines may be teaching us less about machines, and more about creativity itself.
References
AI can make you more creative—but it has limits
The Role of AI in Art Creation
Yes AI art is art
The Rise of AI Art: Creativity or Automation?
Mastering Creativity in Data Science & AI
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical
Informatics. Her interests include applying basic
scientific research on computational linguistics,
practical applications of human language technology,
and interdisciplinary work in computational physics.
Part III
Astronomy and Astrophysics
Plasma in the Interstellar Medium
by Abishek P S
airis4D, Vol.4, No.5, 2026
www.airis4d.com
1.1 Introduction
The interstellar medium (ISM) is far more complex
than just empty space sprinkled with gas and dust. It is a
dynamic environment made up of several distinct phases,
each with different temperatures, densities, and states of
ionisation. For example, cold molecular clouds, dense
and rich in hydrogen molecules, are the birthplaces of
stars, while hot ionised regions, often created by massive
stars, glow brightly as nebulae. Between these extremes
lie warm neutral and ionised gases that fill much of the
galactic disk. The ISM is constantly shaped by stellar
winds, radiation, and supernova explosions, which stir
and enrich it with heavier elements like carbon, oxygen,
and iron. This recycling process means that every new
generation of stars is formed from material that has
been processed by earlier stars, linking stellar evolution
directly to galactic evolution. In short, the ISM is not
just a passive backdrop; it is the active stage on which
the drama of star birth, life, and death unfolds, driving
the ongoing transformation of galaxies.
1.2 Plasma in the Interstellar Medium
Plasma in the interstellar medium is a fundamental
state of matter that shapes the physics of galaxies. It
arises primarily through ionisation, where ultraviolet
photons from hot stars, cosmic rays accelerated by
supernovae, or shock waves strip electrons from atoms.
This process produces a mixture of free electrons and
ions, creating a highly responsive medium that behaves
very differently from neutral gas. Ionisation is not
uniform across the ISM; dense molecular clouds remain
mostly neutral, while diffuse regions are highly ionised,
making the ISM a patchwork of environments where
ionisation and recombination compete dynamically.
Because plasma is electrically conductive, it
couples strongly with galactic magnetic fields.
This magnetisation means that plasma dynamics
are governed not by simple fluid mechanics
but by magnetohydrodynamics (MHD), where
electromagnetic forces play a central role. Even though
galactic magnetic fields are relatively weak, they exert
significant influence over the diffuse plasma, guiding
charged particles, shaping cosmic ray propagation,
and channelling flows of ionised gas. Magnetised
plasma supports waves and turbulence, redistributing
energy and momentum across vast scales[1]. This
coupling explains filamentary structures in nebulae, the
confinement of cosmic rays, and the regulation of star
formation.
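A convenient way to quantify this coupling, not taken from the article but standard in plasma astrophysics, is the plasma beta parameter, the ratio of thermal to magnetic pressure; in much of the diffuse ISM the two are thought to be comparable, which is why even a weak field matters dynamically:
\[
\beta \;\equiv\; \frac{P_{\mathrm{gas}}}{P_{\mathrm{mag}}} \;=\; \frac{n k_B T}{B^2/8\pi} \;\sim\; 1
\]
(an order-of-magnitude statement for typical diffuse-ISM conditions; the exact value varies strongly from region to region).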
Plasma is not a minor component of the cosmos
but the dominant one. Over 99% of visible matter in the
universe exists in plasma form, and the ISM exemplifies
this dominance. Hot ionised plasma fills galactic halos
and bubbles carved by supernovae, while even cooler
regions remain partially ionised. Plasma connects
local galactic environments to the larger cosmic web,
where intergalactic plasma forms filaments that structure
the universe itself. This dominance underscores why
plasma physics is central to astrophysics: understanding
plasma in the ISM is essential for modelling galaxy
evolution, star formation, and the transport of energy
and matter across cosmic scales.
1.3 Multi-phase Plasma Structure
The interstellar medium is a multi-phase plasma
structure, where different temperature and density
regimes coexist and interact dynamically. This
framework is essential because it explains how energy
and matter circulate through galaxies, shaping their
evolution.
The Warm Ionised Medium (WIM) is one of the
most pervasive plasma phases. It consists of diffuse
ionised hydrogen at temperatures around 8,000 K, filling
much of the galactic disk. The WIM is maintained by
ultraviolet radiation from hot stars, which keeps the
gas ionised over large volumes[2]. The WIM is crucial
because it represents the baseline ionised environment
in which stars and cosmic rays interact, and it serves as
a reservoir of material that can cool and condense into
denser structures. The Hot Ionised Medium (HIM) is
created by supernova explosions, which inject enormous
amounts of energy into the ISM. With temperatures
reaching about 10^6 K, the HIM is extremely diffuse
but occupies large cavities known as “superbubbles.”
This plasma emits strong X-ray radiation, making it
observable by space telescopes. Researchers view the
HIM as a key feedback mechanism: it regulates star
formation by heating surrounding gas, drives turbulence,
and enriches the ISM with heavy elements produced in
stellar interiors. In contrast, the Cold Neutral Medium
(CNM) is not plasma but coexists with plasma phases. It
consists of cooler atomic hydrogen at temperatures near
100 K, often forming dense molecular clouds. These
clouds are the birthplaces of stars, and their interactions
with surrounding plasma phases are central to the star-
formation cycle. CNM cannot be studied in isolation,
because shocks, radiation, and turbulence from plasma
phases constantly reshape its boundaries and trigger
collapse into new stars.
Importantly, these phases are not isolated
compartments. They exchange energy and matter
through shocks, turbulence, and radiation. Supernovae
heat cold gas into plasma, while cooling processes
allow hot plasma to condense back into neutral clouds.
Turbulence mixes material across boundaries, and
radiation ionises neutral gas at the edges of molecular
clouds. This dynamic interplay is what makes the ISM a “living system”, continuously recycling matter and energy.
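To make the coexistence of these phases concrete, the short Python sketch below estimates the thermal pressure P/k_B = nT for each phase, using the temperatures quoted above and number densities that are assumed, textbook-level values rather than measurements from this article; the rough similarity of the results illustrates why the phases can coexist in approximate pressure balance.

# Rough thermal-pressure comparison for the ISM phases (illustrative values only).
# Temperatures follow the text; the number densities are assumed order-of-magnitude
# values typical of the literature, not results quoted in this article.
phases = {
    "CNM (cold neutral medium)": {"T_K": 1.0e2, "n_cm3": 30.0},
    "WIM (warm ionised medium)": {"T_K": 8.0e3, "n_cm3": 0.2},
    "HIM (hot ionised medium)":  {"T_K": 1.0e6, "n_cm3": 3.0e-3},
}

for name, p in phases.items():
    pressure_over_k = p["n_cm3"] * p["T_K"]  # P / k_B in K cm^-3
    print(f"{name:28s} P/k_B ~ {pressure_over_k:8.0f} K cm^-3")

All three estimates come out within a factor of a few of 10^3 K cm^-3, roughly the ballpark usually quoted for pressure balance in the local ISM.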
1.4 Physical Processes in Interstellar
Medium Plasma
The interstellar medium is governed by a set of key
physical processes that continually reshape its plasma
environment and regulate the life cycle of galaxies.
These processes, such as supernova feedback, cosmic
ray interactions, radiative cooling, and turbulence, are
not isolated phenomena but interconnected mechanisms
that together determine the structure, dynamics, and
evolution of the ISM.
Supernova feedback is perhaps the most dramatic
driver of change in the ISM. When massive stars
explode, they inject vast amounts of energy into their
surroundings, producing shock waves that heat and
ionise gas. These shocks carve out superbubbles of
plasma, which can extend hundreds of parsecs across the
galactic disk. Supernova feedback is essential because it
prevents runaway star formation by dispersing cold gas,
while simultaneously enriching the ISM with heavy
elements synthesised in stellar interiors[3]. It is a
feedback loop: stars form from the ISM, and their
deaths reshape the very medium that gave them birth.
Cosmic ray interactions add another layer of
complexity. High-energy particles accelerated in
supernova shocks scatter off plasma, influencing the
pressure balance of the ISM. We note that cosmic rays
can travel long distances, coupling different regions
of the galaxy and even escaping into intergalactic
space. Their interactions with plasma affect transport
processes, magnetic field dynamics, and the heating of
otherwise cold regions [4]. This makes cosmic rays not
just a byproduct of stellar explosions but an active agent
in galactic evolution.
Radiative cooling is the counterbalance to heating.
Plasma loses energy through line emission, such as hydrogen recombination lines and transitions of metal ions like oxygen and carbon. However, cooling in the
ISM is highly non-linear and often out of equilibrium.
Studies highlight that cooling rates depend strongly
on density, metallicity, and ionisation state, meaning
that plasma can cool rapidly in some regions while
remaining hot in others[3]. This uneven cooling creates
the multi-phase structure of the ISM, where hot, warm,
and cold components coexist and interact.
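One way to see why cooling is so uneven, using a standard textbook estimate rather than anything specific to this article, is the cooling-time relation obtained by dividing the thermal energy density by the radiative loss rate, with the cooling function Λ(T) encoding the dependence on metallicity and ionisation state:
\[
t_{\mathrm{cool}} \;\sim\; \frac{\tfrac{3}{2}\, n k_B T}{n_e n_H\, \Lambda(T)} ,
\]
so at fixed temperature a region twice as dense cools roughly twice as fast, and metal-rich gas (larger Λ) cools faster still; the very diffuse interiors of superbubbles therefore stay hot while denser neighbouring gas condenses.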
Turbulence permeates the ISM, cascading energy
across scales from kiloparsec-sized superbubbles
down to parsec-sized clouds. Plasma turbulence is
magnetised, meaning that magnetic fields guide the flow
of energy and create anisotropic structures. Turbulence
is a critical regulator of star formation: it can support
clouds against collapse, fragment them into smaller
structures, or trigger collapse in localised regions[1].
Turbulence also mixes chemical elements, ensuring that
enrichment from supernovae is distributed throughout
the galaxy.
1.5 Observational Evidence
The study of plasma in the interstellar medium
relies heavily on observational evidence across multiple
wavelengths, since plasma itself is invisible in ordinary
light. We have developed sophisticated techniques to
probe its properties, each revealing different aspects of
the ISM’s complex structure and dynamics.
X-ray astronomy has been particularly
transformative in uncovering the presence of
hot plasma. Supernova remnants and galactic halos
emit strongly in X-rays, allowing astronomers to
measure temperatures of around a million degrees
Kelvin. These observations reveal the existence of
superbubbles carved by stellar explosions, as well
as diffuse hot gas filling galactic halos[5]. X-ray
data provide direct evidence of supernova feedback
and the large-scale circulation of energy in galaxies,
confirming theoretical models of plasma heating and
enrichment. Radio astronomy offers another window
into plasma physics, especially through the detection
of synchrotron radiation. This radiation is produced
when relativistic electrons spiral around magnetic
field lines, highlighting the intimate connection
between plasma and galactic magnetism. Radio
observations map the structure of magnetic fields
across galaxies and trace cosmic ray interactions with
plasma. Researchers use these data to study turbulence,
magnetic confinement, and energy transport, all of
which are critical for understanding how plasma shapes
galactic evolution. Optical and ultraviolet spectroscopy
provide complementary insights by identifying
ionisation states and plasma cloud densities. Emission
and absorption lines reveal the chemical composition,
ionisation balance, and physical conditions of the
ISM. For example, hydrogen recombination lines
indicate regions of active ionisation, while metal ion
transitions trace cooling processes. We rely on these
spectra to quantify plasma densities, measure ionisation
fractions, and test models of radiative cooling and
non-equilibrium processes.
Together, these multi-wavelength observations
form a coherent picture of the ISM as a dynamic plasma
environment. Each technique probes a different regime:
X-rays capture the hottest plasma, radio waves trace
magnetic interactions, and optical/UV spectra reveal
ionisation and cooling. The synergy of these methods
is essential because plasma cannot be studied with
ordinary visible light alone. This multi-wavelength
approach is the cornerstone of modern astrophysics,
enabling us to piece together the life cycle of plasma in
galaxies and its role in shaping cosmic evolution[6].
1.6 Challenges
The study of plasma in the interstellar medium
presents several profound research challenges that
continue to push the boundaries of astrophysics. One of
the most significant difficulties lies in non-equilibrium
ionisation. Unlike laboratory plasmas, which can
often be approximated as being in equilibrium, the
ISM is subject to constant disturbances from radiation,
shocks, and cosmic rays. This means that ionisation and
recombination processes do not balance neatly, and the
plasma often exists in transient states. This complicates
modelling efforts, as standard equilibrium assumptions
fail to capture the ISM’s true behaviour. Instead,
complex time-dependent simulations are required to
track how ionisation evolves under varying conditions,
making the study of ISM plasma both computationally
intensive and conceptually challenging.
Another major obstacle is the issue of coupled
physics. Plasma in the ISM cannot be understood within
a single discipline; it requires integrating atomic physics,
radiation transfer, hydrodynamics, and magnetic field
dynamics. Each of these areas introduces its own
complexities. Atomic physics governs ionisation and
cooling processes, radiation transfer dictates how energy
moves through the medium, hydrodynamics describes
fluid-like behaviour, and magnetic fields add anisotropy
and turbulence through magnetohydrodynamics (MHD).
Therefore, we build models that combine these diverse
physical processes into a unified framework. This
coupling is not only mathematically demanding but
also requires cross-disciplinary expertise, making ISM
plasma research a highly collaborative field.
The scale problem further complicates matters.
Processes in the ISM range from microscopic
electron collisions to galactic-scale phenomena such
as superbubbles created by supernovae. Capturing
this enormous range of scales in a single model
is nearly impossible with current computational
resources. Therefore we must rely on multi-scale
simulations, where small-scale physics is parameterised
and embedded within larger-scale models. Even then,
uncertainties remain, as small-scale processes such as
turbulence and cosmic-ray scattering can have outsized
effects on galactic evolution. The challenge lies in
bridging these scales without losing essential physical
detail, a task that continues to drive innovation in
computational astrophysics.
1.7 Relevance
The importance of plasma in the interstellar
medium lies in its role in regulating the fundamental
processes that shape galaxies. One of the most critical
roles plasma plays is in regulating star formation.
Molecular clouds, the birthplaces of stars, are subject to
the pressures and turbulence generated by surrounding
plasma. If plasma pressure is too high, it can
prevent these clouds from collapsing under gravity,
delaying or suppressing star formation[5]. Conversely,
turbulence within plasma can fragment clouds, creating
localised regions where collapse becomes possible. We
emphasise that this delicate balance between plasma
support and gravitational collapse determines the rate
of star formation, making plasma a key regulator of
galactic growth.
Plasma also functions as the backbone of the
galactic ecosystem, transporting energy and metals
across vast distances. When stars die in supernova
explosions, they release heavy elements such as carbon,
oxygen, and iron into the ISM. Plasma carries these
enriched materials throughout the galaxy, ensuring
that subsequent generations of stars and planets are
chemically diverse. In addition, plasma distributes
energy from stellar winds and supernovae, maintaining
the multi-phase structure of the ISM. Studies view this
transport as a vital feedback mechanism: it links the
life cycles of individual stars to the broader dynamics
of the galaxy, creating a self-sustaining system in which
matter and energy are constantly recycled.
On an even larger scale, plasma in the ISM is
central to cosmic evolution. Galaxies are not static
structures; they evolve over billions of years through
cycles of star birth, death, and renewal. Plasma is the
medium through which these cycles occur, connecting
small-scale processes like ionisation and turbulence
to large-scale phenomena such as galactic winds and
halo formation. By studying ISM plasma, researchers
gain insights into how galaxies recycle matter, regulate
star formation, and interact with the intergalactic
medium. This knowledge is crucial for understanding
the evolution of the universe itself, since plasma is the
dominant state of visible matter across cosmic scales.
References
[1] Ferrière, K. (2020). “Plasma turbulence in the interstellar medium”. Plasma Physics and Controlled Fusion, 62(1), 014014.
[2] Reynolds, R. J. (1992). “The warm ionized medium”. AIP Conference Proceedings (Vol. 278, No. 1, pp. 156-165). American Institute of Physics.
[3] Klessen, R. S., & Glover, S. C. (2015). “Physical processes in the interstellar medium”. In Star Formation in Galaxy Evolution: Connecting Numerical Models to Reality, Saas-Fee Advanced Course 43, Swiss Society for Astrophysics and Astronomy (pp. 85-249). Berlin, Heidelberg: Springer Berlin Heidelberg.
[4] Spitzer Jr, L. (2008). “Physical processes in the interstellar medium”. John Wiley & Sons.
[5] Maciel, W. J. (2013). “Astrophysics of the interstellar medium” (Vol. 1). New York: Springer.
[6] Lequeux, J. (2005). “The interstellar medium”. Berlin, Heidelberg: Springer Berlin Heidelberg.
About the Author
Abishek P S is a Research Scholar in
the Department of Physics, Bharata Mata College
(Autonomous), Thrikkakara, Kochi. He pursues research in the field of Theoretical Plasma Physics. His work mainly focuses on Nonlinear Wave Phenomena in Space and Astrophysical Plasmas.
Astronomy at Scale
by Ajay Vibhute
airis4D, Vol.4, No.5, 2026
www.airis4d.com
2.1 Growth of Astronomical Data
Volumes
Over the past few decades, astronomy has
undergone a dramatic transformation driven by the
exponential growth of data volumes. Early digital
surveys produced datasets measured in megabytes
or gigabytes, but modern observatories routinely
generate terabytes of data per night, with total archives
reaching petabyte scales. This growth is fueled
by advances in detector technology, wider fields of
view, higher temporal resolution, and multi-wavelength
observational capabilities.
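As a purely illustrative example (the numbers are assumed, not taken from any specific facility), a survey recording 20 terabytes per clear night accumulates of order 20 TB × 300 nights ≈ 6 petabytes of raw data per year, before calibration products, catalogues, and other derived data are counted.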
Large-scale projects such as wide-field sky surveys
and radio interferometers continuously scan the sky,
producing repeated measurements of billions of
sources. Time-domain astronomy further amplifies
data volume by capturing transient and variable
phenomena, requiring frequent observations and rapid
data processing. As a result, astronomy has become
a data-intensive science, where the scale of data is
comparable to that encountered in fields such as particle
physics or climate science.
This increase in volume is not merely quantitative
but also qualitative. Modern datasets are richer,
combining spatial, spectral, and temporal information,
often with complex metadata describing observational
conditions and calibration parameters. Managing
and extracting knowledge from such datasets requires
fundamentally new computational approaches.
2.2 Why Manual Analysis Failed
The traditional model of manual data
analysis—where astronomers visually inspected images
or individually processed small datasets—became
untenable as data volumes increased. Even a single
modern survey can contain more images than a
human could examine in a lifetime, rendering manual
inspection impractical for all but a tiny subset of data.
Human-driven analysis is also limited by
subjectivity and inconsistency. Different researchers
may interpret the same data differently, leading
to potential biases and reduced reproducibility.
Furthermore, manual methods are inherently slow and
cannot keep pace with the continuous data streams
produced by modern instruments, particularly in time-
sensitive applications such as transient detection.
As datasets grew, it became clear that automation
was not simply a convenience but a necessity.
Algorithms replaced human inspection for tasks
such as source detection, classification, and anomaly
identification. Machine learning techniques, in
particular, have enabled the rapid analysis of large
datasets, identifying patterns and rare events that would
be difficult or impossible for humans to detect unaided.
2.3 Data-Intensive Astronomy
Data-intensive astronomy represents a shift in
how scientific discovery is conducted. Instead of
focusing on individual objects, researchers increasingly
analyze large populations, searching for statistical
trends, correlations, and rare outliers across vast
datasets. This approach enables new types of science,
such as mapping the large-scale structure of the universe,
studying the evolution of galaxies over cosmic time,
and identifying transient phenomena like supernovae
or gravitational wave counterparts.
The scale and complexity of these datasets
require advanced computational techniques, including
distributed computing, parallel processing, and machine
learning. Algorithms must be designed not only for
accuracy but also for scalability, ensuring that analyses
remain feasible as data volumes continue to grow.
In this paradigm, data are often processed
in automated pipelines that perform tasks such as
calibration, source extraction, feature measurement,
and classification. These pipelines must be robust,
reproducible, and capable of handling heterogeneous
data from multiple instruments and observing modes.
Data-intensive astronomy thus blurs the line between
astronomy and data science, integrating methods from
statistics, computer science, and applied mathematics
into the core of astronomical research.
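The toy Python sketch below illustrates the shape of such a pipeline; the function names, the global-threshold detector, and the flux-based labels are all invented for illustration and are not the algorithms used by any particular survey.

import numpy as np

def calibrate(raw, bias=0.0, flat=1.0):
    # Toy calibration: subtract a bias level and divide by a flat field.
    return (raw - bias) / flat

def detect_sources(image, nsigma=5.0):
    # Toy detection: flag pixels more than nsigma above the median background.
    background, noise = np.median(image), np.std(image)
    ys, xs = np.where(image > background + nsigma * noise)
    return list(zip(ys.tolist(), xs.tolist()))

def measure_features(image, positions, box=2):
    # Toy photometry: summed flux in a small box around each detection.
    feats = []
    for y, x in positions:
        cut = image[max(y - box, 0): y + box + 1, max(x - box, 0): x + box + 1]
        feats.append({"y": y, "x": x, "flux": float(cut.sum())})
    return feats

def classify(features, bright_flux=10.0):
    # Toy classification: label detections by total flux.
    return [dict(f, label="bright" if f["flux"] > bright_flux else "faint") for f in features]

# Run the stages end to end on a synthetic frame with one injected point source.
rng = np.random.default_rng(0)
raw = rng.normal(100.0, 1.0, size=(64, 64))
raw[32, 32] += 25.0
image = calibrate(raw, bias=100.0)
catalog = classify(measure_features(image, detect_sources(image)))
print(catalog)

In production systems each of these stages is far more elaborate, is logged and versioned for reproducibility, and runs in parallel over many frames, but the chained structure is the same.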
2.4 Storage, Access, and Data
Movement
Handling astronomical data at scale presents
significant challenges in storage, access, and data
movement. Petabyte-scale archives require distributed
storage systems that ensure reliability, redundancy, and
long-term preservation. Efficient indexing and query
systems are essential for enabling researchers to locate
and retrieve relevant subsets of data without scanning
entire datasets.
Data access models have evolved to address these
challenges. Instead of downloading large datasets
locally, researchers increasingly analyze data in situ,
using remote computing resources located near the
data archives. Cloud computing and high-performance
computing (HPC) facilities provide the infrastructure
needed to process large datasets efficiently.
Data movement is another critical consideration.
Transferring large volumes of data across networks can
be time-consuming and costly, necessitating careful
optimization and, in some cases, the development
of specialized data-transfer protocols. Minimizing
unnecessary data movement by bringing computation
to the data has become a key design principle in modern
astronomical computing.
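As a small, self-contained illustration of the narrower point that analyses should touch only the data they need, the Python sketch below writes a toy on-disk array (standing in for a large image archive; the file name and sizes are arbitrary) and then reduces it chunk by chunk with numpy's memory mapping, so the full array is never loaded, or moved, in one piece.

import numpy as np

# Create a toy on-disk array standing in for a large archive (illustration only).
archive_path = "toy_archive.dat"
data = np.memmap(archive_path, dtype=np.float32, mode="w+", shape=(2048, 2048))
data[:] = 1.0
data.flush()
del data

# A consumer later reads and reduces the archive in row blocks instead of
# loading (or transferring) the whole array at once.
archive = np.memmap(archive_path, dtype=np.float32, mode="r", shape=(2048, 2048))
block = 256
total = 0.0
for start in range(0, archive.shape[0], block):
    total += float(archive[start:start + block].sum())  # only this block is paged in
print("summed pixel values:", total)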
2.5 Scientific Implications of Scale
The shift to large-scale data has profound
implications for scientific discovery. On one hand,
it enables unprecedented statistical power, allowing
researchers to detect subtle effects and rare phenomena
with high confidence. Large datasets also support cross-
disciplinary studies, combining observations across
wavelengths and time to build a more comprehensive
picture of the universe.
On the other hand, the scale of data introduces
new challenges. Systematic errors can propagate across
entire datasets, potentially biasing results if not properly
controlled. The complexity of analysis pipelines makes
it more difficult to fully understand how results are
derived, increasing the importance of transparency,
validation, and reproducibility.
Moreover, the reliance on automated methods
raises important questions about interpretability and
trust. Machine learning models, while powerful, can act as “black boxes”, making it difficult to understand the
reasoning behind their outputs. Ensuring that results are
physically meaningful and not artifacts of the analysis
process remains a central concern.
Ultimately, astronomy at scale requires a careful
balance between leveraging computational power and
maintaining rigorous scientific standards.
2.6 Summary
Astronomy has entered an era defined by data scale.
The rapid growth of observational data has transformed
the field from one centered on individual measurements
to one driven by large datasets and statistical analysis.
Manual methods have given way to automated pipelines
and machine learning, enabling the efficient extraction
of knowledge from vast and complex data.
At the same time, the challenges of storage,
access, and systematic error highlight the need
for robust computational infrastructure and careful
methodological design. As data volumes continue
to grow, the integration of computation into every
aspect of astronomy will deepen, reinforcing its role
as a foundational component of modern scientific
discovery.
About the Author
Dr. Ajay Vibhute is currently working
at the National Radio Astronomy Observatory in
the USA. His research interests mainly involve
astronomical imaging techniques, transient detection,
machine learning, and computing using heterogeneous,
accelerated computer architectures.
Black Hole Stories-26
Some Black Hole Mergers From
LIGO-Virgo-KAGRA Observing Run O4
by Ajit Kembhavi
airis4D, Vol.4, No.5, 2026
www.airis4d.com
In this story we will consider some interesting
binary mergers detected during the observing run
O4 of the LIGO-Virgo-KAGRA detectors. Our
description here will depend on the concepts developed
in describing O3 sources.
3.1 The Fourth Observing Run O4
The observing run O4 began on May 24, 2023 and
ended on November 18, 2025. O4 was divided into
three parts, with O4a from 24 May 2023 to 16 January
2024, O4b from 10 April 2024 to 28 January 2025, and
O4c starting directly after O4b on 28 January 2025, and
ending on 18 November 2025, with a commissioning
break between 1 April 2025 and 11 June 2025. There
have been 218 confident detections of merging binary
systems in the run O4a, with another 173 candidate
events during O4b and O4c, which remain to be
confirmed through detailed data analysis (see BHS-
22 for a more detailed discussion). This story will be
about a few interesting binary mergers detected during
O4.
GW231123: This source was detected during O4a by the two
aLIGO detectors on November 23, 2023. The Virgo and
KAGRA detectors were not observing at that time. The
event was observed over about five cycles, within a short
duration of about 0.1 s, but data from the two detectors
enabled the detection to be made with high confidence.
However, there was relatively large uncertainty in the
determination of the source parameters, with different
methods of analysis providing a range of values.
From the analysis, the primary has a mass of 137 (119, 160) Solar masses, where 137 is the median value obtained in the analysis, and the numbers in parentheses show the range in which the primary mass lies with 90 percent probability. We have indicated the ranges to show the
large uncertainty in the parameter values for this source.
With the same notation, the secondary has 101 (51,
123) Solar masses, with the total mass being 236 (189,
266) Solar masses. The final mass after the merger is
222 (180, 250) Solar masses. It was found that the primary
had dimensionless spin parameter 0.9 (0.71, 1.0) and
the secondary 0.80 (0.28, 1.0). The spin parameter of
the remnant is 0.84 (0.68, 0.90). The spin parameter is
defined in the description of GW190412 in BHS-25.
The distance to the source is 2.2 (0.7, 4.1) Gigaparsec.
It follows from the numbers above that GW231123 was a merger of two high mass black
holes with high spin. The total mass of the binary
is greater than that of the massive binary GW190521
described in BHS-25, making GW231123 the most
massive binary merger detected as of the end of 2025.
The high total mass, together with the high spin of the
two components, make analysis difficult because the
theoretical models which are used in the analysis are
not completely adequate. That leads to differing values
of parameters predicted by different models and with
resultant large uncertainty.
We have considered in BHS-25 a gap in the mass
of black holes in the region of 60–130 Solar masses.
Black holes produced through the evolution of massive
stars are not expected to have mass in this region. Given
the high mass of the primary and secondary components
of the binary which merged to form GW231123, it can
be argued that there is a significant probability that the
mass of the primary, or the secondary, or both are in the
mass gap region, so we need to understand how such
black holes can be produced. A few possibilities are:
(1) There can be changes in the details of the evolution of the progenitor massive star that avoid the formation of a pair instability supernova (PISN), allowing the core of the massive star to collapse to a black hole with mass in the gap; (2) the massive black hole binary components could have formed through earlier BH-BH mergers, which would produce high mass black holes with high spin; (3) the black hole mass could have increased significantly by accretion of gas in a suitable environment or in collisions with other stars. There are other possible processes for producing black holes in the mass gap, but all these processes need detailed study before their suitability can be established. The
discovery of other massive binaries in the existing data
or through future observations will help in establishing
viable processes.
All the gravitational wave detections made so far
have been analysed under the assumption that they
are mergers of compact binaries, with the components
being black holes or neutron stars, and the orbit having
low eccentricity, i.e., it is nearly circular. The same
assumption has been made in the case of GW231123
and it provides a good fit to the observed signal. But
given the large uncertainty in the parameter values, and
the small number of cycles on which the fit is based, it
is necessary to consider other possible ways to produce
the observed gravitational wave signal. Models with
high eccentricity may have to be considered, but it is
astrophysically unlikely that compact binaries with high
eccentricity can be produced through earlier mergers.
Some possible sources which could produce a burst of
gravitational waves of the duration observed are core-
collapse supernovae, cosmic strings and collisions of
compact objects made up of exotic particles like axions.
Some of these possibilities are wholly theoretical at the
present time, and the merger of massive black holes
in a binary system remains the most likely explanation
of the event. The alternative interpretations of the
gravitational wave data mentioned here also apply to
the massive binary GW190521, detected in O3.
GW250114: Detected on January 14, 2025, this merging binary has source parameters similar to those of the first ever gravitational wave detection GW150914. But the sensitivity of the LIGO detectors during O4 was much greater than during the first observing run O1, which took place ten years earlier. So, the observed signal for GW250114 is much larger than the noise in
the detector, which is expressed as a high signal-to-noise
ratio. This leads to a very clean signal from which firm
conclusions can be drawn about the theory of gravity
and black holes.
GW250114 was observed by both LIGO detectors,
while the Virgo and KAGRA were not operational.
The primary, secondary and total mass determined for the merging binary are m_1 = 33.6, m_2 = 32.2 and M_t = 65.8 Solar masses, respectively. The ratio of the secondary mass to the primary mass is 0.91. The two black holes before the merger have low spin. The mass of the remnant black hole after the merger is M_f = 62.7 Solar masses and it has a spin parameter S_f = 0.68.
The mass lost in the merger to gravitational waves is
3.1 Solar masses. These parameters are similar to
those of many black hole binaries detected earlier. The
novel element in this case is the high signal-to-noise
ratio, which makes possible increased accuracy in the
determination of the parameters.
The gravitational wave signal observed by the
aLIGO detector at Hanford from the first merging
binary GW150914 in 2015 (upper panel), and the signal observed by the upgraded, more sensitive version of the detector from GW250114 (lower panel) are shown in
Figure 1. Both binaries were located about 400 Mpc
away, and the black holes in the two binaries were of 30
to 40 Solar masses. The energy emitted in each merger
therefore was about the same. The vertical purple bars
indicate the data point for a given time as the event
progresses, with the height of the bar indicating the detected noise at that time.
Figure 1: The signal observed by the aLIGO detector at Hanford from the merging binary black holes GW150914 in 2015 (upper panel) and GW250114 (lower panel) in 2025. See the text for a detailed description.
The green line in each
panel is the best fit obtained for the data using a model
based on the general theory of relativity. The reduced
noise level and the smoother fit obtained in 2025 are
evident.
Soon after the merger the resultant black hole is
a distorted object, which rings down to its final state
of a Kerr black hole. The ringdown signal consists
of quasi-normal modes, which are oscillations which
damp down as the black hole settles to its final state.
The oscillation frequencies and the time over which the
damping occurs are determined by the mass and spin of
the final black hole. The dominant mode is quadrupolar
in nature, which is expected given that the two black
hole masses are nearly equal (see the discussion about
multipole moments for the O3 detection GW190412).
At late times, the ringdown can be represented by two
modes, an oscillation with a single frequency and its
first overtone, both of which are damped. The nature
of the oscillations including the existence of the two
modes closely resembles the expectation for a Kerr
black hole with the mass M_f and spin parameter S_f mentioned above.
The high quality data available for the GW250114
merger can be used to test the second law of black hole
mechanics, which was given by Stephen Hawking in
1971, and was also discussed earlier by others. This law
states that the area of the event horizon of a black hole
cannot decrease in time through any process; it must
remain constant or increase during any interactions that
the black hole undergoes. The law can only be violated
if certain general conditions are not met, and in some
theories of gravity other than Einstein's general theory
of relativity. For the black hole binaries we have been
considering, it follows from Hawking’s law that the
area of the remnant black hole must be greater than the
sum of the areas of the two black holes of the binary
before the merger. The total area before the merger can
be measured from the detected signal before the merger,
which is the part of the signal when the two black
holes are spiralling towards each other. The area of the
remnant can be measured from the ringdown part of the
signal after the merger is complete. Such measurements
exclude the signal from the most violent parts of the
merger. The analysis is based on the assumption that
the orbit of the black holes during the merger was
almost circular in shape, that the general theory of relativity
applies in the region where the measurements are made,
and that the Kerr solution provides a good description
of black holes. It is found that the results make it
highly probable that the area of the remnant was indeed
greater than the sum of the areas of the two black holes
before the merger. As of early 2026, the signal from
GW250114 is being analysed for further precise tests of
the general theory of relativity and the Kerr space-time.
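A minimal numerical sketch of the area comparison is given below. It uses the standard Kerr horizon-area formula and the parameter values quoted above for GW250114, with the component spins simply set to zero for illustration (the text describes them as low); this is an illustration of the bookkeeping only, not the collaboration's statistical analysis.

import math

G = 6.674e-11       # gravitational constant (SI units)
c = 2.998e8         # speed of light (m/s)
M_SUN = 1.989e30    # solar mass (kg)

def kerr_horizon_area(mass_msun, chi):
    # Kerr event-horizon area: A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2)).
    m_geo = mass_msun * M_SUN * G / c**2   # gravitational radius GM/c^2 in metres
    return 8.0 * math.pi * m_geo**2 * (1.0 + math.sqrt(1.0 - chi**2))

# Masses (in Solar units) and remnant spin quoted in the text; component spins assumed ~0 here.
area_before = kerr_horizon_area(33.6, 0.0) + kerr_horizon_area(32.2, 0.0)
area_after = kerr_horizon_area(62.7, 0.68)

print(f"total horizon area before merger: {area_before:.3e} m^2")
print(f"horizon area of the remnant:      {area_after:.3e} m^2")
print("area increased:", area_after > area_before)

With these illustrative numbers the remnant's area comes out roughly fifty per cent larger than the combined initial area, consistent with the qualitative statement in the text; the actual test of course has to propagate the full parameter uncertainties.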
GW241011 & GW241110: These two mergers
are similar in properties and possible origin so we treat
them together, as was done in the discovery research
publication. GW241011 was detected on October 11,
2024 by the LIGO detector at Hanford and by Virgo, while GW241110
was detected by the LIGO detectors at Hanford and
Livingston, and Virgo. For both sources the probability
for the detection to be a false alarm is very low, and the
signal is strong so that accurate parameter determination
is possible.
For GW241011, in units of Solar masses, the
primary black hole mass is 19.6, the secondary black
hole mass 5.9 and the ratio of the secondary mass to
primary mass is 0.3. The novel element for this source
is the spin of the primary which has been precisely
determined to have the large value 0.78 in dimensionless
units (see the discussion for GW190412 for a definition
of the dimensionless spin). This value makes the
primary the fastest rotating black hole detected by early
2026. There is an angle of about 31 degrees between
29
3.1 The Fourth Observing Run O4
the direction of the spin vector of the primary and the
direction of the angular momentum of the binary system.
It is not possible to determine the spin of the secondary
from the data.
In the case of GW241110, in Solar units, the
primary mass is 17.2, the secondary mass 7.7, the mass
ratio is 0.45 and the dimensionless spin of the primary is
0.61. The angle between the spin vector of the primary
and the orbital angular momentum is 133 degrees, so
that in this case there is a large misalignment between
the two directions. This is the first black hole binary
known in which the spin is so misaligned with the
angular momentum. The mass and spin values are
rather similar for the two mergers we are considering
here.
The favoured explanation for the high spin is that
the two black hole binaries are formed from black holes
which have themselves formed out of earlier mergers.
The scenario here is that massive black holes are formed through mergers of black hole binaries in the dense environment of the central region of a star cluster. Such remnant black holes have high mass and also high spin, which arises from the angular momentum of the binary that merged. In fact, the spin parameter of the remnants of the binary mergers observed so far is clustered around a value of about 0.7. The binaries later formed from such black holes would have a spinning primary and unequal mass components, as has been observed in the case of the two binary mergers we are considering here. Hierarchical merging therefore provides a natural
explanation for the formation of such binaries. The
hierarchical formation mechanism is shown in Figure 2.
As explained in the case of the observing run O3
merger GW190412, the most dominant component of
gravitational wave radiation is quadrupole in nature.
As the secondary to primary mass ratio becomes
significantly less than 1, higher multipoles contribute
to the radiation, and can be detected if the signal is
strong enough. In the case of the two binary mergers
GW241011 and GW241110, the mass ratios are 0.3
and 0.45 respectively and the signal is loud, so higher
moments can be detected. In addition, the high spin
of the primary component of the two sources allows other tests to be performed.
Figure 2: The hierarchical merger route for the formation of black hole binaries. The formation takes place in the dense central environment of a large star cluster, where there are frequent encounters between black holes and other constituents of the cluster. On the left is shown how GW241011 could have formed. In the first generation at the bottom, a 13.3 Solar mass black hole merges with a 7.5 Solar mass black hole. The effective spin of the system is 0.23. These parameters are possible values derived from detailed modelling and can change with the model. The merger produces a black hole remnant which has the mass and spin observed for the primary component of the second generation binary. The remnant also has a velocity which is estimated to be 750 km/s, but with large uncertainty. The new remnant captures another black hole with mass 5.9 Solar masses, to produce the next generation binary, whose merger we observe as GW241011. On the right of the figure a similar process for the formation of GW241110 is shown.
The spin of the black hole
contributes to the quadrupole moment, and there is a
specific relation between the two for a Kerr black hole.
From the gravitational wave data, it is verified that this
relation holds accurately, establishing the validity of
the Kerr solution and the general theory of relativity in
this case. The measurements are good enough to rule out the possibility that the spinning compact object is an exotic object like a boson star rather than a Kerr black hole.
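For reference, the no-hair relation being tested can be written, in geometrized units with G = c = 1 and the usual sign convention, as
\[
Q_{\mathrm{Kerr}} = -\chi^2 M^3 ,
\]
so that an independent measurement of the quadrupole moment, the mass and the spin from the waveform checks whether the object is consistent with a Kerr black hole.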
GW230529: This source was observed on 29
May 2023 by the aLIGO detector at Livingston alone.
The aLIGO detector at Hanford and Virgo were offline,
and the sensitivity of the KAGRA detector was not
enough to contribute to the observation.
This is a low mass binary merger, with the primary,
secondary and total mass being 3.6, 1.4 and 5.1 Solar
masses, and a secondary to primary mass ratio of 0.39.
The spin parameter of the primary is 0.44 and the source
distance is 201 Megaparsec. The mass of the primary
means that it could be a black hole, or an unusually
massive neutron star. The mass of the primary puts
it in the region of (3–5) Solar masses, which is a known low mass gap in which neither neutron stars nor
black holes are found. The primary could be a black
hole in the gap, or an unusually massive neutron star,
again in the gap. The mass of the secondary, including
uncertainties in the mass measurements, puts it in the
known range of neutron star masses. If either or both
components are neutron stars, then it is possible that
the tidal distortions produced in them could be
detected in the gravitational wave signal. But no sign of
such distortion has been found in the primary, and the
data is not sufficient to constrain any deformations in the
secondary. The overall evidence points to the merger
being that of a black hole and a neutron star. With
one component being a neutron star, it is possible for
electromagnetic signals to be emitted by the disruption
of the neutron star during the merger. However, no
such electromagnetic counterpart has been found. The
gravitational wave signal was detected by only one
detector, so the position of the source in the sky was
very poorly determined. In such a case it is very
difficult to associate an electromagnetic counterpart
like a Gamma-ray burst with the gravitational wave
detection.
GW230529 is the first merging binary detected in
which the primary candidate is very likely a black hole
with mass in the lower mass gap region (we have discussed in BHS-25 the gap at larger masses, in the 60–130 Solar mass range). This provides clues to the origin of BH-NS binaries, but the possibilities are still open as of early
2026.
3.2 Compact Object Masses Found in
the Four Observing Runs:
Here we repeat a diagram and comments from
BHS-22, in the context of the present story and BHS-
25.
In the observing runs O1, O2, O3, which included
a total observing period of 23 months, there were 90
gravitational wave source detections. In O4, more than 200 candidate sources have been discovered by the end of the run in November 2025, over an observing period again of 23 months.
Figure 3: In this plot are shown the numbers and masses of the black holes and neutron stars discovered as components and products of merging binaries by the LVK collaboration. Also shown are black holes and neutron stars discovered through electromagnetic means. See text for details. Image Credit: LIGO-Virgo-KAGRA | Aaron Geller | Northwestern University.
Some of the candidate sources
have been confirmed to be merging black hole binaries,
while the rest are being studied in detail to confirm
them as valid sources or to reject them.
In Figure 3 are shown the numbers and masses
of black holes and neutron stars discovered through
gravitational wave detections and electromagnetic
means. Black holes are shown mainly as blue pairs,
connected by a blue line. These are components of
binary black hole systems which are merging and are
observed as gravitational wave sources (GW). The blue
line extends to the black hole which is the product
of the merger of the two components. The mass of
each black hole is indicated on the vertical axis. The
orange dots are neutron stars which are components of
a binary neutron star system, or of a black hole–neutron
star binary, which have been detected by LIGO. The
yellow dots represent neutron stars discovered through
electromagnetic means (EM). Such EM detection is
possible (1) from the X-ray emission by a binary system
of which a neutron star is a component, the other
component being a star from which the neutron star
is accreting matter or (2) from a binary neutron star
when one (or both) component is a radio pulsar. The
pink dots are black holes discovered through their being
the compact component of an X-ray binary system, the
other component again being a star from which the
black hole accretes matter (see BHS-1 for some details).
It is seen from Figure 3 that the mass ranges of the GW and EM neutron stars are the same, extending over
1-2 Solar masses. While there is an overlap between
GW and EM black hole mass ranges, the GW black
hole masses extend to significantly higher values. The
EM black holes are the end products of the evolution
of stars with mass greater than about 25 Solar masses.
They are the compact objects left after the supernova
explosion which occurs at the end of the evolution of
the massive star. This explanation can apply to the
formation of the less massive GW black holes, with the
added complication that binary black hole formation
has to take place without disrupting the binary of which
they are the components. As for the more massive
GW black holes, there are difficulties in explaining
their origin as end products of stellar evolution, and
other mechanisms like hierarchical mergers have to be
considered, as described above.
The black hole binary mergers discovered in O1 and O2 had similar parameters. There was one gravitational wave detection of a spectacular binary neutron star merger observed in O2, with accompanying electromagnetic radiation. In later runs there have been
a few other mergers identified as NS-NS and BH-NS,
but no electromagnetic counterpart has been identified
in these cases. That has primarily been because of the
difficulty in localising the position of the merger to a
small enough area of the sky which could be searched
for the counterpart. This situation will change when
LIGO-India becomes operational around 2030. With
the additional detector with high sensitivity operating at
a distance from the three detectors presently in operation,
localisation will improve significantly, leading to further
identifications.
As the sensitivity of the gravitational wave
detectors has improved over the four runs, a variety
of mergers have been detected, some of which we have
captured in the descriptions of individual sources from
the later runs. It is clear that as the sensitivity improves
further, and new generations of detectors become
available on the ground and in space, we will make
rich discoveries which will provide us insights into the
formation, evolution and final moments of these sources.
It is startling to imagine that so many of these very
massive binaries are present in the Universe, which we
will only ever detect through their gravitational radiation,
mostly in the last few moments of their evolution.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investigator
of the Pune Knowledge Cluster. He is a former director of the Inter University Centre for Astronomy and Astrophysics (IUCAA), Pune, and a former vice president of the International Astronomical Union. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
X-ray Astronomy: Theory
by Aromal P
airis4D, Vol.4, No.5, 2026
www.airis4d.com
4.1 Introduction
In the previous article, we discussed X-ray
emission from accretion disks in X-ray binaries and
compared its efficiency to that of nuclear fusion. We
concluded by asking whether the accretion disk is the
only source of X-ray emission in X-ray binaries. In
today’s article, we will focus on a specific type of X-
ray emission found in Neutron Star Low-Mass X-ray
Binaries: the thermonuclear X-ray burst. As a personal
note, my research centers on the observational studies
related to this phenomenon.
4.2 Thermonuclear X-ray burst
We have already discussed how accretion disks
form in X-ray binaries. In neutron star X-ray binaries,
if the neutron star is relatively old and has a weaker
magnetic field, the accreted material will eventually be
deposited onto its surface. If this accumulated matter
undergoes an unstable nuclear reaction, it can burn
instantaneously due to nuclear processes, leading to a
thermal runaway.
This thermal runaway happens because the
accreted hydrogen and helium fuel experiences intense
hydrostatic compression as it continues to pile onto
the surface of the neutron star. Within hours or
days, the accumulating material reaches extreme
ignition temperatures and densities. The core physical
mechanism driving this violent eruption is known as
the thin-shell instability.
Since the burning layer is exceptionally thin, only
a few meters deep compared to the neutron star's typical
radius of around 10 kilometers, the initial nuclear
heating causes the shell to expand, but this is insufficient
to reduce the local pressure and cool the region.
Additionally, in this dense environment, the electrons
are mildly degenerate, meaning their pressure does
not significantly depend on temperature. This further
prevents the expanding gas from effectively dissipating
the heat generated. As a result, the temperature spikes
dramatically, accelerating nuclear reaction rates and
triggering a localized thermonuclear runaway.
The specific dynamics of this thermonuclear
runaway depend heavily on the mass accretion rate
and the chemical composition of the infalling material.
At relatively low accretion rates, hydrogen burns
unstably via the Carbon-Nitrogen-Oxygen (CNO) cycle,
which can subsequently trigger helium ignition. At
intermediate mass accretion rates, hydrogen burns
continuously and stably via the hot CNO cycle,
completely exhausting itself and gradually building
up a dense, pure helium layer beneath it. Once critical
conditions are met, this pure helium layer violently
detonates via the triple-alpha process, producing a
short, intense X-ray burst that typically lasts around
ten seconds. At even higher accretion rates, the
conditions for helium ignition are met much faster,
before the hydrogen has fully burned, resulting in a
mixed hydrogen and helium flash. The initial helium
ignition generates extreme temperatures that trigger
breakout reactions, bypassing the standard CNO cycle
and initiating the rapid-proton (rp) process. During
the rp-process, a rapid succession of proton captures
and slower beta decays synthesizes heavier elements,
significantly extending the energy release and creating a burst tail that lasts for tens to hundreds of seconds.
Figure 1: Graphical representation of the occurrence of a thermonuclear X-ray burst.
The ignition itself is highly complex; rather than
erupting uniformly across the entire sphere, the runaway
typically sparks at a localized point, often near the
equator, where the effective surface gravity is slightly
reduced by the star’s rapid rotation. From this ignition
point, a thermonuclear flame front propagates laterally
across the neutron star, engulfing the entire surface in
approximately one second. In the most powerful of these
events, the extreme local luminosity can temporarily
exceed the Eddington limit, causing the outward
radiation pressure to overcome the immense inward
gravitational pull. When this threshold is breached,
the outermost layers of the neutron stars photosphere
are physically lifted off the surface and driven outward,
creating a Photospheric Radius Expansion (PRE) burst.
As the thermonuclear fuel is exhausted, the lifted
photosphere eventually contracts and settles back onto
the stellar surface before an extended cooling phase
takes over. Ultimately, while continuous accretion
generates a steady baseline X-ray luminosity, the sudden,
localized burning of stored fuel temporarily outshines
this persistent emission by a factor of ten or more,
producing thermonuclear X-ray bursts.
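For reference, the Eddington limit invoked here is the standard luminosity at which radiation pressure on electrons balances gravity; for hydrogen-rich material it is approximately
\[
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \;\approx\; 1.8\times10^{38}\,\left(\frac{M}{1.4\,M_\odot}\right)\ \mathrm{erg\ s^{-1}} ,
\]
and it is somewhat higher for helium-rich fuel because of the smaller opacity per unit mass (the numerical value is a standard estimate, not a result from this article).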
Thermonuclear X-ray emission is predominantly
thermal and is well-described by a blackbody spectrum
with peak temperatures reaching 2–3 keV. Consequently,
bursts are most prominent in the soft X-ray energy
band (typically below 10 keV), where they temporarily
outshine the persistent X-ray emission from accretion
by an order of magnitude or more.
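To connect this spectral temperature to a physical one (a simple unit conversion, not a result from the article), a blackbody peak temperature of kT ≈ 2 keV corresponds to
\[
T \;=\; \frac{kT}{k_B} \;\approx\; \frac{2\times10^{3}\ \mathrm{eV}}{8.62\times10^{-5}\ \mathrm{eV\,K^{-1}}} \;\approx\; 2.3\times10^{7}\ \mathrm{K} ,
\]
that is, tens of millions of kelvin at the burning surface.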
The intense photons from a thermonuclear X-
ray burst drastically alter the surrounding accretion
environment, primarily affecting the corona, the
accretion disk, and the companion star. When the
burst injects a massive influx of soft X-ray photons into
the hot electron corona, it triggers inverse Compton
scattering that rapidly cools the coronal plasma. This
cooling manifests observationally as a sharp deficit
in hard X-ray emission during the burst peak. In
extreme cases, the burst's immense radiation pressure
may completely blow the corona away, which could
temporarily shut off the collimated radio jets that are
linked to coronal magnetic fields. The burst also
strongly impacts the physical structure of the accretion
disk. Irradiation can heat the disk, causing it to puff
up and increase its scale height, while the intense
photon flux can induce Poynting-Robertson radiation
drag, forcing the inner disk material to drain onto the
neutron star. Furthermore, burst photons scattering off
the inner accretion disk generate observable reflection
features, such as fluorescent iron emission lines and
absorption edges. Finally, photons that reach the cooler
outer accretion disk and the donor star are reprocessed,
producing transient optical and ultraviolet flashes.
Studying thermonuclear X-ray bursts provides a
unique astrophysical laboratory to probe matter and
physical processes under extreme conditions that cannot
be replicated on Earth. One of the primary motivations
for studying these powerful explosions is to constrain
the Equation of State (EoS) of the supranuclear dense
matter inside neutron star cores. By using methods
like Photospheric Radius Expansion (PRE) bursts
and continuum spectrum modeling, researchers can
accurately measure a neutron star's mass and radius, thereby ruling out specific theoretical models of exotic matter. Thermonuclear bursts also allow astronomers to
dynamically probe the accretion process by observing
how burst photons interact with the accretion disk
and the hot electron corona. Finally, they provide a
testing ground for complex nuclear physics, such as
the rapid-proton (rp) process, and multidimensional
hydrodynamics like flame spreading.
We will discuss more methods of producing X-rays
in the upcoming articles.
References:
Sudip Bhattacharyya, “Measurement of neutron star parameters: A review of methods for low-mass X-ray binaries”.
Anna L. Watts, “Thermonuclear burst oscillations”.
Nathalie Degenaar, David R. Ballantyne, Tomaso Belloni, Manoneeta Chakraborty, Yu-Peng Chen, Long Ji, Peter Kretschmar, Erik Kuulkers, Jian Li, Thomas J. Maccarone, Julien Malzac, Shu Zhang, and Shuang-Nan Zhang, “Accretion disks and coronae in the X-ray flashlight”.
Duncan K. Galloway and Laurens Keek, “Thermonuclear X-ray bursts”.
Duncan K. Galloway, Zac Johnston, Adelle Goodwin, and Alexander Heger, “High-energy transients: thermonuclear (type-I) X-ray bursts”.
L. Colleyn, Z. Medin, and A. C. Calder, “Modeling X-ray Bursting Neutron Star Atmospheres”.
W. H. G. Lewin, J. van Paradijs, and R. E. Taam, “X-ray bursts”.
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on studies of Thermonuclear X-ray Bursts on Neutron Star surfaces and their interaction with the Accretion Disk and Corona.
Part IV
Biosciences
Gene Knockout Strategies: The Scientific Wrestle of Gene Therapeutics
by Aengela Grace Jacob
airis4D, Vol.4, No.5, 2026
www.airis4d.com
1.1 Introduction
In the field of genetic engineering and oncology,
researchers often need to silence a gene to understand
its function. This is typically achieved through
two primary strategies: Gene Knockout and Gene
Knockdown. While they sound similar, they operate on
different biological levels and offer unique advantages
for cancer research and therapy. Gene knockout involves
the complete, permanent disruption of a gene at the DNA
level. Using genome-editing tools like CRISPR-Cas9,
the specific DNA sequence of a gene is deleted
or scrambled, ensuring that the cell can no longer
produce the corresponding protein. Gene knockdown,
by contrast, is a temporary reduction in gene expression,
usually at the RNA level. Instead of destroying the DNA,
knockdown techniques intercept the messenger RNA (mRNA)
before it can be translated into a protein. Before a drug
is developed, researchers must first prove that a gene is a
viable target; by using knockdown or knockout strategies in
the lab, they can mimic the effect of a future drug. If
knocking down Gene X stops a tumor from spreading in a
petri dish, Gene X becomes a high-priority target for
pharmaceutical development. Here we discuss gene knockout
methods and strategies and examine how they are scaled up
and leveraged in therapeutic applications.
1.2 Fundamental Logic Of Knockout
Standard observational biology asks, "What is
this gene doing?", whereas knockout technology asks, "What
happens to the system when this gene is gone?" This
distinction is vital. It shifts the research from correlation
(observing that a gene is active during a process) to
causation (proving that the process cannot happen
without that gene).
The process typically targets the genomic DNA.
Unlike temporary methods that intercept messages, a
knockout creates a permanent, heritable change. When
a gene is knocked out, the cellular machinery can no
longer transcribe the DNA into functional messenger
RNA (mRNA), ensuring that the corresponding protein
is never synthesized.
Several primary methodologies have been
introduced to carry out knockout procedures: these
include homologous recombination (the traditional
method), the programmable nucleases ZFNs and TALENs,
and the current gold standard, CRISPR-Cas9.
1.3 Traditional Homologous
Recombination
Before the era of programmable nucleases,
scientists relied on a natural cellular repair process
called homologous recombination. Researchers would
introduce a DNA fragment containing a nonsense
sequence flanked by sequences identical to the target
gene. Occasionally, the cell would mistakenly swap
the functional gene for the broken one. While
groundbreaking (earning the Nobel Prize in 2007),
this was an inefficient numbers game, often requiring
thousands of attempts to achieve a single successful
knockout.
1.4 The Programmable Nucleases
(ZFNs and TALENs)
To increase efficiency, scientists developed
molecular scissors that could be programmed to
cut specific DNA locations. Zinc-Finger Nucleases
(ZFNs) and Transcription Activator-Like Effector
Nucleases (TALENs) were the first to allow for targeted
double-strand breaks (DSBs). By forcing the cell to
repair a specific break, researchers could introduce
mutations that effectively deactivated the gene.
1.5 CRISPR-Cas9: The Genetic
Revolution
The current gold standard is CRISPR-Cas9.
Derived from a bacterial immune system, CRISPR
uses a guide RNA to lead the Cas9 enzyme to a precise
coordinate in the genome. Once there, Cas9 snips the
DNA. When the cell attempts to repair this break using
a sloppy mechanism called Non-Homologous End
Joining (NHEJ), it often introduces small insertions
or deletions (indels). These indels shift the reading
frame of the gene, creating a premature stop signal that
renders the gene completely non-functional.
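The following toy Python sketch (with an invented six-codon sequence and a deliberately tiny codon table, purely for illustration) shows why such an indel is so destructive: deleting a single base shifts every downstream codon, and the new reading frame quickly runs into a premature stop.

```python
# Toy illustration (hypothetical sequence): how a 1-bp deletion repaired by NHEJ
# shifts the reading frame and creates a premature stop codon.
CODON_TABLE = {
    "ATG": "M", "AAA": "K", "AAG": "K", "GTG": "V",
    "ACT": "T", "TTC": "F", "TAA": "*", "TAG": "*", "TGA": "*",  # '*' = stop
}

def translate(dna):
    """Translate codon by codon; stop once a stop codon ('*') is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")   # '?' = codon outside this toy table
        protein.append(aa)
        if aa == "*":
            break
    return "".join(protein)

wild_type = "ATG" "AAA" "GTG" "ACT" "TTC" "TAA"     # encodes M K V T F, then stop
knockout  = wild_type[:4] + wild_type[5:]           # delete one base inside codon 2

print("wild type  :", translate(wild_type))   # -> MKVTF*
print("after indel:", translate(knockout))    # -> MK*  (frameshift hits a premature stop)
```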
1.6 Why Do We Need Gene Knockout?
This is a very intriguing question to be pondered,
so let's take an example: imagine a city that starts
experiencing random fires (tumors). We suspect a
specific office, let's call it the Growth Office. It has a
broken manual that is telling the city to build buildings
where they don't belong.
Figure 1: This figure illustrates the CRISPR-Cas9 gene-
editing process, showing how a specific DNA sequence
is targeted, cut, and then repaired by the cell’s natural
mechanisms.
Image Courtesy: https://www.biorender.com/template
/crisprcas9-gene-editing
To prove this office is the problem, scientists use a
Knockout Strategy. If we completely delete the Growth
Office manual and the fires stop, we have proven that
office was the culprit. Conversely, if we delete a Safety
Office manual and fires suddenly start everywhere, we
realize that manual was actually a Tumor Suppressor -
a set of instructions meant to prevent fires.
We can't just throw a grenade into the library; we
need to be surgical. We use tools like CRISPR, which
acts like a biological GPS paired with a pair of scissors.
How does it do this? The answer is
precision shredding:
The GPS (Guide RNA): We give the tool a
snippet of the manual we want to destroy. It
wanders through the library until it finds the
exact matching page.
The Scissors (Cas9): Once it finds the page, it
snips the DNA.
The Sloppy Repair: The cell is a neat freak; it
hates broken DNA. It tries to tape the page back
together immediately. But because it’s in such a
rush, it almost always makes a typo or leaves out
a sentence.
The Result: That typo makes the manual
unreadable. The cell can no longer follow those
instructions. The gene is Knocked Out.
Figure 2: This diagram illustrates how the CRISPR-
Cas9 system works to edit DNA.
Image Courtesy: https://encrypted-tbn1.gstatic.com/licensed-image?q=tbn:ANd9GcR XMz75l w
Vf0fSb2L7 WC5L3KXmd5LkZJDWc3Qa8jgNwGQu70lLCgaTvEubv5ozeY5XcLiVO3LUnBW
WwfTI9vfGzhd9tFb8MDrbDdI5BSrDShQfM
1.7 Assisting Therapy
Now, we need to understand how this technique
helps to target the tumours which arise in our bodies. To
tackle this enemy, we utilise the knockout strategy. By
knocking out genes one by one in cancer cells (a process
called a CRISPR screen), we can find Achilles' heels.
We might find that while a cancer cell is aggressive, it
is uniquely dependent on one specific Utility Manual.
If we can find a drug that mimics that knockout, we
can kill the cancer without hurting the healthy parts
of the body. In gene therapy, we actually use knockout
strategies to help our immune cells. Sometimes, our
immune cells (the city's police) have a manual that tells
them, "Don't attack anything that looks like a citizen."
Cancer cells are clever and disguise themselves as
citizens. Scientists can knock out that specific "Don't
Attack" instruction in the immune cells. Now, these
upgraded bodyguards can see through the cancer's
disguise and go on the offensive. This is CAR
T-cell therapy, which has proven groundbreaking for
hematologic cancer treatments.
Figure 3: This image illustrates CAR T-cell therapy,
a specialized form of immunotherapy that genetically
engineers a patient's own immune cells to recognize
and destroy cancer.
Image Courtesy: https://encrypted-tbn2.gstatic.com/licensed-image?q=tbn:ANd9GcQDq4PfGy7r
dbeeD7WokUkPQ1flVsoai0jdbSWFJz3DgDM Yn6cq9wuJFcnxqdWUWtYkNN6Cd4Qla54DRu
J owWVUpIu6MmyUjdOOaYKATsQMPmuxQ
1.8 The Great Genetic Shuffle: A
Symphony of Survival in
Esophageal Cancer
In the silent, cellular theatre of Esophageal
Adenocarcinoma (EAC), a complex and deadly
transformation unfolds. A recent study published
in Cell Death and Disease reveals that the progression
of this cancer from its precursor, Barrett’s Esophagus
(BE), to a lethal malignancy is orchestrated not just
by mutations, but by a subtle genetic sleight of hand
known as isoform switching (IS).
1.9 The Shifting Genetic Mask
Think of a gene as a master blueprint. Through
alternative splicing, a single gene can produce different
versions of its protein output, called isoforms. The
researchers discovered that as EAC develops, the cell
switches its preference from one isoform to another. By
analyzing RNA-sequencing data, the team identified:
71 genes that underwent significant isoform switching during the progression to EAC;
42 specific isoforms that serve as grim harbingers, directly linked to all-cause patient mortality; and
a unique synergy between these switches and the status of TP53, the most
commonly mutated gene in this cancer type.
The Protagonists of Pathogenesis: TTLL12 and
HM13: To bridge the gap between observation and
action, the study focused on two high-stakes players
identified by survival analysis: TTLL12 and HM13.
1.10 The TTLL12 Gambit: Evading
the Quality Control
The coding isoform TTLL12-201 emerges as a
villain in the EAC narrative. When researchers silenced
this specific isoform using siRNA, the results were
transformative:
Activation of Chaperone-Mediated Autophagy (CMA): Knocking down TTLL12-201 forced the cell to restart its internal cleaning system.
Degradation of CHK1 and TP53: This activated autophagy targeted and degraded key survival proteins like CHK1, which normally helps the cancer repair its DNA.
Chemosensitivity: Silencing this isoform sensitized cancer cells to standard chemotherapy (paclitaxel and carboplatin), turning a resistant foe into a vulnerable one.
1.11 The HM13 Maneuver: Stressing
the System
The HM13-201 isoform acts as a shield within
the Endoplasmic Reticulum (ER). Targeting it triggered
a cellular catastrophe:
Unfolded Protein Response (UPR): Its removal induced massive ER stress, overwhelming the cancer cell with misfolded protein garbage.
Translation Arrest and Apoptosis: The cell effectively stopped producing new proteins and initiated programmed cell death (apoptosis).
Immunotherapy Synergy: Remarkably, silencing HM13 increased the response to avelumab, an anti-PD-L1 agent, suggesting that this genetic switch plays a role in helping the cancer hide from the immune system.
1.12 A New Horizon for Treatment
Perhaps the most promising finding is the
selectivity of these targets. While silencing TTLL12
and HM13 was devastating to EAC cells, it had minimal
effect on normal esophageal cells (Het-1A), offering a
goldilocks opportunity for precision medicine with low
toxicity.
By moving beyond simple mutations and looking
at the isoform landscape, this research provides a new
map for prognostic markers and therapeutic targets. It
suggests that the future of EAC treatment may lie in
reversing these genetic switches, forcing the cancer to
unmask itself and face the cellular consequences.
REFERENCES
https://www.nature.com/articles/s41392-023-01309-7
https://www.nature.com/articles/s41392-025-02269-w
https://www.cancer.gov/about-cancer/treatment/research/car-t-cells
https://www.nature.com/articles/s41598-025-24184-4
https://link.springer.com/article/10.1038/s41419-026-08542-2
About the Author
Aengela Grace Jacob is a final-year student
of the BSc Biotechnology, Chemistry (BSc BtC) dual
major at Christ University Central Campus, Bangalore.
The eDNA Metabarcoding Model:
Next-Generation Biodiversity Assessment -
Part 2
by Geetha Paul
airis4D, Vol.4, No.5, 2026
www.airis4d.com
2.1 Introduction:
The Molecular Revolution in Ecology
For centuries, biodiversity monitoring has relied
on traditional methods such as nets, traps, and visual
surveys, techniques that, while foundational, are
labour-intensive, taxonomically limited, and often
invasive. As the global biodiversity crisis accelerates,
driven by habitat loss, climate change, and species
extinctions, the need for faster, more comprehensive,
and non-destructive monitoring tools has become urgent.
Enter environmental DNA (eDNA) metabarcoding,
a revolutionary approach that shifts the focus from
direct organismal observation to the detection of the
genetic traces species leave behind in their surroundings.
Every organism sheds DNA through skin cells, mucus,
faeces, or gametes into water, soil, or air. By
capturing and sequencing these genetic fragments using
high-throughput sequencing (HTS), researchers can
identify entire communities from a single sample,
uncovering rare, cryptic, or elusive species that
traditional methods miss. This technique offers
unparalleled advantages: high sensitivity, scalability,
non-invasiveness, and the ability to process hundreds
of samples simultaneously. From tracking endangered
species and invasive organisms to reconstructing ancient
ecosystems and monitoring microbial communities,
DNA metabarcoding is transforming conservation
biology, ecological research, and environmental policy.
Yet challenges remain, including gaps in reference
databases, PCR biases, and the need for standardised
protocols. As technology advances with portable
sequencers, machine learning, and global DNA
databases, metabarcoding is poised to become the
cornerstone of biodiversity assessment, ushering in
a new era in which ecosystems are decoded not by sight
but by their genetic fingerprints.
2.2 The Theoretical Framework:
From Cells to Sequences
The eDNA metabarcoding model operates on
a multi-stage pipeline that integrates field biology,
molecular genetics, and advanced bioinformatics.
Environmental Shedding and Persistence: The
model begins with the biological reality that
organisms lose DNA through faeces, mucus,
gametes, and decaying tissue. The detectability
of a species depends on the rate of shedding
versus the rate of degradation. In aquatic systems,
factors such as UV radiation, pH, and temperature
dictate how long this genetic memory persists,
usually ranging from a few days to several weeks
(a simple decay sketch follows at the end of this list).
The Power of Universal Primers: Unlike
traditional DNA barcoding, which targets a single
specimen, metabarcoding uses universal primers.
These are short DNA sequences designed to
bind to highly conserved regions across a broad
group of taxa (e.g., all fish or all insects), while
flanking a variable region specific to each species.
The Cytochrome c Oxidase subunit I (COI)
gene is the most common target for animal DNA
barcoding, often referred to as the biological
barcode.
Figure 1: Sampling of river water for DNA extraction
to identify species.
Image courtesy: https://www.bing.com/images/search?view=detailV2&ccid=NgDBPWbw&id=068A58357A8DCBEF2E98EFDA061BD6869C253946&thid
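As a quantitative sketch of the shedding-versus-degradation balance described above (a common first-order approximation rather than a result from any specific study), the eDNA concentration $C(t)$ remaining after a shedding event can be modelled as
\[
C(t) = C_0\, e^{-kt}, \qquad t_{\rm detect} \approx \frac{1}{k}\,\ln\frac{C_0}{C_{\rm min}},
\]
where $C_0$ is the initial concentration, $k$ is a decay constant that increases with temperature, UV exposure, and microbial activity, and $C_{\rm min}$ is the detection limit of the assay. With values of $k$ typical of natural waters, this detection window works out to the days-to-weeks range quoted above.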
2.3 The Technical Pipeline: Field to
Data
Implementing the eDNA model requires a rigorous
technical workflow to ensure that the data accurately
reflects the biological reality of the site.
Phase I: Capture and Extraction
Sampling typically involves filtering large volumes
of water (often 1–5 litres) through fine-pore membranes
to capture DNA fragments. Because eDNA is
highly sensitive, preventing contamination is the most
significant challenge. Once captured, the DNA is
extracted in a sterile laboratory environment using
specialised kits designed to remove environmental
inhibitors, such as humic acids, which can interfere
with subsequent chemical reactions.
Phase II: Library Preparation and Sequencing
The extracted DNA undergoes Polymerase Chain
Reaction (PCR) amplification. During this stage,
unique indexes or molecular tags are attached to the
DNA from each sample, allowing multiple samples to
be pooled together in a single sequencing run. The
library is then sequenced using platforms like Illumina,
which generate millions of individual reads.
Figure 2: The extracted DNA undergoes Polymerase
Chain Reaction (PCR) amplification.
Image courtesy: pcr factsheet.jpg (1800×1800)
Phase III: Bioinformatics and Taxonomy
The resulting raw data is a massive digital
haystack of sequences. Bioinformatics pipelines
(such as QIIME2 or OBITools) filter out noise and
errors. These sequences are then clustered into
Operational Taxonomic Units (OTUs) or mapped
to global reference databases such as GenBank or the
Barcode of Life Data System (BOLD). The result is a
detailed list of every detectable species present in the
original environment.
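As a highly simplified illustration of this last step (real pipelines such as QIIME2 or OBITools use sophisticated denoising, clustering, and alignment; the 12-base barcodes and read list below are invented for the example), each read can be compared against a reference library and assigned to the closest species above an identity threshold:

```python
from collections import Counter

# Hypothetical reference barcode library (toy 12-bp "barcodes", invented for illustration).
REFERENCE_DB = {
    "ACGTTGCAGGTA": "Gomphus vulgatissimus",   # a dragonfly
    "ACGTTGCAAGTC": "Cyprinus carpio",         # common carp
    "TTGACCGGTACA": "Salmo trutta",            # brown trout
}

def percent_identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_taxon(read, min_identity=0.9):
    """Return the best-matching species if it clears the identity threshold, else 'unassigned'."""
    best_species, best_score = "unassigned", 0.0
    for barcode, species in REFERENCE_DB.items():
        score = percent_identity(read, barcode)
        if score > best_score:
            best_species, best_score = species, score
    return best_species if best_score >= min_identity else "unassigned"

# Toy sequencing run: millions of reads in practice, a handful here.
reads = [
    "ACGTTGCAGGTA", "ACGTTGCAGGTA", "ACGTTGCAAGTC",
    "TTGACCGGTACA", "TTGACCGGTACA", "GGGGGGGGGGGG",   # the last read matches nothing
]

species_counts = Counter(assign_taxon(r) for r in reads)
for species, count in species_counts.items():
    print(f"{species}: {count} reads")
```

The output of such a step is a species-by-read-count table, which is the raw material for the biodiversity assessments discussed next.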
2.4 Strategic Advantages and the
Biotic Index
The eDNA metabarcoding model offers several
distinct advantages over traditional morphological
surveys:
Detection of Rare and Invasive Species: eDNA
is exceptionally sensitive. It can detect the
presence of a single invasive carp in a massive
lake or a rare, endangered dragonfly nymph in
a remote stream long before a human observer
would find them.
Cost and Time Efficiency: A single technician
can collect dozens of water samples in a day,
covering an area that would take a team of
taxonomists weeks to survey using traditional
nets and microscopes.
Non-Invasive Monitoring: This is the gold
standard for conservation in protected areas. It
allows monitoring biodiversity without capturing,
stressing, or killing specimens.
Figure 3: Schematic diagram of eDNA sample
collection, analysis, and functions.
Image courtesy: Environmental DNA (eDNA) Technology in Biodiversity and Ecosystem Health
Research: Advances and Prospects - Wu - 2026 - Ecology and Evolution - Wiley Online Library
The model is being used to develop a Molecular
Biotic Index. By analysing the entire macroinvertebrate
community via eDNA, researchers can assign a health
score to a watershed. Certain species are highly
sensitive to pollution; their presence, detected via their
DNA, serves as a real-time indicator of water quality
and ecosystem resilience.
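A minimal sketch of how such an index could be computed (the taxa and sensitivity scores below are illustrative placeholders, not values from any published index) averages a pollution-sensitivity score over the taxa detected in a sample, in the spirit of classical average-score-per-taxon metrics:

```python
# Hypothetical sensitivity scores (higher = more pollution-sensitive); values are
# illustrative placeholders, not taken from any published biotic index.
SENSITIVITY = {
    "Ephemeroptera": 10,   # mayflies: very sensitive to pollution
    "Plecoptera":    10,   # stoneflies
    "Trichoptera":    8,   # caddisflies
    "Odonata":        6,   # dragonflies and damselflies
    "Chironomidae":   2,   # non-biting midges: tolerant
    "Oligochaeta":    1,   # aquatic worms: very tolerant
}

def molecular_biotic_index(detected_taxa):
    """Average sensitivity score per detected taxon (an ASPT-style summary)."""
    scores = [SENSITIVITY[t] for t in detected_taxa if t in SENSITIVITY]
    return sum(scores) / len(scores) if scores else 0.0

# Taxa lists as they might emerge from eDNA metabarcoding runs at two sites.
pristine_stream = ["Ephemeroptera", "Plecoptera", "Trichoptera", "Odonata"]
polluted_drain  = ["Chironomidae", "Oligochaeta", "Odonata"]

print("upstream site  :", molecular_biotic_index(pristine_stream))   # high score -> healthy
print("downstream site:", molecular_biotic_index(polluted_drain))    # low score  -> degraded
```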
2.5 Challenges: The Road to
Standardisation
Despite its potential, the eDNA model faces
hurdles that require ongoing research.
The Reference Database Gap: A DNA
sequence is only useful if it can be matched
to a known species. In many tropical regions and
among specific groups like Odonata (dragonflies),
reference libraries are still being built.
Primer Bias: Not all DNA sequences amplify
with equal efficiency. Some species may be
hidden if the universal primers do not bind
perfectly to their genetic code, leading to under-
representation.
Quantification: While eDNA tells us who is
there, it is still difficult to determine exactly
how many individuals are present. While
sequence read counts often correlate with
biomass, environmental variables complicate
absolute quantification.
Conclusion:
The Future of Ecological Surveillance
The eDNA metabarcoding model represents the
ultimate synthesis of technology and natural history.
As sequencing costs continue to fall and reference
databases expand, this model will become the standard
for environmental impact assessments, biosecurity, and
climate change monitoring.
By looking into a drop of water and seeing
the genetic signatures of an entire forest or river
system, we gain a more profound understanding of
the interconnectedness of life. For the next generation
of researchers, the ability to decode these environmental
secrets will be the key to preserving the earth's most
vulnerable ecosystems, from the depths of the oceans
to the peaks of the Western Ghats.
Reference
eDNAmanual-eng ver3 0 0.pdf
Sato, Y., Miya, M., Fukunaga, T., Sado, T.
& Iwasaki, W. 2018. “MitoFish and MiFish
Pipeline: A Mitochondrial Genome Database of Fish
with an Analysis Pipeline for Environmental DNA
Metabarcoding”. Molecular Biology and Evolution
35:1553-1555.
Fukuzawa, T., Shirakura, H., Nishizawa, N.,
Nagata, H., Kameda, Y., & Doi, H. 2023,
“Environmental DNA extraction method from water
for a high and consistent DNA yield.” Environmental
DNA 5 (4): 627–633. doi: 10.1002/edn3.406
Uchii, K., Doi, H. & Minamoto, T. 2016, “A novel
environmental DNA approach to quantify the cryptic
invasion of non-native genotypes.” Molecular Ecology
Resources 16 (2): 415-422. doi: 10.1111/1755-0998.12460
Wong, M. K. S., Nakao, M. & Hyodo, S.
2020, “Field application of an improved protocol
for environmental DNA extraction, purification, and
measurement using Sterivex filter.” Scientific Reports
10: 21531. doi: 10.1038/s41598-020-77304-7
About the Author
Dr. Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical Fauna and Flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the
site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that
can feed birds and provides water bodies to help them survive the drought.