Cover page
Chitaura indica is a species of grasshopper in the family Acrididae, subfamily Oxyinae, and is endemic to the
Indian subcontinent, particularly South India. It was first described by Boris Uvarov in 1929 from specimens
collected from Siddapura in the Karnataka (Mysore Plateau) region of the Western Ghats, with the holotype
deposited at the Natural History Museum of Geneva. The species is characteristic of tropical grassland and
scrub–forest habitats, likely feeding on grasses and herbaceous vegetation as a typical herbivorous orthopteran,
and appears to be a relatively robust, medium-sized grasshopper with colouration and markings that may aid in camouflage within its native vegetation.
Photo Credit: Vidyamol M. V., DCII Zoology, project student at airis4D from Christian College, Chenganoor
Managing Editor : Ninan Sajeeth Philip
Chief Editor : Abraham Mulamoottil
Editorial Board : K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence : The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.4, No.4, 2026
www.airis4d.com
This edition’s cover page is a photograph by Vidyamol M. V., DCII Zoology, project student at airis4D from Christian College, Chenganoor. Vidya is a naturally talented photographer with keen observation. The photograph was captured on her mobile phone near her home at Kuttoor, Thiruvalla.
The journal starts with Blesson George’s article, “Towards Efficient Learning in Neuromorphic Computing: A Hybrid Probabilistic–Spike Approach”.
The author proposes a hybrid learning framework
that integrates probabilistic reasoning with spike-
based neural computation to address key limitations
in training spiking neural networks (SNNs). The
model combines the efficiency of event-driven, spike-
based processing with the mathematical robustness of
probabilistic inference by introducing feature-aware
connections, adaptive prior updating, and local learning
rules. This approach overcomes challenges such
as non-differentiability and temporal dynamics in
SNNs, offering computational advantages including
event-driven efficiency, reduced complexity, parallel
processing capability, and hardware compatibility for
scalable, low-power neuromorphic systems.
In “From Shannon to ChatGPT: Information Theory in Modern NLP,” Jinsu Ann Mathew traces the conceptual continuity from Claude Shannon’s
foundational information theory to contemporary
large language models such as ChatGPT. The article
argues that the core principle underlying modern
natural language processing—viewing language as
a probabilistic system governed by uncertainty—has
remained unchanged since Shannon introduced
concepts like entropy to quantify unpredictability.
Early language models evolved from simple n-
gram frequency counters to today’s neural networks
that leverage massive context windows and cross-
entropy optimization to make increasingly accurate
predictions. Mathew suggests that what appears as
linguistic understanding in systems like ChatGPT may
actually emerge from sophisticated prediction and
compression capabilities, reframing the question of
machine understanding in terms of how effectively a
system can reduce uncertainty about language.
In “Plasma Physics: Plasma Collisions & Transport,” Abishek P S examines the fundamental differences
between collisions in plasmas and those in neutral
gases, emphasizing the complexity introduced by
long-range Coulomb forces and collective effects.
Unlike neutral gas collisions that involve short-range,
localized encounters, plasma collisions are shaped
by Debye shielding, which confines the influence of
charged particles within a characteristic length, and
by a rich spectrum of collisional processes (electron-ion, electron-neutral, ion-ion, and photon interactions) that govern transport properties such as
conductivity, viscosity, and diffusion. The article
highlights how binary collision models provide a
foundational framework for analysis, but must be
considered alongside collective phenomena like plasma
oscillations and wave-particle interactions to fully
understand energy exchange, heating, and stability
in both natural plasmas (such as those in stars) and
engineered systems (such as fusion devices). Ultimately,
the author underscores that mastering collision physics
is crucial for controlling plasma behavior and advancing
applications like sustainable fusion energy.
Ajit Kembhavi’s article “Black Hole Stories-25: Some Black Hole Mergers From LIGO-Virgo-KAGRA Observing Run O3” examines significant
gravitational wave detections from the third observing
run, highlighting how these mergers challenge existing
astrophysical theories about black hole formation
and mass distributions. The article explains key
concepts such as the black hole mass gap (approximately 60 to 130 solar masses, as predicted by pair instability supernova theory) and intermediate mass black holes (IMBHs), ranging from 10² to 10⁵ solar masses. Notable events discussed include
GW190412, with its highly unequal mass ratio that
required accounting for higher-order gravitational wave
multipoles; GW190425, a binary neutron star merger
with total mass exceeding known electromagnetic
binaries; GW190521, which placed a remnant black
hole firmly in the IMBH range while its primary
component fell within the pair instability mass gap;
and GW190814, whose secondary component at 2.59
solar masses sits in the neutron star–black hole mass
gap, raising questions about its true nature. These
detections collectively demonstrate how gravitational
wave observations are reshaping our understanding of
stellar evolution, binary formation mechanisms, and the
boundaries between neutron stars and black holes.
In “X-ray Astronomy: Theory,” Aromal P
examines thermonuclear X-ray bursts in neutron star
low-mass X-ray binaries, a phenomenon that forms the
core of his observational research. The article explains
how accreted material accumulating on a neutron star’s
surface can undergo a thermonuclear runaway driven
by thin-shell instability, where degenerate electron
pressure prevents cooling and leads to explosive burning
of hydrogen and helium via processes such as the
CNO cycle, triple-alpha reactions, and the rapid-proton
process. These bursts, which temporarily outshine the
persistent accretion luminosity by an order of magnitude
or more, produce thermal blackbody spectra peaking
in the soft X-ray band and can trigger photospheric
radius expansion when the Eddington limit is exceeded.
Beyond their intrinsic brightness, these events interact
with the surrounding accretion environment by cooling
the corona, distorting the accretion disk, and producing
reprocessed emission at longer wavelengths. The author
emphasizes that studying these bursts provides a unique
laboratory for constraining the equation of state of
supranuclear matter in neutron star cores, probing
accretion physics, and testing nuclear reaction networks
under extreme conditions.
In “The Genetic Whisper: Unlocking Biodiversity with eDNA,” Geetha Paul explores environmental DNA (eDNA) as a transformative biomonitoring
tool that enables scientists to detect organisms by
analyzing genetic material shed into water, soil, or air,
bypassing the limitations of traditional visual surveys.
The article details how eDNA metabarcoding uses
universal primers and next-generation sequencing to
identify entire communities from a single sample,
with applications ranging from detecting invasive
species during their lag phase for early intervention
to assessing ecosystem health through indicator groups
like odonates. Paul highlights cutting-edge extensions
of the technology, including airborne eDNA for
surveying elusive canopy-dwelling species in tropical
rainforests, deep-sea eDNA for mapping biodiversity
in the midnight zone, and ancient DNA extracted from
permafrost and cave sediments to reconstruct extinct
ecosystems. By transforming environmental samples
into high-resolution genetic maps, eDNA serves as
a non-invasive, sensitive early warning system for
biodiversity monitoring, bridging traditional zoology
with genomic science to address the global biodiversity
crisis.
News Desk
Vijnanam by Prof Babu Joseph, former VC, CUSAT
airis4D has the pleasure of announcing a monthly column, titled Vijnanam, which in Sanskrit means knowledge. It will be handled by Prof. K. Babu Joseph, formerly of CUSAT and currently an advisor to this journal. He is a physicist and a writer of popular science, philosophy and poetry in English and Malayalam. The series will be focussed on science, but will occasionally address other disciplines as well. The first contribution will appear in the May 2026 issue.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Towards Efficient Learning in Neuromorphic Computing:
A Hybrid Probabilistic–Spike Approach 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Proposed Hybrid Learning Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Proposed Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Computational Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 From Shannon to ChatGPT: Information Theory in Modern NLP 6
2.1 When Language Became a Problem of Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 From Words to Probabilities: The Birth of Language Models . . . . . . . . . . . . . . . . . . . . 7
2.3 Scaling the Same Idea: From Simple Models to ChatGPT . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Prediction, Compression, and the Question of Understanding . . . . . . . . . . . . . . . . . . . . 8
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
II Astronomy and Astrophysics 10
1 Plasma Physics: Plasma Collisions & Transport 11
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Nature of Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Binary Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Collective Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Types of Collisional Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.6 Plasma Collisions and Neutral Gas Collisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7 Challenges and Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2 Black Hole Stories-25
Some Black Hole Mergers From LIGO-Virgo-KAGRA Observing Run O3 17
2.1 Black Hole Mass Gap: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3 X-ray Astronomy: Theory 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Thermonuclear X-ray burst . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
III Biosciences 26
1 The Genetic Whisper: Unlocking Biodiversity with eDNA 27
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.2 What is eDNA? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.3 The Role of eDNA in the Modern World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.4 Applications in New Era Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.5 The eDNA Metabarcoding Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6 Deep-Sea Exploration: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7 Airborne eDNA: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.8 Ancient DNA: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Part I
Artificial Intelligence and Machine Learning
Towards Efficient Learning in Neuromorphic
Computing:
A Hybrid Probabilistic–Spike Approach
by Blesson George
airis4D, Vol.4, No.4, 2026
www.airis4d.com
1.1 Introduction
Neuromorphic computing has emerged as a
powerful paradigm aimed at replicating the efficiency
and adaptability of the human brain. Unlike
traditional computing systems, which rely on sequential
processing and separate memory and computation units,
neuromorphic systems operate using distributed, event-
driven architectures.
Spiking neural networks (SNNs) form the
computational backbone of neuromorphic systems.
These networks process information through discrete
spike events, enabling low-power and real-time
computation. However, despite their advantages,
training SNNs remains a challenging problem due to
their non-differentiable nature and temporal dynamics.
Existing approaches to learning in SNNs often rely
on biologically inspired rules such as Hebbian learning
and spike-timing dependent plasticity (STDP). While
these methods provide local learning capabilities, they
lack the flexibility and efficiency of modern machine
learning techniques.
In this paper, we propose a hybrid probabilistic–
spike learning framework that integrates probabilistic
reasoning with spike-based neural computation. The
proposed model overcomes key limitations of existing
approaches by introducing feature-aware connections,
adaptive prior updating, and efficient learning dynamics.
1.2 Background and Motivation
Biological neural systems learn through synaptic
plasticity, where connections between neurons are
strengthened or weakened based on activity. Hebbian
learning captures this principle through correlation-
based updates, while STDP introduces temporal
sensitivity by considering the timing of spikes.
Although these mechanisms are biologically
plausible, they are not sufficient for solving complex
computational tasks efficiently. On the other hand,
probabilistic models such as Bayesian learning provide
a strong mathematical framework for inference and
uncertainty handling but are not directly compatible
with spike-based computation.
This gap motivates the development of hybrid
approaches that combine the strengths of probabilistic
modeling and neuromorphic computation.
1.3 Proposed Hybrid Learning
Framework
1.3.1 Model Representation
Let the input feature vector be defined as:

$X = \{x_1, x_2, \ldots, x_n\}$    (1.1)
Each feature is connected to the output neuron through a synaptic weight interpreted probabilistically:

$w_i = P(y \mid x_i)$    (1.2)
Unlike naive probabilistic models, the proposed
framework accounts for feature interactions by
introducing adaptive importance factors.
1.3.2 Feature Interaction Modeling
The conditional probability of the output is expressed as:

$P(y \mid X) \propto \prod_{i=1}^{n} P(y \mid x_i)^{\alpha_i}$    (1.3)

where $\alpha_i$ represents the importance of each feature and is learned dynamically.
This formulation relaxes the independence assumption and enables more expressive representations.
1.3.3 Spike-Based Computation
The membrane potential of a neuron is computed as:

$V(t) = \sum_i w_i \, x_i(t)$    (1.4)

A spike is generated when:

$V(t) \geq V_{\mathrm{th}}$    (1.5)
This event-driven mechanism ensures efficient
computation.
1.3.4 Adaptive Prior Updating
The prior probability is updated incrementally as:

$P_{t+1}(y) = (1 - \eta)\, P_t(y) + \eta\, \hat{P}(y \mid X)$    (1.6)

where $\eta$ is the learning rate.
This allows the system to adapt continuously to
new data.
1.3.5 Weight Learning Rule
The synaptic weights are updated using a probabilistic learning rule:

$w_i^{\mathrm{new}} = w_i^{\mathrm{old}} + \eta\, (x_i\, y - w_i^{\mathrm{old}})$    (1.7)
This update balances stability and adaptability.
1.4 Proposed Algorithm
The complete training procedure is outlined below:
Algorithm 1: Hybrid Probabilistic–Spike Learning

1. Initialize weights $w_i$ and prior probabilities $P(y)$.
2. For each input sample $X$:
   (a) Compute membrane potential: $V = \sum_i w_i x_i$
   (b) Generate a spike if $V \geq V_{\mathrm{th}}$
   (c) Estimate posterior probability $P(y \mid X)$
   (d) Update weights: $w_i \leftarrow w_i + \eta (x_i y - w_i)$
   (e) Update prior: $P(y) \leftarrow (1 - \eta) P(y) + \eta P(y \mid X)$
3. Repeat until convergence.
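A minimal Python sketch of Algorithm 1 is given below. It assumes a single output neuron, binary features, and a logistic function of the membrane potential as the posterior estimate $\hat{P}(y \mid X)$, a detail the algorithm leaves open; it is an illustration, not the author's implementation.

```python
import numpy as np

def hybrid_train(samples, labels, eta=0.05, v_th=0.5, epochs=20):
    """Sketch of Algorithm 1 (hybrid probabilistic-spike learning).

    samples : (num_samples, n) array of binary feature vectors x
    labels  : (num_samples,) array of binary targets y
    The posterior P(y|X) is estimated with a logistic squashing of the
    membrane potential -- an assumption, not fixed by the paper.
    """
    n = samples.shape[1]
    w = np.full(n, 0.5)   # initial weights, interpreted as w_i ~ P(y|x_i)
    prior = 0.5           # initial prior P(y)

    for _ in range(epochs):
        for x, y in zip(samples, labels):
            v = np.dot(w, x)                       # membrane potential, Eq. (1.4)
            spike = v >= v_th                      # threshold event, Eq. (1.5);
                                                   # downstream neurons would consume this
            posterior = 1.0 / (1.0 + np.exp(-v))   # assumed estimate of P(y|X)
            w += eta * (x * y - w)                 # local weight update, Eq. (1.7)
            prior = (1 - eta) * prior + eta * posterior  # prior update, Eq. (1.6)
    return w, prior

# Toy usage: two informative features and one noise feature.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 3))
y = (X[:, 0] & X[:, 1]).astype(float)
weights, prior = hybrid_train(X, y)
print(weights, prior)   # weights for the informative features settle higher
```

Note that each update is purely local: a weight sees only its own input and the target, which is what makes the rule amenable to parallel, event-driven hardware.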
1.5 Computational Advantages
The proposed hybrid probabilistic–spike learning
framework offers several computational advantages
that make it well suited for efficient and scalable
intelligent systems. By combining probabilistic
reasoning with event-driven neural computation, the
framework achieves a balance between expressiveness
and efficiency.
1.5.1 Event-Driven Efficiency
Unlike conventional neural networks that perform
continuous computations, neuromorphic systems
operate in an event-driven manner. Computation occurs
only when spikes are generated. This significantly
reduces unnecessary processing when inputs are
inactive, leading to lower energy consumption and
improved efficiency. Such behavior is particularly
advantageous for sparse and real-time data streams.
1.5.2 Reduced Computational Complexity
Traditional probabilistic models often
require either strong independence assumptions
or computationally expensive joint probability
estimation. The proposed framework avoids both
extremes by introducing feature-weighted probabilistic
contributions. By assigning adaptive importance
factors to features, the model captures relevant
interactions without requiring full joint distributions.
This results in a substantial reduction in computational
complexity while maintaining expressive power.
1.5.3 Parallel Processing Capability
Neuromorphic systems naturally support parallel
computation, as neurons operate independently and
simultaneously. In the proposed model, each feature
contributes independently to the neuron’s activation, and
weight updates are performed locally. This eliminates
the need for centralized computation and makes the
framework highly compatible with parallel architectures
such as GPUs and neuromorphic hardware.
1.5.4 Incremental and Online Learning
The framework supports continuous and
incremental learning through adaptive updates of both
synaptic weights and prior probabilities. Instead
of requiring batch training over large datasets, the
system updates its parameters on a per-sample basis.
This reduces memory requirements, shortens training
time, and enables real-time learning in dynamic
environments.
1.5.5 Avoidance of Backpropagation
Bottleneck
Conventional deep learning methods rely
on backpropagation, which requires global error
propagation and high computational cost. In contrast,
the proposed approach employs local update rules based
on spike activity and probabilistic adjustments. This
eliminates the need for global gradient computation,
making the model more efficient and suitable for
hardware implementation.
1.5.6 Sparse Computation
Spike-based processing inherently leads to sparse
activity, as only a subset of neurons is active at any given
time. This sparsity reduces the number of computations
required and improves overall efficiency. Sparse
representations also contribute to better scalability in
large networks.
1.5.7 Hardware Compatibility
The proposed framework is well aligned with the
constraints of neuromorphic hardware. It supports
local memory usage, low-precision computation, and
event-driven processing. These characteristics make
it suitable for deployment in low-power devices,
embedded systems, and edge computing environments.
1.5.8 Scalability
Due to its reliance on local learning rules,
parallel processing, and reduced dependence on global
information, the framework scales efficiently to larger
systems. As network size increases, computational cost grows in a manageable manner, making the approach practical for real-world applications that benefit from low power consumption and efficient learning.
1.6 Conclusion
This paper presented a hybrid probabilistic–spike
learning framework for neuromorphic computing
systems. By integrating probabilistic inference
with spike-based neural dynamics, the proposed
model addresses key limitations of existing learning
approaches. The framework introduces feature-aware
connections, adaptive priors, and efficient learning rules,
enabling scalable and energy-efficient computation.
Future work may focus on experimental validation,
hardware implementation, and integration with deep
learning systems to further enhance performance.
About the Author
Dr. Blesson George presently serves as
an Assistant Professor of Physics at CMS College
Kottayam, Kerala. His research pursuits encompass
the development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
From Shannon to ChatGPT: Information
Theory in Modern NLP
by Jinsu Ann Mathew
airis4D, Vol.4, No.4, 2026
www.airis4d.com
In the late 1940s, long before machines could
generate essays or answer complex questions, Claude
Shannon was working on a seemingly unrelated
problem: how to transmit messages efficiently over
imperfect communication channels. His goal was
not to understand language, but to ensure that
information could be sent accurately despite noise
and interference. In doing so, he introduced a
mathematical framework—information theory—that
would later reshape how we think about language itself.
Today, systems like ChatGPT can generate
coherent paragraphs, solve problems, and assist in
research. While these systems appear fundamentally
different from Shannon’s early work, they are deeply
connected. The apparent intelligence of modern NLP
systems is built upon a principle Shannon introduced
decades ago: information can be measured, and
uncertainty can be reduced.
The journey from Shannon to modern language
models is not about replacing ideas, but about extending
them. It is a story of how a theory of communication
gradually became a theory of language.
2.1 When Language Became a
Problem of Uncertainty
Shannon’s most influential idea was that information is fundamentally tied to uncertainty. If the outcome of a message is already known, it carries little new information. On the other hand, if it is surprising or unpredictable, it carries more. To quantify this, Claude Shannon introduced the concept of entropy, a measure of how uncertain or unpredictable a system is.

Figure 1: A humorous illustration of entropy by HMP Comics, depicting how systems tend to move from order to disorder, often leading to the oversimplified explanation that “entropy causes everything.” (image courtesy: HMP Comics)
At first glance, this idea may seem abstract.
However, its intuition is surprisingly familiar in
everyday life. We often observe that systems tend
to move from order to disorder: a neatly arranged room
gradually becomes messy, objects get misplaced, and
structured arrangements break down over time. This
common intuition is humorously captured in figure
1. In the comic, a child explains every instance of
disorder—whether it is a messy room or a chaotic
situation—by simply saying “entropy.” While this
explanation is not entirely incorrect, it reflects a common
misunderstanding. Entropy does not actively cause
disorder in a physical sense. Rather, it describes a
statistical tendency: there are far more ways for a
system to be disordered than ordered.
To understand this more clearly, consider a simple
example. Imagine a set of objects arranged neatly on a
shelf. There are only a limited number of ways in which
this ordered arrangement can exist. In contrast, there
are countless ways for those same objects to be scattered
randomly. Because disordered configurations are vastly
more numerous, a system that evolves randomly is
far more likely to move toward disorder than toward
order. What we perceive as the “increase of disorder”
is therefore a reflection of probability, not a directed
force.
When this idea is applied to language, it leads to
a powerful insight. Language is neither completely
predictable nor entirely random. If every sentence were
perfectly predictable, communication would carry no
new information. If it were completely random, it
would be meaningless. Instead, language operates in
a balance between structure and variability. Certain
word sequences are highly likely because they follow
familiar patterns, while others are rare or unexpected.
This balance is precisely what makes language both
expressive and analyzable. From Shannon’s perspective,
understanding language is not just about interpreting
meaning, but about recognizing and modeling these
underlying probabilities. In this way, language itself
becomes a system governed by uncertainty—one that
can be studied, measured, and ultimately modeled using
the principles of information theory.
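As a concrete illustration (not from the article): for a discrete distribution, the Shannon entropy in bits is $H = -\sum_i p_i \log_2 p_i$. A minimal Python sketch:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.95, 0.05]))  # ~0.29 bits: a biased coin is more predictable
print(entropy([1.0]))         # 0.0 bits: a certain outcome carries no information
```

The same measure, applied to distributions over words rather than coin tosses, is what makes the unpredictability of language quantifiable.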
2.2 From Words to Probabilities: The Birth of Language Models
Once language was viewed as a system governed
by uncertainty, the next logical step was to model it
using probability. If certain word sequences occur
more frequently than others, then these patterns can be
captured and quantified.
Early language models did exactly this. They
estimated the probability of a word based on its context.
In the simplest case, a unigram model treated each
word independently, assigning probabilities based on
frequency. More advanced models, such as bigrams
and trigrams, considered short sequences of words,
capturing local dependencies. For example, after
the phrase “I am going to,” a model might assign high probability to words like “school,” “work,” or “sleep,” and very low probability to unrelated words like “banana.” These probabilities are not based on
understanding in a human sense, but on observed
patterns in data.
What is important here is not the complexity of
the model, but the conceptual shift. Language was no
longer treated as a purely symbolic system governed by
rules. Instead, it became a probabilistic system, where
structure emerges from patterns of usage. Even without
explicit knowledge of meaning, these models could
generate plausible sequences by following statistical
regularities.
This idea—that meaning can emerge from
probability—would become central to all future
developments in NLP.
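To make this concrete, a toy bigram model of the kind described above can be built from raw frequency counts. This is an illustrative sketch, not code from the article:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Estimate P(next_word | word) from bigram frequency counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    # Normalise each row of counts into a conditional distribution.
    return {prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
            for prev, nxts in counts.items()}

corpus = ["I am going to school",
          "I am going to work",
          "I am going home"]
model = train_bigram(corpus)
print(model["to"])     # {'school': 0.5, 'work': 0.5}
print(model["going"])  # {'to': 0.67, 'home': 0.33} (approximately)
```

Even this tiny model captures the intuition in the text: after “going to,” the continuations seen in the data receive high probability, and everything else receives none.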
2.3 Scaling the Same Idea: From
Simple Models to ChatGPT
By this point, one thing becomes clear: the core
idea of language modeling—predicting what comes
next—has not changed. What has changed is how much
context a model can use and how well it can capture
patterns across that context.
Early models were extremely limited. They could
only look at a few words at a time. This meant they
often failed when meaning depended on information
that appeared earlier in the sentence or even in previous
sentences.
To see this limitation, consider the sentence:
“Ravi dropped the glass. It shattered because it
was fragile.”
A simple model that only looks at the last few
words might struggle to understand what “it” refers
to. Is it the glass, or something else? Humans easily
resolve this because we use the broader context of the
sentence. Early models, however, often could not.
Modern systems like ChatGPT overcome this
limitation by considering much larger context. They
can connect words across an entire sentence—or even
multiple sentences—and identify relationships between
them. In the example above, such a model correctly
associates “it” with “the glass,” allowing it to generate
or interpret the sentence coherently.
Another important difference is how these models
learn. Instead of relying on simple frequency counts,
they learn from massive amounts of text and adjust
their internal parameters to improve predictions over
time. When the model makes an incorrect prediction, it
updates itself to reduce that error. This learning process
is guided by a measure called cross-entropy, which
essentially tells the model how far its predictions are
from the actual text.
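Concretely, cross-entropy is the average number of bits of surprise per word, $-\frac{1}{N}\sum_t \log_2 P(w_t \mid \text{context}_t)$. A minimal illustrative sketch (the dictionary-of-probabilities interface is a simplifying assumption):

```python
import math

def cross_entropy(step_probs, actual_words):
    """Average -log2 probability the model assigned to the true next words.

    step_probs   : one dict of {candidate word: probability} per step
    actual_words : the word that actually occurred at each step
    """
    total = 0.0
    for probs, word in zip(step_probs, actual_words):
        p = probs.get(word, 1e-10)   # tiny floor avoids log(0)
        total += -math.log2(p)
    return total / len(actual_words)

steps = [{"school": 0.6, "work": 0.3, "banana": 0.1}]
print(cross_entropy(steps, ["school"]))  # ~0.74 bits: confident and correct
print(cross_entropy(steps, ["banana"]))  # ~3.32 bits: surprise is expensive
```

Training drives this number down: each parameter update nudges the model toward assigning higher probability to what actually comes next.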
What emerges from this process is not just better
prediction, but the ability to capture deeper patterns
in language—such as relationships between words,
sentence structure, and even subtle contextual cues.
So while modern NLP systems may appear
fundamentally different, they are still following the
same principle introduced by Claude Shannon. The goal
remains unchanged: reduce uncertainty by making better predictions.
The difference is that today’s models can do this
across much larger contexts, making their predictions
appear far more intelligent.
2.4 Prediction, Compression, and the
Question of Understanding
As language models become more accurate in their
predictions, an interesting connection emerges between
prediction and compression. If a model can predict text
with high accuracy, it can also represent that text more
efficiently by exploiting its regularities. This is because
predictable elements require fewer bits to encode, while
unpredictable elements require more.
This idea leads to a powerful insight: a good
language model is also a good compression system. By
learning the statistical structure of language, the model
reduces redundancy and captures essential patterns.
In doing so, it mirrors the goals of information
theory—efficient representation and transmission of
information.
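The link can be made quantitative with ideal code lengths: an outcome of probability p costs about $-\log_2 p$ bits under an optimal code, so better prediction translates directly into shorter encodings. A small illustration (mine, not the article’s):

```python
import math

def code_length_bits(p):
    """Ideal code length, in bits, for an event of probability p."""
    return -math.log2(p)

# A word the model predicts with 90% confidence is cheap to encode...
print(code_length_bits(0.90))   # ~0.15 bits
# ...while a word it considers nearly impossible is expensive.
print(code_length_bits(0.001))  # ~9.97 bits
```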
However, this raises a deeper question. When
a system like ChatGPT generates coherent and
contextually appropriate responses, is it truly
understanding language, or is it simply performing
highly sophisticated prediction?
From Shannon’s perspective, understanding may
not be a separate process at all. It may emerge
naturally from the ability to model and predict language
effectively. If a system can consistently reduce
uncertainty and capture patterns, it begins to exhibit
behaviour that resembles understanding.
This does not fully resolve the debate, but it
reframes it. Instead of asking whether machines
understand language in a human sense, we can ask how
far prediction and compression can go in approximating
what we call understanding.
2.5 Conclusion
The evolution from Claude Shannon to ChatGPT
is not a story of conceptual revolution, but of continuity
and expansion. The central idea—that language can be
described in terms of probability and uncertainty—has
remained unchanged.
What has changed is our ability to apply this idea.
From simple probabilistic models to large-scale neural
networks, each step has brought us closer to capturing
the complexity of human language.
At its core, modern NLP still reflects Shannon’s
original insight:
To understand language is to reduce uncertainty
about it.
And in that sense, every prediction made by a
language model is part of a journey that began not with
language itself, but with a fundamental question about
information.
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical
Informatics. Her interests include applying basic
scientific research in computational linguistics,
practical applications of human language technology,
and interdisciplinary work in computational physics.
Part II
Astronomy and Astrophysics
Plasma Physics: Plasma Collisions & Transport
by Abishek P S
airis4D, Vol.4, No.4, 2026
www.airis4d.com
1.1 Introduction
Plasma collisions are far more complex than
the simple, short-range encounters we see in neutral
gases. In a plasma, the charged particles (electrons and ions) interact through long-range Coulomb forces,
meaning that each particle can influence many others
over significant distances. This makes collisions in
plasma not just isolated events but part of a collective,
interconnected system. Energy exchange and scattering
occur, but they are strongly shaped by the surrounding
environment, particularly through phenomena like
Debye shielding, where a cloud of opposite charges
forms around a particle and effectively reduces the
strength of its electric field beyond a certain distance.
This shielding alters how particles see each other,
softening the long-range interactions and giving plasma
its distinctive behaviour. Moreover, collisions in plasma
are tied to collective effects such as waves, instabilities,
and transport processes, which can redistribute energy
and momentum across the entire system. As a result,
plasma collisions are not merely about two particles
bumping into each other; they are about how countless
particles, through their electric fields, continuously
reshape the dynamics of the medium. This makes
understanding plasma collisions essential for explaining
conductivity, diffusion, heating, and the stability of both
natural plasmas, like those in stars, and engineered ones,
such as in fusion devices.
1.2 Nature of Collisions
In plasmas, the nature of collisions is profoundly
different from those in neutral gases, and this difference
stems from the fundamental forces at play. In a neutral
gas, collisions occur when molecules physically bump
into each other due to short-range forces like van der
Waals interactions. These encounters are localized,
meaning that only particles in close proximity exchange
momentum and energy. By contrast, plasmas are
composed of charged particles such as electrons and ions
that interact through the Coulomb force, which is long-
range and does not require direct contact. This means
that even particles separated by significant distances
can influence each other’s trajectories, creating a web of
interactions that is inherently collective. The presence
of many charged particles also leads to phenomena such
as Debye shielding, where the effective range of the
Coulomb force is modified by the surrounding cloud
of charges, ensuring that collisions are shaped by the
plasma environment rather than being isolated events
[1].
Another layer of complexity arises when we
distinguish between elastic and inelastic collisions in
plasmas. Elastic collisions conserve kinetic energy,
redistributing momentum among particles without
changing the total energy of the system. These are
important for processes like thermalization, where
particles gradually reach a common temperature
through repeated scattering. Inelastic collisions,
however, involve energy transfer into other processes
such as ionization, excitation of atomic states, or
radiation emission. For instance, when a fast electron
collides with an ion, it may excite the ion to a higher
energy state or even strip away another electron, leading
to ionization. These inelastic processes are crucial in
determining plasma radiation, energy loss mechanisms,
and the overall balance of energy within the system.
The coexistence of both elastic and inelastic collisions
makes plasma behaviour far richer and more varied than
that of neutral gases, since collisions can simultaneously
redistribute momentum and alter the internal energy
states of particles.
Taken together, the long-range nature of Coulomb
interactions and the duality of elastic versus inelastic
collisions highlight why plasma collisions are central
to plasma physics. They govern transport properties
like conductivity and diffusion, influence heating and
cooling processes, and play a decisive role in the stability
of both natural plasmas such as those in stars and
interstellar space and engineered plasmas in fusion
devices. Unlike the relatively straightforward collisions
in neutral gases, plasma collisions are embedded in a
dynamic, collective environment where every particle’s
motion is shaped by the fields of many others, making
them a fascinating and complex subject of study.
1.3 Binary Collisions
The study of plasma collisions often begins
with the simplification of a many-body system into
binary collisions. Although a plasma is inherently a
collective medium where countless particles interact
simultaneously, modelling collisions as two-body
encounters provides a tractable framework for analysis.
This approach relies on the concept of reduced mass,
which allows the motion of two interacting particles
to be described relative to their centre of mass. By
treating collisions in this way, researchers can derive
scattering angles, cross sections, and energy transfer
rates without being overwhelmed by the complexity of
the full many-body problem. While this simplification
does not capture every collective effect, it serves as a
foundational tool for understanding transport properties
and for building more sophisticated kinetic and fluid
models of plasma behaviour.
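As a concrete aside (not from the article), the reduced mass $\mu = m_1 m_2 / (m_1 + m_2)$ behind this two-body reduction already exhibits the electron-ion asymmetry discussed next:

```python
M_E = 9.109e-31   # electron mass, kg
M_P = 1.673e-27   # proton mass, kg

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system, mu = m1*m2/(m1 + m2)."""
    return m1 * m2 / (m1 + m2)

# Electron-proton: mu is essentially the electron mass, so the light
# electron does nearly all the deflecting while the ion barely recoils.
print(reduced_mass(M_E, M_P) / M_E)   # ~0.9995
# Proton-proton: mu is half a proton mass; momentum is shared equally.
print(reduced_mass(M_P, M_P) / M_P)   # 0.5
```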
A particularly important case is electron-ion
scattering, which illustrates the asymmetry inherent
in plasma collisions. Electrons, being much lighter than
ions, experience strong deflections when they encounter
the Coulomb field of an ion [2]. Their trajectories can
be significantly altered, leading to large-angle scattering
events that influence how electrons move through the
plasma. In contrast, ions, due to their much greater
mass, barely change direction when colliding with
electrons. This imbalance has profound consequences:
it determines the plasma’s electrical conductivity and
resistivity, since electron motion largely governs current
flow. Moreover, electron-ion collisions contribute to
energy exchange between species, helping equilibrate
temperatures and influencing how plasmas respond to
external fields. Researchers pay close attention to these
interactions because they directly affect confinement,
heating, and stability in laboratory plasmas, as well as
energy transport in astrophysical environments.
Thus, while the plasma is a many-body system, the
binary collision model provides a powerful lens through
which to analyse fundamental processes. By focusing
on electron-ion scattering, researchers uncover the
mechanisms that control macroscopic plasma properties,
bridging the gap between microscopic interactions
and large-scale phenomena. This dual perspective, simplifying to two-body physics while acknowledging collective effects, remains central to advancing both
theoretical plasma physics and practical applications
such as fusion energy research.
1.4 Collective Effects
Collective effects in plasmas are among the most
fascinating and defining features of the field, because
they highlight how the behaviour of individual particles
is inseparably linked to the dynamics of the entire
system. One of the most fundamental collective
phenomena is Debye shielding. In a plasma, when a
charged particle is introduced, the surrounding particles
rearrange themselves in such a way that the particle’s
electric field is partially cancelled beyond a certain
distance. This characteristic distance is known as
the Debye length, and it sets the scale over which
Coulomb interactions are effectively confined. Without
shielding, the Coulomb force would extend infinitely,
making collisions in plasma uncontrollably long-
ranged[2]. Debye shielding ensures that interactions
remain finite and manageable, allowing researchers to
treat plasma as a quasi-neutral medium where local
charge imbalances are quickly corrected by collective
rearrangements[3]. This phenomenon is not just a
mathematical convenience; it is a physical reality that
governs how plasmas respond to perturbations, how
waves propagate, and how instabilities develop.
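For reference, the electron Debye length in SI units is $\lambda_D = \sqrt{\varepsilon_0 k_B T_e / (n_e e^2)}$. A quick evaluation, with round-number plasma parameters assumed purely for illustration:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB   = 1.381e-23   # Boltzmann constant, J/K
E    = 1.602e-19   # elementary charge, C

def debye_length(t_e_kelvin, n_e_per_m3):
    """Electron Debye length in metres."""
    return math.sqrt(EPS0 * KB * t_e_kelvin / (n_e_per_m3 * E**2))

# Solar corona (~1e6 K, ~1e14 m^-3): shielding acts over millimetres.
print(debye_length(1e6, 1e14))    # ~6.9e-3 m
# Fusion-grade plasma (~1e8 K, ~1e20 m^-3): a small fraction of a millimetre.
print(debye_length(1e8, 1e20))    # ~6.9e-5 m
```

In both cases the Debye length is minute compared with the system size, which is exactly why quasi-neutrality is such a good approximation.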
Equally important is the concept of collision
frequency, which determines how often particles
interact and exchange energy. Unlike in neutral
gases, where collision rates depend primarily on
density and temperature, plasma collision frequencies
are influenced by additional factors such as the
charge states of ions and the degree of shielding.
For example, highly charged ions exert stronger
Coulomb forces, increasing the likelihood of deflecting
electrons, while Debye shielding reduces the effective
range of these interactions, modifying the overall
collision rate. This sensitivity makes plasma transport
properties such as electrical conductivity, viscosity, and
diffusion highly dependent on environmental conditions.
In astrophysical plasmas, variations in density and
temperature across regions like the solar corona or
interstellar medium lead to dramatic differences in
collision frequencies, shaping energy transport and
radiation processes. In laboratory plasmas, especially in
fusion devices, controlling collision frequency is critical
for achieving efficient confinement and minimizing
energy losses.
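These scalings can be made concrete with the commonly quoted formulary estimate of the electron-ion collision rate, $\nu_{ei} \approx 2.91 \times 10^{-6}\, n_e\, Z \ln\Lambda\, T_e^{-3/2}\ \mathrm{s}^{-1}$ (with $n_e$ in cm⁻³ and $T_e$ in eV). The prefactor is convention-dependent, so treat the sketch below as order-of-magnitude only:

```python
def nu_ei(n_e_cm3, t_e_ev, z=1, coulomb_log=10.0):
    """Approximate electron-ion collision rate, in s^-1.

    Formulary-style scaling: nu ~ n_e * Z * lnLambda * T_e**(-1.5),
    with n_e in cm^-3 and T_e in eV. Order-of-magnitude estimate only.
    """
    return 2.91e-6 * n_e_cm3 * z * coulomb_log * t_e_ev**-1.5

# Same density, hotter plasma: collisions become dramatically rarer,
# which is why fusion-grade plasmas are nearly collisionless.
print(nu_ei(1e14, 10))      # ~9.2e7 s^-1 at 10 eV
print(nu_ei(1e14, 10_000))  # ~2.9e3 s^-1 at 10 keV
```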
Together, Debye shielding and collision frequency
illustrate the collective nature of plasma physics. They
show that collisions cannot be understood in isolation
but must be analysed in the context of the plasma’s
self-organizing environment. For researchers, these
effects are central to bridging the microscopic physics
of particle interactions with the macroscopic behaviour
of plasmas, whether in stars, space, or fusion reactors.
By studying how shielding modifies forces and how
collision rates respond to changing conditions, scientists
gain insight into the fundamental mechanisms that
govern plasma stability, transport, and energy balance: knowledge that is indispensable for both theoretical advances and practical applications.
1.5 Types of Collisional Processes
The types of collisional processes in plasmas
represent a rich spectrum of interactions that connect
microscopic particle dynamics with macroscopic
plasma behaviour. Each type of collision contributes
uniquely to transport, heating, radiation, and stability,
making their study central to both theoretical and
applied plasma physics.
Electron-ion collisions are among the most
fundamental. Because electrons are much lighter than
ions, they are strongly deflected when interacting with
the Coulomb fields of ions [2]. These collisions lead
to scattering, which redistributes electron trajectories,
and to heating, as kinetic energy is exchanged between
species. Importantly, electron-ion collisions are the
primary mechanism behind plasma resistivity, since
they impede the free flow of electrons that carry current.
Understanding these collisions is crucial in fusion
research, where resistivity affects confinement and
energy losses.
Electron-neutral collisions introduce another layer
of complexity, especially in partially ionized plasmas.
When electrons collide with neutral atoms or molecules,
they can cause ionization, stripping electrons from
atoms and increasing plasma density. They can also lead
to excitation, where electrons in atoms are promoted
to higher energy levels, often followed by photon
emission. Conversely, recombination occurs when free
electrons reattach to ions, reducing ionization levels.
These processes are vital in astrophysical plasmas, such
as those in planetary atmospheres, and in laboratory
discharges, where they determine plasma composition
and radiation output[2].
Ion-ion collisions are heavier interactions that
primarily govern momentum transfer. Because ions
have comparable masses, their collisions are less about
deflection and more about redistributing momentum
across the plasma. This mechanism underpins plasma
viscosity, influencing how plasmas flow and how
energy is transported across magnetic fields. In
fusion devices, ion-ion collisions help equilibrate ion
temperatures, while in astrophysical plasmas they
shape large-scale dynamics such as shock formation in
supernova remnants.
Photon collisions highlight the coupling between
electromagnetic radiation and plasma particles. Photons
can ionize atoms (photoionization) or excite electrons to
higher energy states, contributing to plasma heating and
radiation balance. These interactions are especially
significant in astrophysical contexts, where stellar
radiation continuously ionizes interstellar gas, and in
laser-plasma experiments, where intense photon beams
drive ionization and heating.
We classify these processes into bound-bound,
bound-free, free-bound, and free-free transitions, each
describing a different energy exchange pathway. Bound-
bound transitions involve excitation between discrete
atomic energy levels, producing characteristic spectral
lines that serve as diagnostic tools. Bound-free
transitions correspond to ionization, where an electron
escapes from a bound state into the continuum. Free-
bound transitions describe recombination, where a
free electron is captured into a bound state, often
accompanied by photon emission. Finally, free-free
transitions, also known as bremsstrahlung radiation,
occur when free electrons are deflected by ions and
emit photons, a key mechanism of energy loss in hot
plasmas.
Taken together, these collisional processes form
the backbone of plasma physics, linking microscopic
interactions to macroscopic phenomena. For
researchers, they are not just abstract categories but
essential tools for diagnosing plasmas, predicting
their behaviour, and designing systems from fusion
reactors to astrophysical models that depend on a deep
understanding of how particles and radiation interact.
1.6 Plasma Collisions and Neutral
Gas Collisions
When comparing plasma collisions with neutral
gas collisions, researchers emphasize the fundamental
differences in the forces involved, the role of
collective effects, the definition of collisions, the
energy outcomes, and the dependence of collision
frequency on environmental conditions. Each of these
points highlights why plasma physics requires its own
specialized framework, distinct from the kinetic theory
of neutral gases.
Force type is the most striking distinction. In
neutral gases, collisions are governed by short-range
forces such as van der Waals interactions or direct
molecular contact. These forces act only when particles
are very close, making collisions localized events. In
plasmas, however, charged particles interact through
long-range Coulomb forces, meaning that even particles
separated by significant distances can influence each
other’s trajectories. This long-range nature makes
plasma collisions inherently collective and far more
complex to model.
Collective effects are another defining feature
of plasma collisions. In neutral gases, collisions
are essentially independent, with minimal collective
behaviour beyond bulk pressure and flow. In plasmas,
however, collective phenomena such as Debye shielding,
plasma oscillations, and wave-particle interactions
dominate. Debye shielding ensures that the Coulomb
force is effectively confined within a characteristic
Debye length, preventing infinite-range interactions.
Waves and instabilities further couple particles across
large distances, making plasma collisions inseparable
from the collective dynamics of the medium.
The definition of collisions itself differs between
the two systems. In neutral gases, collisions
are well defined as discrete, short-range
encounters between molecules. In plasmas, the
concept of a collision is meaningful mainly in weakly
coupled plasmas, where particle interactions can be
approximated as binary events. In strongly coupled
plasmas, where collective effects dominate, the notion
of a simple collision breaks down, and researchers must
rely on statistical or fluid descriptions instead.
Energy outcomes also diverge significantly. In
neutral gases, collisions primarily redistribute kinetic
energy among molecules, leading to thermalization. In
plasmas, collisions can produce a much wider range
of outcomes: ionization, where electrons are stripped
from atoms; excitation, where electrons are promoted
to higher energy states; and radiation emission, such
as bremsstrahlung or recombination radiation. These
processes not only redistribute energy but also change
the plasma’s composition and radiative properties,
making collisions central to plasma heating and cooling.
Finally, collision frequency depends on different
factors in plasmas compared to neutral gases. In
neutral gases, collision rates are determined mainly
by density and temperature, which set how often
molecules encounter each other and how energetic those
encounters are. In plasmas, collision frequency is more
complex: it depends not only on density and temperature
but also on charge states of ions and the degree of
shielding. Highly charged ions increase collision
likelihood, while Debye shielding reduces effective
interaction ranges[1]. This sensitivity makes plasma
transport properties, such as conductivity, viscosity,
and diffusion, highly dependent on environmental
conditions, whether in astrophysical plasmas or
laboratory fusion devices.
1.7 Challenges and Implications
The challenges and implications of modelling
plasma collisions highlight why plasma physics is
such a demanding yet rewarding field. The first major
challenge lies in the complexity of modelling plasma
collisions due to collective effects and long-range forces.
Unlike neutral gases, where collisions are short-range
and can be treated as isolated events, plasmas involve
charged particles interacting through Coulomb forces
that extend over significant distances. These interactions
are further modified by collective phenomena such
as Debye shielding, plasma oscillations, and wave-
particle interactions. As a result, collisions cannot
be understood simply as two-body encounters; they
must be analysed within the context of the plasma’s
self-organizing environment. This makes theoretical
modelling highly non-trivial, requiring advanced kinetic
theory, statistical mechanics, and numerical simulations
to capture the interplay between individual particle
dynamics and collective behaviour.
The implications of these collisions are profound
because they determine transport properties such
as electrical conductivity, viscosity, and diffusion.
Electrical conductivity in plasmas is largely governed
by electron-ion collisions, which impede electron flow
and introduce resistivity. Viscosity arises from ion-ion
collisions, redistributing momentum and influencing
how plasmas flow under external forces. Diffusion,
meanwhile, is shaped by both electron-neutral and
ion-neutral collisions, which determine how particles
spread through the plasma. These transport properties
are not fixed but vary dramatically with plasma density,
temperature, and charge states, making them highly
sensitive to environmental conditions. For researchers,
this means that understanding collisions is essential
for predicting how plasmas behave in both natural and
laboratory settings.
In the context of fusion research, controlling
collisions becomes absolutely crucial. Fusion plasmas
must be confined at extremely high temperatures and
densities to sustain reactions, but collisions can lead
to energy losses through radiation, resistivity, and
diffusion across magnetic fields. If collisions are
not properly managed, they can degrade confinement,
reduce efficiency, and destabilize the plasma[3]. On
the other hand, collisions also play a beneficial
role in equilibrating temperatures between electrons
and ions, distributing energy, and maintaining quasi-
neutrality. The challenge for researchers is to strike
a balance, minimizing detrimental collisional effects
while harnessing beneficial ones. This requires precise
control of plasma parameters, advanced diagnostic tools,
and sophisticated modelling to predict and optimize
behaviour.
In summary, plasma collisions are not just a
theoretical curiosity but a practical challenge with direct
implications for transport properties and fusion energy
development. Their complexity arises from long-range
forces and collective effects, their importance lies in
determining conductivity, viscosity, and diffusion, and
their control is central to achieving stable, efficient
plasma confinement in fusion devices. For researchers,
mastering the physics of collisions is one of the keys to
unlocking both the mysteries of astrophysical plasmas
and the promise of clean, sustainable fusion energy.
References
[1] Hinton, F. L. (1983). “Collisional transport in
plasma”. Handbook of Plasma Physics, 1(147), 331.
[2] Helander, P., & Sigmar, D. J. (2005). “Collisional Transport in Magnetized Plasmas” (Vol. 4). Cambridge University Press.
[3] Dinklage, A., Klinger, T., Marx, G., &
Schweikhard, L. (Eds.). (2005). “Plasma physics:
confinement, transport and collective effects” (Vol.
670). Springer Science & Business Media.
About the Author
Abishek P S is a Research Scholar in the Department of Physics, Bharata Mata College (Autonomous), Thrikkakara, Kochi. He pursues research in the field of theoretical plasma physics. His work mainly focuses on nonlinear wave phenomena in space and astrophysical plasmas.
Black Hole Stories-25
Some Black Hole Mergers From
LIGO-Virgo-KAGRA Observing Run O3
by Ajit Kembhavi
airis4D, Vol.4, No.4, 2026
www.airis4d.com
In this story we will consider some interesting
binary mergers detected during the observing runs O3
of the LIGO-Virgo-KAGRA detectors. But before that,
we will describe a few astrophysical concepts which
are important in understanding some of the mergers
detected in O3 and O4.
2.1 Black Hole Mass Gap:
We have seen in earlier stories that at the end of
the evolution of a star with mass of about 25 Solar masses,
the core of the fully evolved star collapses to a black
hole, and the outer layers are ejected in a supernova
explosion. The mass of the remnant black hole depends
on the mass of the initial star; its metallicity, which
is the abundance of elements heavier than helium; the
extent of mass lost by the star due to stellar winds which
can carry away a substantial amount of mass during the
course of the evolution; and how much matter falls back
onto the core during the supernova explosion.
It follows from the theory of stellar evolution that
there should be a gap in the mass of remnant black holes in the range of about 60–130 Solar masses. Why is there
such a gap? Stars begin their evolution by converting
hydrogen to helium through nuclear fusion. In the
process, a large helium core develops in the central
region. Stars of mass greater than 100 Solar masses
develop a helium core with mass greater than about
32 Solar masses. The temperature of such massive
cores is so high, that photons of the radiation have
sufficient energy to form electron positron pairs through
a process known as pair creation. With energy lost
to the pairs, the pressure due to the radiation reduces,
and the pressure exerted by the pairs is not sufficient to
compensate for the decrease. The core then collapses
very rapidly, and for helium core mass in the range of
about 64–135 Solar masses, the energy released in
the collapse leads to complete disruption of the star
in a supernova, with no remnant left behind. Such
an explosion is known as a pair instability supernova
(PISN). For stars which develop helium cores more
massive than 135 Solar masses, the pair instability leads
to a direct collapse to a black hole.
For stars whose helium cores are in the mass range
of about 32–64 Solar masses, the pair creation leads to
a series of oscillations of the star known as pulsational
pair instability (PPI), during which material is ejected
from the star, and it returns to a stable configuration.
After that there is a supernova explosion, and a black
hole is left behind, which has a mass less than it would
have had, in the absence of a PPI. The net result of
the above processes is an expected gap in the mass of
remnant black holes in the range of about 60–130 Solar
masses. As we will see later, this gap is being breached
by black holes in merging binary systems found through
gravitational wave detections.
Intermediate Mass Black Holes:
We have encountered two kinds of black holes
in our earlier stories: (1) Stellar mass black holes
which are the remnants formed in the evolution of
massive stars, which can range in mass from a few
Solar masses to about a hundred Solar masses or more,
depending on the mass of the star from which they
formed and the conditions. Such black holes have been
observed, for example, in X-ray binary systems. The
more massive black holes in this range can be formed
through processes mentioned in the discussion above on
the black hole mass gap. (2) Supermassive black holes
with mass greater than about 10^5 Solar masses. A very compact object with mass of 4.3x10^6 Solar masses,
which is believed to be a supermassive black hole, has
been detected in our Galaxy (see BHS-1). Black holes
with a mass of a billion Solar masses or more are
believed to be the central engines that power the highly
luminous active galactic nuclei (AGN) and quasars
which have been observed across the electromagnetic
spectrum, from radio to optical, to X-ray and Gamma-
ray wavelengths.
Black holes in the mass range of about 10^2 - 10^5
Solar masses are known as intermediate mass black
holes. The evidence for the existence of black holes in
this range is much weaker than for the other two types
of black holes, and the mechanisms for their formation
are not well established.
One possibility is that they formed in the early
Universe, through the direct collapse of very massive
stars known as Population III stars. These were the first
generation of stars which formed from the matter created
in the big bang and consisted of hydrogen and helium,
with only trace amounts of heavier elements. It is the absence of the heavier elements, and of the dust present in present-day interstellar matter, that makes the formation of such massive stars possible. The IMBH could also
form in the early Universe through the collapse of gas
clouds which have low rotation, which form a quasi-star.
Such an object has a black hole at the centre and energy
is released by matter falling on it, rather than through the
fusion of lighter elements into heavier ones, as happens in a normal star. The process is expected to produce IMBH.
IMBH can form in massive star clusters and globular clusters. The latter have about 10^5 stars, with a
high density of stars in their central region. The stars in
these clusters are in incessant motion, and can randomly
encounter other stars and merge with them, in a runaway
process so that a very massive star is built up. An IMBH
can form from the evolution of such massive stars. In
the dense environment of a cluster centre, it is also
possible for there to be encounters between black holes
formed through stellar evolution. In a close encounter
two black holes can form a binary system which can
eventually merge together to form a larger black hole.
IMBH are also associated with ultraluminous X-ray
sources and with dwarf galaxies, where they become
low luminosity active galactic nuclei. In spite of all
these possibilities, as of early 2026 there has not been an
unambiguous detection of an IMBH. We will see below examples of such processes spotted by gravitational wave detectors. In fact, some of the binary mergers detected through their gravitational waves could be the first reliable detections of IMBH.
The Third Observing Run O3
The observing run O3 began on April 1, 2019 and
ended on March 27, 2020, when it was cut short due to the Covid-19 pandemic. It had a break of a month in
October 2019 for commissioning instruments, which
divided O3 into O3a and O3b. During O3a a total
of 44 binary merger events were detected, and during O3b another 35 detections were made. That took the total number of merger events detected by the end of O3 to 90, including the three events detected in O1
and eight in O2. Most of the O3 binaries have black
holes as the two components (BH-BH). There are three
detections with a black hole as one component, with
the other component very likely to be a neutron star
(BH-NS), and one detection with two neutron stars
(NS-NS). The BH-NS and NS-NS mergers were not
detected at any electromagnetic wavelength unlike the
neutron star binary GW170817 from O2, which was
widely observed over the electromagnetic spectrum.
We will now describe some of the interesting sources
from O3.
GW190412: This merger was detected on April
12, 2019, during the second week of the observing run
O3a, by the two Advanced LIGO detectors and the
Advanced Virgo detector. It is estimated that the chance
that such a source would appear to be detected due to a
fluctuation in the noise ranges from < 1 per 10^5 years to < 1 per 10^3 years, depending on the method of analysis,
so the event is statistically highly significant.
The primary component of the binary was of
30.1 Solar masses, while the less massive secondary
component m2 was of 8.3 Solar masses. These masses
are within the range of component masses inferred
from observing runs O1 and O2, but the ratio of the
masses m2/m1 = 0.28 is much smaller than the ratios close
to one found for the binaries detected in the two runs.
The small mass ratio has implications for the possible
formation mechanisms for the binary. The final mass
of the remnant black hole after the merger after the
merger is 37.3 Solar masses and the dimensionless spin
parameter of the black hole remnant is 0.67 (the spin
parameter for a rotating black hole is
a
= cJ/GM
2
,
where J is the spin angular momentum of the black
hole, M its mass and G the constant of gravitation.
The maximum permitted value of
a
= 1
, see (BHS
9). The more massive black hole had dimensionless
spin parameter 0.44. The distance to the merger is
740 Megaparsec. The values of the parameters depend
somewhat on the exact method of analysis used.
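As a worked example of the spin-parameter definition above, the short sketch below inverts a = cJ/GM^2 to get the spin angular momentum J implied by the quoted remnant values (a = 0.67, M = 37.3 Solar masses). The constants are standard cgs values; the snippet is illustrative only.

```python
# Dimensionless spin parameter a = c J / (G M^2), so J = a G M^2 / c.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm/s
M_sun = 1.989e33    # Solar mass, g

a = 0.67            # dimensionless spin of the GW190412 remnant (from the text)
M = 37.3 * M_sun    # remnant mass (from the text)

J = a * G * M**2 / c
print(f"J = {J:.2e} g cm^2/s")  # spin angular momentum of the remnant
```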
The small mass ratio of the binary has an
interesting implication. To understand that, let us
first consider the electromagnetic radiation emitted by a charge in circular motion in a magnetic field. The
radiation emitted by such a charge can be considered to
be made up of components known as multipoles. The
lowest order of multipole present is known as dipole
radiation. This is the dominant component when the
charge moves non-relativistically, i.e. with speed much less than the speed of light. The frequency of the
dipole radiation is equal to the frequency of rotation of
the charge in its circular motion. As the speed of the
charge increases, other multipole components known
as quadrupole, octupole and so on make increasing
contributions. The frequency of the radiation emitted in
these components are higher harmonics of the frequency
of the dipole component, i.e. integral multiples of
the dipole frequency. The frequency of the dipole
component is known as the first harmonic. There is
no monopole electromagnetic radiation because of the
conservation of electric charge.
Gravitational radiation can also be considered to be made up of multipoles. In this case there is no monopole radiation, due to the conservation of mass-energy, and no dipole radiation, due to the conservation of momentum. The lowest order
gravitational radiation is quadrupolar in nature. For a
binary with equal mass components in circular motion
the quadrupolar radiation is by far the dominant one.
The dominant frequency of the radiation is twice the
orbital frequency with which the two component black
holes go round each other. However, when the mass ratio
becomes significantly less than one, as in the case of
GW190412, the higher multipoles become increasingly
important. The multipole content also depends on the inclination of the orbital plane relative to the line of sight from the observer to the binary system. The higher
multipoles have to be taken into account in the analysis
leading to various parameters of the binary. It is found
that the multipoles make a discernible difference to the
values of the parameters determined from the data.
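To make the harmonic structure concrete, here is a toy listing of the first few harmonics of an orbital frequency, following the multipole picture above; the 100 Hz value is purely illustrative and not a measured parameter of GW190412.

```python
# Harmonics of the orbital frequency: gravitational waves have no monopole
# or dipole term, and for a circular binary the quadrupole dominates at
# twice the orbital frequency; higher multipoles add higher harmonics.
f_orb = 100.0  # orbital frequency in Hz (illustrative value only)

labels = {1: "dipole (absent for gravitational waves)",
          2: "quadrupole (dominant)",
          3: "octupole",
          4: "hexadecapole"}
for n, label in labels.items():
    print(f"harmonic n={n}: {n * f_orb:6.1f} Hz  ({label})")
```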
GW190425: This event was detected on April 25,
2019 by just the LIGO detector at Livingston, Louisiana,
since the other LIGO detector at Hanford, Washington
State was temporarily offline. The Virgo detector did
not contribute to the detection itself, since the source
was too weak to be observed by it, but the data gathered
could be used in the subsequent analysis.
The mass of the more massive primary component
was found to be in the range of 1.61-2.52 Solar masses
with 90 percent confidence, while the mass of the less
massive secondary was in the range 1.12-1.68 Solar
masses. The total mass was independently determined
to be in the range 3.3-3.7 Solar masses. The distance to
the source is 159 Megaparsec.
The known masses of neutron stars in observed
X-ray binary and binary neutron star systems are in
the range 1.2-2.5 Solar masses. The lower estimated masses for the primary and secondary of GW190425 are consistent with this range, while the upper end of
the primary mass could be somewhat greater than the
maximum neutron star mass observed so far. But the
theoretical maximum for a neutron star mass is about
three Solar masses and it is therefore reasonable to
assume that both components are neutron stars. The total mass of GW190425 before the merger was thus in the range of 3.3-3.7 Solar masses. As of early 2026, there
are 17 known neutron star binary systems discovered
through electromagnetic observations. The maximum
total mass of these systems is 2.89 Solar masses. So
if GW190425 is indeed a binary neutron star, then its
total mass is in excess of the total neutron star binary
mass observed so far, and some novel mechanism may
be needed for its formation.
GW190425 is the second neutron star binary
merger observed. But unlike in the case of the first
binary neutron star merger GW170817, which we
described in detail in BHS-23, no electromagnetic
emission has been observed to be associated with
GW190425. There are two reasons why the
electromagnetic emission may have been missed:
GW190425 is at the larger distance of 159 Megaparsec
compared to the distance of 40 Megaparsec of
GW170817. Moreover, since GW190425 was observed
by a single detector, it could be localised only within
the much larger area of 8284 square degrees of the sky, compared to the 28 square degree localisation of GW170817. That makes it very difficult to find an electromagnetic counterpart of the merger.
If the two components are neutron stars, the tidal
effect of each component on the other would distort
its shape. Since black holes behave as point masses, they are not subject to such distortions. In the case of the
neutron star binary merger GW170817, tidal distortions
have been detected from the change that they make to
the gravitational wave signal. Tidal distortions have not
been confirmed in GW190425, so either one or both
components could be black holes. But that leads to
another interesting situation. The most massive neutron
star known has a mass of about 2.5 Solar masses, while
the least massive black hole known has a mass of about
5 Solar masses. So there is a mass gap in the 2.5 - 5
Solar mass range. It is not known whether this gap is
real and if it is, what would be the astrophysical reason
for the gap. If GW190425 is a black hole binary, then
the black holes would be significantly below the currently
known minimum black hole mass. If the primary is
a neutron star, it could be in the mass gap region. So
we see that GW190425 will require new astrophysical
ideas, regardless of the nature of the components.
GW190521: This detection was made on May 21,
2019 by the two LIGO detectors and the Virgo detector.
The gravitational wave signal lasted for a short duration
of about 0.1 s. The false alarm rate, which is the rate at
which such a signal would arise because of a fluctuation in the background, is less than 1 in 4990 years. The source
is of special interest because of the high masses of
the components: the primary was of 85 Solar masses
and the secondary 66 Solar masses. The total of 150
Solar masses was the largest mass detected until then.
It will be noticed here that the total mass as quoted is
different from the sum of the primary and secondary
masses. The reason is that the quoted values for each
of the three masses, primary, secondary and total, are
median values for the statistical distribution of each
of the masses obtained from detailed analysis of the
gravitational wave data. Therefore the total mass does not turn out to be simply the sum of the other two masses. Similar considerations apply to all the
masses quoted for this source, as well as for the other
gravitational wave sources described here.
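The point about medians is easy to demonstrate numerically: for skewed distributions, the median of a sum is generally not the sum of the medians. A minimal sketch with mock posterior samples (the distributions below are invented for illustration, not the actual LIGO-Virgo posteriors):

```python
import numpy as np

rng = np.random.default_rng(0)
# Mock, right-skewed posterior samples for the two component masses
# (illustrative shapes only; not the real GW190521 posteriors).
m1 = 60 + 15 * rng.gamma(2.0, 1.0, size=100_000)   # median near 85
m2 = 39 + 16 * rng.gamma(2.0, 1.0, size=100_000)   # median near 66

print("median of m1:            ", round(np.median(m1), 1))
print("median of m2:            ", round(np.median(m2), 1))
print("sum of the two medians:  ", round(np.median(m1) + np.median(m2), 1))
print("median of the total mass:", round(np.median(m1 + m2), 1))
# For skewed posteriors the last two numbers differ, which is why the quoted
# total mass is not simply the sum of the quoted component masses.
```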
The mass of the remnant black hole from the
merger was found to be 142 Solar masses, and 7.6
Solar masses were lost from the system to gravitational
radiation generated during the merger. The high value
of the lost mass makes this source the most energetic
merger detected until early 2026. The gravitational
radiation is mainly emitted over a short period of time
of about 0.1 second, and the peak luminosity, which is
the maximum rate of energy emission, was 3.7x10^56 erg/s. This luminosity is much larger than the luminosity of a supernova at its peak, which is about 10^44 erg/s, the luminosity of the brightest known quasar, which is about 10^48 erg/s, and the highest observed luminosity of a Gamma-ray burst, about 10^54 erg/s. While
the peak luminosity of the merger is far higher than
that of the other highly luminous sources, the burst
emission is of gravitational radiation, which can only be
detected by gravitational wave detectors, unlike the other
sources which emit at electromagnetic wavelengths.
The distance to the merger is 5.3 Gigaparsec. The
remnant black hole has a dimensionless spin parameter
of 0.72.
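The energetics quoted above follow directly from E = mc^2. A back-of-the-envelope check, using only the numbers given in the text:

```python
# Energy radiated by GW190521: about 7.6 Solar masses converted to
# gravitational waves over roughly 0.1 s (figures quoted in the text).
c = 2.998e10          # speed of light, cm/s
M_sun = 1.989e33      # Solar mass, g

E = 7.6 * M_sun * c**2          # total radiated energy, erg
L_avg = E / 0.1                 # average luminosity over ~0.1 s, erg/s

print(f"E     = {E:.2e} erg")       # about 1.4e55 erg
print(f"L_avg = {L_avg:.2e} erg/s") # about 1e56 erg/s, the same order as
                                    # the quoted peak of 3.7e56 erg/s
```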
The mass of the primary is in the mass gap region of 65-135 Solar masses, which exists due to the pair instability process which occurs during the course of evolution of a massive star, as discussed above. The mass of the remnant puts it in the range 100 - 10^5 Solar masses of intermediate mass black holes, also
discussed above, which makes it the first black hole
with such a measured mass. But the short duration of
this event, and the small number of cycles which have
been observed, lead to the possibility that this was not a
merger of a black hole binary in a nearly circular orbit,
as has been assumed. We will discuss such a possibility
further when we come to the source GW231123 from
the observing run O4.
The gravitational wave research group from the Indian Institute of Technology was involved in a study to assess the detection significance of GW190521, along with LIGO-Virgo colleagues. In
addition, the group contributed to assessing the distance
reach of various searches in the intermediate mass black
hole parameter space. The research group at the Indian Institute of Technology Gandhinagar was involved in
developing the filter bank used for the detection of the
black holes in the third observing run, along with other
LIGO-Virgo scientists.
GW190814: This merger was first detected on
August 14, 2019 by the LIGO detector in Livingston, and
by the Virgo detector. The later analysis also used data
from the LIGO detector at Hanford. The data covered
300 cycles of the binary rotation above a gravitational
wave frequency of 20 Hz, so it is possible to tightly
constrain the source properties. The mass of the primary
component of the binary was 23.3 Solar masses, while
the secondary component was of 2.59 Solar masses,
and the final mass of the black hole after the merger
is 25.6 Solar masses. The component masses
in this case are even more unequal than in the case of
GW190412, with the component mass ratio m2/m1 = 0.11.
As mentioned in that case, the small mass ratio means
that the higher order multipoles of the gravitational
wave emission should be present in the data. There is
strong evidence that such is the case with GW190814
data, which leads to more precise measurements of the
system parameters.
We mentioned in the description of GW190425
that there is a gap in the region 2.5-5.0 Solar masses
in the known mass of neutron stars and black holes,
with neutron stars being below the gap and black holes
above it. In the case of GW190814, the mass of the secondary is in the gap region, so it could be either an unusually
massive neutron star or a black hole with unusually low
mass. So while the massive primary can be identified
as a black hole, the nature of the secondary remains in
doubt. There has been no identified electromagnetic
counterpart of the merger, and other signs like a tidal
deformation of the secondary, which should be present
if it were a neutron star, have not been detected. It is also possible
that the secondary mass exceeds the maximum mass
permitted for a neutron star. So it seems likely that the
observed merger was of two black holes, rather than a
black hole and a neutron star. In that case, the low mass
ratio of the two black holes poses challenges to black
hole binary formation mechanisms.
GW200105 and GW200115: These two
sources of gravitational waves are BH-NS binaries
and are similar in nature, so we describe them together.
GW200105 was observed by LIGO Livingston and
Virgo on January 5, 2020. It has components with
mass 8.9 and 1.9 Solar masses respectively and the
mass ratio m2/m1 = 0.22. The distance to the source is 280
Megaparsec. From the data, and various astrophysical
assumptions, it can be estimated that the probability for
the secondary mass to be below the maximum mass of
a neutron star is in the range 89% - 96%. It is therefore
likely that GW200105 is a BH-NS binary merger.
GW200115 was observed by both LIGO detectors and the Virgo detector on January 15, 2020.
The component masses in this case are 5.7 and 1.5
Solar masses respectively, with mass ratio m2/m1 = 0.26.
The distance to the source is 300 Megaparsec. The
probability that the secondary mass is below the
maximum neutron star limit is in the range 87% - 98%, so this source too is likely to be a BH-NS binary
merger.
In the next story we will consider a few novel black
hole binaries detected in observing run O4, which will
provide us with further insight into the physics of black
holes and black hole binaries.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investigator
of the Pune Knowledge Cluster. He is a former director of the Inter University Centre for Astronomy and Astrophysics (IUCAA), Pune, and a former vice president of the International Astronomical Union. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
X-ray Astronomy: Theory
by Aromal P
airis4D, Vol.4, No.4, 2026
www.airis4d.com
3.1 Introduction
In the previous article, we discussed X-ray
emission from accretion disks in X-ray binaries and
compared its efficiency to that of nuclear fusion. We
concluded by asking whether the accretion disk is the
only source of X-ray emission in X-ray binaries. In
today’s article, we will focus on a specific type of X-
ray emission found in Neutron Star Low-Mass X-ray
Binaries: the thermonuclear X-ray burst. As a personal
note, my research centers on the observational studies
related to this phenomenon.
3.2 Thermonuclear X-ray burst
We have already discussed how accretion disks
form in X-ray binaries. In neutron star X-ray binaries,
if the neutron star is relatively old and has a weaker
magnetic field, the accreted material will eventually be
deposited onto its surface. If this accumulated matter
undergoes an unstable nuclear reaction, it can ignite and burn almost instantaneously, leading to a thermal runaway.
This thermal runaway happens because the
accreted hydrogen and helium fuel experiences intense
hydrostatic compression as it continues to pile onto
the surface of the neutron star. Within hours or
days, the accumulating material reaches extreme
ignition temperatures and densities. The core physical
mechanism driving this violent eruption is known as
the thin-shell instability.
Since the burning layer is exceptionally thin, only
a few meters deep compared to the neutron star's typical
radius of around 10 kilometers, the initial nuclear
heating causes the shell to expand, but this is insufficient
to reduce the local pressure and cool the region.
Additionally, in this dense environment, the electrons
are mildly degenerate, meaning their pressure does
not significantly depend on temperature. This further
prevents the expanding gas from effectively dissipating
the heat generated. As a result, the temperature spikes
dramatically, accelerating nuclear reaction rates and
triggering a localized thermonuclear runaway.
The specific dynamics of this thermonuclear
runaway depend heavily on the mass accretion rate
and the chemical composition of the infalling material.
At relatively low accretion rates, hydrogen burns
unstably via the Carbon-Nitrogen-Oxygen (CNO) cycle,
which can subsequently trigger helium ignition. At
intermediate mass accretion rates, hydrogen burns
continuously and stably via the hot CNO cycle,
completely exhausting itself and gradually building
up a dense, pure helium layer beneath it. Once critical
conditions are met, this pure helium layer violently
detonates via the triple-alpha process, producing a
short, intense X-ray burst that typically lasts around
ten seconds. At even higher accretion rates, the
conditions for helium ignition are met much faster,
before the hydrogen has fully burned, resulting in a
mixed hydrogen and helium flash. The initial helium
ignition generates extreme temperatures that trigger
breakout reactions, bypassing the standard CNO cycle
and initiating the rapid-proton (rp) process. During
the rp-process, a rapid succession of proton captures
and slower beta decays synthesizes heavier elements,
significantly extending the energy release and creating
a burst tail that lasts for tens to hundreds of seconds.
Figure 1: Graphical representation of the occurrence of a thermonuclear X-ray burst.
The ignition itself is highly complex; rather than
erupting uniformly across the entire sphere, the runaway
typically sparks at a localized point, often near the
equator, where the effective surface gravity is slightly
reduced by the star’s rapid rotation. From this ignition
point, a thermonuclear flame front propagates laterally
across the neutron star, engulfing the entire surface in
approximately one second. In the most powerful of these
events, the extreme local luminosity can temporarily
exceed the Eddington limit, causing the outward
radiation pressure to overcome the immense inward
gravitational pull. When this threshold is breached,
the outermost layers of the neutron star’s photosphere
are physically lifted off the surface and driven outward,
creating a Photospheric Radius Expansion (PRE) burst.
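The Eddington limit invoked here can be estimated from the standard formula L_Edd = 4*pi*G*M*m_p*c/sigma_T for a hydrogen atmosphere. A quick sketch, assuming a typical 1.4 Solar mass neutron star (the mass value is an illustrative assumption):

```python
import math

# Eddington luminosity L_Edd = 4*pi*G*M*m_p*c / sigma_T (hydrogen atmosphere).
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10          # speed of light, cm/s
m_p = 1.673e-24       # proton mass, g
sigma_T = 6.652e-25   # Thomson cross-section, cm^2
M_sun = 1.989e33      # Solar mass, g

M = 1.4 * M_sun       # assumed neutron star mass (illustrative)
L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"L_Edd = {L_edd:.2e} erg/s")  # about 1.8e38 erg/s
```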
As the thermonuclear fuel is exhausted, the lifted
photosphere eventually contracts and settles back onto
the stellar surface before an extended cooling phase
takes over. Ultimately, while continuous accretion
generates a steady baseline X-ray luminosity, the sudden,
localized burning of stored fuel temporarily outshines
this persistent emission by a factor of ten or more,
producing thermonuclear X-ray bursts.
Thermonuclear X-ray emission is predominantly
thermal and is well-described by a blackbody spectrum
with peak temperatures reaching 2–3 keV. Consequently,
bursts are most prominent in the soft X-ray energy
band (typically below 10 keV), where they temporarily
outshine the persistent X-ray emission from accretion
by an order of magnitude or more.
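That the emission peaks in the soft band follows from Wien's law for a Planck spectrum: the peak of B_nu lies at a photon energy of about 2.82 kT. A quick check for the quoted burst temperatures:

```python
# Peak of a blackbody spectrum B_nu occurs at h*nu ~ 2.82 k*T (Wien's law),
# so for burst temperatures kT = 2-3 keV the emission peaks below 10 keV.
for kT in (2.0, 3.0):                 # blackbody temperature in keV
    E_peak = 2.82 * kT                # photon energy of spectral peak, keV
    print(f"kT = {kT} keV -> peak near {E_peak:.1f} keV (soft X-ray band)")
```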
The intense photons from a thermonuclear X-
ray burst drastically alter the surrounding accretion
environment, primarily affecting the corona, the
accretion disk, and the companion star. When the
burst injects a massive influx of soft X-ray photons into
the hot electron corona, it triggers inverse Compton
scattering that rapidly cools the coronal plasma. This
cooling manifests observationally as a sharp deficit
in hard X-ray emission during the burst peak. In
extreme cases, the burst's immense radiation pressure
may completely blow the corona away, which could
temporarily shut off the collimated radio jets that are
linked to coronal magnetic fields. The burst also
strongly impacts the physical structure of the accretion
disk. Irradiation can heat the disk, causing it to puff
up and increase its scale height, while the intense
photon flux can induce Poynting-Robertson radiation
drag, forcing the inner disk material to drain onto the
neutron star. Furthermore, burst photons scattering off
the inner accretion disk generate observable reflection
features, such as fluorescent iron emission lines and
absorption edges. Finally, photons that reach the cooler
outer accretion disk and the donor star are reprocessed,
producing transient optical and ultraviolet flashes.
Studying thermonuclear X-ray bursts provides a
unique astrophysical laboratory to probe matter and
physical processes under extreme conditions that cannot
be replicated on Earth. One of the primary motivations
for studying these powerful explosions is to constrain
the Equation of State (EoS) of the supranuclear dense
matter inside neutron star cores. By using methods
like Photospheric Radius Expansion (PRE) bursts
and continuum spectrum modeling, researchers can
accurately measure a neutron star’s mass and radius,
thereby ruling out specific theoretical models of exotic
matter. Thermonuclear bursts also allow astronomers to
dynamically probe the accretion process by observing
how burst photons interact with the accretion disk
and the hot electron corona. Finally, they provide a
testing ground for complex nuclear physics, such as
the rapid-proton (rp) process, and multidimensional
hydrodynamics like flame spreading.
We will discuss more methods of producing X-rays
in the upcoming articles.
References:
Sudip Bhattacharyya, Measurement of neutron star parameters: A review of methods for low-mass X-ray binaries.
Anna L. Watts, Thermonuclear burst oscillations.
Nathalie Degenaar, David R. Ballantyne, Tomaso Belloni, Manoneeta Chakraborty, Yu-Peng Chen, Long Ji, Peter Kretschmar, Erik Kuulkers, Jian Li, Thomas J. Maccarone, Julien Malzac, Shu Zhang, and Shuang-Nan Zhang, Accretion disks and coronae in the X-ray flashlight.
Duncan K. Galloway and Laurens Keek, Thermonuclear X-ray bursts.
Duncan K. Galloway, Zac Johnston, Adelle Goodwin, and Alexander Heger, High-energy transients: thermonuclear (type-I) X-ray bursts.
L. Colleyn, Z. Medin, and A. C. Calder, Modeling X-ray Bursting Neutron Star Atmospheres.
W. H. G. Lewin, J. van Paradijs, and R. E. Taam, X-ray bursts.
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on observational studies of thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
Part III
Biosciences
The Genetic Whisper: Unlocking Biodiversity
with eDNA
by Geetha Paul
airis4D, Vol.4, No.4, 2026
www.airis4d.com
Figure 1: An illustration of the core elements of a
biodiversity research: the endemic Odonata of the
Western Ghats (specifically Vestalis apicalis) flying
near a water surface, integrated with a glowing DNA
double helix structure that transitions from blue to green.
The image is set against a lush, biodiverse wetland,
symbolising the intersection of traditional zoology and
modern eDNA technology.
1.1 Introduction
Environmental DNA (eDNA) is a powerful
biomonitoring method that allows scientists to detect
the presence of organisms, from bacteria to blue whales,
by analysing the genetic material they leave behind in
their environment.
For centuries, the study of biodiversity required the
physical presence of the observer and the observed. To
confirm a species lived in a habitat, a scientist had to see
it, catch it, or find its remains. In the dense, monsoon-
swept wetlands or the intricate canal networks of the
Western Ghats, this traditional approach often hits a
wall. Many species are elusive, nocturnal, or exist
only as microscopic larvae or nymphs hidden in the silt.
Enter Environmental DNA (eDNA), a revolutionary
molecular tool that has turned the natural world into a
giant, living library of genetic information.
eDNA refers to the genetic material shed by
organisms into their surrounding environment (water, soil, or even air) via skin cells, faeces, mucus, or
decaying tissue. This genetic soup allows researchers to
detect entire communities without ever seeing a single
animal. It is the forensic science of ecology, where a
single litre of river water can reveal the presence of
everything from endangered dragonflies to migratory
fish. For a professional scientist, eDNA represents a
shift from searching for life to filtering for it. It bypasses
the limitations of human observation, providing a high-
resolution map of life that is non-invasive and incredibly
sensitive. As we face a global biodiversity crisis, eDNA
has emerged as the early warning system of the 21st
century, offering a way to monitor ecosystem health in
real-time and at a scale previously thought impossible.
It is the bridge between traditional zoology and high-
throughput genomic science, ensuring that even the
most cryptic inhabitants of our planet are no longer
invisible to conservation efforts.
1.2 What is eDNA?
At its core, eDNA is extra-organismal DNA. Unlike
traditional sampling, which requires a tissue biopsy
from a captured specimen, eDNA is collected from the
medium in which the organism lives. In freshwater
systems, this DNA remains suspended in the water
column for roughly 7 to 21 days before degrading. By
targeting specific barcode genes, most commonly the
Cytochrome c oxidase I (COI) gene, scientists can
identify a species' genetic fingerprint from a simple
water sample.
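The quoted 7-21 day persistence window can be pictured with a simple first-order decay model. The sketch below assumes exponential degradation with an illustrative three-day half-life; real decay rates vary strongly with temperature, UV exposure and microbial activity.

```python
import math

# First-order decay: N(t) = N0 * exp(-lambda * t), lambda = ln2 / half-life.
half_life_days = 3.0          # assumed half-life; illustrative only
lam = math.log(2) / half_life_days

for t in (7, 14, 21):         # days after the DNA was shed
    frac = math.exp(-lam * t)
    print(f"day {t}: {frac:.1%} of the original eDNA signal remains")
```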
1.3 The Role of eDNA in the Modern
World
In an era defined by rapid climate change and
habitat loss, eDNA enables biosecurity and industrial-
scale monitoring. Its primary roles include:
Detecting Invasive Species: Identifying harmful
pests (like the Apple Snail) before they become
established and cause economic damage. Invasive
species typically follow an invasion curve. Traditional
monitoring (visual sightings) usually only detects a
species once it has already reached the Exponential
Growth phase, at which point eradication is nearly
impossible. eDNA, however, can detect the Genetic
Whisper of a single organism during the Lag Phase, allowing for immediate, low-cost intervention.
Tracking Rare Species: Confirming the survival
of Lazarus species or endemics in remote, inaccessible
areas of the Western Ghats. The ability to detect Lazarus
species, those thought to be extinct but rediscovered, and rare endemics is perhaps the most magical application
of eDNA. In the rugged, high-altitude streams of the
Western Ghats, traditional sampling is often dangerous
or physically impossible. eDNA allows us to survey
a mountain peak by simply sampling the stream at its
base.
Ecosystem Health Assessment: Providing a
Biotic Index that reflects the total health of a watershed
based on the presence of sensitive indicator species like
Odonates. By utilising Odonates as biological sensors,
the e-Biotic Index transforms molecular data into a high-
resolution map of watershed health. This framework
assigns sensitivity scores to Odonata families detected
via eDNA; for instance, the presence of intolerant
families, such as Gomphidae or Chlorocyphidae,
indicates a pristine, oxygen-rich environment, while a
dominance of tolerant species, such as Brachythemis
contaminata, signals a degraded or polluted ecosystem.
Figure 2: This infographic illustrates the Genetic Soup concept of environmental DNA (eDNA) within an aquatic ecosystem. It serves as a visual model of how biological information moves from an organism into the surrounding medium, a foundational principle for modern molecular monitoring.
Image courtesy: https://c02.purpledshub.com/uploads/sites/62/2023/02/what-is-eDNA-0f1b49d.png?w=1200&webp=1
By calculating a Multi-Metric Index (MMI) from these
genetic reads, researchers can objectively categorise
water bodies into conservation or restoration zones,
identifying early-stage ecological shifts long before
physical changes become visible to the naked eye.
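A toy version of such an index fits in a few lines. Everything below, the family sensitivity scores included, is hypothetical and only illustrates the scoring logic; a real e-Biotic Index would use calibrated scores and multiple metrics.

```python
# Hypothetical sensitivity scores (higher = more pollution-intolerant).
# Real index scores would come from calibrated field studies.
SENSITIVITY = {
    "Gomphidae": 9,        # intolerant: indicates pristine, oxygen-rich water
    "Chlorocyphidae": 9,   # intolerant
    "Libellulidae": 3,     # tolerant (includes Brachythemis contaminata)
    "Coenagrionidae": 4,   # tolerant
}

def biotic_index(detected_families):
    """Mean sensitivity score of the Odonata families detected via eDNA."""
    scores = [SENSITIVITY[f] for f in detected_families if f in SENSITIVITY]
    return sum(scores) / len(scores) if scores else None

print("pristine-type site:", biotic_index(["Gomphidae", "Chlorocyphidae"]))
print("degraded-type site:", biotic_index(["Libellulidae", "Coenagrionidae"]))
```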
1.4 Applications in New Era Science
1.5 The eDNA Metabarcoding Model
Modern science is moving toward Metabarcoding,
the ability to simultaneously sequence thousands of
species from a single sample. Metabarcoding is a
high-throughput molecular technique that enables the
identification of entire biological communities from
a single environmental sample. While traditional
DNA barcoding focuses on a single individual at a
time, metabarcoding uses Next-Generation Sequencing
(NGS) to read thousands of distinct DNA tags
simultaneously. Advances in technology have made
this process potentially faster, less expensive and more
thorough than traditional identification methods. The
workflow follows a rigorous pipeline that moves from
the riverbank to the bioinformatic cloud:
Figure 3: This diagram shows the steps in the
eDNA process, including sample collection from the
environment, DNA extraction, PCR amplification,
Sequencing, Sequence comparison with known
sequences, and species assignment.
Image courtesy: https://www.integratesustainability.com.au/wp-content/uploads/2019/11/Ruppert-eDNA-Metabarcoding.png
1.5.1 Field Collection & Filtration:
Water is collected and passed through fine filters
(typically 0.45 µm or 0.22 µm) to trap cellular debris,
scales, and metabolic waste.
1.5.2 DNA Extraction:
In the lab, the trapped cells are lysed (broken
open), and total genomic DNA is purified. This sample
is a genetic soup containing DNA from fish, insects,
bacteria, and even humans.
1.5.3 Library Preparation (Universal PCR):
Unlike surveys that use species-specific primers, metabarcoding uses Universal Primers. These
target a highly conserved region (e.g., a specific segment
of the COI or 12S rRNA gene) that is present in all target
organisms but contains sufficient internal variation to
distinguish species.
1.5.4 Multiplexing & Sequencing:
Unique molecular indices (barcodes for the
barcodes) are added to each sample so multiple sites
can be sequenced in one run on platforms like Illumina
MiSeq or NextSeq.
Figure 4: Without knowledge of deep-sea
species, effective conservation is impossible, leading
international marine scientists warn in a new policy brief
presented at the UN Biodiversity Conference (COP15)
in Montreal.
Credit: Senckenberg Research Institute and Natural History Museum. https://www.infohackit.com/wp-content/uploads/2023/02/Senckenberg-Infographic.png
1.5.5 Bioinformatics Pipeline:
The raw sequence data (millions of reads)
is filtered, denoised, and compared against global
databases like BOLD or GenBank. The output is
a comprehensive Taxon List of every species present
in that litre of water.
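The final assignment step can be illustrated with a deliberately simplified matcher: each read is assigned to the reference barcode with the fewest mismatches. Real pipelines compare reads against BOLD or GenBank with quality filtering, denoising and alignment-based similarity; the sequences below are invented placeholders.

```python
# Toy taxonomy assignment: match each read to the reference barcode with the
# fewest mismatches (real pipelines use alignment and similarity thresholds).
REFERENCE = {  # invented 12-bp "barcodes" standing in for COI fragments
    "ACGTACGTACGT": "Vestalis apicalis",
    "ACGTTTGTACGA": "Brachythemis contaminata",
}

def assign(read):
    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))
    best = min(REFERENCE, key=lambda ref: mismatches(read, ref))
    return REFERENCE[best], mismatches(read, best)

for read in ["ACGTACGTACGA", "ACGTTTGTACGA"]:
    taxon, d = assign(read)
    print(f"{read} -> {taxon} ({d} mismatch(es))")
```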
1.6 Deep-Sea Exploration:
Mapping life in the midnight zone where
submersibles cannot easily reach. In the midnight
zone or bathypelagic layer (1,000 to 4,000 meters
deep), the immense pressure and absolute darkness
make traditional visual surveys with Remotely Operated
Vehicles (ROVs) incredibly expensive and technically
limited. eDNA serves as a genetic telescope in these
depths, allowing scientists to identify giant squid, rare
bioluminescent fish, and entire microbial communities
by simply sampling the surrounding seawater. Since
DNA can persist in the cold, high-pressure environment
of the deep sea, a single water sample can provide a
snapshot of the hidden biodiversity that submersibles
often miss because their lights and noise scare away
sensitive deep-sea creatures.
Figure 5: Illustration of Environmental DNA
collected from open environments to identify terrestrial
vertebrates, as this genetic material remains detectable
in the air several hundred meters from its biological
source.
Image courtesy: https://tse2.mm.bing.net/th/id/OIP.8lrNxI4M 5EmgcDYG NlFQHaEK?rs=1&pid=ImgDetMain&o=7&rm=3
1.7 Airborne eDNA:
Sucking air through filters to detect birds and
mammals in tropical rainforest canopies. Airborne
eDNA (airDNA) represents the latest frontier in
terrestrial biomonitoring, particularly in dense, multi-
layered environments like the tropical rainforests of
the Western Ghats. In these habitats, traditional visual
surveys are often obstructed by thick canopy cover,
and many mammals and birds are highly elusive or
canopy-dwelling. By using high-volume air samplers
or specialised vacuum filters, researchers can capture
microscopic fragments of DNA, carried in skin cells, fur,
feathers, or dried saliva, that have become aerosolised.
This technique effectively turns the air into a biological
record, allowing for the simultaneous detection of entire
terrestrial communities from a single stationary point.
1.8 Ancient DNA:
Extracting eDNA from permafrost or cave
sediments to reconstruct ecosystems from thousands of
years ago. Extracting ancient environmental DNA (aDNA) from permafrost or cave sediments allows
scientists to perform biological time travel. Unlike
modern eDNA, which reflects current populations,
ancient eDNA is preserved for millennia by the
cold, stable conditions of ice or deep soil. By
sequencing these microscopic fragments, researchers
can reconstruct entire extinct ecosystems, identifying the plants, megafauna (like mammoths), and even the microbial communities that existed long before human records. This sedimentary ancient DNA (sedaDNA) provides a high-resolution chronicle of how life on Earth responded to past climate shifts, offering vital clues for predicting how current biodiversity might survive modern global warming.
Figure 6: A recent breakthrough, highlighted by The Conversation, reveals how analysing DNA from cave sediments could unveil long-hidden secrets about life during the Ice Age. This exciting research may reshape our understanding of the ecosystems that existed thousands of years ago and provide insight into the species, including humans, who once inhabited these regions.
Image courtesy: https://indiandefencereview.com/wp-content/uploads/2025/12/image-15.png
References
1. Beng, K. C., & Corlett, R. T. (2020). Applications of environmental DNA (eDNA) in ecology and conservation: opportunities, challenges, and prospects. Biodiversity and Conservation.
2. Banerjee, P., et al. (2022). When Conventional Methods Fall Short: Identification of Invasive Cryptic Golden Apple Snails Using Environmental DNA.
3. Bohmann, K., et al. (2014). Environmental DNA for wildlife biology and biodiversity monitoring. Trends in Ecology & Evolution.
4. Canals, O., et al. (2021). Environmental DNA reveals the seasonal dynamics of deep-sea fish communities. Scientific Reports.
5. Clare, E. L., et al. (2022). Measuring biodiversity from DNA in the air. Current Biology.
6. Gupte, R., et al. (2023). Environmental DNA metabarcoding reveals the hidden diversity of freshwater communities in the Western Ghats biodiversity hotspot. Scientific Reports.
7. Govindarajan, A. F., et al. (2021). Environmental DNA metabarcoding of the Deep Blue: Exploring marine biodiversity in the midwater and benthos. Frontiers in Marine Science.
8. Jerde, C. L., et al. (2011). Sight-unseen detection of rare aquatic species using environmental DNA. Conservation Letters.
9. Littlefair, J. E., et al. (2023). Airborne environmental DNA for terrestrial vertebrate community monitoring. Methods in Ecology and Evolution.
10. Thomsen, P. F., et al. (2012). Monitoring endangered freshwater biodiversity using environmental DNA. Molecular Ecology.
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical Fauna and Flora. airis4D carries out periodic and systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that can feed birds and maintains water bodies to help them survive the drought.