Cover page
Flying Lizard in airis4D Campus: The "flying lizard" of Kerala is the Southern Flying Lizard (Draco dussumieri),
a small, arboreal lizard from the Western Ghats known for its ability to glide between trees using wing-like flaps
(patagium) supported by elongated ribs. It is commonly seen in forests and plantations, especially males displaying
bright throat flaps (dewlaps). The picture was taken by Geetha Paul during a survey on the airis4D campus, where
the sight of the lizard gliding from tree to tree is a common one.
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.4, No.2, 2026
www.airis4d.com
Welcome to this edition of airis4D, where the
frontier of science begins just outside our door.
The cover image captures a moment of remarkable
natural engineering: a Southern Flying Lizard (Draco
dussumieri) in mid-glide across our campus in Kerala.
With its wing-like patagium extended, this agile reptile
exemplifies adaptation and elegant motion in the
Western Ghats. Photographed by Geetha Paul during
a field survey, this "flying lizard" serves as a fitting
emblem for our journal—reminding us that inspiration,
observation, and discovery are often found in the
dynamic intersection of our immediate environment
and the expansive quest for knowledge that drives the
research within these pages.
In this edition, we highlight a profound and
deeply regrettable oversight. The footnote to Prof.
Kembhavi’s article points to a critical injustice in the
history of science. The "matched filtering" technique he
mentions, the very computational backbone that made
detecting the infinitesimal ripple of a gravitational wave
possible, was pioneered by Indian scientists Professor
Sanjeev Dhurandhar and Professor B.S. Sathyaprakash.
Their theoretical work in the 1980s and
1990s provided the essential "template" to find the
gravitational wave signal buried in overwhelming noise.
Without their algorithms, LIGO’s detectors would have
been deaf to the cosmic events they were built to
hear. While the 2017 Nobel Prize in Physics rightly
honored the experimental visionaries Rainer Weiss, Kip
Thorne, and Barry Barish for making the detection a
reality, the foundational contribution of Dhurandhar
and Sathyaprakash was conspicuously absent from the
recognition.
That this foundational contribution remains under-
recognized on the global stage is a controversy in
itself. However, the greater shame lies in the
failure of their own nation to honor them
commensurately. Despite their world-altering
contribution to one of humanity's greatest
scientific achievements, they have not been awarded
India's highest civilian honors, such as the Bharat Ratna
or even a Padma Vibhushan, in a timely and unequivocal
manner. This neglect is not merely an administrative
lapse; it is a national failure to celebrate and protect
our own intellectual legacy. It sends a disheartening
message to young scientists in India about the value
placed on pure, foundational research that may not
have immediate commercial application but expands
the very horizons of human knowledge. True national
pride would demand that these architects of a new era in
astronomy be celebrated as heroes, ensuring their names
are etched alongside the discovery itself in textbooks
and public memory. The debt owed to them is not just
scientific; it is a matter of national honor.
The first article in this edition is by
Blesson George: “Introduction to Neuromorphic
Computing”. Neuromorphic computing is an emerging
paradigm that bridges the gap between conventional
computers—which excel at logic but struggle with
learning—and the human brain's efficient, adaptive
processing. By mimicking the brain's spiking neurons
and plastic synapses in hardware, neuromorphic systems
perform massively parallel, event-driven computation
with very low power consumption. Introduced by Carver
Mead in the 1980s and advanced by processors like
IBM TrueNorth and Intel Loihi, this approach avoids
the von Neumann bottleneck by integrating memory
and processing, enabling real-time learning. Using
models such as the Leaky Integrate-and-Fire neuron,
neuromorphic computing is well-suited for edge devices,
robotics, and AI applications, promising a future of
energy-efficient, intelligent machines.
The next article is by Abishek P S on "Plasma
Physics - Cometary Plasma". Cometary plasma forms
when a comet approaches the Sun, causing the
sublimation of its icy nucleus and the release of neutral
gases that expand into a coma. This material is then
ionized by solar ultraviolet radiation and the solar
wind, creating a dynamic mixture of ions, electrons,
and charged dust particles. This plasma environment
exhibits distinct structures such as an ion tail that
points away from the Sun, a curved dust tail, and
boundaries like a bow shock and ionopause. Studying
these processes provides a natural laboratory for
understanding fundamental plasma physics, including
charge exchange, magnetic field interactions, and the
transition from collisional to collisionless regimes. This
research is significant as it offers insights into solar
system evolution, serves as an astrophysical analogy
for phenomena like stellar winds, aids in space weather
studies, and informs laboratory plasma and fusion
research.
In the article "Black Hole Stories-24: Some
Black Hole Mergers From Gravitational Wave Detector
Observing Runs O1 and O2" by Ajit Kembhavi,
key binary black hole merger events detected by
gravitational wave observatories during their first two
observing runs, O1 (2015-2016) and O2 (2016-2017),
are discussed. From O1, it highlights GW151226,
a weaker signal involving black holes of 14.2 and
7.5 solar masses that merged into a 20.8 solar-
mass black hole. From O2, it describes several
mergers, including GW170104, GW170608 (the lowest-
mass binary at the time), the distant and luminous
GW170729, and GW170814—the first event detected
by three observatories (two LIGO and Virgo), which
confirmed gravitational wave polarizations as predicted
by general relativity. The analysis shows that the black
holes detected via gravitational waves are typically
more massive than those found in electromagnetic
observations of X-ray binaries, a bias attributed to
the greater detectability of higher-mass mergers. The
data also reveal a "mass gap" between the maximum
known neutron star mass and the lowest black hole mass
detected, with the remnant of the neutron star merger
GW170817 potentially falling into this intriguing range.
The article "X-ray Astronomy: Theory" by Aromal
P. provides a theoretical overview of the primary X-ray
sources in the universe and their emission mechanisms.
Stellar coronae, like the Sun's, produce X-rays via
magnetic reconnection, heating plasma to millions of
Kelvin for thermal emission. Supernova remnants
generate X-rays through both thermal bremsstrahlung
from shock-heated gas and non-thermal synchrotron
radiation from particles accelerated by a central pulsar.
X-ray binaries, the galaxy’s brightest sources, emit X-
rays from accretion disks; neutron star binaries release
energy from matter impacting a solid surface, while
black hole binaries produce X-rays from the inner
disk and a hot corona via inverse Compton scattering.
Isolated neutron stars emit through surface cooling or,
in the case of magnetars, from the decay of ultra-strong
magnetic fields. At cosmological scales, active galactic
nuclei produce X-rays via inverse Compton scattering in
a corona around supermassive black holes, and galaxy
clusters shine in X-rays due to thermal bremsstrahlung
from the hot intracluster medium heated by gravitational
infall.
The article "The Birth of Stars: Physical Processes
Governing Stellar Formation" by Sindhu G outlines
the physical processes governing stellar formation,
beginning within the cold, dense environments of giant
molecular clouds (GMCs). Through the interplay of
gravity, turbulence, magnetic fields, and cooling, these
clouds fragment into dense cores, which collapse under
gravitational instability when their mass exceeds the
Jeans limit. This collapse forms a central protostar,
deeply embedded in an envelope and powered by
gravitational accretion. Conservation of angular
momentum leads to the formation of an accretion
disk, which channels material onto the star and is
the birthplace of planets, while bipolar jets and
outflows remove excess angular momentum and inject
feedback into the surrounding medium. Once accretion
subsides, the pre-main-sequence star contracts until
hydrogen fusion ignites in its core, marking its
arrival on the main sequence. The timescale of
this process is mass-dependent, with massive stars
forming rapidly and exerting strong feedback that shapes
their natal environments. Overall, star formation is a
hierarchical, multi-scale process governed by gravity,
thermodynamics, and feedback, which sets the initial
conditions for planetary systems and galactic evolution.
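The Jeans criterion invoked in this summary can be made concrete with a quick order-of-magnitude calculation. The sketch below assumes the standard Jeans-mass expression and typical cold-core parameters (T about 10 K, n about 10^4 cm^-3); neither the formula nor the numbers are quoted from the article itself:

```python
import math

# Order-of-magnitude Jeans mass for a dense molecular cloud core.
# Assumed standard expression (not quoted from the article):
#   M_J = (5 k T / (G mu m_H))**1.5 * (3 / (4 pi rho))**0.5
k = 1.380649e-23      # Boltzmann constant, J/K
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27      # hydrogen atom mass, kg
M_sun = 1.989e30      # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass in kg for temperature T (K) and number density n (m^-3)."""
    rho = mu * m_H * n                        # mass density of the gas
    return (5 * k * T / (G * mu * m_H))**1.5 * (3 / (4 * math.pi * rho))**0.5

# Fiducial cold dense core: T = 10 K, n = 1e10 m^-3 (i.e. 1e4 cm^-3)
mj = jeans_mass(10.0, 1e10) / M_sun
```

For these fiducial values the Jeans mass comes out at a few solar masses, which is why cores of roughly stellar mass become gravitationally unstable and collapse.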
Linn Abraham’s article “Visualizing attention in
Vision Transformers” explains a method for visualizing
the decision-making process of Vision Transformer
(ViT) models by creating saliency maps from their
built-in attention mechanisms, a technique known as
Attention-Guided Class Activation Mapping (AGCAM).
Unlike traditional saliency methods designed for CNNs,
AGCAM leverages the self-attention scores naturally
learned by the transformer. The implementation
involves two key programming techniques: ”monkey
patching” to dynamically replace the model’s standard
attention module with a custom version during inference,
and the use of PyTorch forward and backward hooks
to intercept and store the attention matrices and their
gradients as the image passes through the network.
These captured values are then combined, normalized,
and multiplied to generate a final heatmap that highlights
the image pixels most influential for the model’s
prediction, providing a transparent visual explanation
of the ViT’s focus.
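The monkey-patching idea this summary describes can be shown in plain Python. The class, method, and values below are hypothetical stand-ins, not Linn Abraham's actual PyTorch implementation:

```python
# Illustrative sketch: monkey patching swaps a method at runtime, and a
# hook-like wrapper records intermediate values during "inference".

class Attention:
    """Hypothetical stand-in for a transformer attention module."""
    def forward(self, scores):
        total = sum(scores)
        return [s / total for s in scores]   # toy normalisation

captured = []                                # storage filled by the "hook"

_original_forward = Attention.forward        # keep a reference to the original

def patched_forward(self, scores):
    out = _original_forward(self, scores)    # call the original behaviour
    captured.append(out)                     # record the attention values
    return out

Attention.forward = patched_forward          # the monkey patch itself

attn = Attention()
result = attn.forward([1.0, 3.0])
# `captured` now holds the intermediate output for later visualisation
```

In the real AGCAM setting the same trick is applied to the ViT's attention module, with PyTorch hooks doing the capturing instead of a plain list.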
Aengela Grace Jacob’s article “Seed to Callus
Revolution: Optimising Germination and Tissue
Culture in Brassica nigra” details a protocol for the
in-vitro germination and callus induction of Black
Mustard (Brassica nigra). The process begins with
a critical, rapid surface sterilization of seeds using
ethanol and sodium hypochlorite to remove microbial
contaminants while preserving seed viability, followed
by rinsing in sterile water. The sterilized seeds are then
inoculated onto a Murashige and Skoog (MS) basal
medium supplemented with a specific hormonal ratio
(0.5 mg/L BAP to 3.0 mg/L 2,4-D) designed to favor
callus formation. Under controlled conditions, the seeds
germinate within 7-10 days. Subsequently, explants
from the seedlings are transferred to a callus-induction
medium containing the same hormones and incubated
in the dark, resulting in the successful formation of
embryogenic callus within 14-15 days. This optimized
tissue culture method demonstrates an efficient balance
between sterility and seed health, enabling reliable
propagation and setting the stage for further genetic
studies or micropropagation of this important oilseed
crop.
Geetha Paul's article "Gateway to Infection: The
Molecular Architecture of Viral Entry into Human T
Cells" explores the sophisticated molecular strategy viruses
like HIV-1 use to infect human T cells by hijacking
the Nuclear Pore Complex (NPC), the gateway to
the cell’s nucleus. The NPC’s structural integrity
relies on nucleoporins such as NUP93, which acts
as a critical adaptor, and NUP188, a flexible scaffold.
Viruses subvert these components; for example, they
can cleave NUP93 to disrupt host defense mechanisms
or interact with NUP188 to physically widen the
pore. To cross the NPC’s selective hydrophobic filter,
formed by FXFG peptide repeats, viral proteins like the
HIV-1 capsid mimic host transport receptors, binding
transiently to these peptides to "melt" through the
barrier. Understanding these specific protein-protein
interactions (PPIs) is crucial for developing a new class
of antiviral drugs, known as nuclear entry inhibitors,
which aim to block these critical contacts and trap the
virus outside the nucleus, thereby preventing infection.
Ajay Vibhute's article on astronomical computing
describes how the field has evolved from a
supplementary tool into a fundamental instrument
essential for modern astronomy, driven by the massive
and complex data produced by contemporary
observatories. The field addresses
unique challenges inherent in astronomical data,
such as noise, instrumental limitations, and high-
dimensional structures, by employing specialized
computational pipelines for calibration, reduction,
and analysis. This field encompasses the entire
data lifecycle—from acquisition and simulation to
visualization and archiving—effectively acting as
a virtual instrument that transforms raw, noisy
measurements into scientific insights. As data volumes
continue to grow, astronomical computing remains
a critical, interdisciplinary backbone that integrates
computer science, statistics, and physics to enable
discoveries beyond the reach of traditional observation
alone.
The article “The Disappearing Role of Judgment
in Automated Systems” by Jinsu Ann Mathew argues
that automation subtly erodes human judgment by
designing it out of decision-making processes. As
systems transition from providing suggestions to making
definitive outputs—like rankings or scores—human
oversight often dwindles to passive approval, creating a
workflow where predictions quietly become decisions.
This shift fosters automation bias, where the appearance
of objectivity makes dissent effortful, diffusing
responsibility and allowing the skill of judgment to
atrophy from disuse. The resulting systems are efficient
in stable conditions but become fragile when faced with
novel situations, as humans, relegated to monitoring,
lose the practiced capacity to intervene effectively. To
counter this, the author advocates for designs that
intentionally preserve judgment by making human
override practical and routine, ensuring automation
handles scale while humans retain responsibility for
interpretation and edge cases, ultimately creating more
reliable and adaptable systems.
News Desk
Draco dussumieri, Southern Flying Lizard of Western Ghats
Draco dussumieri, commonly known as the Southern Flying Lizard, is a remarkable agamid lizard endemic
to the Western Ghats and associated hill ranges of Southern India. It stands as a pinnacle of evolutionary
adaptation, specifically specialised for an arboreal life within the dense canopy of evergreen forests, areca nut
and rubber plantations. Measuring roughly 20 to 25 centimetres in length, its most distinctive anatomical feature
is the patagium, a pair of large, wing-like membranes supported by five to six greatly elongated thoracic ribs.
These ribs are mobile; when the lizard is at rest, they fold against the body to facilitate camouflage, but during a
leap, they are extended by specialised intercostal muscles to create a broad surface area for gliding. This allows
the lizard to traverse distances of over 30 meters between trees, effectively bypassing ground-level predators and
conserving energy.
The colouration of Draco dussumieri is a sophisticated example of disruptive camouflage and sexual
signalling. The dorsal surface is typically a mottled, bark-like grey or olive-brown, making the lizard almost
invisible when pressed flat against a tree trunk. However, this drab exterior hides vibrant secondary sexual
characteristics. Males possess a vivid, lemon-yellow gular appendage (dewlap) under the chin, which they flick
rapidly to signal dominance to rivals or attract mates. The underside of the patagium is similarly striking, often
featuring orange or yellow hues with dark spots, visible only during flight or during specific territorial displays.
Unlike the powered flight of birds or bats, Draco utilises a controlled glide, using its long, slender tail as a rudder
to navigate and its forelimbs to "steer" the leading edge of the wing membranes.
In terms of ecology and life history, Draco dussumieri is predominantly insectivorous, with a diet consisting
almost exclusively of ants and small termites found on the bark of trees. Their social structure is highly territorial;
a single male typically patrols a "harem" of several trees, each inhabited by a female. While they are strictly
arboreal, the life cycle necessitates a brief, perilous descent to the forest floor. Gravid females must leave the
safety of the canopy to excavate a small hole in the moist soil, where they deposit a clutch of 3 to 5 eggs before
immediately returning to the trees. This vulnerable moment is the only time these "dragons" are found on
the ground. Because of their reliance on specific microclimates and tree densities, they serve as important
bioindicators for the health of the Western Ghats forest ecosystems.
Classification & Taxonomy
Order: Squamata (Lizards and Snakes)
Family: Agamidae (Agamid lizards)
Genus: Draco (The "Gliding Dragons")
Species: Draco dussumieri
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Introduction to Neuromorphic Computing 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Basic Idea of Neuromorphic Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Why Neuromorphic Computing is Relevant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Emergence and Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.5 A Simple Spiking Neuron Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Difference from Conventional Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
II Astronomy and Astrophysics 4
1 Plasma Physics - Cometary Plasma 5
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Formation of Cometary Plasma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Plasma Structures around Comets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Plasma Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Significance of Studying Cometary Plasma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Black Hole Stories-24: Some Black Hole Mergers From Gravitational Wave Detector Observing Runs O1 and O2 11
2.1 Inter-University Centre for Astronomy and Astrophysics . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 The First Observing Run O1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 The Second Observing Run O2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Black Hole Masses From O1 and O2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 X-ray Astronomy: Theory 14
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Stellar Corona . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Supernova Remnants (SNRs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 X-ray Binaries (XRBs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 Isolated Neutron Stars (INS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.6 Active Galactic Nuclei (AGN) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.7 Galaxy Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4 The Birth of Stars: Physical Processes Governing Stellar Formation 17
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Giant Molecular Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Dense Cores and Cloud Fragmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.4 Gravitational Instability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5 Protostar Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.6 Accretion Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.7 Jets, Outflows, and Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.8 Pre-Main-Sequence Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.9 Timescales and Mass Dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.10 Observational Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.11 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5 Visualizing attention in Vision Transformers 20
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.2 Attention-Guided CAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.3 Monkey Patching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.4 PyTorch Hooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.5 Modify Attention using Hooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.6 Generating heat maps using Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
III Biosciences 23
1 Seed to Callus Revolution: Optimising Germination and Tissue Culture in
Brassica nigra 24
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.2 Process Involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2 Gateway to Infection: The Molecular Architecture of Viral Entry into Human T Cells 29
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2 The Structural Foundation: NUP93 and NUP188 . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3 The Role of Nup93: The Adaptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4 The Role of Nup188: The Large Scaffold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5 The Selective Filter: FXFG Peptides and the Hydrophobic Mesh . . . . . . . . . . . . . . . . . . 31
2.6 Protein-Protein Interactions (PPI) as Therapeutic Targets . . . . . . . . . . . . . . . . . . . . . . 32
2.7 Protein Interactions in Health and Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
IV Computer Programming 34
1 Astronomical Computing 35
1.1 What Is Astronomical Computing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2 The Disappearing Role of Judgment in Automated Systems 39
2.1 Prediction Quietly Becomes Decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2 Automation Bias, Objectivity, and the Loss of Responsibility . . . . . . . . . . . . . . . . . . . . 40
2.3 Judgment as a Skill That Fades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.4 Stability Hides Fragility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.5 Reintroducing Judgment by Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.6 Conclusion - Keeping Judgment in the Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Part I
Artificial Intelligence and Machine Learning
Introduction to Neuromorphic Computing
by Blesson George
airis4D, Vol.4, No.2, 2026
www.airis4d.com
1.1 Introduction
Conventional computers are designed to perform
calculations by following a fixed sequence of
instructions. They are highly efficient for numerical
and logical tasks, but they struggle with problems that
involve learning, adaptation, and pattern recognition.
Such tasks are easily handled by the human brain,
which operates with remarkable speed and extremely
low power consumption.
Neuromorphic computing is an emerging
computing paradigm that attempts to bridge this gap by
taking inspiration from the structure and functioning
of the human brain. Instead of forcing intelligence
through software alone, neuromorphic computing aims
to realize brain-like computation directly in hardware.
1.2 Basic Idea of Neuromorphic
Computing
In the human brain, information is processed by
billions of neurons that communicate through short
electrical signals called spikes. Learning occurs by
modifying the strength of connections, known as
synapses, between neurons.
Neuromorphic computing tries to mimic this
biological process using electronic circuits. Artificial
neurons generate spikes, and artificial synapses
store and update connection strengths. Unlike
traditional computers that process data continuously,
neuromorphic systems are event-driven and become
active only when spikes occur.
This approach allows neuromorphic systems
to perform massively parallel computation while
consuming very little energy.
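As a toy illustration of this event-driven style (the neurons, weights, and threshold below are hypothetical, chosen only to show that work happens solely when a spike event arrives):

```python
from collections import deque

# Hypothetical two-synapse network: neuron A feeds B and C with fixed weights.
synapse_weight = {("A", "B"): 0.6, ("A", "C"): 0.9}
potential = {"B": 0.0, "C": 0.0}   # membrane potentials of downstream neurons
threshold = 0.8

events = deque(["A"])              # neuron A emits a spike
fired = []
while events:                      # computation happens only per event
    src = events.popleft()
    for (pre, post), w in synapse_weight.items():
        if pre != src:
            continue
        potential[post] += w       # deliver the weighted spike
        if potential[post] >= threshold:
            fired.append(post)     # downstream neuron crosses threshold
            potential[post] = 0.0  # reset, and propagate a new event
            events.append(post)
```

Neuron B accumulates charge but never fires, so it triggers no further work at all; this idle-until-spiked behaviour is the essence of the energy savings described above.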
1.3 Why Neuromorphic Computing is Relevant
The growing interest in neuromorphic computing
is mainly due to the limitations of current computing
methods.
Modern artificial intelligence systems, especially
deep learning models, require large datasets, powerful
processors, and high energy consumption. This makes
them unsuitable for many real-time and low-power
applications.
Neuromorphic computing addresses these issues by:
• reducing power consumption,
• avoiding the von Neumann bottleneck by combining memory and computation,
• enabling real-time learning and decision-making.
As a result, neuromorphic systems are well suited
for edge devices, robotics, and autonomous systems.
1.4 Emergence and Development
The concept of neuromorphic computing was
introduced in the late 1980s by Carver Mead. He
proposed building electronic circuits that behave
similarly to biological neurons. His work laid the
foundation for brain-inspired hardware design.
Further development of this field was driven
by advances in neuroscience, microelectronics, and
artificial neural networks. The introduction of spiking
Figure 1: Comparison between a biological neuron
and an artificial spiking neuron used in neuromorphic
computing
neural networks (SNNs) marked an important step
toward more biologically realistic computation.
In recent years, neuromorphic processors such
as IBM TrueNorth and Intel Loihi have demonstrated
that large-scale brain-inspired hardware systems are
practically feasible.
1.5 A Simple Spiking Neuron Model
Spiking neural networks are commonly used in
neuromorphic computing to model the behavior of
biological neurons. One of the simplest and most
widely used models is the Leaky Integrate-and-Fire
(LIF) neuron.
The change in membrane potential of a neuron is
described by the equation

τ dV(t)/dt = −V(t) + R I(t)    (1.1)

where V(t) represents the membrane potential, I(t) is
the input current, R is the membrane resistance, and τ
is the membrane time constant.
When the membrane potential reaches a fixed
threshold value V_th, the neuron generates a spike and the
potential is reset. Although simple, this model captures
the essential spike-based behavior of biological neurons
and is widely used in neuromorphic systems.
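The LIF dynamics of Eq. (1.1) can be simulated in a few lines. This is a minimal Euler-integration sketch with arbitrary illustrative parameter values, not code for any particular neuromorphic processor:

```python
# Minimal Leaky Integrate-and-Fire simulation of Eq. (1.1):
#   tau * dV/dt = -V(t) + R * I(t)
def simulate_lif(current, dt=1.0, tau=10.0, R=1.0, v_th=1.0, v_reset=0.0):
    """Return the membrane-potential trace and the spike time steps."""
    v = v_reset
    trace, spikes = [], []
    for step, I in enumerate(current):
        v += (dt / tau) * (-v + R * I)   # leaky integration of the input
        trace.append(v)
        if v >= v_th:                    # threshold reached: spike and reset
            spikes.append(step)
            v = v_reset
    return trace, spikes

# A constant supra-threshold current makes the neuron fire periodically.
trace, spikes = simulate_lif([1.5] * 100)
```

With R = 1 and I = 1.5 the potential relaxes toward 1.5, crosses the threshold of 1.0 after several steps, resets, and repeats, producing the regular spiking that characterises the LIF model.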
1.6 Difference from Conventional
Computing
Neuromorphic computing differs fundamentally
from traditional computing methods.
In conventional computers, memory and
processing units are physically separate, and
computation is mostly sequential. In contrast,
neuromorphic systems integrate memory and
computation and operate in a highly parallel manner.
Moreover, traditional systems use continuous-
valued signals, while neuromorphic systems rely on
discrete spikes. This spike-based processing enables
efficient and low-power computation, especially for
sensory and real-time applications.
1.7 Applications
Neuromorphic computing has potential
applications in many areas, including:
• robotics and autonomous systems,
• speech and image recognition,
• edge computing and Internet of Things (IoT) devices,
• brain–machine interfaces.
These applications benefit from fast response,
adaptability, and low energy consumption.
1.8 Conclusion
Neuromorphic computing is a brain-inspired
computing approach that offers a promising alternative
to conventional computing methods. By combining
ideas from neuroscience and electronics, it enables
energy-efficient and adaptive computation.
Although neuromorphic computing is still an
evolving field, continued research and technological
progress may allow it to play a key role in the future of
artificial intelligence and intelligent machines.
About the Author
Dr. Blesson George presently serves as
an Assistant Professor of Physics at CMS College
Kottayam, Kerala. His research pursuits encompass
the development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Part II
Astronomy and Astrophysics
Plasma Physics - Cometary Plasma
by Abishek P S
airis4D, Vol.4, No.2, 2026
www.airis4d.com
1.1 Introduction
Space is the immense expanse that stretches beyond
Earth's atmosphere, a realm that appears empty yet is
far from void. Though it is dominated by vacuum,
this environment is constantly influenced by powerful
forces such as radiation from stars, vast magnetic fields
that thread through galaxies, and streams of charged
particles emitted by the Sun known as the solar wind.
These elements create a dynamic and ever-changing
stage where celestial bodies interact in remarkable
ways. Among them, comets stand out as natural
laboratories of physics. Composed of ice, dust, and
organic compounds, comets preserve material from
the early solar system, offering scientists a glimpse
into its primordial past. As they travel through space
and approach the Sun, the heat and solar wind trigger
sublimation of their icy cores, producing glowing
comas and long, luminous tails. These processes
reveal fundamental principles of thermodynamics,
plasma physics, and magnetohydrodynamics in action.
Moreover, the organic molecules found within comets
provide tantalizing clues about the origins of life,
suggesting that they may have played a role in delivering
essential building blocks to Earth. In this way, comets
are not merely wandering icy bodies but dynamic
testbeds where the laws of physics and chemistry unfold
on a cosmic scale.
When sunlight and the solar wind interact with
the gases released from a comet, they trigger a process
of ionization in which neutral atoms and molecules
are transformed into charged particles, creating what
is known as cometary plasma, a mixture of ions and
electrons. This plasma is not static; it is continuously
influenced and sculpted by the solar wind, which streams
outward from the Sun at high speeds. As a result, the
comet develops a distinct plasma tail that always points
directly away from the Sun, regardless of the comet's
orbital path or direction of travel. This phenomenon
illustrates the powerful influence of solar radiation
and charged particle flows on matter in space. By
studying cometary plasma, we get valuable insights
not only into the physical behaviour and evolution of
comets but also into broader plasma processes that occur
throughout the universe. These investigations shed
light on how radiation and magnetic fields interact with
matter, offering clues about the dynamics of planetary
magnetospheres, the behaviour of stellar winds, and
even the fundamental physics of plasma that governs
much of the cosmos [1,2].
1.2 Formation of Cometary Plasma
The formation of cometary plasma is a multi-
stage process that vividly demonstrates the interaction
between solar energy and the primordial materials of a
comet. It begins with nucleus outgassing, which occurs
as the comet approaches the Sun. The nucleus, a solid
core composed of frozen volatiles such as water ice,
carbon dioxide, and carbon monoxide, is heated by solar
radiation. This heating causes sublimation, the direct
transition of these ices into gas, releasing jets of vapor
along with entrained dust particles. These outgassing
events can be highly localized, producing geyser-like
eruptions from cracks and vents on the comet's surface,
and they intensify as the comet moves closer to the Sun.
The released gases expand outward to form the
coma, a vast, diffuse envelope surrounding the nucleus.
The coma acts as a temporary atmosphere for the
comet, composed primarily of neutral molecules and
dust grains. Its size can reach tens of thousands of
kilometres across, making it the most visible part of the
comet when observed from Earth. Within this coma,
the density of particles is still extremely low compared
to Earth's atmosphere, but it is sufficient to interact with
incoming solar radiation and charged particles.
Next comes ionization, a critical step in plasma
formation. Ultraviolet photons from the Sun and
energetic particles carried by the solar wind collide
with the neutral molecules in the coma, stripping away
electrons and creating ions. This process transforms
portions of the neutral gas into a plasma, a dynamic
mixture of positively charged ions and free electrons.
Because plasma is highly responsive to electromagnetic
forces, it becomes entrained in the solar wind flow. The
interaction between the solar wind and the cometary
plasma generates a distinct plasma tail, which always
points directly away from the Sun, regardless of the
comet's orbital trajectory. This tail can stretch millions
of kilometres into space, serving as a visible marker of
plasma dynamics in action.
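The ionization step above can be given a rough quantitative form with the classic Haser picture: neutrals stream radially away from the nucleus, thinning as 1/r², and are converted to ions over a characteristic photoionization lifetime. The sketch below is a minimal illustration; the production rate Q, outflow speed v, and lifetime τ are assumed round textbook-style values, not figures from this article.

```python
import math

def haser_neutral_density(r, Q=1e29, v=1e3, tau=1e6):
    """Haser-model neutral density (molecules per m^3) at distance r (m)
    from the nucleus: spherical expansion dilutes the gas as 1/r^2, and
    an exponential term accounts for ionization after a lifetime tau (s)."""
    return Q / (4.0 * math.pi * r**2 * v) * math.exp(-r / (v * tau))

def local_ion_production(r, Q=1e29, v=1e3, tau=1e6):
    """Ions created per m^3 per second: neutral density / ionization lifetime."""
    return haser_neutral_density(r, Q, v, tau) / tau

# Density falls steeply with distance, so most ions are born close to
# the nucleus: ~8e14 m^-3 at 100 km versus ~8e10 m^-3 at 10,000 km
# for these assumed inputs.
n_100km = haser_neutral_density(100e3)
n_10000km = haser_neutral_density(1e7)
```

The same density profile is what makes the inner coma collision-dominated and the outer coma collisionless, a transition the article returns to later.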
Finally, dust contribution adds further complexity
to the plasma environment. Alongside gases, dust
grains are continuously ejected from the nucleus during
outgassing. These grains can become electrically
charged through photoionization or collisions with
plasma particles. Once charged, they interact with the
surrounding plasma and magnetic fields, influencing
the structure and behaviour of both the dust tail and
the plasma tail. Dust particles also scatter sunlight,
contributing to the comet's brightness, and their
interactions with plasma can lead to phenomena such
as turbulence, recombination, and variations in tail
morphology [2].
Taken together, these stages of outgassing, coma
development, ionization, and dust-plasma interactions
illustrate how comets act as natural laboratories for
studying fundamental physical processes. They reveal
the interplay of thermodynamics, radiation physics,
plasma dynamics, and electromagnetic forces in a way
that is observable on a cosmic scale. By examining
cometary plasma, scientists gain not only a deeper
understanding of cometary behaviour but also valuable
insights into universal plasma processes that shape
planetary atmospheres, stellar winds, and the broader
dynamics of the solar system.
The plasma components of a comet are diverse and
intricately connected, each contributing to the dynamic
environment that surrounds the nucleus and extends into
the comet's tails. At the core of this system is the neutral
gas, which originates from the sublimation of volatile
ices such as water (H₂O), carbon dioxide (CO₂), and
carbon monoxide (CO) as the comet approaches the
Sun. These gases expand outward to form the coma, a
vast neutral atmosphere that serves as the reservoir for
plasma creation. The neutral molecules are the starting
point for ionization processes, and their abundance
determines the density and composition of the resulting
plasma.
Through the action of solar ultraviolet radiation
and collisions with energetic solar wind particles, these
neutral molecules undergo ionization, producing a
variety of ions. Common examples include H₂O⁺,
CO⁺, and CO₂⁺, along with other molecular ions
formed through fragmentation and chemical reactions
within the coma. These ions are highly responsive to
electromagnetic forces, and once created, they are swept
up by the solar wind, contributing to the formation of
the plasma tail. The ions also participate in complex
chemical pathways, recombining with electrons or
interacting with dust grains, which further modifies
the plasma's structure and behaviour [3].
Alongside ions, electrons are generated during
the ionization process. These free electrons are
extremely light and mobile compared to ions, allowing
them to move quickly through the plasma. Their
mobility makes them essential for maintaining charge
neutrality and for driving electrical currents within the
cometary environment. Electrons also play a role in
exciting neutral molecules, leading to the emission of
characteristic spectral lines that allow scientists to study
the composition of the comet remotely. In addition,
electron interactions can produce instabilities and waves
within the plasma, adding to its dynamic nature.
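The electrons' lightness and mobility set two standard plasma scales, the electron plasma frequency and the Debye screening length. The sketch below evaluates both for assumed coma-like values of density and temperature; the specific numbers are illustrative, not measurements quoted in this article.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity (F/m)
QE   = 1.602e-19   # elementary charge (C)
ME   = 9.109e-31   # electron mass (kg)
KB   = 1.381e-23   # Boltzmann constant (J/K)

def plasma_frequency_hz(n_e):
    """Electron plasma frequency f_pe for electron density n_e (m^-3)."""
    return math.sqrt(n_e * QE**2 / (EPS0 * ME)) / (2.0 * math.pi)

def debye_length_m(n_e, T_e):
    """Debye screening length for density n_e (m^-3), temperature T_e (K)."""
    return math.sqrt(EPS0 * KB * T_e / (n_e * QE**2))

# Assumed coma-like values: n_e = 1e8 m^-3 (100 cm^-3), T_e = 1e4 K.
f_pe = plasma_frequency_hz(1e8)    # ~9e4 Hz
lam_D = debye_length_m(1e8, 1e4)   # ~0.7 m
```

Electron oscillations near f_pe are exactly the Langmuir waves mentioned later in the article as diagnostics of the plasma density.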
Finally, dust particles released from the comet's
nucleus add another layer of complexity to the plasma
system. These grains, ranging in size from microscopic
specks to larger fragments, can become electrically
charged through photoionization or collisions with
plasma particles. Once charged, dust grains interact
with the surrounding plasma and magnetic fields,
influencing currents and wave activity. Their presence
can alter the distribution of ions and electrons, create
turbulence, and contribute to the splitting or bending of
cometary tails. Dust also scatters sunlight, enhancing
the comet's brightness and making its tails visible
from Earth, while simultaneously serving as an active
participant in plasma dynamics.
Together, these components, neutral gas, ions,
electrons, and dust particles, form a highly
interactive and evolving plasma environment. The
balance between them determines the structure of the
comet's coma and tails, while their interactions provide
a natural laboratory for studying fundamental processes
such as ionization, recombination, electromagnetic
coupling, and wave propagation.
1.3 Plasma Structures around Comets
The plasma structures around comets are among
the most striking and scientifically rich features in
the solar system, arising from the interplay between
cometary material and the solar wind. At the core of
these structures lies the coma plasma, which forms
when neutral gases released from the comet's nucleus
are ionized by solar ultraviolet radiation and energetic
particles. This ionized region envelops the nucleus,
creating a dense, glowing atmosphere of ions and
electrons that serves as the foundation for further plasma
interactions. The coma plasma is not static; it expands
outward and interacts continuously with the solar wind,
setting the stage for the development of large-scale
plasma tails and boundaries.
One of the most distinctive features is the ion tail,
also known as the plasma tail. Composed of ions and
electrons swept away by the solar wind, this tail always
points directly away from the Sun, regardless of the
comet's orbital path. Its orientation is dictated by the
solar wind’s flow and the interplanetary magnetic field,
which guide the charged particles into long, narrow
streams that can extend millions of kilometres into
space. The ion tail often appears bluish due to the
emission of light from ionized carbon monoxide and
other molecules, and its dynamic behaviour such as
sudden disconnections or kinks provides direct evidence
of solar wind variability and magnetic reconnection
events. In contrast, the dust tail is formed by solar
radiation pressure acting on dust grains released from
the nucleus. Unlike the plasma tail, the dust tail curves
along the comet’s trajectory, reflecting the combined
influence of the comet’s motion and solar radiation.
Dust grains scatter sunlight, making this tail bright and
often yellowish, and its structure reveals information
about particle sizes and the rate of dust release.
Beyond these visible tails, comets also exhibit
complex plasma boundaries that mark the transition
zones between cometary plasma and the solar wind. The
ionopause is the boundary where the dense plasma of the
coma meets the incoming solar wind, acting as a shield
that separates the comet’s internal plasma environment
from external influences. This boundary is shaped by
pressure balance between the cometary outflow and
the solar wind, and its position can shift depending on
the comet’s activity level and solar wind conditions.
Ahead of the comet, the bow shock forms as the
supersonic solar wind slows down and deflects around
the expanding coma. This shock region is analogous
to those found around planets with atmospheres or
magnetospheres, and it provides a natural laboratory for
studying shock physics and energy transfer in plasma
systems. Additionally, comets can develop magnetotail-
like structures, elongated regions of plasma shaped by
the solar wind’s magnetic field lines. These structures
resemble planetary magnetotails and demonstrate how
even small bodies like comets can generate large-scale
plasma phenomena through their interaction with the
solar wind [4].
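The pressure-balance idea behind these boundaries can be turned into an order-of-magnitude estimate: the outflowing gas exerts a ram pressure that falls as 1/r², and the boundary sits roughly where this matches the solar wind's dynamic pressure. This is a deliberately crude sketch; Q, the speeds, the solar wind density, and the water-dominated molecular mass are all assumed typical values, not numbers from the article.

```python
import math

MP = 1.673e-27   # proton mass (kg)

def solar_wind_ram_pressure(n_sw=5e6, v_sw=4e5):
    """Solar wind dynamic pressure n * m_p * v^2 (Pa), for an assumed
    proton density n_sw (m^-3) and bulk speed v_sw (m/s)."""
    return n_sw * MP * v_sw**2

def standoff_radius(Q=1e29, v_n=1e3, m_n=18 * 1.66e-27, p_sw=None):
    """Radius (m) where the cometary outflow ram pressure,
    Q * m_n * v_n / (4 pi r^2), balances the solar wind pressure.
    Ignores the exponential ionization loss for simplicity."""
    if p_sw is None:
        p_sw = solar_wind_ram_pressure()
    return math.sqrt(Q * m_n * v_n / (4.0 * math.pi * p_sw))

# For these assumed numbers the boundary sits at ~1.3e7 m (~13,000 km).
r0 = standoff_radius()
```

The useful feature of this toy is the scaling: the boundary moves outward as the comet becomes more active (larger Q) and inward when the solar wind pressure rises, which is qualitatively the behaviour described in the text.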
Together, these plasma structures, the coma
plasma, ion tail, dust tail, ionopause, bow
shock, and magnetotail-like regions, illustrate the
complexity of cometary environments and their role
as natural laboratories for plasma physics. They
reveal fundamental processes such as ionization,
electromagnetic coupling, shock formation, and
magnetic field interactions, all of which are central
to understanding not only comets but also the broader
dynamics of the solar system and interstellar space.
1.4 Plasma Processes
The plasma processes around comets are intricate
and unfold in multiple regimes, reflecting the transition
from dense, collision-dominated environments near the
nucleus to tenuous, collisionless plasma farther out
in space. Close to the nucleus, the collisional regime
dominates because the density of neutral gas released by
sublimation is relatively high. In this region, frequent
collisions occur between neutral molecules and newly
formed ions, leading to energy transfer, recombination,
and chemical reactions that continuously alter the
plasma composition. These collisions also slow down
particle motion, creating a more fluid-like behaviour
where the plasma is tightly coupled to the neutral gas.
This inner region is crucial for understanding how
cometary atmospheres evolve and how the initial plasma
is generated.
As the plasma expands outward into space,
the density of particles decreases significantly, and
the system transitions into a collisionless regime.
Here, collisions between particles become rare,
and collective electromagnetic effects dominate.
Instead of direct collisions, plasma behaviour is
governed by wave-particle interactions, instabilities,
and long-range electromagnetic forces. This regime
resembles the plasma environments found in planetary
magnetospheres and the solar wind itself, making
comets excellent natural laboratories for studying
universal plasma physics. In this region, charged
particles can be accelerated by electric fields, trapped
by magnetic fields, and influenced by plasma waves,
leading to highly dynamic structures such as the plasma
tail.
A particularly important process in cometary
plasma is charge exchange, where solar wind ions
interact with neutral atoms or molecules from the
comet's coma. In this interaction, a solar wind ion
captures an electron from a neutral particle, becoming a
slower-moving neutral atom, while the neutral particle
becomes ionized. This process alters the composition
of the plasma, introduces new ion species, and modifies
the energy distribution of particles. Charge exchange
also produces energetic neutral atoms that can escape
the plasma environment, providing valuable diagnostic
signals for spacecraft observations. It is one of the
key mechanisms by which the solar wind and cometary
material exchange energy and momentum.
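A quick way to judge where charge exchange is effective is the mean free path λ = 1/(nσ) of a solar wind ion through the coma neutrals. The cross-section and the two density values below are assumed round numbers for illustration only.

```python
def charge_exchange_mfp(n_neutral, sigma=2e-19):
    """Mean free path (m) for charge exchange, lambda = 1 / (n * sigma),
    with an assumed cross-section sigma ~ 2e-19 m^2."""
    return 1.0 / (n_neutral * sigma)

# Deep in a dense coma (n ~ 1e15 m^-3) the path is only ~5 km, so
# solar wind ions are efficiently neutralized; far outside (n ~ 1e7 m^-3)
# it grows to ~5e11 m and charge exchange becomes negligible.
inner = charge_exchange_mfp(1e15)
outer = charge_exchange_mfp(1e7)
```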
Another fundamental aspect is the interaction
with magnetic fields. The solar wind carries the
interplanetary magnetic field (IMF), which couples
directly with the cometary plasma. This coupling
generates turbulence, induces electric currents, and
shapes large-scale structures such as bow shocks
and magnetotail-like regions. The magnetic field
interaction is responsible for bending and stretching
the plasma tail, and it can trigger reconnection
events where magnetic field lines break and reconnect,
releasing energy and causing sudden changes in tail
morphology. These processes mirror those observed in
planetary magnetospheres, highlighting the universality
of plasma-magnetic field interactions.
Finally, comets are rich sources of plasma waves,
which arise from instabilities in the ionized environment.
Electrostatic waves, driven by fluctuations in charge
density, and electromagnetic waves, influenced by
magnetic field variations, propagate through the plasma.
These waves can accelerate particles, redistribute
energy, and create turbulence within the cometary
environment. For example, ion cyclotron waves and
Langmuir waves have been detected near comets,
providing direct evidence of plasma instabilities. Such
waves not only shape the local plasma dynamics but
also serve as diagnostic tools for understanding the
conditions within the comet’s coma and tail.
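The frequencies of the ion cyclotron waves mentioned here are set by the ion gyrofrequency f_c = qB/(2πm). A small sketch, with an assumed interplanetary field of a few nanotesla, shows why these waves sit in the millihertz band for heavy cometary pickup ions:

```python
import math

QE = 1.602e-19    # elementary charge (C)
AMU = 1.661e-27   # atomic mass unit (kg)

def cyclotron_frequency_hz(B_tesla, mass_amu, charge_state=1):
    """Ion gyrofrequency f_c = qB / (2 pi m) in hertz."""
    return charge_state * QE * B_tesla / (2.0 * math.pi * mass_amu * AMU)

# A water-group ion (18 amu) in an assumed 5 nT field gyrates at
# roughly 4 mHz, in the band where cometary ion cyclotron waves
# are typically reported.
f_c = cyclotron_frequency_hz(5e-9, 18)
```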
The plasma processes around comets, collisional
and collisionless regimes, charge exchange, magnetic
field interactions, and plasma wave propagation,
represent a complex interplay of physics that bridges
small-scale particle interactions with large-scale
electromagnetic phenomena. Studying these processes
in detail allows us to uncover fundamental principles
of plasma behaviour that apply not only to comets
but also to planetary atmospheres, stellar winds, and
astrophysical plasmas throughout the universe.
1.5 Significance of Studying
Cometary Plasma
The scientific importance of cometary plasma
lies in its ability to connect small-scale processes
around comets with large-scale questions about the
solar system, astrophysics, and plasma physics. One of
the most significant aspects is its role in understanding
solar system evolution. Comets are considered to be
among the most primitive bodies, preserving volatile
compounds such as water, carbon dioxide, and carbon
monoxide from the early solar nebula. When these
volatiles are released and ionized to form plasma, we
can analyse their composition and isotopic ratios. This
helps trace the origin and distribution of volatiles across
the solar system, offering clues about how planets
acquired their atmospheres and oceans, and whether
comets may have contributed to the delivery of life-
essential molecules to Earth. In this way, cometary
plasma serves as a direct link to the chemical and
physical conditions of the solar system’s formation
billions of years ago.
Cometary plasma also provides an astrophysical
analogy, functioning as a natural laboratory for plasma
processes that occur on vastly different scales in stars,
nebulae, and galaxies. The ionization, charge exchange,
and magnetic field interactions observed around comets
mirror phenomena seen in stellar winds, interstellar
clouds, and galactic plasmas. For example, the bow
shock formed ahead of a comet resembles shocks
around supernova remnants, while plasma instabilities
in cometary tails are similar to those in astrophysical
jets. By studying comets up close, researchers can test
theories of plasma physics under conditions that are
otherwise difficult to replicate in laboratories, thereby
gaining insights into universal processes that govern
the cosmos.
In addition, cometary plasma plays a crucial role
in space weather studies. The interaction between the
solar wind and cometary material creates boundaries
such as bow shocks, ionopauses, and magnetotail-like
structures, which are analogous to those found around
planets. Observing these boundaries in comets helps
scientists understand how solar wind shapes planetary
magnetospheres and atmospheres, and how energy and
momentum are transferred across plasma interfaces.
This knowledge is vital for predicting and mitigating the
effects of space weather on Earth, including disruptions
to satellites, communication systems, and power grids.
Comets, therefore, act as natural probes of solar
wind behaviour, offering real-time case studies of how
plasma boundaries form and evolve under varying solar
conditions.
Finally, cometary plasma contributes to fusion
and plasma physics research by providing real-world
examples of both collision-dominated and collisionless
regimes. Near the nucleus, the plasma is dense enough
that collisions between neutrals and ions dominate,
creating a fluid-like environment. Farther out, the
plasma becomes tenuous and collisionless, where
collective electromagnetic effects and wave-particle
interactions govern its behaviour. This transition
between regimes is of great interest to laboratory plasma
research, especially in the context of controlled nuclear
fusion, where understanding instabilities, turbulence,
and energy transport is critical. Observations of plasma
waves, charge exchange, and magnetic reconnection
in comets provide natural experiments that inform the
design and interpretation of laboratory plasma systems.
The study of cometary plasma is important because
it bridges planetary science, astrophysics, and applied
plasma physics. It helps reconstruct the history of the
solar system, serves as an analogy for cosmic plasma
processes, enhances our understanding of space weather,
and informs laboratory research aimed at harnessing
fusion energy. By examining comets, scientists gain
access to a wealth of information about how matter
and radiation interact across scales, making these icy
wanderers invaluable laboratories for exploring the
fundamental physics of the universe.
References
[1] Goetz, C., Gunell, H., Volwerk, M. et al.
Cometary plasma science. Exp Astron 54, 1129–1167
(2022). https://doi.org/10.1007/s10686-021-09783-z
[2] Miyamoto, K. (1979). Plasma Physics for
Nuclear Fusion; Wallis, M. K. (1967). The physics of
cometary plasma. Planetary and Space Science, 15.
[3] Schmidt, H. U., Wegmann, R., Huebner,
W. F., & Boice, D. C. (1988). Cometary gas and
plasma flow with detailed chemistry. Computer Physics
Communications, 49(1), 17-59.
[4] Cravens, T. E. (1989). Cometary plasma
boundaries. Advances in Space Research, 9(3), 293-
304.
About the Author
Abishek P S is a Research Scholar in
the Department of Physics, Bharata Mata College
(Autonomous), Thrikkakara, Kochi. He pursues
research in the field of theoretical plasma physics.
His work mainly focuses on nonlinear wave
phenomena in space and astrophysical plasmas.
Black Hole Stories-24
Some Black Hole Mergers From Gravitational
Wave Detector Observing Runs O1 and O2
by Ajit Kembhavi
airis4D, Vol.4, No.2, 2026
www.airis4d.com
2.1 Introduction
In this story we will consider some of the binary
black holes detected during the observing runs O1 and
O2. These examples are indicative of the variety of
binary mergers found in these two runs.
2.2 The First Observing Run O1
The first observing run of the aLIGO detectors
began on September 12, 2015 and continued until
January 19, 2016. During this run a total of three black
hole binary merger events were detected, including
the very first detection GW150914, which we have
described in BHS-21. Here we will describe another
black hole binary from O1, GW151226.
2.2.1 GW151226:
This is again a black hole binary source which
merged to form a single black hole. It was detected, as
its name implies, on December 26, 2015, i.e., on Boxing
Day, which is the day following Christmas Day. It
was observed at 03:38:53 UTC by the LIGO detectors
at Livingston and Hanford, the inferred merger time
at Livingston being one millisecond earlier than at
Hanford.
Figure 1: Black hole merger event GW151226. Upper
panel: The waveform as estimated from theory is
shown. The inset boxes show details of the model
in three time intervals, together with numerically
calculated models for the event. Lower panel: the
decrease in separation as the components spiral in, and
the consequent increase in their relative velocity are
shown. Image Credit: B. P. Abbott et al. Physical
Review Letters volume 116, p. 241103, 2016.
The signal in this case was weaker than in the
case of GW150914, and the event was spread over one
second, compared to the 0.2 second spread of the earlier
detection. As a result, it was more difficult to detect
the signal, and a special technique known as matched
filtering had to be used. This was first developed in the
context of gravitational wave detection by Bangalore
Sathyaprakash and Sanjeev Dhurandhar. During the
1 s that the signal was detected, it went through 55
cycles, which are shown in Figure 1. The height of the
peaks increases with time, as does the frequency of the
signal, since the two components of the binary spiral
towards each other due to the energy loss to gravitational
wave emission. The peak frequency reached is 450 Hz,
which means that the two components approach each
other very closely without being destroyed, and must
therefore be black holes. The ringdown to single black
hole formation is clearly seen in the inset on the extreme
right. Calculations show that the two black holes in the
binary are of about 14.2 and 7.5 Solar masses, which
makes them significantly less massive than the black
holes in GW150914. The mass of the spinning black
hole produced through the merger is about 20.8 Solar
masses, so that about a Solar mass is lost from the
system due to the emission of gravitational waves. The
source is at a distance of about 1.4 billion light years
(430 Mpc) from us.
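Matched filtering, in essence, slides a theoretical waveform template along the noisy data stream and looks for the lag at which the correlation peaks. The following toy example is an illustration of the principle, not LIGO analysis code: the chirp-like template, sampling rate, and noise level are all invented, but the recovery of a signal buried below the noise at the correct time offset captures the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                  # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)

# Toy chirp template: amplitude and frequency both rise with time,
# loosely imitating an inspiral waveform (illustrative only).
f0, k = 35.0, 200.0
template = t * np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Bury the template in Gaussian noise at a known sample offset.
true_offset = 1000
data = rng.normal(0.0, 1.0, 2 * fs)
data[true_offset:true_offset + len(template)] += template

# Matched filter: cross-correlate the data with the template and
# take the lag with the largest output.
correlation = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(correlation))
```

Because the correlation adds up the signal coherently over all 55-odd cycles while the noise adds incoherently, the peak stands far above the background even when no single cycle is visible by eye, which is why the technique was essential for a weak, one-second-long event like GW151226.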
2.3 The Second Observing Run O2
After detector and system upgrades, the second
observing run O2 began on September 30, 2016
and continued until August 25, 2017. During this
period eight detections were made, including the
binary neutron star merger GW170817, which was
also detected at electromagnetic wavelengths. We have
described the source in detail in BHS-23. We will
now describe four black hole mergers of those detected
during O2, GW170104, GW170608, GW170729 and
GW170814.
GW170104: This source was detected on January
4, 2017, and was the first source to be found after the
start of the second observing run. Detailed analysis
showed that it was a black hole binary which merged
to form a single black hole, with masses of the two
components in the initial binary system being 31.2 and
19.4 Solar masses. The final black hole has mass 48.7
Solar masses, so that about two Solar masses were lost
from the system upon merger. The source is at a distance
of about 2.9 billion light years (890 Mpc).
GW170608: This source was detected on June 8,
2017. It was again a black hole binary which merged
to form a single black hole. The masses of the two
components in the binary system were 12 and 7 Solar
masses respectively. About one Solar mass was radiated
away during the merger. The source is at a distance of
1.1 billion light years (340 Mpc). The masses of the
black holes in this source were considerably smaller
than the black hole component masses found in other
sources observed until this discovery.
GW170729: This source was the most distant and
luminous source observed in O2. The merger occurred
about 5 billion years ago, that is before the formation
of the Solar system, and about 5 Solar masses were
lost from the system in the form of gravitational waves
emitted during the merger.
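The quoted mass deficits translate into energy through E = Δm c². As a back-of-envelope sketch (not a calculation from the article), the roughly 5 Solar masses radiated by GW170729 corresponds to about 9 × 10⁴⁷ joules:

```python
C = 2.998e8        # speed of light (m/s)
M_SUN = 1.989e30   # solar mass (kg)

def gw_energy_joules(delta_m_solar):
    """Energy radiated as gravitational waves for a mass deficit
    of delta_m_solar Solar masses (E = Delta m * c^2)."""
    return delta_m_solar * M_SUN * C**2

# ~1 Solar mass (GW151226)  -> ~1.8e47 J
# ~5 Solar masses (GW170729) -> ~9e47 J
```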
GW170814: This is a black hole binary detected
on August 14, 2017, with the two components found
to have about 30.5 and 25.3 Solar masses, while the
mass of the black hole after the merger is about 53.2
Solar masses. The source is at a distance of about 1.8
billion light years (560 Mpc) from us. The novelty
in the detection is that for the first time the source
was observed by three detectors: the two advanced
LIGO detectors at Livingston and Hanford and the
advanced VIRGO detector in Italy; this was also the first
detection by VIRGO. This enabled the direction of
the source in the sky to be limited to a region of
60 square degrees. This area is much smaller than
the possible regions in the sky for the other sources,
which were observed only by the two advanced LIGO
detectors. Using observations from the three detectors
it has been possible to measure gravitational wave
polarisation. These measurements are fully consistent
with the prediction of general relativity, and are not
consistent with results expected from other competing
theories of gravity. This detection was followed in three
days by the neutron star binary GW170817, which we have
discussed in detail in BHS-23.
2.4 Black Hole Masses From O1 and
O2
Figure 2: The 10 black hole binaries and one neutron
star binary whose mergers were observed in observing
runs O1 and O2. Details of the figure are explained in
the text. Image Credit: LIGO/Frank Elavsky/Northwestern. This diagram was also used in
BHS-1.
The ten black hole binary mergers and the one neutron
star binary detected in the observing runs O1 and
O2 are shown schematically in Figure 2. For each black
hole merger, a set of three blue circles indicates the
masses of the two components of the binary before
merger and the mass of the remnant post-merger. The
purple circles indicate the masses of black holes detected
as components of X-ray binary systems. These black
holes are detected through X-ray and optical observations
of the binary systems, that is, through electromagnetic means. The
purple circles together have been labelled as EM black
holes, while the blue circles are labelled as LIGO-
Virgo black holes. It is seen from the diagram that
the masses of the LIGO-Virgo black holes in many
cases are significantly larger than the masses of the
EM black holes. The reason is that the larger the
black hole mass, the greater is the gravitational wave
luminosity of the system and shorter is the time over
which the signal is spread out at the merger. This makes
the signal more easily detectable over the background
noise. It is expected that there will in fact be many
more low mass black hole binaries than the high mass
binaries which have been detected so far. The lower
mass systems will be detected with increased sensitivity
of the gravitational wave detectors.
Each yellow circle indicates the mass of a neutron
star, which was electromagnetically detected as the
compact component of an X-ray binary system. The
EM neutron star masses in the figure are all well below
the maximum mass limit for neutron stars, which is
about three Solar masses. In the case of the binary
neutron star merger event GW170817 shown in the
figure, the mass of the remnant is estimated to be
2.82 Solar masses, which is at the upper end of the
permissible mass of neutron stars. The remnant could
very well be a black hole. In that case it would be
the only black hole which is in the mass range of 2-5
Solar masses. This region is known as the mass
gap region because of the lack of black holes in it.
Binary black holes with pre- or post-merger component
mass in this region would be difficult to detect through
their gravitational wave emission, given the present
sensitivity of the detectors, and none has been found
through electromagnetic observations. It is possible of
course that there are no black holes, or at least relatively
few black holes, with mass in this region. That would
be a very interesting astrophysical fact which would
need to be carefully examined and explained.
In the next story we will consider binary mergers
detected in O3 and O4.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investigator
of the Pune Knowledge Cluster. He was formerly the
director of the Inter-University Centre for Astronomy
and Astrophysics (IUCAA), Pune, and a vice president
of the International Astronomical Union. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
X-ray Astronomy: Theory
by Aromal P
airis4D, Vol.4, No.2, 2026
www.airis4d.com
3.1 Introduction
In the previous article we discussed the different
mechanisms that produce X-rays. As the second article
in the series, this one surveys different X-ray sources
and the main emission mechanism primarily at work in
each system.
3.2 Stellar Corona
While stars like our Sun emit most of their energy
in the optical band, they possess hot outer atmospheres
called coronae that radiate in X-rays.
Mechanism: The heating of the stellar corona is
driven by the stressing and relaxation of magnetic
field loops anchored in the star’s photosphere.
When these magnetic loops reconnect (magnetic
reconnection), they release stored energy, heating
the coronal plasma to temperatures of a few
million kelvin. The hot plasma emits X-rays
primarily through thermal bremsstrahlung and
atomic line transitions. Young, rapidly rotating
stars can exhibit X-ray luminosities up to four orders
of magnitude higher than the Sun's due to more
vigorous magnetic dynamos.
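A convenient bridge between the coronal temperatures above and the X-ray band is kT expressed in keV: thermal bremsstrahlung from a plasma at temperature T cuts off exponentially above photon energies of about kT. The conversion below uses standard constants; the 2 MK example value is merely illustrative.

```python
import math

KEV_PER_K = 8.617e-8   # Boltzmann constant expressed in keV per kelvin

def kT_keV(T_kelvin):
    """Characteristic thermal photon energy kT, in keV."""
    return KEV_PER_K * T_kelvin

def brems_cutoff_factor(E_keV, T_kelvin):
    """Relative exp(-E/kT) suppression of thermal bremsstrahlung
    at photon energy E_keV for a plasma at temperature T_kelvin."""
    return math.exp(-E_keV / kT_keV(T_kelvin))

# An assumed 2 MK corona has kT ~ 0.17 keV: bright in soft X-rays,
# while emission at 1 keV is already suppressed by roughly exp(-6).
```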
3.3 Supernova Remnants (SNRs)
When massive stars explode, they leave behind
expanding shock waves that heat the interstellar medium
(ISM) and accelerate particles.
Mechanism: SNRs produce X-rays via two
distinct processes:
Figure 1: X-ray image of Tycho SNR in 1.2-4.0 keV.
(Image credit : Petruk et al. 2025)
1. Thermal Emission (Shell-type): The
supernova shock wave sweeps up the
surrounding ISM, heating it to temperatures
of millions of kelvin. This gas emits thermal
X-rays via bremsstrahlung.
2. Non-Thermal Emission (Plerions): In
remnants like the Crab Nebula, a central
pulsar accelerates electrons to relativistic
speeds. These electrons spiral in
the magnetic field, emitting synchrotron
radiation that extends into the X-ray band.
3.4 X-ray Binaries (XRBs)
These are the brightest X-ray sources in the galaxy,
consisting of a compact object accreting matter from a
companion star.
Figure 2: An artist's impression of the Cygnus X-1 X-
ray binary, which features a black hole stripping material
from a companion star, and the material forming a hot
accretion disk. (Image credit: NASA/CXC/M. Weiss)
3.4.1 Neutron Star Binaries
Mechanism: Matter flowing from the companion
forms an accretion disk. As it spirals
inward, viscosity heats the disk to X-ray
temperatures. When this material crashes onto
the solid surface of the neutron star, its kinetic
energy is thermalized, releasing intense X-rays.
Additionally, accumulated matter can undergo
thermonuclear runaway, causing Type I X-ray
bursts.
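The energy budget of this process can be made concrete with the standard accretion-luminosity relation L = GM(dM/dt)/R. The following is a minimal sketch using typical textbook values for a neutron star (the numbers are illustrative, not taken from this article):

```python
# Minimal sketch of the accretion energy budget, L = G*M*Mdot/R:
# the kinetic energy gained by matter falling to the neutron star
# surface is thermalized and radiated as X-rays.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg

def accretion_luminosity(mass_msun, radius_km, mdot_kg_per_s):
    """Luminosity (W) released by accretion onto a star of given mass and radius."""
    return G * mass_msun * M_SUN * mdot_kg_per_s / (radius_km * 1e3)

# A 1.4 M_sun, 10 km neutron star accreting ~1e14 kg/s radiates ~2e30 W
# (~1e37 erg/s), comparable to a bright X-ray binary.
L = accretion_luminosity(1.4, 10.0, 1e14)
```

Note how the luminosity scales inversely with radius: the compactness of the neutron star is what makes accretion such an efficient X-ray source.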
3.4.2 Black Hole Binaries
Mechanism: Similar to neutron stars, black
holes accrete via a disk. However, lacking a
solid surface, the X-rays originate entirely from
the inner accretion disk and a hot corona of
electrons above it. The emission is characterized
by a soft thermal component from the disk
and a hard power-law component produced by
Inverse Compton scattering of disk photons by
hot coronal electrons.
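The two components described above can be sketched with a toy spectral model. The parameter values and normalisations below are invented for illustration, not fitted to any source:

```python
import math

def bh_binary_flux(E_keV, kT_keV=0.5, gamma=1.7, A_disk=1.0, A_pl=0.01):
    """Toy two-component black-hole-binary spectrum: a soft, Planck-like
    disk term plus a hard power law from Compton up-scattering in the
    corona.  Returns (disk, corona) contributions at energy E_keV."""
    disk = A_disk * E_keV**3 / (math.exp(E_keV / kT_keV) - 1.0)
    corona = A_pl * E_keV**(-gamma)
    return disk, corona

# The thermal disk dominates around ~1 keV, while the power law
# takes over above ~10 keV, as described in the text.
disk_1, pl_1 = bh_binary_flux(1.0)
disk_10, pl_10 = bh_binary_flux(10.0)
```

This crossover between the soft thermal and hard power-law components is what observers use to distinguish the spectral states of black hole binaries.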
3.5 Isolated Neutron Stars (INS)
Not all neutron stars are in binary systems. Some
are isolated, radiating X-rays through cooling or
magnetic activity.
Mechanism:
1. Cooling (Thermally Emitting INS): Young neutron stars retain immense heat from their formation. They emit soft X-rays purely as a result of blackbody cooling from their surface, which has a temperature of about a million Kelvin.
Figure 3: An artist's impression of a magnetar. (Image credit: ESA)
2. Magnetars: These are isolated neutron stars with ultra-strong magnetic fields (B ≈ 10^14-10^15 G). Their X-ray emission is powered by the decay of their internal magnetic field, which heats the crust and magnetosphere, rather than by rotational energy or accretion.
3.6 Active Galactic Nuclei (AGN)
AGN are supermassive black holes (10^6-10^9 M_⊙) at the centers of galaxies, actively consuming matter.
Mechanism: The primary X-ray continuum in AGN is produced by Inverse Compton scattering.
UV photons from the cool accretion disk are
up-scattered to X-ray energies by a hot corona
of electrons surrounding the black hole. This
radiation can also reflect off the accretion disk,
producing fluorescence lines, most notably the
Iron Kα line at 6.4 keV.
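An order-of-magnitude sketch of this up-scattering: for thermal electrons, the mean fractional photon energy gain per scattering is roughly 4kT_e/(m_e c^2). This is a non-relativistic approximation, and the assumed coronal temperature of ~100 keV is only marginally in that regime, so treat the result as indicative:

```python
import math

# Order-of-magnitude Comptonization sketch: mean photon energy boost
# per scattering off thermal electrons is ~ 1 + 4*kT_e/(m_e*c^2)
# (non-relativistic approximation; kT_e ~ 100 keV is assumed here).
M_E_C2_KEV = 511.0   # electron rest-mass energy in keV

def n_scatterings(E_in_keV, E_out_keV, kTe_keV=100.0):
    """Approximate number of scatterings to boost a photon from E_in to E_out."""
    gain_per_scatter = 1.0 + 4.0 * kTe_keV / M_E_C2_KEV
    return math.log(E_out_keV / E_in_keV) / math.log(gain_per_scatter)

# A ~10 eV UV disk photon reaches 10 keV after roughly a dozen scatterings.
n = n_scatterings(0.01, 10.0)
```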
3.7 Galaxy Clusters
Clusters are the largest gravitationally bound
structures, containing hundreds of galaxies immersed
in a vast cloud of hot gas called the Intracluster Medium
(ICM).
Figure 4: Composite image of the galaxy cluster 1E 0657-556, also known as the "bullet cluster". (Image credit: ESA)
Mechanism: The ICM is heated to temperatures of the order of tens of millions of Kelvin by the gravitational potential energy of the cluster's dark matter halo. At these temperatures, the hydrogen
and helium are fully ionized. The electrons are
deflected by atomic nuclei, emitting X-rays via
thermal bremsstrahlung. This mechanism makes
galaxy clusters the most luminous extended X-ray
sources in the universe.
The author's work focuses mainly on neutron star X-ray binaries; the theory behind such systems will be discussed shortly and extended to other systems in upcoming articles.
Reference:
Rosner, R., Golub, L., & Vaiana, G. S. (1985).
On stellar X-ray emission. Annual Review of
Astronomy and Astrophysics, 23, 413-452.
Reynolds, S. P. (2008). Supernova Remnants at
High Energy. Annual Review of Astronomy and
Astrophysics, 46, 89-126.
Lewin, W. H. G., & van der Klis, M.
(Eds.). (2006). Compact Stellar X-ray Sources.
Cambridge University Press.
Shakura, N. I., & Sunyaev, R. A. (1973).
Black holes in binary systems. Observational
appearance. Astronomy and Astrophysics, 24,
337-355.
Yakovlev, D. G., & Pethick, C. J. (2004). Neutron
Star Cooling. Annual Review of Astronomy and
Astrophysics, 42, 169-210.
Kaspi, V. M., & Beloborodov, A. M. (2017).
Magnetars. Annual Review of Astronomy and
Astrophysics, 55, 261-301.
Haardt, F., & Maraschi, L. (1991). A two-
phase model for the X-ray emission from Seyfert
galaxies. The Astrophysical Journal, 380, L51-
L54.
Sarazin, C. L. (1986). X-ray emission from
clusters of galaxies. Reviews of Modern Physics,
58(1), 1-115.
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
The Birth of Stars: Physical Processes
Governing Stellar Formation
by Sindhu G
airis4D, Vol.4, No.2, 2026
www.airis4d.com
4.1 Introduction
Stars are the primary engines of energy production
and chemical enrichment in the Universe. Through
nuclear fusion, they generate radiation, synthesize heavy
elements, and shape their environments via winds,
radiation, and supernova explosions. Understanding
how stars form is therefore essential for explaining the
structure and evolution of galaxies, the initial mass
function, and the origin of planetary systems.
Star formation proceeds under highly selective
physical conditions within molecular clouds, with only
a fraction of the available gas ultimately incorporated
into stellar objects. The masses of newly formed
stars cover a wide range, extending over several
orders of magnitude. Stellar birth is initiated when
cold interstellar gas becomes gravitationally unstable
and undergoes collapse, leading to the formation of
self-gravitating condensations that eventually reach
the temperatures and densities required for hydrogen
fusion in their cores. This sequence of events
encompasses spatial scales ranging from tens of parsecs
in giant molecular clouds to stellar radii, rendering
star formation an intrinsically multi-scale and complex
phenomenon.
4.2 Giant Molecular Clouds
Stars form almost exclusively within giant
molecular clouds (GMCs), which represent the coldest
and densest phase of the interstellar medium. GMCs
typically have masses of
10
4
10
6
M
, sizes of tens
of parsecs, and temperatures of 10–20 K. Their
composition is dominated by molecular hydrogen, with
helium and trace amounts of heavier molecules and
dust.
Because molecular hydrogen lacks a permanent
dipole moment, GMCs are observed indirectly through
emission from tracer molecules such as CO and through
thermal emission from dust grains. Dust plays a crucial
role in star formation by shielding cloud interiors from
ultraviolet radiation, enabling molecular survival and
efficient cooling. Cooling allows the gas to reach low
temperatures, reducing thermal pressure and promoting
gravitational collapse.
4.3 Dense Cores and Cloud
Fragmentation
Star formation within GMCs is highly structured.
Stars do not form uniformly but instead originate in
dense subregions known as clumps and cores. Dense
cores typically have sizes of ~0.1 pc, masses of a few solar masses, and densities exceeding 10^4 cm^-3. These
cores represent the immediate progenitors of individual
stars or small multiple systems.
The fragmentation of molecular clouds is
influenced by gravity, turbulence, and magnetic fields.
Supersonic turbulence generates density fluctuations
that may collapse if they exceed a critical threshold.
Magnetic fields provide additional support against
gravity, while ambipolar diffusion allows neutral gas to
drift relative to magnetic field lines, enabling collapse
on longer timescales. The interplay of these processes
determines the mass distribution of forming stars.
4.4 Gravitational Instability
Gravitational collapse begins when a region of gas
becomes unstable to its own gravity. This condition
is commonly described by the Jeans criterion, which
defines a critical mass above which thermal pressure
can no longer support a cloud against collapse. The
Jeans mass decreases with decreasing temperature and
increasing density, favouring collapse in cold, dense
environments.
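The temperature and density scaling of the Jeans criterion can be made concrete with a small sketch. The prefactor below follows one common textbook convention; exact values vary between conventions:

```python
import math

# Jeans mass for an isothermal cloud (one common convention):
#   M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27     # hydrogen mass, kg
M_SUN = 1.989e30     # solar mass, kg

def jeans_mass(T_K, n_cm3, mu=2.33):
    """Jeans mass in solar masses for temperature T_K (K) and number
    density n_cm3 (cm^-3); mu ~ 2.33 for molecular gas."""
    rho = mu * M_H * n_cm3 * 1e6   # mass density, kg/m^3 (cm^-3 -> m^-3)
    mj = (5 * K_B * T_K / (G * mu * M_H))**1.5 * (3 / (4 * math.pi * rho))**0.5
    return mj / M_SUN

# A cold dense core (T = 10 K, n = 1e4 cm^-3) gives a few solar masses,
# matching the core masses quoted in the text.
m_core = jeans_mass(10.0, 1e4)
```

As the text notes, the Jeans mass falls with decreasing temperature and increasing density, which is exactly what the two exponents above encode.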
External triggering mechanisms can also promote
collapse. Supernova shock waves, expanding H II regions, and spiral density waves can compress
molecular gas, increasing its density and driving regions
above the critical threshold for gravitational instability.
These processes connect star formation to galactic-scale
dynamics.
4.5 Protostar Formation
As collapse proceeds, dense cores contract and
form central condensations known as protostars. During
this stage, the protostar is deeply embedded within
its natal envelope and is optically obscured at visible
wavelengths. Its emission is dominated by infrared and
sub-millimetre radiation produced by heated dust and
gas.
The luminosity of a protostar is primarily powered
by the conversion of gravitational potential energy into
heat as material accretes onto the central object. Nuclear
fusion has not yet begun, and the internal structure
is governed by hydrostatic balance between gravity
and thermal pressure. Protostellar evolution during
this phase depends strongly on the accretion rate and
surrounding environment.
4.6 Accretion Disks
Conservation of angular momentum during
collapse naturally leads to the formation of a
circumstellar accretion disk. These disks regulate the
flow of material from the envelope onto the protostar
and play a central role in determining stellar mass. They
also provide the initial conditions for planet formation.
Accretion is often episodic rather than steady,
leading to luminosity variability in young stellar objects.
Disk instabilities and interactions with magnetic
fields can strongly influence accretion efficiency and
timescales.
4.7 Jets, Outflows, and Feedback
Accreting protostars commonly drive bipolar jets
and molecular outflows. These highly collimated
structures are thought to be launched by magneto-
centrifugal processes operating in the inner regions
of the accretion disk. Jets remove excess angular
momentum, enabling continued accretion, while
injecting energy and momentum into the surrounding
medium.
Observationally, jets are traced by Herbig–Haro
objects, which mark shock fronts produced as outflows
interact with the interstellar medium. Feedback from
radiation, winds, and outflows disperses the remaining
envelope material and helps regulate star formation
efficiency.
4.8 Pre-Main-Sequence Evolution
Once the protostellar envelope is largely dispersed,
the object enters the pre-main-sequence (PMS) phase.
Low-mass stars appear as T Tauri stars, characterized
by strong magnetic activity and variability, while
intermediate-mass stars evolve as Herbig Ae/Be objects.
During this stage, stars contract quasi-hydrostatically
and follow characteristic evolutionary tracks in the
Hertzsprung–Russell diagram.
Hydrogen fusion begins when the central
temperature reaches approximately 10^7 K, marking the star's arrival on the zero-age main sequence (ZAMS).
4.9 Timescales and Mass Dependence
Star formation timescales depend strongly on
stellar mass. Low-mass stars typically form over 10^6-10^7 yr, whereas massive stars may reach the main sequence in less than 10^5 yr. In the most massive
cases, nuclear burning may begin while accretion is
still ongoing.
Massive stars exert strong radiative and mechanical
feedback, ionizing nearby gas and driving powerful
winds that can suppress or trigger further star formation.
This feedback shapes the evolution of star-forming
regions and young clusters.
4.10 Observational Constraints
Advances in infrared, sub-millimetre, and time-
domain astronomy have revolutionized the study
of stellar birth. Observations from missions such
as Spitzer and JWST, combined with ground-based
facilities, reveal embedded protostars, accretion disks,
and molecular outflows. Variability studies provide
additional constraints on accretion processes and
magnetic activity in young stars.
4.11 Conclusions
The birth of stars is a complex and hierarchical
process governed by gravity, thermodynamics,
turbulence, magnetic fields, and feedback. From the
fragmentation of molecular clouds to the emergence
of main-sequence stars, stellar birth shapes the
evolution of galaxies and sets the initial conditions
for planetary systems. Continued multi-wavelength
observations and theoretical modelling are essential for
achieving a complete understanding of this fundamental
astrophysical process.
References:
Theory of Star Formation
Mass, Luminosity, and Line Width Relations of Galactic Molecular Clouds
The Big Problems in Star Formation: the Star Formation Rate, Stellar Clustering, and the Initial Mass Function
Molecular Clouds in the Milky Way
From Pre-stellar Cores to Protostars: the Initial Conditions of Star Formation
Control of Star Formation by Supersonic Turbulence
The Stability of a Spherical Nebula
Star Formation in Molecular Clouds: Observation and Theory
An Internet Server for Pre-main Sequence Tracks of Low- and Intermediate-mass Stars
About the Author
Sindhu G is a research scholar in the
Department of Physics at St. Thomas College,
Kozhencherry. She is doing research in Astronomy
& Astrophysics, with her work primarily focusing
on the classification of variable stars using different
machine learning algorithms. She is also involved
in period prediction for various types of variable
stars—especially eclipsing binaries—and in the study
of optical counterparts of X-ray binaries.
Visualizing attention in Vision Transformers
by Linn Abraham
airis4D, Vol.4, No.2, 2026
www.airis4d.com
5.1 Introduction
So far we have been using techniques like Integrated Gradients (IG) for creating saliency maps from a trained (image classification) model. The goal is not only to obtain a prediction for an input image, but also to visualize how the model came to that prediction and which parts of the image (features, locations) were most important for it. What we want is a saliency map for a given image, that is, a map which assigns an importance score to each pixel. However, the majority of existing techniques, such as IG, were created with CNN models in mind.
5.2 Attention-Guided CAM
For transformer-based models like the Vision Transformer (ViT), there is a more natural approach: using the attention values learnt by the model to create such saliency maps. Let us look at one such approach, called Attention-Guided CAM (AGCAM).
To implement this approach in code, let's first understand a Python technique called monkey patching.
5.3 Monkey Patching
Monkey patching refers to dynamically modifying or extending code at runtime. The need for it arises in situations like the following: you already have a code base in which your custom ViT architecture, with its own attention blocks, is defined. The investigation you are doing now is to be done during inference; we do not want to retrain the model with a modified architecture, but during inference we do want to alter its behaviour. Monkey patching is perfect for this.
Figure 1: AGCAM heat maps and a comparison with other existing techniques, taken from the original paper. The method uses two images from PASCAL VOC 2012 for visualizing the self-attention scores.
The reason this works in Python is that classes and modules are mutable objects. You can replace a method in a third-party library with your own custom function by simply assigning the new function to the existing attribute name. This can also be quite useful during development when you just want to test things out.
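As a minimal illustration (the class and method names here are invented for the example, not taken from the AGCAM code base), monkey patching looks like this:

```python
# Minimal illustration of monkey patching: replace a method on an
# existing class at runtime, without touching its source.
class Attention:
    def forward(self, x):
        return x  # original behaviour

def patched_forward(self, x):
    # New behaviour injected at runtime, e.g. to record intermediate values.
    self.last_input = x
    return x * 2

# Classes are mutable objects, so we can simply reassign the attribute.
Attention.forward = patched_forward

m = Attention()
result = m.forward(3)   # runs the patched version: returns 6
```

Every instance of `Attention`, including ones created before the patch, now uses the new method, which is exactly the behaviour we rely on when swapping in a patched attention block below.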
5.4 PyTorch Hooks
The next thing we need to understand is the concept
of hooks
1
in PyTorch. A hook is simply a function
that you can attach to either a Tensor or a nn.module
1
https://www.digitalocean.com/community/tutorials/pytorch-hoo
ks-gradient-clipping-debugging
5.6 Generating heat maps using Attention
Figure 2: Patching the Attention module
Figure 3: Forward Function
object. This function gets executed automatically when
the forward or backward pass happens.
You might encounter hooks in many places: for example, there are hooks that you can attach to a git repository to check your code before a commit is made, and Linux package managers such as Arch Linux's pacman run hooks to execute certain commands once a package is installed.
In PyTorch, hooks are a severely under-documented feature, especially considering the functionality they bring. PyTorch provides two types of hooks:
1. The forward hook, which gets executed during the forward pass
2. The backward hook, which gets executed during the backward pass
5.5 Modify Attention using Hooks
Now lets see all these in action. We first define
two hooks as identity layers.
We then modify the forward function as follows.
The forward hook gets attached before the softmax
function. And the backward hook gets attached after
the softmax and the dropout.
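To see the two hook types fire, here is a small self-contained sketch. A toy `nn.Softmax` layer stands in for the attention block, and the capture logic is an assumption for illustration, not the paper's code:

```python
import torch
import torch.nn as nn

# Store what the hooks capture during the forward and backward passes.
captured = {"out": None, "grad": None}

def forward_hook(module, inputs, output):
    # Runs automatically right after module.forward()
    captured["out"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    # Runs automatically during backpropagation
    captured["grad"] = grad_output[0].detach()

layer = nn.Softmax(dim=-1)          # toy stand-in for an attention block
layer.register_forward_hook(forward_hook)
layer.register_full_backward_hook(backward_hook)

x = torch.randn(1, 4, requires_grad=True)
y = layer(x)
y.sum().backward()                  # triggers the backward hook
```

After the backward call, `captured` holds both the layer's output and the gradient that flowed through it, which is precisely the pattern AGCAM uses to collect attention matrices and their gradients.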
Now how do we use this patched attention? Let's look at the following snippet of code. Note that I have only shown the relevant parts of the code.
Figure 4: Using the patched attention to create heatmaps.
Our custom ViT model is first imported as usual. Next we monkey patch, i.e., dynamically change the behaviour of the attention block with our PatchedAttention, so that when our model gets created in the next line, it uses our patched version for the attention blocks. The rest of the processing happens with the help of the generate function implemented inside the AGCAM class. We import the class first, then create an object and initialize it with the ViT model we just created that uses the modified attention layers.
Now let's see what happens inside this AGCAM class and the generate function.
Let's zoom in on the loop defined in the constructor. The for loop iterates over the layers in our model, looking for layers whose names contain the phrase 'before softmax', and registers PyTorch's forward hook on them using the get attn matrix function. Similarly, for layers whose names contain the phrase 'after softmax', it attaches the full backward hook using the get grad attn function. What do these functions achieve? They simply store the attention matrix values in the empty lists defined in the constructor.
5.6 Generating heat maps using
Attention
Lets put together everything we have learnt so far
to understand how the heat maps are generated within
the generate function.
The generate function receives a single input
image (batch size of 1).
(Code repository: https://github.com/LeemSaebom/Attention-Guided-CAM-Visual-Explanations-of-Vision-Transformer-Guided-by-Self-Attention)
Figure 5: The AGCAM Class
Figure 6: Registering hooks in the model
Figure 7: Defining the hook functions.
1. It does a forward pass on the image using the model, which (i) uses the patched attention layers and (ii) has the PyTorch hooks registered.
2. It obtains the predicted class using the max of the predictions.
3. It then does a backpropagation step to compute the loss. At this point both our hooks will have been triggered, and the variables used to store the attention values and the gradients will have been populated. We then combine the values across the different attention layers by simple summing.
4. Next, the self-attention score matrix so obtained is normalized using the sigmoid function. It is then multiplied with the gradient matrix to create the final class activation map (CAM).
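The combination in steps 3 and 4 can be sketched as follows. The function name and tensor shapes are assumptions for illustration, not the paper's exact implementation:

```python
import torch

def agcam_combine(attns, grads):
    """Hedged sketch of the AGCAM combination step described above.

    attns, grads: lists (one entry per attention layer) of tensors of
    shape (heads, tokens, tokens) holding the stored attention matrices
    and their gradients, as collected by the hooks.
    """
    attn = torch.stack(attns).sum(dim=0)   # sum values across layers
    grad = torch.stack(grads).sum(dim=0)
    cam = torch.sigmoid(attn) * grad       # sigmoid-normalised attention,
                                           # weighted by the gradients
    # Average over heads and take the CLS-token row over the patch tokens,
    # giving one importance score per image patch.
    return cam.mean(dim=0)[0, 1:]
```

The resulting per-patch scores are then reshaped to the patch grid and upsampled to image resolution to produce the heat maps shown in Figure 1.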
References
Leem, S., & Seo, H. (2024). Attention
Guided CAM: Visual Explanations of Vision
Transformer Guided by Self-Attention. arXiv preprint
arXiv:2402.04563. doi:10.48550/arXiv.2402.04563.
About the Author
Linn Abraham is a researcher in Physics,
specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN-based computer vision tools for the prediction of solar flares from images of the Sun, morphological classification of galaxies from optical image surveys, and radio galaxy source extraction from radio observations.
Part III
Biosciences
Seed to Callus Revolution: Optimising
Germination and Tissue Culture in
Brassica nigra
by Aengela Grace Jacob
airis4D, Vol.4, No.2, 2026
www.airis4d.com
1.1 Introduction
Plant tissue culture is a technique involving the aseptic culture of cells, tissues, seeds, or other parts of a plant under controlled conditions in vitro. Generally, a piece of tissue, termed the explant, is used to propagate or induce growth. Growth is supported by providing an essential nutrient medium, adequate temperature and pH, and sterile conditions.
Micropropagation (clonal propagation) is a technique in which a desired plant material is allowed to regenerate or multiply in vitro under suitable growth conditions. Micropropagation proceeds through the following stages:
Stage 0 (selection of desired plant material)
Stage I (establishment of cultures)
Stage II (multiplication of regenerated shoots)
Stage III (rooting of shoots and germination of
somatic embryos)
Stage IV (transfer to soil and acclimatization)
Micropropagation is one of the breeding
applications of plant tissue culture.
The first commercialization of micropropagation
in the 1970s was initiated by Dr Toshio Murashige. The
starting material for clonal propagation includes parts such as nodes, internodes, leaves, buds, and root tips. Tissue culture techniques involve four major steps, which cover the factors to consider during the procedure:
1. Inoculation of explant
2. Incubation of culture
3. Sub-culturing
4. Transplantation of regenerated plant
1.1.1 Inoculation of Explant
The control of contamination is a crucial step to prevent the entry of microorganisms; hair and dust are contaminating agents. A dust-free inoculating chamber should be arranged, with the person wearing sterile headgear and clothing inside the culturing area. Preliminary sterilisation of the chamber and of the operator's hands should be done with 95% alcohol before starting the transfer process.
1.1.2 Incubation of Culture
After inoculation, the cultures are incubated at around room temperature, i.e., 25 °C, taking care to prevent contamination of the culture medium. Some tissues regenerate well in low-light conditions (about 1000 lux); for regeneration, alternating light and dark periods are required, while regenerated plantlets need well-lit conditions (about 3000 lux) with a 16 h light / 8 h dark photoperiod.
1.1.3 Sub-Culturing
Suspension cultures require a media change or fresh inoculation at short intervals, and callus cultures require sub-culturing of the callus tissue to keep it in a dividing condition.
Depending on observations made with a hand lens or a simple microscope under aseptic conditions, the explants may have to be transferred to freshly prepared media, or to media with new components or hormonal formulations, in accordance with the condition of cell or tissue growth.
1.1.4 Transplantation of Regenerated Plant
The plants at this stage develop adequate root systems and cuticular leaf surface structures so that they can withstand ambient temperatures. Prior to transfer to pots, acclimatization of these regenerated plants is needed. Plants regenerated from in vitro tissue culture are then planted in pots in soil.
The medium used to inoculate the explant is the Murashige and Skoog medium, which contains a balanced mixture of micro- and macronutrients. These include nitrogen, phosphorus, potassium, and calcium (macronutrients) and iron, copper, and manganese (micronutrients); various vitamins, solidifying agents such as agar and gellan gum, and plant growth regulators are added for callus growth.
The seed used as an explant for germination and callus induction was Brassica nigra (black mustard), a dicot angiosperm of the family Brassicaceae, mainly known for its rapid growth and adaptability; this fast growth rate was clearly evident during the germination period. Members of the mustard family have four petals arranged in a cross shape and six stamens, of which two are short and four are long. The pods of the mustard plant, when mature, split open from both sides, exposing the seeds. It is a well-known fact that Egyptian pharaohs put mustard seeds in their tombs to take with them to the afterlife.
1.1.5 Insights to In vitro Development
Black mustard, Brassica nigra, is an oilseed and medicinal cruciferous crop. Developing effective in vitro germination and tissue-culture systems can accelerate its crop life cycle and help preserve its genetic base. The most frequent hurdle in plant tissue culture is surface contamination: black mustard seeds carry a heavy microbial load of bacteria and fungi on their coats. The protocol followed had two components: 70% ethanol (1 minute), then 1% sodium hypochlorite (10-15 minutes), then a sterile water wash (3 rinses), a short, rapid method with a wide spectrum of action; and 20 mL mercuric chloride (30 seconds) followed by a sterile water rinse (3x), which is toxic and can damage seeds.
A trade-off between sterility and seed vigor is observed: ethanol disrupts the lipid membranes and renders seeds more susceptible to the bleach, but excessively long exposure may lead to desiccation. All of these procedures appear suitable for black mustard seeds. Formation of callus is a hallmark of totipotency. A 1:6 cytokinin:auxin ratio is essential in Brassica spp.: cytokinin concentrations stimulate growth, whereas dedifferentiation is stimulated by auxin. Murashige and Skoog (MS) basal medium was prepared accordingly.
Successful germination in a test tube requires eliminating mechanical and microbial obstacles while maintaining seed health. Brief disruption of seed-coat lipids by ethanol facilitates increased penetration of the bleach, and the sodium hypochlorite oxidizes surface proteins of cells, killing bacteria and fungi.
The oxidants used in the process can destroy seed viability, since they may form reactive oxygen species (ROS) that damage DNA and proteins. This risk is mitigated by a rapid rinse protocol (3x sterile water). The tendency of harsher treatments (e.g., mercuric chloride) to decrease germination underscores the sensitivity of embryonic tissues to mercury.
Germination of Brassica nigra seeds on the nutrient MS medium was seen in around 8-10 days, after which the seedling was taken for callusing: it was removed under sterile conditions and placed horizontally on nutrient medium containing the phytohormones BAP and 2,4-D at 0.5 mg/L and 3 mg/L respectively, and showed callus around the 15th day after inoculation.
1.2 Process Involved
Initially, we make sure the equipment and stock solutions are prepared at the required concentrations. The stock concentrations were made in multiples of 10: Stock A was 10x, Stocks B and C were 1000x, Stock D was 100x, and Stock E was 10x. The seeds of Brassica nigra were collected and washed under running tap water for a minute. The seeds, along with the equipment and chemicals required for surface sterilization, were taken into the laminar air flow (LAF) cabinet, which was sterilized using alcohol, and the UV lights were turned on. 20 mL of 2% sodium hypochlorite was added to a 50 mL beaker containing the seeds and swirled for about 15 minutes. The hypochlorite solution was discarded and the seeds were washed with distilled water 3-4 times for 2-3 minutes each. 20 mL of HgCl2 (0.1%) was then poured into the beaker containing the seeds and swirled for about 30 seconds. The mercuric chloride solution was discarded and the seeds were washed with distilled water 3-4 times for about 2-3 minutes each. Finally, fresh sterile water was added to the beaker containing the seeds. This concludes the initial surface sterilization of the explant taken.
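The "multiples of 10" stock scheme above reduces to the dilution relation C1·V1 = C2·V2. A small helper makes the arithmetic explicit (the function name and the 1 L batch in the comment are illustrative):

```python
# Simple dilution arithmetic (C1*V1 = C2*V2) for the stock solutions
# described above: for an N-x stock, the volume needed to prepare a
# 1x working medium is final_volume / N.
def stock_volume_ml(stock_strength_x, final_volume_ml):
    """Volume (mL) of an N-x stock needed for final_volume_ml of 1x medium."""
    return final_volume_ml / stock_strength_x

# Preparing 1 L (1000 mL) of medium: a 10x stock (e.g. Stock A)
# contributes 100 mL, while a 1000x stock (e.g. Stock B) contributes 1 mL.
```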
We then continue with inoculation of the explant: the explants were held using sterile forceps and placed on sterile filter paper in a sterile Petri dish. The seeds were dried by rubbing them over the sterile filter paper, and the explant was inoculated into sterile MS medium in a sterilised tube.
Finally, callus formation from the germinated seedling was assessed: after 12 days of germination, seedlings were removed from the culture medium under aseptic conditions inside the laminar air flow (LAF) cabinet. Explants were taken from different regions of the seedling (internodes were used as explants) using a sterile scalpel. MS basal medium supplemented with 0.5 mg/L 6-benzylaminopurine (BAP, a cytokinin) and 3.0 mg/L 2,4-dichlorophenoxyacetic acid (2,4-D, an auxin) was used. Explants were placed horizontally on the callus induction medium in culture tubes. Cultures were incubated at 25 ± 2 °C in complete darkness to promote callus formation, and callus induction was observed over a period of 2 to 3 weeks.
1.3 Results
Fig 1. The day of inoculation
As per the given procedure, the seeds of Brassica nigra were surface sterilised, inoculated in basal MS medium, and incubated at 25 °C.
Fig 2: 12th day from inoculation
Germination of the plantlet is seen.
Fig 3: Callus Induction of Brassica nigra (Black mustard)
The inoculated explant (internode) proliferated to form callus on the 15th day after inoculation in the nutrient medium.
Fig 4: Callus Induction of the inoculated
explant
1.4 Discussion
The project demonstrated that germination of black mustard (Brassica nigra) seeds can be reliably achieved through an inoculation protocol carried out within a laminar airflow (LAF) hood, thereby ensuring aseptic conditions throughout the experiment.
By combining effective seed surface-sterilisation,
meticulous handling under LAF, and a well-defined
callus induction medium, we were able to obtain high
rates of germination, promote uniform embryogenic
callus formation, and set the stage for downstream
tissue-culture applications.
Successful germination began with a two-step
sterilisation regimen that proved to be both rapid and
thorough. Seeds were first immersed in 70 % ethanol
for 30 seconds to disrupt the outer lipid layer, followed
immediately by 10 % sodium hypochlorite (NaOCl)
solution for 10 minutes. This step was critical for
eliminating epiphytic microbes that could otherwise
compete with the seedlings or cause contamination.
After the NaOCl treatment, the seeds were rinsed five
times with sterile distilled water to remove residual
disinfectants. The entire sterilisation process, including
rinsing, required roughly 20 minutes per batch, a
time that is considerably shorter than more traditional
protocols that often extend to 30–45 minutes for NaOCl
exposure.
The abbreviated but effective sterilisation schedule
not only conserves time but also reduces the risk
of chemical damage to the seed coat or embryo.
Post-sterilisation, the seeds were placed onto Murashige
and Skoog (MS) basal medium supplemented with 3 %
(w/v) sucrose and solidified with 0.8 % agar.
The medium was further enriched with a balanced cytokinin-auxin ratio (6-benzylaminopurine 0.5 mg L⁻¹ + 2,4-dichlorophenoxyacetic acid 3 mg L⁻¹) to favour callus induction over direct
germination. Under controlled photoperiod (16 h
light/8 h dark) and temperature (25 ± 2 °C), the sterilised
seeds were inoculated directly onto the medium inside
the LAF hood, ensuring that no airborne contaminants
could interfere with the delicate embryonic tissues.
Within 7–10 days, most seeds had germinated, while
embryogenic calli were observed emerging from the
cotyledonary bases by day 14. The calli displayed a
friable, pale green appearance and were indicative
of a high totipotent potential, suitable for further
sub-culturing or genetic manipulation.
References
C. Shyam, M. Tripathi, S. Tiwari, and V. Gupta, “In vitro regeneration from callus and cell suspension cultures in Indian mustard [Brassica juncea (Linn.)],” pp. 1095–1112, May 2021. Accessed: Sep. 10, 2025. [Online]. Available: https://www.researchgate.net/publication/351514041_In_vitro_regeneration_from_callus_and_cell_suspension_cultures_in
M. Ikeuchi, K. Sugimoto, and A. Iwase, “Plant Callus: Mechanisms of Induction and Repression,” The Plant Cell, vol. 25, no. 9, pp. 3159–3173, Sep. 2013. doi: 10.1105/tpc.113.116053
“Callus - an overview | ScienceDirect Topics,” ScienceDirect. https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/callus
H. Chandran, M. Meena, T. Barupal, and K. Sharma, “Plant tissue culture as a perpetual source for production of industrially important bioactive compounds,” Biotechnology Reports, vol. 26, p. e00450, Jun. 2020. doi: 10.1016/j.btre.2020.e00450
“Murashige and Skoog Complete Medium - 50X Concentrate liquid | Thermo Fisher Scientific - US,” Thermofisher.com, 2025. https://www.thermofisher.com/in/en/home/technical-resources/media-formulation.130.html
A. Varma and A. Jain, “Protocol for Seed Surface Sterilization and In Vitro Cultivation,” in Biology and Biotechnology of Quinoa, pp. 265–282, 2021. doi: 10.1007/978-981-16-3832-9_13
About the Author
Aengela Grace Jacob is a final-year student
of the BSc Biotechnology, Chemistry (BSc BtC) dual
major at Christ University Central Campus, Bangalore.
Gateway to Infection: The Molecular
Architecture of Viral Entry into Human T
Cells
by Geetha Paul
airis4D, Vol.4, No.2, 2026
www.airis4d.com
2.1 Introduction
The human immune system relies on the precise
coordination of T lymphocytes to identify and eliminate
pathogens. Retroviruses, most notably HIV-1,
have evolved sophisticated mechanisms to bypass the
T cell’s defence by infiltrating the most secure vault of
the cell, the nucleus. This translocation is not a random
occurrence but a highly regulated journey through the
Nuclear Pore Complex (NPC), a massive proteinaceous
gateway embedded in the nuclear envelope. The NPC is
composed of approximately 30 different proteins known
as Nucleoporins (NUPs), which are organised into a
cylindrical structure with eight-fold symmetry. In the
context of cell biology and immunology, the interaction
between viral proteins and these NUPs represents a
critical Protein-Protein Interaction (PPI) that determines
whether an infection succeeds or fails.
For a virus to successfully replicate, its genetic
material must cross the double-membrane barrier of
the nucleus to integrate into the host genome. This
process is hindered by the NPC’s selective permeability
barrier, which naturally excludes molecules larger
than 40 kDa. To overcome this, viruses mimic
the host's endogenous nuclear transport receptors
(karyopherins). The molecular handshake between viral
components, such as the viral capsid or Vpr proteins,
and specific nucleoporins, like NUP93, NUP188, and
the intrinsically disordered FXFG (Phenylalanine-X-
Phenylalanine-Glycine) repeats, defines the frontier of
modern virology research. Understanding these PPIs
is essential for developing entry inhibitors, a class of
antiviral drugs designed to lock the nuclear door against
viral intruders.
2.2 The Structural Foundation:
NUP93 and NUP188
The architectural integrity of the Nuclear Pore
Complex (NPC), the massive gatekeeper of the cell
nucleus, relies heavily on the Nup93/Nup188 complex.
These proteins are core components of the inner ring,
providing the structural scaffolding that supports the
transport channel.
2.3 The Role of Nup93: The Adaptor
Nup93 acts as a critical biochemical bridge. It
doesn't just sit in the pore; it organises it. Its
primary job is to link the inner ring scaffolds to the
channel nucleoporins (the FG-Nups) that actually filter
molecules. Structure: Nup93 features a distinctive
N-terminal extended helix and a C-terminal ACE1
(Ancestral Coatomer Element 1) domain. The Velcro
Effect: It uses its N-terminus to tether other proteins,
specifically Nup62, Nup58, and Nup54, to the NPC
framework. Stability: Without Nup93, the inner
Figure 1: (A–B) Cryo-ET map of the NPC: core
scaffold, blue; membrane region, grey; central
transporter, pink. MR: membrane ring. (A) (Left)
top, cytoplasmic view; (middle) cross-section side
view; (right) central cross-section top view. (B) Cryo-
ET map is presented at a higher threshold. (Left)
top view; (middle) inner ring 60° tilted view; (right)
inner ring side (top) and cross-section (bottom) views.
Scale bar 200 Å. (C–F) Cross-section views show a
representative structure embedded within the Cryo-ET
density (grey), presented with different filtering and
thresholding, to demonstrate a good fit to the Cryo-ET
map in the inner (C, D) and membrane ring (E), the
cytoplasmic outer ring, and the mRNA export platform
(F). Scale bar 50 Å (C, D); 100 Å (E, F). Image courtesy:
https://pmc.ncbi.nlm.nih.gov/articles/PMC6022767/
Figure 2: A close-up view of the interface between
Nup93-1 and Nup188. Image courtesy:
https://www.nature.com/articles/s41422-022-00633-x.pdf
ring loses its connection to the transport machinery,
effectively breaking the gate.
2.4 The Role of Nup188: The Large
Scaffold
Nup188 is one of the largest proteins in the NPC.
Its structure is dominated by HEAT repeats, spiral-like
motifs that give the protein a flexible, curved, S-shape or
C-shape. Architectural Flexibility: Because Nup188
is composed of these flexible repeats, it can adjust its
conformation to accommodate the massive curvature
of the nuclear envelope where the inner and outer
membranes fuse. Functional Redundancy: Nup188 is
often discussed alongside Nup205. They are structural
paralogs, meaning they share a similar shape and can
occupy similar positions within the NPC, though they
are not identical in function. Molecular Sieve: Recent
studies suggest that Nup188 may regulate the passage of
integral membrane proteins through the lateral channels
of the pore.
At the heart of the NPC’s architecture lies the
Inner Ring complex, where NUP93 and NUP188 serve
as the primary structural anchors. In human T cells,
NUP93 acts as a linchpin adaptor protein. It possesses
a dual-domain functionality: its C-terminus anchors
the scaffold to the nuclear membrane, while its N-
terminus extends inward to organise the channel nucleoporins.
From a viral standpoint, NUP93 is a frequent target for
subversion. Some viruses utilise proteases to cleave
NUP93, effectively dismantling the door frame of
the nucleus. This collapse serves a dual purpose: it
allows viral components to leak into the nucleus while
simultaneously preventing the T cell from exporting
messenger RNA (mRNA) encoding antiviral interferons,
effectively silencing the cell’s alarm system.
NUP188, a paralog of NUP93, provides the
necessary flexibility and bolt-like stability to the pore.
Its large, S-shaped conformation allows it to act as a
structural spacer. In healthy T cells, NUP188 helps
regulate the diameter of the central transport channel.
During viral entry, certain viral proteins form PPIs
with NUP188, inducing conformational shifts in the
NPC. By stretching the pore’s dimensions, the virus
ensures that large pre-integration complexes (PICs),
which are often much wider than the pore's standard
limit, can pass through without becoming stuck.
This structural remodelling is a hallmark of viruses that
infect non-dividing or slowly dividing immune cells.
2.5 The Selective Filter: FXFG
Peptides and the Hydrophobic
Mesh
While NUP93 and NUP188 provide the hardware,
the FXFG peptides (Phenylalanine-X-Phenylalanine-
Glycine repeats) provide the selective software. The
selective filter of the nuclear pore complex (NPC)
regulates the transport of molecules between the
cytoplasm and the nucleus, allowing small molecules to
diffuse freely while restricting large molecules unless
they are bound to transport receptors. This barrier
is formed by the hydrophobic mesh or hydrophobic
gel created by phenylalanine-glycine (FG) repeats,
specifically FxFG sequences, that fill the NPC central
channel. These peptides are found on the disordered
tails of nucleoporins that line the interior of the channel.
Because Phenylalanine is highly hydrophobic, these
tails stick together to form a jelly-like mesh, often
described as a beaded curtain.
Viral proteins, specifically the HIV-1 Capsid
Figure 3: The life cycle of the HIV-1 virus. The
early-stage begins with the recognition of host cell
receptors (1), resulting in the fusion of the virus and
release of the viral core into the cytoplasm of the host
cell (2). This is followed by the trafficking of the
core through the cytoplasm (3) as reverse transcription
and uncoating begins to take place (4). Once at the
nuclear pore, the viral contents are imported into the
nucleus and localized (5) to transcriptionally active
chromatin while uncoating and reverse transcription
are completed (6). Following uncoating and reverse
transcription, integration occurs (7). After the viral
genome is integrated into the host cell, viral genes
are transcribed (8) and translated (9) into the Gag
polyprotein. The Gag polyprotein then localizes to
the host cell membrane (10), where budding occurs
(11), followed by the release of an immature virion (12).
The final step in the HIV-1 lifecycle is maturation (13),
where the viral protease cleaves the Gag polyprotein
into its constituent, functional proteins. Image courtesy:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1a93/7910843/ec34e8ccf0ce/life-11-00100-g003.jpg
(CA), have evolved a remarkable ability to navigate
this mesh. Through high-resolution crystallography,
researchers have identified FG-binding pockets on the
surface of the viral capsid.
The PPI between the viral capsid and the FXFG
repeats is transient and low-affinity. This allows the
virus to melt through the hydrophobic mesh. The capsid
essentially hops from one FXFG peptide to the next,
using these repeats as stepping stones to glide into the
nucleoplasm. This mimicry is so effective that the T
cell’s NPC recognises the viral capsid not as a foreign
invader, but as a legitimate transport receptor carrying
authorised cargo.
Figure 4: FxFG-repeat cores bind to a hydrophobic
depression at the dimer interface of yNTF2. (A)
Stereo drawing showing how the FxFG peptide (chain
E, yellow) binds to residues from both chains of the
NTF2 dimer (chain A, red and chain B, blue). Image
courtesy: https://www.researchgate.net/profile/B-Quimby
2.6 Protein-Protein Interactions (PPI) as Therapeutic Targets
The study of PPIs between viral proteins and T
cell NUPs has shifted the focus of drug discovery.
Traditional antivirals often target viral enzymes (such
as reverse transcriptase), but newer Inhibitors of Nuclear
Entry target the physical interactions between the virus
and its host. By designing small molecules that occupy
the FXFG-binding pockets on the viral capsid, scientists
can prevent the virus from touching the NUPs. Without
this physical contact, the virus remains trapped in the
cytoplasm, where it is eventually detected and degraded
by the T cell’s internal sensors (like cGAS/STING),
leading to a successful immune response.
2.7 Protein Interactions in Health and Disease
The precise network of protein-protein interactions
is fundamental to maintaining cellular health, and
disruptions in this network are at the heart of many
diseases. For example, in some cancers, proteins in
signalling pathways can become “stuck” in their active,
interacting state, leading to uncontrolled cell growth.
An example of a disease caused by abnormal
PPIs is Alzheimer’s disease. This neurodegenerative
condition is characterised by the misfolding and
aggregation of proteins in the brain, particularly
amyloid-beta and tau. These proteins engage in harmful
interactions, forming large, insoluble plaques and
tangles that are toxic to neurons and disrupt brain
function.
Understanding the role of specific PPIs in disease
has opened new avenues for medical treatment. Modern
drug development often aims to create molecules
that can specifically block a single, disease-causing
protein interaction. This targeted approach promises
greater precision and potentially fewer side effects. For
instance, researchers design small molecules that bind
to the interface between two proteins, preventing their
interaction and disrupting the disease process.
References
“Molecular architecture of the inner ring scaffold of the human nuclear pore complex,” Science. https://www.science.org/doi/10.1126/science.aaf0643
B. Quimby, ResearchGate profile. https://www.researchgate.net/profile/B-Quimby
Kim, S. J., et al. (2018). Integrative structure and functional anatomy of a nuclear pore complex. Nature, 555(7697), 475–482.
Matreyek, K. A., & Engelman, A. (2013). Viral and cellular determinants of HIV-1 nuclear entry. Future Virology, 8(5), 425–442.
Vollmer, B., & Antonin, W. (2014). The control of nuclear pore complex assembly. Frontiers in Cell and Developmental Biology, 2, 30.
Lin, J. R., et al. (2020). The Role of NUP93 in Nuclear Transport and Immune Evasion. Journal of Cell Science, 133(10).
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Part IV
Computer Programming
Astronomical Computing
by Ajay Vibhute
airis4D, Vol.4, No.2, 2026
www.airis4d.com
1.1 What Is Astronomical
Computing?
1.1.1 Introduction: Why Astronomy Became
Computational
Astronomy has always been driven by observation,
but over the past century the nature of those observations
has changed dramatically. Telescopes have grown
larger, detectors more sensitive, and the volume of
data generated by modern observatories has reached
levels unimaginable in the early 20th century. Where
astronomers once analyzed measurements manually,
they now rely on sophisticated computational tools to
calibrate, reduce, and interpret data. Computational
techniques are no longer optional; they have become
essential instruments in their own right, enabling
discoveries that would be impossible to achieve with
human effort alone.
The computational nature of modern astronomy is
not limited to number crunching. It encompasses the
entire lifecycle of astronomical knowledge: from the
initial detection of photons, through data reduction and
imaging, to simulations that test theories of cosmic
evolution. In this sense, astronomical computing
forms the backbone of the field, linking hardware,
software, and human insight in the quest to understand
the universe.
1.1.2 Historical Roots of Astronomical
Computing
The roots of astronomical computing trace back
to the era of celestial navigation and the manual
Figure 1: Modern observatories generate vast amounts
of data requiring computational analysis.
calculation of ephemerides, tables predicting the
positions of planets, moons, and stars. Before digital
computers, astronomers relied on logarithmic tables,
mechanical calculators, and punched cards to perform
these complex calculations. Tasks that could take
weeks or months required careful, repetitive human
effort, often carried out by dedicated observatory
staff. The mid-20th century brought a revolution
with the arrival of digital computers. Mainframes
such as the IBM 704 enabled astronomers to automate
calculations for stellar motions, orbital mechanics,
and the reduction of photographic plates. This
computational power made large-scale projects feasible,
including the Hipparcos satellite, which precisely
measured the positions of over 100,000 stars, and
the Sloan Digital Sky Survey (SDSS), which mapped
millions of celestial objects across multiple wavelengths.
These initiatives demonstrated that computation could
turn raw measurements into quantitative, statistically
robust insights, enabling discoveries that would
have been impossible by hand. By the late 20th
century, the concept of a computational pipeline
became standard. Pipelines automate the conversion
of raw measurements into scientifically meaningful
data products, typically including calibration, noise
reduction, and integration of multiple observations.
Modular pipelines allow algorithms to be adjusted
or replaced without reprocessing entire datasets,
enhancing efficiency, reproducibility, and transparency.
The historical progression shows a clear trend: as
astronomical data volumes increased, so did the need
for robust computational methods. What began as
manual calculation has evolved into sophisticated
software systems capable of processing terabytes of
data, integrating multiple instruments, and performing
complex analyses. Today, computational methods are
an inseparable part of astronomical observation, acting
both as a tool for measurement and a lens through which
the cosmos is interpreted.
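The pipeline concept described above (modular stages for calibration, noise reduction, and combination of observations that can be swapped independently) can be sketched in a few lines. All stage functions and numbers here are toy placeholders, not any real observatory's pipeline:

```python
import numpy as np

# A toy modular pipeline: each stage is an ordinary function mapping
# one array to another, so a stage can be replaced without touching
# the rest of the pipeline.

def subtract_bias(frame, bias=100.0):
    """Remove a constant detector offset (placeholder calibration step)."""
    return frame - bias

def flat_field(frame, flat):
    """Correct pixel-to-pixel sensitivity variations."""
    return frame / flat

def stack(frames):
    """Combine repeated exposures with a median to suppress outliers."""
    return np.median(frames, axis=0)

def run_pipeline(raw_frames, flat):
    calibrated = [flat_field(subtract_bias(f), flat) for f in raw_frames]
    return stack(calibrated)

# Simulated raw frames: bias level 100, true signal 50, readout noise.
rng = np.random.default_rng(0)
flat = np.ones((4, 4))
raw = [100.0 + 50.0 + rng.normal(0, 2, (4, 4)) for _ in range(5)]
image = run_pipeline(raw, flat)
print(image.round(1))  # values cluster near the true signal level of 50
```

Because each stage is independent, a better flat-fielding or stacking algorithm can be dropped in without reprocessing logic elsewhere, which is the reproducibility benefit the text describes.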
Figure 2: Early computational tools such as punch cards
and mainframes were pivotal in large-scale astronomical
calculations.
1.1.3 What Makes Astronomical Data
Different
Astronomical data differ from typical datasets
in several fundamental ways. Observations are
inherently constrained by the physical properties of
detectors, including sensitivity, dynamic range, and
pixel scale, as well as by the environment—for
ground-based telescopes, atmospheric turbulence blurs
images, introduces background light, and limits faint-
source detection. Even in space-based observatories,
limitations such as cosmic ray hits, thermal noise, and
finite detector resolution affect the raw measurements.
Noise is unavoidable and arises from multiple sources:
photon statistics, electronic readout fluctuations, cosmic
ray events, and background contamination. Each
measurement is also convolved with the instrument’s
response function—for instance, the point-spread
function (PSF) in imaging or the spectral response
in spectrographs—meaning that raw counts are only
an indirect representation of the intrinsic physical
quantities astronomers wish to measure. Astronomical
datasets are also massive and multi-dimensional. Large
surveys can catalog billions of objects, each with
repeated measurements over time, across multiple
wavelengths, and sometimes in different polarization
states. The data are structured in intertwined
spatial, temporal, and spectral dimensions, forming
a complex, high-dimensional space. This richness
enables deep scientific insights but also requires
sophisticated computational methods for analysis, such
as image deconvolution, time-series modeling, and
multi-wavelength cross-correlation. These unique
challenges distinguish astronomical computing from
conventional data science, emphasizing the need for
specialized algorithms, carefully designed pipelines,
and domain-specific data management techniques
capable of handling scale, complexity, and uncertainty
inherent to the cosmos.
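To make the instrument-response point concrete, the following sketch simulates a raw observation: a point source convolved with a Gaussian point-spread function, with Poisson photon noise added. The source flux and PSF width are invented for illustration; real responses are measured, not assumed:

```python
import numpy as np

# Simulate how a raw measurement is the intrinsic signal convolved
# with the instrument's PSF, plus photon (Poisson) noise.

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()  # normalise so total flux is conserved

# Intrinsic scene: a single point source of 10,000 counts.
scene = np.zeros((15, 15))
scene[7, 7] = 10_000.0

# Convolve with the PSF via FFT (scene and PSF share one grid here;
# ifftshift moves the PSF centre to the array origin).
psf = gaussian_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))

# Photon statistics: each pixel's count is Poisson-distributed.
rng = np.random.default_rng(1)
observed = rng.poisson(np.clip(blurred, 0, None))

print(f"total flux in: {scene.sum():.0f}, out: {observed.sum():.0f}")
```

The observed image is a smeared, noisy stand-in for the point source, which is exactly why deconvolution and statistical modelling are needed to recover the intrinsic quantities.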
1.1.4 Computing as an Astronomical
Instrument
Computation in astronomy is not merely a tool for
processing data; it functions as a scientific instrument
in its own right. Just as a spectrograph disperses light
to reveal the chemical composition of a star, or a radio
dish converts incoming waves into measurable voltages,
computational algorithms transform raw measurements
into scientifically meaningful information. Algorithms
such as image deconvolution, interpolation, source
extraction, and statistical modeling do more than
automate analysis—they actively shape the data,
determining which structures, correlations, or subtle
signals can be recovered from noisy observations. The
choice of method, the parameters selected, and the
assumptions embedded in the algorithm all influence
the final scientific result, much like the optical design
and calibration of a physical instrument. Modern
astronomical software often integrates simulation,
forward modeling, and visualization tools, allowing
researchers to explore not only what the data show
but also how uncertainties propagate through analysis
pipelines. For example, synthetic images can be
generated with known noise and instrumental effects to
test the fidelity of deconvolution algorithms, while
Monte Carlo simulations can assess the reliability
of derived parameters such as stellar mass or
distance. By enabling these tests and providing
interpretive frameworks, computation extends the
capabilities of telescopes, allowing astronomers to
make discoveries that would be impossible through
hardware alone. In effect, software pipelines, models,
and computational methods collectively act as virtual
instruments, complementing and amplifying the power
of physical observatories.
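The Monte Carlo idea mentioned above can be sketched for one simple derived parameter: the distance obtained from a parallax measurement. The parallax value and its error below are invented for illustration:

```python
import numpy as np

# Monte Carlo propagation of measurement uncertainty into a derived
# parameter: distance (pc) = 1000 / parallax (milliarcseconds).
# The measured parallax and its error are illustrative numbers.

parallax_mas = 10.0   # measured parallax, milliarcseconds
sigma_mas = 0.5       # 1-sigma measurement error

rng = np.random.default_rng(42)
samples = rng.normal(parallax_mas, sigma_mas, size=100_000)
distances = 1000.0 / samples  # derived distance in parsecs

# The spread of derived distances quantifies the parameter's
# reliability, including the asymmetry the 1/x mapping introduces.
med = np.median(distances)
print(f"distance = {med:.1f} pc "
      f"(+{np.percentile(distances, 84) - med:.1f} "
      f"/ -{med - np.percentile(distances, 16):.1f})")
```

The same pattern (draw inputs from their error distributions, push them through the analysis, read off the output distribution) applies to any derived quantity in a pipeline, such as stellar mass.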
1.1.5 Scope and Boundaries of the Field
Astronomical computing encompasses a broad
and diverse range of activities that touch nearly
every stage of modern observational and theoretical
astronomy. It begins with data acquisition and
calibration, ensuring that raw signals from telescopes
are accurately recorded and corrected for instrumental
and environmental effects. From there, it extends
to data reduction and analysis, where algorithms
transform noisy, incomplete, or high-dimensional
measurements into scientifically meaningful quantities.
Beyond observation, astronomical computing also
includes the simulation of astrophysical phenomena,
from galaxy formation to stellar evolution, allowing
researchers to compare theoretical predictions with
observations. To make sense of these massive
and complex datasets, scientists employ advanced
visualization techniques, exploring multi-dimensional
parameter spaces and temporal sequences that would
be impossible to interpret manually. Finally, the field
requires robust data management and archiving systems
capable of handling petabyte-scale datasets and ensuring
long-term accessibility for both current and future
research. Despite this breadth, astronomical computing
is constrained by the physical limitations of instruments,
the stochastic nature of photons, and the computational
resources available. Large datasets and high-resolution
simulations push the boundaries of storage, memory,
and processing power, necessitating careful design of
algorithms and pipelines to achieve both efficiency
and accuracy. Recognizing these limitations is critical
for building robust analysis frameworks and correctly
interpreting the results. Astronomical computing is
inherently interdisciplinary, drawing upon principles
from computer science, statistics, applied mathematics,
and physics, while being tightly coupled to the practical
and scientific needs of astronomy. Researchers
must strike a delicate balance between algorithmic
sophistication, computational feasibility, and physical
fidelity. Sophisticated methods are only valuable if they
produce results that are interpretable, reproducible, and
scientifically meaningful, bridging the gap between raw
data and our understanding of the cosmos.
1.1.6 Summary and Outlook
Astronomical computing represents the
intersection of observation, theory, and computation.
It transforms vast, noisy, and complex datasets into
structured knowledge about the universe. The field
has grown from simple calculations to sophisticated
pipelines and simulations, forming a central component
of modern astronomy. Looking ahead, continued
increases in data volume and complexity will
further integrate computation into every stage of
astronomical research, making the development
of robust, transparent, and efficient computational
methods more crucial than ever.
About the Author
Dr. Ajay Vibhute is currently working
at the National Radio Astronomy Observatory in
the USA. His research interests mainly involve
astronomical imaging techniques, transient detection,
machine learning, and computing using heterogeneous,
accelerated computer architectures.
The Disappearing Role of Judgment in
Automated Systems
by Jinsu Ann Mathew
airis4D, Vol.4, No.2, 2026
www.airis4d.com
Automation doesn't usually arrive as a bold
decision. It slips in quietly.
A recommendation replaces a choice. A score
replaces a conversation. A system that once suggested
now decides. Over time, no one remembers exactly
when judgment stopped being exercised and started
being assumed.
In many modern systems, humans are still present,
but mostly as spectators. Algorithms produce rankings,
flags, or predictions, and these outputs move smoothly
into action. The human role is often limited to approving
what the system already decided or stepping in only
when something goes visibly wrong.
What disappears in this process is not control, but
judgment. Judgment is the ability to pause, question,
and interpret. It is the capacity to say “this may be
correct, but it doesn't fit this situation.” When systems
are designed to minimize friction, that pause becomes
inconvenient, and disagreement starts to feel like an
error.
This article explores how judgment is quietly
designed out of automated systems, why that happens,
and what we risk losing when decisions become efficient
but unexamined.
2.1 Prediction Quietly Becomes
Decision
Consider this example. A large company
introduces an automated system to help screen job
applications. The goal is not to replace human recruiters,
but to manage volume. The system scores resumes and
ranks candidates so recruiters can focus their attention
more efficiently.
In the beginning, recruiters use the ranking as a
guide. They scroll through the list, occasionally opening
lower-ranked profiles out of curiosity. The system feels
like a helpful assistant.
Over time, the workflow changes. Only the top-
ranked candidates are reviewed. Others are rarely seen.
There is no rule forbidding recruiters from checking the
rest, but doing so feels unnecessary and time-consuming.
Trust in the system grows, not because it is perfect, but
because it is consistent.
Eventually, the ranking stops being a suggestion
and starts functioning as a decision. Candidates
filtered out by the system effectively disappear from
consideration, even though a human is still officially
“in charge.”
This is how prediction quietly becomes decision.
Authority is not transferred through policy or instruction,
but through habit. When a system is always present,
always confident, and rarely questioned, its outputs
begin to define the boundaries of action.
The human role remains, but it changes in nature.
Judgment is no longer exercised across all cases. It
is reserved only for those situations where the system
clearly fails. Most decisions happen automatically,
without anyone explicitly deciding that they should.
What is lost is not oversight, but engagement.
When predictions are treated as conclusions, judgment
fades not because it is forbidden, but because it is
no longer required.
2.2 Automation Bias, Objectivity, and the Loss of Responsibility
Once automated systems become routine, trust in
them stops being an active choice and becomes a default.
Outputs arrive as scores, rankings, or flags, and these
formats carry an air of objectivity. They look neutral,
precise, and detached from human judgment.
This appearance matters. Disagreeing with a
system requires effort and explanation, while agreeing
requires none. Over time, this imbalance shapes
behavior. People do not blindly follow machines; they
follow them because it is easier, safer, and institutionally
encouraged.
As trust shifts, responsibility begins to blur.
Decisions feel justified by process rather than owned by
individuals. When outcomes are questioned, the answer
often points elsewhere: the model, the threshold, the
workflow. Each step is defensible on its own, yet the
decision as a whole has no clear owner.
This is not a failure of intention. It is a structural
consequence of systems designed to minimize friction.
Efficiency removes pauses. Consistency removes
discretion. What disappears along the way is the
moment where someone must say, this decision is
mine.
The result is a peculiar kind of authority without
accountability. Automated decisions feel final, yet
responsibility is diffused across designers, operators,
and policies. When things work, this diffusion goes
unnoticed. When they fail, it becomes a serious
problem.
2.3 Judgment as a Skill That Fades
Judgment is not a switch that can be turned on
when needed. It is a skill maintained through regular
use. When automated systems handle most decisions,
humans lose opportunities to practice interpreting
context, weighing exceptions, and noticing subtle
signals.
Over time, this changes the human role. People
become monitors rather than decision-makers. They
learn how to manage systems, not how to judge
situations. When rare or unfamiliar cases appear,
intervention becomes difficult precisely because
judgment has not been exercised regularly.
This creates a fragile dependency. Systems work
smoothly under familiar conditions, but when reality
drifts outside their assumptions, the humans overseeing
them are least prepared to step in. Failures appear
sudden, even though the conditions for failure have
been building quietly.
In this sense, automation does not merely remove
judgment from daily decisions; it erodes the capacity
for judgment itself.
2.4 Stability Hides Fragility
Consider an automated navigation system used
by emergency response teams to determine the fastest
route to an incident. Under normal conditions, the
system works exceptionally well. It accounts for traffic,
distance, and historical patterns, and responders learn
to trust it without hesitation.
During a large public event, however, conditions
change. Roads are closed unexpectedly, temporary
barriers appear, and crowd movement alters traffic
in ways the system has not seen before. The
navigation system continues to recommend routes that
are technically optimal but practically unusable.
Responders hesitate. Deviating from the system
feels risky, especially under pressure. Following it feels
defensible. Minutes are lost before someone decides
to rely on local knowledge rather than the automated
guidance.
The failure appears sudden, but it is not accidental.
The system was designed for efficiency in stable
conditions, not for judgment under uncertainty. Over
time, responders had stopped actively reasoning about
routes because the system rarely required them to do
so.
This is how stability hides fragility. When
automation works well most of the time, it discourages
independent judgment. When conditions change, the
system’s confidence becomes a liability, and human
intervention arrives too late.
What looks like a routing error is, at its core, a
judgment failure that developed long before the crisis.
2.5 Reintroducing Judgment by
Design
If judgment disappears because systems make
decisions too easily, then bringing it back requires
deliberate design choices.
Consider an automated loan approval system used
by a bank. For most applications, the system works
smoothly. Applicants are approved or rejected within
seconds, and human officers are rarely involved. The
process is fast, consistent, and efficient.
Now imagine an applicant with an unusual profile:
a freelancer with irregular income but a long history
of timely repayments. The system rejects the application
because it does not fit familiar patterns. A human
officer can technically override the decision, but doing
so requires extra documentation, justification, and
approval. Following the system is easier.
Over time, officers stop intervening. Not because
they disagree less, but because the system makes
disagreement inconvenient.
Designing for judgment means avoiding this
situation. If humans are expected to exercise judgment,
systems must make it practical to do so. Questioning
a decision should not feel like breaking the process.
Overrides should be part of normal operation, not
treated as exceptions.
Judgment also needs practice. If humans only
step in during rare failures, they lose confidence and
context. Systems should encourage regular review
and engagement, even when everything appears to be
working. This keeps people familiar with how decisions
are made and where their limits lie.
Reintroducing judgment does not mean rejecting
automation. It means using automation to handle
routine cases while preserving human responsibility for
interpretation, edge cases, and consequences. When
judgment is designed into the system, automation
becomes more reliable, not less.
2.6 Conclusion - Keeping Judgment in the Loop
Automated systems have become part of everyday
decision-making, often without much notice. They help
us move faster, process more information, and maintain
consistency at scale. Used well, they are powerful tools.
Problems arise when efficiency quietly replaces
judgment. When decisions flow smoothly from
prediction to action, the human role shrinks, not
because people are excluded, but because they are
no longer needed to think. Responsibility becomes
diffused, intervention becomes rare, and failures feel
sudden when they occur.
Judgment is not an obstacle to good systems. It is
what allows systems to adapt when conditions change,
rules break down, or consequences matter more than
speed. Removing judgment may make systems efficient,
but it also makes them fragile.
The question, then, is not how much automation
we should adopt, but how we design it. Systems
should handle repetition and scale, while leaving
room for human interpretation, disagreement, and
responsibility. When judgment remains part of the
process, automation does not replace human decision-
making; it strengthens it.
In the end, the most reliable systems are not those
that eliminate judgment, but those that know where it
still belongs.
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical
Informatics. Her interests include applying basic
scientific research in computational linguistics,
practical applications of human language technology,
and interdisciplinary work in computational physics.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
Intelligence Research and Intelligent Systems pursues this vision and looks for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist talent in materialising a design into a product, a service, or knowledge that benefits the community
and the planet.
Deploy
To realise, and to educate humanity, that knowledge which is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by solar energy and has a rainwater harvesting facility that provides a sufficient
water supply for up to three months of drought. The computing facility on the campus is accessible from anywhere
through dedicated optical-fibre internet connectivity, 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the
site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that feed
birds and maintains water bodies to help them survive the drought.