Cover page
“Kakka Vezhambal” is another name for the Malabar Grey Hornbill (Ocyceros griseus), a bird endemic to
the Western Ghats. Commonly found in Kerala’s evergreen and moist deciduous forests, it is known for its
loud, cackling calls that are considered a characteristic sound of the region’s wilderness. Recognizable by its
brownish-grey plumage and yellow-orange bill, it primarily feeds on fruits like figs and plays a vital role in the
forest ecosystem as a seed disperser. Photograph taken by Geetha Paul from airis4D campus.
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.4, No.3, 2026
www.airis4d.com
We are delighted to announce that airis4D Journal
will feature a new monthly column titled Vijnanam—the
Sanskrit word for knowledge—beginning in the May
2026 issue. The column will be authored by Prof.
K. Babu Joseph, former Vice-Chancellor of Cochin
University of Science and Technology (CUSAT) and
currently an advisor to this journal. A distinguished
physicist and accomplished writer, Prof. Joseph has
published extensively on popular science, philosophy,
and poetry in both English and Malayalam. Vijnanam
will primarily explore themes in science, though it will
occasionally venture into other disciplines, offering
readers a rich and thoughtful perspective shaped by his
multifaceted intellectual pursuits. We warmly welcome
Prof. Babu Joseph to this regular feature and look
forward to his valuable contributions.
This edition starts with “The Power of Small
Visual Language Models,” by Dr. Arun Aniyan,
who argues that the future of artificial intelligence
lies not in massive, resource-intensive systems but
in their compact, efficient counterparts known as
Small Visual Language Models (SVLMs). Aniyan
explains that SVLMs achieve their efficiency through a
significantly reduced parameter count and specialized
training techniques like knowledge distillation, where a
smaller “student” model learns from a larger “teacher”
model’s nuanced understanding. This streamlined
design enables faster response times, lower energy
consumption, reduced costs, and most importantly,
allows these models to run directly on edge devices
such as smartphones and smart cameras. By enabling
this on-device intelligence, SVLMs enhance data
privacy while powering practical applications across
accessibility tools, manufacturing quality control, retail
management, and contextual smart home systems,
ultimately representing what Aniyan describes as a
crucial step toward embedding ubiquitous, personal AI
into everyday technology.
In “Introduction to Neuromorphic Computing-
Part II,” author Dr. Blesson George explores how
brain-inspired computing offers an energy-efficient
alternative to traditional systems by mimicking the
neural and synaptic structures of the human brain.
George explains that, unlike conventional computers,
which suffer from the Von Neumann bottleneck and high
power consumption, neuromorphic systems use event-
driven Spiking Neural Networks (SNNs) and specialized
chips like IBM’s TrueNorth and Intel’s Loihi to
perform parallel, real-time processing with remarkable
efficiency. The article highlights how these systems
integrate memory and computation to enable low-power
applications in robotics, edge AI, and adaptive sensing,
while also acknowledging ongoing challenges such as
the lack of standard programming frameworks and
the difficulty of training SNNs. Ultimately, George
positions neuromorphic computing as a promising
paradigm shift toward intelligent systems that can
learn continuously and operate with minimal energy
consumption.
In “Plasma Physics- Magnetosphere & Ionosphere,” Abishek P S examines how Earth’s
magnetic field creates a dynamic plasma environment
that shields the planet while continuously exchanging
energy and particles with the upper atmosphere. The
article explains that the magnetosphere, formed by
Earth’s internal dynamo and shaped by the solar wind,
traps charged particles in radiation belts and funnels
them toward the poles to create auroras, while the
ionosphere acts as both a source of plasma—supplying
ions like oxygen and hydrogen upward—and a sink
where precipitating particles deposit energy. Abishek
details the coupling mechanisms that link these regions,
including field-aligned currents that act as electrical
pathways, Alfvén waves that transfer energy along
magnetic field lines, and the pressure cooker effect
that drives heavy ions into space during geomagnetic
storms. The author emphasizes that this interconnected
system serves as a natural plasma laboratory where
scientists can study universal processes like turbulence,
instabilities, and nonlinear coupling through direct
satellite measurements, offering insights that extend
from space weather forecasting to understanding
astrophysical phenomena across the universe.
Aromal P provides in “X-ray Astronomy: Theory”
a comprehensive overview of the primary cosmic
sources that emit X-rays and the physical mechanisms
responsible for their radiation. The article explains
that X-ray emission across the universe arises from
extreme environments where matter is heated to millions
of degrees or particles are accelerated to relativistic
speeds. Stellar coronae, like our Sun’s outer atmosphere,
produce X-rays through magnetic reconnection and
thermal bremsstrahlung, while supernova remnants
generate both thermal emission from shock-heated gas
and non-thermal synchrotron radiation from pulsar-
accelerated electrons. X-ray binaries, the brightest
sources in our galaxy, involve neutron stars or black
holes accreting matter from companion stars, producing
X-rays through disk viscosity, surface impact, and
inverse Compton scattering. The author also describes
isolated neutron stars cooling from formation or
powered by magnetic field decay in magnetars, active
galactic nuclei where supermassive black holes generate
X-rays via coronal inverse Compton scattering, and
galaxy clusters where the intracluster medium emits
thermal bremsstrahlung from gravitational heating.
Aromal, whose research focuses on thermonuclear X-ray
bursts on neutron stars, presents these diverse sources
as a testament to the universal processes of accretion,
magnetic reconnection, and shock heating that make
X-ray astronomy a powerful window into the most
energetic phenomena in the cosmos.
In “Giant Molecular Clouds: Structure, Evolution,
and Their Role in Star Formation,” Sindhu G provides
a comprehensive overview of the complex, multi-
scale processes that transform cold interstellar gas
into nuclear-burning stars. The article explains
that star formation occurs almost exclusively within
giant molecular clouds, where dense cores become
gravitationally unstable when thermal pressure,
turbulence, and magnetic fields can no longer support
them against collapse—a condition quantified by the
Jeans criterion. As collapse proceeds, conservation
of angular momentum leads to the formation of
circumstellar accretion disks that regulate material flow
onto the central protostar, while magneto-centrifugal
forces launch bipolar jets and outflows that remove
excess angular momentum and provide feedback into
the surrounding medium. Sindhu describes how
protostars evolve through embedded phases, powered
by gravitational contraction rather than fusion, before
emerging as pre-main-sequence objects like T Tauri
stars that gradually contract until hydrogen ignition
marks their arrival on the zero-age main sequence.
The author emphasizes that formation timescales vary
dramatically with mass—from millions of years for
Sun-like stars to less than a hundred thousand years
for massive stars—and highlights how modern infrared
and sub-millimetre observations from facilities like
JWST continue to refine our understanding of these
fundamental processes that shape galaxies and set the
stage for planet formation.
Linn Abraham explores in “Deep Learning: A
Review” the transition from Machine Learning to Deep
Learning. The article explains that while traditional
interpretability tools like Integrated Gradients were
designed for CNN architectures, transformer models
offer a more natural approach by leveraging their
built-in attention mechanisms. Abraham demonstrates
how monkey patching—dynamically modifying code
at runtime—allows researchers to replace standard
attention blocks with patched versions without
retraining the model, while PyTorch hooks enable the
capture of attention matrices and their gradients during
forward and backward passes. By registering hooks
before and after the softmax operation in attention
layers, the AGCAM method stores attention values
and computes gradients through backpropagation, then
combines these across layers and normalizes them
with sigmoid activation to produce class activation
maps that highlight which image regions influenced the
model’s decisions. This approach provides an intuitive
visualization tool for understanding ViT behavior, with
applications ranging from model debugging to building
trust in AI systems used in fields like medical imaging
or, as in the author’s own work, solar flare prediction
and galaxy classification.
Aengela Grace Jacob explores in “Kombucha:
‘The Symbiotic Elixir’ A Comprehensive View
on its Bioprocessing,” the microbial science behind
the popular fermented tea beverage, revealing how
a simple mixture of sweetened tea transforms into a
complex probiotic drink through the action of a SCOBY
(Symbiotic Culture of Bacteria and Yeast). The article
explains that this rubbery, cellulose-based disc serves
as the living engine of fermentation, housing a diverse
community of Acetic Acid Bacteria that produce acetic
acid and build the SCOBY’s physical structure, yeasts
like Saccharomyces that convert sugar to ethanol and
carbon dioxide, and Lactic Acid Bacteria that contribute
sour flavors and probiotic properties. Jacob details
the two-phase fermentation process—an initial aerobic
stage where oxygen enables acid production, followed
by anaerobic bottle conditioning that creates natural
carbonation—and describes how enzymes like invertase
break down sucrose into fermentable sugars. The
author emphasizes kombucha’s scientifically-backed
benefits for gut health, including its ability to restore
balanced gut flora through probiotic diversity, suppress
harmful pathogens with organic acids, and increase the
bioavailability of anti-inflammatory tea polyphenols,
positioning this ancient elixir as a modern testament to
the power of microbial symbiosis in promoting human
wellness.
In “The Genomic Landscape of Major Depressive
Disorder,” Geetha Paul explores how modern
genetic research has fundamentally transformed our
understanding of depression from a single-gene disorder
to a complex polygenic trait influenced by thousands of
common genetic variations called single nucleotide
polymorphisms (SNPs). The article explains that
while individual SNPs exert only negligible effects
on depression risk, their cumulative impact—measured
through Polygenic Risk Scores (PRS)—can identify
individuals with up to five times greater susceptibility to
MDD before clinical symptoms ever manifest, enabling
preemptive monitoring and personalized interventions.
Paul highlights how SNP analysis helps resolve the
diagnostic heterogeneity of depression by revealing
distinct biological subtypes, with some patients
carrying variants concentrated in serotonin pathways
while others show genetic burdens in inflammatory
genes like IL-6. The author also examines critical
pharmacogenomic applications, detailing how specific
polymorphisms such as the BDNF Val66Met variant,
which impairs neural plasticity, and the SLC6A4 5-
HTTLPR polymorphism, which influences serotonin
transporter expression, can predict antidepressant
response and guide treatment decisions. This molecular
approach, Paul argues, moves psychiatry beyond the
traditional one-size-fits-all model toward a more precise,
stratified framework where genetic biomarkers bridge
the gap between subjective symptom reporting and
objective biological data.
Finally, Dr. Ajay Vibhute presents in “From
Telescope to Data Product” a comprehensive exploration
of how computational methods have evolved from
manual calculations and mechanical aids to become
an indispensable instrument in modern astronomy,
fundamentally transforming how scientists observe,
analyze, and understand the cosmos. The article
traces the historical progression from logarithmic
tables and punched-card mainframes like the IBM 704
to sophisticated software pipelines that now process
terabytes of data from surveys such as Hipparcos and the
Sloan Digital Sky Survey, highlighting that computation
functions not merely as a tool but as a virtual scientific
instrument that actively shapes raw measurements
into meaningful discoveries through algorithms for
image deconvolution, source extraction, and statistical
modeling. Dr. Vibhute explains that astronomical
data presents unique challenges—massive volumes,
inherent noise from multiple sources, instrumental
response functions like point-spread functions, and
multi-dimensional structure across spatial, temporal,
and spectral domains—requiring specialized techniques
and careful pipeline design that balances algorithmic
sophistication with computational feasibility. The
author emphasizes that astronomical computing
encompasses the entire knowledge lifecycle from data
acquisition and calibration through reduction, analysis,
simulation of phenomena like galaxy formation,
advanced visualization, and petabyte-scale archiving,
all while remaining constrained by physical instrument
limitations and the stochastic nature of photons. As data
volumes continue to grow, Dr. Vibhute concludes that
robust, transparent computational methods will become
increasingly crucial, cementing computation’s role as
the essential bridge connecting raw observations to our
fundamental understanding of the universe.
News Desk
Vijnanam by Prof Babu Joseph, former VC, CUSAT
airis4D has the pleasure of announcing a monthly column, titled Vijnanam, which in Sanskrit means
knowledge. It will be handled by Prof. K. Babu Joseph, formerly of CUSAT and currently an advisor to this
journal. He is a physicist and a writer of popular science, philosophy, and poetry in English and Malayalam. The
series will focus on science but will occasionally address other disciplines as well. The first contribution will
appear in the May 2026 issue.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 The Power of Small Visual Language Models 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 What is a Visual Language Model (VLM)? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 The ‘Small’ in Small Visual Language Models (SVLMs) . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Efficiency and Speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 How SVLMs Work: The Training Secret . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Why Small Models are the Future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Real-World Applications of Small Visual Language Models . . . . . . . . . . . . . . . . . . . . . 6
1.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Introduction to Neuromorphic Computing- Part II 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Basic Idea of Neuromorphic Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Why Neuromorphic Computing is Important . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Spiking Neural Networks and Simple Neuron Model . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Neuromorphic Chips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Difference from Conventional Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.7 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.8 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.9 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Deep Learning: A Review 10
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 What is Machine Learning? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.3 Classical Machine Learning Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.4 Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
II Astronomy and Astrophysics 14
1 Plasma Physics- Magnetosphere & Ionosphere 15
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2 The Magnetosphere as a Plasma Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 The Ionosphere as a Plasma Source and Sink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Coupling Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Energy Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Magnetosphere & Ionosphere in Research Perspective . . . . . . . . . . . . . . . . . . . . . . . . 18
2 X-ray Astronomy: Theory 20
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 X-ray Binaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Giant Molecular Clouds: Structure, Evolution, and Their Role in Star Formation 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Chemical Composition and Physical Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Formation of Giant Molecular Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Internal Structure of GMCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.5 Cloud Dynamics: Turbulence and Magnetic Fields . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Star Formation in Giant Molecular Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.7 Lifetimes and Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.8 Observational Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.9 Role in Galactic Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
III Biosciences 27
1 Kombucha: “The Symbiotic Elixir”
A Comprehensive View on its Bioprocessing 28
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.2 SCOBY: The Heart of the Brew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.3 Microbial Contents: The Living Force . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.4 The Fermentation Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.5 Major Benefits on Gut Health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2 The Genomic Landscape of Major Depressive Disorder 31
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2 Quantifying Genetic Liability via Polygenic Risk Scores (PRS) . . . . . . . . . . . . . . . . . . . 31
2.3 Resolving Diagnostic Heterogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.4 Pharmacogenomics and Treatment Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
IV Computer Programming 35
1 From Telescope to Data Product 36
1.1 From Telescope to Data Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Part I
Artificial Intelligence and Machine Learning
The Power of Small Visual Language Models
by Arun Aniyan
airis4D, Vol.4, No.3, 2026
www.airis4d.com
1.1 Introduction
Artificial Intelligence (AI) is rapidly transforming
every sector of the world, and at the forefront
of this evolution are models with the remarkable
capability to process and understand both textual
and visual information. These sophisticated systems
are broadly categorized as Visual Language Models
(VLMs). While the public imagination is often
captured by immense, resource-intensive models that
boast staggering performance metrics, a quieter, yet
profoundly significant, technological shift is underway.
This shift centers on their leaner, more efficient cousins:
Small Visual Language Models (SVLMs).
The rise of SVLMs represents a critical inflection
point in the AI landscape. They are designed to achieve
high performance with a fraction of the parameters,
computational power, and memory footprint required
by their massive predecessors. This focus on efficiency
makes them ideal for deployment in environments where
resources are constrained, such as mobile devices, edge
computing systems, and integrated hardware solutions.
This detailed exploration will delve into the core
of the AI revolution driven by these compact systems.
We will meticulously define the technical architecture
and operational mechanisms of Small Visual Language
Models, scrutinize the compelling reasons behind their
growing importance—ranging from cost reduction and
energy efficiency to improved privacy and real-time
inference—and analyze the transformative impact they
are having on the design and functionality of everyday
technology.
1.2 What is a Visual Language Model
(VLM)?
To understand a small VLM, we first need to grasp
the concept of a regular VLM.
Imagine a machine that can look at a picture and
simultaneously read a description of it, then use that
combined understanding to answer questions or generate
new text. This is the essence of a Visual Language
Model. It is a type of AI that has been trained on a
massive dataset of images paired with descriptive text
(like captions, articles, or transcripts).
VLMs have two main “brains” working together:
A Vision Encoder: This part processes the
image, recognizing objects, shapes, colors, and spatial
relationships. It turns the visual information into a
numerical code the computer can understand.
A Language Decoder: This part handles the text,
understanding grammar, meaning, and context. It’s
often a powerful language model, similar to the ones
used in smart assistants or translation tools.
The magic happens when these two parts learn
to communicate, creating a unified understanding of
“what is seen” and “what is said.”
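To make this two-part structure concrete, here is a minimal sketch in PyTorch. Everything in it (the class name, the layer sizes, the GRU-based decoder, and the way the visual embedding seeds the decoder state) is an illustrative assumption, not the architecture of any particular published VLM.

```python
# A minimal sketch of the encoder/decoder split described above.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ToyVLM(nn.Module):
    def __init__(self, embed_dim=256, vocab_size=1000):
        super().__init__()
        # Vision encoder: turns an image into a numerical code.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Language decoder: predicts text tokens, conditioned on the image.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, image, tokens):
        visual = self.vision_encoder(image)        # (B, embed_dim)
        text = self.token_embed(tokens)            # (B, T, embed_dim)
        # Fuse the two parts: the visual code becomes the decoder's
        # initial hidden state, so text generation "sees" the image.
        h0 = visual.unsqueeze(0)                   # (1, B, embed_dim)
        out, _ = self.decoder(text, h0)
        return self.lm_head(out)                   # (B, T, vocab_size)

model = ToyVLM()
logits = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (1, 8)))
```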
1.3 The ‘Small’ in Small Visual
Language Models (SVLMs)
When we refer to Small Visual Language Models
(SVLMs), the descriptor “small” is fundamentally tied
to the model’s size and its computational requirements.
Unlike their larger, often monolithic counterparts,
SVLMs are characterized by a significantly reduced
number of parameters. This decreased parameter count
directly translates into a smaller memory footprint,
making these models substantially more accessible for
deployment on edge devices, mobile platforms, and
resource-constrained environments where large-scale
GPU clusters are unavailable or impractical.
The minimized computational requirements are
a critical advantage. This involves lower demands for
processing power (FLOPs) during both the training
and, crucially, the inference phases. A smaller model
is faster to train, reducing the associated energy
consumption and monetary cost, democratizing the
development of multimodal AI. Furthermore, their
lower inference latency and energy expenditure allow
for real-time processing of visual and textual data locally,
unlocking new applications in areas like augmented
reality, robotics, and instantaneous on-device image
analysis without needing constant cloud connectivity.
Therefore, “small” is not a compromise on capability
but a strategic design choice focused on efficiency,
accessibility, and real-world deployability.
1.3.1 Model Size (Parameters)
Every AI model, regardless of its specific
application—whether it’s a sophisticated Large
Language Model (LLM) or a specialized Small Visual
Language Model (SVLM)—is fundamentally built upon
an enormous number of adjustable settings referred
to as parameters. These parameters are the core,
tangible components that define the model’s structure
and capabilities.
Think of parameters as the accumulated
knowledge, memories, and intricate connections that
reside within the AI’s artificial brain. During the
training process, the AI is fed vast amounts of data
(text, images, code, etc.). As it processes this data,
it constantly adjusts the values of these billions of
parameters. These adjustments are the process through
which the model learns to identify patterns, understand
context, generate coherent text, or recognize objects in
an image.
Essentially, the greater the number of parameters,
the more potential capacity the model has to learn
complex relationships, store nuanced information, and
perform diverse tasks. It is the precise configuration
of these parameters—the delicate balance of their
numerical values—that allows an AI to successfully
execute its designed function.
SVLMs contain significantly fewer parameters
than their massive counterparts. This reduction is
intentional and offers profound advantages.
1.4 Efficiency and Speed
Smaller models require less computing power,
which leads to several practical benefits:
Faster Response Times (Latency): Because
there are fewer calculations to perform, an SVLM
can process an image and generate a response
much quicker. This is crucial for real-time
applications like autonomous driving or instant
camera translations.
Reduced Energy Consumption: Less
processing power translates directly to less energy
usage. This is vital for sustainability and for
running AI on battery-powered devices.
Lower Cost: Training and deploying smaller models is
vastly cheaper than deploying large models, making the
technology accessible to a wider range of businesses
and developers.
1.5 How SVLMs Work: The Training Secret
SVLMs don’t just shrink the large models; they are
often trained using specialized techniques to maximize
their performance despite their size. One of the most
effective and widely adopted strategies for creating
powerful, yet resource-efficient, Small Visual Language
Models (SVLMs) is a technique called knowledge
distillation. This process is a revolutionary training
paradigm that fundamentally shifts how smaller models
acquire complex capabilities, moving away from brute-
force training toward targeted, efficient knowledge
transfer.
Model Type | Typical Parameter Count | Computational Needs | Primary Use Case
VLM | Billions (e.g., 50B+) | Requires massive, specialized data centers | Cutting-edge research, complex creative tasks
SVLM | Millions to low billions (e.g., 1B to 10B) | Can run on modern mobile devices or standard computers | Edge computing, specific enterprise applications

Table 1.1: Comparison between VLMs and SVLMs
Table 1.1 compares a typical VLM with an SVLM.
Knowledge distillation leverages a two-model
system, often conceptualized using a teacher and
student analogy:
1. The ‘Teacher’ Model (VLM):
This is typically a massive, state-of-the-art
Large Visual Language Model (LVLM).
It is characterized by billions of parameters,
trained on enormous and diverse datasets
of raw images and text pairs over extensive
computational resources.
The teacher model’s ‘knowledge’ isn’t
just its final answer, but the nuanced
logic, intermediate representations, and soft
probability distributions it generates across
various tasks (e.g., classifying an object as
“90% cat, 10% dog” rather than just “cat”).
This depth of understanding forms the core
curriculum for the student.
2. The ‘Student’ Model (SVLM):
This is the small, target SVLM—the model
designed for deployment on edge devices,
mobile applications, or in environments
with strict computational and memory
constraints.
It possesses a significantly smaller
architecture with far fewer parameters,
which inherently makes it faster and more
memory-efficient during inference.
3. The Distillation Process (Efficient Knowledge Transfer):
Instead of the traditional, laborious training
approach where the SVLM is trained from
scratch on vast, raw datasets, the SVLM
is trained under the direct tutelage of the
larger teacher model.
The core goal of distillation is to train
the student to mimic the decisions,
sophisticated internal representations,
and complex knowledge encoded within
the larger teacher model. The student is
primarily guided by the teacher’s outputs
(the ‘distilled’ knowledge) rather than just
the raw labels of the original data.
This is accomplished by utilizing a
specialized distillation loss function. This
function penalizes the student model when
its predictions deviate from the teacher
model’s soft targets (i.e., its predicted
probability distributions) for a given input.
By aligning its internal workings with
the teacher’s nuanced outputs, the student
effectively learns the ‘wisdom’ of the
master.
It learns the shortcuts, the most important
conceptual features, and the efficient
decision-making pathways that the teacher
model already took immense computational
effort to figure out.
This highly optimized and focused training approach
allows the SVLM to retain a surprising, and often near-
equivalent, amount of the teacher model’s high-level
capability and performance across a range of visual-
language tasks. Critically, it achieves this without
the burden of the teacher’s massive size, enabling the
deployment of highly capable AI in resource-limited
settings and accelerating the pace of the AI revolution
on edge devices.
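As a concrete illustration, the sketch below shows one common form of the distillation loss, assuming PyTorch; the temperature, the mixing weight alpha, and the teacher/student names are illustrative assumptions rather than a prescribed recipe.

```python
# A minimal sketch of a soft-target distillation loss, assuming PyTorch.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: the teacher's full probability distribution
    # (e.g., "90% cat, 10% dog"), softened by the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence penalizes the student when its predicted
    # distribution deviates from the teacher's soft targets.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # The hard-label term keeps the student anchored to the raw labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In a training loop, the teacher’s logits would be computed under torch.no_grad(), so that only the student’s parameters receive updates.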
1.6 Why Small Models are the Future
For most real-world applications, large models are
overkill. SVLMs are perfectly suited for deployment
outside of huge data centers—a concept known as Edge
Computing.
1.6.1 On-Device Intelligence
This is perhaps the biggest advantage. An SVLM
can be entirely installed and run on a local device (the
“edge”), such as:
Smartphones: Enabling immediate, private
image analysis and captioning without needing
to send data to the cloud.
Smart Cameras: Allowing a security camera to
identify a package delivery in real-time, even if
the internet connection is slow or unavailable.
Wearable Devices: Providing instant information, like
identifying a foreign plant or translating a menu sign
through smart glasses.
1.6.2 Enhanced Data Privacy
When the AI model runs directly on your device,
your images and data never have to leave it to be
processed. This “on-device processing” significantly
enhances user privacy and is becoming a major selling
point for consumer electronics.
1.7 Real-World Applications of Small
Visual Language Models
The efficiency and local nature of SVLMs open
the door to countless practical uses:
1.7.1 Accessibility and Assistance
SVLMs are powerful tools for accessibility. For
instance, a small model running on a person’s phone
could:
Describe Environments: For people with visual
impairments, an SVLM can process a live camera
feed and verbally describe the scene: ”You are
standing next to a park bench; a dog is running
toward a red ball.”
Read Complex Documents: Instantly translate
and summarize documents like utility bills
or complex medication instructions by simply
pointing a camera at them.
1.7.2 Manufacturing and Quality Control
In a factory setting, SVLMs can be deployed on a
cheap, rugged computer to monitor production lines:
Defect Detection: A camera feed can be analyzed
in real-time to spot tiny imperfections in a product, such
as a scratch on a screen or a misaligned component.
Since the model is small, it can make decisions instantly
without causing bottlenecks.
1.7.3 Retail and Inventory Management
SVLMs are streamlining retail operations:
Stock Monitoring: Cameras in a grocery store
can use an SVLM to continually check shelves,
instantly recognizing when a product is low and
notifying staff for restocking.
Self-Checkout Verification: Ensuring
customers are scanning the correct items
by visually confirming the item against its
description.
1.7.4 Smart Home Devices
Future smart home hubs will rely on SVLMs for
complex, local intelligence:
Contextual Understanding: Not just recognizing
a person, but understanding context. For example,
recognizing that a child has spilled a cup of milk and
generating an alert for assistance.
1.8 Conclusion
Small Visual Language Models represent a crucial
step toward ubiquitous AI. They address the practical
limitations of massive models—cost, speed, and
privacy—by providing highly capable intelligence in a
compact, efficient package. As AI continues to evolve,
it’s not always the biggest models that will win, but
the smart, swift, and highly accessible SVLMs that
will embed intelligence into the very fabric of our
everyday devices, making the technology truly personal
and powerful.
About the Author
Dr. Arun Aniyan leads R&D for
artificial intelligence at DeepAlert Ltd, UK. He comes
from an academic background and has experience
in designing machine learning products for different
domains. His major interest is knowledge representation
and computer vision.
Introduction to Neuromorphic Computing-
Part II
by Blesson George
airis4D, Vol.4, No.3, 2026
www.airis4d.com
2.1 Introduction
Traditional computing systems have enabled
remarkable technological progress. However, modern
applications such as artificial intelligence, robotics,
and real-time sensing require systems that are not
only powerful but also energy efficient and adaptive.
Conventional computers consume large amounts of
energy when running complex AI models, which
motivates researchers to explore alternative computing
paradigms.
The human brain performs learning, perception,
and decision-making using very little power.
Neuromorphic computing attempts to imitate these
biological principles to create more efficient intelligent
systems.
2.2 Basic Idea of Neuromorphic
Computing
The human brain consists of neurons connected
through synapses. Neurons communicate using short
electrical pulses called spikes. Learning occurs when
the strength of synaptic connections changes over time.
Neuromorphic computing tries to reproduce
this mechanism using electronic circuits. Artificial
neurons process signals, and artificial synapses store
connection strengths. Unlike traditional computers that
continuously process data, neuromorphic systems are
event-driven and become active only when input events
occur.
This approach enables highly parallel computation
and improved energy efficiency.
2.3 Why Neuromorphic Computing is Important
The growing interest in neuromorphic computing
comes from several limitations of conventional
computing.
2.3.1 Energy Consumption
Modern AI systems require large computational
resources and high power usage. In contrast, the human
brain operates at approximately 20 W while performing
complex tasks. Neuromorphic systems aim to achieve
similar efficiency through spike-based computation.
2.3.2 Von Neumann Bottleneck
In conventional computers, memory and
processing units are physically separated. Constant
data transfer between them creates delays and energy
loss. Neuromorphic architectures combine memory
and computation, reducing this bottleneck.
2.3.3 Real-Time Intelligence
Applications such as robotics and edge devices
require fast responses and local decision-making.
Neuromorphic systems support real-time processing
due to their parallel and event-driven operation.
2.4 Spiking Neural Networks and
Simple Neuron Model
Spiking Neural Networks (SNNs) are commonly
used in neuromorphic computing because they closely
resemble biological neural systems.
A simple model used to describe neuron behavior
is the Leaky Integrate-and-Fire (LIF) model.
\tau \frac{dV(t)}{dt} = -V(t) + R I(t) \qquad (2.1)
Here, V(t) is the membrane potential, I(t) is the input current, R is the membrane resistance, and τ is the membrane time constant.
When the membrane potential reaches a threshold value V_th, the neuron generates a spike and the potential is reset. This model captures the essential firing behavior of biological neurons and forms the foundation of many neuromorphic systems.
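A short numerical sketch makes the spike-and-reset behaviour concrete. It integrates Eq. (2.1) with a simple Euler step; all parameter values here are illustrative assumptions.

```python
# Euler integration of the LIF model in Eq. (2.1) with spike-and-reset.
import numpy as np

tau, R, V_th, V_reset = 10.0, 1.0, 1.0, 0.0   # illustrative parameters
dt, T = 0.1, 100.0                            # time step and duration (ms)
steps = int(T / dt)

V = 0.0
I = 1.5                                       # constant input current
spike_times = []
for k in range(steps):
    # Euler step of tau * dV/dt = -V + R*I
    V += dt / tau * (-V + R * I)
    if V >= V_th:                             # threshold crossing
        spike_times.append(k * dt)            # record a spike...
        V = V_reset                           # ...and reset the potential

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.1f} ms")
```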
2.5 Neuromorphic Chips
Neuromorphic chips are specialized processors
designed to implement brain-inspired computation
directly in hardware. Instead of executing instructions
sequentially, these chips contain large numbers of
artificial neurons and synapses that operate in parallel.
2.5.1 Working Principle
The basic operation of a neuromorphic chip
includes:
1. Receiving input signals,
2. Integrating signals within neurons,
3. Generating spikes when thresholds are reached,
4. Transmitting spikes to connected neurons,
5. Updating synaptic strengths during learning.
Because computation occurs only when spikes
are present, these chips consume much less energy
compared to conventional processors.
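The toy loop below (Python, with illustrative network size, weights, and threshold) mirrors steps 1 to 4 of this cycle; the learning step 5 is omitted for brevity. Note how computation stops entirely when no spikes are present.

```python
# A toy event-driven sketch of steps 1-4 above; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.uniform(0.2, 0.8, size=(n, n))   # synaptic strengths W[pre, post]
V = np.zeros(n)                          # membrane potentials
V_th = 0.5
spikes = {0}                             # an initial input event

for t in range(5):
    if not spikes:                       # event-driven: idle without input
        break
    for pre in spikes:                   # 1-2. receive and integrate spikes
        V += W[pre]
    fired = {i for i in range(n) if V[i] >= V_th}
    V[list(fired)] = 0.0                 # 3. spike and reset at threshold
    spikes = fired                       # 4. transmit spikes onward
    print(f"step {t}: neurons fired {sorted(fired)}")
```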
2.5.2 Examples of Neuromorphic Chips
IBM TrueNorth is one of the earliest large-scale
neuromorphic chips, containing around one million
artificial neurons. It demonstrates extremely low power
consumption and large-scale parallelism.
Intel Loihi is another important neuromorphic
processor that supports on-chip learning. It allows
adaptive behavior and real-time learning, making it
useful for research in robotics and intelligent systems.
2.6 Difference from Conventional
Computing
Neuromorphic computing differs fundamentally
from traditional computing methods.
Traditional computers use clock-driven operation,
while neuromorphic systems are event-driven.
Memory and processing are separate in
conventional systems but integrated in
neuromorphic architectures.
Conventional AI simulates neural behavior
in software, whereas neuromorphic systems
implement it directly in hardware.
Neuromorphic systems provide low-power and
parallel processing.
These differences make neuromorphic computing
attractive for future intelligent systems.
2.7 Applications
Neuromorphic computing has potential
applications in several areas:
Robotics and autonomous systems
Speech and image recognition
Edge AI and Internet of Things (IoT)
Brain–machine interfaces
Adaptive sensing and control systems
In many of these applications, fast response and
low energy consumption are critical requirements.
2.8 Challenges
Despite its potential, neuromorphic computing
still faces challenges:
Lack of standard programming frameworks
Difficulty in training spiking neural networks
Hardware design complexity
Limited large-scale commercial adoption
Research continues to address these challenges
and improve usability.
2.9 Future Scope
Neuromorphic chips may play an important
role in next-generation artificial intelligence. Future
systems could combine conventional AI methods with
neuromorphic hardware to achieve both accuracy and
efficiency.
As research progresses, neuromorphic computing
may enable intelligent systems that learn continuously,
adapt to their environment, and operate with extremely
low power.
2.10 Conclusion
Neuromorphic computing represents a shift toward
brain-inspired computation. By using spike-based
processing and highly parallel architectures, it offers an
energy-efficient alternative to conventional computing.
Neuromorphic chips demonstrate how these ideas can
be implemented in hardware, opening new possibilities
for robotics, AI, and intelligent devices. Although still
in the early stages of development, this field has strong
potential to shape the future of computing.
About the Author
Dr. Blesson George presently serves as
an Assistant Professor of Physics at CMS College
Kottayam, Kerala. His research pursuits encompass
the development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Deep Learning: A Review
by Linn Abraham
airis4D, Vol.4, No.3, 2026
www.airis4d.com
3.1 Introduction
In this series we attempt a review of deep
learning. We start with defining machine learning
and distinguishing it from existing methods. Then
we learn about the early techniques in machine learning and
about a special kind of ensemble learning technique
as a case study. We follow this by learning about one
of the most important algorithms in classical machine
learning called the Perceptron. We end the review by
discussing how Multi-Layer Perceptrons have led to
the primitive forms of today’s modern deep learning
architectures.
3.2 What is Machine Learning?
We usually use the term Machine Learning in the
context of referring to a class of computer algorithms
or programs. If so, we need to ask ourselves the following question: what differentiates machine learning algorithms or programs from those that existed before? There exists a plethora of mathematical and statistical techniques from before the time Machine Learning became popular. Take, for example, the familiar least-squares fitting method. To answer this, I use two
well-known quotes in the domain of machine learning
or artificial intelligence.
“Machine learning is the subfield of computer
science that gives computers the ability to learn
without being explicitly programmed.” Arthur
Samuel.
“A computer program is said to learn from
experience E with respect to some class of tasks
T and performance measure P, if its performance
at tasks in T, as measured by P, improves with
experience E”. Tom M. Mitchell.
Combining both ideas, we see that, in comparison to algorithms which try to solve a problem using first principles or domain knowledge, machine learning algorithms rely more on the data (or instances) that we may already have access to.
Another key difference is the idea of
generalization to unseen data. This is where one first
encounters the idea of splitting data into the three sets,
namely training, validation and testing.
In machine learning, the objective is not merely to
minimize error on the observed dataset, but to minimize
error on unseen data drawn from the same underlying
distribution. Formally, while classical fitting procedures often minimize the empirical risk

\hat{R}(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i),

machine learning is fundamentally concerned with minimizing the expected risk

R(f) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\ell(f(x), y)\right],

where \mathcal{D} denotes the (unknown) data-generating distribution. Because \mathcal{D} is inaccessible in practice, machine learning algorithms must rely on finite samples to approximate this objective.
generalization leads naturally to one of the defining
methodological practices in machine learning: splitting
available data into separate subsets, commonly referred
to as training, validation, and testing sets. The training
set is used to fit model parameters, the validation set to
tune hyperparameters or select among models, and the
test set to provide an unbiased estimate of generalization
performance.
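A minimal sketch of such a three-way split, assuming NumPy only; the 70/15/15 fractions are conventional choices, not fixed rules.

```python
# Shuffle the indices of a dataset and split them into three subsets.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
idx = rng.permutation(n)                  # shuffle before splitting

n_train, n_val = int(0.7 * n), int(0.15 * n)
train_idx = idx[:n_train]                 # fit model parameters
val_idx = idx[n_train:n_train + n_val]    # tune hyperparameters
test_idx = idx[n_train + n_val:]          # unbiased final estimate
```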
In this sense, machine learning can be viewed as
an intersection of computer science, statistics, and
optimization, with a distinctive focus on scalable
algorithms, empirical evaluation, and predictive
accuracy. The field is less concerned with
discovering closed-form solutions derived from domain
assumptions, and more concerned with building systems
that adapt as data grows and improve performance
through experience.
This perspective provides the foundation for the
techniques discussed in the remainder of this chapter,
from early “classical” machine learning methods to
modern deep learning models.
3.3 Classical Machine Learning
Techniques
Early machine learning research focused on a
class of methods that are now commonly referred to
as classical or shallow machine learning techniques.
These methods typically rely on explicitly designed
feature representations and relatively simple model
architectures, in contrast to modern deep learning
approaches that learn hierarchical representations
automatically from raw data.
Some of the most influential classical machine
learning techniques include support vector machines,
decision trees, naive Bayes classifiers, and ensemble
methods such as boosting and bagging. Despite
their diversity, these techniques share a common goal:
learning a function that generalizes well from observed
examples to unseen data.
3.3.1 Gradient Boosting and XGBoost
Gradient Boosting generalizes the boosting
framework underlying AdaBoost by framing the
learning process as an optimization problem in function
space. Rather than explicitly adjusting instance weights,
Gradient Boosting sequentially fits weak learners to the
negative gradient of a specified loss function, allowing
greater flexibility in loss design and regularization
compared to AdaBoost. While AdaBoost is commonly
implemented using decision stumps, Gradient Boosting
typically employs shallow decision trees with controlled
depth, which can capture feature interactions.
Extensions such as XGBoost further enhance
Gradient Boosting through an optimized and scalable
implementation. XGBoost incorporates second-order
gradient information, sparsity-aware tree construction,
and approximate split-finding strategies, enabling
efficient training on large and sparse datasets.
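For illustration, a brief usage sketch with the xgboost package's scikit-learn-style interface; the synthetic data and parameter values are assumptions chosen only to show the knobs discussed above.

```python
# A usage sketch assuming the xgboost package is installed.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels

model = XGBClassifier(
    n_estimators=200,    # number of sequentially added trees
    max_depth=3,         # shallow trees that capture interactions
    learning_rate=0.1,   # shrinkage applied to each tree's contribution
)
model.fit(X, y)
pred = model.predict(X)
print("training accuracy:", (pred == y).mean())
```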
3.3.2 The Perceptron
The perceptron occupies a central place in the
history of machine learning, serving both as one of
the earliest learning algorithms and as the conceptual
foundation for modern neural networks. Introduced in
the late 1950s, the perceptron was among the first
models capable of learning directly from data through
an iterative update rule, rather than relying on fixed,
hand-crafted decision rules.
At its core, the perceptron is a binary linear classifier. Given an input vector x \in \mathbb{R}^d, the model computes a weighted sum of the inputs and applies a threshold to produce an output:

y = \mathrm{sign}(w^{\top} x + b),

where w \in \mathbb{R}^d is a vector of learnable weights and b \in \mathbb{R} is a bias term. This formulation makes explicit the perceptron’s role as a linear decision function: the model partitions the input space using a hyperplane defined by the parameters w and b.
What distinguishes the perceptron from earlier
linear models is not its functional form, but its learning
rule. Given a labeled training example (x_i, y_i) with y_i \in \{-1, +1\}, the perceptron updates its parameters only when it makes a mistake. The update rule can be written as

w \leftarrow w + \eta \, y_i x_i, \qquad b \leftarrow b + \eta \, y_i,

where \eta > 0 is the learning rate. This simple rule embodies the idea of learning from experience: the model adjusts its parameters incrementally in response to errors, improving performance over time.
Despite its well-known limitation to linearly separable problems, the perceptron remains critically
important as a conceptual building block. By
stacking multiple perceptrons and introducing nonlinear
activation functions, one obtains multi-layer perceptrons
(MLPs), which are capable of representing highly
complex functions. In this sense, modern deep learning
models can be viewed as extensions of the basic
perceptron idea, augmented with depth, nonlinearity,
and large-scale optimization techniques.
The perceptron thus serves as a bridge between
classical machine learning and deep learning. It retains
the interpretability and simplicity of early models
while introducing the core mechanisms—parameterized
functions and data-driven learning—that underpin
contemporary neural network architectures.
3.4 Deep Learning
A direct extension of the perceptron is the multi-
layer perceptron (MLP), which consists of multiple
layers of neurons arranged in a feedforward architecture.
Each layer applies an affine transformation followed by a
nonlinear activation function. For an MLP with
L
layers,
the forward computation can be written recursively as
h
(0)
= x
and
h
(l)
= σ W
(l)
h
(l1)
+b
(l)
, l = 1, . . . , L,
where
W
(l)
and
b
(l)
denote the weights and biases of
layer
l
, and
σ()
is a nonlinear activation function such
as the sigmoid, hyperbolic tangent, or rectified linear
unit.
The introduction of nonlinear activation functions
is essential. In the absence of nonlinearity, the
composition of multiple linear layers would reduce
to a single linear transformation, offering no additional
expressive power beyond that of a single-layer model.
Nonlinearity enables deep networks to represent
complex functions and capture intricate structure in
data.
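The recursion above translates almost line for line into code. A minimal NumPy sketch, assuming ReLU as the nonlinearity and arbitrary layer sizes (applying the activation at the output layer as well, purely for brevity):

```python
# A direct transcription of the MLP forward recursion.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, weights, biases):
    h = x                                # h^(0) = x
    for W, b in zip(weights, biases):    # layers l = 1, ..., L
        h = relu(W @ h + b)              # h^(l) = sigma(W^(l) h^(l-1) + b^(l))
    return h

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                     # input, two hidden, output
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=4), weights, biases))
```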
Training deep neural networks involves adjusting
their parameters to minimize a loss function that
quantifies the discrepancy between predictions and
ground-truth labels. Given a dataset \{(x_i, y_i)\}_{i=1}^{n} and a model f_{\theta} parameterized by \theta, the learning objective is typically written as

\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(f_{\theta}(x_i), y_i),

where \ell(\cdot, \cdot) denotes an appropriate loss function, such as mean squared error for regression or cross-entropy loss for classification.
The optimization of this objective is made feasible
by the backpropagation algorithm, which efficiently
computes gradients of the loss with respect to all
model parameters using repeated applications of the
chain rule. These gradients are then used by gradient-
based optimization methods, such as stochastic gradient
descent and its variants, to iteratively update the
parameters.
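In practice these pieces (loss, backpropagation, gradient step) come together in a short loop. A minimal sketch assuming PyTorch, with a toy model and random data standing in for a real task:

```python
# A minimal training loop: backpropagation computes the gradients,
# and stochastic gradient descent applies the parameter updates.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(64, 4)
y = torch.randint(0, 2, (64,))

for step in range(100):
    optimizer.zero_grad()          # clear gradients from the last step
    loss = loss_fn(model(X), y)    # empirical loss over the batch
    loss.backward()                # backpropagation: chain-rule gradients
    optimizer.step()               # gradient-based parameter update
```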
A defining characteristic of deep learning is its
ability to perform representation learning. Rather
than relying on manually engineered features, deep
models automatically learn representations that become
increasingly abstract across successive layers. Lower
layers typically capture simple patterns, while higher
layers encode more complex and task-specific features.
Depth plays a central role in this process. From a
theoretical perspective, deep architectures can represent
certain classes of functions more efficiently than shallow
ones. From a practical standpoint, depth allows models
to reuse and recombine intermediate representations,
improving both expressiveness and generalization.
Despite their success, deep learning models
introduce new challenges. They often require large
datasets, substantial computational resources, and
careful regularization to avoid overfitting. Techniques
such as dropout, weight decay, and batch normalization
are commonly employed to stabilize training and
improve generalization performance.
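For reference, this is how the three regularizers just mentioned typically appear in PyTorch code; the hyperparameter values are illustrative assumptions.

```python
# Dropout, weight decay, and batch normalization in a small model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 16),
    nn.BatchNorm1d(16),   # batch normalization stabilizes training
    nn.ReLU(),
    nn.Dropout(p=0.5),    # dropout randomly zeroes units during training
    nn.Linear(16, 2),
)
# Weight decay adds an L2 penalty on the parameters via the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
out = model(torch.randn(8, 4))    # a forward pass in training mode
```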
About the Author
Linn Abraham is a researcher in Physics,
specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN-based computer vision tools for the prediction of solar flares from images of the Sun, morphological classification of galaxies from optical image surveys, and radio galaxy source extraction from radio observations.
Part II
Astronomy and Astrophysics
Plasma Physics- Magnetosphere & Ionosphere
by Abishek P S
airis4D, Vol.4, No.3, 2026
www.airis4d.com
1.1 Introduction
The magnetosphere is a vast, invisible region
surrounding Earth where its magnetic field dominates
and interacts with charged particles from the Sun.
It is generated by the planet’s internal dynamo, the
movement of molten, electrically conducting material
in the outer core of the planet, which produces a magnetic
field resembling that of a bar magnet near the surface.
However, this field is distorted farther out by the
solar wind, a continuous stream of charged particles
emitted by the Sun. On the side facing the Sun, the
magnetosphere is compressed, while on the opposite
side it stretches into a long tail called the magnetotail.
Its boundaries include the bow shock, where the solar
wind first encounters resistance, and the magnetopause,
where the pressure of the solar wind balances Earth’s
magnetic field [1].
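That balance can be sketched quantitatively. Equating the solar-wind ram pressure with the magnetic pressure of the dipole field gives the standoff distance of the magnetopause (an order-of-magnitude textbook estimate, not a result from this article):

```latex
% Chapman–Ferraro pressure balance at the magnetopause (sketch;
% numerical factors of order unity are neglected).
\rho_{\mathrm{sw}} v_{\mathrm{sw}}^{2} \approx \frac{B(r)^{2}}{2\mu_{0}},
\qquad
B(r) = B_{0}\left(\frac{R_{E}}{r}\right)^{3}
\quad\Longrightarrow\quad
r_{\mathrm{mp}} \approx R_{E}
\left(\frac{B_{0}^{2}}{2\mu_{0}\,\rho_{\mathrm{sw}} v_{\mathrm{sw}}^{2}}\right)^{1/6}.
```

With typical solar-wind parameters this places the dayside magnetopause near ten Earth radii, consistent with the compressed geometry described above.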
This protective bubble plays a crucial role in
shielding Earth from harmful solar and cosmic radiation,
preventing atmospheric erosion, and making the
planet habitable. It also traps charged particles in
radiation belts and funnels some toward the poles,
producing auroras. The magnetosphere is highly
dynamic, constantly reshaped by solar activity, and
during strong solar storms it can be compressed,
leading to geomagnetic disturbances that affect satellites,
power grids, and communications. Other planets
also have magnetospheres, though their strength and
structure vary[2]. Jupiter’s magnetosphere is the largest
in the solar system, while Mars largely lacks one,
contributing to its thin atmosphere. In a broader cosmic
perspective, magnetospheres are fundamental structures
that govern how celestial bodies interact with their space
environment, acting as shields and mediators between
planetary systems and the energetic universe around
them.
1.2 The Magnetosphere as a Plasma
Environment
The magnetosphere, when viewed as a plasma
environment, is far more than just a magnetic bubble
around Earth. It is a dynamic system filled with
charged particles from both the solar wind and Earth’s
ionosphere. These plasma populations interact with the
magnetic field in complex ways, giving rise to processes
that drive space weather and influence conditions on
Earth. One important plasma population comes from
the solar wind. Protons and electrons from the Sun
continuously stream toward Earth, and some of them
penetrate the magnetosphere through a process called
magnetic reconnection at the magnetopause. This
entry point allows solar particles to mix with Earth’s
magnetic environment, fuelling storms and disturbances.
Alongside these, heavy ions such as oxygen (O⁺) and helium (He⁺) originating from Earth’s ionosphere are
lifted upward by electric fields and waves. Once
energized, they contribute to the overall plasma density
and dynamics of the magnetosphere, especially during
geomagnetic storms[3,4].
Magnetic reconnection is a key process in this
environment. It occurs when oppositely directed
magnetic field lines break and reconnect, releasing
stored magnetic energy and converting it into particle
kinetic energy. This drives large-scale convection of
plasma throughout the magnetosphere and triggers
substorms, which are sudden disturbances that can
intensify auroras and disrupt communication systems.
Reconnection is often described as the “engine of
magnetospheric dynamics” because it powers so many
of the system’s changes[3].
Alfvén waves are another crucial mechanism.
These are oscillations that travel along magnetic
field lines, carrying energy and field-aligned currents
between the magnetosphere and ionosphere. They act
as conduits for energy transfer, enabling the coupling
of Earth’s upper atmosphere with space. Through these
waves, disturbances in the magnetosphere can directly
influence ionospheric currents and auroral activity,
making them a vital link in the chain of magnetospheric
processes[5].
The ring current is a large-scale plasma structure
formed by energized ions circulating around Earth.
During geomagnetic storms, these ions build up
and create a current that encircles the planet. The
ring current contributes significantly to geomagnetic
disturbances by altering Earths magnetic field, which
can weaken the overall field strength temporarily[6].
This weakening is one of the hallmarks of geomagnetic
storms and has practical consequences, such as affecting
satellite operations and navigation systems. Magnetic
reconnection, Alfvén waves, and ring current formation
are central to its behaviour, making the magnetosphere
a fascinating and complex plasma laboratory in space.
1.3 The Ionosphere as a Plasma
Source and Sink
The ionosphere is a dense plasma layer of Earth's
upper atmosphere, created when solar ultraviolet (UV)
and X-ray radiation ionizes neutral atoms and molecules.
This region acts both as a source of plasma for the
magnetosphere and as a sink where magnetospheric
particles deposit their energy. Its dual role makes it a
critical part of the coupled Earth–space environment.
As a plasma source, the ionosphere provides
heavy ions such as oxygen (O⁺), which are accelerated
upward by electric fields, wave–particle interactions,
and heating processes. These ions escape into the
magnetosphere, where they contribute to plasma
populations in the plasma sheet and ring current.
During geomagnetic storms, this outflow intensifies,
and O⁺ ions become a dominant component, altering the
dynamics of the magnetosphere. In addition to heavy
ions, lighter ions like hydrogen (H⁺) and helium (He⁺)
continuously flow outward in what is known as the
polar wind. This steady stream of light ions represents
a constant supply of plasma to the magnetosphere, even
under quiet conditions.
As a plasma sink, the ionosphere absorbs energy
and particles from the magnetosphere. Precipitating
electrons and ions collide with neutral atoms in the
upper atmosphere, producing auroras, the shimmering
lights seen near polar regions. These collisions also
enhance ionization in localized regions, modifying
the conductivity of the ionosphere and influencing
currents that flow between the magnetosphere and
Earth. This feedback loop ensures that the ionosphere
is not only a supplier of plasma but also a recipient of
magnetospheric energy[7].
Plasma transport processes highlight the
ionosphere's dynamic role. The polar wind represents a
continuous, low-level outflow of light ions, maintaining
a background plasma supply. In contrast, storm-time
outflows are episodic and powerful, with enhanced
O⁺ fluxes dominating the plasma sheet and ring
current during geomagnetic storms. These storm-
driven outflows significantly reshape the magnetosphere,
fuelling geomagnetic disturbances and contributing to
the weakening of Earth's magnetic field during storms.
1.4 Coupling Mechanisms
Field-aligned currents (FACs) are one of the
most important coupling mechanisms between the
magnetosphere and ionosphere. These electric currents
flow directly along magnetic field lines, linking the
dynamics of the magnetosphere to the conductivity of
the ionosphere. When solar wind and magnetospheric
processes drive changes in the magnetic field, FACs
carry those changes down into the ionosphere, where
they influence ionospheric currents and heating[8]. In
this way, FACs act as the “wires” of the system, ensuring
that energy and momentum are transferred between
space and Earth's upper atmosphere.
Auroral acceleration regions represent another
key coupling process. In these regions, Alfvénic
turbulence and parallel electric fields accelerate
electrons downward along magnetic field lines. As
these electrons collide with neutral atoms in the upper
atmosphere, they produce the bright auroral arcs that
we see near the poles[5]. The acceleration process is
not uniform; instead, it creates structured, shimmering
displays that reflect the turbulent nature of the plasma
environment. These regions are therefore critical in
transforming magnetospheric energy into visible auroral
phenomena, while also enhancing ionization in the
ionosphere.
The pressure cooker effect is a more subtle but
equally important mechanism. Downward currents
compress plasma at low altitudes, creating localized
regions of high pressure. This compression energizes
heavy ions such as O⁺, driving them upward into
the magnetosphere. In effect, the ionosphere acts
like a plasma reservoir that can be “pumped” by
magnetospheric currents. During geomagnetic storms,
this effect becomes especially pronounced, with large
fluxes of O⁺ ions escaping upward to dominate the
plasma sheet and ring current[6]. This upward transport
of ions alters the composition and dynamics of the
magnetosphere, feeding back into the larger system.
Together, these coupling mechanisms illustrate
how the magnetosphere and ionosphere are not isolated
systems but deeply interconnected. FACs provide the
electrical pathways, auroral acceleration regions convert
energy into light and ionization, and the pressure
cooker effect drives plasma upward. Each process
contributes to the continuous exchange of energy and
particles, making Earth’s near-space environment a
highly dynamic and interactive plasma system.
1.5 Energy Flow
Solar wind energy enters Earth’s magnetosphere
primarily through magnetic reconnection at the dayside
magnetopause. This process allows the solar wind’s
magnetic field to merge with Earth's, opening pathways
for energy and plasma to flow into the magnetosphere.
Once inside, this energy does not remain localized
but is redistributed through several interconnected
mechanisms that shape the dynamics of Earth's near-
space environment[2].
One major redistribution process is
magnetospheric convection, which refers to the
large-scale circulation of plasma throughout the
magnetosphere. Driven by reconnection, plasma flows
from the dayside magnetopause into the magnetotail
and then returns toward Earth[1]. This circulation
pattern transports energy and particles across vast
distances, fuelling substorms and setting the stage for
auroral activity. Convection ensures that solar wind
energy is spread throughout the magnetosphere rather
than concentrated in one region.
Wave–particle interactions provide another
pathway for energy redistribution. Waves such as Alfvén
waves and whistler-mode waves propagate through the
magnetosphere, transferring energy to charged particles.
Alfvén waves, in particular, carry field-aligned currents
that couple the magnetosphere to the ionosphere,
enabling energy to flow downward. Whistler-mode
waves can scatter energetic electrons, redistributing
their energy and sometimes precipitating them into
the atmosphere[5]. These interactions are essential
for regulating particle populations and maintaining the
dynamic balance of the magnetosphere.
Particle precipitation represents a direct way in
which magnetospheric energy is deposited into Earth's
atmosphere. Electrons accelerated along magnetic
field lines collide with neutral atoms in the upper
atmosphere, producing auroras. This process not
only creates the spectacular light displays near the
poles but also enhances ionization in the ionosphere,
altering its conductivity and influencing global current
systems. Through precipitation, magnetospheric
energy is transformed into both visual phenomena and
atmospheric changes.
1.6 Magnetosphere & Ionosphere in
Research Perspective
Plasma physicists regard the magnetosphere–
ionosphere system as a natural laboratory for studying
universal plasma processes, because it exhibits many of
the same behaviours seen in astrophysical environments
across the universe. One of the most important aspects
is turbulence. In the magnetosphere, plasma flows
and magnetic reconnection events generate turbulent
structures, where energy cascades from large scales
down to smaller ones. This turbulence is similar to
what occurs in solar corona plasmas, astrophysical jets,
and even interstellar medium turbulence, making near-
Earth space an accessible place to study these otherwise
remote phenomena.
Instabilities are another key feature. The
magnetosphere is full of plasma instabilities, such as
the Kelvin–Helmholtz instability at the magnetopause,
which develops when solar wind flows shear against
Earth's magnetic boundary[9]. Other instabilities, like
those in the plasma sheet, can trigger substorms and
auroral activity. These processes mirror instabilities
found in fusion devices and astrophysical disks, offering
scientists a chance to test theories of how plasmas
behave when disturbed.
Nonlinear coupling is also central to this
system. The magnetosphere and ionosphere are tightly
linked through field-aligned currents, wave–particle
interactions, and feedback loops. Small changes in one
region can lead to large-scale responses in the other,
demonstrating nonlinear dynamics. This coupling is a
microcosm of the complex interactions seen in larger
astrophysical systems, such as star–planet interactions
or accretion flows around black holes.
What makes near-Earth space especially valuable
is the ability to obtain in situ measurements. Satellites
and ground-based instruments can directly measure
plasma density, temperature, particle fluxes, and wave
spectra. These observations provide real-time data
that allow scientists to test and refine theories of
plasma circulation, turbulence, and energy transfer.
Unlike distant astrophysical plasmas, which can only
be observed indirectly through light or radiation, the
magnetosphere–ionosphere system offers a hands-on
testbed for plasma physics.
In essence, the magnetosphere–ionosphere system
demonstrates turbulence, instabilities, and nonlinear
coupling in ways that are universal to plasma
environments across the cosmos. By studying it with
direct measurements, plasma physicists gain insights
that apply not only to Earth's space weather but also to
the broader universe, from solar flares to galactic jets.
References
[1] Borovsky, J. E., & Valdivia, J. A. (2018). “The Earth's magnetosphere: A systems science overview and assessment”. Surveys in Geophysics, 39(5), 817-859.
[2] Cahill, L. J. (1965). “The magnetosphere”. Scientific American, 212(3), 58-71.
[3] Cowley, S. W. H. (1991). “The plasma environment of the Earth”. Contemporary Physics, 32(4), 235-250.
[4] Pfaff Jr, R. F. (2012). “The near-Earth plasma environment”. Space Science Reviews, 168(1), 23-112.
[5] Chen, L., & Zonca, F. (2016). “Physics of Alfvén waves and energetic particles in burning plasmas”. Reviews of Modern Physics, 88(1), 015008.
[6] Daglis, I. A. (2006). “Ring current dynamics”. Space Science Reviews, 124(1), 183-202.
[7] Chappell, C. R., Moore, T. E., & Waite Jr, J. H. (1987). “The ionosphere as a fully adequate source of plasma for the Earth's magnetosphere”. Journal of Geophysical Research: Space Physics, 92(A6), 5896-5910.
[8] Lühr, H., & Kervalishvili, G. (2021). “Field-aligned currents in the magnetosphere–ionosphere”. Magnetospheres in the Solar System, 193-205.
[9] Hasegawa, A. (1971). “Plasma instabilities in the magnetosphere”. Reviews of Geophysics, 9(3), 703-772.
About the Author
Abishek P S is a Research Scholar in the Department of Physics, Bharata Mata College (Autonomous), Thrikkakara, Kochi. He pursues research in the field of theoretical plasma physics. His work mainly focuses on nonlinear wave phenomena in space and astrophysical plasmas.
X-ray Astronomy: Theory
by Aromal P
airis4D, Vol.4, No.3, 2026
www.airis4d.com
2.1 Introduction
In our previous article, we discussed the various
sources of cosmic X-rays and the mechanisms that
produce them. We will continue with a series exploring
these different sources in detail. As the author
specializes in X-ray binaries, we will begin with an
in-depth look at their various classes before moving on
to other topics.
As the name suggests, X-ray binaries are double-
star systems that emit electromagnetic waves in the
X-ray region. To put this in perspective, consider our
own Sun. It is an average main-sequence star whose
thermal emission peaks in the visible range, with a
surface temperature of around 5,500 K. Producing
thermal X-rays, however, requires temperatures of
millions of Kelvin! It feels unbearably hot in summer,
even though we are 8.3 light-minutes away from the
Sun. So, imagine the intensity of an environment
with temperatures reaching millions or even billions of
Kelvin! X-ray binaries are home to these incredibly
energetic phenomena.
What is the driving force behind this? Surprisingly,
it is the weakest of all fundamental forces—gravity. This
is the remarkable story of how gravity, the universe’s
most delicate force, gives rise to some of its most
powerful and energetic displays.
2.2 X-ray Binaries
Gravity achieves this spectacular heating through
a mechanism called accretion, the continuous flow of
matter from a companion star into the deep gravitational
well of a dense, compact object. Just as water releases
energy as it plunges down a waterfall, the gas in these
systems converts its enormous gravitational potential
energy into intense heat and radiation as it falls.
To understand the math behind this sheer power, we
can look at the fundamental formula for the luminosity
(or power output), L, generated by this accretion:
L = GMṀ/R. (2.1)
In this equation, G represents the gravitational constant, M is the mass of the compact object, Ṁ is the mass accretion rate (how much matter falls in per second), and R is the radius of the compact object. Notice that the radius R is in the denominator. Because compact objects are incredibly small, the denominator is tiny, making the resulting X-ray luminosity very large.
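To get a feel for the numbers, equation (2.1) can be evaluated directly. The short Python sketch below uses assumed, textbook-typical values for a neutron star (mass 1.4 M☉, radius 10 km) accreting a billionth of a solar mass per year; the inputs are illustrative, not taken from any particular system.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
L_SUN = 3.828e26       # solar luminosity, W
YEAR = 3.156e7         # seconds per year

def accretion_luminosity(M, Mdot, R):
    """Equation (2.1): L = G * M * Mdot / R, the gravitational power
    released by matter falling onto a compact object of radius R."""
    return G * M * Mdot / R

# Assumed neutron-star parameters: 1.4 M_sun, 10 km radius,
# accreting 1e-9 solar masses per year.
L = accretion_luminosity(1.4 * M_SUN, 1e-9 * M_SUN / YEAR, 10e3)
print(f"L ~ {L:.1e} W ~ {L / L_SUN:.0f} L_sun")

Even this modest accretion rate yields of order 10³⁰ W, several thousand times the Sun's total luminosity, almost all of it emerging as X-rays.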
We can also approximate this power using Albert Einstein's famous equation for rest-mass energy, E = mc². The luminosity can be expressed as a fraction of the total energy the falling matter possesses: L = ηṀc², where η represents the efficiency of the energy conversion, η = GM/(Rc²). For nuclear fusion, the very process that makes our Sun shine, this efficiency is a mere 0.001 to 0.01 (only 0.1% to 1% of the mass is converted into energy). In contrast, the efficiency η for matter accreting onto a neutron star is roughly 0.1 (that is, 10%), and for a black hole it can reach an astonishing 0.42 (42%). This proves that gravitational accretion onto compact objects is the most efficient mechanism known in the universe for converting matter into energy, vastly outperforming nuclear reactions. Just a handful of matter can produce an X-ray luminosity 10,000 times greater than the total energy output of the Sun at all wavelengths.
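The efficiencies quoted above follow from the same constants. The sketch below evaluates the simple Newtonian estimate η = GM/(Rc²) for an assumed white dwarf and neutron star. Note that this crude estimate gives roughly 0.2 for a 10 km neutron star, while the commonly quoted 10% figure comes from more careful relativistic treatments, and the 42% value arises from orbits near a maximally spinning black hole, which this formula cannot capture.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def efficiency(M, R):
    """Newtonian accretion efficiency estimate: eta = G M / (R c^2)."""
    return G * M / (R * C**2)

# Assumed representative compact objects (mass in kg, radius in metres):
for name, M, R in [("white dwarf ", 1.0 * M_SUN, 7.0e6),
                   ("neutron star", 1.4 * M_SUN, 1.0e4)]:
    print(f"{name}: eta ~ {efficiency(M, R):.4f}")

The three-orders-of-magnitude gap between the two is why accreting white dwarfs are so much fainter in X-rays than neutron-star binaries, as noted below.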
What is at the core of these extreme systems
that generate such a strong gravitational pull? The
engines behind them are compact objects, the collapsed remnants of former stars, the ghosts of dead stars. These remnants come in three general forms, depending on the original mass of the dying star. It's important to
outline these forms to provide a complete discussion of
the topic.
2.2.1 White Dwarfs
When an average star, similar to our Sun, exhausts
its nuclear fuel, its core can no longer generate the
outward pressure needed to support its own weight.
The core collapses until it is halted by the quantum
resistance of crowded electrons, a phenomenon known
as degenerate electron pressure. The resulting object is
a white dwarf. A white dwarf packs roughly the mass of the Sun (1 M☉) into a sphere about the size of the Earth, with a radius of several thousand kilometres. White dwarfs are strictly limited in mass and cannot exceed 1.44 M☉, a boundary known as the
Chandrasekhar limit. Systems with accreting white
dwarfs are called cataclysmic variables. Because white
dwarfs have a relatively large radius compared to other
compact objects, their gravitational wells are shallower,
making their accretion process much less luminous in
X-rays.
2.2.2 Neutron Stars
When a much more massive star (greater than
10 M
) reaches the end of its life, its core collapses
so violently that even electron pressure cannot stop the
crush of gravity. In a fraction of a second, electrons and
protons are squeezed together to form a fluid of neutrons.
This triggers a catastrophic supernova explosion that
blows away the outer star, leaving behind a neutron
star. A neutron star is a bizarre state of matter; it compresses a mass equivalent to the Sun's into a sphere with a radius of just 10 to 15 kilometers, roughly the size of a small city. Its density is comparable to that of an atomic nucleus, so dense that a tablespoon of its matter would weigh billions of tonnes. Because its radius R is so tiny, the accretion formula L = GMṀ/R yields enormous luminosities: infalling matter reaches immense velocities, heating up to tens of millions of degrees and radiating copious amounts of X-rays. Typical neutron stars have a mass of around 1.4 M☉, with theoretical maximum limits near 3 M☉.
2.2.3 Black Holes
For the most massive stars in the universe (> 25 M☉), the core collapse is so profound that even the
rigid pressure of compressed neutrons cannot halt the
crushing force of gravity. The core collapses entirely
upon itself, creating a black hole. A black hole is
characterized by an event horizon—a boundary beyond
which the gravitational pull is so intense that nothing,
not even light, can escape. While black holes do not
have a hard physical surface like a neutron star, we
can still observe them as brilliant X-ray sources. The
intense X-ray emission we see actually comes from the
superheated accretion disk of matter swirling frantically
just outside the event horizon before it crosses the
boundary. Black holes also come in different sizes; here we are discussing stellar-mass black holes in X-ray binaries, which typically have masses ranging from greater than 3 M☉ up to about 15 M☉. These should not be confused with the supermassive black holes found at the centres of galaxies, which we will discuss in upcoming articles.
Having discussed the compact objects, it is a good time to look at the different classes of X-ray binaries, usually called XRBs. Based on the mass of the companion star, X-ray binaries are classified into two types: low-mass X-ray binaries (LMXBs) and high-mass X-ray binaries (HMXBs).
2.2.4 Low-mass X-ray Binaries
In an LMXB, the companion star supplying the
fuel is an older, late-type star that is generally less
massive than our own Sun, or even a tiny degenerate
dwarf. Because these companion stars are much older
and have long lifespans, LMXBs are typically found
residing in regions populated by old stars, such as the
central Galactic Bulge of the Milky Way and the dense,
crowded cores of globular clusters.
The mass transfer mechanism in an LMXB relies
on the evolutionary expansion of this small companion
star. As the star ages, it swells outward until it completely fills its gravitational boundary, a pear-shaped equipotential surface known as a Roche lobe. Once it fills this lobe, the star's outer gaseous layers physically spill over a gravitational saddle point between the two stars, called the inner Lagrangian point (L1), and fall
toward the compact object. Because the two stars
are orbiting each other, this transferred matter carries
significant orbital angular momentum. Therefore, it
cannot plummet straight down onto the compact object.
Instead, it spirals inward to form a flat, superheated,
rapidly spinning structure called an accretion disk.
Internal friction and viscosity within this disk heat
the gas to millions of degrees, producing the bright
X-rays we observe before the matter makes its final
plunge.
2.2.5 High-mass X-ray Binaries
In these systems, the companion star is a massive,
young, early-type star (such as an O or B spectral type)
that can weigh 10 times the mass of the Sun or more.
Because these massive stars burn through their nuclear
fuel rapidly and live very short, furious lives, HMXBs
are typically found in the spiral arms of our galaxy, close
to the stellar nurseries where they recently formed.
Unlike the older, smaller stars in LMXBs that
gently spill their mass over a gravitational boundary,
these massive hot stars naturally blow off a highly energetic stellar wind of gas, similar to the wind from our own Sun but far more powerful. This wind is driven outward by the intense pressure of the star's own ultraviolet radiation,
and it carries off a huge amount of material in all
directions. As the compact object orbits through this
dense, outflowing hurricane of gas, its intense gravity
acts like a cosmic vacuum cleaner. It captures a fraction
of the passing gas in a process mathematically described
as Bondi-Hoyle accretion. This wind-fed material falls
onto the compact object, generating a powerful shock
front and emitting X-rays as it crashes down. In some
specific HMXBs, especially those containing rapidly
spinning ”Be” stars, the compact object travels on a
highly elliptical orbit and periodically plunges through
a dense equatorial disk of ejected material surrounding
the massive star, causing bright, repeating transient
outbursts of X-ray energy.
Is accretion the only process responsible for
producing X-rays in X-ray binary systems? Is there
something more? To answer this question, we need to
discuss these systems in detail, which we will continue
in the upcoming article.
References:
Frederick D. Seward & Philip A. Charles, Exploring the X-ray Universe.
Keith A. Arnaud, Randall K. Smith, & Aneta Siemiginowska, Handbook of X-ray Astronomy.
H. Schatz & K. E. Rehm, “X-ray binaries”.
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on studies of thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
Giant Molecular Clouds: Structure,
Evolution, and Their Role in Star Formation
by Sindhu G
airis4D, Vol.4, No.3, 2026
www.airis4d.com
3.1 Introduction
The space between stars in galaxies is not empty
but filled with gas, dust, cosmic rays, and magnetic
fields, collectively known as the interstellar medium
(ISM). Among its various components, molecular
clouds are the densest and coldest regions. Giant
Molecular Clouds (GMCs) are the largest of these
structures and serve as stellar nurseries where new stars
and planetary systems are formed.
In spiral galaxies such as the Milky Way, GMCs
are primarily located along spiral arms. These regions
experience enhanced gas compression, which promotes
the formation of molecular hydrogen. GMCs typically
contain tens of thousands to several million times the
mass of the Sun and extend over distances ranging from
a few tens to a few hundred parsecs. Their extremely
low temperatures, generally between 10 and 20 Kelvin,
create ideal conditions for gravitational collapse and
star formation.
3.2 Chemical Composition and
Physical Characteristics
3.2.1 Molecular Content
The dominant component of GMCs is molecular
hydrogen, which constitutes roughly three-quarters of
their total mass. Helium accounts for most of the
remaining mass, while heavier elements and dust grains
contribute a small but important fraction.
Although molecular hydrogen is the most abundant
molecule, it is difficult to observe directly because it
does not emit radiation efficiently at low temperatures.
Astronomers therefore use carbon monoxide as a
tracer molecule. Emission from carbon monoxide
allows researchers to estimate the amount of molecular
hydrogen present within a cloud.
Dust grains play an essential role in molecular
cloud chemistry. They shield molecules from
destructive ultraviolet radiation and provide surfaces on
which molecular hydrogen can form. In addition, dust
grains emit infrared radiation, which helps regulate the
thermal balance of the cloud.
3.2.2 Temperature and Density
GMCs are extremely cold compared to most
astrophysical environments. Their temperatures
typically range from 10 to 20 Kelvin. At such low
temperatures, thermal motion of particles is weak,
allowing gravity to influence the cloud’s evolution more
effectively.
The average particle density in a GMC is relatively
modest compared to terrestrial standards, but it is
significantly higher than in the surrounding ISM. Within
these clouds, denser regions known as clumps and cores
can reach much higher densities. These dense cores are
the direct birthplaces of stars.
3.2.3 Mass and Size
GMCs are among the largest coherent structures
in galaxies. Their masses range from ten thousand to
several million solar masses. Despite their enormous
mass, they remain gravitationally delicate structures
due to their low temperatures and complex internal
dynamics.
Observations show that many GMCs share similar
surface densities, suggesting that common physical
processes govern their formation and evolution.
3.3 Formation of Giant Molecular
Clouds
The formation of GMCs is closely linked to large-
scale galactic processes.
3.3.1 Spiral Density Waves
In spiral galaxies, large-scale density waves move
through the galactic disk. As gas enters these regions,
it becomes compressed. This compression increases
density, allowing atomic hydrogen to transform into
molecular hydrogen. Over time, these compressed
regions grow into giant molecular clouds.
3.3.2 Gravitational Instabilities
If the gas density in a galactic disk becomes
sufficiently high, gravitational forces can cause it
to fragment into large molecular complexes. These
instabilities help convert diffuse interstellar gas into
dense molecular clouds.
3.3.3 Stellar Feedback and Cloud Collisions
Massive stars influence their surroundings through
stellar winds and supernova explosions. These energetic
events can sweep up surrounding gas into expanding
shells. As these shells cool and accumulate material,
they may fragment and form new molecular clouds.
Additionally, collisions between smaller clouds can
compress gas and contribute to GMC formation.
3.4 Internal Structure of GMCs
Modern observations reveal that GMCs are far
from uniform. Instead, they exhibit a highly complex
and hierarchical structure.
3.4.1 Filaments
Filamentary structures are commonly observed
within molecular clouds. These elongated structures
often extend for several parsecs and contain chains
of dense cores. Filaments appear to be fundamental
building blocks of molecular clouds and play a crucial
role in star formation.
3.4.2 Clumps and Cores
Within filaments, material gathers into clumps and
cores. Clumps are intermediate-scale dense regions
that may form clusters of stars. Cores are smaller and
denser regions that typically give rise to individual stars
or small multiple systems.
The hierarchical organization of filaments, clumps,
and cores reflects the combined influence of turbulence
and gravity within the cloud.
3.5 Cloud Dynamics: Turbulence and Magnetic Fields
3.5.1 Turbulence
Observations show that gas motions within GMCs
are highly turbulent and often supersonic. Turbulence
serves two important roles. On large scales, it
can support the cloud against complete gravitational
collapse. On smaller scales, it creates localized density
enhancements that promote star formation.
However, turbulence dissipates energy over time
and must be continuously replenished. Large-scale
galactic motions and feedback from young stars are
likely sources of this energy.
3.5.2 Magnetic Fields
Magnetic fields are present throughout the
interstellar medium and permeate GMCs. These fields
can provide additional support against gravitational
collapse. In some cases, magnetic forces influence the
orientation of filaments and regulate how material flows
within the cloud.
The balance between gravity, turbulence, and
magnetic fields determines the efficiency and rate of
star formation.
3.6 Star Formation in Giant
Molecular Clouds
Star formation begins within dense cores when
gravity overcomes internal support mechanisms. As
a core collapses, material accumulates at the center,
forming a protostar. Surrounding material may form
an accretion disk, from which planets can eventually
emerge.
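A rough way to quantify when gravity overcomes thermal support is the classical Jeans mass, the mass above which a clump of given temperature and density must collapse. The Python sketch below evaluates it for the core conditions quoted earlier in this article; the inputs (T = 10 K, n = 10⁴ molecules per cm³, mean molecular weight 2.33) are assumed illustrative values.

import math

K_B = 1.381e-23     # Boltzmann constant, J/K
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.673e-27     # hydrogen atom mass, kg
M_SUN = 1.989e30    # solar mass, kg

def jeans_mass(T, n_per_cm3, mu=2.33):
    """M_J = (5 k T / (G mu m_H))**1.5 * (3 / (4 pi rho))**0.5,
    with mu = 2.33 the mean molecular weight of H2 + He gas."""
    rho = n_per_cm3 * 1e6 * mu * M_H          # mass density, kg/m^3
    thermal = (5 * K_B * T / (G * mu * M_H)) ** 1.5
    geometry = math.sqrt(3 / (4 * math.pi * rho))
    return thermal * geometry

# Assumed dense-core conditions: T = 10 K, n = 1e4 molecules/cm^3.
print(f"Jeans mass ~ {jeans_mass(10.0, 1e4) / M_SUN:.1f} solar masses")

The result, a few solar masses, is comparable to the masses of individual stars, consistent with dense cores being the direct birthplaces of single stars or small multiple systems.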
Most stars form in clusters rather than in isolation.
Massive GMCs can produce entire stellar associations
containing hundreds or thousands of stars.
Despite their large mass, GMCs convert only a
small fraction of their gas into stars. Radiation, stellar
winds, and eventual supernova explosions from massive
stars disperse the surrounding gas, limiting further star
formation.
3.7 Lifetimes and Evolution
GMCs are not permanent structures. Their
lifetimes are typically estimated to be between 10 and
30 million years. During this time, they evolve through
several stages:
1. Formation from diffuse interstellar gas.
2. Growth and fragmentation into filaments and cores.
3. Active star formation.
4. Dispersal by stellar feedback.
The dispersal phase returns enriched material to the
ISM, contributing to the next generation of molecular
clouds.
3.8 Observational Techniques
Because GMCs are cold and dust-rich, they are
primarily studied at radio, infrared, and submillimeter
wavelengths.
Radio observations of carbon monoxide trace the
distribution of molecular gas. Infrared observations
reveal embedded young stellar objects and warm dust
emission. Submillimeter telescopes provide high-
resolution maps of dense cores and filaments.
Advances in observational facilities have
significantly improved our understanding of molecular
cloud structure and star formation processes.
3.9 Role in Galactic Evolution
GMCs regulate the star formation rate within
galaxies. The continuous cycle of cloud formation, star
birth, and cloud dispersal drives chemical enrichment
and influences galactic structure.
Massive stars formed within GMCs eventually
explode as supernovae, injecting energy and heavy
elements into the ISM. This feedback process shapes
the future evolution of galaxies.
3.10 Conclusion
Giant Molecular Clouds are fundamental
components of galaxies and the principal sites of star
formation. Their low temperatures, large masses, and
complex internal structures create ideal environments
for gravitational collapse and stellar birth. Although
significant progress has been made in understanding
their formation and evolution, ongoing research
continues to refine our knowledge of these remarkable
cosmic structures.
Understanding GMCs provides essential insight
into the origin of stars, planetary systems, and the
broader evolution of galaxies.
References:
Molecular Clouds
Protostars and Planets VI
Theory of Star Formation
Molecular Clouds in the Milky Way
Star formation in molecular clouds: observation
and theory
Star Formation in the Milky Way and Nearby
Galaxies
Formation of Molecular Clouds and Global
Conditions for Star Formation
About the Author
Sindhu G is a research scholar in the
Department of Physics at St. Thomas College,
Kozhencherry. She is doing research in Astronomy
& Astrophysics, with her work primarily focusing
on the classification of variable stars using different
machine learning algorithms. She is also involved
in period prediction for various types of variable
stars—especially eclipsing binaries—and in the study
of optical counterparts of X-ray binaries.
Part III
Biosciences
Kombucha: “The Symbiotic Elixir”
A Comprehensive View on its Bioprocessing
by Aengela Grace Jacob
airis4D, Vol.4, No.3, 2026
www.airis4d.com
1.1 Introduction
Kombucha, often referred to as the immortal health
elixir in ancient Chinese texts, is a fermented tea
beverage that has exploded in popularity across the
globe. Far more than just a trendy drink, Kombucha
is a complex, living ecosystem captured in a bottle. In
small-scale and home brewing, kombucha is typically
made in glass jars topped with fabric. Black or green
tea leaves are steeped in hot water with sugar, then
removed. When the sweetened tea has cooled, it is
mixed with a bit of kombucha from a previous batch
to make the liquid more acidic. A gelatinous mat of
symbiotic culture of bacteria and yeast (SCOBY) is
then added, and the brew is covered with a tight-weave
fabric or paper coffee filter and left to ferment at room
temperature for 7–30 days. This article delves into
the microbial composition, the remarkable symbiotic
culture that creates it, the intricate fermentation process,
and its powerful impact on human gut health.
1.2 SCOBY: The Heart of the Brew
The core of Kombucha is the SCOBY (Symbiotic
Culture of Bacteria and Yeast). Visually, it is a thick,
rubbery, beige disc. While often called a Kombucha
mushroom, it is actually a dense structure of cellulose
fibers created by bacteria. The living components of
SCOBY can vary widely but generally include strains
of Saccharomyces cerevisiae and other yeasts, as well
as a number of bacteria, including Gluconacetobacter
xylinus.
Figure 1: Homemade kombucha. Image Courtesy: https://www.healthygreenkitchen.com/how-to-make-kombucha-at-home/
Fresh or dehydrated SCOBY can be bought
from suppliers, or a “mother” can be taken from a
previous batch of kombucha. In the fermentation
process, the alcohols produced by the yeasts are
converted by the bacteria into organic acids. The final
kombucha product contains vitamin C, vitamins B6
and B12, thiamin, acetic acid, and lactic acid, as well
as small amounts of sugar and ethanol. The SCOBY
acts as the living engine that transforms sweetened tea
into an elixir, serving as both a fermenting agent and a
protective seal for the liquid.
1.3 Microbial Contents: The Living
Force
A single bottle of Kombucha is teeming with
billions of microorganisms.
Figure 2: SCOBY. Image Courtesy: https://escarpmentlabs.com/blogs/resources/the-road-to-scoby
Figure 3: SCOBY growth. Image Courtesy: https://www.reddit.com/r/Kombucha/comments/glo9ez/scoby_growth_experiment_varying_steep_times/
Figure 4: Magnification of SCOBY at 400X by Benjamin Wolfe. Image Courtesy: https://microbialfoods.org/science-digested-microbial-diversity-kombucha/
While the exact microbiome varies, the core players include:
Acetic Acid Bacteria (AAB): These produce the
defining acetic acid (vinegar) and build the SCOBY’s
physical structure.
Yeast: These convert sugar into ethanol and carbon dioxide (CO₂), providing the fuel for the bacteria.
Lactic Acid Bacteria (LAB): These contribute
to the sour flavor and enhance the drink’s probiotic
properties.
1.4 The Fermentation Mechanism
The creation of Kombucha is a synchronized
chemical dance consisting of two primary phases:
Phase 1: Primary Fermentation (Aerobic)
In an open container, yeast consumes sugar to
produce ethanol. Simultaneously, the Acetic Acid
Bacteria consume that ethanol and oxygen from the air
to produce acetic acid and gluconic acid. This stage
creates the tart flavor profile.
Phase 2: Secondary Fermentation (Anaerobic)
The liquid is bottled and sealed. Without oxygen,
the yeast continues to ferment the remaining sugar,
but the resulting CO₂ is trapped, creating natural
carbonation.
The fermentation mechanism operates through
a complex, synergistic interaction between various
microorganisms.
Figure 5: Fermentation Mechanism. Image Courtesy: https://www.researchgate.net/figure/Types-of-fermentation-in-Kombucha-Created-with-BioRendercom_fig1_365463218
Initial hydrolysis (yeast): yeast (such as Saccharomyces) produces the enzyme invertase, which breaks down sucrose into glucose and fructose. Second, in alcoholic fermentation, yeast metabolizes the glucose and fructose to produce ethanol and carbon dioxide (CO₂), contributing to the beverage's carbonation. Third, in acidic fermentation, bacteria such as acetic acid bacteria (AAB, e.g., Acetobacter and Gluconobacter) and lactic acid bacteria (LAB) consume the ethanol and sugars to produce acetic acid, lactic acid, and gluconic acid. Cellulose formation occurs when the AAB use the glucose to synthesize a thick cellulose biofilm, the SCOBY pellicle, which acts as a barrier and enables oxygen exchange.
1.5 Major Benefits on Gut Health
The most scientifically backed benefits of Kombucha revolve around the gastrointestinal system, starting with probiotic diversity: the drink helps restore natural balance to the gut flora, aiding digestion and immune function. The probiotics (such as Lactobacillus
and Bifidobacterium) can help restore balance to the
microbiome by suppressing the growth of harmful
bacteria. Fermented foods like kombucha are
recommended to help re-establish healthy gut bacteria
after a course of antibiotics disrupts the natural flora.
Studies suggest that regular kombucha consumption
may modulate the gut microbiota in ways that
improve metabolic health, particularly in individuals
with obesity, by encouraging beneficial bacteria like
Akkermansiaceae. Organic acids like acetic and
gluconic acids inhibit harmful pathogens and act as
postbiotics to support the gut lining. It provides an
antioxidant boost through fermentation as it increases
the bioavailability of tea polyphenols, which combat
inflammation in the digestive tract.
1.6 Conclusion
Kombucha is a testament to the power of microbial
symbiosis. Through the collaborative effort of bacteria
and yeast, a simple pot of sweet tea is transformed
into a potent tool for human wellness. By providing a
rich source of probiotics and antioxidants, it remains a
timeless path toward internal balance.
REFERENCES
https://www.sciencedirect.com/science/article/pii/S2772753X22000144
https://thegoodbug.com/blogs/news/kombucha-fermentation-benefits-process
https://www.sciencedirect.com/science/article/pii/S2772753X23002423
https://pmc.ncbi.nlm.nih.gov/articles/PMC9658962/
About the Author
Aengela Grace Jacob is a final-year student of the BSc Biotechnology and Chemistry (BSc BtC) dual major at Christ University Central Campus, Bangalore.
The Genomic Landscape of Major Depressive
Disorder
by Geetha Paul
airis4D, Vol.4, No.3, 2026
www.airis4d.com
2.1 Introduction
Major Depressive Disorder (MDD) is a globally
prevalent psychiatric condition characterised by
persistent low mood, anhedonia, and cognitive
dysfunction. Despite decades of clinical study, the
underlying aetiology of MDD remained largely elusive
until the advent of the genomic era. Traditional views
often sought a single depression gene, a silver bullet that
could explain the condition’s onset. However, modern
genetic research, particularly large-scale Genome-
Wide Association Studies (GWAS), has fundamentally
dismantled this notion. We now understand MDD
as a polygenic trait, meaning its genetic liability
is distributed across thousands of common genetic
variations, primarily single-nucleotide polymorphisms
(SNPs).
A SNP represents a single letter change in the
DNA sequence (e.g., a Cytosine replacing a Thymine).
Individually, these SNPs exert a negligible effect on an
individual’s risk; when aggregated, they form a complex
genetic architecture that accounts for roughly 10–15%
of the total variance in depression liability (SNP-
based heritability). This polygenic nature explains the
high degree of heterogeneity seen in clinics: because
different combinations of thousands of SNPs can lead
to a diagnosis, two patients may share the same label
but possess entirely different biological drivers.
The role of SNPs in MDD detection is not
about a binary yes/no diagnosis but about quantifying
susceptibility and resolving the disorder’s cryptic
biological subtypes. Researchers utilise Polygenic
Risk Scores (PRS) as a standardised quantitative
metric. This score aggregates the weighted effects
of thousands of single-nucleotide polymorphisms
(SNPs) discovered via large-scale Genome-Wide
Association Studies (GWAS). By calculating this
cumulative genetic burden, scientists can effectively
stratify populations by risk level. This process identifies
individuals with high latent vulnerability to Major
Depressive Disorder (MDD), even before clinical
symptoms manifest. This molecular approach bridges
the gap between subjective symptom reporting and
objective biological data, paving the way for a more
precise, stratified approach to mental health that
moves away from a one-size-fits-all model toward
personalised psychiatry.
2.2 Quantifying Genetic Liability via
Polygenic Risk Scores (PRS)
Since no single gene causes depression, risk
detection relies on the cumulative signal from thousands
of SNPs. The Polygenic Risk Score (PRS) is the
primary tool used to measure this. Recent studies have
shown that individuals in the top 2.5% of the PRS
distribution have a significantly higher risk (up to 5
times greater) of developing MDD than those with
average scores. PRS can identify high-risk individuals
before the first clinical episode, enabling preemptive
monitoring and resilience-building interventions.
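Conceptually, the score is just a weighted sum: for every SNP, the individual's count of risk alleles (0, 1, or 2) is multiplied by that SNP's effect size estimated by GWAS, and the products are added up. The toy Python sketch below shows the arithmetic; the SNP identifiers, weights, and genotypes are invented for illustration and are not real GWAS values.

# Toy polygenic risk score: sum of (risk-allele count x GWAS effect size).
# SNP IDs, weights, and genotypes below are invented for illustration.

gwas_weights = {        # per-SNP effect sizes (e.g. log odds ratios)
    "rs0000001": 0.021,
    "rs0000002": -0.013,
    "rs0000003": 0.008,
}

individual = {          # risk-allele dosage for one person: 0, 1, or 2
    "rs0000001": 2,
    "rs0000002": 0,
    "rs0000003": 1,
}

def polygenic_risk_score(weights, dosages):
    """PRS = sum over SNPs of (effect size x allele count); real scores
    aggregate thousands to millions of SNPs in exactly this way."""
    return sum(w * dosages.get(snp, 0) for snp, w in weights.items())

print(f"PRS = {polygenic_risk_score(gwas_weights, individual):+.3f}")

In practice, the raw score is then standardised against a reference population, which is how percentile groups such as the top 2.5% mentioned above are defined.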
2.3 Resolving Diagnostic
Heterogeneity
One of the most significant roles of SNPs is
identifying cryptic subtypes of depression. While the
DSM-5 lists symptoms, SNPs reveal the underlying
biological pathways. Some patients carry SNPs
concentrated in serotonin/dopamine pathways (e.g.,
SLC6A4, DRD2), while others show a high burden
of SNPs in inflammatory genes (IL-6, CRP). Research
shows that MDD shares a p-factor (a general genetic
liability) with other disorders like Bipolar Disorder and
Schizophrenia, which SNPs help untangle.
2.4 Pharmacogenomics and
Treatment Response
SNPs are critical in detecting how a patient
will respond to medication, a field known as
pharmacogenomics. Variants in the CYP2D6 and
CYP2C19 genes determine how quickly the liver
metabolises antidepressants. Detection of ultra-rapid or poor metaboliser SNPs prevents toxicity and
treatment failure. Variations in the BDNF (Brain-
Derived Neurotrophic Factor) gene can predict the
success of SSRIs, as they influence the brain’s ability
to physically adapt and heal during treatment.
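This dosing logic can be sketched as a small lookup. The toy classifier below follows the general activity-score idea used in pharmacogenomics: each inherited CYP2D6 allele contributes an activity value, and the summed score is binned into a metaboliser category. The allele values and cutoffs here are simplified illustrations, not authoritative clinical (CPIC) assignments.

# Toy CYP2D6 metaboliser classifier based on an activity-score idea.
# Allele activity values and cutoffs are simplified illustrations,
# not authoritative clinical (CPIC) assignments.

ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0, "*4": 0.0, "*10": 0.25, "*41": 0.25}

def metaboliser_status(allele1, allele2):
    """Sum the activity of the two inherited alleles and bin the score."""
    score = ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]
    if score == 0:
        return "poor metaboliser"
    if score < 1.25:
        return "intermediate metaboliser"
    if score <= 2.25:
        return "normal metaboliser"
    return "ultra-rapid metaboliser"

print(metaboliser_status("*4", "*4"))    # poor: toxicity risk at standard dose
print(metaboliser_status("*1", "*41"))   # intermediate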
Example 1: The BDNF Val66Met Polymorphism
(rs6265)
The Brain-Derived Neurotrophic Factor
(BDNF) gene encodes a protein that acts like fertilizer
for the brain, helping neurons grow and form new
connections (synapses).
The BDNF transcript comprises one of eight 5' untranslated exons (exons I-VIII) and the common 3' protein-coding exon IX; B: Intracellular signalling after TrkB activation. Following BDNF binding, TrkB dimerisation and its phosphorylation at intracellular tyrosine residues occur. Then, the activated TrkB stimulates three main signalling pathways: (1) mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK); (2) phosphatidylinositol 3-kinase (PI3K); and (3) phospholipase Cγ (PLCγ) pathways. The MAPK pathway, in which MAPK/ERK kinase (MEK) is involved, plays a role in neuronal differentiation and outgrowth. PI3K signalling promotes neuronal survival via Ras or GRB-associated binder 1 (Gab1). Following PLCγ activation, inositol-1,4,5-trisphosphate (IP3) and diacylglycerol (DAG) are both produced. DAG activates protein kinase C (PKC), which is important for regulating synaptic plasticity. Meanwhile, IP3 increases intracellular Ca²⁺ concentration via IP3 receptors on the endoplasmic reticulum (ER), resulting in activation of Ca²⁺/calmodulin (CaM)-dependent protein kinases, including CaMKII, CaMKK, and CaMKI. These MAPK/ERK, PI3K, and PLCγ pathways can regulate gene transcription.
Figure 1: Brain-derived neurotrophic factor (BDNF) gene and stimulated intracellular signalling cascades after activation of tropomyosin-related kinase (Trk)B. Image courtesy: https://tse3.mm.bing.net/th/id/OIP.Kk1ntIO-iez3fzmc93LTdAHaHS?pid=ImgDet&w=204&h=200&c=7&dpr=1.3&o=7&rm=3
In the SNP, a single nucleotide switch causes the
amino acid Valine (Val) to be replaced by Methionine
(Met) at position 66. The Met variant impairs BDNF
secretion. Research shows that individuals with the
Met allele often have a smaller hippocampus, the brain
region responsible for emotion regulation and memory.
Clinical Impact:
Detection of this SNP helps clinicians understand
why some patients suffer from treatment-resistant
depression. Because their brains have lower plasticity,
standard antidepressants may take longer to work or require supplemental therapies like exercise or TMS (Transcranial Magnetic Stimulation), which naturally boost BDNF levels.
Figure 2: Schematic view of how the SLC6A4 5-HTTLPR variants act in different parts of the brain. Image courtesy: https://medcraveonline.com/IJMBOA/assessment-of-genetic-mutations-by-the-genersquos-5-httlpr-and-slc6a4-and-scc6a4-in-the-human-depression.html
Example 2: The SLC6A4 5-HTTLPR
Polymorphism
The SLC6A4 5-HTTLPR Polymorphism is perhaps
the most famous example of a cryptic genetic marker in
psychiatry. It involves the Serotonin Transporter
Gene, which is the primary target for Selective
Serotonin Reuptake Inhibitors (SSRI) antidepressants
(like Prozac or Lexapro). Serotonin is often called the “happiness hormone”.
The SLC6A4 gene encodes the serotonin
transporter, and its 5-HTTLPR polymorphism
significantly influences serotonin reuptake, impacting
mood regulation and responses to antidepressants.
The Variation: This is a functional polymorphism
in the gene's promoter region, often categorised into
Short (S) and Long (L) alleles.
The Mechanism: The S allele results in lower
expression of the serotonin transporter. This makes the
brain's emotional centre (the amygdala) hyper-reactive
to stress.
Clinical Impact
Stress Sensitivity: People with the S allele are
statistically more likely to develop MDD following a
stressful life event (Gene-Environment Interaction).
SSRI Response: Studies suggest that individuals
with the L/L genotype generally respond better and faster to SSRIs than those with the S allele, who may experience more side effects.
Figure 3: SLC6A4 5-HTTLPR. The SLC6A4 gene encodes the protein responsible for serotonin reuptake. The 5-HTTLPR polymorphism in the promoter region dictates expression levels: the short (S) allele reduces transcription, leading to lower transporter density on the presynaptic membrane compared to the long (L) allele. This mechanism is the primary target for SSRI antidepressants. Image courtesy: https://genesight.com/white-papers/get-to-know-a-gene-slc6a4/
References
https://genesight.com/white-papers/get-to-know-a-gene-slc6a4/
https://medcraveonline.com/IJMBOA/assessment-of-genetic-mutations-by-the-genersquos-5-httlpr-and-slc6a4-and-scc6a4-in-the-human-depression.html
https://tse3.mm.bing.net/th/id/OIP.Kk1ntIO-iez3fzmc93LTdAHaHS?pid=ImgDet&w=204&h=200&c=7&dpr=1.3&o=7&rm=3
Howard, D. M., et al. (2019). Genome-wide
meta-analysis of depression identifies 102 independent
variants and highlights the prefrontal cortex. Nature
Neuroscience. A foundational paper establishing the
polygenic architecture of depression.
McIntosh, A. M., et al. (2024). Genome-wide
study of major depression in 685,808 diverse individuals
identifies 697 independent associations. medRxiv.
This massive meta-analysis highlights the polygenic
nature of MDD and identifies hundreds of novel SNP
associations.
Mullins, N., et al. (2021). Genome-wide
association study of more than 40,000 bipolar disorder
cases provides new insights into the underlying biology.
Nature Genetics. (Used for transdiagnostic SNP
comparison).
Wray, N. R., et al. (2018). Genome-wide
association analyses identify 44 independent loci
associated with major depressive disorder. Nature
Genetics. This study confirms that MDD risk is driven
by the additive effects of common SNPs.
Zajac, G. J., et al. (2025). Influence and role of
polygenic risk score in the development of 32 complex
diseases. Journal of Global Health. Provides data on
how PRS improves clinical risk prediction for early-
onset cases of MDD.
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Part IV
Computer Programming
From Telescope to Data Product
by Ajay Vibhute
airis4D, Vol.4, No.3, 2026
www.airis4d.com
1.1 From Telescope to Data Product
1.1.1 Raw Measurements vs Scientific Data
The journey from a telescope's detector to a usable
scientific data product begins with raw measurements,
which are simply numerical values—counts or
voltages—recorded by the instrument. These values
reflect the detector's response to incoming radiation
rather than direct physical properties of celestial
objects. A CCD pixel, for example, measures
accumulated charge generated by photons, but that
signal also includes contributions from background
sky emission, electronic offsets, thermal noise, and
occasional cosmic ray events. Raw measurements
are further shaped by the instrument itself. The
telescope's point-spread function spreads light from
a single source over multiple pixels, and variations
in pixel sensitivity introduce spatial inconsistencies.
Detector nonlinearities and electronic effects can distort
signals, particularly at high flux levels. As a result, the
raw image is not a direct representation of the sky, but a
convolution of the true scene with instrumental response,
embedded within noise (figure 1). Because of this,
raw data are not immediately interpretable in physical
terms. Transforming them into scientifically meaningful
data requires computational corrections such as bias
subtraction, flat-fielding, background modeling, and
calibration into standardized physical units. These steps
convert instrument-dependent signals into quantities
that can be reliably compared and scientifically analyzed (figure 2).
Figure 1: Recorded Raw image
Figure 2: Reconstructed sky image
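The image-formation picture described above can be written down as an explicit forward model, which is also how pipeline algorithms are tested against simulated data. The NumPy/SciPy sketch below builds a toy raw frame from an idealised sky; the PSF width, flat-field scatter, bias level, background, and noise figures are all arbitrary assumptions chosen for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Idealised "true sky": a dark field containing two point sources.
sky = np.zeros((64, 64))
sky[20, 20] = 500.0
sky[40, 45] = 200.0

# Instrumental and environmental effects (all values are assumptions):
blurred = gaussian_filter(sky, sigma=1.5)                  # point-spread function
flat = 1.0 + 0.02 * rng.standard_normal(sky.shape)         # pixel sensitivity map
bias = 100.0                                               # electronic offset, counts
background = 5.0                                           # sky background, counts

signal = (blurred + background) * flat + bias
raw = rng.poisson(np.clip(signal, 0, None)).astype(float)  # photon (shot) noise
raw += rng.normal(0.0, 2.0, size=sky.shape)                # readout noise

print(raw.shape, round(raw.mean(), 1))

Reversing these steps, subtracting the bias and background, dividing by the flat, and deconvolving the PSF, is precisely the job of the calibration procedures discussed below.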
1.1.2 Data Acquisition and Instrument Effects
Data acquisition in astronomy is far from a passive
recording process. Each measurement is shaped
by the design of the telescope, the properties of
the detectors, and the observing environment. The
optical configuration determines angular resolution
and light-gathering power, while alignment errors,
mirror imperfections, or thermal expansion can subtly
alter image quality. In ground-based observations,
atmospheric turbulence blurs incoming light, producing
time-varying distortions commonly referred to as
seeing. Atmospheric extinction and sky background
further modify the signal before it even reaches the
detector. In space-based instruments, although the
atmosphere is absent, other challenges arise, including
cosmic ray impacts, thermal fluctuations, and small
pointing instabilities of the spacecraft. Detectors
themselves introduce additional complexities. Readout
noise affects faint signals, pixel-to-pixel sensitivity
variations create spatial non-uniformities, and charge
diffusion can blur sharp features. Bright sources
may saturate detector elements, leading to nonlinear
responses or data loss in high-intensity regions. Even
the timing of exposures and the stability of electronics
influence the final measurement. Understanding these
influences is essential because they fundamentally
shape the raw data. Computational corrections applied
later—such as deblurring, background subtraction,
and calibration—depend on accurate models of these
instrumental and environmental effects. In this way, data
acquisition and computational processing are tightly
interconnected stages of a single measurement process.
1.1.3 Calibration as a Computational Task
Calibration transforms raw measurements into
quantitative, scientifically meaningful data by
systematically correcting for instrumental and
environmental effects. At its most basic level,
this includes procedures such as bias subtraction to
remove electronic offsets, dark current subtraction to
account for thermally generated charge, and flat-fielding
to correct for pixel-to-pixel sensitivity variations.
These steps ensure that the recorded signal more
accurately reflects incoming radiation rather than
detector imperfections. In imaging, calibration may
also involve modeling and subtracting background sky
emission, while in spectroscopy it includes wavelength
calibration using known spectral lines. Beyond these
corrections, calibration extends to photometric and
spectral transformations, converting detector counts
into standardized physical units such as flux density,
magnitude, or calibrated wavelength. This often
requires reference observations of standard stars or
laboratory calibration sources, enabling measurements
from different instruments or observing runs to be
directly compared. Astrometric calibration further
aligns images with celestial coordinate systems,
allowing precise positional measurements. In modern
astronomy, calibration is primarily a computational
process, embedded within automated pipelines that
apply corrections consistently across thousands or
millions of observations. Because calibration defines
the quantitative scale of the data, even small inaccuracies
can introduce systematic biases that propagate into
derived parameters such as distances, masses, or
luminosities. For this reason, calibration procedures
must be rigorously validated, regularly updated, and
carefully documented to ensure the reliability and
reproducibility of scientific results.
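At the array level, the basic corrections reduce to simple arithmetic, which is why they lend themselves so well to automated pipelines. The sketch below shows the canonical bias/dark/flat sequence in NumPy; the frames and values are placeholders, and a real pipeline would combine many calibration exposures and propagate uncertainties alongside the data.

import numpy as np

def calibrate_frame(raw, bias, dark_rate, flat, exptime):
    """Canonical CCD reduction:
      1. subtract the electronic bias offset,
      2. subtract dark current scaled to the exposure time,
      3. divide by the normalised flat field to remove
         pixel-to-pixel sensitivity variations."""
    debiased = raw - bias
    dark_subtracted = debiased - dark_rate * exptime   # dark_rate: counts/s/pixel
    normalised_flat = flat / np.median(flat)           # flat normalised to unity
    return dark_subtracted / normalised_flat

# Placeholder calibration frames, purely for illustration:
shape = (128, 128)
rng = np.random.default_rng(0)
raw = np.full(shape, 1200.0)
bias = np.full(shape, 100.0)
dark_rate = np.full(shape, 0.05)
flat = rng.uniform(0.95, 1.05, shape)

science = calibrate_frame(raw, bias, dark_rate, flat, exptime=300.0)
print(round(science.mean(), 1))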
1.1.4 Standard Data Products in Astronomy
Once calibrated, data can be transformed into
standardized products suitable for analysis, distribution,
and long-term archiving. These products include
processed images, calibrated spectra, time-series
light curves, and structured catalogs of detected
sources. Each product is typically accompanied by
extensive metadata describing observing conditions,
calibration parameters, uncertainty estimates, and
quality flags. This contextual information is essential,
as it allows researchers to assess reliability, propagate
uncertainties, and reproduce results. Standardization
is critical because modern astronomy is inherently
comparative and cumulative. Datasets from different
instruments, observatories, or observing epochs must
be interoperable to enable cross-matching, multi-
wavelength analysis, and large-scale statistical studies.
Uniform data formats, coordinate systems, and
calibration conventions make it possible to combine
observations across surveys and to perform automated
analyses on millions or billions of sources. In large-scale
projects such as the Sloan Digital Sky Survey (SDSS)
or Gaia, standardized data products form the backbone
of scientific research, supporting investigations into
stellar populations, galactic structure, dark matter
distribution, and cosmology. Computational pipelines
are central to the production of these products. From
ingesting raw detector outputs to performing source
detection, photometric measurement, and catalog
assembly, pipelines ensure consistency and scalability.
They also embed quality-control procedures that flag
anomalies and track processing provenance. In this
way, standardized data products represent not just
processed observations, but carefully curated outputs of
an integrated computational system designed to support
reliable scientific discovery.
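In practice, such products are commonly distributed as FITS files, in which the metadata travels with the pixels. A minimal sketch using Astropy's fits module is shown below; the data are dummy values, and while BUNIT and EXPTIME are conventional FITS keywords, CALLEVEL here is an illustrative, made-up keyword standing in for a survey-specific processing flag.

import numpy as np
from astropy.io import fits

# A calibrated image plus the metadata needed to interpret it.
data = np.zeros((64, 64), dtype=np.float32)

hdu = fits.PrimaryHDU(data)
hdu.header["BUNIT"] = ("electron/s", "physical units of the array")
hdu.header["EXPTIME"] = (300.0, "exposure time in seconds")
hdu.header["CALLEVEL"] = (2, "illustrative processing-level keyword")
hdu.header["HISTORY"] = "bias-subtracted, flat-fielded, flux-calibrated"
hdu.writeto("calibrated_image.fits", overwrite=True)

# Reading it back recovers both the pixels and their provenance:
with fits.open("calibrated_image.fits") as hdul:
    print(hdul[0].header["BUNIT"], hdul[0].data.shape)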
1.1.5 Sources of Systematic Error
Even after calibration and processing, systematic
errors can affect data quality and interpretation
in subtle but significant ways. Unlike random
noise, which tends to average out over many
measurements, systematic effects introduce consistent
biases that can shift results in a particular direction.
These errors may stem from instrumental limitations
such as optical aberrations, imperfect flat-field
corrections, detector nonlinearities, or long-term drift
in sensitivity. Environmental factors—including
variations in atmospheric transparency, scattered
light, thermal instability, or imperfect background
subtraction—can also leave residual signatures in the
data. In addition, the data reduction algorithms
themselves can introduce biases. Assumptions
about noise distributions, source shapes, background
levels, or model parametrizations may not hold
uniformly across all observations. For example, an
incorrect model of the point-spread function can
distort photometric measurements, while oversimplified
background modeling can artificially enhance or
suppress faint sources. In large surveys, even small
systematic biases can accumulate across millions
of objects, leading to measurable distortions in
statistical analyses. Systematic errors are particularly
challenging because they can mimic or obscure
genuine astrophysical signals, sometimes producing
apparent trends or correlations that are not physically
real. Detecting and mitigating these effects requires
rigorous validation procedures, cross-comparisons with
independent instruments or surveys, simulation-based
testing, and careful uncertainty modeling. In modern
astronomical computing, controlling systematic error
is often as important as increasing statistical precision,
and it remains one of the central challenges in producing
reliable scientific results.
1.1.6 Concluding Remarks
The transformation from telescope to data product
illustrates the central role of computation in modern
astronomy. Every stage—from raw measurement to
calibrated image or structured catalog—depends on
algorithms that correct instrumental effects, model
uncertainties, and extract meaningful signals from
complex data. Scientific results therefore reflect
not only the performance of the telescope, but also
the assumptions and design of the computational
pipeline. Understanding both the physical origins of
measurements and the methods used to process them is
essential for producing reliable and reproducible science.
In this sense, modern data products are true co-creations
of hardware and software, where computational
processes are as fundamental to discovery as the
instruments that collect the light.
About the Author
Dr. Ajay Vibhute is currently working
at the National Radio Astronomy Observatory in
the USA. His research interests mainly involve
astronomical imaging techniques, transient detection,
machine learning, and computing using heterogeneous,
accelerated computer architectures.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the fields of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the era of the 4th Industrial Revolution, which operates on cyber-physical production systems.
Cutting-edge research and development in science and technology, creating new knowledge and skills, have
become the key to the new world economy. Most of the resources for this goal can be harnessed by integrating
biological systems with the intelligent computing systems offered by AI. The future survival of humans, animals,
and the ecosystem depends on how efficiently and responsibly realities and resources are used for abundance and
wellness. Artificial Intelligence Research and Intelligent Systems pursues this vision and looks for the best
actions to ensure an abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist talented people in materialising a design into a product, a service, or knowledge that benefits the
community and the planet.
Deploy
To realise, and to educate humanity, that knowledge which is not deployed makes no difference by its absence.
Campus
Situated on a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
promoting Education, Research, Engineering, Biology, Development, and related fields.
The whole campus is powered by solar energy and has a rainwater-harvesting facility that can provide a sufficient
water supply for up to three months of drought. The computing facility on the campus is accessible from anywhere,
24×7, through dedicated optical-fibre internet connectivity.
A freshwater stream originating in the nearby hills flows through the middle of the campus, which is a noted
habitat for tropical fauna and flora. airis4D carries out periodic, systematic surveys of water quality and
species diversity in the region to ensure its richness, and it is our pride that the site has consistently
remained environment-friendly and rich in biodiversity. airis4D also grows fruit plants that can feed birds and
maintains water bodies to help wildlife survive droughts.