Cover page
Image Name: Glow Widow Tetra: Known as ‘glow widow tetra’, ‘glow skirt tetra’, or ‘colour widow tetra’ in the
ornamental fish industry, a genetically modified variety of the black widow tetra Gymnocorymbus ternetzi is the
most favoured by aquarium fish hobbyists in West Bengal. Stunning and sparkling red, blue, yellow, green, pink,
orange, and purple-bodied colour variations of this tetra have been developed through gene transfer or transgenic
technology. Photo: Geetha Paul
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.3, No.7, 2025
www.airis4d.com
This edition starts with the article “Entropy in
Natural Language Processing: Structure in Seeming
Chaos” by Jinsu Ann Mathew. The author explores
how entropy—a concept from physics and information
theory—is used in Natural Language Processing (NLP)
to uncover structure in language that often appears
random. The article introduces lexical entropy to
measure the unpredictability of words or characters
in a text, highlighting how repetition leads to lower
entropy and greater variety indicates higher entropy. It
then explains n-gram entropy, which examines patterns
in sequences of language units (like word pairs or trios),
providing deeper insights into syntax and style. Cross-
entropy, a metric used to evaluate how well language
models predict upcoming words, is discussed as both
a performance measure and an indicator of linguistic
complexity. Together, these entropy measures reveal
that beneath the surface of human language lies an
underlying structure that can be analyzed and modeled,
making entropy a powerful tool in computational
linguistics.
The article “Nonlinear Waves in Plasma Physics”
by Abishek P S explores the crucial role of nonlinear
wave phenomena in understanding and harnessing the
behavior of plasma—the fourth state of matter that
dominates the universe. Unlike linear waves, nonlinear
plasma waves give rise to complex structures like shock
waves, solitons, and double layers, which govern energy
transport, particle acceleration, and self-organization
across a range of environments from space to laboratory
settings. These structures are central to astrophysical
phenomena such as supernova explosions and solar
flares, and are also vital in applied fields like fusion
energy, plasma-based propulsion, and advanced particle
accelerators. The article emphasizes that linear theories
fall short in capturing the richness of plasma dynamics,
and advanced simulations and diagnostics are essential
for studying these nonlinear effects. Understanding
these waves not only enhances predictive capabilities
in space weather and fusion stability but also opens
avenues for transformative technologies. Ultimately,
the study of nonlinear waves bridges theoretical insight
with practical impact, offering tools to decode and
control the dynamic nature of plasma across scales.
In the article “Black Hole Stories-19”, Professor Ajit
Kembhavi explores how gravitational waves—ripples in
the fabric of four-dimensional space-time—affect matter
and how their presence can be experimentally detected.
Unlike electromagnetic waves that exert noticeable
forces on charged particles, gravitational waves cause
minute changes in spatial geometry, subtly stretching
and compressing distances. When such a wave passes
through a circle of free particles, it deforms the circle
into an ellipse in a periodic fashion, with the direction
of deformation depending on the wave’s polarization.
Early attempts to detect these waves included bar
detectors, notably constructed by physicist Joseph
Weber in the 1960s. Although Weber’s experiments
with large aluminium cylinders aimed to detect wave-
induced vibrations, his findings were ultimately not
confirmed by others. However, his pioneering work
paved the way for more advanced detection methods.
The article then introduces interferometric detectors,
particularly a simplified Michelson interferometer,
where laser beams reflect off mirrors placed at equal
distances. The passage of a gravitational wave
alters the distances between mirrors, changing the
interference pattern of the light. These fluctuations in
the interference pattern serve as a detectable signature
of a gravitational wave. The story sets the stage for
the next article, which will describe how this principle
is applied in the large-scale LIGO experiment that
successfully detected gravitational waves in 2015.
The article “X-ray Astronomy: Through Missions”
by Aromal P discusses key X-ray astronomy missions
launched in the 2010s, focusing on three significant
satellites: NuSTAR, AstroSat, and Hitomi. NuSTAR,
launched in 2012 by NASA, is the first focusing
high-energy X-ray telescope in orbit and has enabled
groundbreaking studies of supermassive black holes,
neutron stars, and supernova remnants. AstroSat,
India’s first multi-wavelength space observatory
launched in 2015, carries five co-aligned instruments
covering UV to hard X-rays. It has advanced studies in
gamma-ray bursts, neutron stars, and stellar evolution.
Hitomi, a Japanese mission launched in 2016, was
designed for high-resolution spectroscopy across a
wide energy range but was unfortunately lost shortly
after beginning operations. Each mission significantly
expanded our understanding of the high-energy universe
through improved instrumentation and observational
capabilities.
The article “The Bright Centers of Galaxies:
A Peek into Active Galactic Nuclei (AGN)” by Dr.
Savithri H. Ezhikode explores the nature, structure, and
history of active galactic nuclei—extremely luminous
regions at the centers of some galaxies powered
by supermassive black holes (SMBHs). While
most SMBHs are dormant, those actively accreting
matter form AGN, which emit vast energy across
the electromagnetic spectrum and can outshine entire
galaxies. Historical observations dating back to the early
20th century laid the groundwork for identifying AGN
subclasses such as Seyfert galaxies and quasars. AGN
exhibit distinctive features including high luminosity,
broadband emission, rapid variability, emission lines,
and powerful relativistic jets. Their core engine
consists of an SMBH surrounded by an accretion disk,
a hot corona, and various emission regions like the
broad and narrow line regions. A dusty torus further
influences the AGN’s appearance depending on viewing
angle. The article highlights the multi-component
and highly energetic nature of AGN and emphasizes
their importance in understanding cosmic evolution,
with future studies promising deeper insight into their
dynamic processes.
Sindhu G’s article, “T Coronae Borealis (T CrB)”,
describes a rare and fascinating recurrent nova located
in the constellation Corona Borealis, poised for a
spectacular eruption in 2025 after previous outbursts
in 1866 and 1946. This binary system, consisting
of a white dwarf and a red giant, undergoes nova
eruptions when hydrogen transferred from the red giant
accumulates on the white dwarf and ignites in a surface
thermonuclear explosion. Unlike most novae, T CrB
erupts roughly every 80 years, offering scientists a
unique chance to observe and study the full nova cycle.
Since 2023, the system has shown pre-eruption signs
such as optical brightening and increased X-ray activity,
strongly indicating an imminent explosion. When it
erupts, T CrB is expected to become visible to the
naked eye and offer a rare celestial spectacle. Beyond
the visual awe, the event will help astronomers refine
nova prediction models, explore binary star evolution,
and assess whether the system could one day become a
Type Ia supernova.
The article “Decoding DNA’s Secrets: How
Sequence Shapes Life” by Geetha Paul explores the
intricate language of DNA and how its sequence of four
chemical bases—adenine (A), thymine (T), cytosine
(C), and guanine (G)—encodes the instructions for life.
It explains how DNA’s double-helix structure enables
the storage and transmission of genetic information,
with genes acting as functional units that dictate
protein synthesis. The genetic code is read in triplets
(codons), each representing a specific amino acid
or signaling function. Through transcription and
translation, these codes are converted into proteins
essential for cellular functions. The article also
examines the roles of regulatory DNA in controlling
gene expression and highlights real-world implications
of DNA sequence changes, such as in sickle cell
anemia, lactose intolerance, and eye color variation.
Emphasizing the near-universality of the genetic code,
it discusses its significance in biotechnology and
medicine, illustrating that DNA is not just a molecular
blueprint but the very narrative of life.
The article “Understanding Performance
Metrics in Parallel Computing” by Dr. Ajay
Vibhute explores essential metrics used to evaluate the
effectiveness of parallel algorithms. It emphasizes that
designing parallel programs is only half the challenge;
assessing their performance is equally critical. Key
metrics discussed include speedup, which measures
how much faster a task runs on multiple processors;
efficiency, which gauges how well processor resources
are utilized; and scalability, which evaluates how
performance evolves as more processors or larger
problems are introduced. The article also covers
Amdahl’s Law, highlighting the theoretical limits of
speedup imposed by the non-parallelizable portions
of code. Through examples and analysis, it shows
how performance bottlenecks, overheads, and design
limitations impact real-world outcomes. Ultimately, the
article underscores the need for thoughtful, iterative
optimization and early integration of performance
evaluation to develop scalable, high-performance
software for modern multicore and distributed systems.
Contents
Editorial

I Artificial Intelligence and Machine Learning

1 Entropy in Natural Language Processing: Structure in Seeming Chaos
1.1 Lexical Entropy: Quantifying Uncertainty in Text
1.2 N-Gram Entropy: Capturing Sequence Patterns
1.3 Language Models and Cross-Entropy
1.4 Conclusion

II Astronomy and Astrophysics

1 Nonlinear Waves in Plasma Physics
1.1 Introduction
1.2 Nonlinear Waves
1.3 Types of Nonlinear Waves
1.4 Significance of Studying Nonlinear Waves
1.5 Conclusion

2 Black Hole Stories-19: The Detection of Gravitational Waves
2.1 Effects Produced by Gravitational Waves
2.2 Bar Detectors
2.3 Interferometric Detectors for Gravitational Waves

3 X-ray Astronomy: Through Missions
3.1 Satellites in the 2010s
3.2 NuSTAR
3.3 AstroSat
3.4 HITOMI

4 The Bright Centers of Galaxies: A Peek into Active Galactic Nuclei
4.1 Introduction
4.2 Observational Signatures
4.3 The AGN Central Engine
4.4 Components of AGN
4.5 Conclusion

5 T Coronae Borealis: A Recurrent Nova on the Verge of Eruption
5.1 Introduction: The Sleeping Star in Corona Borealis
5.2 Past Eruptions and Their Significance
5.3 The Science Behind a Nova Eruption
5.4 Current Observations and Signs of Imminent Eruption
5.5 The Anticipated Nova Event and Its Scientific Impact
5.6 Conclusion: Awaiting a Celestial Spectacle

III Biosciences

1 Decoding DNA’s Secrets: How Sequence Shapes Life
1.1 Introduction
1.2 The Structure of DNA: The Blueprint of Life
1.3 Genes: The Units of Heredity
1.4 The Genetic Code: Triplets and Codons
1.5 From DNA Sequence to Biological Message
1.6 Protein Folding and Function
1.7 Real-World Examples of DNA Messages
1.8 The Universality and Flexibility of the Genetic Code
1.9 Conclusion

IV Computer Programming

1 Understanding Performance Metrics in Parallel Computing
1.1 Introduction
1.2 Speedup
1.3 Efficiency
1.4 Scalability
1.5 Amdahl’s Law
1.6 Conclusion
Part I
Artificial Intelligence and Machine Learning
Entropy in Natural Language Processing:
Structure in Seeming Chaos
by Jinsu Ann Mathew
airis4D, Vol.3, No.7, 2025
www.airis4d.com
Language might seem unpredictable at
times—with new words, different writing styles,
and sentences that don’t always follow clear rules. But
when we look closely, there is a hidden structure behind
the way we speak and write. One useful way to study
this structure is through a concept called entropy.
Originally used in physics and information theory,
entropy is a way to measure uncertainty or randomness.
In Natural Language Processing (NLP), we use entropy
to understand how ordered or chaotic a piece of text is.
For example, we can measure how predictable certain
words or letters are in a sentence or how much variety
there is in the way language is used.
In this article, we’ll explore how entropy helps
us analyze text using simple tools like n-grams and
language models. We’ll also see how it can reveal
interesting patterns in how language works—even when
it seems random.
1.1 Lexical Entropy: Quantifying
Uncertainty in Text
Lexical entropy is a way of measuring how
unpredictable or random the characters or words in a
piece of text are. In simple terms, it helps us understand
how much variety exists in the language being used. If
a text is highly repetitive, it has low entropy, meaning it
is more predictable. On the other hand, if a text uses a
wide range of words or characters in an unpredictable
way, it has high entropy. This makes lexical entropy a
useful tool for examining the complexity and structure
of language.
At the character level, consider a string like
‘aaaaaa’. It contains only one character repeated several
times, making it extremely predictable. The lexical
entropy here is low, in this case exactly zero. Now compare
that to a string like ‘abcdef’, which contains six different
letters with no repetition. Each character is equally
likely and gives us new information, making the string
less predictable and the entropy higher. This kind of
calculation helps quantify the randomness in a sequence
of letters.
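Concretely, the quantity being computed is the standard Shannon entropy (a textbook formula from information theory, stated here for reference) of a text whose symbols x occur with probabilities p(x):

```latex
H = -\sum_{x} p(x)\,\log_2 p(x) \quad \text{bits per symbol.}
```

For ‘aaaaaa’, p(a) = 1 and H = 0; for ‘abcdef’, each of the six characters has p = 1/6, giving H = log2 6 ≈ 2.58 bits per character.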
The same idea applies at the word level. For
example, a simple sentence like “The cat sat on the
mat” repeats common words and follows a familiar
pattern, leading to lower entropy. In contrast, a complex
sentence from a scientific paper or literary text might
contain rare or specialized words, used less frequently
and less predictably, resulting in higher entropy. By
measuring how often each word appears and calculating
their probabilities, we can compute the lexical entropy
of the text.
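As a minimal illustrative sketch (not code from the article), the whole calculation fits in a few lines of Python: estimate each unit’s probability from its frequency, then apply the Shannon formula.

```python
import math
from collections import Counter

def lexical_entropy(units):
    """Shannon entropy, in bits per unit, of a sequence of characters or words."""
    counts = Counter(units)
    total = len(units)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(lexical_entropy("aaaaaa"))   # 0.0    -- one repeated character, fully predictable
print(lexical_entropy("abcdef"))   # ~2.585 -- six equally likely characters, log2(6)
print(lexical_entropy("the cat sat on the mat".split()))  # repeated "the" lowers entropy
```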
Understanding lexical entropy helps us make useful
observations. Low entropy often suggests structured
or constrained language—such as in children’s books,
instructions, or poetry—where repetition and patterns
are common. High entropy indicates greater diversity
or randomness, which can be seen in academic texts,
stream-of-consciousness writing, or even randomly
generated strings of text. Each case tells us something
about how the text is constructed and what kind of
cognitive load it may carry for the reader.
Lexical entropy has practical applications in many
areas of Natural Language Processing. It can be used
to compare writing styles, estimate the richness or
simplicity of a document, or detect changes in language
use over time. It also plays a role in stylometry (the
study of writing style), language development research,
and text classification. In essence, lexical entropy gives
us a way to measure the invisible patterns that shape
our communication.
1.2 N-Gram Entropy: Capturing
Sequence Patterns
While lexical entropy looks at the unpredictability
of individual characters or words, n-gram entropy
focuses on the patterns formed by sequences of these
elements. An n-gram is a group of n items taken from
a sequence of text—these items could be characters
or words. For example, in the phrase “the cat”, the
character bigrams (n = 2) are “th”, “he”, “e ”, “ c”, “ca”,
and “at” (counting the space as a character). Similarly,
the only word bigram is “the cat”. By
analyzing how frequently these n-grams occur in a text,
we can calculate the entropy of these sequences and
gain insights into the structure of the language.
N-gram entropy helps us understand how
predictable these short sequences are. If a certain
bigram appears very often—like “th” in English—it
contributes less to the overall entropy. In contrast,
if a wide variety of bigrams are used with roughly
equal frequency, the entropy increases, reflecting greater
diversity and less predictability. For example, a tightly
structured text such as a legal document may repeat
specific phrases and word combinations, leading to
low n-gram entropy. Meanwhile, creative writing or
spontaneous speech often contains a much broader
and less predictable set of n-grams, resulting in higher
entropy.
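The earlier sketch extends naturally to n-grams (again an illustration of the idea, assuming character-level units): slide a window of length n over the text and compute the entropy of the resulting n-gram distribution.

```python
import math
from collections import Counter

def ngram_entropy(text, n=2):
    """Entropy, in bits, of the distribution of character n-grams in a text."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(ngram_entropy("the cat", n=2))     # six distinct bigrams -> ~2.585 bits
print(ngram_entropy("ababababab", n=2))  # only "ab" and "ba" occur -> ~0.99 bits
```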
The level of n (i.e., how many items are grouped
together) matters significantly. Unigram entropy gives
a basic sense of variation among individual items, like
letters or words. Bigram and trigram entropy go a step
further by capturing how elements combine, offering
Figure 1: Sequences of n language units.
(image courtesy: https://www.pykonik.org/media/slides/tech-talks-49-statistical-language-modeling-with-n-grams-in-python.pdf)
a better view of language patterns such as grammar,
syntax, or common phraseology (Fig 1). Higher-order
n-grams (like 4-grams or 5-grams) can capture more
complex patterns, but they also require more data to be
reliable.
In practical applications, n-gram entropy is used
in tasks like language modeling, authorship analysis,
and stylistic comparison. It can reveal how formulaic or
expressive a text is, and it plays a crucial role in machine
learning models that predict text, such as predictive
keyboards or speech recognition systems. By analyzing
how predictable a sequence is, these systems can better
guess what comes next in a sentence.
Ultimately, n-gram entropy allows us to move
beyond looking at individual words and begin analyzing
the structure and rhythm of language. It shows how
pieces of language are put together and gives us a way to
quantify how tightly or loosely a text follows common
patterns.
1.3 Language Models and
Cross-Entropy
In Natural Language Processing, a language model
is a tool that learns the structure of a language by
analyzing large amounts of text. Its main job is to predict
the next word (or character) in a sentence, given the
words that came before. For example, given the phrase
“The sky is”, a good language model might predict that
the next word is “blue” with a high probability. To
do this well, the model must learn which sequences of
words are common and which are not.
As language models make predictions, they assign
probabilities to all the possible next words. Some
words are more likely than others based on context. For
instance, after the word “peanut,” the word “butter”
is more probable than “car” in most contexts. The
model’s accuracy depends on how closely its predicted
probabilities match the actual words that appear in the
text.
This is where cross-entropy comes in. Cross-
entropy is a way to measure how well a language model
predicts the next word. It calculates the difference
between the model’s predicted probability distribution
and the actual outcome. If the model is confident and
correct, the cross-entropy will be low. But if it assigns
low probability to the actual word that appears, the cross-
entropy will be high. In other words, low cross-entropy
means better predictions and more understanding of
language structure.
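Formally (the standard definition, added here for concreteness), the cross-entropy of a model q over a test text of N words w1, ..., wN is the average number of bits the model needs per word:

```latex
H(q) = -\frac{1}{N}\sum_{i=1}^{N} \log_2 q\!\left(w_i \mid w_1,\ldots,w_{i-1}\right)
```

A confident, correct model assigns each actual word a high probability, so H(q) is small; 2 raised to H(q) is the model’s perplexity.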
Cross-entropy is not just a performance metric—it
also reflects the complexity of the text being modeled.
Text that follows common patterns or contains repeated
phrases will be easier for the model to predict, resulting
in lower cross-entropy. On the other hand, texts with
rare words, creative language, or unusual grammar
will confuse the model more and produce higher cross-
entropy values. This makes cross-entropy a useful tool
for evaluating both the model and the text itself.
In practical terms, cross-entropy is used during
the training and evaluation of language models like
n-gram models, recurrent neural networks (RNNs), and
transformers. It helps determine how well the model
has learned and how it might perform on unseen data.
Researchers and developers use cross-entropy loss to
guide the learning process, reducing it over time to
improve accuracy.
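A toy numerical sketch makes the metric tangible (the probabilities below are assumed for illustration only, not the output of any real model): cross-entropy is simply the average negative log probability the model assigned to the words that actually occurred.

```python
import math

# Hypothetical probabilities a model assigned to the actual next words in
# "the sky is blue" -- assumed values, chosen only to illustrate the formula.
predicted_probs = [0.20, 0.05, 0.30, 0.60]

cross_entropy = -sum(math.log2(p) for p in predicted_probs) / len(predicted_probs)
print(f"cross-entropy: {cross_entropy:.2f} bits/word")  # ~2.28; lower is better
print(f"perplexity:    {2 ** cross_entropy:.1f}")       # ~4.9 equally likely choices
```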
1.4 Conclusion
Entropy, though rooted in physics and mathematics,
proves to be an incredibly insightful tool for
understanding the complexity of human language.
Through lexical entropy, we can measure how
predictable individual characters or words are in a text,
revealing the richness or simplicity of its vocabulary.
With n-gram entropy, we dive deeper into the structure
of language, uncovering patterns in sequences of words
or letters that reflect grammar, style, and coherence.
Finally, cross-entropy allows us to assess how well
predictive models capture the underlying structure of
language, giving us a way to evaluate both text and
model performance.
Each of these forms of entropy sheds light on
different aspects of linguistic order and variation.
Together, they demonstrate that language—though
it may often appear chaotic—is shaped by subtle,
measurable regularities. By using entropy-based
approaches, we gain not only a better understanding
of how language works but also powerful tools for
analyzing, modeling, and processing it more effectively.
References
Entropy of natural languages: Theory and
experiment
The Word Entropy of Natural Languages
Entropy increasing for NLP: Understanding its
impact
Entropy Rate Estimates for Natural Language—A
New Extrapolation of Compressed Large-Scale
Corpora
EDT: Improving Large Language Models
Generation by Entropy-based Dynamic
Temperature Sampling
Excess entropy in natural language: Present state
and perspectives
Entropy analysis of natural language written texts
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical
Informatics. Her interests include applying basic
scientific research to computational linguistics,
practical applications of human language technology,
and interdisciplinary work in computational physics.
Part II
Astronomy and Astrophysics
Nonlinear Waves in Plasma Physics
by Abishek P S
airis4D, Vol.3, No.7, 2025
www.airis4d.com
1.1 Introduction
Plasma, often hailed as the fourth state of matter,
is a highly energetic and conductive medium composed
of freely moving ions and electrons. It constitutes over
99% of the visible universe, from the blazing interiors
of stars and solar flares to the vast interstellar and
intergalactic media. Unlike solids, liquids, or gases,
plasma is intrinsically collective in its behavior—its
charged particles respond not only to local forces but
also to long-range electromagnetic fields, giving rise
to a fascinating array of instabilities, turbulence, and
wave phenomena[1,2].
Among these, nonlinear plasma waves occupy a
particularly important niche. Unlike linear waves, which
obey simple superposition principles, nonlinear waves
exhibit complex interactions, amplitude-dependent
propagation speeds, and can form persistent structures
such as solitons, shocks, and double layers. These
structures are not mere curiosities; they encapsulate
deep physical principles and serve as signatures of
the plasma’s intricate internal dynamics.
The study of these nonlinear phenomena is
pivotal—not just to decode the universe’s grand
mechanisms, but also to push the boundaries of human
achievement in applied technologies. In fusion energy
research, for example, understanding how nonlinear
waves transport energy and particles helps us fine-
tune confinement strategies in magnetic and inertial
fusion devices. In space and astrophysical plasmas,
these waves govern critical processes like particle
acceleration in the magnetosphere, shock formation
in supernova remnants, and even the modulation of
cosmic rays. They also play a subtle yet vital role in
next-generation communication systems, where space
plasma interactions can affect signal propagation in
satellite and deep-space missions.
Thus, delving into the world of nonlinear plasma
waves is not merely a theoretical endeavour—it is a
key to unlocking transformative capabilities in science,
technology, and our understanding of the cosmos.
1.2 Nonlinear Waves
Linear wave theory, while foundational to classical
plasma physics, provides only a limited lens through
which to view the vibrant, often chaotic dynamics
of real plasma systems. At its core, linear analysis
assumes small amplitude perturbations and neglects
feedback mechanisms and intermodal interactions. As
a result, it falters when applied to scenarios marked
by strong fluctuations, steep gradients, or long-range
electromagnetic coupling—conditions that are not
only common but fundamental in both laboratory and
astrophysical plasmas.
Under such circumstances, nonlinear effects
emerge as the primary architects of plasma behaviour.
These effects give rise to a remarkable array of
phenomena that cannot be understood as mere
sums of simpler waveforms. Instead, the plasma
reorganises itself into structures characterised by
stability, resilience, and complex interactions[3].
Solitary waves, for example, maintain their integrity
through a precise balance between nonlinearity and
dispersion[4]. Shock waves propagate with abrupt
discontinuities, often associated with irreversible
processes like heating or particle acceleration[5].
Double layers act as microscopic particle accelerators,
embedding significant potential drops across narrow
spatial regions[6].
Importantly, these nonlinear structures are not
just academic curiosities—they are powerful agents of
transport, energy conversion, and self-regulation. In
the realm of controlled fusion, they can either hinder
confinement through enhanced transport or facilitate
heating mechanisms such as nonlinear wave-particle
interactions. In space plasmas, from the solar wind to
planetary magnetospheres, nonlinear wave phenomena
govern everything from auroral substorms to cosmic ray
modulation. They are also instrumental in mediating
magnetic reconnection, a process central to explosive
energy release events like solar flares and geomagnetic
storms.
Capturing these rich behaviours demands a
departure from purely analytic approaches toward
advanced numerical simulations, where the full
nonlinearity of Maxwell’s equations and the Vlasov or
fluid equations can be incorporated. These models are
essential for predicting the onset of instabilities, the
evolution of coherent structures, and the thresholds at
which linear theory ceases to apply.
In this broader light, the study of nonlinear waves
is not just a deeper layer of theory—it is the essential
framework for making sense of the dynamic tapestry
that defines plasma as a state of matter.
1.3 Types of Nonlinear Waves
1.3.1 Shock Waves
Shock waves in plasma represent a class of
nonlinear structures that arise when disturbances
propagate faster than the characteristic wave speeds
(see Fig.1) —such as the ion acoustic or magnetosonic
speed—of the ambient medium. These waves
produce abrupt, irreversible changes in plasma
parameters including density, temperature, velocity,
and electromagnetic fields, acting as key sites for
energy dissipation, particle acceleration, and entropy
generation. Their formation mechanisms differ
Figure 1: Formation of Shock Wave.
Image courtesy: https://www.kindpng.com/imgv/Twwhoib_show-more-plots-debye-shielding-in-plasma-physics/
significantly between collisional and collisionless
regimes. In collisional plasmas, inter-particle
collisions provide the requisite dissipation, whereas
in collisionless systems—prevalent in astrophysical and
space environments—energy is redistributed through
wave-particle interactions, kinetic instabilities, and
electromagnetic fields. These collisionless shocks
exhibit layered internal structures, including a foot,
ramp, and overshoot, which encapsulate ion reflection,
electron heating, and self-generated turbulence. Earth’s
bow shock, resulting from the interaction between
the supersonic solar wind and the geomagnetic
field[7], serves as a benchmark example and has been
extensively studied through high-resolution satellite
missions like MMS, Cluster, and THEMIS. These
missions have revealed the spatial and temporal
intricacies of shock fronts, including shock reformation,
electron-scale dissipation, and anisotropic pressure
effects. Beyond the geospace context, shock waves
are fundamental to numerous astrophysical phenomena
such as supernova remnant expansion, cosmic ray
acceleration through diffusive mechanisms, and energy
transport in relativistic jets from active galactic nuclei.
In all cases, the inherently nonlinear and dissipative
nature of shocks underscores their central role in
mediating energy flow and structuring plasma behavior
across vastly different scales and environments.
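For reference, the ion-acoustic speed mentioned above is, in its simplest form (cold ions, electron temperature T_e, ion mass m_i), the standard expression

```latex
c_s = \sqrt{\frac{k_B T_e}{m_i}},
```

so a disturbance steepens into a shock when it moves through the plasma faster than this, i.e. at Mach number M = u/c_s > 1.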
1.3.2 Solitons
Solitons are a class of nonlinear wave structures
that exhibit remarkable stability and coherence,
maintaining their shape and speed over extended
distances and through mutual interactions. Unlike
ordinary wave packets that disperse due to the medium’s
Figure 2: Shape of Soliton Wave.
Image courtesy: https://www.kindpng.com/imgv/Twwhoib_show-more-plots-debye-shielding-in-plasma-physics/
natural dispersion, solitons emerge from a precise
balance between nonlinear steepening and dispersive
spreading. This balance is governed by integrable
systems—such as those described by the Korteweg–de
Vries (KdV) equation or its variants—making solitons
exact solutions within certain mathematical frameworks.
In plasma environments, ion-acoustic solitons are
among the most commonly observed, particularly in
low-temperature, collisionless, and weakly magnetized
plasmas where ion inertia and electron pressure effects
are dominant. These structures appear as localized
density enhancements or depletions and can be either
compressive or rarefactive, depending on the electron-
to-ion temperature ratio and the presence of additional
plasma species such as dust grains, negative ions, or
energetic electrons.
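To make the balance concrete, the Korteweg–de Vries equation cited above and its single-soliton solution read, in standard normalised form,

```latex
\frac{\partial u}{\partial t} + 6u\frac{\partial u}{\partial x}
  + \frac{\partial^3 u}{\partial x^3} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\left[\frac{\sqrt{c}}{2}\,(x - ct - x_0)\right],
```

where the nonlinear term steepens the pulse, the third-derivative dispersive term spreads it, and the two balance exactly: taller solitons (larger c) are narrower and travel faster.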
One of the most intriguing features of solitons
is their particle-like behaviour: they can propagate
long distances without attenuation and interact non-
destructively. When two solitons “collide,” they may
temporarily distort, but afterwards each reemerges with
its original shape and velocity—a property unique to
integrable systems. This has profound implications
not only for plasma theory but also for communication
systems, optical fibre technologies, and quantum field
analogues, where soliton dynamics offer robust modes
of signal transmission and energy localisation.
In laboratory and space plasmas, soliton detection
and modelling often rely on nonlinear fluid models,
Vlasov simulations, and Langmuir probe diagnostics,
providing insights into nonlinear energy transport
and wave-particle interactions. Understanding soliton
behaviour in multi-component or magnetised plasmas
is also critical in refining models of space weather
phenomena, dusty plasma dynamics, and even early-
universe cosmology.
1.3.3 Double Layers
Double layers (DLs) are highly localised, non-
neutral plasma structures characterised by a sharp
electric potential drop across a narrow spatial region,
typically spanning just a few Debye lengths. This
abrupt potential gradient arises from the juxtaposition
of two thin layers of net positive and negative space
charge, resulting in an intense electrostatic field that
can accelerate charged particles to high energies over
very short distances. From a dynamical standpoint,
DLs function as microscopic electrostatic barriers,
often forming spontaneously in regions with current-
driven instabilities, charge separation, or boundary
interactions. Their capacitive-like nature—in which
the adjacent space-charge layers mimic the plates
of a capacitor—makes them uniquely efficient at
concentrating and directing electrical energy into
kinetic energy. In space plasmas, double layers are
frequently observed in auroral acceleration zones,
where they are implicated in generating bursts of
suprathermal electrons responsible for discrete auroral
arcs. In laboratory environments, particularly in
glow discharges, plasma thrusters, and double plasma
devices, DLs manifest during transitions between
different plasma regimes, often serving as diagnostic
indicators of nonequilibrium conditions. Theoretical
models such as the Bernstein–Greene–Kruskal (BGK)
approach and Sagdeev pseudopotential analysis are
commonly employed to study their formation and
stability. Understanding double layers is therefore
critical not only to unravelling the microphysics of
space weather phenomena but also to optimising plasma
propulsion systems and advancing high-voltage plasma
applications.
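The Debye length that sets this thickness scale is the standard plasma shielding length,

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^2}},
```

typically micrometres to metres depending on density and temperature, so a potential drop confined to a few Debye lengths implies very intense local electric fields.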
Figure 3: Formation of Double Layer.
Image courtesy: https://www.kindpng.com/imgv/Twwhoib_show-more-plots-debye-shielding-in-plasma-physics/
1.4 Significance of Studying
Nonlinear Waves
1.4.1 Predicting and Controlling Instabilities
in Fusion Devices such as Tokamaks
In magnetic confinement fusion devices,
maintaining plasma stability is paramount to achieving
sustained thermonuclear reactions. Nonlinear
wave theory plays a pivotal role in deciphering
how microturbulence, drift-wave instabilities, and
magnetohydrodynamic (MHD) modes evolve and
interact[8]. Researchers analyze nonlinear couplings,
saturation mechanisms, and energy cascades to model
how coherent structures such as zonal flows and
blobs influence cross-field transport. A deeper
understanding of these phenomena is instrumental for
devising real-time control schemes, including resonant
magnetic perturbations (RMPs) and feedback-driven
current profile shaping, ultimately enhancing plasma
performance and preventing disruptions.
Figure 4: Tokamak.
Image courtesy: https://www.kindpng.com/imgv/Twwhoib_show-more-plots-debye-shielding-in-plasma-physics/
1.4.2 Interpreting Satellite Observations of a
Planet’s Magnetosphere and Solar Wind
Space-borne missions continuously detect
signatures of nonlinear plasma processes, particularly
during events such as magnetic reconnection, bow
shock formation, or auroral acceleration. Researchers
leverage these observations to identify nonlinear solitary
structures, kinetic-scale Alfvén waves, and double
layers that mediate particle heating and anisotropic
pressure distributions. By coupling spacecraft
data with numerical models and theory, scientists
can infer the role of nonlinear wave interactions
in driving magnetospheric substorms, ring current
enhancement, and radiation belt dynamics, enriching
our understanding of space weather and its terrestrial
implications.
1.4.3 Advancing Plasma-Based Technologies
like Particle Accelerators and Pulsed
Power Systems
In laser-plasma and beam-plasma systems,
nonlinear wave dynamics underpin critical mechanisms
of field generation, energy transfer, and structure
formation. Researchers explore wakefield excitation,
parametric instabilities, and wave-breaking phenomena
to optimise the generation of ultra-relativistic particle
beams. In pulsed power devices, nonlinear wave
steepening, magnetic insulation, and sheath formation
are analysed to improve energy delivery and focus[9].
Such studies inform the design of compact, high-
brilliance accelerators for medical, industrial, and high-
energy physics applications, as well as pulsed fusion
drivers.
1.4.4 Modelling Astrophysical Phenomena
such as Shock Fronts in Supernova
Remnants or Jets from Active Galactic
Nuclei (AGN)
Astrophysical plasmas evolve in regimes where
nonlinear MHD effects dominate, often involving
relativistic speeds, strong field gradients, and multi-
scale coupling. Researchers study the nonlinear
evolution of current-driven and shear instabilities in
AGN jets to explain filamentation, variability, and
energy dissipation. Likewise, in supernova remnants,
nonlinear shock acceleration mechanisms—such as
diffusive shock acceleration (DSA) with feedback
from cosmic ray pressure—are modeled to understand
synchrotron emission and elemental composition.
These efforts bridge the gap between microphysical
wave-particle interactions and large-scale observables
in high-energy astrophysics[10].
1.5 Conclusion
In summary, the study of nonlinear plasma
waves unveils a deep and multifaceted understanding
of plasma behaviour beyond linear approximations.
Phenomena such as shock waves, solitons, and
double layers illustrate how nonlinearity gives rise
to coherent structures capable of mediating energy
transport, particle acceleration, and self-organisation
across vastly different plasma environments. These
structures are observed in natural systems—from
Earth’s magnetosphere to astrophysical jets—as well as
in laboratory settings critical to fusion energy and
space propulsion technologies. As our diagnostic
capabilities advance and computational methods grow
more sophisticated, we stand at the threshold of not
only decoding but also manipulating these nonlinear
processes for practical benefit.
Looking ahead, future research will likely focus
on integrating multi-scale modelling techniques—from
fluid to kinetic regimes—to resolve the full hierarchy
of nonlinear interactions. The development of
high-resolution, real-time diagnostics may enable
direct observation of wave-particle coupling and
dynamic transitions in shock and soliton behaviour.
Additionally, tailoring plasma parameters to engineer
nonlinear structures may pave the way for novel
applications in propulsion, radiation sources, and
materials processing. Understanding how nonlinear
waves influence turbulence and anomalous transport
in magnetised plasmas remains a critical challenge
for advancing magnetic and inertial fusion schemes.
Moreover, with increasing access to space-borne
instruments and laboratory analogues, comparative
studies of nonlinear structures across planetary, solar,
and astrophysical contexts will deepen our grasp of
universality and variance in plasma dynamics. In
essence, nonlinear waves are not just signatures of
complexity—they are active agents shaping the physical
universe, and our ability to harness them may hold the
key to future breakthroughs in energy, exploration, and
understanding.
References
[1] Chen, F. F. (2015). Introduction to Plasma Physics and Controlled Fusion (3rd ed.). Springer Cham. https://doi.org/10.1007/978-3-319-22309-4
[2] Tonks, L. (1967). The Birth of “Plasma”. Am. J. Phys., 35(9), 857–858. https://doi.org/10.1119/1.1974266
[3] Saha, A., & Banerjee, S. (2021). Dynamical Systems and Nonlinear Waves in Plasmas (1st ed.). CRC Press. https://doi.org/10.1201/9781003042549
[4] Petviashvili, V. I., & Pokhotelov, O. A. (1992). Solitary Waves in Plasmas and in the Atmosphere (1st ed.). Routledge. https://doi.org/10.4324/9781315075556
[5] Tidman, D. A. (1967). “Turbulent Shock Waves in Plasmas”. Phys. Fluids, 10(3), 547–564. https://doi.org/10.1063/1.1762148
[6] Leung, P., Wong, A. Y., & Quon, B. H. (1980). “Formation of double layers”. Phys. Fluids, 23(5), 992–1004. https://doi.org/10.1063/1.863073
[7] Montgomery, M. D., Asbridge, J. R., & Bame, S. J. (1970). “Vela 4 plasma observations near the Earth’s bow shock”. J. Geophys. Res., 75(7), 1217–1231. https://doi.org/10.1029/JA075i007p01217
[8] Stacey, W. M. (2007). A Survey of Thermal Instabilities in Tokamak Plasmas: Theory, Comparison with Experiment, and Predictions for Future Devices. Fusion Science and Technology, 52(1), 29–67. https://doi.org/10.13182/FST07-A1485
[9] Joshi, C. (2006). “Plasma Accelerators”. Scientific American, 294(2), 40–47. http://www.jstor.org/stable/26061335
[10] Vink, J. (2004). “Shocks and particle acceleration in supernova remnants: observational features”. Advances in Space Research, 33(4), 356–365. https://doi.org/10.1016/j.asr.2003.05.012
About the Author
Abishek P S is a Research Scholar in
the Department of Physics, Bharata Mata College
(Autonomous), Thrikkakara, Kochi. He pursues
research in theoretical plasma physics. His work
mainly focuses on nonlinear wave phenomena in
space and astrophysical plasmas.
Black Hole Stories-19
The Detection of Gravitational Waves
by Ajit Kembhavi
airis4D, Vol.3, No.7, 2025
www.airis4d.com
In this story we will see how the passage of
a gravitational wave through a region of space can
affect particles which are present there. We will
then consider how the passage of gravitational waves
can be experimentally detected, using an arrangement
very similar to a Michelson interferometer. That will
provide the background for a description of the LIGO
gravitational wave detector in our next story.
2.1 Effects Produced by Gravitational Waves
The effects produced by an electromagnetic wave
are easy to understand and discern. An electromagnetic
wave passing in some direction exerts a force on
any charged particles in its path, like electrons, and
accelerates them. The accelerated particles in turn
emit radiation, which results in some of the energy in
the electromagnetic wave being scattered in various
directions, which can be detected. If the wave
encounters an antenna, it sets up a current in it, which
again can be easily detected. Modern life is closely
tied to the generation and detection of electromagnetic
waves.
The effects of gravitational waves are more subtle
and weak, and therefore far more difficult to detect.
We have seen in earlier stories that gravitational waves
are ripples in four dimensional space-time. When a
gravitational wave passes some region of space, the
geometry there gets affected. As a result, there is change
Figure 1: The effect of a gravitational wave on a set of
free point masses arranged in a circle. The gravitational
wave travels into the plane of the paper, in a direction
perpendicular to the plane. The upper and lower parts
of the figure correspond to two polarisation states, as
explained in the text.
in all measured distances around the location. To see
the effect of such a change, consider tiny particles which
are arranged in a circle in space, as shown in Figure 1.
Now suppose a gravitational wave passes in a
direction perpendicular to the circle into the plane of
the paper. The change in the geometry produced by the
wave is such that when there is contraction along the
x-direction as shown in the figure, there is expansion
along the y-direction which is perpendicular to it. As
time passes, and the gravitational wave propagates
forward, the directions of contraction and expansion
are exchanged. For particles which are not along the x-
or y-directions, whether expansion or contraction takes
place depends on the direction in which the particle is located.
The net effect is that the circle of particles changes
shape periodically: it alternately becomes an ellipse
first stretched along the x-direction and then stretched
along the y-direction, as shown in the upper part of the
figure.
The periodic change of shape can occur in a
somewhat different way, with the maximum stretching
taking place in a direction which is inclined to the x- and
y- directions, as shown in the lower part of the figure.
This occurs because of the difference in a property of
the gravitational waves known as polarisation. If a
gravitational wave approaches the circle in a direction
which is not perpendicular to its plane, or if there is more
than one gravitational wave passing through at the same time,
then the effects will be similar to those described, with
the details depending on the situation.
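For readers who want the two patterns in formulas: in the standard (transverse-traceless) description, a wave of strain amplitudes h_+ and h_x travelling along z displaces a free particle at position (x, y) by

```latex
\delta x = \tfrac{1}{2}\left(h_{+}x + h_{\times}y\right)\cos\omega t,
\qquad
\delta y = \tfrac{1}{2}\left(h_{\times}x - h_{+}y\right)\cos\omega t .
```

The pure h_+ case stretches along x while compressing along y (upper part of Figure 1); the pure h_x case does the same along axes rotated by 45 degrees (lower part).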
2.2 Bar Detectors
The particles we considered above are free
particles, in the sense that they are not under the
influence of any force and are affected only by the
gravitational wave passing through. The distance
between the particles changes as the space expands
and contracts due to the passage of the gravitational
wave. But what about a bar of steel, say, which is in the
path of the gravitational wave? Would the rod expand
and contract too, if the effect of the gravitational wave
is universal?
The situation with the steel bar is quite different
from that of the particles considered above. The bar is
made of particles, in the form of atoms and molecules,
which are in incessant motion, and in interaction with
each other through electrical forces. It is these forces
which hold the particles together so that the bar acquires
a form which is rigid on the large scale. When the bar
changes its length due to the expansion and contraction
of space as a gravitational wave passes, there
is a change in the forces between the particles, and
restoring forces are set up. These cause the bar to return
to its original size. The result of the two opposing
effects is that the bar begins to vibrate; these vibrations
continue even after the wave has passed by, until they
are gradually damped out. This is like the ringing of
a bell when it is struck, which persists for some time
after the blow, and gradually reduces in intensity. The
vibrations of the bar are of very low amplitude, i.e.,
they are very feeble, so it is extremely difficult to detect
them. For a bar of cylindrical shape of a given size and
material, there is a particular frequency, known as the
resonance frequency, at which the vibrations have the
greatest amplitude. This frequency depends on the length
of the bar, its density, and how elastic the material of the
bar is.
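A rough estimate (ours, not a figure from the original text) shows the scale involved: the fundamental longitudinal resonance of a bar of length L is

```latex
f_0 \approx \frac{v}{2L}, \qquad v = \sqrt{\frac{E}{\rho}},
```

where v is the speed of sound in the material. For aluminium (E of about 70 GPa, density about 2700 kg/m³), v is roughly 5100 m/s, so bars one to two metres long resonate at roughly 1.3–1.7 kHz, the range in which the 1660 Hz detector described next operated.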
In the 1960s, Professor Joseph Weber of the
University of Maryland decided to measure the bar
vibrations triggered by gravitational waves to detect
their passage. For this purpose he constructed a number
of accurately machined bars of aluminium, one of which
was 2 meters long and 1 meter in diameter, with other
bars having similar dimensions. The idea was that when
a gravitational wave of a specific frequency passed by the
bar, the bar would begin to vibrate, or ring, as described
above. While vibrations with various
frequencies would be present, the most dominant would
be the resonant or ringing frequency. For the cylinder
used by Weber, this resonant frequency was close to
1660 Hertz, i.e., 1660 vibrations per second. This
frequency was chosen as Weber expected it would be
the dominant frequency emitted by various gravitational
wave sources. Through a series of experiments with bar
detectors that he constructed, Weber believed that he
had detected gravitational waves.
Weber’s results could not be reproduced by other
similar experiments, and his analysis of the data
was believed to be incorrect. The community of
scientists therefore did not accept that Weber had
detected gravitational waves. But Weber’s work and his
enthusiasm inspired many other groups to develop other
versions of bar detectors, with improved ability to detect
the waves. These attempts were not successful either
in spite of the great improvement in the technology.
One had to wait until 2015 when the Advanced
LIGO experiment was finally successful in detecting
gravitational waves from a coalescing binary black hole
system. We now describe a schematic gravitational
wave detector to clarify basic concepts.
2.3 Interferometric Detectors for
Gravitational Waves
We have seen above that when a gravitational
wave passes through a circle of free particles, the circle
periodically changes shape to an ellipse which is first
stretched in one direction and then in the perpendicular
direction. We will use the arrangement of a Michelson
interferometer, which is a familiar object in physics
laboratories, to see how the change in shape could be
used to detect the passage of a gravitational wave.
Imagine that mirrors M1 and M2 are attached to
two particles on a circle as shown on the left in Fig. 2,
and that a third mirror M is placed at the centre of the
circle. The distance d1 between M1 and M, and the
distance d2 between M2 and M, are equal. The two paths
of the beam, between M and M1 and between M and M2,
can be viewed as the two arms of a Michelson interferometer.
A beam of laser light emitted from the source at
the left of the diagram travels to the mirror M, which
is known as a beam splitter. This mirror allows half
the light of the beam to pass through it, towards mirror
M1, while the other half of the light is reflected towards
mirror M2. The light striking the mirrors M1 and M2
is reflected back towards the beam splitter, and the
combined beam then reaches a light detector at the
bottom of the figure. The distance travelled by the two
beams is exactly the same and so is the time taken for
the beams to arrive at the detector. Since the light waves
in the beam arise from the same source and travel the
same distance until they reach the detector, they are in
phase and combine constructively, producing a bright
spot at the detector. If, on the other hand, the distance
travelled differs by half the wavelength of the light wave,
then the two waves combine destructively, producing a
dark spot. This can be achieved by moving one of the
mirrors slightly, so that the distances d1 and d2 differ
just enough to produce the half wavelength difference
in the distance travelled by the beams. Depending on
the exact configuration of the mirrors the combined
beam can produce concentric light and dark circles, or
a pattern of light and dark straight lines at the detector.
The process of combining two wave trains of light to act
Figure 2: An interferometer configuration with the two
mirrors M1 and M2 being equidistant from the central
mirror M is shown at the top left of the figure. The
ensuing constructive interference is shown at bottom
left. When a gravitational wave passes through the
configuration, mirror M1 moves away from M while
M2 moves towards M, as shown at top right. If the
difference in path length is exactly equal to half a
wavelength, then there is destructive interference, as
shown at the bottom right.
constructively or destructively is known as interference
and the pattern produced is known as an interference
pattern. Constructive and destructive interference is
shown in the lower half of Figure 2.
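A small numerical sketch of the relationship just described (illustrative only; the 1064 nm wavelength is an assumption, not a detail from the article): the intensity at the detector depends on the arm-length difference through the round-trip path difference.

```python
import math

WAVELENGTH = 1064e-9  # metres; an infrared laser line assumed here for illustration

def detector_intensity(delta_d, wavelength=WAVELENGTH):
    """Relative intensity at the detector for an arm-length difference delta_d.
    Each beam travels its arm twice, so the path difference is 2*delta_d,
    the phase difference is 2*pi*(2*delta_d)/wavelength, and I = cos^2(phase/2)."""
    phase = 2 * math.pi * (2 * delta_d) / wavelength
    return math.cos(phase / 2) ** 2

print(detector_intensity(0.0))             # 1.0 -> bright spot (constructive)
print(detector_intensity(WAVELENGTH / 4))  # ~0  -> dark spot: round trip differs by half a wavelength
```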
Now suppose a gravitational wave passes through
the circle of particles, so that it is stretched to form
an ellipse as shown on the right of the figure. The
mirror M1 is now farther from M than it was in the
absence of the wave, while M2 has moved closer to M.
The distances travelled by the two beams are therefore
different and there is a change in the interference pattern.
As the mirrors move back and forth the interference
pattern changes periodically, which can be detected if
the change is large enough. The changing interference
pattern therefore becomes a signature of the passage
of a gravitational wave. The idea is schematically
represented in Figure 2.
In the next story we will see how the simple
Michelson interferometer described above is scaled up
into the giant LIGO gravitational wave detector.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investigator
of the Pune Knowledge Cluster. He was the former
director of Inter University Centre for Astronomy and
Astrophysics (IUCAA), Pune, and the International
Astronomical Union vice president. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
X-ray Astronomy: Through Missions
by Aromal P
airis4D, Vol.3, No.7, 2025
www.airis4d.com
3.1 Satellites in the 2010s
So far we have covered the X-ray astronomical
satellites launched up to 2010. Although the first half
of the following decade had only one X-ray-specific
mission, the second half saw an increase in the number
of missions, with greater scientific capabilities. In
this article we will discuss three of the satellites
launched in that decade; the rest will be discussed in
the upcoming article.
3.2 NuSTAR
The Nuclear Spectroscopic Telescope Array
(NuSTAR), launched in June 2012, is the first focusing
high-energy X-ray telescope in orbit. NuSTAR is
classified as NASA’s 11th Small Explorer (SMEX)
mission and is designated Explorer 93. It was launched
into an orbit of 600–650 km altitude with an inclination
of 6° using a Pegasus rocket, which was air-launched
from a “TriStar” aircraft flying at 40,000 feet.
The stowed configuration of NuSTAR measured
1.2 × 2.2 m. Once in orbit, a 10 m mast, which separates
the optics module from the spacecraft, was deployed to
focus the high-energy X-ray photons. The optics module
consists of two co-aligned hard X-ray grazing-incidence
telescopes pointed at celestial targets. The three-axis-
stabilized spacecraft contains the focal plane module,
the necessary electronics, and the solar panel. The
telescopes contain 133 multilayer-coated grazing-incidence
shells approximating the Wolter-I geometry. Each telescope
has its own focal plane, which consists of a solid-state
Figure 1: Diagram of the observatory in the stowed (bottom) and deployed (top) configurations, showing the optics modules, deployed mast, mast canister, metrology lasers and detectors, instrument star tracker, and the focal plane bench with its detector modules. Credits: [1]
CdZnTe pixel detector. Detector is not integrated
in CCD style readout hence providing pile-up free
observations for bright sources. It has a timing
resolution of
2 µs
and provides a field of view of
10
at
10 keV
and
6
at
68 keV
. Detectors also provide
a spectral resolution of
400 eV
at
10 keV
and
900 eV
at 68 keV [1].
NuSTAR was designed for a two-year science mission but is still functional. With its high-energy X-ray capabilities, the mission has identified numerous heavily buried supermassive black holes that were previously undetectable. It has also discovered new accreting millisecond X-ray pulsars and characterized previously unknown neutron star systems [2]. NuSTAR achieved a major breakthrough by producing the first map of radioactive material in a supernova remnant, specifically titanium-44 in Cassiopeia A [3]. NuSTAR's ability to detect high-energy X-rays from supernovae shortly after their explosions offers unique insights into the shock propagation and energy deposition mechanisms [1].

3.3 AstroSat

Figure 2: Diagram of the AstroSat satellite and scientific instruments. Credits: AstroSat Archive
AstroSat is India's first multi-wavelength space observatory. It was successfully launched on September 28, 2015, by the Indian Space Research Organisation (ISRO) using a Polar Satellite Launch Vehicle (PSLV) from the Sriharikota Range on the eastern coast of India. The satellite was placed in a circular low-Earth orbit at an altitude of 650 km with an inclination of 6°.
AstroSat carries five primary science instruments, which observe the sky at different wavelengths including UV and X-rays. All the instruments are co-aligned to give simultaneous multi-wavelength observations of a source.
Large Area X-ray Proportional Counter (LAXPC) is the primary instrument on AstroSat. It consists of three identical xenon-gas-filled proportional counters (LAXPC10, LAXPC20, LAXPC30) with an effective area of 6000 cm². LAXPC works in the 3–80 keV energy range with an excellent timing resolution of 10 µs. It does not have imaging capabilities and has only moderate spectral capability [4].
Soft X-ray Telescope (SXT) is a Wolter-I grazing-incidence telescope that works in the 0.3–8.0 keV energy range. It is capable of providing imaging, spatially resolved spectroscopy, and variability observations of cosmic sources. SXT has a focal length of 2 m and a geometric area of 250 cm². The detector used is the CCD-22 MOS device. It has a timing resolution of 0.28 s in Fast Window mode and a spectral resolution of approximately 150 eV at 6 keV [5].
The Cadmium Zinc Telluride Imager (CZTI) is a high-energy, wide-field imaging instrument that operates in the 20–200 keV energy range. The instrument is composed of four identical, independent quadrants. Each quadrant features a Coded Aperture Mask (CAM) positioned at the top, with sixteen CZT detector modules located 48 cm below the CAM. The coded aperture mask provides an angular resolution of 17 arcminutes over a field of view of 4.6° × 4.6°. Additionally, the instrument has a time resolution of 20 µs [6].
The Ultra Violet Imaging Telescope (UVIT) is
designed to capture images simultaneously in
near-ultraviolet and far-ultraviolet wavelengths.
The instrument consists of two co-aligned
telescopes, each featuring f/12 Ritchey–Chrétien
optics with an aperture of 375 mm, along with
filters and detectors. One telescope observes in
the far-ultraviolet (FUV) range, spanning from
1,300 to 1,800 angstroms, while the other covers
the near-ultraviolet (NUV) range of 2,000 to
3,000 angstroms, as well as the visible (VIS)
range of 3,200 to 5,500 angstroms. For each of
the three channels, a filter wheel is employed to
select either a filter or a grating in the FUV and
NUV ranges [7].
Scanning Sky Monitor (SSM) is an X-ray sky monitor in the soft X-ray band, designed with a large field of view to detect and locate transient X-ray sources. SSM comprises position-sensitive proportional counters with a 1D coded mask for imaging. There are three detector units, each with a field of view of the order of 22° × 100°, mounted on a rotatable platform that helps cover about 50% of the sky in one full rotation [8].
Although initially planned for five years, AstroSat is still operational. AstroSat's CZTI has made groundbreaking contributions to gamma-ray burst science, detecting over 650 GRBs during its operational period; analysis of these revealed that some GRBs have Poynting-flux-dominated jets while others show baryon-dominated jets with mild magnetization [9]. CZTI has also searched for electromagnetic counterparts of gravitational-wave events, placing important constraints on theoretical models [10]. The LAXPC has revolutionized studies of X-ray binary systems and neutron star physics through its exceptional timing capabilities. Its contributions include the discovery of new dip phenomena in X-ray binaries, revealing insights into accretion flow geometry [11]; the detection of Type-I thermonuclear X-ray bursts from neutron stars, providing constraints on neutron star structure [12]; and the first detection of evolving low-frequency quasi-periodic oscillations in hard X-rays up to 100 keV in black hole systems [13]. UVIT has made significant contributions to our understanding of stellar evolution, including the first detection of extremely low-mass white dwarfs as companions to blue straggler stars in globular clusters [14].
3.4 HITOMI
Hitomi, also known as ASTRO-H, was a joint Japanese–international X-ray observatory launched to perform high-resolution, broad-band spectroscopy of astrophysical sources. Hitomi was launched in February 2016 aboard an H-IIA rocket from the Tanegashima Space Center. The satellite entered a circular low-Earth orbit at an altitude of approximately 575 km, with an orbital period of 96 minutes and an inclination of 31.01°. Hitomi was conceived to extend the capabilities of previous Japanese X-ray satellites by covering the energy range from soft X-rays to soft gamma rays (0.3–600 keV) with unprecedented resolution and sensitivity.
Hitomi carried four co-aligned telescopes and four primary instruments:
Soft X-ray Imager (SXI) is an imaging spectrometer that uses charge-coupled devices (CCDs). The SXI sensor has four CCDs, each with an imaging area of 31 mm × 31 mm, arranged in a 2 × 2 array. SXI detects X-rays between 0.4 and 12 keV and covers a 38′ × 38′ field of view. The measured energy resolution is 161 to 170 eV (full width at half maximum) for 5.9 keV X-rays [15].
The soft X-ray imaging system consists of the Soft X-ray Telescope (SXT-I) and the SXI. Combining the SXT-I and the SXI gives an imaging system with a focal length of 5.6 m and a field of view of 38′ × 38′. This was the largest field of view among focal-plane X-ray detectors covering energies up to 10 keV. The basic design of the SXT-I is the same as that of the Suzaku XRT, with Wolter-I-type optics; the SXI uses CCDs as the imager [16].
The Hard X-ray Imager (HXI) onboard Hitomi (ASTRO-H) is an imaging spectrometer covering hard X-ray energies of 5 to 80 keV. Combined with the Hard X-ray Telescope, it enables imaging spectroscopy with an angular resolution of 1.7′ (half-power diameter) over a field of view of 9′ × 9′. The main imager is composed of four layers of Si detectors and one layer of CdTe detectors [17].
The Soft Gamma-ray Detector (SGD) was the instrument that observed the highest energy band (60 to 600 keV). The SGD design was based on the success of the Hard X-ray Detector (HXD) onboard Suzaku. The design includes two identical units, each holding three Compton cameras with a geometrical area of 25 cm². The Compton camera is employed as a direction-sensitive gamma-ray detector [18].
After the successful launch of the satellite, control was lost as scientific operations began. In April 2016, JAXA announced that it had lost control of the satellite, and the mission was decommissioned.
References
[1] Harrison, F. A. et al. The Astrophysical Journal 770, 103 (May 2013).
[2] Sanna, A. et al. Astronomy & Astrophysics 617, L8 (Sept. 2018).
[3] Grefenstette, B. W. et al. Nature 506, 339–342 (Feb. 2014).
[4] Yadav, J. S. et al. Current Science 113, 591 (Aug. 2017).
[5] Singh, K. P. et al. Journal of Astrophysics and Astronomy 38, 29 (June 2017).
[6] Bhalerao, V. et al. Journal of Astrophysics and Astronomy.
[7] Tandon, S. N. et al. The Astronomical Journal 154, 128 (Sept. 2017).
[8] Ramadevi, M. C. et al. Experimental Astronomy 44, 11–23 (Oct. 2017).
[9] Gupta, R. et al. The Astrophysical Journal 972, 166 (Sept. 2024).
[10] Waratkar, G., Bhalerao, V. & Bhattacharya, D. The Astrophysical Journal 976, 123 (Nov. 2024).
[11] Leahy, D. A. & Chen, Y. Journal of Astrophysics and Astronomy 42, 44 (Oct. 2021).
[12] Bhulla, Y., Roy, J. & Jaaffrey, S. Research in Astronomy and Astrophysics 20, 098 (June 2020).
[13] Nandi, A. et al. Monthly Notices of the Royal Astronomical Society 531, 1149–1157 (May 2024).
[14] Dattatrey, A. K. et al. The Astrophysical Journal 943, 130 (Feb. 2023).
[15] Tanaka, T. et al. Journal of Astronomical Telescopes, Instruments, and Systems 4, 1 (Feb. 2018).
[16] Nakajima, H. et al. Publications of the Astronomical Society of Japan 70, 21 (Mar. 2018).
[17] Journal of Astronomical Telescopes, Instruments, and Systems 4, 1 (Mar. 2018).
[18] Tajima, H., Watanabe, S., Fukazawa, Y. & Blandford, R. Journal of Astronomical Telescopes, Instruments, and Systems 4, 1 (Apr. 2018).
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
The Bright Centers of Galaxies: A Peek into
Active Galactic Nuclei
by Savithri H. Ezhikode
airis4D, Vol.3, No.7, 2025
www.airis4d.com
4.1 Introduction
At the centres of many galaxies reside
supermassive black holes (SMBHs) with masses
ranging from millions to billions of solar masses. While
most of these black holes remain quiescent, a fraction
exhibit intense activity when accreting substantial
quantities of matter. Such systems are classified as
active galaxies and their bright core regions are known
as active galactic nuclei (AGN), which represent some
of the most luminous persistent sources in the universe.
Fig. 1 presents a comparative view of the structural and luminosity differences between a normal galaxy and an active galaxy.
Early indications of AGN activity date back to the
early 20th century. Fath (1909) first reported strong
optical emission lines in the spiral nebula NGC 1068,
later supported by Slipher (1917), who noted unusually
broad (width of several hundred km/s) emission lines.
Curtis (1918) subsequently detected an optical jet in
M87, and Hubble (1926) documented similar nuclear
spectral features in several galaxies, including NGC
1068, NGC 4051, and NGC 4151 [1, 2, 3, 4]. A systematic study of such galaxies was conducted by Seyfert in 1943 [5], who identified a class of galaxies with high surface brightness nuclei and strong high-excitation emission lines. These galaxies—now known as Seyfert galaxies—formed one of the first well-defined subclasses of AGN. The advent of radio astronomy in the mid-20th century further advanced AGN research, which led to the discovery of quasars (quasi-stellar radio sources). Pioneering studies by Reber [6], Bolton and Stanley [7], Smith [8], Brown [9], Baade and Minkowski [10], and Schmidt [11], among others, identified such compact radio sources with extreme luminosities.
4.2 Observational Signatures
AGN exhibit a diverse range of observational
characteristics that set them apart from normal galactic
nuclei. These include extremely high luminosities,
typically ranging from 10⁴² to 10⁴⁸ erg s⁻¹, originating
from compact central regions. AGN display broadband
continuum emission covering the entire electromagnetic
spectrum—from radio waves to
γ
rays—indicating a
variety of emission mechanisms and spatial regions.
They also show rapid and significant variability in
multiple wavebands on timescales ranging from hours
to years, pointing to the small size and dynamic
nature of the emitting zones. Prominent broad and
narrow emission lines are observed in their optical and
ultraviolet spectra, produced by high-velocity gas and
slower-moving material, respectively. Furthermore,
many AGN, especially the radio-loud class, exhibit
relativistic jets—collimated plasma outflows moving at
speeds close to that of light—predominantly detected
in radio frequencies [15, 16].
4.3 The AGN Central Engine
Figure 1: Normal galaxy and active galaxy. (Image Credit: ahead.astro.noa.gr/?p=2444)

Figure 2: A schematic representation of the components of AGN. (Image courtesy: [Beckmann & Shrader(2012)])

The compact size of the AGN emission region (∼1 parsec) implies that traditional stellar mechanisms
such as nuclear fusion or supernovae are insufficient
to explain AGN luminosities. Instead, the energy
is believed to arise from the release of gravitational
potential energy as matter accretes onto a SMBH [
12
].
The infalling matter forms a geometrically thin, optically
thick accretion disk around the SMBH. The AGN
luminosity powered by accretion is given by

L = η Ṁ c²,

where Ṁ is the mass accretion rate, η is the radiative efficiency (typically ∼0.1), and c is the speed of light.
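As a rough back-of-the-envelope check (our sketch, not from the article), the snippet below evaluates L = η Ṁ c² in CGS units for an assumed accretion rate of one solar mass per year with η = 0.1; the constant names are our own.

# Accretion luminosity L = eta * Mdot * c^2, evaluated in CGS units.
ETA = 0.1                    # radiative efficiency (typical value quoted above)
M_SUN_G = 1.989e33           # solar mass in grams
YEAR_S = 3.156e7             # seconds in a year
C_CM_S = 2.998e10            # speed of light in cm/s

mdot = M_SUN_G / YEAR_S      # one solar mass per year, in g/s
L = ETA * mdot * C_CM_S**2   # luminosity in erg/s
print(f"L ~ {L:.1e} erg/s")  # ~5.7e45 erg/s, within the quoted AGN range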
4.4 Components of AGN
In the framework of the standard model of AGN,
the central engine comprises a supermassive black hole
surrounded by an optically thick and geometrically
thin accretion disk, composed of high-density gas (∼10¹⁵ cm⁻³) with substantial column density. It sustains temperatures between 10⁵ and 10⁶ K and resides at radial distances of 10–100 R_g (equivalent to ∼10⁻³ parsecs)
from the central black hole. The magneto-rotational
instabilities within the accretion disk give rise to a
magnetically active corona above the innermost disk
regions (∼10⁻⁵ pc). This corona is composed of optically thin, high-temperature plasma (∼10⁹ K). However, the exact geometry of the corona remains uncertain, with models proposing slab, sphere-disc, or patchy configurations [17].
A salient feature of many AGN is the presence of relativistic jets. These jets, launched via magnetohydrodynamic mechanisms in the innermost zones of the accretion disk, are narrow and highly collimated plasma outflows. They can span distances up to several hundred kiloparsecs, or even megaparsecs, depending on the system [22, 23, 24]. Observationally, the strength and velocity of these jets vary significantly, from weak radio outflows to highly relativistic flows. Emission-line
regions constitute another integral component of the
AGN structure. These include the Broad Line Region
(BLR) and Narrow Line Region (NLR), composed of clouds photoionized by the intense radiation from the AGN central engine [18, 19, 20, 21]. The BLR, situated at ∼0.01–1 pc from the central black hole, consists of high-density gas (∼10¹⁰ cm⁻³) with column densities up to 10²³ cm⁻². The NLR, in contrast, lies farther from the central region (∼100–1000 pc) and is composed of low-density clouds (∼10⁴ cm⁻³) with smaller column densities (10²⁰–10²¹ cm⁻²). Enclosing the BLR is a dusty, molecular toroidal structure that significantly influences the observed properties of AGN depending on the orientation with respect to our line of sight. This torus is geometrically and optically thick, composed of gas and dust with densities ranging from 10³ to 10⁶ cm⁻³ and column densities exceeding 10²⁵ cm⁻² [25]. The spatial extent of the torus can range from a few parsecs up to several hundred parsecs.
A schematic of the AGN structure is shown in
Fig. 2. The various physical components of AGN emit
radiation across different regions of the electromagnetic
spectrum, from radio waves to γ-rays.
4.5 Conclusion
AGN are among the most powerful and complex
astrophysical systems in the universe, fueled by
accretion onto supermassive black holes residing at the
centers of galaxies. Through decades of observational
and theoretical work, a coherent picture has emerged in
which diverse components—including accretion disks,
relativistic jets, emission-line regions, and obscuring
dusty tori—contribute to the rich phenomenology of
AGN. Emissions across the electromagnetic spectrum,
coupled with temporal variability and morphological
structures, provide vital diagnostics for probing the
physical conditions and dynamic processes near these
cosmic powerhouses. A detailed examination of AGN
multi-wavelength emissions, along with an exploration
of the many outstanding questions in the field, will be
featured in forthcoming issues.
References
[1] Fath, E. A. (1909). Lick Observatory Bulletin, 5, 71.
[2] Slipher, V. M. (1917). Lowell Observatory Bulletin, 3, 59.
[3] Curtis, H. D. (1918). Publications of the Astronomical Society of the Pacific, 30, 19.
[4] Hubble, E. (1926). Astrophysical Journal, 64, 321.
[5] Seyfert, C. K. (1943). Astrophysical Journal, 97, 28.
[6] Reber, G. (1940). Astrophysical Journal, 91, 621.
[7] Bolton, J. G., & Stanley, G. J. (1948). Nature, 161, 312.
[8] Smith, F. G. (1952). Nature, 170, 108.
[9] Hanbury Brown, R., et al. (1952). Monthly Notices of the Royal Astronomical Society, 112, 445.
[10] Baade, W., & Minkowski, R. (1954). Astrophysical Journal, 119, 206.
[11] Schmidt, M. (1963). Nature, 197, 1040.
[12] Lynden-Bell, D. (1969). Nature, 223, 690.
[13] Bondi, H. (1952). Monthly Notices of the Royal Astronomical Society, 112, 195.
[14] Eddington, A. S. (1926). The Internal Constitution of the Stars. Cambridge University Press.
[15] Peterson, B. M. (1997). An Introduction to Active Galactic Nuclei. Cambridge University Press.
[16] Krolik, J. H. (1999). Active Galactic Nuclei: From the Central Black Hole to the Galactic Environment. Princeton University Press.
[17] Reynolds, C. S., & Nowak, M. A. (1999). Fluorescent iron lines as a probe of astrophysical black hole systems. Physics Reports, 377(6), 389–466. https://doi.org/10.1016/S0370-1573(99)00051-8
[18] Osterbrock, D. E., & Mathews, W. G. (1986). Emission-line regions of active galactic nuclei. Annual Review of Astronomy and Astrophysics, 24, 171–212. https://doi.org/10.1146/annurev.aa.24.090186.001131
[19] Netzer, H., & Wills, B. J. (1983). Broad emission lines in QSOs and active galactic nuclei. The Astrophysical Journal, 275, 445–456. https://doi.org/10.1086/161544
[20] Collin-Souffrin, S. (1988). The broad emission-line region in active galactic nuclei: interpretation and evolution. Publications of the Astronomical Society of the Pacific, 100(633), 1041–1058. https://doi.org/10.1086/132284
[21] Ferland, G. J., & Netzer, H. (1983). Radiation transfer in emission-line clouds. The Astrophysical Journal, 264, 105–115. https://doi.org/10.1086/160575
[Beckmann & Shrader(2012)] Beckmann, V., & Shrader, C. R. (2012). Active Galactic Nuclei. Wiley-VCH.
[22] Blandford, R. D., & Payne, D. G. (1982). Monthly Notices of the Royal Astronomical Society, 199, 883. https://doi.org/10.1093/mnras/199.4.883
[23] Begelman, M. C., Blandford, R. D., & Rees, M. J. (1984). Reviews of Modern Physics, 56(2), 255. https://doi.org/10.1103/RevModPhys.56.255
[24] Koide, S., Shibata, K., & Kudoh, T. (1999). The Astrophysical Journal, 522(2), 727. https://doi.org/10.1086/307667
[25] Netzer, H. (2015). Annual Review of Astronomy and Astrophysics, 53, 365. https://doi.org/10.1146/annurev-astro-082214-122302
About the Author
Dr. Savithri H. Ezhikode is an
Assistant Professor at St. Francis de Sales
College (Autonomous), Electronics City, Bengaluru.
Her research focuses on observational astrophysics,
particularly X-ray astronomy, multi-wavelength
investigations of active galactic nuclei (AGN), and
stellar astrophysics.
T Coronae Borealis: A Recurrent Nova on the
Verge of Eruption
by Sindhu G
airis4D, Vol.3, No.7, 2025
www.airis4d.com
5.1 Introduction: The Sleeping Star
in Corona Borealis
In the serene northern constellation of Corona
Borealis lies a star with an explosive past and
an imminent future. Known as T Coronae Borealis
or T CrB, this object has captured the attention of
astronomers for over a century. It belongs to a rare
class of stars called recurrent novae, which experience
multiple nova explosions over time. Unlike one-time
novae or distant supernovae, recurrent novae give us a
chance to watch the same star go through outbursts in
real time across generations.
As of June 2025, all eyes are on T CrB. This
system previously erupted in 1866 and 1946, each time
brightening suddenly and dramatically enough to be
seen with the naked eye. Based on the roughly 80-
year interval between those events, scientists believe
that the next eruption is due any day now, most likely
within 2025. Recent observational evidence supports
this, as the star has been undergoing unusual changes
since 2023, suggesting that it is entering the final stages
before its next eruption.
5.2 Past Eruptions and Their
Significance
The recorded history of T CrB begins with its first
known eruption in 1866. That year, observers were
stunned to see a new bright star in the sky, visible without
Figure 1: This Hubble Space Telescope image shows
the classic nova GK Persei. Astronomers are eager to
capture comparable views when T Coronae Borealis
explodes.
(Image Credit: NASA/CXC/RIKEN/D.Takei et al/STScI/NRAO/VLA.)
a telescope. It was one of the earliest well-documented
nova events. T CrB’s sudden transformation from a faint
star to a second-magnitude object intrigued astronomers
and marked the beginning of its reputation as a celestial
time bomb.
Then, in 1946, precisely 80 years later, it erupted
again, reaffirming its classification as a recurrent nova.
The similarity in brightness and behavior between the
two eruptions established a pattern. Now, after another
79 years of quiet, astronomers have reason to believe
that the star will soon repeat its fiery display.
These eruptions aren’t just visually spectacular
they provide critical data for understanding how mass is
transferred and ignited in binary systems. The regularity
of T CrB’s outbursts gives scientists a rare opportunity
5.3 The Science Behind a Nova Eruption
to predict and observe the entire cycle of a nova before,
during, and after eruption.
5.3 The Science Behind a Nova
Eruption
5.3.1 Binary Systems and Mass Transfer
T CrB is a symbiotic binary system, composed of two stars in a close gravitational relationship. One is a white dwarf, the dense, compact remnant of a dead star. The other is a red giant, a swollen, cool star in the later stages of its life. The red giant sheds material through a stellar wind, and this material is pulled toward the white dwarf by gravity, forming an accretion disk around it.
Over time, hydrogen from the red giant builds up on the white dwarf's surface. This process can continue for decades, until the layer of hydrogen becomes dense and hot enough to ignite in a thermonuclear explosion: a nova. The outer layers are blown off into space, but the white dwarf itself survives, and the cycle begins again.
5.3.2 Triggering a Nova: The Thermonuclear
Runaway
The explosion of a nova is caused not by
the destruction of a star, but by a surface-level
thermonuclear reaction. As hydrogen accumulates, the
pressure and temperature increase until the conditions
become extreme enough to fuse hydrogen explosively.
This releases a massive amount of energy in a short
time, dramatically brightening the star.
This is known as a thermonuclear runaway, and it
leads to the ejection of material at speeds of thousands
of kilometers per second. The system can remain bright
for days or even weeks, after which it slowly fades back
to its quiescent state.
5.3.3 What Makes T CrB Unique Among
Novae
While novae themselves are not uncommon,
recurrent novae like T CrB are extremely rare. Most
novae are single events in observable timescales, with
recurrence times of tens of thousands of years. T CrB,
however, erupts roughly every 80 years, making it an
ideal system to study the full nova cycle.
Its brightness during eruption, long-term
monitoring data, and clear pattern of activity make T
CrB one of the most scientifically valuable nova systems
known. Studying it helps scientists better understand
how binary systems evolve, how mass is transferred and
retained by white dwarfs, and whether such systems
might eventually become Type Ia supernovae, which
play a key role in measuring the universe's expansion.
5.4 Current Observations and Signs
of Imminent Eruption
5.4.1 The 2023–2025 Brightening Phase
Beginning in early 2023, astronomers began to
notice that T CrB was behaving unusually. It entered a
brightening phase, with its optical brightness increasing
by several tenths of a magnitude. This behavior had not
been observed since the lead-up to the 1946 eruption.
The similarity has sparked intense interest.
Brightening before a nova is thought to be caused
by changes in the accretion disk or increased mass flow
from the red giant. Whatever the cause, the pre-eruption
glow is seen as one of the most reliable signs that a
nova is imminent.
5.4.2
Multi-Wavelength Indicators of Activity
Observations across multiple wavelengths, including ultraviolet (UV), optical, and X-rays, show changes consistent with a system preparing to erupt. Instruments aboard NASA's Swift satellite and other space-based observatories have detected increases in high-energy radiation, signaling rising activity in the region around the white dwarf.
Spectroscopic data show variations in the red giant's stellar wind and in the dynamics of the accretion
disk. All of these changes suggest that the system is
entering the final stages before detonation.
5.4.3 Prediction Challenges and Timing
Uncertainty
Despite the wealth of observations, astronomers
still cannot pinpoint the exact day or month of the
eruption. Predicting the precise timing of a nova
is notoriously difficult due to complex and poorly
understood processes governing mass flow and ignition
conditions.
What makes T CrB valuable is that it gives
scientists a rare chance to test and improve eruption
prediction models in real time. With dozens of
observatories and amateur astronomers monitoring the
star, any sign of the final trigger will be quickly noticed
and studied.
5.5 The Anticipated Nova Event and
Its Scientific Impact
5.5.1 What to Expect When T CrB Erupts
When the eruption finally happens, T CrB is
expected to brighten rapidly over a period of hours
to days, reaching a magnitude of 2 or brighter. This
will make it clearly visible to the naked eye, even in
semi-urban skies. The eruption could last for a week or
more, with the star slowly fading over the next several
months.
During the peak, it will rival the North Star
(Polaris) in brightness and may become the brightest
nova seen in the Northern Hemisphere in over a century.
It will be a rare opportunity for both scientists and
casual skywatchers to witness a dramatic stellar event.
5.5.2 Opportunities for Astronomers and the
Public
Professional observatories are preparing for 24/7
monitoring during the eruption. But the event won't
be limited to scientists. Amateur astronomers can
make valuable contributions by recording the brightness,
taking images, and sharing real-time observations.
Educational institutions and outreach
organizations are also preparing resources to
help the public understand and enjoy this rare
phenomenon. T CrB could inspire a new generation of
astronomers, just as it captivated observers in 1866
and 1946.
5.5.3 Scientific Implications of the Eruption
Beyond the beauty of the event, T CrB’s nova
will yield valuable scientific insights. By studying the
eruption, astronomers can:
Measure how much mass the white dwarf gains
or loses
Assess whether the system is evolving toward a
Type Ia supernova
Test models of binary evolution and disk
instability
Improve predictions for other potential nova
systems
The explosion of T CrB is not just a light show; it's a key to understanding the life cycles of stars and
the future of similar systems.
5.6 Conclusion: Awaiting a Celestial
Spectacle
As of mid-2025, the world waits. T Coronae
Borealis has been building toward this moment for
decades, and all the signs suggest that its eruption is
near. Whether it happens tonight, next week, or a few
months from now, the nova will be one of the most
significant astronomical events of the decade.
In a time when much of space remains distant and
intangible, T CrB offers something rare: a cosmic event
that we can watch unfold with our own eyes, from our
own backyards. Keep looking up; the sky may light up at any moment.
About the Author
Sindhu G is a research scholar in Physics
doing research in Astronomy & Astrophysics. Her
research mainly focuses on classification of variable
stars using different machine learning algorithms. She
is also working on period prediction for different types of variable stars, especially eclipsing binaries, and on the study of optical counterparts of X-ray binaries.
Part III
Biosciences
Decoding DNA's Secrets: How Sequence
Shapes Life
by Geetha Paul
airis4D, Vol.3, No.7, 2025
www.airis4d.com
1.1 Introduction
DNA, or deoxyribonucleic acid, stands as the
essential molecular foundation upon which all known
life is built. This remarkable molecule is responsible
for storing, transmitting, and expressing the genetic
instructions that guide the development, functioning,
and evolution of every organism from the simplest
bacteria to the most complex multicellular beings. The
architecture of DNA is elegantly simple yet profoundly
complex: it consists of two long strands twisted into
a double helix, each composed of repeating units
called nucleotides. Each nucleotide, in turn, is made
up of a sugar group, a phosphate group, and one
of four nitrogenous bases, adenine (A), thymine (T),
cytosine (C), and guanine (G). The unique sequence
of these chemical bases along the DNA strand forms a
sophisticated biological language, much like letters in
an alphabet spell out words and sentences. This genetic
language contains all the instructions necessary to build
and maintain an organism, dictating everything from the
color of an individual’s eyes to the specific enzymes that
drive cellular metabolism. The arrangement of these
bases not only determines the structure and function of
proteins but also regulates when and where genes are
turned on or off, allowing for the intricate orchestration
of life processes.
This article delves deeply into the mechanisms
by which the precise arrangement of DNA’s bases
encodes biological messages. It examines how these
messages are transcribed and translated into proteins
and regulatory signals that govern cellular activity.
Furthermore, it explores how even minor alterations in the DNA sequence, such as single-base changes, can have far-reaching consequences for an organism's health, survival, and evolution. These principles are brought to life through real-world examples that illustrate the profound impact of DNA's sequence on biology,
medicine, and biotechnology. By understanding how
DNA encodes and transmits genetic information, we
gain insight into the very essence of life itself.
1.2 The Structure of DNA: The
Blueprint of Life
DNA is a double-stranded, helical molecule
that serves as the fundamental repository of genetic
information in all living organisms. Each strand of
DNA is composed of repeating structural units known
as nucleotides. A nucleotide consists of three key
components: a sugar group (deoxyribose), a phosphate
group, and one of four nitrogenous bases, adenine (A),
thymine (T), cytosine (C), or guanine (G). The two
DNA strands are oriented in opposite directions and
are held together by hydrogen bonds that form between
complementary bases: adenine pairs exclusively with
thymine, and cytosine with guanine. This precise
base pairing is essential for maintaining the stability
and integrity of the iconic double helix, a structure
that enables DNA to store vast amounts of genetic
information in a compact and efficient manner.
1.3 Genes: The Units of Heredity
Figure 1: DNA is a double helix formed by base pairs
attached to a sugar-phosphate backbone.
Image courtesy:U.S. National Library of Medicine
The specific sequence of these four bases along the
DNA strand is what encodes the genetic instructions for
life. Much like the arrangement of letters in an alphabet
spells out words and sentences, the linear order of DNA
bases forms the language of heredity. This language
is organized into functional segments, including genes
and regulatory elements, which together instruct the
cell on how to develop, function, and respond to its
environment. The sequence of bases is not random;
instead, it is highly conserved and precisely arranged to
ensure the accurate transmission of genetic information
from one generation to the next.
1.3 Genes: The Units of Heredity
A gene is a discrete segment of DNA that contains
the complete set of instructions required to produce
a specific protein or functional RNA molecule. Each
gene is defined by a unique sequence of bases, which
determines the exact order in which amino acids
are assembled during protein synthesis. This amino
acid sequence, in turn, dictates the three-dimensional
structure and biological function of the resulting protein.
Genes are the fundamental units of heredity, and their
precise sequence is crucial for the proper development,
maintenance, and regulation of all cellular processes.
Through the expression of genes, cells are able to
synthesise the proteins and RNAs necessary for growth,
metabolism, repair, and reproduction, ensuring the
continuity and complexity of life.
1.4 The Genetic Code: Triplets and
Codons
The genetic code is read in groups of three bases,
known as codons. Each codon specifies a particular
amino acid or serves as a start or stop signal during
protein synthesis. This triplet code is nearly universal
across all living organisms, reflecting its essential role
in biology.
Example: ATG (in DNA) or AUG (in RNA) codes
for methionine, often signaling the start of a protein.
TGG codes for tryptophan. TAA is a stop codon,
signaling the end of protein synthesis.
1.5
From DNA Sequence to Biological
Message
1.5.1 DNA Replication and Repair
Nucleotides are the fundamental units that make
up DNA, and they play a vital role in both the synthesis
and protection of genetic material. During DNA
replication, each nucleotide acts as a precursor for the
construction of a new DNA strand, ensuring that each
daughter cell receives an accurate copy of the genetic
information. This process is essential for cell division
and the inheritance of traits. Additionally, nucleotide
excision repair mechanisms utilize nucleotides to
correct DNA damage caused by environmental factors
such as ultraviolet radiation and chemical mutagens.
By replacing damaged or incorrect bases with new
nucleotides, cells maintain genomic integrity and
prevent mutations that could lead to disease.
1.5.2 Transcription: Copying the Genetic
Code
Before genetic information can be used to build
proteins, it must first be transcribed from DNA into
messenger RNA (mRNA), a process facilitated by
nucleotides. Transcription begins when the DNA
sequence of a gene is read and copied into a
complementary mRNA strand. This mRNA molecule
carries the genetic message out of the nucleus and
into the cytoplasm, where it serves as the template for
protein synthesis.
Example: If the DNA sequence is 3’-TAC GGA
TTT-5’, the corresponding mRNA copy will be 5’-AUG
CCU AAA-3’.
1.5.3 Translation: Building Proteins
The next stage in gene expression is translation,
where the mRNA sequence is decoded to assemble
a specific protein. At the ribosome, the mRNA is
read in three-base units called codons. Each codon
corresponds to a particular amino acid, which is
delivered to the ribosome by transfer RNA (tRNA).
The tRNA molecules recognize codons on the mRNA
via their complementary anticodons, ensuring the
precise incorporation of amino acids into the growing
polypeptide chain. As the process continues, amino
acids are linked together in the order specified by
the mRNA sequence, ultimately forming a functional
protein.
Example: The mRNA sequence AUG CCU AAA
codes for the amino acids methionine, proline, and
lysine, respectively. The resulting polypeptide chain
will fold into a specific three-dimensional structure,
determining its biological function.
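The chain from template strand to mRNA to amino acids can be made explicit in a few lines of Python. The sketch below is ours, for illustration only, and its codon table is deliberately truncated to the handful of codons used in the examples above.

# Transcribe a DNA template strand (read 3'->5') into mRNA, then translate it.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}  # DNA base -> mRNA base

# Partial codon table for illustration; the full genetic code has 64 codons.
CODON_TABLE = {"AUG": "Met", "CCU": "Pro", "AAA": "Lys",
               "UGG": "Trp", "UAA": "STOP"}

def transcribe(template_3to5: str) -> str:
    """Build the complementary mRNA (5'->3') from a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in template_3to5)

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

mrna = transcribe("TACGGATTT")  # -> "AUGCCUAAA"
print(mrna, translate(mrna))    # -> AUGCCUAAA ['Met', 'Pro', 'Lys']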
1.5.4 Cell Signaling and Gene Expression
Beyond their roles in DNA replication and
protein synthesis, nucleotides are also essential for
cell signaling and the regulation of gene expression.
Nucleotides such as ATP, cAMP, and GTP act as
secondary messengers in signal transduction pathways,
enabling cells to respond to external stimuli and
coordinate complex activities. For example, nucleotides
mediate processes like neurotransmission, hormone
signaling, and immune responses by activating specific
receptors such as G-protein-coupled receptors.
Additionally, nucleotides help regulate gene
expression by influencing the structure of chromatin
Figure 2: The sequence of bases in DNA encodes
biological messages by serving as a molecular language
that instructs cells how to build proteins and regulate life
processes. This genetic language is written with four
chemical letters —the nucleotide bases adenine (A),
thymine (T), cytosine (C), and guanine (G)—arranged
in a specific order along the DNA strand. The precise
sequence of these bases contains all the information
needed to direct the structure and function of every
living organism. Image courtesy: https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/dna
and modulating the activity of transcription factors.
These mechanisms allow cells to dynamically control
when and how genes are turned on or off, supporting
adaptation and development.
1.6 Protein Folding and Function
The sequence of amino acids determines how the
protein folds into its three-dimensional shape. This
shape is crucial for the protein's function, whether it
acts as an enzyme, a structural component, or a signaling
molecule.
1.6.1 Regulatory Sequences: Controlling
Gene Expression
Not all DNA encodes proteins. Many sequences
serve as regulatory elements that control when, where,
and how much a gene is expressed. These include:
Promoters: Regions where RNA polymerase binds
to initiate transcription. Enhancers and Silencers:
Sequences that increase or decrease gene expression.
Insulators: Elements that block the influence of
enhancers or silencers.
These regulatory regions ensure that genes are
turned on or off at the right time and in the right cells,
allowing for complex development and adaptation.
1.7 Real-World Examples of DNA
Messages
1.7.1 Sickle Cell Anemia
A single base change in the hemoglobin gene (from
A to T) alters one codon, resulting in the substitution
of valine for glutamic acid in the protein. This small
change causes red blood cells to become sickle-shaped,
leading to sickle cell anemia.
A single gene mutation (GAG → GTG and CTC → CAC) results in a defective haemoglobin that, when exposed to de-oxygenation (depicted in the right half of the diagram), polymerizes (upper right of the diagram), resulting in the formation of sickle cells. Vaso-occlusion can then occur. The disorder is also
Figure 3: Schematic representation of the
pathophysiology (in part) of sickle cell anemia. Image
courtesy: https://pmc.ncbi.nlm.nih.gov/articles/PMC7510211/
characterized by abnormal adhesive properties of sickle
cells; peripheral blood mononuclear cells (depicted in
light blue; shown as the large cells under the sickle cells)
and platelets (depicted in dark blue; shown as the dark
circular shapes on the mononuclear cells) adhere to the
sickled erythrocytes. This aggregate is labelled 1. The
mononuclear cells have receptors (e.g., CD44 (labeled 3
and depicted in dark green on the cell surface)) that bind
to ligands, such as P-selectin (labeled 2 and shown on
the endothelial surface), that are upregulated. The sickle
erythrocytes can also adhere directly to the endothelium.
Abnormal movement or rolling and slowing of cells
in the blood also can occur. These changes result in
endothelial damage. The sickled red cells also become
dehydrated as a result of abnormalities in the Gardos
channel. Hemolysis contributes to oxidative stress and
dysregulation of arginine metabolism, both of which
lead to a decrease in nitric oxide (NO) that, in turn,
contributes to the vasculopathy that characterizes SCD.
1.7.2 Lactose Intolerance
The gene for lactase, the enzyme that digests
lactose, is regulated by a DNA sequence upstream of the
gene. In most mammals, this regulatory sequence turns
off lactase production after weaning. In some humans, a
mutation keeps the gene active into adulthood, allowing
continued lactose digestion.
1.7.3 Eye Colour
Variations in the DNA sequence near the OCA2
gene affect how much pigment is produced in the iris,
resulting in different eye colors. These differences are
due to changes in regulatory DNA, not the coding region
itself.
1.8
The Universality and Flexibility of
the Genetic Code
The genetic code is nearly universal, with only
minor exceptions in some organisms and organelles.
This universality allows scientists to transfer genes
between species, a foundation of genetic engineering
and biotechnology.
Example: The gene for human insulin can be
inserted into bacteria, which then produce insulin for
medical use.
1.9 Conclusion
The sequence of bases in DNA is the fundamental
language of life. Through the genetic code, this
sequence encodes the instructions for building proteins,
regulating genes, and orchestrating the complex
processes that define living organisms. Even small
changes in the DNA sequence can have profound effects,
leading to diversity, adaptation, and sometimes disease.
As our understanding of the genome deepens, the secrets
of how DNA encodes biological messages continue to
unlock new possibilities in medicine, agriculture, and
biotechnology. Ultimately, DNA is not just a blueprint
for life; it is the story of life itself, written in a language
that scientists are still learning to read and interpret.
References
https://www.nature.com/scitable/topicpage/nucleic-acids-to-amino-acids-dna-specifies-935/
https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/dna
https://pmc.ncbi.nlm.nih.gov/articles/PMC7510211/
J.D. Watson, F.H.C. Crick. "A Structure for Deoxyribose Nucleic Acid." Nature 171, no. 4356 (1953): 737–738.
M.H.F. Wilkins et al. "Molecular Structure of Deoxypentose Nucleic Acids." Nature 171, no. 4356 (1953): 738–740.
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Part IV
Computer Programming
Understanding Performance Metrics in
Parallel Computing
by Ajay Vibhute
airis4D, Vol.3, No.7, 2025
www.airis4d.com
1.1 Introduction
In parallel computing, designing algorithms that
utilize multiple cores or processors is only part of the
challenge. Equally important is the ability to evaluate
the performance of these algorithms. Performance
metrics offer insight into how efficiently an algorithm
uses computing resources and where improvements can
be made. They help developers identify bottlenecks,
quantify the effects of design choices, and guide
optimization efforts.
Without proper analysis, a parallel algorithm may
scale poorly, waste computational resources, or even
underperform compared to its sequential counterpart.
Performance evaluation is therefore essential—not only
for benchmarking but also for validating the practical
benefits of parallelization. This article explores key
metrics such as speedup, efficiency, scalability, and the
theoretical limits defined by Amdahl’s Laws, providing
a foundation for understanding and improving parallel
performance.
1.2 Speedup
One of the most fundamental performance metrics
in parallel computing is speedup, which measures how
much faster a parallel algorithm executes compared
to its sequential counterpart. Speedup provides a
simple yet powerful way to assess the effectiveness
of parallelization. It quantifies the benefit gained by
executing a program on multiple processors rather than
just one.
Speedup is defined as the ratio of the time taken to execute a task on a single processor to the time taken on multiple processors:

S(P) = T_1 / T_P        (1.1)

where T_1 is the execution time on a single processor, T_P is the execution time on P processors, and S(P) is the speedup achieved with P processors.
The ideal or linear speedup occurs when S(P) = P, meaning that doubling the number of processors halves the execution time. However, this is rarely achieved in practice due to factors such as:
Communication and synchronization overhead
between processors.
Sequential portions of the code that cannot be
parallelized.
Load imbalance, where some processors are idle
while others are overloaded.
Example
Consider a task that takes 100 seconds to complete on a single processor. When executed on four processors, it takes 30 seconds:

S(4) = 100 / 30 ≈ 3.33

This result indicates a substantial improvement, but not ideal speedup (which would be S(4) = 4).
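The measurement itself is easy to automate. The sketch below (ours, not from the article) times a toy CPU-bound workload once serially and once on a four-process pool and reports the resulting S(4); the function busy_sum and the chunk sizes are arbitrary choices.

import time
from multiprocessing import Pool

def busy_sum(n: int) -> int:
    """A CPU-bound toy task: sum of squares up to n."""
    return sum(i * i for i in range(n))

def timed(fn) -> float:
    """Wall-clock time of a zero-argument callable, in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    chunks = [2_000_000] * 8                            # independent work items
    t1 = timed(lambda: [busy_sum(n) for n in chunks])   # T_1: serial run
    with Pool(processes=4) as pool:                     # P = 4
        tp = timed(lambda: pool.map(busy_sum, chunks))  # T_P: parallel run
    print(f"T1 = {t1:.2f} s, TP = {tp:.2f} s, S(4) = {t1 / tp:.2f}")

On a real machine the measured S(4) typically falls short of 4, for exactly the reasons listed above.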
1.3 Efficiency
In some cases, speedup can exceed the number of
processors (S(P) > P), which is known as superlinear
speedup. Although rare, this can occur due to factors
such as:
Improved cache utilization when the problem is
divided among processors.
Reduction in paging or memory swapping on
multicore systems.
As the number of processors increases, the
incremental gain in speedup usually decreases. This
phenomenon, known as diminishing returns, is due
to the growing influence of overhead and the limited
parallelizable portion of the task. This is also formally
modeled by Amdahl’s Law, discussed in a later section.
1.3 Efficiency
In parallel computing, efficiency is a fundamental
metric used to evaluate how effectively computational
resources are utilized when executing a parallel
algorithm. While speedup tells us how much faster a
program runs when using multiple processors, efficiency
tells us how well those processors are being used.
Efficiency is defined as the ratio of the speedup (S) achieved to the number of processing units (P) employed:

E = S / P = T_1 / (P · T_P)        (1.2)

where E is the efficiency, S = T_1 / T_P is the speedup, T_1 is the execution time of the algorithm on a single processor, and T_P is the execution time on P processors.
An efficiency value of 1 (or 100%) indicates
perfect parallel efficiency, meaning all processors
are fully utilized with no overhead or wasted work.
However, in practical scenarios, efficiency typically
decreases as more processors are added due to several
limiting factors, such as communication overhead,
synchronization delays, load imbalance, and sequential bottlenecks.
Example
Suppose a program takes T_1 = 100 seconds to run on a single processor. When parallelized across P = 4 processors, the execution time reduces to T_P = 25 seconds. Then the speedup is:

S = T_1 / T_P = 100 / 25 = 4        (1.3)

and the efficiency is:

E = S / P = 4 / 4 = 1, or 100%        (1.4)

This indicates ideal efficiency, which is rare in practice. Now, consider a case where the same task takes T_P = 30 seconds on four processors:

S = 100 / 30 ≈ 3.33,    E = 3.33 / 4 ≈ 0.833, or 83.3%        (1.5)
This drop in efficiency reflects that while speedup
is still substantial, not all processors are fully
contributing due to overhead or imbalance.
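Both quantities take one line each to compute once T_1 and T_P have been measured. A small helper (ours), applied to the numbers of the worked example:

def speedup(t1: float, tp: float) -> float:
    """S = T1 / TP."""
    return t1 / tp

def efficiency(t1: float, tp: float, p: int) -> float:
    """E = S / P = T1 / (P * TP)."""
    return t1 / (p * tp)

# Values from the worked example: T1 = 100 s, TP = 30 s on P = 4 processors.
print(round(speedup(100, 30), 2))        # 3.33
print(round(efficiency(100, 30, 4), 3))  # 0.833, i.e. 83.3%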
Efficiency provides a normalized view of speedup,
allowing fair comparison across different system sizes
and configurations. A low efficiency may suggest:
Over-parallelization, where the overhead of
managing many threads outweighs the benefits.
Poor algorithmic scaling due to inherent serial
sections or frequent communication.
Hardware or runtime limitations such as shared
memory contention or OS scheduling delays.
In performance analysis and optimization,
efficiency is crucial for understanding diminishing
returns. For instance, if doubling the number of
processors only increases speedup marginally, efficiency
will decline, signaling that simply adding more compute
resources may not be beneficial without architectural
or algorithmic changes. An efficiency above 70–
80% is generally considered good, especially on
large-scale systems. Efficiency below 50% warrants
investigation and likely optimization. In summary,
efficiency complements speedup by capturing the cost-
effectiveness of parallelization. It highlights how well
a parallel program scales relative to the resources
it consumes, making it an indispensable metric in
performance tuning and system design.
This naturally leads to the concept of scalability,
which explores how performance evolves as we scale
the number of processors or the problem size.
1.4 Scalability
Scalability is a fundamental concept in parallel
computing that describes how effectively a system
or algorithm can improve performance as additional
computational resources—such as processors or
cores—are introduced. While efficiency focuses on
how well the current resources are utilized, scalability
captures how performance trends evolve as additional
resources are added to the system. It provides insight
into the long-term viability of a parallel solution,
especially as hardware capabilities grow or problem
sizes increase.
A well-designed parallel algorithm is not just
fast—it should also scale. Scalability allows developers
and system architects to answer critical questions
such as: Will performance continue to improve if
I double the number of processors? Can the algorithm
handle significantly larger problems without additional
runtime? At what point do overheads outweigh the
benefits of parallelism?
Without scalability, a parallel application may
perform well on a few cores but fail to take advantage of
modern high-core-count CPUs or distributed computing
clusters. In domains like scientific simulation, machine
learning, and data analytics, poor scalability limits both
performance and usefulness.
1.4.1 Types of Scalability
Scalability is typically categorized into two major
types: strong scalability and weak scalability, each
addressing different scaling scenarios.
Strong scalability refers to how the execution time
of a fixed-size problem decreases as more processors are
used. Ideally, the runtime should reduce proportionally
to the number of processors:
T_P = T_1 / P        (1.6)

where T_1 is the runtime on one processor, T_P is the runtime on P processors, and P is the number of processors. In practice, strong scaling is mainly limited by sequential sections of code and by communication and synchronization overheads.
Weak scalability evaluates how execution time
behaves as both the problem size and the number of
processors increase proportionally. The goal is to
maintain constant execution time as the workload per
processor remains fixed:
T_P(N_P) ≈ T_1(N_1),  where N_P = P · N_1        (1.7)
Weak scalability is especially relevant for large-
scale simulations and data-parallel applications, where
the data volume grows with system size.
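Both regimes can be probed with one harness. In the sketch below (ours, with arbitrary problem sizes), the strong-scaling run keeps the total workload fixed as P grows, while the weak-scaling run keeps the workload per process fixed:

import time
from multiprocessing import Pool

def work(n: int) -> int:
    """CPU-bound toy kernel."""
    return sum(i * i for i in range(n))

def run(p: int, items: list) -> float:
    """Wall-clock time to process the items on a pool of p workers."""
    start = time.perf_counter()
    with Pool(processes=p) as pool:
        pool.map(work, items)
    return time.perf_counter() - start

if __name__ == "__main__":
    for p in (1, 2, 4):
        t_strong = run(p, [1_000_000] * 8)      # fixed total problem size
        t_weak = run(p, [1_000_000] * (8 * p))  # workload grows with p
        print(f"P={p}: strong={t_strong:.2f} s (should fall), "
              f"weak={t_weak:.2f} s (should stay roughly flat)")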
It's important to distinguish between high
performance and good scalability. A non-scalable
algorithm may run fast on a small system but fail on
larger configurations. Conversely, an algorithm that
scales well but has high overhead may be inefficient
at small scales. Thus, scalability analysis must be
interpreted in conjunction with other performance
metrics like execution time, throughput, and resource
utilization. By focusing on scalability early in the
design phase, developers can create software that is
robust, future-proof, and capable of fully leveraging
modern parallel architectures.
1.5 Amdahl’s Law
Amdahl’s Law is a foundational principle in
parallel computing that quantifies the theoretical
maximum speedup achievable for a program as a
function of the fraction that can be parallelized.
Proposed by Gene Amdahl in 1967, the law emphasizes
a key limitation in parallel performance: even if most of
a program can be parallelized, the remaining sequential
portion sets a hard ceiling on performance gains.
Let:
f be the fraction of the program that is inherently sequential and cannot be parallelized,
1 − f be the fraction of the program that is parallelizable,
P be the number of processors,
S(P) be the speedup achieved with P processors.
Then, Amdahl's Law defines the speedup as:

S(P) = 1 / (f + (1 − f)/P)        (1.8)

This equation reveals that as P → ∞, the maximum achievable speedup approaches:

S_max = 1 / f        (1.9)

This implies that the speedup is fundamentally limited by the serial portion f, regardless of how many processors are used.
Example
If 10% of a program is serial (f = 0.1), the maximum speedup is:

S_max = 1 / 0.1 = 10

Even with an infinite number of processors, the program cannot run more than 10 times faster. If only 5% is serial (f = 0.05), then:

S_max = 1 / 0.05 = 20

For a highly parallelizable program with f = 0.01, the theoretical limit is:

S_max = 100

This illustrates how even small sequential components in an algorithm can significantly constrain overall speedup, particularly as the number of processors increases.
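The law itself is a one-liner to evaluate. The sketch below (ours) reproduces the limits quoted above and shows how quickly extra processors stop helping:

def amdahl_speedup(f: float, p: int) -> float:
    """S(P) = 1 / (f + (1 - f) / P) for serial fraction f."""
    return 1.0 / (f + (1.0 - f) / p)

for f in (0.1, 0.05, 0.01):
    print(f"f = {f}: S(16) = {amdahl_speedup(f, 16):.2f}, "
          f"S(1024) = {amdahl_speedup(f, 1024):.2f}, S_max = {1.0 / f:.0f}")

For f = 0.1 this prints S(16) ≈ 6.40 and S(1024) ≈ 9.91, already pressing against the ceiling of 10.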
While Amdahl’s Law provides valuable insight, it
assumes a fixed workload as the number of processors
increases—an assumption that often fails to reflect real-
world scenarios, where workloads tend to grow with
available resources. Moreover, the model does not
account for practical overheads such as synchronization,
communication delays, or memory contention, which
typically increase with processor count (P).
Despite these simplifications, Amdahl’s Law
remains a useful guiding principle in software
design and performance optimization. It encourages
developers to reduce serial components early in the
development process, helps set realistic expectations for
scalability, and supports decision-making on whether
parallelization efforts are justified. Additionally, it
assists performance engineers in identifying when
adding more cores yields diminishing returns during
benchmarking and tuning on multicore systems.
1.6 Conclusion
Effective performance analysis is essential in
parallel computing. Metrics such as speedup, efficiency,
scalability, and Amdahl’s Law provide critical insight
into how well parallel algorithms utilize resources and
where bottlenecks occur. While speedup shows absolute
performance gain, efficiency reveals how economically
resources are used, and scalability highlights whether
performance can grow with increased computing power.
Amdahl’s Law serves as a cautionary model,
demonstrating how even small sequential components
can limit overall speedup. However, real-world systems
often exhibit scaling behaviors better described by
models that consider growing problem sizes and
hardware complexities.
Ultimately, optimizing parallel programs is an
iterative and context-dependent process. Developers
must balance workload distribution, minimize
communication and synchronization overhead, and
continuously profile and refine their code. Integrating
performance thinking early in the design process—not
just as a post-implementation step—leads to solutions
that are robust, scalable, and well-suited to modern
multicore and distributed architectures.
About the Author
Dr. Ajay Vibhute is currently working
at the National Radio Astronomy Observatory in
the USA. His research interests mainly involve
astronomical imaging techniques, transient detection,
machine learning, and computing using heterogeneous,
accelerated computer architectures.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the site has consistently been environment-friendly and rich in biodiversity. airis4D is also growing fruit plants that can feed birds and providing water bodies to help them survive the drought.