Cover page
Image Name: Flame Nebula: This collage of images of the Flame Nebula shows a near-infrared light view from NASA's Hubble Space Telescope, while the two insets below show the near-infrared view taken by NASA's James Webb Space Telescope. Much of the dark, dense gas and dust, as well as the surrounding white clouds within the Hubble image, have been cleared in the Webb images, giving us a view into a more translucent cloud pierced by the infrared-producing objects within, which are young stars and brown dwarfs. Astronomers used Webb to take a census of the lowest-mass objects within this star-forming region.
For more information, visit: https://science.nasa.gov/missions/webb/nasas-webb-peers-deeper-into-mysterious-flame-nebula/
Managing Editor : Ninan Sajeeth Philip
Chief Editor : Abraham Mulamoottil
Editorial Board : K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence : The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.3, No.5, 2025
www.airis4d.com
With the stunning “Flame Nebula” on the cover
page, this edition opens with the article by Prof. Ajit Kembhavi, "Black Hole Stories-18: Gravitational Waves from The Binary Radio Pulsar 1913+16". The article details how the binary pulsar system PSR B1913+16 provided the first indirect evidence for gravitational waves, confirming Einstein's general theory of relativity. Discovered by Hulse and Taylor in 1974, this system consists of two neutron stars (1.44 and 1.39 solar masses) in a tight, elliptical orbit with a perihelion distance comparable to the Sun's radius. Precise measurements of pulse arrival times revealed relativistic effects like extreme perihelion precession (4.2°/year) and orbital decay due to energy loss via gravitational waves, with the orbital period decreasing by 76.5 microseconds annually, matching Einstein's predictions to better than 1%.
In the article by Abhishek P. S, "A Brief Introduction to Plasma Physics", plasma, often called the "fourth state of matter", is an ionised gas consisting of free electrons and ions that exhibit collective behaviour due to long-range electromagnetic forces.
First termed by Irving Langmuir, plasma forms when
atoms gain enough thermal energy to overcome electron
binding forces, transitioning gradually from gas to
plasma without a distinct phase change. Unlike
neutral matter, plasmas display unique properties
like Debye shielding (screening external electric
fields over distances larger than the Debye length)
and quasineutrality (macroscopic neutrality despite
microscopic charge separation). They also oscillate
at a characteristic plasma frequency when disturbed.
Plasmas dominate the universe, evident in stars (e.g., the Sun's corona), the solar wind, Earth's magnetosphere
(Van Allen belts), and ionosphere. Applications range
from gas discharges (fluorescent lights) to cutting-
edge technologies like magnetohydrodynamic energy
conversion, ion propulsion for spacecraft, and controlled
thermonuclear fusion—a potential clean energy source
mimicking stellar processes. The study of plasmas
bridges astrophysics (e.g., pulsars, black holes) and
engineering, leveraging their conductive and wave-
sustaining properties for diagnostics and innovation.
The article by Aromal P, ”X-ray Astronomy:
Through Missions” highlights key X-ray astronomy
missions launched in the mid-to-late 2000s, focusing
on Suzaku (ASTRO-EII), AGILE, and MAXI. Japan's
Suzaku (2005–2015), a collaboration between JAXA
and NASA, featured advanced instruments like the X-ray
Imaging Spectrometer (XIS) and Hard X-ray Detector
(HXD), enabling groundbreaking studies of galaxy
clusters (e.g., the Perseus Cluster's iron distribution)
and black hole spin measurements. Despite the failure of
its X-ray Spectrometer (XRS), Suzaku paved the way for
future missions. Italy’s AGILE (2007–2024) monitored
gamma-ray and hard X-ray sources with instruments
like SuperAGILE and the Gamma Ray Imager Detector
(GRID), while Japan's MAXI (2009–present), mounted
on the ISS, conducts all-sky surveys with its Gas Slit
Camera (GSC) and Solid-State Slit Camera (SSC),
discovering transient events like tidal disruption events
(TDEs). Together, these missions expanded our
understanding of high-energy astrophysical phenomena,
from supernova remnants to active galactic nuclei.
In his article "The Hunt for the Cosmic Radio Lines of Neutral Hydrogen", Linn Abraham takes us on a walk through radio signals and their use in astronomy. Karl Jansky accidentally discovered a "steady hiss type static of unknown origin" in 1932 while studying interference on the transatlantic radio telephone service. Later, Grote Reber took up the challenge of building a dedicated radio telescope in his backyard and produced the first radio contour maps of the sky, beautifully outlining the Milky Way and hinting at discrete sources. This opened the world to the branch of radio astronomy.
According to the article by Sindhu G, ”Type I
Supernovae: Stellar Explosions Without Hydrogen”,
Type I supernovae, distinguished by their lack of
hydrogen spectral lines, are powerful stellar explosions
that play a pivotal role in astrophysics and cosmology.
They are categorised into Type Ia, Ib, and Ic, with Type
Ia supernovae arising from the thermonuclear explosion
of white dwarfs in binary systems (either through single-
degenerate or double-degenerate scenarios) and serving
as critical ”standard candles” for measuring cosmic
distances due to their consistent peak luminosity. Their
light curves, powered by radioactive decay of nickel and
iron, enabled the discovery of the universe’s accelerating
expansion, attributed to dark energy. In contrast,
Type Ib/Ic supernovae result from the core collapse of
massive, stripped-envelope stars (like Wolf-Rayet stars)
and exhibit more diverse light curves. Despite their
significance, key questions remain about progenitor
systems, metallicity effects, and explosion mechanisms,
with ongoing research leveraging advanced surveys and
simulations. These cosmic explosions not only enrich
the interstellar medium with heavy elements but also
provide profound insights into stellar evolution and the
large-scale structure of the universe.
Geetha Paul’s article ”Beyond Morphology:
How Molecular Taxonomy is Reshaping Species
Identification" explains that molecular taxonomy has
revolutionised biological classification by shifting
from traditional morphological approaches to genetic
analysis, overcoming limitations like phenotypic
plasticity and cryptic species. Techniques such as DNA
barcoding (e.g., CO1 for animals) and high-throughput
sequencing have uncovered hidden biodiversity,
revealing that many morphologically similar organisms
are genetically distinct species—examples include
Caribbean frogs splitting into 12 species and African
elephants reclassified into two separate species.
This paradigm shift has profound implications: it
resolves taxonomic conflicts, detects hybridisation (e.g.,
wolf-coyote hybrids), and aids conservation efforts
by identifying evolutionarily significant units and
combating wildlife trafficking through genetic forensics.
In microbiology, it has exposed vast uncultured diversity,
expanding known bacterial phyla from 12 to over 100.
Despite challenges like cost and data interpretation,
innovations like portable sequencers and AI promise
to democratise the field. Molecular taxonomy not
only refines our understanding of life's diversity but
also bridges gaps between species discovery and
conservation, proving indispensable in modern biology.
The article by Ajay Vibhute, "Classifying Parallelism:
Part-2” explores fine-grained parallelism techniques that
optimise processor performance at the hardware level,
focusing on bit-level, instruction-level (ILP), and thread-
level parallelism (TLP). Bit-level parallelism enhances
processing efficiency by operating on multiple bits
simultaneously, particularly benefiting cryptography
and data compression through wider registers (e.g.,
64-bit) and SIMD units. ILP boosts throughput by
executing multiple instructions per cycle via pipelining
and out-of-order execution, though it faces limitations
from instruction dependencies and branch hazards. TLP
leverages multi-core architectures to run concurrent
threads, improving performance in parallel workloads
but requiring careful synchronisation to avoid race
conditions. While each technique offers unique
advantages—bit-level for low-level operations, ILP
for single-core optimisation, and TLP for multi-core
scaling—their combined use in modern processors
enables efficient handling of diverse computational
tasks, from high-performance computing to real-time
systems. Understanding these layers of parallelism
is crucial for designing hardware and software that
maximises computational efficiency.
Figure 1: Sheelu Abrham and Arun Kumar Aniyan visited airis4D.
Figure 2: As usual, research was the major subject of the discussions.
News Desk
Contents
Editorial ii
I Astronomy and Astrophysics 1
1 Black Hole Stories-18:
Gravitational Waves from The Binary Radio Pulsar 1913+16 2
1.1 The Nature of the Binary Pulsar PSR1913+16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Detailed Observational Studies of the Binary Pulsar . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 A Brief Introduction to PLASMA PHYSICS 6
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Particle Interaction and Collective Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Criteria for Plasma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Plasma in Nature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Basic Plasma Phenomena . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Applications of Plasma Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 X-ray Astronomy: Through Missions 13
3.1 Satellites in 2000s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 SUZAKU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3 AGILE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 MAXI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4 The Hunt for the Cosmic Radio Lines of Neutral Hydrogen 17
4.1 The Bold Prediction (Amidst Darkness) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Independent Confirmation and Other Ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.3 The Breakthrough Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.4 The Power of the 21 cm Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
5 Type I Supernovae: Stellar Explosions Without Hydrogen 21
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Classification and Spectral Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 Progenitors and Explosion Mechanisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.4 Light Curves and Observational Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.5 Cosmological Significance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.6 Current Research and Open Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
CONTENTS
II Biosciences 24
1 Beyond Morphology: How Molecular Taxonomy is Reshaping Species Identification 25
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.2 Key Applications of Molecular Taxonomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.3 Challenges and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
III Computer Programming 29
1 Classifying Parallelism: Part-2 30
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.2 Bit-Level Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.3 Instruction-Level Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.4 Thread-Level Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Part I
Astronomy and Astrophysics
Black Hole Stories-18:
Gravitational Waves from The Binary Radio
Pulsar 1913+16
by Ajit Kembhavi
airis4D, Vol.3, No.5, 2025
www.airis4d.com
In this story we will consider detailed observations
of the binary radio pulsar 1913+16 and how these led
to firm evidence that gravitational waves exist.
1.1 The Nature of the Binary Pulsar
PSR1913+16
We have seen in the previous story that the binary
pulsar consists of two neutron stars each of which has a
mass of about 1.4 times the mass of the Sun, and radius
of about 10 km. Because of the second explosion, the
binary has a very eccentric orbit, so each neutron star
moves around the companion neutron star along a path
which is a narrow ellipse, as shown in Figure 2 of BHS-17. The minimum distance between the two neutron
stars, which is known as the perihelion distance, is very
much smaller than the maximum distance between them.
Such a small minimum distance is possible only because
both neutron stars have a small radius, which permits
them to approach each other very closely. Normal stars,
because of their much larger radius, would not be able
to get so close without colliding and possibly getting
destroyed.
The compact nature of the neutron star binary
means that the gravitational force between them is very
strong, particularly at the perihelion distance, and the
neutron stars move with very high velocity along their
orbital paths. Newton's theory of gravity cannot be applied to such a system and we instead need to use Einstein's general theory of relativity. There are three classical tests of general relativity which were mentioned by Albert Einstein in the formulation of his theory of gravitation: the existence of gravitational redshift, the bending of light due to the effects of a gravitational field, and the precession of the perihelion of Mercury. In all these tests, as well as in phenomena like gravitational lensing, only weak gravitational fields are involved, and it could be argued that general relativity has been tested only for such weak fields. The binary pulsar provided
the first opportunity to test general relativity in a strong
gravitational field regime.
1.2 Detailed Observational Studies of
the Binary Pulsar
The early observations by Hulse and Taylor
established that B1913+16 was a binary consisting
of a rapidly rotating radio pulsar and a companion.
The nature of the companion was not known, but it
was clear that it should be a compact object with a
small radius. If it were an extended object like a star
with a large radius, then as the pulsar went round it,
the pulsar would be eclipsed and pulses would not be
observed for the duration of the eclipse, but this was not
observed. The compact object could be a helium star,
white dwarf, neutron star or black hole. Observations
and theoretical work over the years established that the
companion is a neutron star, rather than any of the other
possibilities. No radio pulses have been observed from
the companion. Therefore either (1) the companion is a
radio pulsar, but the radio beam from the companion
does not sweep past the Earth, so no pulsations are seen
or (2) it is not a pulsar. The fact that both components
of the binary are so compact is important in the analysis
of the system. If the companion had been an extended
object, the gravitational field of the pulsar would have
distorted the shape of companion, and it would have
been difficult to interpret the observations. The neutron
stars in the binary do not have such distortions, and
being compact, can approach each other very closely.
The system therefore is an ideal laboratory for the study
of gravitational effects.
Hulse and Taylor and later Taylor and J. Weisberg
continued to study B1913+16 for many years with the
Arecibo radio telescope. A pulse is detected every
time the radio beam from the pulsar sweeps across the
telescope. The time between successive pulses is equal
to the rotation period of the pulsar, which is very close
to 59 milliseconds (ms). The rotation period remains
very nearly constant over long periods of time, because
the neutron star is such a massive object and it is very
difficult to affect its rotation. The rotation of the pulsar
does slow down measurably over long periods of time,
since the pulsar loses energy because of the radiation
that it emits.
Even though the pulsar rotation period changes
very slowly, the time between successive pulses received
on the Earth can change due to various other reasons. As
the pulsar moves along its orbit around the companion
star, it is sometimes moving towards the Earth, and at
other times it is moving away. During its motion towards
the Earth, the observed time between two successive
pulses decreases and becomes less than the rotation
period of the pulsar. When the pulsar is moving away,
the observed time between the pulses increases and
becomes greater than the rotation period. This is known
as the Doppler effect. Using sophisticated instruments
and measurement techniques, Taylor and Weisberg were
able to measure the arrival time of successive pulses
extremely accurately. This in turn led to very accurate
Figure 1: The change in the velocity of the pulsar as
it moves along an orbit. The x-axis shows the position
along an orbit and the y-axis the velocity of the pulsar
in the direction of the Earth. When the velocity is
positive the pulsar is moving away from the Earth while
negative velocity indicates that the pulsar is approaching
the Earth.
Image Credit: R. A. Hulse and J. H. Taylor, Astrophysical Journal 195, L51, 1975.
velocity measurements. Such measurements, made over many orbits, are shown in Figure 1. It is the existence
of such variation that establishes the binary nature of
the object.
From arrival time measurements it is possible to
infer many details about the orbit, including the size and
shape of the orbit, how closely the pulsar approaches its
companion and so forth. Such calculations have been
done over the years for several binary star systems having
components as ordinary stars using Newton's theory
of gravitation. What is generally not possible in such
calculations is the measurement of the masses of the
stars in the binary. The masses can be estimated only in
special circumstances and accurate values can seldom be
obtained. The situation is quite different for B1913+16.
Here the gravitational field is very strong, and general
relativistic effects, including gravitational redshift, and
the precession of the perihelion, which we discussed
in earlier stories, are present. These lead to further
changes in the pulse arrival times. When calculations
are performed to account for all the effects, using general
relativity, it turns out that all the parameters which are
required to determine the shape of the orbit, as well as
the mass of the pulsar and its companion can be very
accurately determined.
The values for some important quantities
calculated from the analysis of pulse arrival times are
shown in Table 1. These are based on measurements
made for more than thirty years after the discovery of
the pulsar.
Table 1.1: Pulsar System Parameters
Parameter Value
Pulsar Rotation Period 59.0300032180 ms
Orbital period 7.7519938773864 hours
Maximum Separation 3,153,600 km
Minimum Separation 746,600 km
Precession of Perihelion 4.226598 degrees/year
Pulsar Mass 1.4398 Solar Mass
Companion Mass 1.3886 Solar Mass
The many places after the decimal point show
how accurate the measurements are. The precession
of the perihelion in the system is about 4.2 degrees
per year. This is about 35,000 times the value for the
precession of Mercury, which is only 43 arcseconds
per century! This shows how strong the gravitational field experienced by the pulsar is, because of its close approach to the companion. The masses of the pulsar and the companion neutron star are 1.44 and 1.39 times the mass of the Sun respectively. When such mass measurements were first reported some years after the discovery of the binary pulsar, they represented the first ever direct measurements of neutron star masses, and they remain among the most accurate stellar mass measurements ever made. The fact that the two masses are
so close to the Chandrasekhar limit of about 1.4 times
the Solar mass has implications for the way in which
the neutron stars were formed.
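The factor of about 35,000 quoted for the comparison with Mercury can be checked by converting both precession rates to the same units; a quick Python check using the numbers given in the text:

```python
# Compare the periastron precession of B1913+16 with the relativistic
# part of Mercury's perihelion precession, in arcseconds per century.
pulsar_deg_per_year = 4.226598                      # from Table 1
pulsar_arcsec_per_century = pulsar_deg_per_year * 3600.0 * 100.0
mercury_arcsec_per_century = 43.0                   # quoted in the text

ratio = pulsar_arcsec_per_century / mercury_arcsec_per_century
# ratio comes out at roughly 35,000, as stated in the text
```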
The maximum distance between the pulsar and
its companion is about 3.15 million km, which is very
much smaller than the Earth-Sun distance of 150 million
km. The minimum or perihelion distance between the
two is 746,600 km, which is comparable to the radius
of the Sun, which is 700,000 km. The two components
of the binary can approach each other so closely only
because they are very compact objects. If the companion
were more extended, like a normal star, it would be
disrupted if the distance of approach was so small. At
the perihelion, the pulsar moves with a velocity of about
400 km/sec. The time taken to complete one orbit,
which is known as the orbital period, is about 7 hours
and 45 minutes. The binary system is at a distance of about 21,000 light years from us. Because of the precession of the pulsar's spin axis, the system is expected to be visible only between about 1941 and 2025. It was indeed fortunate that Hulse and Taylor
could detect the system during this limited interval of
time that is available for its observation from the Earth.
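The quoted perihelion velocity of about 400 km/s can be roughly reproduced from the separations and masses of Table 1 using Newtonian two-body mechanics (a sketch only; relativistic corrections and the orbital inclination are ignored):

```python
import math

G = 6.674e-11            # gravitational constant, SI units
M_SUN = 1.989e30         # solar mass, kg

m_pulsar = 1.4398 * M_SUN       # pulsar mass (Table 1)
m_companion = 1.3886 * M_SUN    # companion mass (Table 1)
r_min = 746_600e3               # perihelion separation, m (Table 1)
r_max = 3_153_600e3             # maximum separation, m (Table 1)
a = 0.5 * (r_min + r_max)       # semi-major axis of the relative orbit

# Vis-viva gives the relative velocity at perihelion; the pulsar's own
# velocity about the centre of mass is the companion's mass fraction of it.
v_rel = math.sqrt(G * (m_pulsar + m_companion) * (2.0 / r_min - 1.0 / a))
v_pulsar = v_rel * m_companion / (m_pulsar + m_companion)
# v_pulsar comes out at roughly 440 km/s, close to the ~400 km/s
# quoted in the text
```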
Soon after the discovery of the binary pulsar, it
was pointed out by Robert Wagoner that the observation
of change in the binary orbital period with time will
allow us to test the presence of gravitational waves. The
binary pulsar is very compact and the gravitational force
between the two neutron stars is very strong. The orbit
of the pulsar around the companion is highly elliptical
in nature. Such a binary emits a significant amount of
gravitational waves due to the high acceleration. The
emission results in the loss of energy by the binary, and
consequently the binary shrinks in size. The two neutron
stars gradually come closer together and therefore
revolve faster around each other, leading to a decrease
in the orbital period of the binary. A measurement
of the change in their period of revolution allows a
measurement of the rate at which they are losing energy
through the emission of gravitational waves. This can
be compared with the value predicted by Einstein's
theory. If the expected and observed values happen
to be equal, it would provide a direct validation of the
existence and emission of gravitational waves.
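The general-relativistic prediction can be sketched with the standard quadrupole (Peters-Mathews) formula for the orbital period decay of an eccentric binary, using the masses and orbital period from Table 1; the orbital eccentricity (about 0.617) is not listed in the table and is taken here as an assumed value from the published literature:

```python
import math

G = 6.674e-11        # gravitational constant, SI units
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

m1 = 1.4398 * M_SUN            # pulsar mass (Table 1)
m2 = 1.3886 * M_SUN            # companion mass (Table 1)
Pb = 7.7519938773864 * 3600.0  # orbital period, s (Table 1)
e = 0.6171                     # eccentricity (assumed, published value)

# Peters-Mathews quadrupole formula for dPb/dt of an eccentric binary
# radiating gravitational waves.
ecc_factor = (1.0 + (73.0 / 24.0) * e**2
              + (37.0 / 96.0) * e**4) / (1.0 - e**2) ** 3.5
dPb_dt = (-192.0 * math.pi / 5.0
          * (2.0 * math.pi * G / (C**3 * Pb)) ** (5.0 / 3.0)
          * m1 * m2 / (m1 + m2) ** (1.0 / 3.0)
          * ecc_factor)

# Convert to microseconds per year: about -76 us/yr, matching the
# decay rate quoted in the editorial.
us_per_year = dPb_dt * 365.25 * 86400.0 * 1e6
```

The dimensionless decay rate itself is of order 1e-12, which is why decades of timing data were needed to see it clearly.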
Hulse and Taylor continued their work on the
observations of the binary pulsar system and in 1978,
four years after its discovery, announced that they had
indeed found a decrease of the binary period which
was consistent with the prediction of Einstein's theory.
The general relativistic value was obtained using the
measured parameters of the binary pulsar. This was
the first time that the emission of gravitational waves
predicted in 1916 was seen to really exist. Observations
accumulated over the years and reported in 2010 have
shown that the observed rate of change of orbital period
and the prediction of Einstein’s theory agree to better
than one percent. The change in orbital period changes
the time taken by the pulsar to pass the perihelion in
successive orbits. Because the orbit shrinks in size,
the perihelion passage occurs somewhat earlier for
Figure 2: The change in orbital period of the binary B1913+16 over a span of 30 years from 1975 to 2005, as predicted by Einstein's general theory of relativity and
as observed. The x-axis shows the time of observation
and the y-axis provides a measure of the decrease in
time taken for successive revolutions (see the text below
for some detail), which indicates shrinking of the orbit.
The dots show the observed values. The agreement
between observed and predicted values is remarkable
and proves the existence of gravitational waves. If
gravitational waves were not being emitted, the graph
would have been a horizontal straight line as shown at
the top.
Image Credit: J. M. Weisberg, D. J. Nice and J. H. Taylor, Astrophysical Journal 722, 1030, 2010.
successive orbits. The cumulative shift over many years
of observation is shown in Figure 2. The continuous
curve shows the cumulative shift with time as predicted
by Einstein’s theory, while the dots are the observed
values. The dots fall very closely on the predicted line,
with any observed departure from the line being too small (smaller than 1 percent) to be seen. The emission
of gravitational waves is therefore very well validated,
though gravitational waves are not directly detected
through these observations.
Hulse and Taylor were given the Nobel prize in
physics for 1993. The Nobel citation said that the
award was for their discovery of a new type of pulsar, a
discovery that has opened up new possibilities for the
study of gravitation.
Next Story: In the next story we will describe the
Advanced LIGO gravitational wave detectors.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investigator
of the Pune Knowledge Cluster. He was the former
director of Inter University Centre for Astronomy and
Astrophysics (IUCAA), Pune, and the International
Astronomical Union vice president. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
A Brief Introduction to PLASMA PHYSICS
by Abhishek P S
airis4D, Vol.3, No.5, 2025
www.airis4d.com
2.1 Introduction
What is Plasma?
"Radiant matter" is what Sir William Crookes called the new state of matter he observed in his laboratory, in which matter exists in the form of an ionized gas. The strange behaviour of the constituent particles, such as electrons and nuclei, made this new state of matter unique and captivated the scientific community. An analogy between these particles and blood plasma inspired Irving Langmuir to name this radiant matter "plasma" [1]. The word plasma comes from the Greek and means "something molded".
From a scientific point of view, matter in the known
universe is often classified in terms of four states: solid,
liquid, gaseous, and plasma. The word plasma is
used to describe a wide variety of macroscopically
neutral substances containing many interacting free
electrons and ionized atoms or molecules, which exhibit
collective behaviour due to the long-range Coulomb
forces. Not all media containing charged particles,
however, can be classified as plasmas. For a collection
of interacting charged and neutral particles to exhibit
plasma behaviour it must satisfy certain conditions, or
criteria, for plasma existence.
The state in which a given substance is found depends on the random kinetic energy (thermal energy) of its atoms or molecules, i.e., on its temperature. The equilibrium between the particle thermal energy and the inter-particle binding forces determines the state.
By heating a solid or liquid substance, the atoms or
molecules acquire more thermal kinetic energy until
they are able to overcome the binding potential energy.
This leads to phase transitions, which occur at constant
temperature for a given pressure. The amount of energy
required for the phase transition is called the ”latent
heat”. If sufficient energy is provided, a molecular gas
will gradually dissociate into an atomic gas as a result
of collisions between those particles whose thermal
kinetic energy exceeds the molecular binding energy.
At sufficiently elevated temperatures an increasing
fraction of the atoms will possess enough kinetic energy
to overcome, by collisions, the binding energy of the
outermost orbital electrons, and an ionized gas or plasma
results. However, this transition from a gas to a plasma
is not a phase transition in the thermodynamic sense,
since it occurs gradually with increase in temperature.
2.2 Particle Interaction and Collective Behaviour
The properties of a plasma are markedly dependent
upon the particle interactions. One of the basic features
that distinguish the behaviour of plasma from that of
ordinary fluids and solids is the existence of collective
effects. Due to the long range of electromagnetic
forces, each charged particle in the plasma interacts
simultaneously with a considerable number of other
charged particles, resulting in important collective
effects that are responsible for the wealth of physical
phenomena that take place in a plasma. In a plasma
we must distinguish between charge-charge and charge-
neutral interactions. A charged particle is surrounded
by an electric field and interacts with the other charged
particles according to Coulomb's force law, with its
dependence on the inverse of the square of the separation
distance. Furthermore, a magnetic field is associated with a moving charged particle, which also produces a force on other moving charges. The charged and neutral particles interact through electric polarization fields produced by distortion of the neutral particle's electronic cloud during a close passage of the charged particle. The field associated with neutral particles involves short-range forces, such that their interaction is effective only for inter-atomic distances sufficiently small to perturb the orbital electrons. It is appreciable when the distance between the centres of the interacting particles is of the order of their diameter, but nearly zero when they are farther apart. Its characteristics can be adequately described only by quantum-mechanical considerations. In many cases this interaction involves permanent or induced electric dipole moments. A
distinction can be made between weakly ionized and
strongly ionized plasmas in terms of the nature of the
particle interactions. In a weakly ionized plasma the charge-neutral interactions dominate over the multiple Coulomb interactions. When the degree of ionization is such that the multiple Coulomb interactions become dominant, the plasma is considered strongly ionized. As the degree of ionization increases, the Coulomb interactions become increasingly important, so that in a fully ionized plasma all particles are subjected to the multiple Coulomb interactions.
2.3 Criteria for Plasma
2.3.1 Debye Shielding
A fundamental characteristic of the behaviour of a
plasma is its ability to shield out electric potentials that
are applied to it. Suppose we tried to put an electric
field inside a plasma by inserting two charged balls
connected to a battery (Fig. 1). The balls would attract
particles of the opposite charge, and almost immediately
a cloud of ions would surround the negative ball and
a cloud of electrons would surround the positive ball.
(We assume that a layer of dielectric keeps the plasma
from actually recombining on the surface, or that the
battery is large enough to maintain the potential in spite
of this.) If the plasma were cold and there were no
Figure 1: The Debye length λ_D is an important physical
parameter for the description of a plasma. It provides
a measure of the distance over which the influence of
the electric field of an individual charged particle is
felt by the other charged particles inside the plasma.
The charged particles arrange themselves in such a way
as to effectively shield any electrostatic fields within a
distance of the order of the Debye length. This shielding
of electrostatic fields is a consequence of the collective
effects of the plasma particles. A calculation of the
shielding distance was first performed by Debye, for an
electrolyte.
Image courtesy: https://www.kindpng.com/imgv/Twwhoib show-more-plots-debye-shielding-in-plasma-phy
thermal motions, there would be just as many charges in
the cloud as in the ball; the shielding would be perfect,
and no electric field would be present in the body of
the plasma outside of the clouds. On the other hand, if
the temperature is finite, those particles that are at the
edge of the cloud, where the electric field is weak, have
enough thermal energy to escape from the electrostatic
potential well. The "edge" of the cloud then occurs at
the radius where the potential energy is approximately
equal to the thermal energy k_B T of the particles, and
the shielding is not complete. Potentials of the order of
k_B T/e can leak into the plasma and cause finite electric
fields to exist there[2].
When a boundary surface is introduced in a
plasma, the perturbation produced extends only up to
a distance of the order of λ_D from the surface. In the
neighbourhood of any surface inside the plasma there
is a layer of width of the order of λ_D, known as the
plasma sheath, inside which the condition of macroscopic
electrical neutrality may not be satisfied. Beyond
the plasma sheath region there is the plasma region,
where macroscopic neutrality is maintained[3]. The
Debye shielding effect is a characteristic of all plasmas,
although it does not occur in every medium that
contains charged particles. A necessary and obvious
requirement for the existence of a plasma is that the
physical dimensions of the system be large compared
to λ_D. Otherwise there is just not sufficient space for
the collective shielding effect to take place, and the
collection of charged particles will not exhibit plasma
behaviour. If L is a characteristic dimension of the plasma,
a first criterion for the definition of a plasma is therefore
L >> λ_D.
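To make this criterion concrete, the Debye length can be evaluated from the standard expression λ_D = (ε0 k_B T_e / (n e^2))^(1/2). The sketch below is illustrative only; the density and temperature are assumed, glow-discharge-like values, not numbers taken from the text.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C

def debye_length(n_e, kT_e_eV):
    """Electron Debye length in metres for density n_e (m^-3)
    and electron temperature k_B*T_e given in eV."""
    kT_joule = kT_e_eV * E                       # convert eV to joules
    return math.sqrt(EPS0 * kT_joule / (n_e * E**2))

# Assumed example: n = 1e16 m^-3, k_B*T_e = 2 eV (a glow-discharge-like plasma)
lam = debye_length(1e16, 2.0)
print(f"Debye length ~ {lam:.2e} m")
```

For these assumed numbers λ_D comes out near a tenth of a millimetre, so the condition L >> λ_D is easily satisfied in a laboratory device of a few centimetres.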
2.3.2 Quasineutrality
In the absence of external disturbances a plasma
is macroscopically neutral. This means that under
equilibrium conditions with no external forces present,
in a volume of the plasma sufficiently large to contain
a large number of particles and yet sufficiently small
compared to the characteristic lengths for variation
of macroscopic parameters such as density and
temperature, the net resulting electric charge is zero. In
the interior of the plasma the microscopic space charge
fields cancel each other and no net space charge exists
over a macroscopic region[4]. If this macroscopic
neutrality were not maintained, the potential energy
associated with the resulting Coulomb forces could
be enormous compared to the thermal particle kinetic
energy; a plasma temperature of several million kelvin
would be required to balance the electric potential
energy against the average thermal particle energy.
Departures from macroscopic electrical
neutrality can naturally occur only over distances in
which a balance is obtained between the thermal particle
energy, which tends to disturb the electrical neutrality,
and the electrostatic potential energy resulting from any
charge separation, which tends to restore the electrical
neutrality. This distance is of the order of a characteristic
length parameter of the plasma, called the Debye length.
In the absence of external forces, the plasma cannot
support departures from macroscopic neutrality over
larger distances than this, since the charged particles are
able to move freely to neutralize any regions of excess
space charge in response to the large coulomb forces
that appear.
2.3.3 Plasma Frequency
An important plasma property is the stability of its
macroscopic space charge neutrality. When a plasma
is instantaneously disturbed from the equilibrium
condition, the resulting internal space charge fields
give rise to collective particle motions that tend to
restore the original charge neutrality. These collective
motions are characterised by a natural frequency of
oscillation known as the plasma frequency. Since these
collective oscillations are high-frequency oscillations,
the ions, because of their heavy mass, are to a certain
extent unable to follow the motion of the electrons. The
electrons oscillate collectively about the heavy ions, the
necessary collective restoring force being provided
by the ion-electron Coulomb attraction. The period of
this natural oscillation constitutes a meaningful time
scale against which the dissipative mechanisms tending
to destroy the collective electron motions can be
compared.
Consider a plasma initially uniform and at
rest, and suppose that by some external means a small
charge separation is produced inside it. When the
external disturbing force is removed instantaneously, the
internal electric field resulting from charge separation
collectively accelerates the electrons in an attempt to
restore the charge neutrality. However, because of
their inertia, the electrons move beyond the equilibrium
position, and an electric field is produced in the opposite
direction. This sequence of movements repeats itself
periodically, with a continuous transformation of kinetic
energy into potential energy, and vice versa, resulting
in fast collective oscillations of the electrons about
the more massive ions. On the average the plasma
maintains its macroscopic charge neutrality.
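These electron oscillations occur at the standard angular plasma frequency ω_pe = (n e^2 / (ε0 m_e))^(1/2). A minimal sketch, with an assumed example density that is not a value from the text:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C
M_E = 9.109e-31    # electron mass, kg

def plasma_frequency(n_e):
    """Angular electron plasma frequency (rad/s) for density n_e in m^-3."""
    return math.sqrt(n_e * E**2 / (EPS0 * M_E))

# Assumed density of 1e16 m^-3
w_pe = plasma_frequency(1e16)
print(f"omega_pe ~ {w_pe:.2e} rad/s, f_pe ~ {w_pe / (2 * math.pi):.2e} Hz")
```

Because the electron mass is so small, f_pe reaches the GHz range even at modest densities, which is why the heavy ions can be treated as a nearly stationary background for these oscillations.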
2.4 Plasma in Nature
2.4.1 Sun and its Atmosphere
The Sun, which is our nearest star and upon which
the existence of life on Earth fundamentally depends, is
a plasma phenomenon[5]. Its energy output is derived
from thermonuclear fusion reactions of protons forming
helium ions deep in its interior, where temperatures
exceed 1.2 × 10^7 K. The high temperature of its interior
and the consequent thermonuclear reactions keep the
entire Sun gaseous. Due to its large mass (2 × 10^30 kg),
the Sun's gravitational force is sufficient to prevent the
escape of all but the most energetic particles and, of
course, radiation from the hot solar plasma. There is
no sharp boundary surface to the Sun. Its visible part
is known as the solar atmosphere, which is divided
into three general regions or layers. The photosphere,
with a temperature of about 6,000 K, comprises the
visible disk, the layer in which the gases become opaque,
and is a few hundred kilometres thick. Surrounding
the photosphere there is a reddish ring called the
chromosphere, approximately 10,000 km thick, above
which flame-like prominences rise with temperatures of
the order of 100,000 K. Surrounding the chromosphere
there is a tenuous hot plasma, extending millions of
kilometres into space, known as the corona. A steep
temperature gradient extends from the chromosphere to
the hotter corona, where the temperature exceeds
10
6
K. .
2.4.2 Solar Wind
A highly conducting tenuous plasma called the
solar wind, composed mainly of protons and electrons,
is continuously emitted by the sun at very high speeds
into interplanetary space, as a result of the supersonic
expansion of the hot solar corona. The solar magnetic
field tends to remain frozen in the streaming plasma due
to its very high conductivity. Because of solar rotation,
the field lines are carried into Archimedean spirals by
the radial motion of the solar wind[6].
2.4.3 Magnetosphere and Van Allen Belts
As the highly conducting solar wind impinges on
the Earth's magnetic field, it compresses the field on
the sunward side and flows around it at supersonic
speeds. This creates a boundary, called the magnetopause,
which is roughly spherical on the sunward side
and roughly cylindrical in the anti-sun direction (Fig.
2). The inner region, from which the solar wind is
excluded and which contains the compressed Earth's
magnetic field, is called the magnetosphere.
Figure 2: Formation of the magnetopause.
Image courtesy: https://en.wikipedia.org/wiki/Magnetopause
Inside the magnetosphere we find the Van Allen radiation
belts, in which energetic charged particles (mainly
electrons and protons) are trapped in regions where
they execute complicated trajectories that spiral along
the geomagnetic field lines and, at the same time, drift
slowly around the Earth[7]. The origin of the inner
belt is ascribed to cosmic rays, which penetrate into the
atmosphere and form proton-electron pairs that are then
trapped by the Earth's magnetic field. The outer belt is
considered to be due to, and maintained by, streams of
plasma consisting mainly of protons and electrons that
are ejected from time to time by the Sun. Depending
on solar activity, particularly violent solar eruptions
may occur with the projection of hot streams of plasma
material into space. The separation into inner and
outer belts reflects only an altitude-dependent energy
spectrum, rather than two separate trapping regions.
2.4.4 Ionosphere
The large natural blanket of plasma in the
atmosphere, which envelops the Earth from an altitude
of approximately 60 km to several thousand
kilometres, is called the ionosphere. The ionized
particles in the ionosphere are produced during the
daytime through absorption of solar extreme ultraviolet
and X-ray radiation by the atmospheric species. As
the ionizing radiation from the Sun penetrates deeper
and deeper into the Earth's atmosphere, it encounters
a larger and larger density of gas particles, producing
more and more electrons per unit volume. However,
since radiation is absorbed in this process, there
is a height where the rate of electron production
reaches a maximum. Below this height the rate of
electron production decreases, in spite of the increase in
atmospheric density, since most of the ionizing radiation
was already absorbed at the higher altitudes.
2.4.5 Plasmas Beyond the Solar System
Beyond the solar system we find a great variety
of natural plasmas in stars, interstellar space, galaxies,
intergalactic space, and far beyond to systems quite
unknown before the start of astronomy from space
vehicles. There we find a variety of phenomena of great
cosmological and astrophysical significance, including
interstellar shock waves from remote supernova
explosions, rapid variations of X-ray fluxes from neutron
stars with densities like that of atomic nuclei, pulsating
radio stars or pulsars (which are theoretically pictured
as rapidly rotating neutron stars with plasmas emitting
synchrotron radiation from the surface), and the plasma
phenomena around the remarkable black holes (which
are considered to be singular regions of space into
which matter has collapsed, possessing such a powerful
gravitational field that nothing, whether material objects
or even light itself, can escape from them).
2.5 Basic Plasma Phenomena
The fact that some or all of the particles in a
plasma are electrically charged, and therefore capable
of interacting with electromagnetic fields as well as of
creating them, gives rise to many novel phenomena that
are not present in ordinary fluids and solids. Because
of the high electron mobility, plasmas are generally very
good electrical conductors, as well as good thermal
conductors. The presence of density gradients in
a plasma causes the particles to diffuse from dense
regions to regions of lower density. Due to their lower
mass, electrons tend to diffuse faster than the ions,
generating a polarization electric field as a result
of charge separation. An important characteristic of
plasmas is their ability to sustain a great variety of
wave phenomena, e.g. electrostatic plasma waves and
electromagnetic waves. The study of waves in plasmas
provides significant information on plasma properties
and is very useful for plasma diagnostics. Dissipative
processes, such as collisions, produce damping of
the wave amplitude. Another important aspect of plasma
behaviour is the emission of radiation. The main interest
in plasma radiation lies in the fact that it can be used to
infer plasma properties. Radiation can come from emitting
atoms or molecules, and also from accelerated charges.
At the same time, recombination of ions and electrons
to form neutral particles also occurs.
2.6 Applications of Plasma Physics
Plasmas can be characterized by the two
parameters n and k_B T_e. Plasma applications cover
an extremely wide range of n and k_B T_e: n varies
over 28 orders of magnitude, from 10^6 to 10^34 m^-3, and
k_B T_e can vary over seven orders, from 0.1 to
10^6 eV. Some of these applications are discussed very
briefly below. The tremendous range of density can
be appreciated when one realizes that air and water
differ in density by only a factor of 10^3, while water and white
dwarf stars are separated by only a factor of 10^5. Even
neutron stars are only 10^15 times denser than water. Yet
gaseous plasmas in the entire density range of 10^28 can
be described by the same set of equations, since only
the classical (non-quantum-mechanical) laws of physics
are needed[8].
Figure 3: Magnetohydrodynamic converter
Image courtesy: https://en.wikipedia.org/wiki/Magnetohydrodynamic_converter
2.6.1 Gas Discharges
The earliest work with plasmas was that of
Langmuir, Tonks, and their collaborators in the 1920s.
This research was inspired by the need to develop
vacuum tubes that could carry large currents, and
therefore had to be filled with ionized gases. The
research was done with weakly ionized glow discharges
and positive columns, typically with k_B T_e ≈ 2 eV and
10^14 < n < 10^18 m^-3. It was here that the shielding
phenomenon was discovered: the sheath surrounding
an electrode could be seen visually as a dark layer.
Gas discharges are encountered nowadays in mercury
rectifiers, hydrogen thyratrons, ignitrons, spark gaps,
welding arcs, neon and fluorescent lights, and lightning
discharges.
2.6.2 MHD Energy Conversion and Ion
Propulsion
Getting back down to earth, we come
to two practical applications of plasma physics.
Magnetohydrodynamic (MHD) energy conversion
utilizes a dense plasma jet propelled across a magnetic
field to generate electricity (Fig. 3). The Lorentz force
q(v × B), where v is the jet velocity, causes the ions
to drift upward and the electrons downward, charging
the two electrodes to different potentials. Electrical
current can then be drawn from the electrodes without
the inefficiency of a heat cycle[2].
The same principle in reverse has been used to
develop engines for interplanetary missions. In a plasma
jet, a current is driven through the plasma by applying a
voltage between the two electrodes. The j × B force shoots
the plasma out of the rocket, and the ensuing reaction
force accelerates the rocket. The plasma ejected must
always be neutral; otherwise, the spaceship will charge up
to a high potential.
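The j × B body force driving the jet can be illustrated with a toy calculation; the current density and magnetic field below are invented round numbers for illustration, not parameters of any real thruster.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

j = (1.0e5, 0.0, 0.0)   # current density, A/m^2 (assumed)
B = (0.0, 0.1, 0.0)     # magnetic field, T (assumed)

f = cross(j, B)          # force per unit volume, N/m^3
print("force density f = j x B:", f)
```

With the current along x and the field along y, the force density points along z: the plasma is pushed out axially, exactly the geometry the text describes.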
2.6.3 Space Physics
Another important application of plasma physics
is in the study of the Earth's environment in space. A
continuous stream of charged particles, called the solar
wind, impinges on the Earth's magnetosphere, which
shields us from this radiation and is distorted by it
in the process. Typical parameters in the solar wind
are n = 5 × 10^6 m^-3, k_B T_i = 10 eV, k_B T_e = 50 eV,
B = 5 × 10^-9 T, and a drift velocity of 300 km/s. The
ionosphere, extending from an altitude of 50 km to 10
Earth radii, is populated by a weakly ionized plasma
with density varying with altitude up to n = 10^12 m^-3.
The temperature is only about 10^-1 eV. The Van Allen
belts are composed of charged particles trapped by the
Earth's magnetic field. Here we have n <= 10^9 m^-3,
k_B T_e <= 1 keV, k_B T_i ≈ 1 eV, and B ≈ 500 × 10^-9 T. In
addition, there is a hot component with n = 10^3 m^-3
and k_B T_e = 40 keV [2].
2.6.4 Controlled Thermonuclear Fusion
The most important application of man-made
plasmas is in the control of thermonuclear fusion
reactions, which hold a vast potential for the generation
of power[9]. Nuclear fusion is the process whereby two
light nuclei combine to form a heavier one, the total
final mass being slightly less than the total initial mass.
The mass difference δm appears as energy E according
to Einstein's famous law E = (δm)c^2, where c denotes
the speed of light. The nuclear fusion reaction is the
source of energy in the stars, including the Sun. The
confinement of the hot plasma in this case is provided
by the self-gravity of the stars.
The basic problem in achieving controlled fusion
is to generate a plasma at very high temperatures (with
thermal energies at least in the 10 keV range) and
hold its particles together long enough for a substantial
number of fusion reactions to take place. The need for
high temperatures comes from the fact that, in order
to undergo fusion, the positively charged nuclei must
come very close together (within a distance of the order
of 10^-14 m), which requires sufficient kinetic energy to
overcome the electrostatic Coulomb repulsion.
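The scale of this barrier can be checked by evaluating the Coulomb potential energy of two protons at the quoted separation, U = e^2 / (4π ε0 r). This is only an order-of-magnitude sketch; treating the nuclei as two bare protons is an illustrative assumption.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C

def coulomb_barrier_eV(r):
    """Electrostatic potential energy of two protons a distance r (m) apart, in eV."""
    return E**2 / (4 * math.pi * EPS0 * r) / E

barrier = coulomb_barrier_eV(1e-14)   # the ~1e-14 m separation quoted in the text
print(f"Coulomb barrier ~ {barrier:.2e} eV")
```

The result is of order 10^5 eV, far above the ~10 keV thermal energies mentioned above; fusion nevertheless proceeds through the high-energy tail of the velocity distribution and quantum tunnelling.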
2.6.5 Other Plasma Devices
The thermionic energy converter is a device that
utilizes a caesium plasma between two electrodes to
convert thermal energy into electrical energy. The
cathode is heated, so that electrons are emitted from
the surface, and the anode is cooled. Due to the
presence of the caesium plasma, very large electrical
currents can be produced at the expense of a significant
fraction of the thermal energy applied to the cathode[8].
Examples of applications involving gas discharges
include the ordinary fluorescent tubes and neon lights
used for illumination and for signs, mercury rectifiers,
spark gaps, a number of specialized tubes like the
hydrogen thyratrons and the ignitrons, which are
used for switching, and the arc discharges or plasma
jets, which are the source of temperatures two or
more times as high as the hottest gas flames and
which are used in metallurgy for cutting, melting, and
welding metals. Two major applications in the area
of communications are long-distance radio-wave
propagation by reflection in the ionospheric plasma, and
communication with a space vehicle through the
plasma layer that forms around it during re-entry
into the Earth's atmosphere.
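Radio waves are reflected where their frequency falls below the local electron plasma frequency, which in convenient SI form is f_p ≈ 8.98 √n Hz with n in m^-3 (a shorthand for ω_pe/2π). A minimal sketch, using the peak ionospheric density of about 10^12 m^-3 quoted earlier:

```python
import math

def critical_frequency_Hz(n_e):
    """Electron plasma frequency in Hz; radio waves below this are reflected.
    Uses the SI shorthand f_p ~ 8.98 * sqrt(n_e), with n_e in m^-3."""
    return 8.98 * math.sqrt(n_e)

f_p = critical_frequency_Hz(1e12)   # assumed peak ionospheric density, ~1e12 m^-3
print(f"critical frequency ~ {f_p / 1e6:.1f} MHz")
```

This is why short-wave broadcasts of a few MHz can bounce off the ionosphere, while higher-frequency signals pass through to satellites.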
References
1. Tonks, L., The Birth of "Plasma". Am. J. Phys. 35 (9): 857–858, 1967. https://doi.org/10.1119/1.1974266
2. Chen, F. F. (2015). Introduction to Plasma Physics and Controlled Fusion (3rd ed.). Springer Cham. https://doi.org/10.1007/978-3-319-22309-4
3. Hull, A. W., Langmuir, I., Control of an Arc Discharge by Means of a Grid. Proc. Natl. Acad. Sci. USA 15(3): 218–225, 1929.
4. Rozhansky, V. (2023). Quasineutral Plasma and Sheath Structure. In: Plasma Theory. Springer, Cham. https://doi.org/10.1007/978-3-031-44486-9_3
5. Green, S. F., Jones, M. H., Burnell, S. J., An Introduction to the Sun and Stars. Cambridge University Press, 2004.
6. Schrijver, C. J., Zwaan, C. (2000). Solar and Stellar Magnetic Activity. Cambridge University Press. ISBN 978-0-521-58286-5.
7. Li, W., Hudson, M. K. (2019). Earth's Van Allen Radiation Belts: From Discovery to the Van Allen Probes Era. J. Geophys. Res. 124 (11): 8319–8351.
8. Goldston, R. J. (1995). Introduction to Plasma Physics (1st ed.). CRC Press. https://doi.org/10.1201/9780367806958
9. Miyamoto, K. (2005). Plasma Physics and Controlled Nuclear Fusion. Springer Berlin, Heidelberg. https://doi.org/10.1007/3-540-28097-9
About the Author
Abishek P S is a Research Scholar in
the Department of Physics, Bharata Mata College
(Autonomous), Thrikkakara, Kochi. He pursues
research in the field of theoretical plasma physics.
His work mainly focuses on nonlinear wave
phenomena in space and astrophysical plasmas.
X-ray Astronomy: Through Missions
by Aromal P
airis4D, Vol.3, No.5, 2025
www.airis4d.com
3.1 Satellites in 2000s
In the previous article, we discussed the satellites
that were launched between 2000 and 2005. None of these
satellites was primarily dedicated to X-ray
astronomy; they carried X-ray instruments for follow-
up observations of high-energy phenomena like gamma-
ray bursts. In this article, we will discuss the satellites
launched in the second half of this decade. In this period,
missions primarily focused on X-rays were launched,
and they changed, and are still contributing to, our
understanding of high-energy phenomena.
3.2 SUZAKU
Suzaku is the fifth X-ray astronomy satellite of
the Japan Aerospace Exploration Agency (JAXA),
built in collaboration with NASA. The mission is a
combined effort of 21 universities from Japan and
three research organizations from the USA. Suzaku is
a red bird popular in Japanese and Chinese
mythology, and the satellite is also known as ASTRO-
EII. Suzaku was launched in July 2005 into a Low Earth
Orbit (LEO) with a perigee and apogee of 550 km and
570 km, respectively. The orbit had an inclination of
31°, giving an orbital period of 96 minutes. Suzaku
was launched as a complement to the Chandra and
XMM-Newton observatories, and it was operational till
September 2015. It had imaging capabilities in the lower
energy range of 0.2-12 keV and used nested grazing-
incidence telescopes, called X-ray Telescopes (XRTs), to
focus the incoming photons onto the focal plane. There
are five XRT units, known as XRT-I0, XRT-I1, XRT-I2,
XRT-I3 and XRT-S. All five XRTs are placed on the
Extensible Optical Bench (EOB), and the scientific
instruments are placed in their focal planes. Overall, Suzaku
covered an energy range of 0.2-600 keV.
There were three scientific detectors on Suzaku:
X-ray Imaging Spectrometer (XIS) is an X-ray-
sensitive silicon Charge Coupled Device (CCD)
similar to those used in ASCA SIS,
Chandra ACIS, and XMM-Newton EPIC.
XIS enables high-sensitivity imaging and
spectroscopy across the 0.2-12 keV energy band.
There are four identical XIS units, situated in
the focal planes of the four XRT-Is. XIS has
a field of view of 17.8′ × 17.8′, an energy
resolution of 130 eV at 6 keV, and an effective
area of 400 cm² at 1.5 keV.
X-Ray Spectrometer (XRS) is the first micro-
calorimeter used on an X-ray satellite. The XRS
utilized a 6×6 array of mercury telluride (HgTe)
micro-calorimeters with 30 active pixels. It uses
the temperature change to identify the energy of
the incident X-ray photons, and had a stunning
spectral resolution of 6-7 eV at 6 keV. The energy
bandwidth of the XRS was the same as that of the XIS.
It was positioned in the focal plane of XRT-S. After
the launch its cryogenic system failed, so no
X-ray observation was performed, but it paved
the path for the upcoming missions.
Hard X-ray Detector (HXD) is a combination of
16 independent units arranged in a 4×4 grid of
GSO well-type phoswich counters (> 50 keV)
and silicon PIN diodes (< 50 keV). It covered
an energy range of 10-700 keV. HXD did not have
any imaging capabilities. It provided an effective
area of 142 cm² at 20 keV for the PIN diodes and
273 cm² at 150 keV for the GSO phoswich counters. It
also had a timing resolution of 61 µs in timing
mode.
Figure 1: Side view of the Suzaku satellite.
Credits: Mitsuda et al. 2007
Suzaku's observations of the Perseus Galaxy
Cluster revealed a uniform distribution of iron across its
11-million-light-year expanse. Suzaku's detection of
chromium and manganese in the Perseus Cluster's
IGM - the first such intergalactic measurements -
confirmed that billions of Type Ia supernovae contributed
to this metal enrichment. Suzaku's high-resolution
spectroscopy enabled precise measurements of black
hole spin in AGN like MCG-6-30-15. Suzaku also identified
AE Aquarii as a white dwarf emitting pulsed X-rays
akin to neutron star pulsars.
3.3 AGILE
The Astro-rivelatore Gamma a Immagini LEggero
(AGILE) is a Small Scientific Mission satellite
developed and operated by the Italian Space Agency
(ASI). It is a gamma-ray astronomy mission launched on
April 23, 2007 from Sriharikota, India, into a LEO
with a perigee of 509 km and an apogee of 533 km. The
satellite was operational until January 2024. It covered
an energy band of 8 keV to 50 GeV, from hard X-rays
to gamma rays.
AGILE had three scientific instruments on board:
Gamma Ray Imager Detector (GRID) consists of
a Silicon-Tungsten Tracker, a Cesium Iodide Mini-
Calorimeter (MC), an Anti-coincidence system
(AC) made of segmented plastic scintillators, and
a Data Handling system (DH). The GRID is
sensitive in the energy range 30 MeV - 50 GeV.
SuperAGILE (SA) consists of four units of a
coded-mask imaging system paired with silicon
microstrip detectors. SA is a hard X-ray
instrument that covers an energy range of 15-
45 keV. SA's anti-coincidence system is shared
with AGILE's Gamma Ray Imaging Detector
(GRID).
Mini-Calorimeter (MCAL) comprised 30 Cesium
Iodide Thallium-doped [CsI(Tl)] scintillator bars.
It detects gamma rays in the 300 keV-100 MeV
energy range.
Figure 2: MAXI overview.
Credits: Mihara et al. 2022
3.4 MAXI
The Monitor of All-sky X-ray Image (MAXI) is a
Japanese experiment onboard the International Space
Station (ISS). It is the first experiment to be installed on
the Japanese Experiment Module Exposed Facility
(JEM-EF or Kibo-EF) on the ISS, and also the first
high-energy astrophysics experiment placed on the
space station. MAXI was launched by the space shuttle
Endeavour on STS-127 in July 2009 and delivered to the
ISS. MAXI's primary objective is monitoring transient
events; it is a survey experiment that covers a large
fraction of the sky. After 15 years of service, it is still
providing valuable scientific output, discovering
new transient sources and their outbursts. MAXI performs
a complete sky survey every 96 minutes (the orbital period
of the ISS) and covers an energy range of 0.5-30 keV.
MAXI has two scientific instruments:
Gas Slit Camera (GSC) employs slit-camera
optics and has the advantage of being freer from
source contamination than a coded mask. The
entire GSC system is composed of six identical
units, and it uses 12 large-area proportional
counters filled with Xe gas to detect X-rays. It
has a total area of 5350 cm² and works in the
energy range of 2-30 keV. The system covers a
wide rectangular field of view of 160° × 1.5°.
Solid-State Slit Camera (SSC) is a CCD camera
that works in the 0.5-12 keV energy range. The SSC
is installed on the GSC and complements it
by focusing on lower-energy X-rays with higher
resolution. The SSC has two identical cameras,
SSC-H and SSC-Z, each with 16 CCD chips.
It has a field of view of 90° × 1.5°
and an effective area of 200 cm².
As an all-sky survey experiment with large sky
coverage, MAXI has identified several X-ray
sources from its initial days of operation. It discovered
several new transient sources and made regular follow-
up observations of them, accumulating decade-long light
curves of different X-ray sources. MAXI has also contributed
significantly to Tidal Disruption Event (TDE) studies,
cataclysms in which stars are torn apart by
supermassive black holes (SMBHs).
3.5 Reference
About X-ray Astronomy Satellite ”Suzaku”
(ASTRO-EII)
Suzaku Mission Overview
X-Ray Telescopes (XRT)
The Suzaku Technical Description
The X-Ray Observatory Suzaku
X-Ray Spectrometer (XRS)
Hard X-ray Detector (HXD)
The Hard X-ray Detector
Tadayuki Takahashi, Keiichi Abe, Manabu
Endo, Yasuhiko Endo, Yuuichiro Ezoe,
Yasushi Fukazawa, Masahito Hamaya, Shinya
Hirakuri, Soojing Hong, Michihiro Horii,
Hokuto Inoue, Naoki Isobe, Takeshi Itoh,
Naoko Iyomoto, Tuneyoshi Kamae, Daisuke
Kasama, Jun Kataoka, Hiroshi Kato, Madoka
Kawaharada, Naomi Kawano, Kengo Kawashima,
Satoshi Kawasoe, Tetsuichi Kishishita, Takao
Kitaguchi, Yoshihito Kobayashi, Motohide
Kokubun, Jun’ichi Kotoku, Manabu Kouda, Aya
Kubota, Yoshikatsu Kuroda, Greg Madejski,
Kazuo Makishima, Kazunori Masukawa, Yukari
Matsumoto, Takefumi Mitani, Ryohei Miyawaki,
Tsunefumi Mizuno, Kunishiro Mori, Masanori
Mori, Mio Murashima, Toshio Murakami,
Kazuhiro Nakazawa, Hisako Niko, Masaharu
Nomachi, Yuu Okada, Masanori Ohno, Kousuke
Oonuki, Naomi Ota, Hideki Ozawa, Goro Sato,
Shingo Shinoda, Masahiko Sugiho, Masaya
Suzuki, Koji Taguchi, Hiromitsu Takahashi,
Isao Takahashi, Shin’ichiro Takeda, Ken-ichi
Tamura, Takayuki Tamura, Takaaki Tanaka,
Chiharu Tanihata, Makoto Tashiro, Yukikatsu
Terada, Shin’ya Tominaga, Yasunobu Uchiyama,
Shin Watanabe, Kazutaka Yamaoka, Takayuki
Yanagida, Daisuke Yonetoku, Hard X-Ray
Detector (HXD) on Board Suzaku, Publications
of the Astronomical Society of Japan, Volume
59, Issue sp1, 30 January 2007, Pages S35–S51,
https://doi.org/10.1093/pasj/59.sp1.S35
Hirofumi Noda, Kazuo Makishima, Yuuichi
Uehara, Shinya Yamada, Kazuhiro Nakazawa
Suzaku Discovery of a Hard Component
Varying Independently of the Power-Law
Emission in MCG 6–30–15, Publications of
the Astronomical Society of Japan, Volume
63, Issue 2, 25 April 2011, Pages 449–458,
https://doi.org/10.1093/pasj/63.2.449
Suzaku
Cattaneo, P. W. and Rappoldi, A. and Argan, A.
and Barbiellini, G. and Boffelli, F. and Bulgarelli,
A. and Buonomo, B. and Cardillo, M. and Chen,
A. W. and Cocco, V. and Colafrancesco, S. and
D’Ammando, F. and Donnarumma, I. and Ferrari,
A. and Fioretti, V. and Foggetta, L. and Froysland,
T. and Fuschino, F. and Galli, M. and Gianotti,
F. and Giuliani, A. and Longo, F. and Lucarelli,
F. and Marisaldi, M. and Mazzitelli, G. and
Morselli, A. and Paoletti, F. and Parmigiani, N.
15
3.5 Reference
and Pellizzoni, A. and Piano, G. and Pilia, M.
and Pittori, C. and Prest, M. and Pucella, G. and
Quintieri, L. and Sabatini, S. and Tavani, M. and
Trifoglio, M. and Trois, A. and Valente, P. and
Vallazza, E. and Vercellone, S. and Verrecchia, F.
and Zambra, A Calibration of AGILE-GRID with
On-ground Data and Monte Carlo Simulations
The Astrophysical Journal
Feroci, M. and Costa, E. and Soffitta, P. and Del
Monte, E. and Di Persio, G. and Donnarumma, I.
and Evangelista, Y. and Frutti, M. and Lapshov,
I. and Lazzarotto, F. and Mastropietro, M. and
Morelli, E. and Pacciani, L. and Porrovecchio,
G. and Rapisarda, M. and Rubini, A. and
Tavani, M. and Argan, A. SuperAGILE: The
hard X-ray imager for the AGILE space mission
Nuclear Instruments and Methods in Physics
Research Section A: Accelerators, Spectrometers,
Detectors and Associated Equipment
MAXI
Monitor of All-sky X-ray Image (MAXI)
Tatehiro Mihara, Hiroshi Tsunemi, Hitoshi
Negoro MAXI : Monitor of All-sky X-ray Image
arxiv
Hiroshi Tomida, Hiroshi Tsunemi, Masashi Kimura, Hiroki Kitayama, Masaru Matsuoka, Shiro Ueno, Kazuyoshi Kawasaki, Haruyoshi Katayama, Kazuhisa Miyaguchi, Kentaro Maeda, Arata Daikyuji, Naoki Isobe. Solid-State Slit Camera (SSC) Aboard MAXI. Publications of the Astronomical Society of Japan, 63(2), 397–405, 25 April 2011. https://doi.org/10.1093/pasj/63.2.397
Tatehiro Mihara, Motoki Nakajima, Mutsumi Sugizaki, Motoko Serino, Masaru Matsuoka, Mitsuhiro Kohama, Kazuyoshi Kawasaki, Hiroshi Tomida, Shiro Ueno, Nobuyuki Kawai, Jun Kataoka, Mikio Morii, Atsumasa Yoshida, Kazutaka Yamaoka, Satoshi Nakahira, Hitoshi Negoro, Naoki Isobe, Makoto Yamauchi, Ikuya Sakurai. Gas Slit Camera (GSC) onboard MAXI on ISS. Publications of the Astronomical Society of Japan, 63(sp3), S623–S634, 25 November 2011. https://doi.org/10.1093/pasj/63.sp3.S623
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
The Hunt for the Cosmic Radio Lines of
Neutral Hydrogen
by Linn Abraham
airis4D, Vol.3, No.5, 2025
www.airis4d.com
The universe communicates with us not just
through the visible light our eyes can see, but across the
entire electromagnetic spectrum. The dawn of radio
astronomy opened an entirely new window onto the
cosmos, beginning with pioneers like Karl Jansky,
who accidentally discovered a “steady hiss type static of unknown origin” in 1932 while studying interference on transatlantic radio telephone service. Jansky’s work
revealed radio waves apparently coming from space,
particularly from the plane of the Milky Way. Despite
his recognition of the importance of this discovery,
his work was not immediately appreciated by radio
engineers, astronomers, or physicists.
Following Jansky, individuals like Grote Reber
took up the challenge, building dedicated radio
telescopes in his backyard. Reber produced the first
radio contour maps of the sky, beautifully outlining the
Milky Way and hints of discrete sources. His work
helped demonstrate the potential of studying the cosmos
at radio wavelengths. Early theoretical work sought
to explain the observed galactic background radiation,
considering mechanisms like thermal free-free emission
from interstellar gas. However, calculations showed that
Jansky’s observed brightness temperatures of around
100,000 K at low frequencies could not be explained
by thermal emission from gas at typical interstellar
temperatures of about 10,000 K. This discrepancy pointed
towards non-thermal processes, leading later to the
suggestion of the synchrotron mechanism, involving
cosmic rays in galactic magnetic fields.
Building on these foundational observations
and techniques, radio astronomers sought to extend
spectroscopic analysis (the study of specific wavelengths or frequencies) into the radio domain,
much like optical astronomers used it to determine
stellar properties. A crucial goal in this quest was
the search for a single, profoundly important cosmic
radio signal: the 21 cm line of atomic hydrogen.
4.1 The Bold Prediction (Amidst
Darkness)
The intellectual journey toward discovering this
specific radio line began in a surprisingly challenging
environment: Nazi-occupied Holland during World War
II. Amidst these difficult circumstances, a student named
Hendrik van de Hulst at Leiden chose to continue
his studies after his professor was detained. Working
with J.H. Oort, who was impressed by the potential of
radio waves for studying the galaxy’s structure (having
learned about Reber’s reports from a smuggled journal),
Van de Hulst tackled the theoretical problem of what
specific radio signals interstellar hydrogen might emit.
Van de Hulst’s theoretical work is described as
“remarkable both for its scientific prescience and for
the conditions under which it was produced”. While
he considered other possibilities, his most significant
contribution was the first discussion of the implications
of the 21 cm hyperfine line of atomic hydrogen. This
line arises from a specific, low-energy transition within
the neutral hydrogen atom. Van de Hulst calculated
the wavelength to be “21.1 cm, corresponding to a
frequency of 1420.4 MHz”. He noted that receiver
sensitivity would need to improve significantly for
detection, acknowledging the speculative nature of
this transition but recognizing its immense potential
given hydrogen’s abundance and the ideal wavelength
for interstellar penetration.
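Van de Hulst’s numbers are easy to verify with today’s constants. The sketch below uses the modern measured value of the hyperfine frequency, far more precise than anything available to him in the 1940s:

```python
# CODATA speed of light and the modern measured 21 cm hyperfine frequency.
c = 299_792_458.0          # speed of light, m/s
h = 6.626_070_15e-34       # Planck constant, J s
nu = 1420.405751768e6      # hyperfine transition frequency, Hz

wavelength_cm = c / nu * 100.0
energy_uev = h * nu / 1.602_176_634e-19 * 1e6  # photon energy in micro-eV

print(f"wavelength: {wavelength_cm:.2f} cm")        # ~21.11 cm, matching Van de Hulst
print(f"photon energy: {energy_uev:.2f} micro-eV")  # a tiny, low-energy transition
```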
4.2 Independent Confirmation and
Other Ideas
Working independently in the Soviet Union, Iosif
Shklovsky was also motivated to predict cosmic radio
lines after seeing a brief mention of Van de Hulst’s ideas.
Even without access to the original article, Shklovsky
performed his own calculations concerning the 21 cm
line in a “largely independent manner”.
Shklovsky’s paper was the first to calculate the
probability for the hyperfine transition, achieving a
result “off by only a factor 4” from the correct value.
He argued for collisional excitation of the line and
demonstrated the “feasibility of detection of the line
with available equipment”.
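The transition probability Shklovsky estimated can be illustrated with the modern value of the Einstein A coefficient (an assumption of this sketch, not a figure from his paper; his own result was off by about a factor of 4):

```python
# A10 is the modern Einstein A coefficient for the 21 cm transition.
A10 = 2.87e-15                 # spontaneous emission rate, 1/s
seconds_per_year = 3.156e7

lifetime_yr = 1.0 / A10 / seconds_per_year
print(f"mean radiative lifetime ~ {lifetime_yr:.2e} years")  # ~1.1e7 years
# With spontaneous emission this improbable, only the enormous column of
# hydrogen along a galactic sightline makes the line detectable at all.
```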
Remarkably, Shklovsky’s work also included an
“entirely original second portion” concerning potential
radio line radiation from deuterium and, significantly,
interstellar molecules like OH and CH. He was
“enthusiastic about the odds for detection” of these
molecular transitions. This part of his work, anticipating
the “rich field of galactic microwave spectroscopy” by
two decades, wasn’t widely followed up for several
years.
4.3 The Breakthrough Discovery
Despite these theoretical predictions and
encouraging calculations, the 21 cm line remained
undetected for several years due to the “difficulties
of receiver technology”. When the discovery finally
happened in 1951, it occurred “as so often happens
in modern science”, with detection and confirmation
achieved by three independent groups scattered
around the globe within a very short time.
In the United States, H. I. Ewen and E. M. Purcell
were conducting thesis research at Harvard. They used a
large horn antenna “stuck out a window” and their initial
observations showed neutral hydrogen concentrated
towards the galactic plane. However, their equipment’s
limitations in resolution and frequency coverage meant
they detected only the “tip of the iceberg” of the signal,
missing its full structure and galactic rotation shifts.
In Holland, C.A. Muller and J.H. Oort were
driven to detect the line to study the structure of our
galaxy. They utilized a 7.5 m Würzburg paraboloid antenna abandoned by the German army. Their receiver had similar sensitivity to Ewen’s, but their frequency-switching interval was also not fully adequate for
capturing the full line profiles. Nevertheless, they
successfully measured frequency shifts of the peak
intensity at different galactic longitudes.
The third group in Australia, spurred by immediate
notification from Ewen and Purcell, quickly assembled
equipment and detected the line within months. In
a spirit of scientific cooperation, Ewen and Purcell
insisted that the initial reports from all three groups
“simultaneously appear”, which they did in the 1
September 1951 issue of Nature.
4.4 The Power of the 21 cm Line
The detection of the 21 cm hydrogen line was a
transformative event for astronomy. Neutral hydrogen
is the most abundant element in the universe, and this
line provided a unique tool to study its vast clouds
between the stars.
Crucially, unlike visible light, radio waves at 21
cm wavelength can penetrate the interstellar dust that
obscures much of the Milky Way’s structure from optical
view. This capability allowed astronomers to map the
structure of our own galaxy in unprecedented detail.
By measuring the Doppler shifts of the 21 cm line,
astronomers could determine the velocities of hydrogen
clouds, revealing the Milky Way’s rotation and spiral
structure. Oort himself had pioneered principles of
galactic structure and dynamics that could now be
applied using this new tool. Subsequent large-scale
surveys provided the first comprehensive look at this
spiral structure.
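The Doppler technique described above amounts to a single formula. A minimal sketch, with an illustrative (not historical) frequency offset:

```python
c_kms = 299_792.458          # speed of light, km/s
nu0 = 1420.405751768e6       # 21 cm rest frequency, Hz

def radial_velocity_kms(nu_obs_hz):
    """Non-relativistic Doppler: positive = receding (shifted to lower frequency)."""
    return c_kms * (nu0 - nu_obs_hz) / nu0

# An illustrative hydrogen cloud observed 0.5 MHz below the rest frequency:
print(f"{radial_velocity_kms(nu0 - 0.5e6):.1f} km/s")  # ~105.5 km/s, receding
```

Mapping such velocities against galactic longitude is exactly how the Milky Way’s rotation and spiral arms were traced.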
This period also saw the discovery and initial
study of discrete radio sources, often initially called
“radio stars”. Examples include Cassiopeia A (Cas
A), Cygnus A (Cyg A), Taurus A (Tau A), and Virgo
A (Vir A). Pioneers like Hey, Parsons, and Phillips
discovered intensity variations in Cyg A, opening
up a new line of investigation. Bolton and Stanley
determined that the Cyg A source had a small size. Later,
through painstaking work involving precise position
measurements, these sources were optically identified,
such as Tau A with the Crab Nebula supernova remnant,
Vir A with a galaxy, and most notably, Cyg A with a
distant, peculiar galaxy, and Cas A with a new type of
galactic emission nebulosity. The discovery of Cyg A
as an extragalactic source had significant implications
for cosmology.
Early investigations also included attempts
to detect the Sun at radio wavelengths, initially
unsuccessfully. Success came during WWII through
the accidental detection by James Hey using radar
equipment and independently by George Southworth.
These observations showed that the Sun emitted
surprisingly strong radio radiation, particularly
associated with sunspot activity. Techniques like
using interferometers, analogous to Michelson’s optical
interferometer, were developed to measure the sizes of
sources like the Sun and discrete sources, revealing, for
instance, that Cyg A had a complex, possibly double
structure. Other techniques like Dicke switching were
developed to improve receiver sensitivity.
The period from the late 19th century attempts
to detect solar radio waves through Jansky’s discovery
and Reber’s mapping, culminating in the prediction
and detection of the 21 cm hydrogen line, marked a
revolutionary era. It revealed a radio Universe that was anything but quiescent and limited, enabling unprecedented
studies of our galaxy’s structure and paving the way for
the exploration of distant cosmic radio sources. This
journey, born from theoretical insight and enabled by
improving radio technology and innovative techniques,
demonstrates how pioneering work in radio astronomy
fundamentally changed our understanding of the
universe.
4.5 Conclusion
The journey from theoretical prediction by Van de
Hulst under wartime duress, to independent calculations
by Shklovsky, and finally, the near-simultaneous
detection by three groups across the globe, is a testament
to scientific ingenuity and perseverance. The discovery
of the 21 cm hydrogen line was a watershed moment.
It provided astronomers with an indispensable tool for
studying the most abundant element in the universe and,
critically, for mapping the structure of our own galaxy, a
task impossible with optical astronomy alone due to dust
obscuration. This breakthrough, born from theoretical
insight and enabled by improving radio technology,
perfectly embodies the theme of how pioneering work in
radio techniques and the interpretation of cosmic signals
revolutionized our understanding of the universe. The
21 cm line stands as a prime example of the incredible
secrets the radio sky held, waiting to be unlocked.
References
[Sullivan(1982)] Woodruff Turner Sullivan. Classics in Radio Astronomy. Springer Netherlands, Dordrecht, 1982. ISBN 978-94-009-7754-9, 978-94-009-7752-5. doi: 10.1007/978-94-009-7752-5.
[Pritchard and Loeb(2012)] Jonathan R. Pritchard and Abraham Loeb. 21 cm cosmology in the 21st century. Reports on Progress in Physics, 75(8): 086901, August 2012. ISSN 0034-4885, 1361-6633. doi: 10.1088/0034-4885/75/8/086901.
[Peebles(2020)] P. J. E. Peebles. Quantum Mechanics. Princeton University Press, Princeton, New Jersey, first paperback printing edition, 2020. ISBN 978-0-691-20982-1, 978-0-691-20673-8.
[Kraus(1966)] John D. Kraus. Radio Astronomy. January 1966.
[Condon and Ransom(2016)] J. J. Condon and Scott M. Ransom. Essential Radio Astronomy. Princeton Series in Modern Observational Astronomy. Princeton University Press, Princeton, 2016. ISBN 978-0-691-13779-7.
About the Author
Linn Abraham is a researcher in Physics, specializing in A.I. applications to astronomy. He is currently involved in the development of CNN-based computer vision tools for the prediction of solar flares from images of the Sun, morphological classification of galaxies from optical image surveys, and radio galaxy source extraction from radio observations.
Type I Supernovae: Stellar Explosions
Without Hydrogen
by Sindhu G
airis4D, Vol.3, No.5, 2025
www.airis4d.com
5.1 Introduction
Supernovae are among the most powerful and
dramatic phenomena in the universe. They are stellar
explosions that can briefly outshine an entire galaxy,
releasing an immense amount of energy and dispersing
elements into the interstellar medium. Supernovae are
generally classified into two main types based on their
spectral features: Type I and Type II. The distinguishing
feature of Type I supernovae is the absence of hydrogen
lines in their spectra. These events play a critical role
in astrophysics, from the enrichment of the interstellar
medium to their use as standard candles for measuring
cosmological distances.
Type I supernovae are further subdivided into three
categories—Type Ia, Ib, and Ic—based on the presence
or absence of other elements in their spectra, such as
silicon and helium. Among them, Type Ia supernovae
are especially significant because of their consistent
peak luminosity, which makes them valuable tools in
cosmology.
5.2 Classification and Spectral
Characteristics
5.2.1 Absence of Hydrogen
The classification into Type I is based on the lack
of hydrogen Balmer lines in the optical spectrum near
the peak of the light curve. This contrasts with Type II
supernovae, which show prominent hydrogen lines and
Figure 1: G299 Type Ia supernova remnant. (Image
Credit: NASA/CXC/U.Texas. )
Figure 2: The Type Ib supernova SN 2008D in galaxy
NGC 2770, shown in X-ray (left) and visible light
(right). (Image Credit: NASA / Swift Science Team /
Stefan Immler - NASA. )
originate from massive stars that retain their hydrogen
envelopes.
5.2.2 Subtypes
Type Ia Supernovae
These supernovae are characterized by a strong
silicon absorption line at 6150 Å. Type Ia events result from the thermonuclear explosion of a white dwarf in a binary system, typically a carbon-oxygen white dwarf that accretes matter from a companion star until it approaches the Chandrasekhar limit (about 1.4 solar masses).
Type Ib and Ic Supernovae
These are core-collapse supernovae of massive
stars that have lost their outer envelopes. Type Ib shows
strong helium lines, while Type Ic lacks both hydrogen
and helium. The progenitors are often Wolf-Rayet stars
that have shed their outer layers through stellar winds
or interactions with a binary companion.
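The subtype definitions above reduce to a simple spectral decision tree. The toy function below takes pre-extracted line flags; real classification, of course, works on full spectra rather than booleans:

```python
def classify_supernova(has_hydrogen, has_silicon, has_helium):
    """Toy spectral decision tree for the supernova types described above."""
    if has_hydrogen:
        return "Type II"   # hydrogen Balmer lines present
    if has_silicon:
        return "Type Ia"   # strong Si absorption near 6150 angstroms
    if has_helium:
        return "Type Ib"   # helium lines but no hydrogen
    return "Type Ic"       # neither hydrogen nor helium

print(classify_supernova(False, True, False))   # Type Ia
```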
5.3 Progenitors and Explosion
Mechanisms
5.3.1 Type Ia Progenitors
There are two main scenarios proposed for Type
Ia progenitors:
Single-Degenerate Model: A white dwarf
accretes matter from a non-degenerate companion
(main-sequence or red giant star). When it
approaches the Chandrasekhar limit, it undergoes
a thermonuclear runaway, leading to a supernova
explosion.
Double-Degenerate Model: Two white dwarfs
in a binary system merge due to gravitational
wave emission. If the resulting mass exceeds
the Chandrasekhar limit, it can lead to a similar
thermonuclear explosion.
Recent observations and simulations suggest that
both channels may contribute to the observed population
of Type Ia supernovae, but the relative frequency
remains an area of active research.
5.3.2 Type Ib/Ic Progenitors
Type Ib and Ic supernovae originate from massive
stars (initial masses above 8 solar masses) that have undergone significant mass loss. In these cases, the
core of the star collapses once nuclear fuel is exhausted,
leading to the formation of a neutron star or black hole.
The outer layers are ejected in the supernova explosion.
5.4 Light Curves and Observational
Features
5.4.1 Type Ia Light Curves
Type Ia supernovae exhibit a well-defined light
curve characterized by a rapid rise to peak brightness
followed by a slower decline. The peak luminosity
is powered by the radioactive decay of 56Ni to 56Co and eventually to 56Fe. The uniformity of Type Ia
light curves, after applying corrections for light curve
shape and color, enables their use as standard candles
in cosmology.
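The nickel decay chain can be sketched numerically. The snippet below uses a standard analytic approximation for the radioactive heating rate (coefficients from Nadyozhin 1994, per solar mass of 56Ni, an assumption of this sketch); note this is the energy input, not the emergent light curve, which is further reshaped by photon diffusion through the ejecta:

```python
import math

def radioactive_heating_erg_s(t_days, m_ni_solar=0.6):
    """Instantaneous heating from the 56Ni -> 56Co -> 56Fe chain, using the
    Nadyozhin (1994) analytic approximation, scaled by the 56Ni mass in
    solar masses (0.6 is a typical Type Ia yield)."""
    ni_term = 6.45e43 * math.exp(-t_days / 8.8)      # 56Ni, e-folding time 8.8 d
    co_term = 1.45e43 * math.exp(-t_days / 111.3)    # 56Co, e-folding time 111.3 d
    return m_ni_solar * (ni_term + co_term)

for t in (1, 20, 60):
    print(f"t = {t:3d} d: heating = {radioactive_heating_erg_s(t):.2e} erg/s")
# The slow 56Co term dominates the late-time decline of the light curve.
```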
5.4.2 Type Ib/Ic Light Curves
The light curves of Type Ib and Ic supernovae are
more diverse and typically less luminous than those of
Type Ia. They also show faster decline rates. The energy
source is similar—radioactive decay of nickel—but the
differences arise due to the progenitor structure and
explosion mechanism.
5.5 Cosmological Significance
Type Ia supernovae are essential tools for
measuring cosmic distances. Their consistent peak
brightness allows astronomers to determine distances
with relatively low uncertainty. Observations of distant
Type Ia supernovae in the late 1990s led to the
groundbreaking discovery that the universe’s expansion
is accelerating, a finding that earned the 2011 Nobel
Prize in Physics. This acceleration is attributed to a
mysterious component of the universe known as dark
energy.
The use of Type Ia supernovae as standard candles
involves several corrections, including:
Stretch correction (light curve width–luminosity
relation).
Color correction (dust extinction and intrinsic
color variations).
Host galaxy correction (metallicity and star
formation effects).
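The standard-candle logic behind all of this is the distance modulus. A minimal sketch, assuming a corrected peak absolute magnitude of about -19.3 (a commonly quoted illustrative value, not a figure from the text above):

```python
def luminosity_distance_mpc(m_apparent, m_absolute=-19.3):
    """Distance modulus m - M = 5 log10(d / 10 pc), returned in Mpc."""
    return 10.0 ** ((m_apparent - m_absolute + 5.0) / 5.0) / 1.0e6

# An illustrative supernova observed at peak apparent magnitude 24:
print(f"{luminosity_distance_mpc(24.0):.0f} Mpc")  # ~4571 Mpc
```

It was precisely such distances, measured for high-redshift Type Ia events, that revealed the accelerating expansion.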
5.6 Current Research and Open
Questions
Despite their importance, several questions remain
about Type I supernovae:
What is the dominant progenitor system for Type
Ia supernovae?
How do metallicity and environment affect
supernova properties?
Can we accurately model the explosion
mechanisms in 3D simulations?
How do Type Ib and Ic progenitors evolve, and
what is the role of binary interactions?
Ongoing surveys such as the Zwicky Transient
Facility (ZTF), Pan-STARRS, and the upcoming Vera
C. Rubin Observatory are providing massive datasets
that help answer these questions. Spectroscopic
follow-up, polarimetric observations, and gravitational
wave astronomy are also contributing to a deeper
understanding.
5.7 Conclusion
Type I supernovae are not only spectacular
astrophysical events but also critical tools for
understanding the cosmos. From the thermonuclear
explosions of white dwarfs to the core-collapse of
massive stars, they provide insight into stellar evolution,
nucleosynthesis, and the large-scale structure of the
universe. With new observational capabilities and
improved theoretical models, our understanding of
these phenomena continues to grow, promising exciting
discoveries in the years ahead.
References:
Observational Signatures of Type Ia Supernovae
Type Ia supernova
Type Ib and Ic supernovae
The double-degenerate model for the progenitors
of Type Ia supernovae
Single Degenerate Models for Type Ia
Supernovae
The prolonged death of light from type Ia
supernovae
Modelling Type Ia Supernova Light Curves
Towards a better understanding of the evolution
of Wolf–Rayet stars and Type Ib/Ic supernova
progenitors
ZTF counts more than 10,000 supernovae
Type Ia Supernovae: How DES Used Exploding
Stars to Measure Dark Energy
Type Ia supernovae as stellar endpoints and
cosmological tools
The different types of supernovae explained
Supernovae Ib and Ic from the explosion of
helium stars
Evolutionary Models for Type Ib/c Supernova
Progenitors
Type Ib and Ic Supernovae: Models and Spectra
Near-infrared Spectral Properties of Type Ib/Ic
Supernova Progenitors and Implications for
JWST and NGRST Observations
About the Author
Sindhu G is a research scholar in Physics doing research in Astronomy & Astrophysics. Her research mainly focuses on the classification of variable stars using different machine learning algorithms. She is also working on period prediction for different types of variable stars, especially eclipsing binaries, and on the study of optical counterparts of X-ray binaries.
Part II
Biosciences
Beyond Morphology: How Molecular
Taxonomy is Reshaping Species Identification
by Geetha Paul
airis4D, Vol.3, No.5, 2025
www.airis4d.com
1.1 Introduction
For centuries, the science of taxonomy - the
classification of living organisms - relied almost
exclusively on observable morphological characteristics.
Biologists painstakingly documented and compared
physical traits such as shape, size, colour, anatomical
structures, and developmental patterns to group
organisms and infer evolutionary relationships. This
morphological approach, pioneered by Carl Linnaeus
in the 18th century, formed the foundation of biological
classification and served science well for generations.
However, as our understanding of biological
diversity deepened, the limitations of morphology-
based taxonomy became increasingly apparent. Many
species exhibit remarkable phenotypic plasticity,
where genetically identical organisms can display
vastly different physical characteristics depending on
environmental conditions. Conversely, convergent
evolution often produces similar morphological features
in unrelated species facing similar ecological pressures.
Perhaps most challenging are cryptic species complexes
- groups of organisms that appear identical to human
observers but are genetically distinct species.
These challenges came to a head in the latter
half of the 20th century as new technologies revealed
contradictions between morphological classifications
and molecular evidence. The development of protein
electrophoresis in the 1960s and DNA sequencing in
the 1970s provided the first glimpses of a hidden world
of genetic variation that didn’t always correspond to
visible differences. This technological revolution set
the stage for the emergence of molecular taxonomy as a
distinct discipline.
Molecular taxonomy represents a paradigm shift
in how we define and identify species. By comparing
DNA, RNA, and protein sequences, researchers can
peer directly into an organism’s genetic blueprint,
uncovering evolutionary relationships that may be
obscured at the morphological level. This approach has
resolved numerous taxonomic controversies that had
persisted for decades. For instance, what were once
considered single, widespread species have frequently
been revealed to be complexes of multiple genetically
distinct species. Conversely, organisms classified as
separate species based on morphology have sometimes
been shown to represent mere variants within a single
genetically cohesive population.
The impact of molecular taxonomy extends far
beyond academic debates. In conservation biology,
it helps identify evolutionarily significant units for
protection. In medicine, it enables more accurate
identification of pathogen strains. In agriculture, it
improves our understanding of crop wild relatives
and pests. Perhaps most fundamentally, molecular
taxonomy has transformed our understanding of
biodiversity, revealing that our planet hosts far more
species, particularly microbial ones, than we ever
imagined based on morphological studies alone.
As we stand today, molecular taxonomy
has become an indispensable tool in modern
biology, complementing and superseding traditional
Figure 1: DNA barcoding procedure.
Image courtesy: https://www.researchgate.net/publication/382871147_DNA_Barcoding_and_Its_Applications
morphological approaches in many cases. The
following sections explore how this revolution came
about, its major applications, and the exciting
possibilities it holds for the future of species
identification and classification. From DNA barcoding
to phylogenomics, we will examine how molecular data
reshapes our understanding of life’s diversity at its most
fundamental level.
1.1.1 The Shift from Morphology to
Molecules
Traditional taxonomy relies on anatomical and
morphological comparisons, which can be misleading
due to: Convergent evolution (unrelated species
developing similar traits), High phenotypic plasticity
(single species exhibiting varied forms), Cryptic species
(genetically distinct organisms that appear identical).
1.1.2 The Rise of Molecular Techniques
Molecular taxonomy overcomes these challenges
by analysing genetic differences. Key advances include:
DNA barcoding (e.g., CO1 for animals, ITS for
fungi, rbcl for plants), High-throughput sequencing
(NGS) enabling large-scale genomic comparisons,
Phylogenomics, using hundreds of genes to reconstruct
evolutionary trees.
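The comparison at the heart of DNA barcoding can be sketched in a few lines. The sequences below are invented for illustration (not real CO1 data); real pipelines align reads first and query curated reference libraries such as BOLD:

```python
def percent_identity(seq_a, seq_b):
    """Percent identity between two pre-aligned sequences of equal length."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Invented, toy "barcode" fragments for illustration only:
ref   = "ATGGCACTTAGCCTTCAG"
query = "ATGGCACTAAGCCTACAG"
print(f"{percent_identity(ref, query):.1f}% identity")  # 88.9% identity
```

A common rule of thumb is that CO1 identity above roughly 97-98% suggests conspecifics, while lower values flag candidate distinct (possibly cryptic) species.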
1.2 Key Applications of Molecular
Taxonomy
Discovering Hidden Biodiversity: Molecular Tools
Revealing Natures Secrets
1.2.1 Cryptic Species Unmasked
Molecular taxonomy has revolutionized
our understanding of species boundaries by
exposing widespread cryptic diversity - cases
where morphologically identical organisms are actually
genetically distinct species. A classic example comes
from Eleutherodactylus frogs in the Caribbean, where
what appeared to be a single widespread species was
revealed through mitochondrial DNA analysis to
comprise at least 12 distinct evolutionary lineages,
each warranting species status (Heinicke et al., 2007).
Similar patterns have been found across taxa. Insects: 16% of European butterflies have cryptic sister species (Vila et al., 2011). Marine life: 33% of surveyed fish species hide cryptic diversity (Bickford et al., 2007). Plants: 23% of tropical trees show cryptic speciation (Dick & Webb, 2012).
These discoveries fundamentally alter biodiversity
estimates and conservation priorities, as what was
thought to be one widespread, secure species may
actually be multiple range-restricted threatened species.
1.2.2 The Microbial Dark Matter Revolution
Before 16S rRNA sequencing, microbiologists
could culture and study less than 1% of environmental
microbes. Molecular approaches have increased the number of known bacterial phyla from 12 to over 100 (Hugenholtz et al., 2016) and revealed the Candidate Phyla Radiation, comprising 15-30% of bacterial diversity (Brown et al., 2015). As early as 2007, Fierer et al. noted that a single gram of soil can contain more than 10,000 microbial species (Fierer et al., 2007). The Earth Microbiome Project has now cataloged over 300,000 microbial OTUs (operational taxonomic units), with most representing uncultured lineages (Thompson et al., 2017). This hidden diversity has profound
implications for ecosystem functioning, biotechnology,
and understanding the origins of life.
1.2.3 Resolving Taxonomic Conflicts:
Molecular Data as the Ultimate Arbiter
Reclassifying Iconic Species
African elephants: Mitochondrial and nuclear DNA proved Loxodonta africana (savanna) and L. cyclotis (forest) diverged 2-7 million years ago, warranting separate species status (Roca et al., 2015). Giraffes: Genomic data revealed four distinct species rather than one (Fennessy et al., 2016). Orcas: Mitochondrial genome studies suggest at least three separate species exist (Morin et al., 2010). These reclassifications dramatically affect conservation status assessments and management strategies.
1.2.4 Hybridisation Detection and Its
Consequences
Molecular markers excel at identifying hybrids
where morphology fails. Canid hybrids: SNPs
distinguish gray wolf (Canis lupus) × coyote (C. latrans)
hybrids, crucial for endangered wolf conservation
(vonHoldt et al., 2016), Coral reefs: RADseq revealed
widespread hybridization among Acropora species, with
implications for reef resilience (Richards et al., 2018),
Plants: Chloroplast DNA tracks historical hybridization
in oaks, explaining their rapid adaptation (Hipp et al.,
2020). Hybrid zones serve as natural laboratories for
studying speciation, and molecular tools are essential
for mapping these dynamic boundaries.
1.2.5 Conservation and Forensics: Molecular
Taxonomy in Action
Combating Wildlife Trafficking
DNA barcoding has become a frontline tool against
illegal wildlife trade. Shark fins: CO1 barcoding
identified 71 species in Hong Kong markets, including
endangered scalloped hammerheads (Cardeñosa et al.,
2018). Timber: matK and rbcL barcodes expose illegal
logging of protected Dalbergia and Pterocarpus species
(Dormontt et al., 2015). Caviar: Microsatellites verify species origin in 30% of commercial products (Birstein et al., 2005).
Figure 2: Application of Chloroplast DNA Barcodes in Various Fields
Interpol’s Wildlife Crime Working Group now
routinely employs genetic forensics, with conviction
rates increasing 300% since 2015 (INTERPOL, 2021).
1.2.6 Genetic Rescue of Endangered Species
Population genomics guides critical conservation
decisions. Florida panther (Puma concolor coryi): Introduction of Texas cougars based on genetic data increased heterozygosity by 90% and reduced abnormalities (Johnson et al., 2010). Kakapo parrots: Genome sequencing informs breeding pairs to maximize genetic diversity (Dussex et al., 2021). Coral reefs: SNP arrays identify heat-resistant Acropora genotypes for reef restoration (Matz et al., 2018).
The IUCN now recommends genomic assessments
for all threatened species, recognising that genetic
diversity predicts long-term viability (Hoban et al.,
2020). Molecular taxonomy thus bridges the gap
between species discovery and species survival.
1.3 Challenges and Future Directions
Cost and accessibility: Advanced sequencing
remains expensive in developing regions. Data
interpretation: Phylogenetic conflicts arise from
horizontal gene transfer (HGT) or incomplete lineage
sorting. Ethical considerations: Should genetic
divergence alone define species, or should ecology
and morphology still play a role? Future innovations,
like portable sequencers (e.g., Oxford Nanopore) and
AI-driven phylogenetics, promise to democratise and
refine the field.
1.4 Conclusion
Molecular taxonomy has revolutionised our
understanding of biodiversity by transcending
the limitations of traditional morphology-
based classification. Through DNA barcoding,
phylogenomics, and high-throughput sequencing, it
has uncovered cryptic species, resolved long-standing
taxonomic conflicts, and revealed the astonishing
diversity of microbial life. Beyond academic
significance, these tools have become indispensable
for conservation—combating wildlife trafficking,
guiding genetic rescue efforts, and informing species
management policies. As portable sequencing and
AI-driven analysis democratize access to genetic data,
molecular taxonomy is poised to address emerging
challenges, from delineating species in microbes
to reconciling genomic and ecological data. This
paradigm shift has not only refined our classification
systems but fundamentally transformed how we
perceive, study, and protect life on Earth, proving that
the true boundaries of biodiversity often lie hidden in
the genome. Moving forward, molecular approaches
will remain central to both understanding evolution
and mitigating biodiversity loss in an era of rapid
environmental change.
References
https://doi.org/10.1038/nature14486
https://doi.org/10.1016/j.tree.2006.11.004
https://doi.org/10.1007/978-1-61779-591-6_18
https://doi.org/10.1128/AEM.00358-07
https://doi.org/10.1073/pnas.0611051104
https://doi.org/10.1101/cshperspect.a018085
https://doi.org/10.1111/j.1439-0426.2005.00664.x
https://doi.org/10.1038/nature24621
https://doi.org/10.1016/j.biocon.2020.108654
https://www.interpol.int/en/Crimes/Environmental-crime/Wildlife-crime
https://doi.org/10.1126/science.1192891
Hebert, P. D., Cywinska, A., Ball, S. L., &
deWaard, J. R. (2003). Biological identifications
through DNA barcodes. Proceedings of the Royal
Society B: Biological Sciences, 270(1512), 313-321.
Woese, C. R., Kandler, O., & Wheelis, M. L.
(1990). Towards a natural system of organisms: Proposal
for the domains Archaea, Bacteria, and Eucarya.
Proceedings of the National Academy of Sciences,
87(12), 4576-4579.
Hillis, D. M., Moritz, C., & Mable, B. K. (1996).
Molecular Systematics (2nd ed.). Sinauer Associates.
Ratnasingham, S., & Hebert, P. D. (2007). BOLD:
The Barcode of Life Data System. Molecular Ecology
Notes, 7(3), 355-364.
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Part III
Computer Programming
Classifying Parallelism: Part-2
by Ajay Vibhute
airis4D, Vol.3, No.5, 2025
www.airis4d.com
1.1 Introduction
As discussed in earlier sections, parallelism can
be implemented through various models that distribute
computation across multiple processors or tasks, such
as data and task parallelism. However, there are also
more detailed forms of parallelism that take advantage
of the specific capabilities of individual processor
cores, offering further performance improvements. By
exploring these lower-level parallelism techniques, we
can achieve additional optimization within each core,
resulting in a significant boost in overall throughput and
a reduction in the time required to perform complex
operations.
These finer levels of parallelism include bit-
level, instruction-level, and thread-level parallelism.
Each of these focuses on different aspects of how
instructions are handled within the processor, providing
distinct opportunities for optimization at various stages
of the computational process. In the following
subsections, we will explore these foundational levels of
parallelism, emphasizing their strong connection with
the processor’s internal architecture and their critical
role in achieving high-performance computing.
1.2 Bit-Level Parallelism
Bit-level parallelism is a form of parallelism that
focuses on the simultaneous processing of individual
bits within larger data units. This capability is
primarily driven by the processor’s register width, which
determines the number of bits the processor can operate
on concurrently in a single operation. In traditional 8-bit
or 16-bit processors, operations could only be performed
on data in chunks of 8 or 16 bits. The advancement
from 8-bit and 16-bit processors to today’s prevalent
32-bit and 64-bit designs has significantly enhanced the
capacity for parallel bit manipulation, directly resulting
in improved performance.
Bit-level parallelism proves particularly
advantageous in tasks that heavily utilize bitwise
operations, such as the complex calculations in
cryptography (exemplified by AES), algorithms used
for data compression, and certain techniques in signal
processing. For example, by increasing the processor’s
data-handling capacity from 32 to 64 bits, the number
of bits processed in parallel for these operations
doubles, leading to substantial speedups.
Furthermore, the efficiency of bit-level parallelism
can be increased through the integration of SIMD
(Single Instruction, Multiple Data) units, which enable
the parallel execution of the same bitwise instruction
across multiple data elements. While bit-level
parallelism by itself may not provide as significant
a speedup as higher levels of parallelism, it plays a vital
role in optimizing performance, especially in tasks that
require extensive low-level data manipulation.
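The idea can be illustrated with a short sketch. The two functions below XOR a pair of equal-length buffers, first one byte (8 bits) at a time and then one 64-bit word at a time, mirroring what a wider ALU does in a single operation. (This is only an illustration of the concept: the actual speedup comes from the hardware's word-wide datapath, not from the Python interpreter itself.)

```python
import struct

def xor_bytewise(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length buffers one byte (8 bits) at a time."""
    return bytes(x ^ y for x, y in zip(a, b))

def xor_wordwise(a: bytes, b: bytes) -> bytes:
    """XOR the same buffers one 64-bit word at a time.

    Each loop iteration consumes 8 bytes at once, the way a 64-bit
    ALU processes 64 bits in a single instruction. The buffer length
    is assumed to be a multiple of 8.
    """
    out = bytearray()
    for i in range(0, len(a), 8):
        wa, = struct.unpack_from("<Q", a, i)  # read 8 bytes as one word
        wb, = struct.unpack_from("<Q", b, i)
        out += struct.pack("<Q", wa ^ wb)     # one 64-bit XOR
    return bytes(out)

# Both routines compute the same result; the word-wise version
# needs one eighth as many XOR operations.
a = bytes(range(16))
b = bytes(reversed(range(16)))
```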
1.3 Instruction-Level Parallelism
Instruction-Level Parallelism (ILP), shown in Figure 1,
allows a processor to execute multiple instructions
simultaneously within a single cycle. Unlike higher-
level parallelism models, which focus on distributing
entire tasks or datasets across multiple processors,
ILP optimizes the performance of a single processor
by increasing the number of instructions that can
be executed in parallel.

Figure 1: Instruction Level parallelism

Modern processors use various techniques, such as pipelining and
out-of-order execution, to achieve ILP. In pipelining, the
execution of instructions is divided into stages (e.g.,
fetching, decoding, executing, and writing back results),
with each stage processing a different instruction.
By overlapping the stages of multiple instructions,
pipelining enables the concurrent execution of several
instructions within the same clock cycle, significantly
increasing throughput.
Out-of-order execution further enhances ILP by
allowing the processor to execute instructions that are
not dependent on each other, even if they appear later
in the instruction stream. This feature is especially
useful in avoiding pipeline stalls caused by data hazards,
where instructions depend on the results of previous
instructions.
Despite its potential, the effectiveness of ILP
is restricted by several factors, including instruction
dependencies, the availability of execution units, and
hardware constraints. For example, data hazards (where
one instruction depends on the output of another) can
limit the degree of parallelism achievable through ILP.
Moreover, control hazards, which occur when branching
instructions alter the flow of execution, can also lead to
pipeline stalls and reduce efficiency. Modern processors
use techniques such as branch prediction and speculative
execution to address these issues, but the degree of ILP
remains constrained by the inherent characteristics of
the instruction stream.
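The effect of data dependencies on ILP can be sketched in code. In the first function below, every addition depends on the result of the previous one, forming a single dependency chain; the second splits the work into four independent accumulators, which a superscalar, out-of-order core can execute in parallel before combining them at the end. (A hedged sketch: the benefit materialises in compiled code running on such a core, not in the Python interpreter, but the dependency structure is the same in any language.)

```python
def sum_chained(xs):
    """One long dependency chain: each add waits for the previous add."""
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_unrolled(xs):
    """Four independent dependency chains, combined only at the end.

    Because s0..s3 do not depend on each other, a superscalar core can
    issue their additions in the same cycle, exposing ILP.
    """
    s0 = s1 = s2 = s3 = 0.0
    n = len(xs) - len(xs) % 4
    for i in range(0, n, 4):
        s0 += xs[i]
        s1 += xs[i + 1]
        s2 += xs[i + 2]
        s3 += xs[i + 3]
    for x in xs[n:]:   # leftover elements
        s0 += x
    return s0 + s1 + s2 + s3

xs = list(range(10))
```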
1.4 Thread-Level Parallelism
Thread-level parallelism (TLP), shown in Figure 2, leverages
the multi-core design of contemporary processors
by enabling the simultaneous execution of multiple
threads. A thread represents an independent sequence
of instructions that can be executed in parallel with
others. By running multiple threads concurrently
on different processor cores, TLP can significantly
increase computational throughput, particularly in multi-
threaded applications that are well-suited to this model
of parallelism.

Figure 2: Thread Level parallelism

TLP can be implemented using various
architectural approaches, including shared-memory
architectures—where multiple threads access a common
memory space—and distributed-memory systems, in
which each processor or core operates independently
with its own local memory. The advent of multi-core
processors has made TLP one of the most effective
forms of parallelism in modern computing, as it allows
software to exploit concurrency inherent in parallel
workloads.
Maximizing the efficiency of TLP requires careful
management of synchronization and shared data
access. Without robust synchronization mechanisms,
concurrent threads may access shared resources
unsafely, resulting in race conditions and potentially
leading to unpredictable behavior or system errors.
To mitigate such issues, synchronization primitives
such as locks, semaphores, and atomic operations
are commonly employed. Additionally, maintaining
balanced workloads across threads is essential; uneven
task distribution can lead to idle cores and reduced
overall efficiency.
Although TLP offers substantial performance
benefits for applications with high degrees of intrinsic
parallelism, it is not universally applicable. It is
generally less effective for problems characterized by
sequential logic or strong interdependencies between
operations. In such cases, the overhead associated with
managing threads—such as spawning, synchronization,
and coordination—can introduce delays that diminish
or negate potential performance gains.
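A minimal sketch of TLP with one of the synchronization primitives mentioned above: four threads increment a shared counter, with a lock guarding each update so that no increment is lost to a race condition. (Note that CPython's global interpreter lock prevents true parallel execution of CPU-bound threads, so this sketch illustrates the synchronization pattern rather than a speedup; the same structure applies in languages with genuinely parallel threads.)

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    """Increment the shared counter under a lock."""
    global counter
    for _ in range(increments):
        with lock:          # without this, concurrent read-modify-write
            counter += 1    # sequences could interleave and lose updates

# Spawn four threads, run them concurrently, and wait for all to finish.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock in place, the final value of `counter` is exactly 4 × 10,000; removing the lock would make the result nondeterministic.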
1.5 Summary
In summary, bit-level, instruction-level, and thread-
level parallelism represent fundamental approaches
for optimizing the performance of modern processors.
By exploiting these different forms of parallelism,
systems can achieve greater computational efficiency,
making them essential in areas ranging from high-
performance computing (HPC) to real-time processing
applications. While each level of parallelism offers
distinct advantages, their combined use in multi-core,
superscalar processors allows for the efficient handling
of diverse computational workloads. Understanding the
trade-offs and limitations of each type of parallelism is
crucial for designing systems that effectively harness
the power of modern hardware.
About the Author
Dr. Ajay Vibhute is currently working
at the National Radio Astronomy Observatory in
the USA. His research interests mainly involve
astronomical imaging techniques, transient detection,
machine learning, and computing using heterogeneous,
accelerated computer architectures.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical Fauna and Flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the
site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that
can feed birds and maintains water bodies to help them survive the drought.