Cover page
Image Name: AI for the Planet (Image courtesy)
While AI could become an extra dimension for human creativity and welfare, it could also become a threat if
misused by human greed and fanaticism. The picture depicts the hope that we will use AI to benefit all life forms
and conserve our ecosystems and biodiversity.
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.2, No.11, 2024
www.airis4d.com
This edition also introduces the News Desk, highlighting the airis4D outreach initiative "Igniting Young Minds" led by Geetha Paul and Ninan Sajeeth Philip
through the Vigyan Jyothi program at Jawahar Navo-
daya Vidyalaya School in Pathanamthitta and St. Thomas
College, Kozhenchery. The event aimed to inspire
students toward science and technology, with support
from airis4D team members Sindhu G. and Jinsu Ann
Mathew.
This edition carries two articles on the 2024 Nobel Prize in Physics. In "The Legacy of Hopfield and Hinton", Arun Aniyan explores the transformative
contributions of John Hopfield and Geoffrey Hinton
to artificial intelligence, computational neuroscience,
and machine learning. Originally a physicist, Hop-
field pioneered the Hopfield network, using concepts
from statistical mechanics to explain neural memory
storage, bridging physics with computational neuro-
science. Drawing from a rich scientific lineage, Hin-
ton revolutionised neural networks with breakthroughs
like backpropagation and capsule networks, advancing
AI’s capacity for complex data representations. Their
academic ancestries reveal a legacy of interdisciplinary
influence, connecting physics, biology, and chemistry
through contributions from renowned scientists, high-
lighting the collaborative nature of scientific advance-
ment. Today, students of Hopfield and Hinton continue
to lead in AI, exemplifying how mentorship, knowledge
transfer, and collaborative networks fuel progress and
sustain innovation across generations.
Blesson George introduces the significance of the 2024 Nobel Prize for Physics, which was awarded to John Hopfield and Geoffrey Hinton for their foundational discoveries that enabled machine learning with artificial neural networks, marking a shift
in how physics-driven methodologies impact AI. In-
spired by neurobiology and atomic spin physics, Hop-
field’s neural network laid the groundwork for energy-
based models, while Hinton's Boltzmann machine ex-
panded machine learning using principles from statis-
tical physics. This award stirred debate about AI’s
fit within physics, as critics argue it diverges from
traditional boundaries and experimental evidence re-
quirements. However, the Nobel Committee defended
their choice, highlighting the role of physics in shaping
transformative technologies. This recognition reflects
a broader view of physics as an interdisciplinary force,
fueling advancements across various fields.
The article ”Will AI Bring an End to Humanity?”
by Ninan Sajeeth Philip discusses the potential dan-
gers of AI, especially regarding inequality, ethics, and
misuse in warfare. It emphasises that the threat isn't
AI itself but how it’s used, particularly by powerful
entities that could exploit technology for surveillance,
autonomous weapons, and manipulation. The author
warns that unequal access to AI might deepen social
divides and fuel geopolitical conflicts. The article sug-
gests collaborative frameworks for responsible AI use,
equitable technology access, and ethical standards in its
development and deployment to address these issues.
In the Black Hole Stories-13, More About Black
Hole Spin Measurements, Ajit Kembhavi explores the
complex techniques for measuring black hole spins,
focusing on X-ray reflection spectra emitted from ac-
cretion disks surrounding supermassive black holes.
Building on the analysis of galaxy MCG-6-30-15, he
discusses how the spin impacts the shape of specific
X-ray emission lines, especially the 6.4 keV iron line.
Modifications in the reflection spectrum due to Comp-
ton scattering, fluorescence, and relativistic effects are
key to spin determination. Observations, especially
using high-resolution X-ray satellites, have shown that
many supermassive black holes have high spin val-
ues, often clustering above 0.5, indicating rapid rota-
tion. Further research and technological advancements
aim to refine these measurements, potentially provid-
ing more insights into black hole characteristics and
behaviours.
In "Learning is Better than Programming", Linn
Abraham highlights the transformative contributions
of Geoffrey Hinton to artificial intelligence, specifi-
cally through machine learning with neural networks.
The article discusses the 2024 Nobel Prize in Physics
awarded to Hinton and John J. Hopfield for foun-
dational discoveries in neural networks, emphasising
Hinton's role in advancing AI with innovations like
backpropagation, Restricted Boltzmann Machines (RBMs),
and Convolutional Neural Networks (CNNs). The ar-
ticle details AlexNet, a CNN model developed by Hinton's team, which achieved groundbreaking results in
the 2012 ImageNet Challenge, marking a turning point
in computer vision. By allowing the model to learn
features from data rather than relying on manual pro-
gramming, Hinton's work showcased the power of deep
learning, emphasising the value of data and computa-
tional power over manual coding in solving complex
tasks.
The article "Applications of AI in Astronomy and Astrophysics" by Sindhu G highlights the transforma-
tive role of artificial intelligence in modern astron-
omy. AI’s integration with machine learning (ML)
and deep learning (DL) algorithms is enhancing data
analysis, classification, and real-time detection, allow-
ing astronomers to analyse vast datasets generated by
advanced telescopes like the James Webb Space Tele-
scope. By automating processes such as star and galaxy
classification, AI enables researchers to focus on sig-
nificant discoveries, with predictive models aiding in
forecasting cosmic events. For instance, convolutional
neural networks (CNNs) efficiently classify stars and
galaxies, while recurrent neural networks (RNNs) im-
prove the accuracy of exoplanet detection and charac-
terisation. AI models help assess exoplanet habitabil-
ity by analysing transit and radial velocity data, paving
the way for groundbreaking discoveries about potential
life-supporting planets.
Moreover, AI enhances studies of variable stars,
X-ray binaries, and gravitational waves by automating
light curve analysis and improving signal detection ac-
curacy. This is especially crucial for capturing weak
gravitational wave signals amidst noise. AI also ad-
vances the detection of Fast Radio Bursts (FRBs) and
aids in multi-messenger astronomy, where AI frame-
works process data from different wavelengths to cap-
ture associated phenomena. In cosmology, AI-powered
simulations allow researchers to model dark matter
and understand its role in cosmic evolution. AI-based
image processing techniques such as deblurring, de-
noising, and super-resolution improve telescope data
quality, enhancing our ability to explore the universe.
Through these diverse applications, AI is reshaping as-
tronomy and astrophysics by enabling discoveries once
considered impossible.
In the article "Harnessing Artificial Intelligence for Biodiversity Conservation: A New Era of Ecological Understanding", Geetha Paul explores how AI
revolutionises conservation efforts. AI’s integration
into biodiversity research enhances data collection,
species identification, and habitat monitoring, provid-
ing real-time insights that were previously challenging
to obtain. Technologies such as acoustic monitoring,
drones, and camera traps powered by deep learning al-
gorithms allow for precise wildlife tracking and proac-
tive conservation actions. Predictive analytics is vi-
tal in foreseeing risks like habitat loss due to climate
change, helping conservationists devise timely solu-
tions. Ethical considerations surrounding AI use, such
as data privacy and employment impacts, underscore
the need for responsible implementation. Ultimately,
AI’s transformative potential in conservation enables
more effective, data-driven strategies that can secure
biodiversity for future generations.
The article "Bronchopulmonary Dysplasia in Preterm Infants: Addressing the Challenges and Advancements in Management" by Kalyani Bagri highlights the com-
plexities of Bronchopulmonary Dysplasia (BPD), a
chronic lung disease in preterm infants, especially those
born before 28 weeks. BPD can lead to severe respira-
tory issues, prolonged hospital stays, and neurodevel-
opmental delays, impacting long-term quality of life.
While advancements like surfactant therapy and less
invasive ventilation methods have improved outcomes,
preventing BPD remains challenging. Treatment op-
tions, such as postnatal corticosteroids, are cautiously
used due to potential adverse effects. Additionally,
the article discusses the development of India-specific
predictive models to assess BPD risks better. Emerg-
ing treatments, including stem cell therapy and genetic
research, show promise for more personalised and ef-
fective management strategies for this high-risk group.
The article ”Ethical Implications of Using AI in
Scientific Research” by Jinsu Ann Mathew discusses
the transformative role AI plays in accelerating scien-
tific discoveries, while emphasizing key ethical chal-
lenges. Key concerns include ensuring accuracy and
trustworthiness, as AI models can produce mislead-
ing outcomes if trained on biased or incomplete data,
especially in critical fields like drug discovery and en-
vironmental science. Transparency in AI, or explain-
able AI (XAI), is also essential for fostering trust by
making AI’s decision-making processes interpretable.
Additionally, biases within AI systems can perpetu-
ate social inequalities if not managed responsibly. To
address these issues, the article suggests a framework
for responsible AI in research, advocating for rigor-
ous testing, transparency, bias mitigation, interdisci-
plinary collaboration, and continuous oversight to en-
hance AI’s ethical and reliable contribution to scientific
progress.
Igniting Young Minds
by News Desk
airis4D, Vol.2, No.11, 2024
www.airis4d.com
Geetha Paul and Ninan Sajeeth Philip with students after an outreach activity under the Vigyan Jyothi program at Jawahar Navodaya Vidyalaya School, Pathanamthitta, and at St Thomas College, Kozhencheri, where Sindhu G and Jinsu Ann Mathew from airis4D were also present.
Contents
Editorial ii
Igniting Young Minds vi
I Artificial Intelligence and Machine Learning 1
1 The Legacy of Hopfield and Hinton 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 John Hopfield . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Geoffrey Hinton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 What does the Academic Ancestry Teach Us ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Physics in the 2024 Nobel Prize for Physics 7
2.1 Contributions of John Hopfield and Geoffrey Hinton . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Physics Employed in Hopfield Network and Boltzmann Machine . . . . . . . . . . . . . . . . . . 8
2.3 Spin System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Will AI Bring an End to Humanity? 11
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Inequality and Access to AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Ethical Dilemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Misuse of AI in Warfare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.5 Mitigation Methods for Reducing the AI Divide . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
II Astronomy and Astrophysics 15
1 Black Hole Stories-13
More About Black Hole Spin Measurements 16
1.1 The Reflection Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Modifications of the Reflection Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3 The Distribution of the Spin Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2 Learning is Better than Programming 20
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 ImageNet Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 AlexNet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4 The Legacy of AlexNet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Applications of AI in Astronomy and Astrophysics 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Classification and Identification of Celestial Objects . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3 Exoplanet Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Variable Stars and X-ray Binary Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.5 Gravitational Wave Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.6 Fast Radio Burst (FRB) Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.7 Cosmology and Dark Matter Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.8 Image Processing and Enhancement for Telescope Data . . . . . . . . . . . . . . . . . . . . . . . 27
3.9 Astroparticle Physics and Cosmic Ray Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
III Biosciences 30
1 Harnessing Artificial Intelligence for Biodiversity Conservation: A New Era of Ecolog-
ical Understanding 31
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.2 The Challenge of Biodiversity Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.3 The Role of AI in Biodiversity Conservation and Data Collection Efficiency . . . . . . . . . . . 32
1.4 Wildlife Monitoring and Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.5 Wildlife Conservation Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.6 Wildlife Conservation Tools and Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.7 Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.8 Behavioural Studies Through Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.9 Ethical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.11 Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2 Bronchopulmonary Dysplasia in Preterm Infants: Addressing the Challenges and Ad-
vancements in Management 37
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.2 Mortality and Morbidity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.3 Short-Term and Long-Term Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.4 Pathogenesis of BPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.5 Prevention and Treatment Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 Postnatal Corticosteroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Predictive Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.8 Emerging Treatments and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3 Ethical Implications of Using AI in Scientific Research 42
3.1 Ensuring Accuracy and Trustworthiness in AI Findings . . . . . . . . . . . . . . . . . . . . . . . 42
3.2 The Transparency Challenge: Opening the “Black Box” . . . . . . . . . . . . . . . . . . . . . . . 43
3.3 The Risk of Bias in AI Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4 Moving Forward: A Framework for Responsible AI in Research . . . . . . . . . . . . . . . . . . 44
Part I
Artificial Intelligence and Machine Learning
The Legacy of Hopfield and Hinton
by Arun Aniyan
airis4D, Vol.2, No.11, 2024
www.airis4d.com
1.1 Introduction
When the Nobel Prize for Physics in 2024 was announced, the major scientific journal Nature published
an article “Physics Nobel scooped by machine-learning
pioneers”. Whether it was scooped by people who
are not pure physicists is another debate. Many even
among the scientific community are not fully aware of
the background and history of both John Hopfield and
Geoffrey Hinton. The community of young researchers
who have grown under both of them has already made
a huge impact, not only in the area of machine learning
but also in areas like chemistry, biology, medicine, and
so on.
It is interesting and important to understand the
ancestry as well as the legacy of both Hopfield and
Hinton. If one looks at the connection tree of previous Nobel Prize winners, it is intriguing to note that the people connected to the winners come from the same groups and have also won similarly prestigious awards.
1.2 John Hopfield
John Joseph Hopfield stands as one of the most
influential figures at the intersection of physics, neuro-
science, and computer science. Born on July 15, 1933,
in Chicago, Illinois, Hopfield emerged from a family deeply rooted in science: his father was a physicist and his mother a professional chemist, setting the stage for his future interdisciplinary achievements.
Hopfield’s academic journey began at Swarth-
more College, where he earned his undergraduate de-
gree in physics. He went on to complete his Ph.D. in
physics at Cornell University in 1958 under the supervision of Albert Overhauser. His early
career focused on solid-state physics, but his intellec-
tual curiosity would eventually lead him to revolution-
ize our understanding of neural networks.
Throughout his career, Hopfield held prestigious
positions at multiple institutions. He served on the
faculty at the University of California, Berkeley (1961-
1964), Princeton University (1964-1980), and the Cal-
ifornia Institute of Technology (1980-1997), before re-
turning to Princeton. This academic journey reflects
not just institutional affiliations, but a remarkable intel-
lectual evolution from pure physics to the foundations
of computational neuroscience.
His most transformative work came in 1982 when
he introduced what became known as the Hopfield network, a breakthrough that fundamentally changed our
understanding of how neural networks can process in-
formation and store memories. This work brilliantly
bridged the gap between physics and biology, apply-
ing concepts from statistical physics to explain how
networks of neurons could exhibit collective computa-
tional properties. This contribution won him the Nobel
Prize in Physics.
The genius of the Hopfield model lay in its con-
nection to physics. He showed that neural networks
could be understood through the lens of statistical me-
chanics, particularly using the concept of energy land-
scapes. By mapping neural activity to energy states, he
demonstrated how networks could converge to stable
states, representing memories or solutions to computa-
tional problems. This physical interpretation provided
a powerful framework for understanding both artificial
and biological neural networks.
Before his neural network breakthroughs, Hop-
field made significant contributions to molecular biol-
ogy. His work on kinetic proofreading explained how
biological systems achieve high accuracy in molecu-
lar processes like DNA replication and protein syn-
thesis. He proposed that cells use energy-driven non-
equilibrium processes to enhance the accuracy of molec-
ular recognition, a concept that became fundamental to
our understanding of cellular processes.
In the 1990s and 2000s, Hopfield turned his at-
tention to understanding how biochemical networks in
cells make decisions. He developed theoretical frame-
works for understanding how cellular circuits process
information and make reliable decisions despite the in-
herent noise in biological systems. His work helped
explain how cells can achieve reliable behavior from
unreliable components.
Figure 1 shows the ancestral tree of Hopfield, and one can see many prominent figures in Physics in his ancestry. One of the most prominent ancestors close to Hopfield is Paul Ehrenfest, an interesting scientific figure whose doctoral student Jan Tinbergen won the first Nobel Memorial Prize in Economics. Ehrenfest was originally trained in Chemistry
and his higher studies shifted toward Physics. His con-
tributions were mainly in thermodynamics, statistical
mechanics, and the foundations of quantum mechanics.
Going a level up, a major figure in the ancestry is Ludwig Boltzmann, who put forward the statistical definition of entropy. Entropy is one of the key principles Hopfield used in formulating his model of neural learning. Boltzmann made most of the major contributions to the area of thermodynamics, and it is worth noting that his contributions span both physics and chemistry.
Other prominent figures in the ancestral tree include Hermann von Helmholtz, Max Planck, A.M. Lyapunov, Theodore Lyman, Karl Schwarzschild, Carl Runge, David Hilbert, and J.J. Thomson. They
are the major pillars of modern Physics and Mathemat-
ics.
Figure 1: Academic ancestral tree of John Hopfield
1.3 Geoffrey Hinton
Geoffrey Everest Hinton was born on December
6, 1947, in Wimbledon, London. He comes from a distinguished lineage of scientists: he is the great-great-grandson of George Boole, whose Boolean algebra be-
came the foundation of computer science, and the son
of Howard Hinton, an entomologist. This rich scien-
tific heritage would later influence his groundbreaking
work in artificial intelligence.
Hinton's academic journey began at King's Col-
lege, Cambridge, where he studied experimental psy-
chology and physiology, graduating in 1970. He went
on to complete his Ph.D. in artificial intelligence from
the University of Edinburgh in 1978. During this pe-
riod, he developed a deep interest in understanding how
the human brain processes information and how these
principles could be applied to machines.
In the 1970s and 80s, Hinton pioneered new ap-
proaches to training neural networks, such as the back-
propagation algorithm, which allowed multi-layer neu-
ral networks to learn complex representations from
data. This was a significant advancement, as earlier
neural network models were limited to shallow, single-
layer architectures. However, during this period, neural
networks fell out of favor with the majority of AI re-
searchers, who favored symbolic, rule-based systems.
Hinton persisted in his belief that neural networks held
the key to achieving human-like intelligence in ma-
chines, continuing to refine his ideas and publish sem-
inal papers on the topic.
Hinton's big breakthrough came in the late 2000s,
when he and his students demonstrated the power of
deep neural networks - neural networks with multi-
ple hidden layers. By leveraging increased computing
power and larger datasets, deep neural networks were
able to learn rich, hierarchical representations of data,
far surpassing the capabilities of shallow models.
Hinton's work on deep belief networks, convo-
lutional neural networks, and related deep learning
techniques laid the foundations for the current AI rev-
olution. These methods have enabled transformative
applications in computer vision, natural language pro-
cessing, speech recognition, and beyond.
Throughout his career, Hinton has made signif-
icant theoretical contributions to understanding how
neural networks learn and represent information. His
work on topics like the "credit assignment problem", neural network optimization, and the role of unsuper-
vised pre-training has advanced the fundamental sci-
ence behind deep learning. For example, his research
on "capsule networks" proposed a new neural network
architecture that better captures the hierarchical, part-
whole relationships in data, an important step towards
more human-like perception and reasoning.
Hinton has held various prestigious positions. He
worked at the University of California San Diego, Carnegie
Mellon University, and notably at the University of
Toronto, where he has been a professor since 1987.
In 2013, he joined Google (now Alphabet) as part of
their acquisition of DNNresearch Inc., while maintain-
ing his position at the University of Toronto. He also
served as the Chief Scientific Advisor for the Vector
Institute for Artificial Intelligence.
The academic ancestral tree of Hinton is very interesting in many respects. Although Hinton's basic degree is in Experimental Psychology, with some background in Chemistry, his ancestors span a large area of science. His doctoral advisor, Hugh Christopher Longuet-Higgins, was a Professor of Chemistry who later in his career shifted to Cognitive Science, which was a relatively new area in the late 1960s.
One of Longuet-Higgins's advisors, Robert S. Mulliken, was a Nobel Prize winner in Chemistry and a pioneer of molecular orbital theory. Another ancestor of Hinton,
Charles Alfred Coulson, was both a mathematician and
a chemist. His doctoral students included prominent
figures like David Eisenberg and Peter Higgs, who also
won a Nobel Prize.
A level higher up, another ancestor of Hinton, Fritz Haber, is another Nobel Laureate in Chemistry, known for the Haber-Bosch process. Interestingly, he is also known as the "Father of Chemical Warfare" because he developed the deadly chemical weapons used during the First World War.
Apart from Hinton’s ancestors, his students are
modern pioneers in the area of artificial intelligence.
The top AI leaders today who are running large AI
Figure 2: Academic ancestral tree of Geoffrey Hinton
companies are mostly students or postdocs who worked
under Hinton. Yann LeCun, the pioneer of Convolutional Neural Networks, was Hinton's postdoc. Another postdoc, Peter Dayan, is known for his work on Q-Learning, one of the most important formulations used in reinforcement learning.
Ilya Sutskever and Alex Krizhevsky are two of Hinton's most famous students; they came up with the AlexNet architecture, which drastically changed the future of computer vision applications using deep learning. Sutskever is also a co-founder of OpenAI and now leads Safe Superintelligence Inc.
This year, the Nobel Prize for Chemistry was awarded in part to DeepMind's Demis Hassabis. A large fraction of DeepMind's researchers and engineers have worked in Hinton's lab. So the young generation that Hinton has brought up is leading the way for AI today.
1.4 What does the Academic
Ancestry Teach Us ?
There is one fundamental pattern in the academic ancestry of Nobel Prize winners: one or more direct or indirect ancestors have always won Nobel Prizes. Even though the correlation exists, it is never a cause for a researcher to receive a Nobel Prize. Interestingly enough, though, this is a direct representation of a quote from Isaac Newton: "If I have seen further, it is by standing on the shoulders of giants." Every Nobel Prize winner has stood on the shoulders of ancestors who were giants in their subjects.
Academic ancestry also shows that certain good research traits are passed on from earlier generations to younger ones, which has always advanced scientific understanding. People who have won Nobel Prizes have exceptional qualities, and these naturally get passed down to their students. This is one factor that nurtures bright students who become extremely successful.
The academic ancestry tree also shows how dif-
ferent areas of science are interconnected and how dis-
coveries in one area advance another. For example,
the concept of entropy in Physics, which is also rooted in Chemistry, has been used to advance the development
of algorithms in AI. There are researchers whose ba-
sic training is in one subject and who later switch to a different subject and go on to win Nobel Prizes. This is a perfect example of skill and knowledge transfer, which is a key
factor for a successful career.
Another key lesson from the ancestral tree is the
power of networking and collaboration. Many of the
early researchers who laid the foundations of basic sci-
ence made most of the major discoveries while in-
teracting with other researchers or even while having
debates with others. Sharing ideas and collaboration always triumph and help advance science.
Reference
Academic Tree
The Encyclopedia of Computer Science, 4th Edi-
tion, Wiley Publications.
About the Author
Dr. Arun Aniyan leads R&D for Artificial Intelligence at DeepAlert Ltd, UK. He comes from
an academic background and has experience in design-
ing machine learning products for different domains.
His major interest is knowledge representation and com-
puter vision.
Physics in the 2024 Nobel Prize for Physics
by Blesson George
airis4D, Vol.2, No.11, 2024
www.airis4d.com
The 2024 Nobel Prize in Physics was awarded to
Hopfield and Hinton “for foundational discoveries and
inventions that enable machine learning with artificial
neural networks.” This decision has ignited extensive
discussions and stirred apprehensions across various
spheres. Social media and public forums have seen
heated debates questioning whether the Nobel Com-
mittee’s choice appropriately aligns with the traditional
boundaries of Physics. While most agree the award
honors groundbreaking and deserving contributions in
artificial intelligence—a field rapidly transforming our
world—concerns remain about the suitability of award-
ing such work under Physics.
Critics argue that the Nobel Committee may have
succumbed to the current AI hype, especially since this
year’s Chemistry Nobel also recognized an AI-based
advancement, AlphaFold. Although undeniably revo-
lutionary, they contend that awarding a Physics Nobel
for AI deviates from past traditions, which typically
required experimental evidence for recognition. Fur-
thermore, questions have arisen about whether AI is
merely a tool within Physics rather than Physics itself,
especially since one of the laureates isn’t traditionally
a physicist. Despite these reservations, however, many
strongly believe that honoring Hopfield and Hinton was
a well-considered and forward-thinking choice.
In its popular press release, the Nobel Committee
emphasized that "they used physics to find patterns in information", presenting a perspective that counters the
criticism of AI merely being a tool for Physics. While
some argue that AI serves only as an auxiliary tool in
Physics, the Committee recognized that the pioneers
of artificial intelligence leveraged core principles of
Physics to develop foundational methods that underpin
today’s powerful machine learning. This viewpoint
reflects the Committee's broader perspective on how
physics-driven methodologies can lead to transforma-
tive discoveries well beyond traditional domains.
2.1 Contributions of John Hopfield
and Geoffrey Hinton
John Hopfield made a significant breakthrough in
1982 with the development of the Hopfield network,
one of the earliest forms of artificial neural networks.
Inspired by neurobiology and molecular physics, this
network demonstrated how a system of interconnected
nodes could store and retrieve information. The Hop-
field network is based on principles of atomic spin
physics, where each atom behaves like a tiny mag-
net, contributing to a material’s overall properties.
Hopfield used this analogy to design a network that
operates similarly to an energy-based spin system in
physics, where training involves adjusting the connec-
tions between nodes so that specific images or patterns
represent low-energy states. When presented with a
distorted or incomplete image, the network iteratively
updates node values, minimizing energy until it closely
matches a stored image, thus enabling pattern recogni-
tion and recall.
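To make this recall mechanism concrete, here is a minimal sketch in Python with NumPy (the pattern data and all names are illustrative choices, not taken from the article) that stores two binary patterns with a simple Hebbian rule and then recovers one of them from a corrupted copy through asynchronous, energy-lowering updates:

import numpy as np

rng = np.random.default_rng(0)

# Two binary (+1/-1) patterns of length 16 that the network should "remember".
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
])
n = patterns.shape[1]

# Hebbian rule: connection strengths proportional to the sum over patterns of x_i * x_j.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)  # no self-connections

def recall(x, sweeps=5):
    """Asynchronously align each neuron with its local field."""
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Corrupt the first stored pattern by flipping four entries, then recall it.
noisy = patterns[0].copy()
noisy[[0, 3, 7, 10]] *= -1
print(np.array_equal(recall(noisy), patterns[0]))  # expected: True

With only a few stored patterns relative to the number of neurons, the corrupted input settles back into the nearest stored pattern, which is the associative recall described above.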
Geoffrey Hinton expanded upon Hopfield’s con-
cepts in 1984 by creating the Boltzmann machine, a
model that allowed computers to learn from data us-
ing principles from statistical physics. Unlike tradi-
tional programming, the Boltzmann machine learns
directly from examples, drawing on the science of sys-
tems made up of numerous, similar elements. This
machine is trained by being exposed to typical exam-
ples that reinforce the patterns within the data. As
a result, the Boltzmann machine can classify images
and generate new examples that reflect the patterns on
which it was trained, marking a significant step forward
in machine learning and pattern recognition.
Energy-Based Models:
The Hopfield network and Boltzmann machine are
both energy-based models that rely on the concept of
low-energy states to function. In the Hopfield network,
an energy function is used to identify the stable or low-
energy state that corresponds to a stored pattern. This
stable state represents the memory or pattern that the
network recalls when given an incomplete or noisy in-
put. The illustration provided effectively demonstrates
how the network "finds" a stored pattern by minimizing energy.
Similarly, in the Boltzmann machine, the energy
function is central to learning. It enables the network
to model data distributions by adjusting the weights
of connections between nodes. By minimizing energy,
the Boltzmann machine can learn and generate patterns
that reflect the underlying structure of the data. In
both models, the concept of a low-energy state is key,
as it allows the networks to stabilize into meaningful
representations or memories.
2.2 Physics Employed in Hopfield
Network and Boltzmann Machine
2.2.1 Statistical Mechanics
In the Hopfield network, statistical mechanics is applied through an energy function similar to the Ising model, a foundational concept in statistical mechanics for modeling ferromagnetic interactions. The network consists of binary neurons $x_i \in \{-1, +1\}$, each of which interacts with the others through symmetric weights $w_{ij}$. The energy function $E$ of the network is defined as:

$$E = -\frac{1}{2}\sum_{i,j} w_{ij} x_i x_j - \sum_i b_i x_i$$

where $b_i$ represents the bias terms acting as external fields. This energy function describes the stability of patterns in the network, where lower energy states correspond to stable or "remembered" patterns. When the network receives an input pattern, it iteratively updates the states $x_i$ of the neurons to reduce the energy $E$, guiding the system towards a minimum energy state associated with a stored pattern. This energy minimization process mirrors physical systems in statistical mechanics, where systems evolve towards states of minimum energy, thus achieving stability.
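As a quick numerical check of this relaxation behaviour, the short sketch below (Python with NumPy; the network size and random weights are arbitrary choices for illustration) evaluates the energy defined above for a small symmetric network and confirms that asynchronous updates never increase it:

import numpy as np

rng = np.random.default_rng(1)
n = 12

# Random symmetric weights with zero diagonal, plus bias terms b_i.
A = rng.normal(size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)

def energy(x):
    # E = -(1/2) * sum_ij w_ij x_i x_j - sum_i b_i x_i
    return -0.5 * x @ W @ x - b @ x

x = rng.choice([-1, 1], size=n)
trace = [energy(x)]
for _ in range(3):  # a few asynchronous sweeps
    for i in range(n):
        x[i] = 1 if W[i] @ x + b[i] >= 0 else -1  # align neuron i with its local field
        trace.append(energy(x))

# The energy is non-increasing along the whole update sequence.
print(all(e2 <= e1 + 1e-12 for e1, e2 in zip(trace, trace[1:])))  # expected: True

Because each update aligns a single neuron with its local field, every step can only lower or preserve the energy, so the network necessarily settles into a local minimum, mirroring the relaxation of a physical system.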
In Boltzmann machines, statistical mechanics prin-
ciples are fundamental, particularly through the use of
the Boltzmann distribution and energy minimization
concepts from thermodynamics. Boltzmann machines
are designed to model the probability distribution of
data by adjusting weights to minimize an energy func-
tion, allowing them to "learn" from examples.
The energy function $E$ of a Boltzmann machine for a configuration $x$ of visible and hidden units is given by:

$$E(x) = -\frac{1}{2}\sum_{i,j} w_{ij} x_i x_j - \sum_i b_i x_i$$

where $w_{ij}$ are the symmetric weights between units, and $b_i$ are bias terms. Using this energy function, the probability of a given state $x$ in a Boltzmann machine is modeled by the Boltzmann distribution:

$$P(x) = \frac{e^{-E(x)/T}}{Z}$$

where $T$ represents the "temperature" parameter, controlling the system's randomness, and $Z$ is the partition function, which normalizes the distribution.
The training process involves adjusting the weights $w_{ij}$ to minimize the Kullback–Leibler divergence between the model's distribution and the data distribution, effectively learning the structure of the data. The Boltzmann machine uses stochastic updates in which each unit's state is set probabilistically, following the sigmoid activation function:

$$P(x_i = 1 \mid x_{\neg i}) = \frac{1}{1 + e^{-\Delta E_i/T}}$$

where $\Delta E_i$ is the energy gap of unit $x_i$, i.e. the change in energy obtained by switching that unit off with the other units held fixed. This probabilistic updating allows the Boltzmann machine to escape local energy minima, enabling it to approximate global minima over time and reach thermal equilibrium, thereby learning the distribution of the input data.
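The following minimal sketch (Python with NumPy; the network size, random weights, and temperature are illustrative choices, not values from the article) samples a tiny Boltzmann machine with the stochastic rule above and compares the empirical distribution of visited states with the exact Boltzmann distribution computed by brute force:

import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n, T = 5, 1.0  # five 0/1 units and a fixed "temperature"

# Small fully connected Boltzmann machine: symmetric weights, zero diagonal, biases.
A = rng.normal(scale=0.8, size=(n, n))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)

def energy(x):
    return -0.5 * x @ W @ x - b @ x

# Exact Boltzmann distribution P(x) = exp(-E(x)/T) / Z over all 2^n states.
states = np.array(list(product([0, 1], repeat=n)), dtype=float)
p_exact = np.exp(-np.array([energy(s) for s in states]) / T)
p_exact /= p_exact.sum()

# Gibbs sampling: unit i is switched on with probability 1/(1 + exp(-gap_i/T)),
# where gap_i = E(x_i = 0) - E(x_i = 1) is the energy gap of unit i.
index = {tuple(s): k for k, s in enumerate(states)}
counts = np.zeros(len(states))
x = rng.integers(0, 2, size=n).astype(float)
for step in range(20000):
    i = rng.integers(n)
    gap = W[i] @ x + b[i]
    x[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-gap / T)) else 0.0
    if step >= 2000:  # discard burn-in samples
        counts[index[tuple(x)]] += 1

p_sampled = counts / counts.sum()
print("largest deviation from the exact distribution:", np.abs(p_sampled - p_exact).max())

For a network this small the partition function Z can be computed exactly, so the close agreement between the sampled and exact distributions illustrates what reaching thermal equilibrium means in practice; real Boltzmann machines rely on such sampling precisely because Z is intractable for large networks.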
2.3 Spin System
Spin systems are models used in statistical me-
chanics and condensed matter physics to represent mag-
netic properties of materials. In these systems, "spins" represent the magnetic moments of particles (such as electrons) and are typically modeled as binary variables with values +1 or -1, representing "up" or "down"
complex patterns based on temperature and interaction
strength.
1. Ising Model
The Ising model is a fundamental spin system that considers a lattice where each site (or particle) has a spin $s_i$ that can take values +1 or -1. The Hamiltonian (energy function) for the Ising model, considering only nearest-neighbor interactions, is given by:

$$H = -J\sum_{\langle i,j \rangle} s_i s_j - \sum_i h_i s_i$$

where $J$ is the interaction strength, $\langle i,j \rangle$ denotes pairs of neighboring spins, and $h_i$ represents an external magnetic field at site $i$. For ferromagnetic interactions ($J > 0$), neighboring spins prefer to align, while for antiferromagnetic interactions ($J < 0$), they prefer to be anti-aligned. (A small numerical sketch of this model is given after this list.)
2. Energy Minimization
In spin systems, configurations with lower en-
ergy are more stable, particularly at low tem-
peratures. At thermal equilibrium, spins tend
to settle into configurations that minimize the
system’s total energy.
3. Phase Transitions
Spin systems exhibit phase transitions, such as
a change from a disordered (paramagnetic) to
an ordered (ferromagnetic or antiferromagnetic)
phase as temperature changes. For instance, in the 2D Ising model without an external field, there exists a critical temperature $T_c$ at which the system spontaneously transitions from a disordered state to an ordered state.
4. Relation to Neural Networks
Spin systems, particularly the Ising model, are
closely related to Hopfield networks and Boltz-
mann machines in machine learning. In these
neural networks, binary neuron states and their
interactions mimic spins and their interactions
in a spin system. Energy minimization in neural
networks is analogous to finding stable config-
urations in spin systems, with low-energy states
corresponding to stable patterns or memories.
5. Boltzmann Distribution
Spin configurations at thermal equilibrium follow the Boltzmann distribution:

$$P(\mathrm{configuration}) = \frac{e^{-H/(k_B T)}}{Z}$$

where $H$ is the system's Hamiltonian, $T$ is the temperature, $k_B$ is the Boltzmann constant, and $Z$ is the partition function that normalizes the probability distribution.
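A minimal Metropolis sampler (Python with NumPy; the lattice size, temperatures, sweep count, and ordered starting state are illustrative choices) shows the behaviour described in the items above: below the critical temperature the magnetization stays close to 1, while well above it the system disorders:

import numpy as np

rng = np.random.default_rng(3)

def ising_magnetization(L=16, T=2.0, J=1.0, sweeps=400):
    """Metropolis sampling of a 2D Ising model with periodic boundaries and no external field."""
    s = np.ones((L, L), dtype=int)  # start fully ordered (all spins up)
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest-neighbour spins (periodic boundary conditions).
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2 * J * s[i, j] * nb  # energy cost of flipping spin (i, j)
        # Metropolis rule: accept the flip with probability min(1, exp(-dE/T)).
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return abs(s.mean())

# The exact critical temperature of the 2D Ising model is T_c ~ 2.27 (in units of J/k_B).
print("T = 1.5:", round(ising_magnetization(T=1.5), 2))  # ordered phase: close to 1
print("T = 3.5:", round(ising_magnetization(T=3.5), 2))  # disordered phase: close to 0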
2.4 Conclusion
The 2024 Nobel Prize in Physics marks a signif-
icant milestone, underscoring the ongoing dissolution
of traditional boundaries between scientific disciplines.
In an era increasingly defined by the AI revolution, this
award highlights the pivotal role that Physics plays in
the development of artificial intelligence and neural
networks. Such recognition broadens our understand-
ing of Physics, extending its foundational principles
into transformative technologies that impact us all.
The prize serves as a reminder that interdisciplinary
advancements can fuel innovation, with benefits that
reach across fields and into everyday life.
References
1. Lucas Böttcher and Gregory Wheeler, "Statistical Mechanics and Artificial Neural Networks: Principles, Models, and Applications", arXiv:2405.10957 [cond-mat.dis-nn], 2024.
2. Why is Nobel Prize in Physics awarded to Geoff
Hinton and John Hopfield?
3. Press release - The Nobel Prize.
About the Author
Dr. Blesson George presently serves as an
Assistant Professor of Physics at CMS College Kot-
tayam, Kerala. His research pursuits encompass the
development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Will AI Bring an End to Humanity?
by Ninan Sajeeth Philip
airis4D, Vol.2, No.11, 2024
www.airis4d.com
While Artificial Intelligence (AI) has the potential
to revolutionize various sectors positively, it also poses
significant threats to humanity. This paper explores the
negative aspects of AI, particularly focusing on issues
of inequality, ethical dilemmas, and the potential for
misuse in warfare. The discussion emphasizes that the
real danger lies not in AI itself but in how it is wielded
by those with access to advanced technologies. Ad-
ditionally, this paper outlines strategies to mitigate the
AI divide and promote equitable access to AI technolo-
gies.
3.1 Introduction
As AI continues to evolve and integrate into var-
ious aspects of society, concerns regarding its poten-
tial negative impacts have gained prominence. Al-
though AI can enhance productivity and efficiency, it
also raises critical questions about ethics, inequality,
and control. This paper discusses how AI can become
a threat to humanity, particularly in contexts where
disparities exist between those who possess advanced
technologies and those who do not. The implications
of these disparities are profound, as they can lead to
social unrest, exacerbate existing inequalities, and cre-
ate new forms of conflict. Therefore, it is essential to
critically examine the multifaceted threats posed by AI.
3.2 Inequality and Access to AI
One of the most pressing concerns regarding AI is
its potential to exacerbate existing inequalities. Access
to AI technologies is often limited to affluent individ-
uals or nations, creating a divide that could lead to
significant social unrest. As industries increasingly
adopt AI for automation and efficiency, low-skilled
workers are at risk of displacement while high-skilled
positions become more lucrative [7]. For example,
during the COVID-19 pandemic, many businesses ac-
celerated their adoption of AI tools for automation,
leading to significant job losses in sectors such as retail
and hospitality [2]. This economic shift may result in
an "M-shaped" wealth distribution where wealth be-
comes concentrated among a small elite while a sig-
nificant portion of the population faces unemployment
or underemployment. The disparity in access to tech-
nology can foster resentment and conflict within soci-
eties, potentially leading to protests or civil unrest as
marginalized groups demand equitable opportunities.
Countries that lead in AI development may gain
substantial geopolitical advantages, potentially leading
to a new form of colonialism where technologically
advanced nations exploit less developed regions [6].
For instance, during conflicts like the recent Israel-
Palestine escalation in 2023, advanced surveillance and
drone technologies were employed by Israel to mon-
itor movements and conduct targeted operations with
precision [11]. Such technological superiority raises
ethical questions about power dynamics and the conse-
quences for less technologically advanced adversaries.
The ability of one nation to leverage superior technol-
ogy against another not only shifts the balance of power
but also sets a dangerous precedent for future conflicts
where technology becomes a primary weapon. This
dynamic can create a cycle of dependency where less
developed nations are forced to rely on more advanced
powers for security and stability.
Moreover, this inequality extends beyond national
borders; within countries, marginalized communities
often lack access to cutting-edge technology. Dispari-
ties in education and resources mean that only a select
few can harness the benefits of AI advancements. For
instance, rural areas may lack internet access or tech-
nological infrastructure necessary for participating in
an increasingly digital economy. This digital divide
can perpetuate cycles of poverty and limit opportuni-
ties for upward mobility. As access becomes increas-
ingly tied to economic power, those without means
may find themselves further excluded from societal ad-
vancements driven by AI.
3.3 Ethical Dilemmas
AI raises numerous ethical questions that society
must address as its capabilities expand. One major con-
cern is the autonomy granted to AI systems in decision-
making processes. There are fears that as these systems
become more autonomous, humans may lose control
over critical decisions that affect lives. The possibil-
ity of autonomous weapons systems operating without
human intervention presents a dire scenario where ma-
chines could make life-and-death decisions based on
algorithms rather than ethical considerations. For ex-
ample, during military operations in Ukraine, there
have been reports of autonomous drones being used
for targeted strikes without direct human oversight [8].
This raises profound moral questions about account-
ability: if an autonomous system makes a mistake that
results in civilian casualties, who is responsible?
Furthermore, AI technologies enable unprecedented
levels of surveillance that can infringe on individual
privacy rights. Governments may utilize AI for mon-
itoring citizens under the guise of security, leading to
authoritarian practices reminiscent of dystopian soci-
eties [7]. The implementation of social credit sys-
tems in countries like China exemplifies how AI can be
used to enforce conformity and suppress dissent [12].
Such systems leverage data analytics to monitor be-
havior continuously; those who deviate from accepted
norms may face penalties or restrictions on their free-
doms. This creates a chilling effect on free speech
and personal expression as individuals become wary
of surveillance.
The ethical implications extend into algorithmic
bias as well; if AI systems are trained on biased data
sets, they can perpetuate existing prejudices or cre-
ate new forms of discrimination. For instance, facial
recognition technologies have been shown to misiden-
tify individuals from minority groups at higher rates
than their white counterparts. Such biases can have
real-world consequences when these technologies are
deployed in law enforcement or hiring practices. Thus,
addressing these ethical dilemmas requires robust frame-
works that ensure transparency and accountability in
AI development.
Additionally, there is an urgent need for interdis-
ciplinary collaboration among technologists, ethicists,
policymakers, and civil society organizations to create
guidelines that govern AI’s use responsibly. Engaging
diverse stakeholders can help mitigate risks associated
with biased algorithms and ensure that technological
advancements benefit all segments of society rather
than exacerbating existing inequalities.
3.4 Misuse of AI in Warfare
The application of AI in military contexts raises
significant ethical and moral concerns that cannot be
overlooked. The development of autonomous weapons
systems poses unique threats as these machines can op-
erate without human oversight. These systems have the
potential to engage in combat autonomously based on
pre-defined algorithms or real-time data analysis [10].
If these machines are programmed with flawed algo-
rithms or biased data, they could make erroneous deci-
sions leading to unintended casualties. Recent reports
indicate that during conflicts like those in Syria and
Libya, autonomous drones have been deployed with
minimal human intervention [10]. Such developments
highlight the urgent need for international regulations
governing the use of autonomous weapons.
Moreover, AI’s ability to analyze vast amounts of
data quickly can lead to rapid decision-making pro-
cesses that may escalate conflicts without adequate hu-
man judgment. The speed at which decisions are made
could outpace diplomatic efforts aimed at de-escalation
[6]. For instance, during the recent Israel-Hamas con-
flict in 2024, rapid deployment of drone strikes based
on real-time data led to significant civilian casualties
and raised international concerns about proportionality
in warfare [1]. As military strategies increasingly rely
on automated systems for targeting and engagement,
there is a growing risk that conflicts could spiral out
of control due to miscalculations or overreliance on
technology.
The potential for cyber warfare further compli-
cates this landscape; adversaries could deploy AI-driven
cyber-attacks against critical infrastructure such as power
grids or communication networks. Such attacks could
destabilize nations without traditional military engage-
ment while causing widespread chaos and disruption
[9]. Additionally, misinformation campaigns powered
by AI algorithms can manipulate public perception
during conflicts by spreading false narratives rapidly
across social media platforms [13]. These tactics not
only undermine trust but also complicate diplomatic
resolutions.
Mitigating the risks associated with military applications of AI technology requires global cooperation among nations, aimed at establishing treaties regulating autonomous weapons systems similar to existing agreements on chemical and biological weapons.
Furthermore, fostering dialogue between military lead-
ers and ethicists could help ensure that technological
advancements align with humanitarian principles.
3.5 Mitigation Methods for Reducing
the AI Divide
Addressing the disparities created by AI requires
targeted strategies aimed at promoting equitable access
and responsible use:
Enhancing Data Quality and Diversity: Im-
proving the quality and diversity of datasets used
in training algorithms helps prevent bias and
ensures fairer outcomes across different demo-
graphic groups [6]. Organizations should focus
on collecting comprehensive datasets that repre-
sent various populations accurately; this includes
obtaining additional data points from underrep-
resented groups which can significantly reduce
algorithmic bias.
Implementing Fairness and Transparency Stan-
dards: Establishing standards for fairness en-
sures accountability in decision-making processes
involving AI systems [7]. Companies should
adopt frameworks requiring transparency so users
understand how outcomes are derived from algo-
rithms; this could involve using explainable ar-
tificial intelligence (XAI) techniques which pro-
vide insights into how decisions are made.
Promoting Inclusive Education and Training:
Enhancing access to education focused on data
science is essential for reducing skill gaps related
to technology use [5]. Initiatives like coding boot
camps specifically targeting underserved com-
munities can empower individuals with neces-
sary skills while fostering a more diverse work-
force capable of contributing meaningfully within
tech sectors.
Encouraging Diverse Teams in Development:
Diverse teams bring varied perspectives which
help identify biases early during development
phases [10]. Companies should prioritize hir-
ing practices promoting diversity across gender identity, race, and socioeconomic background within
their engineering teams; this diversity fosters
creativity while ensuring products cater effec-
tively towards broader audiences.
Responsible Deployment: Engaging stakehold-
ers before deploying technologies ensures con-
sideration for potential societal impacts [4]. Or-
ganizations should develop policies protecting
vulnerable populations from adverse effects while
maximizing benefits derived from their tech-
nologies; this might include setting up advisory
boards comprised of community representatives
who provide insights into local needs/concerns
regarding new deployments.
Public Policy Frameworks: Governments should
establish regulations mandating fairness/account-
ability standards within AI applications [4]. Poli-
cies might include requirements for regular au-
dits assessing impacts across different demo-
graphic groups; additionally fostering interna-
tional cooperation on regulatory frameworks can
help address cross-border challenges posed by
these technologies.
Leveraging Technology for Social Good: Fo-
cusing specifically on applications addressing
societal issues helps bridge gaps created by tech-
nological advancements [3]. For instance, utilizing machine learning algorithms to analyze health data from underserved populations allows public health interventions to be tailored towards specific community needs, thereby significantly improving overall outcomes.
3.6 Conclusion
Addressing the AI divide requires a multifaceted approach involving improvements in data quality standards, transparency initiatives, inclusive education, diverse teams, responsible deployment practices, supportive public policies, and leveraging technology for social good. By implementing these strategies collectively, we can work towards a future where artificial intelligence serves as a tool enhancing equality rather than exacerbating existing disparities.
References
[1] Various Authors. Casualties of the Israel–Hamas war. Wikipedia, 2024.
[2] Bikash Chatterjee. Precision medicine: Promising future as AI, biomarkers and technology ... Clinical Trials Arena, 2023.
[3] Anil Kumar et al. Machine learning approaches for early diagnosis of neurodevelopmental disorders. Journal of Neural Engineering, 2023.
[4] Priya Sinha et al. AI-based biomarkers for autism spectrum disorder: A review. Frontiers in Psychology, 2023.
[5] Richard Sutton et al. Cognitive modeling using artificial intelligence: Applications in education. Educational Technology Research and Development, 2023.
[6] Marcus Gullberg. Exploring the impact of artificial intelligence (AI) on society. Graphite Note, 2021.
[7] Bernard Marr. What is the impact of artificial intelligence (AI) on society? Forbes, 2021.
[8] BBC News. Ukraine war: How autonomous drones are changing warfare, 2023. News article.
[9] James Robinson. Cyber warfare: A new paradigm. International Security Review, 2019.
[10] John Sullivan. The role of autonomous weapons in modern warfare. Military Review, 2023.
[11] Noah Sylvia. The Israel Defense Forces' use of AI in Gaza: A case of misplaced purpose. RUSI, 2024.
[12] Michael Cheng-Tek Tai. The impact of artificial intelligence on human society and bioethics. PMC Journal, 2020.
[13] Clint Tucker. Social media manipulation: A new threat landscape. Journalism Studies, 2017.
About the Author
Professor Ninan Sajeeth Philip is a Vis-
iting Professor at the Inter-University Centre for As-
tronomy and Astrophysics (IUCAA), Pune. He is also
an Adjunct Professor of AI in Applied Medical Sci-
ences [BCMCH, Thiruvalla] and a Senior Advisor for
the Pune Knowledge Cluster (PKC). He is the Dean
and Director of airis4D and has 33+ years of teaching experience in Physics. His area of specialisation is AI and ML.
Part II
Astronomy and Astrophysics
Black Hole Stories-13
More About Black Hole Spin Measurements
by Ajit Kembhavi
airis4D, Vol.2, No.11, 2024
www.airis4d.com
In BHS12 we described how the spin of a black
hole can affect the width of X-ray emission lines emit-
ted by reflection from the inner region of an accretion
disc around the black hole. We considered the specific
case of the active galaxy MCG-6-30-15 and showed
how the spin of the black hole at the centre of the galaxy
can be estimated from the shape of a line emitted by
iron. In this story we will consider further examples
of spin determination for supermassive black holes.
Much of the material here is based on two excellent
reviews, Brenneman (2013) and Reynolds (2013).
1.1 The Reflection Spectrum
We have seen in BHS12 that the X-ray emission
arises from a thin accretion disc around the black hole
and a corona which is some distance from the disc. The
disc is geometrically thin in the sense that its thickness
is very much smaller than its radius. For a supermas-
sive black hole (SMBH), the temperature is about
10^5 K, which means that the thermal radiation
from the hot disc has a peak in the ultraviolet or soft
X-ray region of the spectrum. The disc gets progres-
sively colder at increasing radii, and the net emission
is the sum of the emission from annular regions each
of which emits at a fixed temperature. Some of the
radiation reaches a much hotter corona, which is at a
temperature of about 10^9 K, located at some distance
from the disc. The thermal photons reaching the corona
undergo inverse Compton scattering by the very hot
electrons in it, which leads to an increase in energy of
the spectrum. The result of the scattering is that the
photons acquire a power-law spectrum of the form
I(ν) ∼ ν^α, where ν is the frequency, I(ν) is the intensity of
the radiation, and α is a constant, typically in the range
0 - 1.5. Some of the power-law photons leave the sys-
tem, while the rest are emitted towards the disc. The
photons reaching the disc undergo Compton scattering,
which lowers their energy; after one or more scatter-
ings the photons can emerge from the disk. Some of
the photons incident on the disk are absorbed through
the photoelectric effect by various ions present in the
disc gas, which then produce emission lines through
fluorescence, as described in BHS 12. We are repeat-
ing here, for convenience of understanding, a figure
from BHS 12:
Figure 1: A sketch showing an accretion disc and
a corona and the various components of the radiation
which are explained in the text. The temperatures are
indicated in kilo electron volts (keV), with 1 keV being
equivalent to 11.6 million K. The inner disc is indicated
to be at the innermost stable circular orbit.
Shown in Figure 2 is the spectrum produced by
the combination of fluorescence and Compton scat-
tering from a power-law spectrum incident on a thin
accretion disc. This spectrum is modified by various
effects which are discussed in the next section. It is the
modified spectrum which would be seen by a distant
observer.
Figure 2: Simulation of a reflection spectrum pro-
duced when a power-law spectrum, shown as a dashed
line, is incident on the inner region of a thin accretion
disc. The relativistic broadening is not taken into ac-
count. It is seen that the iron (Fe) line is dominant.
Fluorescence lines produced by other elements are also
shown. All the lines are seen to be very narrow. The
broad hump beyond ∼7 keV is produced by absorption
of the reflected continuum by iron and the Compton
scattering of high energy photons to lower energies.
The figure is from A. C. Fabian and G. Miniutti (2005).
1.2 Modifications of the Reflection
Spectrum
The shape of the features in the reflection spec-
trum is changed due to relativistic effects as explained
in BHS12. These have to be taken into account to
generate a model of the broadened spectrum, and such
a model has to be fitted to an observed X-ray spec-
trum to derive the parameters which go into the model.
These include properties of the gas in the disc, like
its chemical composition, the inclination of the disc
to the observer’s line of sight and so forth. But most
importantly for the spin determination, the inner radius
of the accretion disc has to be determined as a model
parameter. This inner disc radius is identified with the
innermost stable circular orbit r_ISCO of the black hole,
which allows the spin parameter a to be determined, as
described in BHS12.
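As a rough illustration of that last step, the following minimal Python sketch evaluates the standard Bardeen-Press-Teukolsky expression for the prograde ISCO radius in units of r_g; the spin values printed are purely illustrative, and a fitted inner radius would be inverted numerically to obtain a.

import numpy as np

def r_isco(a):
    # Prograde ISCO radius in units of r_g = GM/c^2 for spin parameter 0 <= a <= 1
    # (standard Bardeen-Press-Teukolsky formula).
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

for a in (0.0, 0.65, 0.97, 0.998):
    print(f"a = {a:5.3f}  ->  r_ISCO = {r_isco(a):5.2f} r_g")

For a = 0 this returns the Schwarzschild value of 6 r_g, and the radius approaches r_g as a tends to 1, which is why a small fitted inner radius implies a high spin.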
For the modelling to be correctly carried out, it is
necessary to include absorption and emission processes
which can affect the reflection spectrum. Absorption
can take place due to a cold gas which, due to its low
temperature, is largely neutral, i.e. not ionised. The
gas is located at a distance of about 10^4 - 10^5 r_g from
the black hole at the centre, where r_g = GM/c^2 is the
gravitational radius. The gas can be distributed in clumps
so that it does not cover the inner region uniformly.
Radiation reaching these cold clumps can be absorbed
due to the photoelectric effect, modifying the lower
energy part of the spectrum. Another component is a
warm absorber, which is at a temperature of about
10^5 K - 10^7 K, and is partially ionised. Such a gas can
again absorb the spectrum through the photoelectric
effect right up to the Fe line energies, depending on
the stage of ionisation. The outer parts of the accre-
tion disc also produce a reflection spectrum, which is
not broadened due to relativistic effects, because of the
large distance from the black hole. This spectrum has
a narrow iron emission line at 6.4 keV, which has to
be taken into account while modelling the spectrum.
The outer accretion disc also contributes to the Comp-
ton hump. There is another excess over the power-law
emission at lower energy, known as the soft excess,
which has to be included in the modelling.
Figure 3 shows the X-ray spectrum of the Seyfert
galaxy NGC 3783 observed by the Suzaku satellite.
The prominent iron line emission and the X-ray bump
are seen. The iron line has prominent broad and nar-
row components, with the latter produced by the outer
regions of the accretion disc. Absorption is seen at
X-ray energies of a few keV, and a soft excess below 1
keV.
A model fit to the spectrum, taking into account
various components, relativistic effects etc. led to a
spin parameter a > 0.98.
In BHS12 we found that the broadening of the
6.4 keV line of iron in MCG-6-30-15, with the low
energy tail reaching ∼3 keV, immediately leads to the
expectation that the black hole has high spin. That
is borne out by detailed modelling which leads to the
value of the angular momentum parameter a > 0.97.
Another interesting case is the active galaxy Fairall 9,
which is a bare Seyfert galaxy, in the sense that it lacks
significant absorption, i.e. the intervening absorbing
matter is absent. The soft excess too is significantly
weaker in this galaxy. These factors make the fitting
process easier than in the other cases discussed, and
with Suzaku data, a spin parameter value of a = 0.65
(+0.5, -0.5 as the one sigma errors) is obtained.
Figure 3: X-ray spectrum of the Seyfert 1 galaxy
NGC 3783. The features in the spectrum are described
in the text. The figure is taken from Brenneman (2013).
It should be mentioned here that while all the dis-
cussion above seems to show that the observed spec-
tra are fully consistent with their production by a thin
accretion disc (with a corona) around a spinning black
hole, it is possible to model the features in the spectrum
with multiple absorbers that partially cover the source,
without any need of relativistic effects to broaden the
lines (Miller, Turner, Reeves 2008). However, more
recent data make the model less plausible, which we
will describe in a later story.
1.3 The Distribution of the Spin
Parameter
The determination of the spin requires X-ray data
with adequate signal-to-noise ratio, that is with a sig-
nal good enough for the parameters of the model to
be determined with reasonable errors. It is also neces-
sary to use a physical model which takes into account
various processes that can contribute to and affect the
spectrum, as briefly described above. Keeping these
and other related factors in mind, Brenneman (2013)
has reported 22 spin measurements made for 22 AGN
by various groups. The distribution of spin values is
shown in Figure 4. It is seen that the measured
spins seem to cluster at higher values, with no spin
measurement consistent with a = 0, which corresponds
to the Schwarzschild case. The total number of spin
measurements in the distribution is of course too small
for any firm conclusions to be drawn about the distribu-
tion. Also, the figure is based on spin values reported
before 2013. There would be further measurements in
subsequent years of course, and the use of data from
NuSTAR and other satellites would allow the spec-
trum to be fitted over a broader energy range, which
would help to take into account various effects and dis-
tinguish between possible models. These matters will
be discussed in future stories.
Figure 4: The distribution of measured spin pa-
rameter a values for 22 supermassive black holes. The
figure is taken from Brenneman (2013).
The spin measures reported above are all for su-
permassive black holes. Spin measurements for stellar
mass black holes will be discussed in future stories.
About the Author
Professor Ajit Kembhavi is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics and is also the Principal Investiga-
tor of the Pune Knowledge Cluster. He was the former
director of Inter University Centre for Astronomy and
Astrophysics (IUCAA), Pune, and the International
Astronomical Union vice president. In collaboration
with IUCAA, he pioneered astronomy outreach activ-
ities from the late 80s to promote astronomy research
in Indian universities.
Learning is Better than Programming
by Linn Abraham
airis4D, Vol.2, No.11, 2024
www.airis4d.com
2.1 Introduction
The 2024 Nobel Prize in Physics was awarded to
John J. Hopfield and Geoffrey E. Hinton “for foun-
dational discoveries and inventions that enable machine
learning with artificial neural networks”. The aim of
this article is to understand a few concepts in the field
by turning our attention to the contributions of Geof-
frey Hinton. Hinton is often called the “Godfather
of A.I.”, whereas the title of “Father of A.I.” is often
credited to either John McCarthy or Alan Turing. What
are his most significant contributions to the field? In
the 1980s he showed that the backpropagation algo-
rithm discovered by Rumelhart (and independently by
others before him) could be used for learning repre-
sentations from data. In the work that was published
in “Letters to Nature” he applied backpropagation to
enable a machine to learn the distributed representation
of words. However, the hype that ensued fizzled out
over the next twenty years or so, as researchers grew
disillusioned. In 2012, he again showed the power of
a deep Convolutional Neural Network (CNN) namely
AlexNet by winning the ILSVRC challenge by a large
margin. Hinton was also the first to describe the Re-
stricted Boltzmann Machines (RBM) which form the
basis of Deep Belief Networks (DBN). In the rest of the
article, let us try to understand how AlexNet changed
computer vision and AI in general.
2.2 ImageNet Challenge
In a special edition of the original AlexNet paper,
the authors give credit to the ImageNet dataset for the
Figure 1: Image showing two of the subtrees and
associated images in the ImageNet dataset.
deep learning revolution. The ImageNet dataset took
about two years to be put together and was the basis
of the ImageNet Large Scale Visual Recognition Chal-
lenge (ILSVRC) which began in 2010. It is important
to note that the ImageNet dataset is built upon the hi-
erarchical structure provided by WordNet. Each mean-
ingful concept in WordNet was called a synonym set or
‘synset’. WordNet has 80,000 such synsets and the goal
of ImageNet was to have approximately 500 to 1000
images per synset. These synsets are part of the many
subtrees such as mammal, bird, fish, reptile, amphib-
ian, vehicle, furniture, musical instrument, geological
formation, tool, flower, fruit. Figure 1 shows images
from two of those subtrees. The year 2012 probably
marked the point when the ice began to melt after a
long AI winter. The winners of the first two years made
use of intelligent features derived from images that re-
searchers painstakingly came up with, such as SIFT,
Fisher vectors etc. This they combined with shallow
Machine Learning algorithms like Support Vector Ma-
chines (SVM), Principal Component Analysis (PCA)
and others. But in 2012 the team SuperVision consist-
ing of Alex Krizhevsky, his Ph.D. supervisor Geoffrey
Figure 2: Diagram showing the architecture of the
AlexNet model.
E. Hinton and Ilya Sutskever put forward the AlexNet
model. It obliterated the competition, beating the run-
ner up in top-5 error rates by more than 10 percentage
points. For each sample in the test set, each entry
model provides a probability of being the right answer.
A top-5 error only considers an error when the actual
label is not amongst the top-5 most probable answers.
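As a small illustration of this metric (the arrays below are made up for the example and are not from the challenge itself), top-5 error can be computed from a matrix of predicted class probabilities as follows, in Python:

import numpy as np

def top5_error(probs, labels):
    # probs: (n_samples, n_classes) predicted probabilities; labels: true class indices.
    top5 = np.argsort(probs, axis=1)[:, -5:]           # indices of the 5 most probable classes
    hit = np.any(top5 == labels[:, None], axis=1)      # is the true label among them?
    return 1.0 - hit.mean()

rng = np.random.default_rng(0)
probs = rng.random((8, 1000))                          # toy predictions for 8 samples, 1000 classes
labels = rng.integers(0, 1000, size=8)                 # toy ground-truth labels
print(top5_error(probs, labels))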
2.3 AlexNet
By winning the ILSVRC challenge with a huge
margin, the authors proved without doubt that deeper
networks were the answer. Before thinking about the
implications of their discovery let us first try to un-
derstand what their model is all about. The AlexNet
model can be shortened as follows.
(CNN → RN → MP)² → (CNN³ → MP) → (FC → DO)²
→ Linear → softmax
It consists of eight layers, the first five being con-
volutional layers (CNN) and followed by three fully
connected (FC) layers. Figure 2 shows a representative
diagram of this architecture. The CNNs had a ReLU
activation which they found to be faster to train than
tanh neurons. They also implemented local response nor-
malization (RN) on the output of the CNN. Some of
the convolutional layers are followed by max-pooling
layers (MP). The final layer consists of a 1000-way
softmax layer. All in all, it had 60 million parame-
ters and 650,000 neurons. The network was trained
on the large ImageNet dataset. Since the images from
the dataset had arbitrary rectangular shapes, these were
first downsampled to a resolution of 256 × 256. The
authors also incorporated data augmentation by us-
ing image translation and horizontal reflections with
Figure 3: On the left, Alex Krizhevsky and on the
right, Geoffrey Hinton.
patches of 227 × 227. This increased their training set
by a factor of 2048 although the generated images are
highly correlated. During testing, they would extract
similar patches from the four corners and a centre
patch, and pool the decisions of the network on all
these five patches and their horizontal reflections.
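To make the layer sequence above more concrete, here is a condensed PyTorch-style sketch of an AlexNet-like network. It follows the commonly quoted layer shapes, but it is an illustration rather than the authors' original code; in practice the pretrained torchvision.models.alexnet would normally be used.

import torch
import torch.nn as nn

class TinyAlexNet(nn.Module):
    # Illustrative AlexNet-like stack: conv + ReLU (+ response norm and pooling) blocks
    # followed by fully connected layers with dropout.
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),   # softmax is applied by the loss (CrossEntropyLoss)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyAlexNet()
out = model(torch.randn(1, 3, 227, 227))   # one 227 x 227 RGB crop
print(out.shape)                            # torch.Size([1, 1000])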
2.4 The Legacy of AlexNet
By 1986, partly due to the work of Hinton and
others it was known that the Multi-Layer Perceptron
was capable of learning any decision surface. And
the backpropagation algorithm was what made it pos-
sible to train networks with many layers. However,
how would this fare when applied to tasks as complex
as human vision? For certain visual recognition tasks
such as handwritten digit recognition using the MNIST
dataset it might be feasible to think of an MLP solu-
tion. However, in a much more general classification
or detection task such as that put out by the ILSVRC,
it became obvious that what was important was not
just the pixel intensities in the images. It was known
that human vision involved a hierarchy of features
computed from the intensities like shapes, edges and
textures. Thus researchers came up with the convo-
lutional neural network that would allow the machine
to learn such features without exactly specifying what
features were to be learnt.
The researchers behind AlexNet showed that less
was more, by not hard coding intelligent features into
their model but still allowing for the model to learn
such features from data. They showed that learning
was better than programming in solving such complex
tasks. They also showed that the depth of the network
was critical to the result that they had achieved. They
showed this by removing a convolutional block from
their network and showing that the performance degraded
substantially. Finally, to practically train such deep
networks they made use of the GPU by writing imple-
mentations of the 2D convolution operation and other
associated operations which could make use of GPUs.
They also made use of two GPUs by cleverly distribut-
ing the network between them. Ever since, GPUs have
come to dominate the world of deep learning. Ded-
icated frameworks like TensorFlow, PyTorch, etc. were
developed to make it easier than ever for deep learning
researchers to make use of GPUs without getting their
hands dirty. In summary, they showed that what had
been lacking was more and better data and more com-
putation, and not better algorithms.
References
[Russakovsky et al.(2015)] Olga Russakovsky, Jia
Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh,
Sean Ma, Zhiheng Huang, Andrej Karpathy,
Aditya Khosla, Michael Bernstein, Alexander C.
Berg, and Li Fei-Fei. ImageNet Large Scale
Visual Recognition Challenge. arXiv:1409.0575
[cs], January 2015.
[Rosebrock(2017)] Adrian Rosebrock. Deep Learning
for Computer Vision with Python: Practitioner Bundle.
PyImageSearch, United States, 1st edition (1.3), 2017. ISBN
978-1-72248-783-6.
[Ford(2018)] Martin R. Ford. Architects of Intelli-
gence: The Truth about AI from the People Build-
ing It. Packt Publishing, Birmingham, UK, 2018.
ISBN 978-1-78913-151-2.
[Marsland(2014)] Stephen Marsland. Machine Learn-
ing: An Algorithmic Perspective. Chapman and
Hall/CRC, 2 edition, October 2014. ISBN 978-0-
429-10250-9. doi: 10.1201/b17476.
[Krizhevsky et al.(2017) Krizhevsky, Sutskever, and Hinton]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E.
Hinton. ImageNet classification with deep convo-
lutional neural networks. Communications of the
ACM, 60(6):84–90, May 2017. ISSN 0001-0782,
1557-7317. doi: 10.1145/3065386.
[Deng et al.(2009)Deng, Dong, Socher, Li, Kai Li, and Li Fei-Fei]
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li,
Kai Li, and Li Fei-Fei. ImageNet: A large-scale
hierarchical image database. In 2009 IEEE
Conference on Computer Vision and Pattern
Recognition, pages 248–255, Miami, FL, June
2009. IEEE. ISBN 978-1-4244-3992-8. doi:
10.1109/CVPR.2009.5206848.
About the Author
Linn Abraham is a researcher in Physics,
specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN based
Computer Vision tools for prediction of solar flares
from images of the Sun, morphological classifica-
tions of galaxies from optical images surveys and ra-
dio galaxy source extraction from radio observations.
Applications of AI in Astronomy and
Astrophysics
by Sindhu G
airis4D, Vol.2, No.11, 2024
www.airis4d.com
Figure 1: Image Credit: Dr. David Ragland and
Medium
3.1 Introduction
The integration of Artificial Intelligence (AI) into
astronomy and astrophysics marks a transformative
leap in our capacity to explore and understand the
universe. As technological advancements drive the
production of vast amounts of data from ground-based
telescopes and space missions, processing and inter-
preting this complex information presents significant
challenges. Traditional data analysis methods are of-
ten insufficient for handling the volume and intricacy
of astronomical data, creating a need for innovative so-
lutions. AI, especially through machine learning (ML)
and deep learning (DL) algorithms, has emerged as
a powerful tool for efficiently analyzing large datasets,
identifying patterns, and even predicting celestial events.
AI’s applications in astronomy are extensive, from
automating the detection and classification of celestial
objects to enhancing observational techniques and op-
timizing telescope operations. For instance, machine
learning models can swiftly classify stars, galaxies, and
transient phenomena such as supernovae with high ac-
curacy, freeing astronomers to concentrate on signifi-
cant discoveries rather than becoming entangled in data
processing. Predictive AI models assist researchers in
forecasting cosmic events, thereby improving planning
and observation strategies.
The ongoing advancements in telescopes, such as
the James Webb Space Telescope and Vera C. Rubin
Observatory, are expected to generate unprecedented
quantities of data, making AI essential for modern
astronomical research. Instruments like the Hubble
Space Telescope and Sloan Digital Sky Survey (SDSS)
have already demonstrated how AI-driven automation
can advance our understanding of the cosmos by im-
proving data analysis, object classification, and real-
time detection capabilities.
This article explores AI’s crucial contributions
to data analysis, exoplanet discovery, predictive mod-
eling, and more in astronomy and astrophysics. By
examining these advancements, we aim to showcase
how AI is reshaping our understanding of the universe,
enabling discoveries once thought impossible, and ad-
dressing challenges like data biases and the balance
between AI-driven insights and human creativity in
scientific inquiry.
3.2 Classification and Identification
of Celestial Objects
AI has transformed the classification of celestial
objects by processing vast datasets that contain bil-
lions of stars, galaxies, and transient phenomena. Un-
like traditional labor-intensive methods, AI models can
rapidly identify and categorize these objects based on
defining characteristics such as shape, brightness, and
spectra. Convolutional neural networks (CNNs), for
example, enable astronomers to classify galaxies into
morphological types—such as spiral, elliptical, and ir-
regular—by training on surveys like the Sloan Digital
Sky Survey. This capability helps scientists uncover
correlations between galaxy structure and evolution.
In stellar classification, machine learning algo-
rithms analyze spectral data and properties like tem-
perature, luminosity, and age, allowing for precise cat-
egorization essential to studies of star formation and
evolution. AI also aids in classifying supernovae by
analyzing light curves to distinguish between types,
advancing our understanding of stellar death and en-
abling supernovae to be used as standard candles for
distance measurement.
AI’s impact extends to transient phenomena, where
machine learning quickly classifies events such as gamma-
ray bursts and supernovae, essential for timely follow-
up observations. Furthermore, unsupervised cluster-
ing techniques allow AI models to uncover patterns in
data without prior assumptions, leading to discover-
ies of new relationships and phenomena. An example
is the AstroAI program, where unsupervised learning
was used to catalog over 14,000 X-ray source detec-
tions, highlighting AI’s role in unveiling novel insights
within astronomical datasets.
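A minimal sketch of the kind of CNN classifier described above might look as follows in Python (PyTorch); the class names, the 64 x 64 cutout size and the random tensors standing in for labelled survey images are illustrative assumptions, not part of any specific pipeline.

import torch
import torch.nn as nn

# Illustrative 3-class galaxy morphology classifier (spiral / elliptical / irregular).
# Real data loading and preprocessing are omitted and assumed.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, 3),
)

images = torch.randn(4, 1, 64, 64)          # a toy batch standing in for survey cutouts
labels = torch.tensor([0, 1, 2, 0])         # 0 = spiral, 1 = elliptical, 2 = irregular
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                             # one illustrative training step (optimizer omitted)
print(float(loss))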
3.3 Exoplanet Detection
AI is transforming the field of exoplanet detection
by streamlining the analysis of massive datasets from
space telescopes like Kepler and TESS. These tele-
scopes observe light curves, tracking changes in stellar
brightness over time to detect the characteristic dips
caused by planets transiting across their host stars. Tra-
ditional methods for analyzing light curves are labor-
intensive and often miss subtle signals. AI, particu-
larly deep learning models like recurrent neural net-
works (RNNs) and long short-term memory (LSTM)
networks, has revolutionized this process by efficiently
identifying these signals and filtering out false posi-
tives, significantly increasing the speed and accuracy
of exoplanet discovery.
Beyond detection, AI plays a critical role in char-
acterizing exoplanets by analyzing transit and radial
velocity data to infer essential details about a planet’s
size, orbit, and atmospheric composition. These in-
sights are vital for determining habitability and prior-
itizing candidates for further study. For instance, re-
searchers at the University of Georgia have developed
models that analyze synthetic data of protoplanetary
environments, which can then be applied to real ob-
servations, enhancing the likelihood of finding faint,
previously undetectable exoplanets.
AI models have achieved impressive success rates,
with studies showing up to 96% accuracy in predict-
ing exoplanet presence by detecting subtle patterns in
brightness data. This capability is especially crucial
in light of the vast number of potential planets in our
galaxy alone. Additionally, AI aids in the search for
potentially habitable planets. Researchers have created
algorithms that recognize Earth-like anomalies among
non-habitable exoplanets, allowing scientists to stream-
line their search for planets that could support life.
As missions like the James Webb Space Tele-
scope begin to provide unprecedented data, AI’s role
in exoplanet detection is poised to expand further. By
advancing detection methods, enhancing characteriza-
tion, and enabling the targeted search for habitable
worlds, AI is paving the way for discoveries that may
one day reveal the existence of life beyond Earth.
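As a hedged sketch of the light-curve classifiers described above, a small one-dimensional convolutional network can score fixed-length flux segments for the presence of a transit. The folding, detrending and binning that real Kepler or TESS pipelines perform is assumed to have already been done, and the tensors below are placeholders.

import torch
import torch.nn as nn

# Toy binary classifier over fixed-length light-curve segments: transit vs. no transit.
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 1),            # logit for "transit present"
)

flux = torch.randn(8, 1, 2000)   # 8 light-curve segments of 2000 flux samples each
logits = net(flux).squeeze(1)
prob_transit = torch.sigmoid(logits)
print(prob_transit)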
3.4 Variable Stars and X-ray Binary
Analysis
AI is playing a crucial role in the study of variable
stars and X-ray binaries, enabling researchers to un-
cover new patterns and classifications. Variable stars,
known for their fluctuating brightness, are analyzed
through light curve modeling, which reveals insights
into their size, age, and other properties. This is partic-
ularly significant for stars like Cepheids and RR Lyrae,
which serve as important distance markers in the uni-
verse.
In the analysis of variable stars, machine learn-
ing algorithms automate classification by examining
light curves using techniques such as k-nearest neigh-
bors (kNN), random forests (RF), and neural networks
(NN). For example, research utilizing data from the Op-
tical Gravitational Lensing Experiment (OGLE) suc-
cessfully classified 31,798 light curves and identified
178 new variable stars. Unsupervised learning meth-
ods, like Principal Component Analysis (PCA) and
Independent Component Analysis (ICA), help iden-
tify patterns in light curves without prior labeling,
with ICA proving particularly effective for classifying
Cepheid variables. The development of hybrid neural
networks that combine convolutional neural networks
and long short-term memory (LSTM) networks fur-
ther enhances the classification process, achieving up
to 85% accuracy in studies using light curves from
OGLE and the Catalina Real-time Transient Survey
(CRTS).
Similarly, AI transforms the analysis of X-ray
binaries, where understanding variability is essential
for studying accretion processes and orbital dynam-
ics. Machine learning techniques analyze X-ray light
curves to detect patterns corresponding to different
states of these systems, such as quiescent, outburst, or
transition states. AI can also perform anomaly detec-
tion by establishing baseline behaviors through super-
vised learning, allowing researchers to identify unusual
events that may indicate new physical phenomena or
changes in system dynamics.
In both areas, AI algorithms facilitate automated
light curve analysis, enabling the identification of pat-
terns and anomalies indicative of specific types of vari-
ability. They accurately classify variable stars based on
light curve characteristics and determine the periods of
periodic variable stars, improving our understanding of
their pulsation mechanisms. Additionally, AI aids in
exoplanet detection by analyzing subtle dips in stellar
light curves caused by transiting planets.
In the realm of X-ray binaries, AI enhances source
detection and classification in large-scale surveys, an-
alyzes time-series data to study variability patterns,
and conducts spectral analysis to gain insights into the
physical conditions of accreting matter. Moreover, AI
contributes to estimating the masses and spin parame-
ters of black holes by examining their X-ray emissions.
Through these applications, AI is revolutionizing the
study of variable stars and X-ray binaries, opening new
avenues for discovery in astrophysics.
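A minimal scikit-learn sketch of such a feature-based classifier is shown below; the two features (period and amplitude) and their synthetic values are illustrative stand-ins for the much richer feature sets extracted from OGLE or CRTS light curves.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for light-curve features (period in days, amplitude in mag).
rng = np.random.default_rng(42)
n = 300
period    = np.concatenate([rng.normal(5, 1, n), rng.normal(0.5, 0.1, n)])   # Cepheid-like vs RR Lyrae-like
amplitude = np.concatenate([rng.normal(0.8, 0.2, n), rng.normal(0.4, 0.1, n)])
X = np.column_stack([period, amplitude])
y = np.array([0] * n + [1] * n)    # 0 = "Cepheid-like", 1 = "RR Lyrae-like" (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))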
3.5 Gravitational Wave Detection
Artificial Intelligence is revolutionizing the detec-
tion and analysis of gravitational waves, the ripples in
spacetime produced by catastrophic cosmic events like
black hole or neutron star mergers. Since LIGO (Laser
Interferometer Gravitational-Wave Observatory) made
its first detection in 2015, researchers have been explor-
ing innovative methods to enhance detection sensitivity
and accuracy, with AI techniques proving invaluable
for processing vast amounts of data and improving sig-
nal detection amidst noise.
Detecting gravitational waves is challenging due
to their inherently weak signals, often buried in noisy
data. Traditional detection methods can struggle to dif-
ferentiate genuine signals from various environmental
noises. However, AI models, particularly those based
on deep learning, have been developed to effectively
address this issue. For example, CNNs have been uti-
lized to classify signals from LIGO’s data, achieving
high accuracy in identifying gravitational wave events
while filtering out noise. Notably, a team at Argonne
National Laboratory created an AI framework that pro-
cessed an entire month of LIGO data in under seven
minutes, demonstrating that AI can match the sensitiv-
ity of traditional template matching algorithms while
operating more efficiently.
AI’s rapid processing capabilities enable real-time
analysis of gravitational wave signals, which is crucial
for multi-messenger astronomy, where gravitational
wave data can be combined with electromagnetic sig-
nals from events like gamma-ray bursts. Integrating
AI into this workflow allows astronomers to respond
swiftly to transient events, increasing the chances of
capturing associated phenomena across different wave-
lengths. Additionally, AI is employed for noise reduc-
tion in gravitational wave data analysis, with machine
learning models trained to recognize patterns associ-
ated with noise and effectively filter these out, enhanc-
ing the clarity of the data for more accurate signal
extraction.
AI also contributes to parameter estimation through
Bayesian inference techniques, allowing for the estima-
tion of the physical parameters of gravitational wave
sources, such as their masses, spins, and distances.
Furthermore, AI facilitates model selection to iden-
tify the most suitable physical models for observed
gravitational wave signals. In the realm of multi-
messenger astronomy, AI correlates gravitational wave
detections with observations from other astronomical
instruments, such as telescopes and neutrino detectors,
enabling joint analysis that provides a comprehensive
understanding of cosmic events. Through these appli-
cations, AI is transforming gravitational wave detec-
tion, paving the way for groundbreaking discoveries in
our understanding of the universe.
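The template matching that these AI methods are compared against can be illustrated with a toy numpy sketch: a made-up chirp-like template is slid across noisy data containing an injected signal, and the peak of the normalised correlation marks the candidate event time. Everything here (sampling rate, template shape, injection) is invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
fs = 4096                                            # sample rate in Hz
tpl_t = np.arange(0, 0.1, 1 / fs)                    # a short, made-up "chirp" template
template = np.sin(2 * np.pi * (50 * tpl_t + 400 * tpl_t**2)) * np.hanning(tpl_t.size)

data = rng.normal(0, 1, 4 * fs)                      # 4 s of white noise standing in for strain data
inj = int(2.5 * fs)
data[inj:inj + template.size] += template            # inject a signal at t = 2.5 s

corr = np.correlate(data, template, mode="valid")    # slide the template across the data
snr_like = corr / np.sqrt(np.sum(template**2))       # roughly unit variance under pure noise
print("loudest candidate at t =", np.argmax(np.abs(snr_like)) / fs, "s")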
3.6 Fast Radio Burst (FRB) Detection
Fast Radio Bursts (FRBs) are intense, millisecond-
long flashes of radio waves originating from distant
galaxies, and their origins remain one of modern as-
tronomy’s greatest mysteries. The detection of these
enigmatic signals poses significant challenges due to
their brief duration and the vast amounts of data gener-
ated by radio telescopes. Traditional methods of data
analysis are often time-consuming and inefficient, mak-
ing it difficult to identify these fleeting signals amid
the noise. However, advancements in Artificial Intelli-
gence are revolutionizing the detection and analysis of
FRBs.
AI models, particularly convolutional neural net-
works, are essential for real-time detection, enabling
astronomers to quickly respond to FRBs as they occur.
Machine learning algorithms can identify distinctive
patterns associated with FRBs, significantly reducing
the likelihood of false positives by accurately distin-
guishing genuine signals from radio interference. For
instance, an AI-based detection system developed by
Ph.D. student Wael Farah at the Molonglo Radio Ob-
servatory can identify FRBs within seconds of their
arrival, which was previously unattainable with con-
ventional methods. This system has already detected
multiple bursts, including some of the most energetic
recorded, allowing researchers to capture high-quality
data for detailed analysis.
The application of AI extends beyond simple iden-
tification to sophisticated data processing techniques.
For example, FETCH (Fast Extragalactic Transient
Candidate Hunter) is a machine-learning software de-
veloped at West Virginia University that sifts through
massive datasets to distinguish genuine FRB signals
from noise and interference. This significantly reduces
the time researchers spend on manual data analysis and
enables the classification of FRBs into different cate-
gories based on properties such as dispersion measures
and polarization. By analyzing larger datasets with AI
tools, astronomers can discern patterns and potentially
link specific characteristics of FRBs to their sources.
The integration of AI with advanced observational
techniques further enhances our ability to study FRBs.
The Canadian Hydrogen Intensity Mapping Experi-
ment (CHIME) telescope utilizes digital signal pro-
cessing methods to reconstruct signals from multiple
directions simultaneously, allowing it to detect FRBs
at a significantly higher rate than traditional telescopes.
In its first year of operation, CHIME discovered over
500 new bursts. Moreover, AI-driven systems assist in
multi-wavelength follow-up observations, where data
from optical and X-ray telescopes are analyzed along-
side radio signals, providing a comprehensive view
of the environments surrounding FRB sources. This
holistic approach enhances our understanding of the
physical conditions that give rise to these mysterious
bursts.
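One of the quantities mentioned above, the dispersion measure (DM), sets how much later a burst arrives at lower radio frequencies. A tiny Python sketch of the standard cold-plasma dispersion delay makes this concrete; the DM value and frequency band below are purely illustrative.

def dispersion_delay_ms(dm_pc_cm3, f_lo_ghz, f_hi_ghz):
    # Arrival-time delay (ms) of the lower frequency relative to the higher one,
    # using the standard dispersion constant of about 4.1488 ms GHz^2 cm^3 / pc.
    return 4.1488 * dm_pc_cm3 * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Example: a DM of 560 pc cm^-3 swept across a 400-800 MHz band takes roughly 11 seconds.
print(dispersion_delay_ms(560, 0.4, 0.8), "ms")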
3.7 Cosmology and Dark Matter
Studies
Artificial Intelligence is revolutionizing cosmo-
logical research, particularly in the study of dark mat-
ter and dark energy. By analyzing large-scale struc-
tures and aiding in the interpretation of vast datasets,
AI is transforming how researchers explore the uni-
verse’s evolution. As astronomical surveys and obser-
vational technologies improve, machine learning and
deep learning techniques are increasingly crucial for
addressing the complexities of cosmological data.
AI enhances data analysis by efficiently process-
ing the immense volumes generated by large-scale sur-
veys that map galaxy positions and velocities. This
capability allows astronomers to sift through petabytes
of information, identifying subtle signals that help de-
tect phenomena like gravitational lensing and super-
novae, which are vital for understanding dark matter
and the universe's expansion. Machine learning mod-
els, particularly convolutional neural networks, excel
at analyzing weak lensing data, enabling researchers
to extract detailed cosmological parameters that offer
deeper insights into dark matter distribution and its
influence on galaxy formation.
The characterization of dark matter remains a sig-
nificant challenge in cosmology. AI techniques are
employed to improve our understanding of its prop-
erties by modeling its gravitational effects on visible
matter. For example, researchers use machine learning
to compare theoretical predictions with observational
data, refining knowledge of how dark matter interacts
with ordinary matter and its role in cosmic structure
formation. Additionally, AI has proven instrumental in
analyzing data from gravitational wave observatories
like LIGO and Virgo, where it helps identify and clas-
sify signals from black hole mergers and neutron star
collisions, providing insights into the mass distribution
of dark matter.
Moreover, AI accelerates cosmological simula-
tions, which are essential for understanding the uni-
verse’s evolution. Traditional simulations are often
computationally intensive and time-consuming, but gen-
erative AI models can speed up this process by creat-
ing realistic simulations based on existing data. This
allows researchers to explore a broader range of sce-
narios concerning dark matter properties and cosmic
evolution without the heavy computational burden of
extensive simulations. By training AI models on these
simulations, scientists can quickly infer cosmological
parameters from observational data, streamlining the
analysis process.
3.8 Image Processing and
Enhancement for Telescope Data
Artificial Intelligence is significantly enhancing
image processing and enhancement techniques for tele-
scope data, addressing challenges such as atmospheric
distortions, noise, and other factors that degrade the
quality of astronomical images. These challenges in-
clude blurring from atmospheric turbulence, electronic
noise obscuring faint details, cosmic rays creating bright
streaks, and the limited dynamic range of captured im-
ages.
To overcome these challenges, AI-driven image
processing techniques are being employed. One pri-
mary application is deblurring, where AI algorithms
utilize deep learning models, including convolutional
neural networks, to reverse the blurring process and
recover the original image. Researchers from North-
western University and Tsinghua University have de-
veloped an AI algorithm that effectively removes blur
from ground-based telescope images, resulting in im-
ages with up to 38.6% fewer errors compared to tra-
ditional methods. This improvement is crucial for ac-
curately measuring the shapes of galaxies and under-
standing gravitational effects in the universe.
In addition to deblurring, AI techniques are also
used for noise reduction, employing filtering and wavelet
denoising methods to smooth out noise while preserv-
ing important features. Deep learning models trained
on large datasets can effectively denoise images, en-
hancing their clarity. Furthermore, algorithms can de-
tect and remove cosmic ray artifacts through detection
and interpolation, and machine learning can be applied
for more accurate identification of these artifacts.
AI-based super-resolution algorithms play a sig-
nificant role in increasing the effective resolution of
telescope images, allowing astronomers to observe finer
details of cosmic structures and distant galaxies. This
capability has applications in both ground-based obser-
vatories and space telescopes like Hubble. To optimize
performance, researchers train AI models using sim-
ulated data that mimics future telescope conditions,
preparing them for real-world applications.
The development of open-source AI tools has de-
mocratized access to these algorithms, enabling as-
tronomers worldwide to adapt them for specific needs
and encouraging collaboration within the astronomical
community. As telescopes become more sophisticated,
integrating AI into their operational frameworks is es-
sential. For instance, the James Webb Space Telescope
(JWST) employs various algorithms for image opti-
mization and noise correction, enhancing its ability to
capture detailed images across different wavelengths.
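As a minimal illustration of learned denoising (entirely synthetic, and far simpler than the published algorithms mentioned above), a small convolutional network can be trained to map noisy image patches to clean ones:

import torch
import torch.nn as nn

# Minimal denoising CNN: maps a noisy 64x64 image patch to a cleaned one.
# Trained here on synthetic data only, as a stand-in for real telescope frames.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    clean = torch.zeros(16, 1, 64, 64)
    xs, ys = torch.randint(8, 56, (16,)), torch.randint(8, 56, (16,))
    clean[torch.arange(16), 0, ys, xs] = 5.0                  # one bright "star" per patch
    noisy = clean + 0.3 * torch.randn_like(clean)             # add background noise
    loss = nn.functional.mse_loss(net(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()

print("final training loss:", float(loss))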
3.9 Astroparticle Physics and Cosmic
Ray Detection
AI is making significant strides in astroparticle
physics, particularly in the detection and analysis of
cosmic rays and neutrinos. Cosmic rays, which are
high-energy particles primarily consisting of protons,
originate from outer space and interact with the Earth's
atmosphere. Their study is crucial for understanding
fundamental astrophysical processes. AI techniques
enhance detection capabilities, improve data analysis,
and facilitate real-time monitoring of cosmic ray activ-
ity.
In observatories like IceCube, AI models pro-
cess massive data volumes to distinguish rare neutrino
events from background noise, enabling insights into
high-energy cosmic processes. Similarly, AI aids in
identifying and classifying cosmic ray events by ana-
lyzing their trajectories and interaction patterns, con-
tributing to a deeper understanding of the sources and
propagation mechanisms of high-energy particles in
the universe.
Recent initiatives, such as the Ray Deep Project,
utilize deep learning techniques for real-time cosmic
ray detection by identifying ionizing particles like muons
and electrons through advanced image processing meth-
ods. This project employs convolutional neural net-
works on FPGA (Field-Programmable Gate Array) de-
vices to achieve high-speed analysis, allowing detec-
tion times of less than 50 milliseconds and generating
alerts regarding increases in cosmic ray activity. An-
other noteworthy advancement is the AstroTeq.ai de-
tector, which accurately detects muons from cosmic
rays using scintillators connected to silicon photomul-
tiplier (SiPM) diodes in a TOP-BOTTOM coincidence
mode for enhanced accuracy.
AI’s role extends beyond detection to encompass
sophisticated data analysis techniques. For instance,
deep learning algorithms are applied to remove back-
ground noise from cosmic ray data collected in neutrino
experiments. In liquid argon time projection chambers
(LArTPCs), where cosmic muons often dominate, deep
neural networks classify pixels to filter out irrelevant
data, improving the quality of scientific results from
neutrino experiments.
Furthermore, AI technologies enable real-time
monitoring of cosmic ray activity, which is vital for
applications like muon tomography, an imaging tech-
nique that utilizes muons generated by cosmic rays to
create detailed images of structures such as volcanoes
or nuclear facilities. By integrating AI systems capable
of rapid data processing, researchers can generate alerts
for increased cosmic ray activity, allowing timely re-
sponses in various contexts, including safety measures
for environments exposed to radiation.
3.10 Conclusion
The application of AI in astronomy and astro-
physics is transforming the field by enabling the analy-
sis of large, complex datasets and uncovering patterns
and phenomena that traditional methods struggle to
detect. AI’s versatility, from object classification and
exoplanet detection to gravitational wave identification
and cosmological modeling, accelerates scientific dis-
covery and helps address fundamental questions about
the nature of the universe. As AI continues to evolve,
its role in astrophysical research will likely expand,
opening new frontiers in our understanding of the cos-
mos.
References:
AI in Astronomy
Astronomers are enlisting AI to prepare for a
data downpour
Welcome to the AI future?
Artificial intelligence helps in the identification
of astronomical objects
AI Revolutionizes Exoplanet Discovery Uncov-
ering New Worlds
New artificial Intelligence-based tools can help
finding habitable planets
X-ray Binary
Black Holes and X-ray binaries
Gravitational-wave observatory
Searching for exoplanets using artificial intelli-
gence
A real-time fast radio burst: polarization detec-
tion and multiwavelength follow-up
The FRATS project: real-time searches for fast
radio bursts and other fast transients with LO-
FAR at 135 MHz
Detection of ultra-fast radio bursts from FRB
20121102A
Astronomers detect most distant fast radio burst
to date
DOE Explains...Dark Matter
Dark matter
Dark Energy and Dark Matter
Tools & Training
Retouching of astronomical data for the produc-
tion of outreach images
Image Quality and Calibration
Astroparticle physics
What are fast radio bursts?
About the Author
Sindhu G is a research scholar in Physics
doing research in Astronomy & Astrophysics. Her
research mainly focuses on classification of variable
stars using different machine learning algorithms. She
is also doing the period prediction of different types
of variable stars, especially eclipsing binaries and on
the study of optical counterparts of X-ray binaries.
Part III
Biosciences
Harnessing Artificial Intelligence for
Biodiversity Conservation: A New Era of
Ecological Understanding
by Geetha Paul
airis4D, Vol.2, No.11, 2024
www.airis4d.com
(Image courtesy)
1.1 Introduction
Biodiversity, the variety of life on Earth, is essen-
tial for maintaining ecological balance and supporting
human existence. However, the rapid loss of biodiver-
sity due to human activities poses significant challenges
for conservationists and researchers. Traditional meth-
ods of studying and conserving biodiversity are often
labour-intensive and time-consuming, making it chal-
lenging to keep pace with the scale of the crisis. In
this context, Artificial Intelligence (AI) and Machine
Learning (ML) have emerged as transformative tools
that can enhance our understanding of biodiversity and
improve conservation efforts. Integrating Artificial In-
telligence into biodiversity research represents a trans-
formative shift in our approach to conservation. AI
empowers researchers and policymakers by enhanc-
ing data collection efficiency, improving accuracy in
species identification, enabling real-time monitoring
and predictive analytics, supporting ecological restora-
tion efforts and facilitating interdisciplinary collabora-
tion. As we harness the power of intelligent machines
to address current issues surrounding biodiversity loss,
such as habitat destruction due to climate change or
urbanisation, we move closer to a sustainable future
where human knowledge and technological innovation
work hand in hand to protect our planet's rich bio-
logical heritage. By embracing these advancements
while remaining vigilant about ethical considerations
surrounding their use, we can pave the way for more
effective conservation strategies that ensure a thriv-
ing planet for future generations. The applications are
vast, from identifying rare species to improving habitat
restoration efforts, demonstrating that AI is not merely
a tool but a crucial ally in our quest to conserve the
natural world. Today, AI is becoming a powerful force
in nature conservation, with applications ranging from
monitoring wildlife to collecting environmental DNA.
1.2 The Challenge of Biodiversity
Loss
The current biodiversity crisis is alarming, with
estimates suggesting that over one million species face
extinction in the coming decades. Habitat destruction,
climate change, pollution, and overexploitation drive
this decline. As ecosystems become increasingly frag-
mented and degraded, the need for effective monitoring
and conservation strategies has never been more urgent.
1.3 The Role of AI in Biodiversity
Conservation and Data Collection
Efficiency
AI technologies, particularly deep learning algo-
rithms, have shown great promise in addressing these
challenges. By processing vast amounts of data quickly
and accurately, AI can streamline data collection and
analysis, enabling researchers to focus on critical con-
servation tasks. Traditional methods of gathering eco-
logical data often involve extensive fieldwork, which
can be time-consuming and costly. AI-powered sys-
tems can analyse data from various sources—remote
sensors, camera traps, and satellite imagery—24/7.
This capability allows for real-time monitoring of wildlife
activity and environmental changes. For instance, AI
algorithms can identify species from images captured
by camera traps, significantly reducing the time re-
quired for manual identification. Predictive analytics
is one of AI’s most exciting applications in biodiversity
conservation. By analysing historical data, AI models
can forecast future trends and identify areas at risk of
habitat loss or species decline. For example, AI can
assess how climate change will impact various ecosys-
tems, aiding in adaptation planning. This proactive
approach allows conservationists to develop strategies
to mitigate potential threats before they escalate.
Image courtesy: U.S. Geological Survey.
Fig: 1 Perspective in machine learning for wildlife
conservation. Studies frequently combine data from
multiple sensors at the same geographic location or
data from various locations to achieve deeper ecolog-
ical insights. Sentinel-2 (ESA) satellite / U.S. Geological
Survey.
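As a minimal sketch of AI-assisted species identification from camera-trap images, the snippet below runs a generic ImageNet-pretrained classifier over a single frame; real deployments use models trained on dedicated wildlife datasets, and the file name here is only a placeholder.

import torch
from torchvision import models
from PIL import Image

# Requires a recent torchvision (>= 0.13); "camera_trap.jpg" is a placeholder file name.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                       # resizing/normalisation expected by the model

img = Image.open("camera_trap.jpg").convert("RGB")
with torch.no_grad():
    probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)

top = probs.topk(3)                                     # three most probable classes
labels = weights.meta["categories"]
for p, i in zip(top.values[0], top.indices[0]):
    print(f"{labels[int(i)]}: {float(p):.2f}")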
1.4 Wildlife Monitoring and
Protection
AI is crucial in monitoring vulnerable wildlife
populations. Advanced technologies such as acoustic
monitoring can detect animal calls and alert researchers
to the presence of rare or endangered species. Ad-
ditionally, drones equipped with AI capabilities can
patrol protected areas to detect illegal activities like
poaching.
(Image courtesy)
1.4.1 Acoustic Monitoring
Acoustic monitoring and sound recordings can
detect animal calls, and researchers can listen to the
recording as many times as needed to confirm an ID.
Repeated listening by multiple experts allows scru-
tinising data for more accurate interpretation.
1.4.2 Revolutionising Wildlife Monitoring
with Advanced Technology
(Image courtesy)
Fig:3. Artificial intelligence-based image recog-
nition and computer vision technologies help identify
species from photos and videos, helping researchers
track and study wildlife populations.
(Image courtesy)
Fig:4. DeepMind, a UK-based company, has de-
veloped an artificial intelligence model designed to
identify various animal species and assess their popu-
lations. This initiative is based in Serengeti National
Park, Tanzania, where scientists utilise AI technology
to detect wildlife, analyse large volumes of data col-
lected with the help of advanced camera traps and work
towards conserving endangered species before it be-
comes too late.
1.5 Wildlife Conservation Tools and
Technologies
Camera Traps: Equipped with motion sensors,
camera traps capture images and videos of wildlife,
providing valuable data on species presence and be-
haviour.
GPS Tracking Collars: These collars provide real-
time location data, helping researchers track animal
movements and study habits.
Remote Sensing: Satellite images are used to monitor
and manage habitats.
UAV: Unmanned aerial vehicles, or drones, for moni-
toring habitats.
GIS Mapping: Geographic Information Systems to map
and monitor species and habitats.
Habitat Mapping: Satellite imagery aids in identi-
fying critical habitats for various species. By mapping
these areas, conservationists can prioritise regions for
protection and restoration.
Population Dynamics: GIS tools enable the analy-
sis of wildlife population trends by correlating satellite
data with field observations and mapping and monitor-
ing species and habitats. This helps assess the health
of populations and determine conservation strategies.
DNA Analysis: Genetic analysis helps researchers
understand population genetics, identify individual an-
imals, and monitor genetic diversity.
Predictive Modeling: AI can predict future popu-
lation trends and potential threats, allowing for proac-
tive conservation efforts.
Habitat Analysis: AI algorithms can analyse habi-
tat data to identify critical conservation and restoration
areas.
Disease Monitoring: AI can detect patterns in
wildlife health data, helping to identify and respond to
disease outbreaks.
Climate Impact Studies: AI helps researchers un-
derstand how climate change affects wildlife, guiding
conservation strategies in a changing environment.
Table 1: List of technologies to work smarter and faster,
with aspects, descriptions and examples of AI tools
(Source).
1.7 Case Studies
Camera Traps: Organisations like the Zoological
Society of London utilise AI to analyse millions of
images from camera traps deployed in natural habitats.
This technology enables precise population estimates
and real-time monitoring of biodiversity.
Acoustic Monitoring: Projects like BirdNET use
AI to identify over 3,000 bird species from audio record-
ings. This innovation allows scientists and citizen sci-
entists alike to contribute to biodiversity monitoring on
an unprecedented scale.
Anti-Poaching Efforts: The SMART (Spatial Mon-
itoring and Reporting Tool) platform integrates AI to
analyse data from ranger patrols and camera traps.
This system has significantly reduced illegal hunting
by identifying poaching hotspots in several key conser-
vation areas. Real-time data allows for rapid response
to threats like poaching or natural disasters.
AI also aids in ecological restoration efforts by op-
timising strategies for rehabilitating degraded ecosys-
tems. AI can simulate ecosystem behaviour under
various scenarios, allowing scientists to test different
restoration approaches. Drones equipped with AI can
identify and remove invasive species, helping restore
native flora and fauna balance. AI sensors assess soil
health to optimise the selection of native plant species
for restoration projects.
1.8 Behavioural Studies Through
Machine Learning
In addition to identification and monitoring tasks,
machine learning algorithms can facilitate behavioural
studies among various species.
Grooming Behaviors: AI systems can analyse
video footage to observe grooming behaviours in fish
or other aquatic organisms, providing insights into their
social interactions.
Courtship Displays: By studying courtship be-
haviours through video analysis, researchers can better
understand mating strategies in different species.
These behavioural insights are vital for develop-
ing effective conservation strategies tailored to specific
ecological needs.
1.9 Ethical Considerations
While AI offers immense potential for biodiver-
sity conservation, it also raises ethical questions that
must be addressed. Issues related to data privacy, po-
tential job displacement for human workers, and the
implications of using AI in wildlife management need
careful consideration. Conservationists must navigate
these challenges while leveraging technology to protect
vulnerable ecosystems. As researchers increasingly
rely on data collected from public sources or citizen
scientists, ensuring data privacy becomes paramount.
Ethical guidelines must be established to protect indi-
viduals' rights while promoting transparency in bio-
diversity research. The automation by AI may lead
to concerns regarding job displacement among tradi-
tional fieldworkers or researchers. While AI enhances
efficiency, it is essential to consider how these tech-
nologies can complement human expertise rather than
replace it.
1.10 Conclusion
The integration of Artificial Intelligence into bio-
diversity research represents a paradigm shift in our ap-
proach to conservation. By automating data collection
and analysis, predictive modelling, and wildlife mon-
itoring, AI empowers researchers to make informed
decisions that enhance our understanding of complex
ecological systems. As we harness the power of intel-
ligent machines to address pressing issues surrounding
biodiversity loss, we move closer to a sustainable future
where human knowledge and technological innovation
work hand in hand to protect our planet's rich biological
heritage. The applications are vast—from identifying
rare species to improving habitat restoration efforts—
demonstrating that AI is not just a tool but a crucial
ally in our quest to conserve the natural world. By
embracing these technological advancements while re-
maining vigilant about ethical considerations, we can
pave the way for more effective conservation strategies
that ensure a thriving planet for future generations.
1.11 Future Directions
Looking ahead, further research is needed to re-
fine AI algorithms tailored explicitly for biodiversity
applications. Collaborations between ecologists and
computer scientists will be critical in developing ro-
bust models for ecological complexities.
Moreover, expanding access to training datasets
will enhance machine learning capabilities across var-
ious taxa globally. Engaging citizen scientists through
mobile applications could facilitate data collection while
fostering public interest in biodiversity conservation.
In summary, as we stand at the intersection of tech-
nology and ecology, embracing artificial intelligence
offers an unprecedented opportunity for understanding
and actively preserving our planet's invaluable biodiversity.
(Image courtesy)
Fig 5: This image showcases a robotic arm gently
holding a lush green sapling toward a human hand ex-
tended in acceptance. The white robotic arm, with
sleek design and dexterous fingers, symbolises the
pinnacle of modern engineering and the potential for
artificial intelligence to coexist with and nurture the
natural world. The human hand’s soft, warm skin
tone contrasts with the cool precision of the robotic
limb, conveying a narrative of trust and partnership
between humanity and machines. The verdant leaves
of the sapling, backlit by soft, diffused daylight filtering through a window, introduce a sense of hope and growth that symbolises a harmonious future.
“By prioritising biodiversity conservation and pro-
tecting natural habitats, with the help of AI tools,
we can cultivate a harmonious coexistence between
technology and nature, ultimately fostering a healthier
planet for future generations”.
References
4 Reasons why acoustic monitoring reigns in wildlife research
https://saiwa.ai/blog/ai-in-wildlife-conservation/
https://www.nature.com/articles/s41467-022-27980-y/
AI in conservation: where we came from and where we are heading
https://hcrobo.com/ai-wildlife-monitoring-automated-wildlife-tracking/
Silvestro, D., Goria, S., Sterner, T., et al. (2022). Improving biodiversity protection through artificial intelligence. Nature Sustainability, 5, 415–424. https://doi.org/10.1038/s41893-022-00851-6
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular Bi-
ology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Bronchopulmonary Dysplasia in Preterm
Infants: Addressing the Challenges and
Advancements in Management
by Kalyani Bagri
airis4D, Vol.2, No.11, 2024
www.airis4d.com
2.1 Introduction
Bronchopulmonary Dysplasia (BPD) is a chronic
lung condition predominantly affecting preterm infants,
particularly those born before 28 weeks of gestation.
It is characterized by a need for supplemental oxygen or respiratory support for at least 28 days post-birth. BPD's
severity is classified based on oxygen dependency at
36 weeks postmenstrual age (PMA). This classifica-
tion helps differentiate the stages of the disease, which
typically develops due to disrupted lung maturation,
exacerbated by necessary medical interventions such
as mechanical ventilation and oxygen therapy.
Since Dr. William Northway’s initial description
in 1967, our understanding of BPD has evolved signif-
icantly. Originally attributed to lung injury from prolonged mechanical ventilation and high oxygen exposure, classic BPD was thought to cause inflammation,
fibrosis, and alveolar damage. However, advancements
in neonatal care, including gentler ventilation methods,
surfactant therapy, and antenatal corticosteroids, have
transformed the clinical presentation of BPD. Now, it is
recognized as a developmental disorder resulting from
interrupted lung growth rather than solely lung injury.
2.2 Mortality and Morbidity
BPD remains a leading complication of prematu-
rity, substantially contributing to infant mortality and
morbidity. Severe BPD increases the risk of early mor-
tality due to respiratory failure, pulmonary hyperten-
sion, and other life-threatening conditions. Despite
neonatal care advancements, severe BPD cases con-
tinue to carry high mortality rates, particularly in ex-
tremely preterm infants.
Long-term morbidity associated with BPD in-
cludes chronic respiratory problems, such as recurrent
infections, wheezing, and impaired gas exchange. Ad-
ditionally, BPD is linked to neurodevelopmental chal-
lenges, including motor delays, cognitive impairments,
and behavioral issues. These ongoing health problems
result in prolonged hospital stays, increased healthcare
costs, and frequent readmissions, significantly impact-
ing quality of life and developmental outcomes for af-
fected children.
2.3 Short-Term and Long-Term
Outcomes
Preterm infants with BPD often require extended
stays in the neonatal intensive care unit (NICU) due
to prolonged respiratory support, which increases their
vulnerability to infections and feeding difficulties. These
complications can hinder growth and neurodevelop-
ment, while prolonged mechanical ventilation raises
the risk of ventilator-associated complications, further
exacerbating health challenges.
The consequences of BPD persist beyond infancy,
impacting long-term outcomes. Children with a his-
tory of BPD are more likely to experience respiratory
issues, including reduced lung function and an ele-
vated risk of chronic obstructive pulmonary disease
(COPD) in adulthood. Structural changes in the lungs
can persist into adolescence and adulthood, leading to
impaired exercise tolerance and increased susceptibil-
ity to respiratory illnesses. Moreover, BPD contributes
to neurodevelopmental delays, affecting learning, be-
havior, and overall quality of life.
2.4 Pathogenesis of BPD
The pathogenesis of BPD in preterm infants in-
volves a complex interplay of prenatal and postnatal
mechanisms that disrupt lung development. Maternal
inflammation, such as chorioamnionitis, can precipi-
tate lung injury before birth, while intrauterine growth
restriction (IUGR), placental dysfunction, and mater-
nal smoking can also compromise lung development,
increasing vulnerability to postnatal injury.
Following birth, mechanical ventilation, oxygen
toxicity, infections, and nutritional deficiencies fur-
ther contribute to lung damage. Although essential for
survival, high oxygen levels generate oxidative stress,
damaging delicate lung tissue. Even with modern,
gentler ventilation techniques, mechanical ventilation
can still traumatize immature lungs. Infections and in-
flammation exacerbate lung development impairment,
leading to abnormal alveolarization and vasculariza-
tion.
The cumulative effect of these prenatal and post-
natal insults is arrested alveolar growth, abnormal lung
architecture, and impaired gas exchange, the hallmark characteristics of BPD.
2.5 Prevention and Treatment
Challenges
Despite significant advancements in neonatal care,
which have improved the survival rates of preterm in-
fants, preventing and treating BPD remains a formidable
challenge. Current treatments, including antenatal cor-
ticosteroids, surfactant replacement therapy, and less
invasive ventilation strategies, have mitigated the sever-
ity of respiratory distress but have not substantially
reduced BPD incidence. The ongoing struggle to pre-
vent BPD stems from the delicate balance between
life-saving interventions, such as oxygen therapy and
mechanical ventilation, and their potential harm to the
developing lungs of extremely preterm infants.
Furthermore, the intricate pathogenesis of BPD,
involving multiple prenatal and postnatal factors, com-
plicates efforts to prevent its onset fully. As a result,
preventing BPD remains an area of ongoing research,
with clinicians seeking innovative strategies to mini-
mize lung damage and promote healthy lung develop-
ment in vulnerable preterm infants.
2.6 Postnatal Corticosteroids
Postnatal corticosteroids (PNCS) have been a main-
stay in managing preterm infants at risk of BPD, with
established efficacy in reducing incidence and sever-
ity. However, concerns about potential long-term ad-
verse effects have sparked ongoing debate and evolv-
ing guidelines. The mechanism of action of PNCS in
preventing BPD is complex, involving reduced lung in-
flammation, promoted lung maturation, and improved
oxygenation.
Although corticosteroid therapy became a stan-
dard intervention for preterm infants at risk of BPD
in the 1990s, a growing number of long-term follow-up studies have raised concerns about optimal dosage and dura-
tion. PNCS therapy in preterm infants carries both
short-term and long-term risks.
In the short term, corticosteroids can increase the
risk of infection by suppressing the immune system,
leading to sepsis and other infections. Additionally,
PNCS can cause metabolic complications, such as hy-
perglycemia and hypertension, complicating manage-
ment and increasing adverse event risk. Corticosteroid
use may also result in gastrointestinal complications,
including feeding intolerance and an increased risk of
necrotizing enterocolitis (NEC). Furthermore, adrenal
suppression is a concern, requiring careful tapering and
monitoring for signs of adrenal insufficiency.
Long-term risks associated with PNCS therapy in-
clude neurodevelopmental impairment, with high-dose
dexamethasone exposure potentially increasing the risk
of cerebral palsy and cognitive deficits. Growth retar-
dation is another concern, as corticosteroid therapy can
negatively impact growth in preterm infants, leading to
long-term growth deficits. Moreover, corticosteroid
exposure may impair lung development, affecting lung
function well into childhood. Endocrine dysfunction,
particularly related to adrenal insufficiency, is also a
potential long-term consequence.
Balancing the benefits and risks of PNCS therapy
requires careful consideration of individual patient fac-
tors, including gestational age, birth weight, and clini-
cal status, to optimize treatment and minimize adverse
effects.
2.7 Predictive Modeling
To tackle the complexities surrounding corticos-
teroid use, innovative tools have been developed to help
clinicians tailor treatments to individual infants. No-
tably, the National Institute of Child Health and Human
Development (NICHD) Bronchopulmonary Dysplasia
Estimator (BPDE) is a predictive model that utilizes
patient-specific factors to assess BPD risk, enabling
informed decisions regarding corticosteroid use. By
identifying high-risk infants, clinicians can prioritize
early and aggressive interventions, such as postnatal
corticosteroids (PNCS) and surfactant therapy, reduc-
ing unnecessary corticosteroid exposure and minimiz-
ing adverse effects.
The estimator identifies infants at highest risk of
developing BPD, allowing clinicians to prioritize early
intervention. Additionally, it optimizes steroid dos-
ing by tailoring therapy to an infant's individual risk
profile, minimizing adverse effects while maximizing
therapeutic benefits. Furthermore, the BPDE enables
continuous monitoring of treatment response, allowing
for dynamic adjustments to the therapeutic regimen, re-
sulting in a more personalized and effective treatment
plan.
2.7.1 The Need for a Separate BPD
Estimator for India
The application of international guidelines and
BPD estimators, such as the NICHD BPD estimator,
in the Indian context is limited due to several factors.
India's diverse population, characterized by varying ge-
netic and environmental factors, influences BPD risk,
making it challenging to directly apply international
models. Additionally, differences in clinical practices,
including oxygen therapy and surfactant administra-
tion, across various Indian healthcare settings further
complicate the use of standardized estimators.
Moreover, the lack of extensive, high-quality re-
search studies on preterm infants in India makes it
difficult to create and confirm the accuracy of reliable
estimators/predictive models for BPD. Consequently,
developing a tailored BPD estimator for the Indian con-
text is essential to enhance risk prediction accuracy and
optimize clinical decision-making.
Such a model should incorporate population-specific
factors, including maternal nutritional status, socioe-
conomic factors, and exposure to environmental pollu-
tants, to better reflect the Indian scenario. By account-
ing for these unique factors, an India-specific BPD
estimator can provide more reliable risk assessments,
ultimately improving outcomes for preterm infants.
2.7.2 Web-Based BPD Estimator at
Fernandez Hospital, Hyderabad
We developed and validated a predictive model
for BPD using a retrospective cohort of 845 preterm
infants (206 with BPD and 639 without BPD) admitted
to the Fernandez Foundation's Neonatal Intensive Care
Unit (NICU) from 2018 to 2022. Our study integrated
various perinatal and respiratory risk factors, employ-
ing an ensemble model of Random Forest algorithms
to optimize predictive accuracy.
The model demonstrated robust performance, achiev-
ing 99.0% accuracy on the training set and 80.3% on the
testing set, comparable to the established NICHD BPD
estimator. Notably, our model predicts the risk of BPD
and provides percentage probabilities for both classes
(BPD and No-BPD), enhancing clinical decision-making.
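As a rough illustration of how a random-forest ensemble can report percentage probabilities for both classes rather than a bare label, consider the sketch below. It is not the Fernandez model itself: the feature names (gestational age, birth weight, days of ventilation, early-onset sepsis) and the toy data are illustrative assumptions, and scikit-learn's RandomForestClassifier stands in for the actual ensemble.

```python
# Illustrative sketch only: a random-forest risk model that reports
# percentage probabilities for BPD vs No-BPD. Features and data are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical perinatal records (in practice: hundreds of NICU admissions).
df = pd.DataFrame({
    "gestational_age_wk": [25, 31, 27, 33, 26, 30, 28, 34],
    "birth_weight_g":     [710, 1500, 900, 1900, 780, 1350, 1050, 2100],
    "ventilation_days":   [21, 2, 14, 0, 25, 3, 9, 0],
    "early_onset_sepsis": [1, 0, 1, 0, 1, 0, 0, 0],
    "bpd":                [1, 0, 1, 0, 1, 0, 1, 0],   # 1 = BPD, 0 = No-BPD
})

X, y = df.drop(columns="bpd"), df["bpd"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("Held-out accuracy:", model.score(X_te, y_te))

# Probability estimates for a new (hypothetical) infant, as percentages.
new_infant = pd.DataFrame([{"gestational_age_wk": 26, "birth_weight_g": 820,
                            "ventilation_days": 18, "early_onset_sepsis": 1}])
p_no_bpd, p_bpd = model.predict_proba(new_infant)[0]
print(f"No-BPD: {p_no_bpd:.0%}  BPD: {p_bpd:.0%}")
```

In practice such a model would be trained on the full retrospective cohort and validated on held-out admissions before any probability is presented to a clinician.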
Our findings highlight key contributors to BPD
pathogenesis, particularly early-onset sepsis, Necrotiz-
ing Enterocolitis (NEC), and Patent Ductus Arteriosus
(PDA). The predictive model is particularly advanta-
geous in optimizing PNCS use by reducing unnecessary exposure and its associated risks, and by improving overall efficacy. To further enhance the model's utility, we are
expanding it to include multiclass classification, cate-
gorizing BPD into mild, moderate, and severe classes.
This advancement aims to facilitate targeted interven-
tions and personalized care for high-risk preterm in-
fants.
To facilitate clinical implementation, our predic-
tive model has been developed as a user-friendly web-
based estimator, enabling healthcare providers to ac-
cess and utilize the tool for real-time assessments and
tailored care strategies.
2.8 Emerging Treatments and Future
Directions
Emerging treatments and novel approaches offer
hope for reducing the burden of BPD. According to the
National Institutes of Health, several promising strate-
gies are being explored.
One area of research focuses on stem cell therapy,
specifically mesenchymal stem cells (MSCs), which
have shown potential in repairing lung tissue and re-
ducing inflammation. Preclinical studies have demon-
strated that MSCs can promote lung growth and reduce
fibrosis in animal models of BPD, and early-phase clin-
ical trials are investigating their safety and efficacy in
neonates.
Additionally, new pharmacological agents target-
ing inflammation, oxidative stress, and lung develop-
ment are being developed. Anti-inflammatory agents,
antioxidants, and growth factors that promote alveolar-
ization and vascularization hold promise in preventing
or treating BPD.
Advances in genetic research also pave the way
for precision medicine approaches, enabling targeted
interventions based on an infant's genetic profile. This
personalized approach may improve outcomes and re-
duce BPD risk.
Moreover, revolutionary changes are on the hori-
zon in neonatology, particularly with artificial womb
technology. By mimicking the intrauterine environ-
ment, this innovation supports extremely preterm in-
fants during critical lung development stages, poten-
tially drastically reducing the need for mechanical ventilation and oxygen therapy, preventing BPD before it starts.
References
1. Bonadies, L., Zaramella, P., Porzionato, A., Per-
ilongo, G., Muraca, M., & Baraldi, E. (2020).
Present and Future of Bronchopulmonary Dys-
plasia. Journal of Clinical Medicine, 9(5), 1539.
https://doi.org/10.3390/jcm9051539. PMID: 32443685;
PMCID: PMC7290764.
2. Holzfurtner, L., Shahzad, T., Dong, Y., Rekers,
L., Selting, A., Staude, B., Lauer, T., Schmidt,
A., Rivetti, S., Zimmer, K. P., Behnke, J., Bel-
lusci, S., & Ehrhardt, H. (2022). When inflam-
mation meets lung development—An update on
the pathogenesis of bronchopulmonary dyspla-
sia. Molecular and Cellular Pediatrics, 9(1), 7.
https://doi.org/10.1186/s40348-022-00137-z. PMID:
35445327; PMCID: PMC9021337.
3. Zayat, N., Truffert, P., Drumez, E., Duhamel, A.,
Labreuche, J., Zemlin, M., Milligan, D., Maier,
R. F., Jarreau, P. H., Torchin, H., Zeitlin, J.,
Nuytten, A., & EPICE Research Group. (2022).
Systemic Steroids in Preventing Bronchopulmonary
Dysplasia (BPD): Neurodevelopmental Outcome
According to the Risk of BPD in the EPICE Co-
hort. International Journal of Environmental
Research and Public Health, 19(9), 5600. https:
//doi.org/10.3390/ijerph19095600. PMID: 35564997;
PMCID: PMC9106050.
4. Tracy, M. C., & Cornfield, D. N. (2020). Bron-
chopulmonary Dysplasia: Then, Now, and Next.
Pediatric Allergy, Immunology, and Pulmonology,
33(3), 99–109. https://doi.org/10.1089/ped.2020.
1205. PMID: 35922031; PMCID: PMC9354034.
5. Nuthakki, S., Ahmad, K., Johnson, G., & Cuevas
Guaman, M. (2023). Bronchopulmonary Dys-
plasia: Ongoing Challenges from Definitions to
Clinical Care. Journal of Clinical Medicine,
12(11), 3864. https://doi.org/10.3390/jcm12113864.
PMID: 37298058; PMCID: PMC10253815.
6. Muehlbacher, T., Bassler, D., & Bryant, M. B.
(2021). Evidence for the Management of Bron-
chopulmonary Dysplasia in Very Preterm In-
fants. Children (Basel), 8(4), 298. https://doi.
org/10.3390/children8040298. PMID: 33924638;
PMCID: PMC8069828.
7. Lemyre, B., Dunn, M., & Thebaud, B. (2020).
Postnatal corticosteroids to prevent or treat bron-
chopulmonary dysplasia in preterm infants. Pae-
diatrics & Child Health, 25(5), 322–331. https://
doi.org/10.1093/pch/pxaa073. PMID: 32765169;
PMCID: PMC7395322.
8. Balany, J., & Bhandari, V. (2015). Understand-
ing the Impact of Infection, Inflammation, and
Their Persistence in the Pathogenesis of Bron-
chopulmonary Dysplasia. Frontiers in Medicine,
2, 90. https://doi.org/10.3389/fmed.2015.00090.
PMID: 26734611; PMCID: PMC4685088.
9. Aleem, S., & Greenberg, R. G. (2023). Accu-
rate Prediction of Bronchopulmonary Dysplasia:
Are We There Yet? Journal of Pediatrics, 258,
113389. https://doi.org/10.1016/j.jpeds.2023.03.
004. Epub 2023 Mar 16. PMID: 36933768.
About the Author
Dr. Kalyani Bagri is a Senior Research
Associate at Fernandez Foundation in Hyderabad, In-
dia, where she plays a pivotal role as the lead data sci-
entist in the Neonatology Department. She earned her
Ph.D. in Astrophysics from Pt. Ravishankar Shukla
University, in collaboration with IUCAA and TIFR
Mumbai. In her current role, Dr. Bagri independently
integrates Artificial Intelligence (AI) into neonatal
care. She is leading critical projects, including the
development of a Bronchopulmonary Dysplasia esti-
mator to guide strategic decisions and optimize steroid
usage in neonates, and a sepsis calculator designed to
detect sepsis in neonates prior to clinical recognition.
Additionally, she is involved in a range of initiatives
aimed at advancing neonatal health outcomes through
her innovative and data-driven approaches.
Ethical Implications of Using AI in Scientific
Research
by Jinsu Ann Mathew
airis4D, Vol.2, No.11, 2024
www.airis4d.com
In just a few short years, Artificial Intelligence
(AI) has transformed scientific research in ways that
were once the stuff of science fiction. Imagine a sys-
tem that can analyze mountains of data in minutes,
help design complex molecules for new drugs, review
thousands of research papers at lightning speed, or even
generate fresh ideas and hypotheses for scientists to ex-
plore. AI is doing all of this and more, fundamentally
changing how research is done.
In physics, for example, AI is being used to simu-
late and predict the behavior of particles at the smallest
scales, helping scientists understand the fundamental
forces of the universe. In fields like medicine, it’s help-
ing predict which new treatments might work best for
specific diseases. AI systems are even working along-
side scientists to test theories, analyze experimental
data, and reveal hidden connections across fields.
But with these amazing capabilities come impor-
tant ethical questions. How do we ensure the findings
from AI are accurate and trustworthy? Can we fully
understand and explain the conclusions AI reaches, or
are we taking a “black box” at its word? And what
happens if the data AI learns from is biased? If not
carefully managed, AI could amplify those biases, af-
fecting the results in ways that could mislead science or
harm society. Addressing these ethical issues is crucial
to ensure that AI contributes responsibly and equitably
to scientific progress. This article delves into these
ethical challenges and their impact on the reliability
and integrity of AI-driven research.
3.1 Ensuring Accuracy and
Trustworthiness in AI Findings
One of the most significant ethical challenges in
using AI for research is ensuring that its findings are
accurate and trustworthy. While AI algorithms have
incredible potential to process data and detect patterns,
they’re not infallible. They operate in highly special-
ized domains, often working with data that is incom-
plete, noisy, or inconsistent. If AI models produce
inaccurate predictions or flawed conclusions, the con-
sequences can be costly and, in some fields, even dan-
gerous. This is particularly concerning in fields where
research findings influence policy, public health, or
treatment plans.
Take the example of AI in drug discovery, where
researchers use AI models to predict the potential ef-
fectiveness of new compounds. AI can sift through vast
chemical databases, analyzing millions of compounds
and identifying those that could potentially treat spe-
cific diseases. This process is incredibly valuable for
accelerating the drug discovery pipeline. However, if
these AI predictions are inaccurate, scientists might
invest significant time and resources into testing com-
pounds that ultimately prove ineffective or even unsafe.
Another example can be found in AI applications
in environmental science, where AI is used to forecast
climate patterns, track pollution levels, and assess the
impacts of deforestation or other ecological changes.
Inaccurate predictions in these areas could misguide
critical environmental policies. For instance, if an AI
system underestimates the pollution levels in a spe-
cific area due to incomplete or outdated data, regula-
tory agencies may fail to impose necessary pollution
controls, putting ecosystems and public health at risk.
In such cases, trust in AI predictions is paramount, as
these outcomes have long-lasting effects on society and
the planet.
To ensure AI’s trustworthiness, models must un-
dergo rigorous testing, validation, and continuous mon-
itoring. In drug discovery, this means running AI pre-
dictions against known compounds with verified ef-
fects to confirm that the model can accurately identify
promising candidates. For environmental and health
applications, AI models need to be trained on large,
diverse datasets that are regularly updated, helping to
avoid inaccuracies caused by outdated or unrepresen-
tative data.
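In code, such validation usually begins with a simple generalisation check before any prediction is trusted. The sketch below is a minimal illustration with scikit-learn: the "known compounds" are random placeholder data, and cross-validation estimates how well the model would perform on compounds it has never seen.

```python
# Sketch: estimating a model's reliability on compounds with verified outcomes
# before trusting its predictions on new ones. Data here are random placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X_known = rng.normal(size=(500, 20))        # hypothetical molecular descriptors
y_known = rng.integers(0, 2, size=500)      # verified active / inactive labels

model = GradientBoostingClassifier(random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X_known, y_known, cv=cv, scoring="roc_auc")

print(f"Cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
# A score near 0.5 on random labels (as here) is the honest answer:
# the model has learned nothing, and its predictions should not be trusted.
```

Because the placeholder labels carry no real signal, the cross-validated score hovers around chance, which is exactly the kind of warning sign that should stop a model's predictions from being acted upon.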
3.2 The Transparency Challenge:
Opening the “Black Box”
AI models, particularly complex ones like deep
neural networks, often function as “black boxes.” While
they can provide impressive answers and solutions, the
inner workings behind these answers are notoriously
difficult to interpret, even for experts. This lack of
transparency creates a significant dilemma: if scien-
tists and practitioners cannot fully understand or ex-
plain how an AI model reached its conclusions, how
can they trust these findings or confidently justify them
to others? Without transparency, AI-driven insights
could remain out of reach for validation, verification,
and understanding—a risk especially high in sensitive
fields like healthcare and environmental science.
In healthcare, for example, AI has been employed
to predict patient outcomes based on electronic health
records, analyzing vast datasets to forecast everything
from potential disease progression to the likelihood of
hospital readmission. These models often analyze nu-
merous factors at once, such as age, genetics, medical
history, and lifestyle choices, yielding predictions that
can guide treatments. However, when these predictions
arrive without an explanation for why a particular out-
come was chosen, doctors may hesitate to follow the
recommendations, unsure if the AI has accounted for
essential factors or ignored key information due to bias
in the training data. Imagine an AI recommending
a specific cancer treatment based on data from pre-
dominantly younger patients. Without transparency,
a physician might question whether this recommen-
dation truly applies to an older patient with multiple
health conditions, leading to justifiable hesitation in
trusting the AI’s judgment.
To address these transparency issues, researchers
are focusing on explainable AI (XAI)—models that
aim to reveal how AI arrives at specific conclusions.
In healthcare, explainable AI can help a doctor see
which factors (like age, genetic profile, or health his-
tory) were weighted most heavily in a patient's risk
prediction, making it easier for them to determine if
the recommendation is suitable. Transparency is also
essential for ensuring that AI findings are robust and
trustworthy. When results are open to scrutiny and can
be independently verified, they carry greater weight
and are more likely to be accepted within the scien-
tific community. Developing transparent, explainable
AI models enables researchers to back up AI-driven
discoveries with clear, understandable reasoning that
aligns with traditional scientific standards. Only by
improving transparency in AI systems can we ensure
that their insights are reliable, ethical, and beneficial to
society at large.
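One simple, model-agnostic way to produce such an explanation is permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn on synthetic stand-in data (the feature names and outcome are invented for illustration, not drawn from any clinical dataset) to show how a "which factors mattered most" summary could be generated for review.

```python
# Sketch: a model-agnostic explanation of which inputs drive a risk model.
# Features and outcomes are synthetic stand-ins, not real patient data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = pd.DataFrame({
    "age":           rng.integers(30, 90, n),
    "genetic_risk":  rng.normal(0, 1, n),
    "comorbidities": rng.integers(0, 5, n),
    "smoker":        rng.integers(0, 2, n),
})
# Synthetic outcome: age and comorbidities matter, the rest is noise.
risk = 0.04 * (X["age"] - 60) + 0.5 * X["comorbidities"] + rng.normal(0, 1, n)
y = (risk > risk.median()).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Rank features by how much held-out accuracy falls when each is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=1)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:15s} importance = {score:.3f}")
```

Tools such as SHAP or LIME go further by explaining individual predictions, but even this coarse ranking lets a reviewer check whether the model is leaning on clinically sensible factors.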
3.3 The Risk of Bias in AI Models
AI models are only as good as the data they learn
from. When this data contains biases or lacks represen-
tation, the AI’s conclusions can become skewed, po-
tentially reinforcing existing inequalities or producing
inaccurate results. This is a serious ethical concern be-
cause biased AI models can influence further research,
decision-making, and public opinion.
In academic research, AI bias has already shown
how it can distort outcomes. For example, AI systems
used to evaluate research quality or assign academic
rankings may be trained on publication and citation
records, metrics that can reflect existing academic in-
equalities. These datasets often favor well-resourced
institutions and established researchers, as they have
greater access to publication channels and funding. An
AI model trained primarily on these records may un-
dervalue contributions from emerging researchers or
smaller institutions, perpetuating a cycle where only a
few voices receive visibility. This bias can affect hir-
ing and funding decisions, limiting opportunities for
innovation from diverse perspectives.
AI can also introduce bias in hiring for corpo-
rate and research positions. Many companies and
institutions now use AI-driven recruitment tools to
screen applicants, learning from past hiring decisions.
If these past decisions favored certain demographics
or preferred educational backgrounds, the AI might
continue that pattern, unintentionally discriminating
against candidates from less represented groups or non-
traditional educational paths. For instance, if a com-
pany’s historical hiring data favored applicants from
certain universities, the AI might overlook qualified
applicants from other institutions, reducing diversity
and missing out on valuable perspectives.
Bias can also emerge in law enforcement AI, es-
pecially in systems designed to predict crime hotspots
or recommend sentencing. If the training data reflects
biased policing practices or historical disparities in sen-
tencing, the AI may continue to target specific neigh-
borhoods or individuals unfairly. This can influence
judicial outcomes, affecting people’s lives and perpet-
uating systemic bias within the criminal justice system.
In all these cases, the common thread is that AI
can unintentionally reinforce the very biases that exist
in its training data. To address this, researchers and
developers need to scrutinize the data sources used in
AI training. A more equitable dataset, for example,
could include research from diverse academic fields,
grading samples from a wide range of dialects and
linguistic backgrounds, or hiring data that anonymizes
certain demographic information. Techniques like bias
detection, reweighting, and adversarial debiasing are
critical for minimizing the effects of biased data.
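Of these, reweighting is the simplest to illustrate. In the hypothetical hiring example below, applicants from an under-represented group receive proportionally larger sample weights during training so that the model does not simply echo the majority pattern; the group labels, features, and outcomes are placeholders, and scikit-learn's compute_sample_weight utility does the bookkeeping.

```python
# Sketch: inverse-frequency reweighting so an under-represented group
# is not drowned out during training. Groups and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["majority", "minority"], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 5))          # hypothetical applicant features
y = rng.integers(0, 2, size=n)       # hypothetical hire / no-hire labels

# Weight each sample inversely to the frequency of its group, so the
# minority group contributes as much total weight as the majority.
weights = compute_sample_weight(class_weight="balanced", y=group)

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)

for g in ("majority", "minority"):
    print(g, "mean weight:", round(weights[group == g].mean(), 2))
```

Reweighting alone does not guarantee fairness; the model must still be audited on each group separately, but it is a concrete first step against the kind of data imbalance described above.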
Bias remains one of the toughest ethical issues to
handle, as it can be deeply ingrained and difficult to
detect. By actively managing bias, we can help ensure
AI contributes to fairer and more inclusive outcomes
across research, education, hiring, and beyond. Ad-
dressing these concerns is essential if we want AI to
serve as a truly reliable and unbiased partner in ad-
vancing science and society.
3.4 Moving Forward: A Framework
for Responsible AI in Research
Ensuring that AI contributes responsibly and eq-
uitably to scientific research requires a comprehensive
ethical framework. Such a framework should address
the various challenges posed by AI systems to foster
transparency, accuracy, and fairness. Key steps in this
framework include:
Rigorous Testing and Validation
AI models should undergo continuous testing with
varied datasets to ensure they remain accurate and re-
liable across different research areas, reducing the risk
of overfitting or unintended biases.
Enhancing Transparency and Explainability
Increasing AI transparency allows scientists and
the public to understand AI-driven conclusions, fos-
tering trust through interpretability tools and simpler
models that reveal the logic behind AI decisions.
Proactively Managing and Mitigating Bias
By using diverse training data and techniques like
reweighting and debiasing, researchers can actively re-
duce AI model bias, helping ensure fair and balanced
research outcomes.
Establishing Ethical Guidelines and
Standards
Research organizations should define ethical guide-
lines covering data privacy, transparency, and account-
ability, providing a structured approach to address eth-
ical concerns and protect societal values.
Incorporating Human Oversight and
Accountability
Human oversight ensures that AI serves as a sup-
port tool rather than a replacement for expert judgment,
with review boards validating AI findings to guard
against errors and unintentional biases.
Prioritizing Interdisciplinary Collaboration
Collaboration among experts from various fields,
such as ethics, computer science, and the research do-
main, helps identify ethical challenges and create bal-
anced solutions for using AI in research.
Continuous Monitoring and Updating of AI
Models
Regularly updating AI models helps adapt them to
new data and evolving ethical standards, ensuring their
ongoing fairness, accuracy, and relevance in changing
research contexts.
Ensuring Data Privacy and Security
An ethical AI framework must prioritize strict data
handling and anonymization protocols to protect indi-
vidual privacy and prevent misuse, especially when
dealing with sensitive research data.
By implementing these principles, the scientific
community can establish a responsible AI framework
that maximizes AI’s potential while safeguarding re-
search integrity, trust, and inclusivity. These measures
are fundamental to harnessing AI’s capabilities in a
way that promotes ethical standards and advances sci-
entific knowledge in service of society.
References
The ethics of using artificial intelligence in sci-
entific research: new guidance needed for a new
tool
Specific challenges posed by artificial intelli-
gence in research ethics
Ethical concerns mount as AI takes bigger decision-
making role in more industries
AI assistance with scientific writing: Possibili-
ties, pitfalls, and ethical considerations
Legal and Ethical Consideration in Artificial In-
telligence in Healthcare: Who Takes Responsi-
bility?
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical Infor-
matics. Her interests include applying basic scientific
research in computational linguistics, practical appli-
cations of human language technology, and interdis-
ciplinary work in computational physics.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated on a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that
the site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants
that can feed birds and maintains water bodies to help them survive the drought.