Cover page
Image Name: The Blue Heart
The blue heart symbolizes the urgent need for a harmonious relationship between technology and nature, where
artificial intelligence can play a pivotal role in environmental conservation. As we harness AI to monitor
ecosystems and predict climate changes, we must ensure that our innovations promote health and sustainability
for all living beings. By embracing this blue heart, we commit to a future where technology enhances our
connection to the planet, fostering both ecological balance and human well-being. Let this symbol inspire us to
create intelligent solutions that protect our environment while nurturing our health.
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.2, No.9, 2024
www.airis4d.com
In this edition of airis4D, the editorial is pleased to spotlight Chinar Deshpande, a 12th-grade student at Aditya English Medium School, who is deeply passionate about research in Computer Vision, Machine Learning, and the intersection of AI and Education. Chinar has interned at EagleRobotLab in Bangalore, where he contributed to developing humanoid robots designed to teach and assist instructors in classrooms. He is also actively collaborating with the Pune Knowledge Cluster, showcasing several proof-of-concept projects.
Chinar's notable project included in this edition, "The Swachha Shala Initiative", focuses on improving hygiene in schools across India, particularly for girls whose attendance is affected by inadequate facilities. The initiative employs a React Native app that enables students to upload images of school bathrooms, which are then analyzed by a Convolutional Neural Network (CNN) to detect cleanliness issues. By involving students in maintaining hygiene standards, the project aims to create a healthier learning environment and reduce school absences. Currently, the initiative is in the testing phase, with ongoing efforts to enhance its machine-learning models.
In the article "Transformer Architecture in Computer Vision-Part II", Blesson George continues the discussion of Vision Transformers. He discusses the seminal paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Alexey Dosovitskiy et al. This discussion helps us understand how the transformer architecture revolutionized image recognition tasks and introduces the innovative techniques presented in that paper.
Aromal P's article, "X-ray Astronomy: Through Missions", explores the development and achievements of X-ray astronomy through various satellite missions. Early X-ray satellites could not directly image X-ray sources due to the challenges of focusing high-energy photons. The launch of the Einstein satellite in 1978 marked a breakthrough with its innovative focusing optics, dramatically enhancing the sensitivity and resolution of X-ray imaging. Subsequent missions in the 1980s, including Japan's Hinotori and Tenma, the European Space Agency's EXOSAT, and the Japanese-British-American collaboration Ginga, further advanced the field with discoveries in solar phenomena, high-energy X-ray sources, and cosmic phenomena like quasars and X-ray binaries. These missions set the stage for a "golden period" of X-ray astronomy from the 1990s onwards, characterized by even more sensitive and capable satellites. Future articles will delve into these groundbreaking missions.
In the article "Exploring Stellar Clusters: Insights from Color-Magnitude Diagrams, Part-2", Sindhu G examines the Hertzsprung-Russell diagram (HRD) of the globular star cluster Messier 55. It highlights how HR diagrams offer insights into stellar evolution by comparing star clusters with different chemical compositions and evolutionary stages. The HRD of Messier 55 shows it is an older cluster with a higher proportion of evolved stars and a lack of high-mass main-sequence stars, such as types O, B, A, and F. The presence of blue stragglers and red giants indicates stellar mergers and different evolutionary phases. The article contrasts these features with those of younger open clusters and suggests further exploration of the HR diagrams of various clusters in future articles.
Geetha Paul explores, in the article "Drug Repurposing or Drug Repositioning: An Emerging Approach in Drug Discovery", the concept of drug repurposing, a strategy to find new therapeutic uses for existing drugs. Unlike the traditional drug discovery process, which is time-consuming and costly, drug repurposing leverages known safety profiles and clinical data to reduce development time, costs, and risks. The article discusses various methodologies for drug repurposing, including drug-oriented, target-oriented, and disease/therapy-oriented approaches. This strategy has proven effective for treating rare, complex, and neglected diseases and has gained prominence due to advancements in bioinformatics, cheminformatics, and artificial intelligence. The article concludes that drug repurposing is becoming an essential component of modern drug discovery.
The article "The Evolution of Artificial Intelligence in Neonatology: An India-Focused Perspective" by Kalyani Bagri explores the growing role of AI in improving neonatal care. Globally, AI is transforming Neonatal Intensive Care Units (NICUs) by enhancing monitoring, diagnosis, and treatment. In India, while adoption has been slower, innovative and cost-effective AI solutions tailored to local challenges are gaining traction. AI technologies such as predictive analytics, computer vision, and Natural Language Processing (NLP) are being used to predict complications, enable timely interventions, and provide real-time data-driven insights. Despite challenges like fragmented infrastructure and data quality issues, initiatives by prominent Indian organizations and collaborations among government and private entities are driving AI integration in neonatology. These efforts aim to improve patient outcomes and expand access to specialized care, particularly in underserved regions, by enhancing operational efficiency in electronic NICUs (e-NICUs).
The article "Bead Chip Technology: Simplifying Genetic Analysis" by Jinsu Ann Mathew explains how DNA microarray technology has revolutionised genetic analysis by allowing the simultaneous study of thousands of genes. This technology uses a chip with thousands of specific DNA sequences (probes) to detect and analyze genetic variations and gene expressions in a sample. Different types of DNA microarrays (spotted arrays, self-assembled arrays, and in-situ synthesized arrays) offer unique advantages for various genetic studies, from gene expression profiling to identifying genetic mutations. Each method uses innovative approaches like hybridization, photolithography, and inkjet printing to enhance precision and efficiency in genetic analysis. As microarray technology advances, it holds the potential to deepen our understanding of genetics and support the development of personalized medicine.
The article "The Evolution of GPUs: Powering the Next Generation of High-End Computing and Artificial Intelligence" by Atharva Pathak explores the transformation of GPUs from specialized graphics hardware to essential tools in modern computing. Initially developed to handle complex graphics rendering, GPUs have evolved into versatile processors capable of massively parallel computations. This shift has made them crucial for high-performance tasks like scientific research, AI, and machine learning. Key innovations, such as NVIDIA's CUDA and Tensor Cores, have enhanced their capabilities in AI training. Despite challenges like high power consumption and programming complexity, GPUs are poised to drive future advancements in AI and computing, with ongoing development toward more powerful and efficient architectures.
News in Brief
by News Desk
airis4D, Vol.2, No.9, 2024
www.airis4d.com
Dr Kaveri Kale was awarded a PhD for her thesis titled "Automatic Radiology Report Generation for Different Radiology Modalities" by the Indian Institute of Technology Bombay. She worked under the supervision of Prof. Pushpak Bhattacharyya, Department of Computer Science and Engineering, IIT Bombay, and external supervisor Prof. Ninan Sajeeth Philip, airis4D.
News in Brief
by News Desk
airis4D, Vol.2, No.9, 2024
www.airis4d.com
Geetha Paul and Jinsu Ann Mathew with students after an outreach activity at SNV UP School, Thadiyoor.
Contents

Editorial
News in Brief
News in Brief

I Artificial Intelligence and Machine Learning
1 Transformer Architecture in Computer Vision-Part II
1.1 Vision Transformer (ViT) Concept
1.2 Transformer Encoder in the Vision Transformer (ViT)
1.3 Conclusion

II Astronomy and Astrophysics
1 When the Crab Slows Down: A Bharathiya Antariksh Hackathon Story
1.1 Bharathiya Antariksh Hackathon 2024
1.2 The Crab is Slowing Down
2 The Pleiades Cluster
2.1 Pleiades
2.2 Mythological Significance of the Pleiades
2.3 Formation of the Pleiades Cluster
2.4 Structure of the Pleiades Cluster
2.5 Locating the Pleiades
2.6 Astronomical Significance
2.7 Astrophysical Importance
2.8 Cultural and Historical Significance
2.9 Visibility and Aesthetic Appeal
2.10 Scientific Discoveries

III Biosciences
1 Drug Repurposing or Drug Repositioning: An Emerging Approach in Drug Discovery
1.1 Introduction
1.2 Traditional Drug Discovery vs. Drug Repurposing
1.3 Advantages of Drug Repurposing
1.4 Methodologies of Drug Repurposing
2 The Evolution of Artificial Intelligence in Neonatology: An India-Focused Perspective
2.1 Introduction
2.2 AI and Neonatology: A Growing Synergy
2.3 AI in Electronic Neonatal Intensive Care Units (e-NICU)
3 Biomarkers: Powerful Indicators in Healthcare
3.1 Diagnostic Biomarkers
3.2 Prognostic Biomarkers
3.3 Predictive Biomarkers
3.4 Risk Biomarkers
3.5 Safety Biomarkers
3.6 Monitoring Biomarkers
3.7 Summary of Biomarker Types
3.8 Conclusion

IV General
1 The Swachha Shala Initiative
1.1 The Problem
1.2 My Approach
1.3 Dataset
1.4 Current App
1.5 Epilogue

V Computer Programming
1 The Evolution of GPUs: Powering the Next Generation of High-End Computing and Artificial Intelligence
1.1 Introduction
1.2 The Birth of the GPU: From Graphics to General-Purpose Processing
1.3 GPU Architecture: A Deep Dive
1.4 From Graphics to General-Purpose Computing
1.5 GPUs in AI and Machine Learning: A Match Made in Silicon
1.6 Challenges and Limitations of GPUs in High-End Computing
1.7 The Future of GPUs: What's Next?
1.8 Conclusion
1.9 Suggested Further Reading
1.10 Timeline of GPU Evolution
1.11 GPU Architecture Overview
1.12 GPUs in Action
1.13 Future of GPUs
Part I
Artificial Intelligence and Machine Learning
Transformer Architecture in Computer
Vision-Part II
by Blesson George
airis4D, Vol.2, No.9, 2024
www.airis4d.com
In the previous issue, we explored the core prin-
ciples of Vision Transformers (ViTs) and their adapta-
tion of the traditional transformer architecture to han-
dle image data. In this issue, we will take a closer
look at the seminal paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale"
by Alexey Dosovitskiy et al. This discussion will in-
volve a deep dive into the paper to understand how
it revolutionizes image recognition by applying trans-
formers, originally designed for text, to visual data.
We’ll explore the innovative techniques introduced in
this paper, including the division of images into patches
and the use of self-attention to capture complex pat-
terns, and discuss the broader implications of this ap-
proach for the field of computer vision.
In recent years, traditional convolutional neural
networks (CNNs) have dominated the field of image
recognition; however, they come with inherent limi-
tations that can hinder their scalability and efficiency.
Despite their success, as demonstrated by foundational
works such as LeCun et al. (1989) and Krizhevsky
et al. (2012), CNNs often rely on inductive biases
that may not be optimal for all tasks. As the demand
for more sophisticated models grows, it becomes in-
creasingly important to explore alternative architec-
tures that can better handle the complexities of vi-
sual data. The paper introduces the Vision Trans-
former (ViT), which leverages the Transformer archi-
tecture—originally designed for natural language pro-
cessing (NLP) by Vaswani et al. (2017)—to address
these challenges in computer vision. By applying the
principles of self-attention and treating images as se-
quences of patches, ViT interprets an image as a se-
quence of flattened 2D patches, allowing it to process
visual information in a manner similar to how Trans-
formers handle text. This novel approach not only en-
hances performance on image classification tasks but
also demonstrates that a pure Transformer can achieve
state-of-the-art results, thereby opening new avenues
for research in the application of Transformers beyond
text-based data.
1.1 Vision Transformer (ViT)
Concept
The Vision Transformer (ViT) concept represents
a significant shift in how images are processed for tasks
such as image classification. Unlike traditional convo-
lutional neural networks (CNNs), which rely on con-
volutional layers and local receptive fields to extract
features from images, ViT employs the Transformer
architecture, originally designed for natural language
processing (NLP). Here are the key components of the
ViT concept:
Image as a Sequence of Patches: ViT begins by dividing an input image into fixed-size patches. For example, an image might be split into 16x16 pixel patches. Each patch is then flattened into a one-dimensional vector, effectively transforming the 2D image into a sequence of 1D patch embeddings. This approach allows the model to treat the image similarly to how text is processed in NLP, where words are treated as tokens in a sequence (a code sketch of this step follows the list).
Patch Embeddings: Each flattened patch is pro-
jected into a higher-dimensional space using a
trainable linear projection. This process gener-
ates what are known as patch embeddings, which
serve as the input tokens for the Transformer
model. The dimensionality of these embeddings
is consistent throughout the model, allowing for
efficient processing.
Self-Attention Mechanism: The core of the
Transformer architecture is the self-attention mech-
anism, which enables the model to weigh the
importance of different patches relative to one
another. This allows ViT to capture long-range
dependencies and relationships between differ-
ent parts of the image, which is crucial for un-
derstanding complex visual patterns.
Transformer Encoder: ViT utilizes a standard
Transformer encoder, which consists of multi-
ple layers of self-attention and feed-forward net-
works. Each layer processes the sequence of
patch embeddings, allowing the model to learn
hierarchical representations of the image. The
architecture is designed to be scalable, making
it possible to train large models on extensive
datasets.
Pre-training and Fine-tuning: Similar to NLP
applications, ViT benefits from a two-step train-
ing process: pre-training on a large dataset (such
as ImageNet-21k or JFT-300M) followed by fine-
tuning on a smaller, task-specific dataset. This
approach allows the model to learn general fea-
tures during pre-training, which can then be adapted
to specific tasks during fine-tuning.
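To make the patch-and-embed steps above concrete, here is a minimal NumPy sketch (an illustration with assumed sizes and randomly initialized stand-in weights, not the paper's actual implementation; in a real ViT the projection, class token, and position embeddings are all learned during training):

import numpy as np

# Illustrative sizes: a 224x224 RGB image cut into 16x16 patches.
H = W = 224
P = 16                    # patch size
C = 3                     # colour channels
D = 768                   # embedding dimension
N = (H // P) * (W // P)   # number of patches: 14 * 14 = 196

image = np.random.rand(H, W, C)

# Split the image into non-overlapping P x P patches and flatten each
# one into a vector of length P*P*C (16*16*3 = 768 here).
patches = image.reshape(H // P, P, W // P, P, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(N, P * P * C)

# A trainable linear projection maps each flattened patch to a
# D-dimensional patch embedding (random stand-in weights here).
W_proj = np.random.randn(P * P * C, D) * 0.02
patch_embeddings = patches @ W_proj            # shape (N, D)

# Prepend a learnable [class] token and add position embeddings.
cls_token = np.random.randn(1, D) * 0.02
pos_embed = np.random.randn(N + 1, D) * 0.02
tokens = np.vstack([cls_token, patch_embeddings]) + pos_embed
print(tokens.shape)                            # (197, 768)

The resulting token sequence is exactly what the Transformer encoder described in the next section consumes.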
1.2 Transformer Encoder in the
Vision Transformer (ViT)
The Transformer encoder is a fundamental part of
the Vision Transformer architecture, designed to pro-
cess the sequence of patch embeddings derived from an
Figure 1: An overview of the Vision Transformer architecture. The process begins with the input image, which is divided into fixed-size patches. Each patch is then flattened and linearly projected to create patch embeddings. To incorporate spatial information, position embeddings are added to these patch representations. The resulting sequence of embedded patches, along with an additional learnable classification token, is fed into a standard Transformer encoder. This encoder consists of multiple layers that use multi-head self-attention and feed-forward networks to process the sequence, ultimately enabling the model to learn rich representations of the image for classification tasks.
input image. Each encoder layer plays a crucial role in
capturing the relationships and dependencies between
different patches, allowing the model to learn rich and
meaningful representations of the image.
Within each layer, the first component is the multi-
head self-attention mechanism. This mechanism en-
ables the model to focus on various parts of the in-
put sequence simultaneously. By computing attention
scores for each patch in relation to all other patches,
the model can determine the importance of each patch
based on its context. The multi-head aspect allows
multiple attention mechanisms to operate in parallel,
capturing different types of relationships and features.
Following the self-attention mechanism, the out-
put is processed by a feed-forward neural network. This
network consists of two linear transformations with a
non-linear activation function in between, further refin-
ing the representations learned from the self-attention
layer. To enhance training stability and convergence,
each sub-layer is equipped with a residual connection
and layer normalization. The residual connection adds
the input of the sub-layer to its output, which helps
mitigate issues like vanishing gradients, while layer
normalization standardizes the outputs for stable train-
ing dynamics.
The ViT model stacks several of these Trans-
former encoder layers, allowing each layer to build
upon the output of the previous one. This stacking en-
ables the model to learn increasingly abstract and com-
plex features of the input image. To maintain spatial
awareness, positional encodings are added to the patch
embeddings, providing information about the relative
positions of the patches in the original image.
The final output of the Transformer encoder is a
set of embeddings that encapsulate the learned rela-
tionships and features of the entire image. This output
can then be utilized for various tasks, such as classifi-
cation, by applying a classification head to the output
corresponding to a specific patch, often referred to as
the ”class token.”
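As a minimal illustration of the mechanism described above, the following sketch implements single-head scaled dot-product self-attention over the patch tokens (randomly initialized weights and illustrative dimensions; a real ViT layer uses multiple heads plus the feed-forward network, residual connections, and layer normalization discussed earlier):

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_head):
    """Single-head scaled dot-product attention over patch tokens."""
    d_model = tokens.shape[1]
    rng = np.random.default_rng(0)
    W_q, W_k, W_v = (rng.normal(0, 0.02, (d_model, d_head)) for _ in range(3))
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    # Every token attends to every other token: this is how ViT
    # captures long-range dependencies between image patches.
    scores = Q @ K.T / np.sqrt(d_head)
    return softmax(scores) @ V

tokens = np.random.rand(197, 768)   # [class] token + 196 patch embeddings
out = self_attention(tokens, d_head=64)
print(out.shape)                    # (197, 64)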
1.3 Conclusion
Vision Transformer (ViT) represents a significant
advancement in the field of computer vision by reimag-
ining how images are processed for tasks like classi-
fication. By leveraging the Transformer architecture,
originally designed for natural language processing,
ViT demonstrates that self-attention mechanisms can
effectively capture the intricate relationships within vi-
sual data. The novel approach of treating images as
sequences of patches, combined with the power of the
Transformer encoder, enables ViT to achieve state-of-
the-art performance without relying on the traditional
convolutional layers of CNNs. This paradigm shift not
only challenges the dominance of CNNs in computer
vision but also opens up new possibilities for apply-
ing transformers to a wide range of visual tasks. As
research in this area continues to evolve, the Vision
Transformer is poised to inspire further innovations
and improvements in both the theory and application
of deep learning models for image recognition and be-
yond.
References
1. Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).
2. Vision Transformers
About the Author
Dr. Blesson George presently serves as an
Assistant Professor of Physics at CMS College Kot-
tayam, Kerala. His research pursuits encompass the
development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Part II
Astronomy and Astrophysics
When the Crab Slows Down: A Bharathiya Antariksh Hackathon Story
by Aromal P
airis4D, Vol.2, No.9, 2024
www.airis4d.com
1.1 Bharathiya Antariksh Hackathon
2024
Honorable Prime Minister of India Shri Narendra Modi declared 23rd August as National Space Day, to celebrate the success in achieving a soft landing of the Vikram Lander at the Shiv Shakti point near the southern pole of the Moon on 23rd August 2023. The nation celebrated its first National Space Day (NSpD) on 23rd August 2024. A series of events were held across the nation as part of the celebration, and two national-level competitions were conducted by ISRO:
Bharathiya Antariksh Hackathon 2024 (BAH-2024).
ISRO Robotic Challenge.
Here, I would like to focus on BAH-2024. This nationwide competition was launched by ISRO in collaboration with Hack2skill on 4th July 2024. Twelve problem statements were given:
Optimizing Urban Futures: Leveraging Digital
Twins for Comprehensive Infrastructure Man-
agement.
Generation of Rooftop Solar Energy Potential
Map Using Machine Learning/Deep Learning
Based Building Footprint Extraction.
Automatic detection of craters & boulders from Orbiter High Resolution Camera (OHRC) images using AI/ML techniques.
Voice enabled user interface for geospatial map
based.
Nowcasting of Precipitation Systems using C-
Band Doppler Radar Observations.
Feature Extraction from Remote Sensing High Resolution Data using AI/ML (e.g., high-tension towers, windmills, electric substations, brick kilns, farm bunds).
How much the Crab pulsar has slowed down
since launch of AstroSat? Finding rate of spin
down rate of Crab pulsar with AstroSat LAXPC
and CZTI observations.
Spectral classification of Chandrayaan-2 IIRS
using AI/ML for understanding geological di-
versity of the Moon.
Identification of safe navigation routes on the
Moon using Chandrayaan Images.
Image based Search of Lunar craters from global
mosaic.
Lunar surface image simulation and Visualiza-
tion.
Context-Aware Geospatial Data Retrieval using
LLM/NLP.
Teams of 3-4 members could be formed to propose an idea and solve it. After the submission of ideas, around 100 teams were selected for an initial interview, which was held online, and from those, 30 teams were selected for the Grand Finale, held on 13-14 August 2024 at the NRSC office, Hyderabad. At the end of this 30-hour hackathon, 12 teams were selected to present their work, of which 3 teams were declared winners and 2 teams were given consolation prizes. The top three teams were invited to the maiden National Space Day celebration held at Bharath Mandapam in New Delhi, where they presented their results and received the award from the Honorable President of India Smt. Droupadi Murmu.

Figure 1: Team Khagola at Bharath Mandapam in Delhi. From left: Himanshu Grover, Shyam Prakash, Aromal P, and Biki Ram.
In this article, I would like to highlight the science that led team Khagola to win third place in the Bharathiya Antariksh Hackathon 2024. Team Khagola (Figure 1), formed by four PhD students - Himanshu Grover (IIT Roorkee), Shyam Prakash (URSC Bengaluru), Biki Ram (IIT Indore), and myself, Aromal P (IIT Indore) - worked on the problem statement "How much the Crab pulsar has slowed down since launch of AstroSat? Finding rate of spin down rate of Crab pulsar with AstroSat LAXPC and CZTI observations."
1.2 The Crab is Slowing Down
Neutron stars are the stellar remnants of stars that were too massive to end up as white dwarfs but not massive enough to form black holes. They serve as natural laboratories for studying high-energy and extremely dense phenomena. Formed from stars whose radii span millions of kilometers, these remnants typically have radii of about 10 km and masses of about 1.4 times that of the Sun. When they form, they rotate at very high speeds to conserve angular momentum. Over time, the rotating neutron star, also known as a pulsar, gradually loses energy and slows down. However, the deceleration is so gradual that it takes thousands of years for the rotational period to change by one second. As a result, pulsars are regarded as some of the most stable rotators in the universe.
The Crab pulsar, an isolated neutron star at the heart of the Crab Nebula, is believed to have formed in the supernova explosion of 1054 AD, one of the earliest such events recorded in human history. The Crab is one of the brightest objects at X-ray wavelengths and is often used as a standard reference for brightness in high-energy astrophysics. It has been studied for decades at both radio and X-ray wavelengths, and most high-energy experiments rely on the Crab pulsar for time calibrations.
1.2.1 Crab Pulsar Data of AstroSat
In order to find the spin-down rate, and the rate of change of the spin-down rate, of a pulsar, an instrument with high timing capability is required. AstroSat, India's first multi-wavelength satellite, carries two payloads with high timing capabilities: the Large Area X-ray Proportional Counter (LAXPC) and the Cadmium-Zinc-Telluride Imager (CZTI). LAXPC has an effective area of nearly 10,000 cm² and works in the 3-80 keV energy range; CZTI, on the other hand, has an effective area of about 1,000 cm² and works in the 20-200 keV range. Since its launch in 2015, AstroSat has been observing the Crab pulsar and collecting data regularly for calibration purposes.
The AstroSat LAXPC and CZTI data can be reduced with the LaxpcSoft Format-B software and the New CZTI Pipeline version 3, respectively. For precise timing analysis, all photon arrival times must be referred to the barycenter of the solar system; this barycentric correction can be applied with the as1bary codes. Reducing around 900 orbits of data manually would take a long time, so during the hackathon our team automated all these steps with a script, providing a faster and more efficient way to reduce the data.
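A sketch of the automation idea is shown below (the reduce_orbit wrapper is hypothetical: the actual LaxpcSoft, CZTI pipeline, and as1bary commands are tool-specific and omitted here):

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def reduce_orbit(orbit_dir: Path) -> str:
    # Hypothetical wrapper: run the level-1 to level-2 reduction and the
    # as1bary barycentric correction for one orbit. The actual commands
    # (LaxpcSoft Format-B, New CZTI Pipeline v3, as1bary) are omitted;
    # in practice they would be invoked here via subprocess.run(...).
    return orbit_dir.name

if __name__ == "__main__":
    # ~900 orbit directories reduced in parallel instead of one by one.
    orbits = sorted(Path("astrosat_data").glob("orbit_*"))
    with ProcessPoolExecutor() as pool:
        for done in pool.map(reduce_orbit, orbits):
            print("reduced", done)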
1.2.2 Leahy Power Spectrum and Z²ₙ Statistics Find the Period!
We used the Leahy power spectrum, which uses the Fast Fourier Transform (FFT) algorithm to find the different frequencies present in a signal. The FFT computes the discrete Fourier transform; to search for oscillations with it, the data must be binned, and harmonics are not taken into account. The Z²ₙ statistic is another approach we used to find oscillations in a signal. It is similar to the Fourier transform, but it requires no binning of the data and also accounts for the effect of harmonics.

AstroSat completes one revolution around the Earth in about 90 minutes, which provides a window of roughly 45 minutes of continuous observation of a given object, or about 3,000 seconds of exposure per orbit. The Leahy power spectrum alone gives a relatively imprecise frequency value; the Z²ₙ method is more precise but computationally expensive over a wide range of frequencies. So we used both: an FFT search followed by a Z²ₙ refinement, obtaining the rotational frequency to seven significant figures.
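To make the two approaches concrete, here is a self-contained sketch on simulated photon arrival times (illustrative parameters and a simulated Crab-like signal; this is not the team's hackathon code):

import numpy as np

def leahy_power(times, dt, t_total):
    """Leahy-normalized power spectrum from binned event data."""
    counts, _ = np.histogram(times, bins=np.arange(0, t_total + dt, dt))
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
    return freqs[1:], power[1:]        # drop the zero-frequency term

def z2n(times, nu, n_harmonics=2):
    """Z^2_n statistic: unbinned photon times, harmonics included."""
    phases = 2.0 * np.pi * nu * times
    z = sum(np.cos(k * phases).sum() ** 2 + np.sin(k * phases).sum() ** 2
            for k in range(1, n_harmonics + 1))
    return 2.0 * z / len(times)

# Simulate a ~29.6 Hz (Crab-like) pulsed signal over 3,000 s of exposure.
rng = np.random.default_rng(1)
nu_true = 29.6
t = np.sort(rng.uniform(0, 3000, 200_000))
t = t[rng.random(t.size) < 0.5 * (1 + np.cos(2 * np.pi * nu_true * t))]

# Coarse FFT search, then a fine Z^2_n scan around the FFT peak.
freqs, power = leahy_power(t, dt=1 / 256, t_total=3000)
nu_coarse = freqs[np.argmax(power)]
fine = np.linspace(nu_coarse - 1e-3, nu_coarse + 1e-3, 2001)
nu_best = fine[np.argmax([z2n(t, f) for f in fine])]
print(nu_coarse, nu_best)

The FFT step is limited by the 1/T frequency resolution of the exposure, and the unbinned Z²ₙ scan then refines the estimate, mirroring the two-step strategy described above.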
1.2.3 Let's find the spin-down rate
We calculated the frequency and period for selected orbits from each year, from 2015 to 2022 (the publicly available data), using both instruments, LAXPC and CZTI, and plotted them against time. We then compared our results with the available radio data from the Jodrell Bank Centre for Astrophysics (JBO). Fitting the plot with a Taylor expansion, we calculated the spin-down rate and the rate of change of the spin-down rate, as shown in Figure 2. The values we calculated closely matched the available radio data.
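The fit itself is a Taylor expansion of the spin frequency about a reference epoch t₀, ν(t) = ν₀ + ν̇(t − t₀) + ½ν̈(t − t₀)². A minimal version of such a fit, using approximate published Crab-like values rather than our measured ones, looks like this:

import numpy as np

# Illustrative Crab-like parameters (Hz, Hz/s, Hz/s^2), not our results:
nu0, nudot, nuddot = 29.6, -3.7e-10, 1.2e-20
t = np.linspace(0, 7 * 3.15e7, 50)        # ~7 years of epochs, in seconds
nu_obs = nu0 + nudot * t + 0.5 * nuddot * t**2
nu_obs += np.random.default_rng(2).normal(0, 1e-7, t.size)   # toy noise

# A quadratic least-squares fit recovers nu0, nudot, and nuddot.
c2, c1, c0 = np.polyfit(t, nu_obs, deg=2)
print("nu0 ~", c0, "nudot ~", c1, "nuddot ~", 2 * c2)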
1.2.4 We've got a Glitch!
We also plotted the rate of change of frequency over time (Figure 3). There we came across an event called a glitch: an abrupt change in the rotation rate of a pulsar over a short period of time. After a glitch, the pulsar recovers and returns to its previous state. This was the first ever detection of a glitch event in AstroSat data. Glitches are suggested to arise from the internal properties of a neutron star, and hence they can be used to constrain the equation of state of neutron stars.

Figure 2: Period vs. time and frequency vs. time for the Crab pulsar from LAXPC and CZTI data, compared against Jodrell Bank Observatory data.

Figure 3: Change in frequency vs. time for the Crab pulsar from LAXPC and CZTI data, compared against Jodrell Bank Observatory data; vertical lines represent the glitch events observed in radio measurements.

All these findings were obtained within the 30-hour BAH finale held at the National Remote Sensing Centre, Hyderabad, on 13-14 August 2024. We are carrying out further studies and will report updates.
About the Author
Aromal P is a research scholar in the Department of Astronomy, Astrophysics and Space Engineering (DAASE) at the Indian Institute of Technology Indore. His research mainly focuses on thermonuclear X-ray bursts on neutron star surfaces and their interaction with the accretion disk and corona.
The Pleiades Cluster
by Sindhu G
airis4D, Vol.2, No.9, 2024
www.airis4d.com
2.1 Pleiades
The Pleiades cluster (Figure: 1 and Figure: 2),
also known as the Seven Sisters or Messier 45, is an
open star cluster located in the constellation Taurus. It
is one of the closest and most visible star clusters to the
naked eye, even from areas with moderate light pollu-
tion. The Pleiades is visible from nearly every location
on Earth, from the North Pole to areas well beyond
the southernmost tip of South America. The clus-
ter is approximately 440 light-years away from Earth
and consists of several hundred stars, although only a
few of the brightest ones are easily visible without a
telescope. The cluster's total mass is estimated to be
around 800 solar masses, with a core radius of roughly
8 light-years and a tidal radius reaching approximately
43 light-years. The Pleiades is distinguished by its
vivid blue color, produced by the high temperatures of
its prominent B-type stars, which are relatively young,
with an estimated age of 75 to 150 million years. The
Pleiades is best viewed in the Northern Hemisphere
during the winter months, especially from October to
April.
2.2 Mythological Significance of the
Pleiades
In Greek mythology, the Pleiades are the seven
daughters of the Titan Atlas and the ocean nymph
Pleione. Their names—Alcyone, Celaeno, Electra,
Maia, Merope, Sterope, and Taygete—are deeply em-
bedded in myth. According to legend, Zeus trans-
formed these nymphs into stars to protect them from
Figure 1: The Pleiades is an open star cluster in the
constellation Taurus. (Image Credit: Manfred Konrad
via Getty Images)
Figure 2: Pleiades, identikit of the brightest star clus-
ter. (Image Credit: Wikipedia)
2.3 Formation of the Pleiades Cluster
the hunter Orion. This transformation led to the clus-
ter being known as the ”Seven Sisters.” The Pleiades
have been significant across various cultures, serving
as markers for seasonal changes and guiding ancient
mariners in their navigation. While the cluster is most
commonly associated with these seven prominent stars,
additional stars within the Pleiades can be seen with the
aid of binoculars or telescopes.
2.3 Formation of the Pleiades Cluster
Origin: The Pleiades cluster formed about 100
million years ago from a large, dense cloud of gas and
dust in the interstellar medium. This cloud underwent
a process of gravitational collapse, leading to the for-
mation of numerous stars in a relatively short period.
Triggering Mechanism: The initial collapse of
the cloud may have been triggered by nearby super-
novae or the influence of other massive stars, which
caused shock waves that compressed the gas and dust,
initiating star formation.
Stellar Birth: The stars in the Pleiades formed
from the remaining gas and dust in the cluster’s core.
The cluster’s young age means that most of its stars are
still on the main sequence of stellar evolution, where
they are actively fusing hydrogen into helium.
2.4 Structure of the Pleiades Cluster
2.4.1 Core and Halo:
Core: The core of the Pleiades cluster is densely
packed with young, massive stars. This central region
is where most of the cluster’s bright, blue stars are
found. The core radius is approximately 8 light-years.
Halo: Surrounding the core is a more diffuse halo of
stars. The tidal radius, which defines the region within
which the cluster’s gravitational influence is dominant,
extends to about 43 light-years.
2.4.2 Star Distribution:
Main Sequence: The majority of the stars in
the Pleiades are on the main sequence. These stars are
mostly of spectral types B and A, characterized by their
blue and white colors due to their high temperatures.
Pre-Main-Sequence Stars: Some stars in the Pleiades
are still in their pre-main-sequence phase, where they
are contracting and heating up before beginning hydro-
gen fusion.
2.4.3 Nebula:
Reflection Nebula: The Pleiades is surrounded
by a faint reflection nebula, which is a cloud of in-
terstellar dust that reflects the light from the cluster's bright stars. This nebula is visible in long-exposure photographs and contributes to the cluster's characteristic blue appearance.
2.4.4 Stellar Dynamics:
Gravitational Interactions: The stars in the Pleiades
interact gravitationally, but the cluster is not densely
packed like some older clusters. This relatively loose
configuration is typical for a young open cluster.
Evolution: As the Pleiades ages, its stars will gradu-
ally drift apart due to tidal interactions with the Milky
Way and the internal dynamics of the cluster. Over
time, the cluster will disperse and become less densely
populated.
2.4.5 Distance and Size:
Distance: The Pleiades is located approximately
440 light-years from Earth, making it one of the closest
star clusters to our solar system.
Size: The cluster spans about 20 light-years across,
with its stars spread out in a relatively compact and
well-defined region.
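As a quick back-of-the-envelope check of these figures: a cluster about 20 light-years across seen from about 440 light-years away should subtend roughly 20/440 radians, i.e. about 2.6 degrees on the sky, consistent with the cluster's well-known extent of a few degrees:

import math

size_ly, dist_ly = 20, 440
theta_deg = math.degrees(2 * math.atan((size_ly / 2) / dist_ly))
print(round(theta_deg, 1))   # ~2.6 degrees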
2.5 Locating the Pleiades
To locate the Pleiades (Figure 3), begin by finding the well-known constellation Orion, the hunter. Use the three stars in Orion's belt to draw a line upward, extending beyond his bow. The first bright star you'll encounter is Aldebaran, which represents the eye of the bull in the constellation Taurus. According to EarthSky, Aldebaran means "follower" in Arabic, as it follows the Pleiades across the sky. The Pleiades cluster will be just beyond Aldebaran, appearing as a small dipper-shaped group of stars.

Figure 3: The Pleiades are easy to locate. (Image Credit: Daisy Dobrijevic)
2.6 Astronomical Significance
Stellar Evolution Studies: The Pleiades is a rel-
atively young open star cluster, estimated to be around
100 million years old. This makes it an ideal laboratory
for studying the early stages of stellar evolution. By
examining its stars, astronomers can gain insights into
how stars form, evolve, and interact within a cluster.
Distance Calibration: The Pleiades has been cru-
cial for calibrating the distance scale of the universe.
Accurate distance measurements to the Pleiades help
refine techniques used to measure distances to other
star clusters and galaxies, which is fundamental for
understanding the size and structure of the universe.
2.7 Astrophysical Importance
Understanding Star Formation: As a young
cluster, the Pleiades contains many hot, blue stars.
Studying these stars helps astronomers understand the
processes involved in star formation and the life cycles
of stars. The presence of a reflection nebula around the
cluster also provides information about the interaction
between starlight and interstellar dust.
Stellar Dynamics: The cluster's relatively close prox-
imity allows for detailed studies of stellar motions and
dynamics within a cluster. Observing how stars move
within the Pleiades helps scientists understand the grav-
itational interactions and evolution of star clusters.
2.8 Cultural and Historical
Significance
Mythology and Folklore: The Pleiades has been
a significant feature in the mythology and folklore of
various cultures worldwide, from the Greek myth of
the Seven Sisters to Native American stories and be-
yond. These myths have enriched cultural narratives
and contributed to the understanding of how ancient
peoples perceived the sky.
Seasonal Marker: Historically, the Pleiades has been
used as a marker for agricultural and navigational pur-
poses. Its rising and setting were often associated with
the timing of planting and harvesting crops, as well as
guiding ancient sailors.
2.9 Visibility and Aesthetic Appeal
Easily Visible: The Pleiades is one of the most
easily recognizable star clusters, visible to the naked
eye even in moderately light-polluted skies. Its bright
stars and compact arrangement make it a popular object
for stargazers and amateur astronomers.
Aesthetic Beauty: The cluster's striking blue stars
and the surrounding reflection nebula create a visually
stunning spectacle in the night sky, making it a favorite
target for astrophotographers.
2.10 Scientific Discoveries
Exoplanet Studies: The Pleiades cluster has also
been the subject of studies looking for exoplanets. Un-
derstanding the types of planets that might form around
young stars can provide insights into planetary forma-
tion and the potential for life elsewhere in the universe.
Overall, the Pleiades cluster's proximity, visibil-
ity, and rich stellar population make it a key object of
study in both professional and amateur astronomy, as
well as a significant part of human cultural heritage.
By examining the Hertzsprung-Russell diagram
and color-magnitude diagrams of the Pleiades, astronomers
can study how stars of different masses evolve over time
and gain insights into the processes that govern star for-
mation and the early stages of stellar life.
In the upcoming article, we will delve into the
Hertzsprung-Russell diagram of the Pleiades cluster.
References:
Star Clusters
Pleiades
The Pleiades or 7 Sisters known around the
world
HR Diagram, Star Clusters,and Stellar Evolution
The Pleiades: Facts about the ”Seven Sisters”
star cluster
Pleiades, identikit of the brightest star cluster
Pleiades
About the Author
Sindhu G is a research scholar in Physics
doing research in Astronomy & Astrophysics. Her
research mainly focuses on classification of variable
stars using different machine learning algorithms. She also works on period prediction for different types of variable stars, especially eclipsing binaries, and on the study of optical counterparts of X-ray binaries.
Part III
Biosciences
Drug Repurposing or Drug Repositioning:
An Emerging Approach in Drug Discovery
by Geetha Paul
airis4D, Vol.2, No.9, 2024
www.airis4d.com
(Image courtesy:
https://www.slideshare.net/slideshow/drug-repurposing-147759373/147759373)
1.1 Introduction
Drug repurposing is a strategy aimed at discov-
ering new therapeutic indications for existing drugs,
including those already on the market, used in various
clinical settings, or highly characterised compounds—
even those that were previously considered failures.
Recently, drug repurposing has gained prominence as
an alternative approach for rapidly identifying and de-
veloping new pharmaceuticals, particularly for rare and
complex diseases that currently lack effective treat-
ments. This review will explore the current status
of drug repurposing for various diseases, including
skin disorders, infectious diseases, inflammatory con-
ditions, cancers, and neurodegenerative diseases. Ad-
ditionally, the review will provide insights into these repurposed drugs' structural features and mechanisms of action.
Traditional drug discovery is a well-established
process that identifies and develops New Molecular
Entities (NMEs) to treat various diseases. This ap-
proach typically involves several sequential stages: dis-
covery and preclinical research, safety review, clinical
research, regulatory review (such as FDA review), and
post-market safety monitoring. The process begins
with identifying a potential therapeutic target, followed
by the design and synthesis of compounds that can
interact with this target. These compounds undergo
extensive preclinical testing, including in vitro and in
vivo studies, to assess their efficacy, safety, and phar-
macokinetic properties. Once a promising compound
is identified, it proceeds to clinical trials, divided into
several phases (Phase I, II, and III) to evaluate its safety
and efficacy in humans. If the compound successfully
passes these clinical trials, it is submitted for regulatory
review. The drug enters the market upon approval, but
continuous monitoring is required to ensure its long-
term safety and efficacy.
The traditional drug discovery process is notori-
ously lengthy, time-consuming, and expensive. It can
take anywhere from 10 to 16 years for a new drug to
reach the market, with an estimated cost of around $12
billion. Moreover, the risk of failure is high, with many
compounds failing at various stages of development
due to issues such as lack of efficacy, unacceptable
toxicity, or poor pharmacokinetic properties.
Drug repurposing, also known as drug reposition-
ing, is an alternative approach to drug discovery that
focuses on finding new therapeutic uses for existing
drugs. This strategy leverages the vast amount of data
and knowledge accumulated during the development
and clinical use of approved drugs to identify new in-
dications for these compounds. Drug repurposing typ-
ically involves four main stages: compound identifi-
cation, compound acquisition, development, and post-
market safety monitoring. Unlike traditional drug dis-
covery, drug repurposing does not require the initial
stages of target identification and compound design,
as it starts with existing drugs that have already been
proven safe for human use. This significantly reduces
the time and cost associated with drug development.
Drug repurposing can be approached through vari-
ous methodologies, including drug-oriented, target-
oriented, and disease/therapy-oriented strategies. Drug-
oriented methods focus on the structural characteristics
and biological activities of drugs, while target-oriented
methods involve screening drugs against specific bio-
logical targets. Disease/therapy-oriented methods utilise
information about the disease process, such as pro-
teomics, genomics, and metabolomics data, to identify
potential new uses for existing drugs.
(Image courtesy: https://pubs.acs.org/journal/acsodf?ref=pdf)
Figure 1: Illustration of the steps involved in the drug repurposing process. Step 1: Screening - identifying potential repurposed drug candidates from a large pool of drugs using appropriate computational or experimental methodologies. Step 2: Shortlisting - selection of potential lead compounds. Step 3: Validation - validating the discovered drug through preclinical and clinical trial investigations.
1.2 Traditional Drug Discovery vs.
Drug Repurposing
The traditional drug discovery process is lengthy,
time-consuming, and expensive, involving five stages:
discovery and preclinical, safety review, clinical re-
search, FDA review, and FDA post-market safety mon-
itoring. This process can take 10-16 years and cost
around $12 billion, with a high risk of failure. In
contrast, drug repurposing involves only four stages:
compound identification, compound acquisition, de-
velopment, and FDA post-market safety monitoring.
With advancements in bioinformatics and cheminfor-
matics tools and the availability of extensive biological
and structural databases, drug repurposing has signifi-
cantly reduced the time and cost of drug development,
with a lower risk of failure.
Drug repurposing has demonstrated success through
serendipitous observations. For example, sildenafil
(Viagra), initially developed for coronary artery dis-
ease, was repurposed for the treatment of erectile dys-
function, reducing development costs and time. Sim-
ilarly, metformin (Glucophage), an oral anti-diabetic
medication, is being developed as a cancer therapeutic
and is currently in phase II/phase III clinical trials.
(Image courtesy: https://www.researchgate.net/figure/Methodologies-and-steps-involved-in-drug-repositioning_fig2_342991385)
Figure 2: Traditional drug discovery vs. drug repurposing.
1.3 Advantages of Drug Repurposing
Drug repurposing offers several advantages over
traditional drug discovery. It significantly reduces the
time spent in research and development (R&D), with an
estimated 3-12 years compared to 10-16 years for tradi-
tional methods. The cost of developing a new drug us-
ing drug repurposing is around $1.6 billion, compared
to $12 billion for conventional methods. Researchers
need only 1-2 years to identify new drug targets and
about eight years to develop a repositioned drug.
Repurposed drugs do not require the initial 6-9
years typically needed for new drug development but
instead enter directly into preclinical testing and clini-
cal trials, reducing overall risk, time, and cost. Reports
suggest that repurposed drugs require approximately
3-12 years for FDA and European Medicines Agency
(EMA) approval at a 50-60% reduced price. The avail-
ability of pre-clinical and clinical efficacy and safety
information at the start of a repurposing project re-
duces the risks associated with early-stage failures. It
significantly lowers costs, increasing clinical safety and
success rates.
(Image courtesy: https://sitn.hms.harvard.edu/flash/2016/re-engineering-cures-big-data-age-precision-medicine-cocomputational-drug-repositioning/)
Figure 3: Re-engineering cures in the big-data age: precision medicine and computational drug repositioning.
Cost-Effectiveness: Since repurposed drugs have
already undergone safety testing, the development pro-
cess is often faster and less expensive than new drug
development.
Established Safety Profiles: Approved drugs come
with known pharmacokinetics and safety profiles, which
can significantly reduce the risks associated with new
treatments.
Rapid Application: In urgent situations, such as
the COVID-19 pandemic, repurposing existing drugs
can provide quicker therapeutic options for patients.
Increased Success Rate: Repurposed drugs have
already been proven safe for human use, which reduces
the risk of failure during clinical development. This is
particularly important, as many traditional drug discov-
ery projects fail due to safety issues. The availability
of pre-clinical and clinical data at the start of a re-
purposing project increases the likelihood of success.
This data can help guide the development process and
identify potential issues early on.
Addressing Unmet Medical Needs: Drug repur-
posing can help develop treatments for rare, difficult-
to-treat and neglected diseases. These conditions often
lack effective therapies, and the lower cost and risk as-
sociated with drug repurposing make it an attractive
option for addressing these unmet medical needs.
1.4 Methodologies of Drug
Repurposing
The methodologies adopted in drug repurposing
can be divided into three broad groups based on the
quantity and quality of pharmacological, toxicological,
and biological activity information available:
1. Drug-Oriented Methodology: This approach eval-
uates the structural characteristics, biological ac-
tivities, adverse effects, and toxicities of drug
molecules. It identifies molecules with biolog-
ical effects based on cell/animal assays without
necessarily knowing the biological targets. Suc-
cesses in this methodology include the discovery
of sildenafil through serendipity or clinical ob-
servation.
2. Target-Oriented Methodology: This method in-
volves in silico screening or virtual high-throughput
screening (vHTS) of drugs or compounds from
libraries/databases, such as ligand-based screen-
ing or molecular docking, followed by in vitro
and in vivo high-throughput and high-content
screening (HTS/HCS) against a selective pro-
tein molecule or biomarker. This method has
a higher success rate in drug discovery because most biological targets directly represent disease pathways/mechanisms (a simple similarity-screening sketch follows this list).
3. Disease/Therapy-Oriented Methodology: This
approach is relevant when more information about
the disease model is available. It is guided
by disease and treatment based on proteomics,
genomics, metabolomics, and phenotypic data
concerning the disease process. This method
requires the construction of specific disease net-
works, recognising genetic expression, consider-
ing key targets, and identifying disease-causing
protein molecules related to cell and metabolic
pathways of interest in the disease model.
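As one concrete flavor of the ligand-based screening mentioned above, the sketch below runs a simple structural-similarity screen with RDKit (the reference compound and the tiny "library" are placeholders; a real repurposing screen would use a curated database of approved drugs and far richer filters):

from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

# Placeholder inputs: a reference active compound and a toy "library".
reference = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")    # aspirin
library = {
    "drug_A": "CC(=O)Nc1ccc(O)cc1",                         # paracetamol
    "drug_B": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",                 # ibuprofen
}

ref_fp = AllChem.GetMorganFingerprintAsBitVect(reference, 2, nBits=2048)
for name, smi in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi),
                                               2, nBits=2048)
    # Tanimoto similarity: a simple proxy for "similar structure,
    # possibly similar activity" used to shortlist candidates.
    print(name, round(DataStructs.TanimotoSimilarity(ref_fp, fp), 2))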
Table: Examples of repurposed drugs.
Conclusion

Drug repurposing is an emerging strategy in drug discovery that offers significant advantages over traditional methods. By leveraging existing drugs with known safety profiles, it reduces the time, cost, and risk associated with drug development, and advances in bioinformatics, cheminformatics, and artificial intelligence have further accelerated it. The approach is particularly valuable for developing treatments for rare, difficult-to-treat, and neglected diseases, thereby maximising the therapeutic value of a drug and increasing the success rate.

In contrast to the traditional drug discovery process, which involves the de novo identification and development of new molecular entities (NMEs), drug repurposing combines activity-based (experimental) and in silico (computational) approaches to develop new uses for existing drugs. With advancements in computational methods and a growing understanding of disease mechanisms, the potential for successful drug repositioning continues to expand, making it a vital area of research for addressing unmet medical needs and an increasingly important part of the future of drug discovery.
References

https://www.sciencedirect.com
drug-repurposing-strategies-challenges-and-successes
Illustration-of-the-steps-involved-in-the-drug-repurposing-process-Step-1-Screening
Re-engineering-cures-big-data-age-precision-medicine-computational-drug-repositioning
Kulkarni, V. S., Alagarsamy, V., Solomon, V. R., Jose, P. A., & Murugesan, S. (2023). Drug Repurposing: An Effective Tool in Modern Drug Discovery. Russian Journal of Bioorganic Chemistry, 49(2), 157-166.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9945820/
Wang, Y., Aldahdooh, J., Hu, Y., et al. DrugRepo: a novel approach to repurposing drugs based on chemical and genomic features. Sci Rep 12, 21116 (2022).
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular Bi-
ology to Environmental Sciences, Odonatology, and
Aquatic Biology.
The Evolution of Artificial Intelligence in
Neonatology: An India-Focused Perspective
by Kalyani Bagri
airis4D, Vol.2, No.9, 2024
www.airis4d.com
2.1 Introduction
Artificial Intelligence (AI) is transforming neona-
tology globally, with leading healthcare systems in-
tegrating AI-driven solutions into Neonatal Intensive
Care Units (NICUs). This advancement sets new stan-
dards for monitoring, diagnosing, and treating new-
borns. In contrast, India's adoption of AI has been
more gradual. However, significant progress is being
made with innovative and cost-effective AI solutions
tailored to India’s unique healthcare challenges. In-
creased investment in digital health infrastructure and
AI innovation is expected to accelerate the develop-
ment and adoption of these technologies, narrowing
the gap between India's advancements and global stan-
dards. This acceleration promises to enhance neonatal
care in India, leading to more widespread use of AI
technologies and improved outcomes for newborns na-
tionwide.
2.2 AI and Neonatology: A Growing
Synergy
AI’s potential in neonatology is profound. AI-
powered tools are increasingly used to predict com-
plications, provide early warnings, and enable timely
interventions. Advanced algorithms analyze medical
images with high precision, aiding in the accurate di-
agnosis of conditions in newborns. These technolo-
gies support clinical decision-making by offering real-
time insights and recommendations crucial for the care
of the most vulnerable infants. By analyzing pat-
terns in neonatal data, AI can forecast complications
such as neonatal sepsis, patent ductus arteriosus, and
other hemodynamic disturbances, as well as short-term
outcomes like mortality, bronchopulmonary dysplasia,
and brain injury. This capability for early interven-
tion is essential for improving neonatal outcomes and
supports the development of personalized treatment
strategies.
Despite these advantages, the adoption of AI in
neonatology in India faces challenges. Fragmented
healthcare infrastructure complicates the integration
of AI solutions across institutions. Data quality is-
sues and resistance from some healthcare providers
also hinder widespread implementation. However, re-
cent advancements in Machine Learning (ML) and
Deep Learning (DL) are driving increased AI adop-
tion. Predictive analytics powered by AI is crucial for
identifying high-risk infants in NICUs, enabling timely
interventions that improve outcomes. AI-driven com-
puter vision enhances the accuracy of diagnosing con-
ditions like respiratory distress syndrome by providing
advanced analysis of medical images, supporting clin-
icians in making more informed decisions.
Machine Learning (ML) algorithms are vital for
predicting neonatal outcomes by analyzing data from
electronic health records, sensors, and monitors. They
play a key role in the early detection of critical con-
ditions such as neonatal sepsis, respiratory distress
syndrome, and bronchopulmonary dysplasia, which is
essential for optimizing treatment and improving out-
comes.
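To make this concrete, the sketch below outlines how such a risk model might be trained on tabular monitoring data. It is a minimal, hypothetical illustration, not a validated clinical tool: the file name, the vital-sign columns, and the choice of logistic regression are all assumptions made for demonstration.

# Minimal sketch of a neonatal risk model on tabular EHR/monitor data.
# Dataset, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nicu_vitals.csv")   # hypothetical data extract
features = ["heart_rate", "resp_rate", "temperature", "spo2"]
X, y = df[features], df["sepsis"]     # binary label, e.g. sepsis within 48 h

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probabilities let clinicians review the highest-risk infants first.
risk = model.predict_proba(X_test)[:, 1]
print("AUROC:", round(roc_auc_score(y_test, risk), 3))

In practice, such a model would need far richer features, careful validation, and clinical oversight before any real deployment.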
Natural Language Processing (NLP) extracts valu-
able insights from clinical notes and reports, improv-
ing care quality and facilitating research. AI-powered
telemedicine platforms expand access to neonatal ex-
pertise, particularly in rural areas, by enabling re-
mote consultations and continuous monitoring. Clini-
cal Decision Support Systems (CDSSs) powered by AI
provide healthcare professionals with real-time, data-
driven recommendations, enhancing decision-making
in neonatal care with precise and timely insights.
AI also fosters collaboration among researchers,
clinicians, and institutions in India. Prominent orga-
nizations and initiatives, including the Indian Academy
of Pediatrics (IAP), National Neonatology Forum (NNF),
All India Institute of Medical Sciences (AIIMS), and
Indian Institutes of Technology (IITs), are spearhead-
ing efforts to implement and integrate AI in neonatol-
ogy. They actively support and advance the use of AI
tools to improve neonatal health outcomes and prac-
tices.
AI solutions in neonatology are designed to com-
plement, rather than replace, the expertise of healthcare
professionals. They assist clinicians by providing data-
driven insights and automating routine tasks, allowing
healthcare professionals to focus on complex decision-
making and personalized care. For example, AI-driven
systems can manage routine monitoring tasks, enabling
professionals to focus on interpreting complex data and
making informed decisions to improve neonatal out-
comes.
2.3 AI in Electronic Neonatal
Intensive Care Units (e-NICU)
AI is revolutionizing electronic Neonatal Intensive
Care Units (e-NICUs) in India by significantly improving patient
care and operational efficiency. As the incidence of
preterm births and neonatal complications rises, AI
technologies are increasingly integrated to enhance
clinical outcomes and streamline e-NICU operations.
These systems facilitate continuous monitoring of vital
signs, early detection of complications, and personal-
ized treatment plans through the analysis of data from
electronic health records and real-time devices. AI-
driven decision support systems assist clinicians with
evidence-based recommendations, improving clinical
judgment and ensuring timely, accurate care.
The integration of AI into e-NICU in India is
driven by dynamic collaboration among government
institutions, private healthcare providers, non-profits,
and technology companies. Notable initiatives such as
the National Health Mission's e-NICU program,
AIIMS' TeleNICU platform, Apollo Hospitals' e-NICU
initiative, and the Medtronic-India e-NICU program
highlight a unified commitment to revolutionizing neona-
tal care. These efforts reflect a concerted push to lever-
age AI for enhancing patient outcomes and streamlin-
ing operations. They also extend specialized care to
underserved regions, demonstrating a robust and mul-
tifaceted approach to advancing neonatal healthcare in
India. Support from organizations such as the Min-
istry of Health and Family Welfare, Save the Children,
and UNICEF significantly amplifies these initiatives.
Their backing enhances efforts to improve monitoring,
predict adverse events, and personalize care, ensur-
ing equitable access to high-quality neonatal services
throughout India, regardless of geographic location.
References
1. Keles, E., Bagci, U. The past, current, and fu-
ture of neonatal intensive care units with artifi-
cial intelligence: a systematic review. npj Digit.
Med. 6, 220 (2023). https://doi.org/10.1038/
s41746-023-00941-5
2. Beam, K., Sharma, P., Levy, P. et al. Artificial
intelligence in the neonatal intensive care unit:
the time is now. J Perinatol 44, 131–135 (2024).
https://doi.org/10.1038/s41372-023-01719-z
3. Chioma, R.; Sbordone, A.; Patti, M.L.; Perri, A.;
Vento, G.; Nobile, S. Applications of Artificial
Intelligence in Neonatology. Appl. Sci. 2023,
13, 3211. https://doi.org/10.3390/app13053211
4. Patel, S.Y., Palma, J.P., Hoffman, J.M. et al.
Neonatal informatics: past, present and future.
J Perinatol 44, 773–776 (2024). https://doi.org/
10.1038/s41372-024-01924-4
5. Rallis, D.; Baltogianni, M.; Kapetaniou, K.; Gi-
apros, V. Current Applications of Artificial In-
telligence in the Neonatal Intensive Care Unit.
BioMedInformatics 2024, 4, 1225-1248. https:
//doi.org/10.3390/biomedinformatics402006
6. Sullivan, B.A., Beam, K., Vesoulis, Z.A., Aziz,
K.B., Husain, A.N., Knake, L.A., Moreira, A.G.,
Hooven, T.A., Weiss, E.M., Carr, N.R., El-Ferzli,
G.T., Patel, R.M., Simek, K.A., Hernandez, A.J.,
Barry, J.S., McAdams, R.M. Transforming neona-
tal care with artificial intelligence: challenges,
ethical consideration, and opportunities. J Peri-
natol. 2024 Jan; 44(1):1-11. https://doi.org/10.
1038/s41372-023-01848-5
7. Yang, X., Zhang, J., Cao, M., Pan, Y., Zhang,
Y. Application of e-health on neonatal inten-
sive care unit discharged preterm infants and
their parents: Protocol for systematic review
and meta-analysis. Digit Health. 2023 Oct
9; 9:20552076231205271. https://doi.org/10.
1177/20552076231205271
About the Author
Dr. Kalyani Bagri is a Senior Research
Associate at Fernandez Foundation in Hyderabad, In-
dia, where she plays a pivotal role as the lead data sci-
entist in the Neonatology Department. She earned her
Ph.D. in Astrophysics from Pt. Ravishankar Shukla
University, in collaboration with IUCAA and TIFR
Mumbai. In her current role, Dr. Bagri independently
integrates Artificial Intelligence (AI) into neonatal
care. She is leading critical projects, including the
development of a Bronchopulmonary Dysplasia esti-
mator to guide strategic decisions and optimize steroid
usage in neonates, and a sepsis calculator designed to
detect sepsis in neonates prior to clinical recognition.
Additionally, she is involved in a range of initiatives
aimed at advancing neonatal health outcomes through
her innovative and data-driven approaches.
Biomarkers: Powerful Indicators in
Healthcare
by Jinsu Ann Mathew
airis4D, Vol.2, No.9, 2024
www.airis4d.com
Biomarkers are essential tools in modern medicine
that help doctors and scientists understand what's hap-
pening inside our bodies. A biomarker is any measur-
able indicator of a biological condition or process, such
as a molecule in our blood, a gene, or even a physical
measurement like blood pressure. These indicators are
used to diagnose diseases, predict health risks, moni-
tor the effects of treatments, and guide decisions on the
best therapies for individual patients.
For example, cholesterol levels are biomarkers
that help assess the risk of heart disease. In cancer
treatment, specific genetic biomarkers can determine
whether a patient will respond well to a particular ther-
apy. By understanding these indicators, doctors can
tailor treatments to fit each patient's needs, leading to
better health outcomes.
With advances in science and technology, differ-
ent types of biomarkers are being identified and cat-
egorized based on their specific roles in healthcare.
Some biomarkers are used to diagnose diseases, while
others predict a person's risk of developing a condi-
tion, monitor the progress of a disease, or assess how
well a treatment is working. This article will explore
the various types of biomarkers, including diagnos-
tic, prognostic, predictive, risk, safety, and monitor-
ing biomarkers, and explain how each of them plays
a unique role in improving patient care and guiding
medical decisions.
3.1 Diagnostic Biomarkers
Diagnostic biomarkers are critical tools in modern
medicine, used to detect and confirm the presence of a
specific disease or medical condition. These biomark-
ers are measurable indicators—often proteins, genes,
or other molecules—that can be detected in bodily flu-
ids, tissues, or other samples. Their primary function is
to aid in the accurate and timely diagnosis of diseases,
which is essential for initiating appropriate treatment
and improving patient outcomes. For example, the
detection of human chorionic gonadotropin (hCG) in
urine is a well-known diagnostic biomarker used in
pregnancy tests, offering a quick and accurate method
to confirm pregnancy.
The application of diagnostic biomarkers spans a
wide array of medical fields, from oncology and car-
diology to infectious diseases and neurological disor-
ders. In oncology, for instance, the presence of cancer-
specific biomarkers like CA-125 for ovarian cancer or
carcinoembryonic antigen (CEA) for colorectal can-
cer helps clinicians detect malignancies at an earlier
stage, often before symptoms arise. Early detection
through these biomarkers can lead to timely interven-
tion, improving the chances of successful treatment and
survival. Similarly, in infectious diseases, diagnostic
biomarkers such as viral RNA or specific antibodies
allow for rapid identification of pathogens, enabling
targeted treatment and reducing the spread of infec-
tion.
3.2 Prognostic Biomarkers
Unlike diagnostic biomarkers, which are used to
detect the presence of a disease, prognostic biomarkers
help predict the future progression of the disease, offer-
ing insights into the expected clinical trajectory. These
biomarkers can indicate whether a disease is likely to
progress rapidly or remain stable, whether a patient
is at higher risk of complications, or what the overall
chances of survival might be. For example, in breast
cancer, the overexpression of the HER2 gene is a prog-
nostic biomarker that indicates a more aggressive form
of the disease, often associated with a poorer outcome.
Prognostic biomarkers are particularly useful in
stratifying patients based on their risk levels, allowing
healthcare providers to tailor treatment plans accord-
ing to the severity and expected progression of the
disease. In oncology, for instance, the presence of
specific genetic mutations, such as the BRAF muta-
tion in melanoma, can signal a more aggressive dis-
ease course, guiding decisions about the intensity and
type of treatment needed. Similarly, in cardiovascu-
lar disease, elevated levels of certain biomarkers, like
NT-proBNP, can predict a higher risk of heart fail-
ure, prompting closer monitoring and more aggressive
management to prevent adverse outcomes.
The ability of prognostic biomarkers to predict
disease outcomes makes them invaluable in both clin-
ical practice and research. They help clinicians decide
on the best course of action, whether it involves more
aggressive treatment for high-risk patients or a more
conservative approach for those with a favorable prog-
nosis. Additionally, prognostic biomarkers play a key
role in clinical trials, where they are used to identify
patient subgroups that might benefit most from a new
therapy, thereby enhancing the precision and efficiency
of clinical research.
3.3 Predictive Biomarkers
Predictive biomarkers are integral to the advance-
ment of personalized medicine, offering a way to an-
ticipate how an individual patient is likely to respond
to a specific treatment. These biomarkers are mea-
surable indicators, such as genetic variations, protein
levels, or other molecular features, that can forecast the
effectiveness or potential side effects of a therapeutic
intervention before it is administered. By identifying
patients who are more likely to benefit from a particu-
lar treatment or who may experience adverse reactions,
predictive biomarkers allow healthcare providers to tai-
lor therapies to the unique needs of each patient. This
customization not only enhances treatment efficacy but
also minimizes the risk of unnecessary side effects.
The application of predictive biomarkers is broad,
influencing treatment decisions across various medical
fields. They help clinicians choose the most appropri-
ate therapy for each patient, avoiding a one-size-fits-all
approach. For instance, certain biomarkers might indi-
cate that a patient is likely to respond well to a specific
drug, guiding the selection of that treatment. Con-
versely, the presence or absence of a biomarker might
suggest that a different therapy would be more effec-
tive, helping to avoid treatments that are unlikely to
work. This approach leads to more targeted and effi-
cient care, reducing the trial-and-error often associated
with treatment decisions.
Moreover, predictive biomarkers are not just about
selecting the right medication. They also provide in-
sights into the appropriate dosage, frequency, and du-
ration of treatment, further refining patient care. By
understanding a patient’s unique biological character-
istics, healthcare providers can better predict how a
treatment will interact with their body, leading to more
precise and individualized care. In summary, predic-
tive biomarkers are essential tools in modern medicine,
enabling healthcare providers to personalize treatment
strategies, improve patient outcomes, and ensure that
each patient receives the most effective and safest care
possible.
3.4 Risk Biomarkers
Risk biomarkers are biological indicators that help
predict an individual’s likelihood of developing a spe-
cific disease or medical condition in the future. Unlike
diagnostic biomarkers, which detect the presence of
an existing disease, risk biomarkers are used in pre-
ventive medicine to identify individuals who are at an
increased risk of developing a condition before any
symptoms or clinical signs appear. These biomarkers
provide valuable insights into a person’s health, en-
abling early interventions that can potentially delay or
prevent the onset of disease.
Risk biomarkers can be genetic, biochemical, or
physiological in nature. For instance, certain genetic
mutations, like those found in the BRCA1 and BRCA2
genes, are well-known risk biomarkers for breast and
ovarian cancer. Individuals who carry these muta-
tions have a significantly higher lifetime risk of devel-
oping these cancers compared to the general popula-
tion. Knowing this information allows for personalized
preventive strategies, such as increased surveillance,
lifestyle modifications, or even prophylactic surgeries,
to reduce the risk of cancer development.
The utility of risk biomarkers lies in their ability
to stratify populations based on their susceptibility to
disease, allowing healthcare providers to focus preven-
tive efforts on those who are most at risk. This targeted
approach not only improves health outcomes by pre-
venting or delaying disease onset but also contributes
to more efficient use of healthcare resources. In sum-
mary, risk biomarkers play a critical role in predictive
and preventive medicine, offering a proactive means of
managing health and reducing the burden of chronic
diseases.
3.5 Safety Biomarkers
Safety biomarkers are specialized indicators used
in medicine and clinical research to detect, predict, or
monitor the potential adverse effects of a drug, treat-
ment, or environmental exposure. These biomarkers
are crucial in ensuring that therapies are safe for pa-
tients by identifying early signs of toxicity or other
harmful effects before they become clinically signif-
icant. Their primary role is to safeguard patients
during treatment by providing real-time information
about how a body responds to a particular interven-
tion. In clinical practice and drug development, safety
biomarkers play a pivotal role in identifying early signs
of harm. They help detect physiological changes that
might indicate stress or damage to specific organs or
systems in the body. For example, if a biomarker shows
an abnormal increase in a specific measurement, it
could signal that a particular organ is under strain due
to the treatment. This allows healthcare providers to
intervene promptly, potentially preventing more severe
side effects or long-term damage.
During the development of new drugs or treat-
ments, safety biomarkers are used extensively to eval-
uate the tolerability of a therapy. They provide critical
data on how the body reacts at different dosage levels,
helping to identify the safest and most effective dose.
This information is vital in clinical trials, where ensur-
ing participant safety is paramount. If safety biomark-
ers indicate potential risks, researchers can adjust the
treatment protocol or halt the trial to prevent harm to
participants.
In ongoing patient care, safety biomarkers are
used to monitor the effects of long-term treatments.
They provide continuous feedback on whether a treat-
ment remains safe for the patient over time. This on-
going monitoring helps manage the delicate balance
between treatment efficacy and safety, ensuring that
the benefits of a therapy outweigh any potential risks.
By detecting adverse effects early, safety biomarkers
enable timely interventions, which can include alter-
ing the treatment regimen or implementing additional
protective measures.
3.6 Monitoring Biomarkers
Monitoring biomarkers are used to track the pro-
gression of a disease or to assess the effectiveness of
a treatment over time. They provide ongoing informa-
tion about the status of a condition or the impact of a
therapeutic intervention. These biomarkers are often
measured repeatedly during the course of treatment to
guide clinical decisions, adjust therapies, and predict
outcomes.
One of the key applications of monitoring biomark-
ers is in chronic diseases, where regular assessment is
crucial for managing long-term health. For example,
in diabetes management, Hemoglobin A1c (HbA1c) is
a widely used monitoring biomarker that reflects av-
erage blood glucose levels over the past two to three
months. By regularly measuring HbA1c, healthcare
providers can determine how well a patient’s diabetes
is being controlled and whether adjustments in medi-
cation, diet, or lifestyle are needed. Similarly, in cancer
treatment, the levels of tumor markers such as prostate-
specific antigen (PSA) in prostate cancer or CA-125 in
ovarian cancer are monitored to evaluate the effective-
ness of therapy and to detect potential recurrences at
an early stage.
Monitoring biomarkers are also essential in man-
aging infectious diseases. In HIV treatment, for in-
stance, the viral load—the amount of HIV RNA in the
blood—is a critical monitoring biomarker used to as-
sess how effectively antiretroviral therapy is suppress-
ing the virus. A declining viral load indicates that the
treatment is working, while an increase may suggest
drug resistance or poor adherence to therapy, prompt-
ing a change in treatment strategy.
The utility of monitoring biomarkers lies in their
ability to provide dynamic, real-time information about
a patient's health status. This continuous feedback al-
lows for more personalized and adaptive healthcare,
where treatments can be fine-tuned based on the pa-
tient’s response, thereby optimizing outcomes. Fur-
thermore, monitoring biomarkers can help in predict-
ing relapses or complications, enabling preemptive in-
terventions that can improve prognosis and quality of
life. In summary, monitoring biomarkers are essen-
tial in the ongoing management of diseases, offering
critical insights into treatment efficacy and disease pro-
gression, and ultimately guiding more precise and re-
sponsive patient care.
3.7 Summary of Biomarker Types
In this article, we have explored the different types
of biomarkers and their critical roles in medical diag-
nosis, treatment, and monitoring. To provide a clear
and concise overview, Figure 1 summarizes the main
functions of each biomarker type. This figure is de-
signed to encapsulate the essence of each biomarker’s
purpose, aiding in the quick understanding and differ-
entiation of their roles.
Figure 1: Different types of biomarkers
3.8 Conclusion
In conclusion, biomarkers play an essential role
in modern medicine by providing valuable informa-
tion across various stages of disease management. Di-
agnostic biomarkers enable precise identification of
diseases, while prognostic biomarkers offer insights
into disease severity and potential outcomes. Predic-
tive biomarkers help tailor treatment strategies to in-
dividual patients, ensuring that therapies are effective.
Risk biomarkers assess the likelihood of disease de-
velopment, allowing for preventive measures. Safety
biomarkers monitor potential adverse effects of treat-
ments, safeguarding patient health. Finally, monitoring
biomarkers track the progression or response to treat-
ment, guiding ongoing therapeutic decisions.
Understanding the distinct functions of these biomark-
ers enhances their application in clinical practice, lead-
ing to more effective and personalized healthcare. By
integrating these biomarkers into routine practice, clin-
icians can improve diagnostic accuracy, optimize treat-
ment regimens, and ultimately enhance patient out-
comes.
References
An Introduction to Biomarker Types & Applica-
tions
Biomarkers and Their Role in Detection of Biomolecules
Clinical trials, how biomarkers help research
What are Biomarkers? What are the types of
Biomarkers?
Cancer biomarker
Biomarker discovery using scRNA-seq: what's
the big deal?
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical Infor-
matics. Her interests include applying basic scientific
research on computational linguistics, practical appli-
cations of human language technology, and interdis-
ciplinary work in computational physics.
Part IV
General
The Swachha Shala Initiative
by Chinar Deshpande
airis4D, Vol.2, No.9, 2024
www.airis4d.com
The Swachha Shala Initiative is a project aimed
at helping students from all corners of India get
access to clean and hygienic bathrooms, especially in
schools. According to UNICEF (1), if bathrooms are
not clean, students (mainly girls) are much less likely to
use them, leading to repeated absences from schools,
and in some cases, even dropouts. Swachha means
clean, and Shala means school in Marathi. Putting
them together gives us Swachha Shala, but the project
is so much bigger than making schools clean. Swachha
also refers to purity. Through this initiative, I aim to
transform schools into a safe haven for all students to
learn and grow as individuals. At a high level, the goal
for this initiative is to make sure that all the young boys
and girls of the current generation, and future ones, are
able to learn unhindered by any obstacle.
1.1 The Problem
The presence of clean bathrooms in schools is
directly related to the attendance of girls in the school.
This is due to multiple reasons, but the primary one
relates to menstruation.
If the bathrooms are not clean, girls cannot manage menstruation well at school. This makes them reluctant to use the facilities and leads to absences during menstrual cycles. Since there is also a stigma surrounding menstruation in most cultures, girls face insults and bullying if they are unable to manage menstruation at school. Overall, this fosters a negative association with school in the minds of girls, leading to lower academic performance, non-attendance, and dropouts.
Figure 1: The workings of a CNN
1.2 My Approach
In the research space, there are multiple articles
and research papers published about Dirt Detection
Systems and Street Litter Detection. These papers were
invaluable towards figuring out a suitable approach.
Since these research papers are similar to the area of
Computer Vision explored by this project, they were
a part of the literature review conducted. There were
some interesting findings collected from these papers.
Canedo et al (2) proposed a YOLOv5 driven ap-
proach, utilizing Transfer Learning to train an ML
model on a 20,000 image dataset. Although achieving
only a 73.3% accuracy, Canedo et al put forth a comprehensive algorithm for applications of this type, which this project plans to incorporate.
The main solution ideated was to use a Convo-
lutional Neural Network (illustrated in Figure 1) to
classify images as clean or dirty, then to contact the
schools where immediate attention was required. One optimizer that can help the CNN in this case is Stochastic Gradient Descent (SGD), which encourages a smooth learning curve; a brief sketch follows below. However, the main obstacle was collecting these images.
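As a minimal sketch of this idea, assuming Keras and a folder of labelled images, the model below classifies photographs as clean or dirty and is compiled with the SGD optimizer mentioned above. The directory layout, image size, and layer sizes are placeholders for illustration, not the project's final configuration.

# Sketch of the proposed clean/dirty CNN, trained with SGD.
# The "data" folder layout (one subfolder per class) and the
# 128x128 image size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32)  # hypothetical path

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # 0 = clean, 1 = dirty
])

# SGD, as suggested above, tends to give a smooth, stable learning curve.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)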
Figure 2: Ping et al’s Object Detection Algorithm
To solve this issue, the work of Ping et al (3) was invaluable. In their paper on Road Litter Detection, they had their students collect images of the streets of San Jose. Also, since they were using IOU and Object Detection (see Figure 2) to identify various types
of objects in the road, this project adopted a specific
strategy: the students could upload images of the bath-
rooms themselves, without having to rely on any third
parties. These images could be collected and the algo-
rithm could be run on them, predicting whether each
image was of a clean or dirty bathroom. Here, al-
gorithms like Object Detection and Detection Boxes
could be used to identify common indicators of a clean/
dirty bathroom. If there were multiple occurrences of
the same bathroom being flagged by different students
over a consistent period of time, the school would be identified and contacted.
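For readers unfamiliar with IOU (Intersection over Union), the small sketch below shows the core computation: the area where a predicted detection box overlaps a reference box, divided by the area they jointly cover. The (x1, y1, x2, y2) box format is an assumption for illustration.

# Intersection over Union (IOU) between two boxes given as (x1, y1, x2, y2).
# Returns a value in [0, 1]; higher means better overlap.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlapping rectangle, if any.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# Two partially overlapping detections of the same object.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # about 0.14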
1.3 Dataset
For any Neural Network, a suitable dataset is re-
quired to train the model. In this case, since the project
was a very niche application of ML, there were no
datasets which were varied and detailed enough for the
model to be trained. To solve this issue, an in-depth
literature review was conducted to try to discover any
images/datasets that could be used for the model train-
ing.
Although a comprehensive bathroom hygiene de-
tection dataset was not found during the literature re-
view, the impurities in a conventional bathroom could
be classified under categories such as dirt, trash, litter,
etc. All these are sub-categories in datasets such as
ImageNet, which can easily be accessed. Additionally,
there are a few datasets such as ACIN specifically for
dirt, that can be used for a more in-depth training pro-
cess. Lastly, there is also the option of web scraping
for images online (with the necessary permissions), to
generate a sizable dataset for some of the more uncom-
mon categories.
1.4 Current App
The current app is a React Native App called
Swachha Shala. At the moment, it is in the open testing
stage on the Play Store. There are plans to upload it to
the App Store as well in the future.
The app is a simple one, as mentioned above. It
simply collects images from the users, along with the
classification of each image into the categories of clean
and dirty. Additionally, the app also pinpoints exactly
where in India the image is being uploaded from. It
also has a text input for the user to enter the school name; all of this data is used to store each recording in an automatically organised Google Drive directory.
This app stores the coordinates of the upload, the
image itself, and the classification data in Google Fire-
base. Once the app is out on the Play Store, the data
collection process will begin. A CNN will be built so that the predictions of the overall algorithm are as accurate as possible and the app performs smoothly.
There are also plans to incorporate a more detailed
verification system within the app. Template Match-
ing and Non-Max Suppression will be used to confirm
the identity of each student, and check that the data
provided is accurate.
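As a rough sketch of how the Non-Max Suppression step could work, assuming scored detection boxes and the iou function from the earlier sketch: the highest-scoring box is kept, and any box that overlaps it beyond a threshold is discarded as a duplicate. The 0.5 threshold is an illustrative default.

# Non-Max Suppression: keep the best-scoring boxes, drop near-duplicates.
# 'detections' is a list of (box, score) pairs; uses iou() from above.
def non_max_suppression(detections, iou_threshold=0.5):
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in detections:
        if all(iou(box, k) < iou_threshold for k, _ in kept):
            kept.append((box, score))
    return kept

boxes = [((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
print(non_max_suppression(boxes))  # keeps the first and third detections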
1.5 Epilogue
Currently, model training is about to begin. Once the model is ready, the true capabilities of the Initiative will emerge. The model and algorithm will
need a few tweaks and changes, but the goal is for stu-
dents to be involved in the hygiene and cleanliness of
their school, helping it with their contributions. Hope-
fully, the Swachha Shala Initiative will be a beacon of
light for all the young students of India, illuminating
the path towards a better education and a prosperous
future.
Figure 3: Complete Process of uploading an Image
References
1. https://towardsdatascience.com/a-comprehensive-
guide-to-convolutional-neural-networks-the-eli5-
way-3bd2b1164a53
2. https://www.mdpi.com/2227-7080/9/4/94
3. https://www.researchgate.net/publication/341182644
4. https://github.com/nuwandda/clean-littered-road-
classification
5. https://www.thehindu.com/opinion/op-ed/the-link-
between-sanitation-and-schooling/article6423571.ece
About the Author
Chinar Deshpande is a 12th Grader study-
ing in Aditya English Medium School. He is pas-
sionate about research, having worked on projects in
Computer Vision, Machine Learning, and the AI x Ed-
ucation Space. Additionally, he has also completed in-
ternships at companies such as EagleRobotLab, Ban-
galore, which aims to build humanoid robots capable
of teaching and assisting instructors in the classroom.
Lastly, he is also collaborating with Pune Knowl-
edge Cluster, having demonstrated multiple POCs and
projects to the team.
Part V
Computer Programming
The Evolution of GPUs: Powering the Next
Generation of High-End Computing and
Artificial Intelligence
by Atharva Pathak
airis4D, Vol.2, No.9, 2024
www.airis4d.com
1.1 Introduction
Graphics processing units, or GPUs, have come a
long way from their early days as specialized hardware
for rendering video game graphics. Over the last two
decades, these powerful processors have evolved into
the backbone of high-end computing, driving advances
in fields ranging from scientific research to artificial in-
telligence (AI) and machine learning (ML). This arti-
cle explores the fascinating journey of GPU evolution,
their architectural innovations, and how they have be-
come indispensable tools in modern computing.
1.2 The Birth of the GPU: From
Graphics to General-Purpose
Processing
The GPU was born out of necessity. In the early
days of personal computing, CPUs were the workhorses
of the system, handling everything from calculations
to graphics rendering. However, as video games and
3D graphics became more sophisticated in the 1990s,
CPUs struggled to keep up with the demand for real-
time processing of complex visual data. This led to the
development of the first GPUs, which were designed
to offload these tasks from the CPU.
NVIDIA's introduction of the GeForce 256 in
1999 marked a significant milestone. This was the first
chip to be marketed as a graphics processing unit, capa-
ble of handling millions of polygons per second. Its ar-
chitecture was designed to perform many calculations
simultaneously, or in parallel, which was perfect for
rendering complex 3D scenes quickly and efficiently.
Unlike CPUs, which are optimized for sequential tasks,
GPUs excel at parallel processing, making them ideal
for graphics-intensive applications.
1.3 GPU Architecture: A Deep Dive
At the core of a GPU’s power is its architecture,
which has evolved dramatically since the early days.
Early GPUs like the GeForce 256 were focused solely
on graphics, with fixed-function pipelines that limited
their flexibility. These pipelines consisted of several
stages (vertex processing, rasterization, pixel process-
ing) that were hardwired to perform specific tasks.
However, as the demand for more flexible and
powerful GPUs grew, manufacturers began to develop
programmable pipelines. This shift was evident in
NVIDIA's G80 architecture, introduced in 2006, which
featured unified shader cores. These cores could per-
form both vertex and pixel processing tasks, making
the GPU much more versatile. This architectural in-
novation laid the groundwork for the GPU’s expansion
into general-purpose computing.
Modern GPUs, such as those in NVIDIA's Tesla
series, feature thousands of CUDA cores, each capable
of executing threads independently. These cores are
grouped into streaming multiprocessors (SMs), which
can execute multiple warps (groups of threads) simulta-
neously. This architecture allows GPUs to handle mas-
sive amounts of data in parallel, making them ideal for
high-performance computing (HPC) tasks such as sci-
entific simulations, big data analytics, and, of course,
AI and ML.
1.4 From Graphics to
General-Purpose Computing
The shift from fixed-function pipelines to pro-
grammable shaders in GPUs was a game-changer. It
opened the door to using GPUs for tasks beyond graph-
ics, leading to the concept of general-purpose comput-
ing on graphics processing units (GPGPU). This ap-
proach leverages the massive parallel processing power
of GPUs for a wide range of applications.
A pivotal moment in the evolution of GPGPU was
the introduction of CUDA (Compute Unified Device
Architecture) by NVIDIA in 2007. CUDA allowed de-
velopers to write programs that could run on GPUs,
taking advantage of their parallel processing capabil-
ities. This led to the widespread adoption of GPUs
in fields like scientific computing, where tasks such as
simulating physical systems, processing large datasets,
and performing complex calculations could be done
much faster than with traditional CPUs.
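To give a feel for the CUDA programming model from Python, here is a minimal sketch using Numba's CUDA bindings: each of many thousands of GPU threads adds one pair of array elements in parallel. The grid and block sizes are illustrative, and a CUDA-capable NVIDIA GPU is assumed.

# Minimal CUDA kernel via Numba: one GPU thread per array element.
# Assumes a CUDA-capable NVIDIA GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)         # this thread's global index
    if i < out.size:         # guard threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # arrays are copied to the GPU

assert np.allclose(out, a + b)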
For example, in the field of genomics, researchers
use GPUs to accelerate the analysis of DNA sequences.
This involves comparing millions of genetic sequences
to identify variations and mutations, a task that would
take weeks or even months on CPUs but can be com-
pleted in a fraction of the time with GPUs.
1.5 GPUs in AI and Machine
Learning: A Match Made in
Silicon
The rise of AI and ML has been one of the most
significant technological trends of the past decade, and
GPUs have played a crucial role in this revolution.
Training AI models, particularly deep learning models,
involves adjusting millions or even billions of param-
eters. This requires an enormous amount of compu-
tational power, which GPUs are uniquely equipped to
provide.
The architecture of deep neural networks (DNNs),
which are the foundation of many AI models, involves
multiple layers of interconnected nodes (neurons). Train-
ing these networks requires performing vast amounts of
matrix multiplications, a task that is inherently parallel
and perfectly suited to GPUs. As a result, GPUs have
become the standard for training AI models, reducing
training times from weeks to days or even hours.
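Since this matrix arithmetic is the computational core of training, a small sketch makes the point. Assuming the CuPy library and a CUDA-capable GPU, the same multiplication can be written for the CPU with NumPy and for the GPU with CuPy, where it is spread across thousands of cores; actual speedups depend on the hardware.

# The core operation of deep learning, on CPU (NumPy) and GPU (CuPy).
# Assumes CuPy and a CUDA-capable GPU are installed.
import numpy as np
import cupy as cp

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

c_cpu = a @ b                                  # runs on the CPU

a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)    # copy inputs to GPU memory
c_gpu = a_gpu @ b_gpu                          # thousands of GPU cores in parallel
cp.cuda.Stream.null.synchronize()              # wait for the GPU kernel to finish

assert np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-2)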
One of the most significant examples of GPU-
powered AI is NVIDIA's Tesla V100, built on the Volta
architecture. This GPU features Tensor Cores, special-
ized units designed to accelerate the mixed-precision
matrix multiply-accumulate operations that are at the
heart of deep learning. With Tensor Cores, the Tesla
V100 can deliver up to 125 teraflops of performance,
making it one of the most powerful GPUs for AI train-
ing.
This computational power has enabled the devel-
opment of state-of-the-art AI models like OpenAI’s
GPT-4 and Google's BERT. GPT-4, for example, is a
large language model with billions of parameters, ca-
pable of generating human-like text based on a given
prompt. Training such a model requires massive amounts
of data and computational resources, typically involv-
ing hundreds of GPUs working in parallel for several
days or weeks.
1.6 Challenges and Limitations of
GPUs in High-End Computing
While GPUs have revolutionized high-end com-
puting, they are not without challenges. One of the pri-
mary limitations is power consumption. High-performance
GPUs, such as those used in data centers, can consume
hundreds of watts of power, leading to significant en-
ergy costs and heat dissipation challenges. This has
spurred research into more energy-efficient GPU de-
signs and cooling technologies.
Another challenge is the complexity of program-
ming for GPUs. While frameworks like CUDA have
made it easier to develop GPU-accelerated applica-
tions, writing efficient GPU code still requires a deep
understanding of parallel programming concepts. This
learning curve can be a barrier for developers who are
accustomed to CPU programming.
Additionally, while GPUs excel at parallel pro-
cessing, they are not as well-suited for tasks that re-
quire high single-thread performance or involve a lot
of branching and decision-making. For these types of
tasks, CPUs still have the edge. As a result, many high-
performance computing systems use a combination of
CPUs and GPUs, each handling the tasks they are best
suited for.
1.7 The Future of GPUs: What’s
Next?
The future of GPUs is closely tied to the contin-
ued evolution of AI and high-performance computing.
As AI models become more complex, the demand for
more powerful and efficient GPUs will only increase.
GPU manufacturers like NVIDIA, AMD, and Intel are
already working on next-generation architectures that
promise even greater performance and efficiency.
One of the most exciting developments is the inte-
gration of AI-specific hardware into GPUs. For exam-
ple, NVIDIA's latest GPUs feature Tensor Cores, which
are specifically designed for AI workloads. AMD and
Intel are also developing their own AI-focused tech-
nologies, which will likely lead to even more powerful
and specialized GPUs in the coming years.
Moreover, the convergence of AI with other emerg-
ing technologies, such as quantum computing and neu-
romorphic computing, could lead to new types of GPUs
that are optimized for these applications. Quantum
computing, in particular, has the potential to revolu-
tionize the field of AI, and GPUs will likely play a key
role in this new era of computing.
(https://tatourian.blog/2013/09/03/nvidia-gpu-architecture-cuda-programming-environment/)
(https://exittechnologies.com/blog/data-center/gpu-technology-gpu-history-and-gpu-prices-over-time/)
1.8 Conclusion
The evolution of GPUs from simple graphics ac-
celerators to the powerhouses of modern computing
is a testament to the relentless pace of technological
innovation. Their ability to perform massive amounts
of parallel computations has made them indispensable
tools in fields ranging from scientific research to AI
and ML. As we look to the future, GPUs will continue
to drive the next generation of high-performance com-
puting, enabling new breakthroughs and shaping the
world we live in.
1.9 Suggested Further Reading
Deep Learning with GPU Acceleration by Ian
Goodfellow
The Hardware Behind Artificial Intelligence by
Jeffrey Dean
High-Performance Computing with GPUs: A
Complete Guide by Jack Dongarra
Image Suggestions
1.10 Timeline of GPU Evolution
A detailed timeline showing the key milestones
in the development of GPUs, from early fixed-function
pipelines to modern programmable architectures.
(https://developer.nvidia.com/blog/nvidia-turing-architecture-in-depth/)
(https://www.nature.com/articles/s41598-021-91878-w)
1.11 GPU Architecture Overview
A diagram illustrating the architecture of a mod-
ern GPU, highlighting components like CUDA cores,
Tensor Cores, and streaming multiprocessors.
1.12 GPUs in Action
Visualisations of real-world applications of GPUs
in AI, such as the training of neural networks and the
simulation of scientific phenomena.
1.13 Future of GPUs
Conceptual images depicting next-generation GPU
architectures, potentially showing the integration of
AI-specific hardware and quantum computing elements.
This article aims to provide a comprehensive yet
accessible overview of GPUs, highlighting their techni-
cal evolution, current applications, and future potential
in driving high-end computing and AI.
(https://www.nextplatform.com/2024/06/02/nvidia-unfolds-gpu-interconnect-roadmaps-out-to-2027/)
(https://towardsdatascience.com/ten-years-of-ai-in-review-85decdb2a540)
About the Author
Atharva Pathak currently works as a Software Engineer & Data Manager for the Pune Knowledge Cluster, a project under the Office of the Principal Scientific Advisor, Govt. of India, supported by IUCAA, Pune, India. Before this, he was an Astronomer at the Inter-University Centre for Astronomy & Astrophysics (IUCAA). He has also worked on various freelance projects, developing websites and applications and localising software. He is also a life member of Jyotirvidya Parisanstha, India's oldest association of amateur astronomers, and looks after the IOTA-India Occultation section as webmaster and data curator.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company promoting Education, Research, Engineering, Biology, Development, and more.
The whole campus is powered by Solar power and has a rain harvesting facility to provide sufficient water supply
for up to three months of drought. The computing facility in the campus is accessible from anywhere through a
dedicated optical fibre internet connectivity 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that can feed birds and provides water bodies to help them survive the drought.