Cover page
Image Name: grey-breasted prinia
The grey-breasted prinia or Franklin's prinia (Prinia hodgsonii) is a wren-warbler belonging to the family of
small passerine birds found mainly in warmer southern regions of the Old World. This prinia is a resident
breeder in the Indian subcontinent, Sri Lanka and southeast Asia. This picture was taken during one of the field surveys to the Kole wetland, Thrissur district, Kerala, India.
For more information read
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.2, No.8, 2024
www.airis4d.com
The article by Atharva Pathak, "Harnessing AI for Geoinformatics and Weather Forecasting", explores
the transformative impact of AI on geoinformatics and
weather forecasting. AI enhances the analysis of large,
complex datasets, improving accuracy and efficiency
in applications like disaster management, agriculture,
urban planning, and environmental monitoring. The
article highlights AI’s role in processing satellite im-
agery, predicting extreme weather events, and optimis-
ing urban infrastructure. Despite challenges such as
data availability and model interpretability, AI’s po-
tential in these fields continues to grow, promising
better predictions and informed decision-making for
a sustainable future.
The edition starts with the article "Understanding the Basics of Vision Transformers - Part I" by Bles-
son George. It discusses how transformers, success-
ful in natural language processing (NLP), face chal-
lenges when applied to computer vision due to the
high-dimensional and spatial structure of images. Tra-
ditional transformers struggle with computational com-
plexity and lack the inductive biases that make Con-
volutional Neural Networks (CNNs) effective in image
tasks. However, Vision Transformers (ViT) have been
developed to overcome these limitations by adapting
transformers to process images as patches, enabling
them to capture global visual patterns. Despite their
promise, ViTs still face challenges such as high com-
putational costs and the need for large datasets. Future
discussions will cover practical implementations of Vi-
sion Transformers.
In "Black Hole Stories-11: Energy Extraction from a Rotating Black Hole", Professor Naresh Dad-
hich, former director of the Inter University Centre for
Astronomy and Astrophysics (IUCAA), Pune, explores
how energy can be extracted from a Kerr black hole,
which has both mass and angular momentum. The key
concept is the ergosphere, a region outside the event
horizon where rotational energy can be harnessed. The
Penrose process, proposed in 1969, describes how a
particle can split in the ergosphere, with one fragment
gaining energy while the other, with negative energy,
falls into the black hole. This process can extract up
to 29% of the black hole's rotational energy, which
has significant astrophysical implications. Though the
original Penrose process was deemed impractical for
astrophysical applications, modified versions, includ-
ing the Magnetic Penrose process, have shown greater
efficiency and potential for powering high-energy cos-
mic phenomena like quasars and active galactic nuclei.
In the article "How do we know the Sun's temperature?", Linn Abraham explores the methods used by scientists to determine the Sun's surface and core
temperatures. The author traces the journey of scien-
tific discovery, from measuring the distance between
the Earth and the Sun using Kepler’s laws and paral-
lax to calculating the Sun's mass through gravitational principles. The article explains how the Sun's effective
surface temperature of 5,800 K was derived using the
Stefan-Boltzmann law, and how the core temperature,
estimated at 15 million K, was calculated using the
ideal gas law, given the Sun's immense pressure. This
exploration highlights the remarkable achievements of
early scientists in understanding our star’s properties
long before modern technology.
The article "Advancements in Brain-Machine In-
terfaces, Understanding Neurodegenerative Diseases,
and Cognitive Neuroscience” by Geetha Paul explores
recent progress in neuroscience, focusing on brain-
machine interfaces (BMIs), neurodegenerative diseases,
and cognitive neuroscience. BMIs, which allow direct
communication between the brain and external devices,
are advancing treatments for neurological impairments.
The article also highlights research on neurodegener-
ative diseases like Alzheimer's and Parkinson's, em-
phasizing the role of amyloid plaques and dopamine
loss. Cognitive neuroscience advancements, including
brain stimulation techniques and AI-driven analysis,
are deepening our understanding of brain functions and
potential therapies. The future of these fields promises
significant breakthroughs in medical treatments and
cognitive enhancement.
The article "Bead Chip Technology: Simplify-
ing Genetic Analysis” by Jinsu Ann Mathew discusses
DNA microarray technology, which has revolutionized
genetic research by allowing the simultaneous analysis
of thousands of genes. The technology operates on the
principle of DNA hybridization and is used to measure
gene expression levels and identify genetic variations.
Different types of microarrays include spotted arrays
on glass, self-assembled arrays, and in-situ synthesized
arrays, each with unique advantages for genetic stud-
ies. These advancements in microarray technology
have significantly enhanced our ability to understand
complex genetic processes and hold great promise for
the future of personalized medicine.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Harnessing AI for Geoinformatics and Weather Forecasting 2
1.1 What is Geoinformatics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 AI in Weather Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Real-Life Applications of AI in Geoinformatics and Weather Forecasting . . . . . . . . . . . . . 3
1.4 Technologies and Tools Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Challenges and Future Prospects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Understanding the Basics of Vision Transformers- Part I 6
2.1 Challenges in Vision Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Comparison of CNNs and Transformers in Vision Tasks . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Adapting Transformers for Images: Vision Transformers . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
II Astronomy and Astrophysics 9
1 Black Hole Stories-11: Energy Extraction from a Rotating Black Hole 10
1.1 Static and One-Way Surfaces, and Event Horizon . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Kerr Rotating Black Hole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Particle energetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Penrose Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Astrophysical applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6 Magnetic Penrose process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 How do we know the Sun’s temperature? 14
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 How far away is the Sun? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 How big is the Earth’s radius? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 How do we know the Sun's mass? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.5 Blackbody temperature of the Sun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 What is the Sun's core temperature? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
III Biosciences 17
1 Advancements in Brain-Machine Interfaces, Understanding Neurodegenerative Dis-
eases, and Cognitive Neuroscience 18
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2 Brain-Machine Interfaces (BMIs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 Understanding Neurodegenerative Diseases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.4 Brain Stimulation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5 Cognitive Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2 Bead Chip Technology: Simplifying Genetic Analysis 24
2.1 What is DNA Microarray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.2 Principle of DNA Microarray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Types of Microarray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Part I
Artificial Intelligence and Machine Learning
Harnessing AI for Geoinformatics and
Weather Forecasting
by Atharva Pathak
airis4D, Vol.2, No.8, 2024
www.airis4d.com
Artificial Intelligence (AI) is transforming numer-
ous fields, and geoinformatics and weather forecasting
are at the forefront of this technological revolution.
These fields are crucial for understanding and predict-
ing natural phenomena, managing resources, and miti-
gating disasters. AI enhances the accuracy, efficiency,
and scope of these applications, making it an invalu-
able tool for scientists, researchers, and policymakers
alike.
1.1 What is Geoinformatics?
Geoinformatics is the science of collecting, an-
alyzing, and interpreting geographic information. It
involves using data from various sources such as satel-
lites, drones, and sensors to create detailed maps and
models of the Earth's surface. This data can be used
for a wide range of applications, from urban planning
and environmental monitoring to disaster management
and agriculture.
Traditional methods of analyzing geographic in-
formation often struggle with the sheer volume and
complexity of the data. This is where AI comes in. AI
algorithms, particularly machine learning (ML) and
deep learning (DL), can process and analyze large
datasets quickly and accurately.
For example, AI can analyze satellite images to
detect changes in land use, monitor deforestation, and
track urban development. Google Earth Engine is a
prime example of this application. It uses AI to pro-
cess petabytes of satellite imagery and identify trends
in environmental changes. This allows researchers to
monitor climate change impacts, plan urban infrastruc-
ture, and manage natural resources more effectively.
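To make this kind of workflow concrete, here is a minimal, hypothetical sketch of per-pixel land-cover change detection in Python; the band values, class labels and image arrays are invented placeholders, not output of Google Earth Engine or any real survey.

```python
# Minimal sketch: per-pixel land-cover classification applied to two dates of
# imagery, flagging pixels whose predicted class changed. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify(image, model):
    """Apply a trained classifier to every pixel of a (rows, cols, bands) image."""
    rows, cols, bands = image.shape
    flat = image.reshape(-1, bands)          # one row per pixel
    return model.predict(flat).reshape(rows, cols)

# Hypothetical training pixels: spectral band values with land-cover labels
# (0 = forest, 1 = built-up, 2 = water).
X_train = np.array([[0.05, 0.30, 0.45], [0.20, 0.25, 0.15], [0.02, 0.05, 0.01]])
y_train = np.array([0, 1, 2])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Two dates of (synthetic) imagery; in practice these would be satellite scenes.
img_2015 = np.random.rand(100, 100, 3)
img_2024 = np.random.rand(100, 100, 3)

change_mask = classify(img_2015, model) != classify(img_2024, model)
print("Fraction of pixels whose class changed:", change_mask.mean())
```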
1.2 AI in Weather Forecasting
Weather forecasting is another field that benefits
significantly from AI. Predicting atmospheric condi-
tions involves analyzing data from various sources, in-
cluding weather stations, satellites, and ocean buoys.
AI can improve the accuracy of forecasts by analyzing
this data more efficiently than traditional models.
Machine learning algorithms can identify patterns
in historical weather data and use them to predict future
conditions. IBM’s Deep Thunder is a notable example.
This system combines AI with advanced weather mod-
eling to provide hyper-local weather forecasts. Deep
Thunder uses machine learning to analyze vast amounts
of data, including atmospheric conditions, historical
weather patterns, and local terrain. This enables it to
provide highly accurate forecasts for specific locations,
which is particularly useful for agriculture, disaster
management, and energy production.
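As an illustration of learning patterns from historical weather data, the sketch below fits a gradient-boosted regressor to synthetic daily temperatures using the three preceding days as features; it conveys only the general idea and is in no way a description of how Deep Thunder or any operational forecasting system works.

```python
# Minimal sketch: predict tomorrow's temperature from the previous three days
# using a gradient-boosted regressor trained on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
days = np.arange(2000)
# Synthetic "historical record": a seasonal cycle plus noise.
temp = 25 + 8 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1.5, days.size)

lags = 3
X = np.column_stack([temp[i:len(temp) - lags + i] for i in range(lags)])
y = temp[lags:]                              # value on the following day

# Keep the chronological order when splitting into train and test periods.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
model = GradientBoostingRegressor().fit(X_train, y_train)

print("Mean absolute error (deg C):",
      np.abs(model.predict(X_test) - y_test).mean())
```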
1.3 Real-Life Applications of AI in
Geoinformatics and Weather
Forecasting
1.3.1 Disaster Management
AI-enhanced weather forecasting can predict ex-
treme weather events such as hurricanes, floods, and
heatwaves with greater accuracy. This helps authorities
to take timely action, evacuate affected areas, and min-
imize damage. For example, during Hurricane Harvey
in 2017, AI models helped predict the storm’s path
and intensity, allowing for more effective disaster re-
sponse. In another instance, AI algorithms were used
to analyze satellite imagery and detect early signs of
wildfires in California, enabling quicker response times
and reducing the impact of these devastating events.
1.3.2 Agriculture
Farmers rely on accurate weather forecasts to make
informed decisions about planting, irrigation, and har-
vesting. AI can provide precise, localized forecasts
that help farmers optimize their operations. For in-
stance, The Climate Corporation uses AI to offer de-
tailed weather and soil insights, helping farmers im-
prove crop yields and reduce costs. In India, AI-
based apps are helping farmers by providing real-time
weather forecasts, pest detection, and crop health mon-
itoring, significantly improving agricultural productiv-
ity and sustainability.
1.3.3 Urban Planning
Cities can use AI to analyze geospatial data and
plan infrastructure development. This includes identi-
fying suitable locations for new buildings, optimizing
transportation networks, and managing resources effi-
ciently. AI can also help monitor and mitigate the ef-
fects of urbanization on the environment. For example,
in Singapore, AI is used to analyze traffic patterns and
optimize traffic light timings, reducing congestion and
improving air quality. In New York City, AI-driven
models help in flood risk management by analyzing
historical flood data and predicting future flood zones.
1.3.4 Environmental Monitoring
AI is used to monitor environmental changes and
track the health of ecosystems. For example, AI al-
gorithms can analyze satellite images to detect defor-
estation, track glacier movements, and monitor coral
reef health. The European Space Agency’s Coperni-
cus program uses AI to process data from its Sentinel
satellites, providing valuable insights into environmen-
tal changes and helping to inform policy decisions.
1.4 Technologies and Tools Used
1.4.1 Machine Learning and Deep Learning
Tools like TensorFlow and PyTorch are widely
used for building AI models. TensorFlow, developed
by Google, is an open-source library that provides a
flexible ecosystem of tools, libraries, and community
resources to build and deploy ML models. PyTorch,
developed by Facebook, is another open-source ML
library that offers dynamic computational graphs and
is popular for deep learning applications.
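For readers unfamiliar with these libraries, a minimal PyTorch sketch of the model-building and training workflow is shown below; the three input features and the random data are placeholders, not a real geophysical dataset.

```python
# Minimal PyTorch sketch: a small fully connected network mapping a few input
# features (e.g. pressure, humidity, wind speed) to one predicted quantity.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 3)           # placeholder inputs
y = torch.randn(256, 1)           # placeholder targets

for epoch in range(100):          # simple training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```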
1.4.2 Geographic Information Systems (GIS)
GIS software like ArcGIS and QGIS are used for
mapping and spatial analysis. ArcGIS, developed by
Esri, is a comprehensive GIS platform that provides
tools for mapping, spatial analysis, and data manage-
ment. QGIS is an open-source GIS application that
supports viewing, editing, and analyzing geospatial
data.
1.4.3 Satellite Imagery Processing
Google Earth Engine and Sentinel Hub are plat-
forms that provide access to satellite imagery and tools
for processing this data. Google Earth Engine is a
cloud-based platform that offers petabytes of satellite
imagery and geospatial datasets along with analysis ca-
pabilities. Sentinel Hub, developed by Sinergise, is a
cloud-based platform that provides easy access to Sen-
tinel satellite data and allows users to create custom
scripts for data processing.
1.4.4 Weather Modeling and Simulation
Tools like WRF (Weather Research and Forecast-
ing) Model and HWRF (Hurricane Weather Research
and Forecasting) Model are used for weather simu-
lation. WRF is a next-generation mesoscale numer-
ical weather prediction system designed for both at-
mospheric research and operational forecasting needs.
HWRF is a specialized version of WRF developed by
NOAA for predicting the behavior of tropical cyclones.
1.5 Challenges and Future Prospects
While AI offers significant benefits, it also poses
challenges. One major issue is the quality and avail-
ability of data. AI models require large amounts of
accurate, high-resolution data to function effectively.
In many parts of the world, such data is either unavail-
able or difficult to obtain.
Another challenge is the interpretability of AI
models. AI algorithms, especially deep learning mod-
els, can be complex and difficult to understand. This
can make it hard for researchers to trust and validate
the results.
Despite these challenges, the future of AI in geoin-
formatics and weather forecasting looks promising.
Advances in sensor technology, data collection meth-
ods, and AI algorithms will continue to improve the
accuracy and efficiency of these applications. Re-
searchers are also exploring the use of AI in combi-
nation with other technologies, such as the Internet of
Things (IoT) and blockchain, to enhance data collec-
tion, sharing, and analysis.
1.6 Conclusion
AI is transforming geoinformatics and weather
forecasting by providing more accurate, efficient, and
detailed analyses. From disaster management to agri-
culture, urban planning, and environmental monitor-
ing, AI offers a wide range of practical applications
that can help us understand and manage our environ-
ment better. As technology continues to evolve, the
potential for AI in these fields will only grow, offering
new opportunities for innovation and improvement.
Figure 1: Satellite image analysis using Google Earth
Engine.
Figure 2: IBM’s Deep Thunder for hyper-local
weather forecasting.
By leveraging the power of AI, we can enhance our
understanding of the Earth and its atmosphere, leading
to better predictions, informed decision-making, and a
more sustainable future.
1.7 Further Reading
Google Earth Engine: https://earthengine.google.
com/
IBM’s Deep Thunder: https://www.ibm.com/
weather/industries/deep-thunder
The Climate Corporation: https://www.climate.
com/
National Center for Atmospheric Research (NCAR):
https://ncar.ucar.edu/
American Geophysical Union (AGU): https://
www.agu.org/
About the Author
Atharva Pathak currently works as a Software Engineer & Data Manager for the Pune Knowledge Cluster, a project under the Office of the Principal Scientific Advisor, Govt. of India, supported by IUCAA, Pune. Before this, he was an Astronomer at the Inter-University Centre for Astronomy & Astrophysics (IUCAA). He has also worked on various freelance projects, including development of websites and applications and localization of different software. He is also a life member of Jyotirvidya Parisanstha, India's oldest association of amateur astronomers, and looks after the IOTA-India Occultation section as webmaster and data curator.
Understanding the Basics of Vision
Transformers- Part I
by Blesson George
airis4D, Vol.2, No.8, 2024
www.airis4d.com
In previous issues, we covered the basics of trans-
former networks. Transformers have transformed nat-
ural language processing (NLP) by adeptly captur-
ing long-range dependencies and contextual informa-
tion through their self-attention mechanism. This ca-
pability allows them to understand and generate hu-
man language with impressive accuracy, driving ad-
vancements in tasks such as translation, summariza-
tion, and question-answering. However, their applica-
tion in computer vision has been less successful due to
the inherent differences in data structure; images are
high-dimensional with a spatial grid-like arrangement,
making the dense connections of transformers compu-
tationally intensive and less efficient. Convolutional
Neural Networks (CNNs) are typically better suited
for image-related tasks because they effectively cap-
ture local patterns and hierarchies with fewer parame-
ters and lower computational demands. Nonetheless,
recent innovations like the Vision Transformer (ViT)
are beginning to address these limitations by adapting
the transformer architecture to process image patches,
yielding promising results in image recognition and
classification.
2.1 Challenges in Vision Tasks
Transformers have achieved significant success in
natural language processing (NLP) due to their ability
to capture complex relationships through self-attention
mechanisms. However, their application to computer
vision faces several unique challenges due to the inher-
ent differences between image and text data.
2.1.1 High-Dimensional and Spatially
Structured Data
Images are high-dimensional, containing a large
number of pixels with multiple color channels (e.g.,
RGB). This high dimensionality presents a challenge
for transformers, which were originally designed to
handle sequential data like text. Furthermore, images
have a spatial structure where the relative position of
each pixel is crucial for understanding visual content.
Traditional transformers, which use dense connections
across all input tokens, struggle to efficiently manage
this spatial information without incurring significant
computational costs.
2.1.2 Computational Complexity
The self-attention mechanism in transformers re-
quires computing pairwise interactions between all to-
kens in the input sequence. For images, this means
calculating interactions between all pixels or image
patches, which can be computationally expensive and
memory-intensive. As the size of the image increases,
the computational demands grow quadratically, mak-
ing it challenging to scale transformers for high-resolution
images.
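To get a feel for the numbers, consider the configuration used in the ViT paper cited in the references: a \(224 \times 224\) image divided into \(16 \times 16\) patches gives \((224/16)^2 = 196\) tokens, so each self-attention layer handles on the order of \(196^2 \approx 3.8 \times 10^4\) pairwise interactions; doubling the resolution to \(448 \times 448\) raises the token count to 784 and the number of pairwise interactions roughly sixteen-fold, to about \(6.1 \times 10^5\).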
2.1.3 Lack of Inductive Biases
Convolutional Neural Networks (CNNs) possess
built-in inductive biases that are particularly well-suited
for image processing, such as the ability to recognize
local patterns and hierarchical structures. These bi-
ases enable CNNs to efficiently capture spatial hierar-
chies and feature representations in images. In contrast,
transformers lack these inductive biases, making them
less effective at leveraging local spatial information
without additional modifications or large amounts of
training data.
2.2 Comparison of CNNs and
Transformers in Vision Tasks
In computer vision, Convolutional Neural Net-
works (CNNs) and Transformers represent two distinct
approaches to processing image data. Each has its own
strengths and weaknesses, which are highlighted in the
following comparison.
2.2.1 Convolutional Neural Networks (CNNs)
CNNs are specifically designed for image process-
ing and have several advantages in this domain:
Local Receptive Fields: CNNs use local recep-
tive fields to capture local patterns and features
in images. This allows them to effectively rec-
ognize and process spatial hierarchies, such as
edges, textures, and object parts.
Parameter Sharing: By using shared weights
across different parts of the image, CNNs re-
duce the number of parameters and computa-
tional complexity. This makes them efficient for
large-scale image data.
Translation Invariance: CNNs achieve trans-
lation invariance through pooling layers, which
helps in recognizing objects regardless of their
position in the image.
2.2.2 Transformers
Transformers, originally designed for sequential
data in NLP, have been adapted for vision tasks but
face different challenges:
Global Contextual Information: Transform-
ers capture global context through self-attention
mechanisms, allowing them to understand rela-
tionships between all parts of the image. This
can be advantageous for tasks requiring a global
understanding of visual content.
High Computational Complexity: The self-
attention mechanism in transformers involves com-
puting pairwise interactions between all image
patches, leading to high computational and mem-
ory costs, especially for high-resolution images.
Lack of Inductive Biases: Unlike CNNs, trans-
formers do not inherently capture local spatial
patterns and hierarchies. They rely on positional
encodings and learned representations to process
spatial information, which can be less efficient
compared to CNNs.
These challenges illustrate why traditional CNNs
have been more successful in vision tasks compared
to transformers. However, ongoing research and in-
novations, such as the Vision Transformer (ViT), aim
to address these limitations by adapting transformer
architectures to better handle image data.
2.3 Adapting Transformers for
Images: Vision Transformers
Vision Transformers (ViT) adapt the original trans-
former architecture to process image data by introduc-
ing several key modifications. Instead of treating im-
ages as sequential data, ViTs divide images into fixed-
size non-overlapping patches, which are then linearly
embedded into a sequence of tokens. These tokens,
along with positional encodings that provide spatial
information, are fed into the transformer model. The
self-attention mechanism within the transformer is then
applied to these image patches, allowing the model to
capture global dependencies and relationships between
different parts of the image. By leveraging the trans-
former’s ability to model long-range interactions, ViTs
can learn complex visual patterns and contextual in-
formation across the entire image. This adaptation
allows transformers to handle image data more effec-
tively, although challenges such as high computational
costs and the need for large datasets remain. Recent
advancements continue to refine these techniques to
improve performance and efficiency in vision tasks.
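A minimal sketch of the patch-embedding step described above is given below; it assumes a square input whose side is divisible by the patch size, uses illustrative default sizes from the ViT paper, and omits the attention blocks and classification head of a complete model.

```python
# Minimal sketch of ViT-style patch embedding: split an image into fixed-size
# non-overlapping patches, project each patch linearly into an embedding space,
# and add learnable positional encodings.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A conv with stride = kernel size = patch size is equivalent to
        # flattening each patch and applying a shared linear projection.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, embed_dim))

    def forward(self, x):                     # x: (batch, channels, H, W)
        x = self.proj(x)                      # (batch, embed_dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)      # (batch, num_patches, embed_dim)
        return x + self.pos_embed             # add positional information

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)   # torch.Size([1, 196, 768])
```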
2.4 Conclusion
In this issue, we explored the fundamental con-
cepts of Vision Transformers (ViT) and their adapta-
tion of the traditional transformer architecture to image
data. We discussed how ViTs address the challenges of
processing high-dimensional and spatially structured
image data by dividing images into patches, embedding
these patches into a sequence of tokens, and applying
self-attention mechanisms to capture global dependen-
cies. Despite their promise, Vision Transformers face
challenges such as high computational complexity and
the need for substantial training data.
In the next issue, we will delve deeper into the
practical aspects of Vision Transformers, including the
specific coding implementations and methods used to
train and optimize these models.
References
1. Dosovitskiy, Alexey, et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).
2. Vision Transformers
About the Author
Dr. Blesson George presently serves as an
Assistant Professor of Physics at CMS College Kot-
tayam, Kerala. His research pursuits encompass the
development of machine learning algorithms, along
with the utilization of machine learning techniques
across diverse domains.
Part II
Astronomy and Astrophysics
Black Hole Stories-11: Energy Extraction
from a Rotating Black Hole
by Naresh Dadhich
airis4D, Vol.2, No.8, 2024
www.airis4d.com
In Black Hole Stories 9 and 10 (BHS-9 and BHS-
10), various properties of the Kerr black hole, which
has mass and angular momentum (spin), were discussed.
In the present story we will consider some of those con-
cepts from a slightly different point of view and then
consider energy extraction from the ergosphere of a
Kerr black hole.
1.1 Static and One-Way Surfaces,
and Event Horizon
It is generally thought that a black hole exerts a strong gravitational pull on a particle, and the closer a
particle gets to it, the stronger the pull becomes. Could
there exist a limit where it becomes irresistible, i.e., it
cannot be countered by any amount of force in the op-
posite direction? Beyond such a limit, a particle would
not be able to stay fixed at a point. That happens for
a non-rotating static black hole at the Schwarzschild
radius, \(R_S = 2GM/c^2\). The spherical surface corre-
sponding to this radius is called the static surface, on
which and below which nothing can remain fixed at a
point. It has been learned through earlier stories that
no particle or light ray (photon) can escape from within
this surface to the outside world, so it is also called the
event horizon.
When the effective potential for the Schwarzschild
case was considered in earlier stories, it was found that
there is a term proportional to \(1/r\), and another proportional to \(1/r^3\) (there is also a term proportional to \(1/r^2\), which is due to the centrifugal force).
One can say that the first term is the potential which
corresponds to the Newtonian inverse square force,
while the other is a general relativistic effect, which is
caused by the curvature of space and is felt only when the particle has non-zero angular momentum. Therefore Einstein's gravity could be envisaged as Newton's gravitational law with the effect of curved three-dimensional space added. There are two aspects: one, the usual gravitational pull, and second, the curvature of space, which also becomes stronger and stronger as one goes closer and closer to the gravitating body. One can say here that light (or a photon, which is a zero-mass particle) does not feel the gravitational pull, else its velocity
will not remain constant. It only feels or responds to
space curvature. Inside the event horizon, the curva-
ture becomes so strong that nothing can go out of the
surface. This is a one-way surface, i.e. a surface that
can be crossed in only one way. Objects can fall into it
but nothing can come out, including light.
The static surface and the event horizon are coin-
cident for the non-rotating black hole. That is not the
case for a rotating black hole and that gives rise to the
very interesting physical phenomenon of extraction of
the rotational energy of the black hole.
1.2 Kerr Rotating Black Hole
It was seen in BHS-9 and BHS-10 that the Kerr
solution of Einstein's equation describes the gravitational
field of a rotating black hole. In addition to the mass,
the rotation of the black hole also contributes to the
geometry of spacetime, and most remarkably rotation
is shared with the space surrounding the black hole.
So a particle not only feels the gravitational pull to-
wards the black hole but it also feels a tangential pull
to rotate in the same direction as the black hole. That
is, the space around a rotating black hole is endowed
with an inherent angular velocity. In particular, a zero
angular momentum particle will have non-zero angu-
lar velocity, which is a function of the Boyer-Lindquist coordinates \(r\) and \(\theta\) we introduced in BHS-10. This is
called the phenomenon of dragging of inertial frames
or frame-dragging, which was described in BHS-10.
Now the question arises, when do radial and an-
gular pulls become irresistible? Interestingly, it turns
out that first the angular pull becomes irresistible and
then the radial pull. The radial coordinate at which this
happens is given by
\[
r_{st} = \mu + \sqrt{\mu^{2} - a^{2}\cos^{2}\theta} \qquad (1.1)
\]
where \(\mu = GM/c^2\) and \(a = J/(Mc)\), with \(M\) the mass of the black hole and \(J\) its angular momentum, in the notation that was used in the previous stories. The relation defines the static surface for the rotating black hole and it reduces to \(r_{st} = 2GM/c^2\) when \(a = 0\),
which is the Schwarzschild case, in which the static
surface and the event horizon coincide, and there is
no frame dragging. The static surface signifies that
any particle or light ray below \(r_{st}\) has to corotate with
the black hole even if the angular momentum of the
particle or light ray is zero or negative, i.e., in the
opposite direction to that of the black hole. Here an
observer can remain stationary at a fixed radius but it
has to rotate with the frame-dragging angular velocity.
It is like a whirlpool, anything that falls in it has to
corotate. The static surface is the same as the outer
surface of infinite redshift shown in Figure 1, which
we reproduce below for easy reference.
The one-way surface, a surface which can only be crossed in one direction (things can fall in but nothing can come out), is defined by
\[
r_{+} = \mu + \sqrt{\mu^{2} - a^{2}} \qquad (1.2)
\]
Figure 1: The structure of the Kerr black hole. The
figure is described in the text. The figure is from
Lingyan Guan+, Journal of Physics: Conference Series
2022. Creative Commons Licence.
This is the event horizon for the rotating black
hole which was introduced in BHS-10. It is equal to
\(r_{st}\) only on the axis of rotation, or the poles, at which \(\theta = 0\). The region lying between them is called the
ergosphere, which is shown shaded in grey in Figure
1.
The ergosphere is the place of action that facili-
tates extraction of the rotational energy of the black hole.
Interestingly, though nothing can come out of the event
horizon, which characterises a black hole, yet rotational
energy can be extracted from it. That is what we next
take up.
1.3 Particle energetics
Let us first consider the energy of a particle with
mass \(m\) at rest near a static object of mass \(M\). It will be the sum of the rest mass energy and the gravitational interaction (potential) energy: \(E = mc^2 - GMm/r\). This will become zero at \(r = GM/c^2\) and negative
below that. This interesting result follows when we
include the relativistic concept of rest mass energy in
the otherwise Newtonian gravity considerations. In
general relativity, energy per unit rest mass energy is
given by
\[
E = \sqrt{1 - \frac{2GM}{rc^{2}}} \qquad (1.3)
\]
which for large \(r\) reduces to the Newtonian relation \(E = mc^2 - GMm/r\). The energy of the particle goes to zero for \(r = R_S = 2GM/c^2\) and is negative
below that. The particle cannot have negative energy
outside the event horizon.
However, the situation dramatically changes for a
rotating black hole. In that case, a term gets added to
the energy, arising out of interaction between the par-
ticle's angular momentum \(L\), and the black hole's rotation. The rotation is indicated by the inherent frame-
dragging angular velocity, at any given location around
the black hole. It is this term that can become nega-
tive for a particle having negative angular momentum
relative to the black hole. Inside the static limit, that is in the ergosphere, the term due to the frame dragging can dominate over the positive term, which in fact tends to zero close to the event horizon, as in the non-rotating case. The total energy therefore becomes negative, relative to a distant observer, in the ergosphere when the particle's angular momentum \(L < 0\). The interesting question which now arises is: could this remarkable property be utilised to extract energy from a rotating black hole? This is precisely what Roger Penrose did in 1969, proposing a process of energy
extraction. That is what we take up next.
1.4 Penrose Process
Let us envisage a particle of energy \(E_1\) which falls on a rotating black hole from infinity and splits into two fragments of energies \(E_2\) and \(E_3\) near the horizon in the ergosphere. Suppose one of them happens to have negative angular momentum, and hence negative energy \(E_2 < 0\), and falls into the black hole. For the other fragment, by the conservation of energy, the energy will be \(E_3 = E_1 - E_2 > E_1\) (since \(E_2 < 0\)), so that it will come out of the ergosphere with enhanced energy. This is how energy is extracted from the black hole through the Penrose process: by throwing a negative energy particle into it. Note that nothing has come out of the black hole and yet its energy has decreased by the amount \(|E_2|\). The process is indicated schematically in Figure 2.
The existence of ergosphere is crucial for the en-
ergy extraction process. Since a non-rotating black
hole has no ergosphere, no energy can be extracted
from it. The energy extracted by the Penrose process
therefore has to be the rotational energy of the black
Figure 2: A schematic diagram of the Penrose pro-
cess of energy extraction in the equatorial plane. Note
that the energy of the infalling particle \(E_2 < 0\), and hence by conservation of energy, the energy of the escaping particle will be \(E_3 = E_1 - E_2 > E_1\).
hole. It has been seen in BHS-9 that the maximum
value of the rotational parameter \(a = J/(Mc)\), where \(J\) is the angular momentum of the black hole, is given by \(a_{max} = GM/c^2\). Such a maximal black hole has rotational energy amounting to about 29 percent of its mass-energy, which is the maximum that can be extracted through the Penrose process.
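The 29 percent figure can be traced to the standard notion of a black hole's irreducible mass; the following short calculation is a supplementary sketch, not part of the original article:
\[
M_{\mathrm{irr}}^{2} = \frac{1}{2}\left(M^{2} + \sqrt{M^{4} - \left(\frac{Jc}{G}\right)^{2}}\right).
\]
For the maximally rotating case \(a = a_{max} = GM/c^2\), i.e. \(J = GM^2/c\), this gives \(M_{\mathrm{irr}} = M/\sqrt{2}\), so the fraction of the mass-energy that can in principle be extracted is \(1 - 1/\sqrt{2} \approx 0.29\), the 29 percent quoted above.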
1.5 Astrophysical applications
A rotating black hole is a huge reservoir of rota-
tional energy, up to 29% of its mass, which could be millions of Solar masses. Tapping this energy is of immense astrophysical importance. It could power a high energy source like a quasar, a stellar-sized object giving out radiative energy about \(10^{10}\) to \(10^{12}\) times the energy output of any star.
What could be the powering engine of such a monster
object? Could the source be the rotational energy of a black hole being extracted by the Penrose process? This was
an exciting possibility for a purely geometric process,
which is entirely governed by the Kerr geometry of the
spacetime surrounding a rotating black hole.
The great physicist John Wheeler was very excited
about the possibility and he proposed that a star moving
grazing past a supermassive rotating black hole could
get tidally disrupted into a large number of fragments
in the ergosphere of the black hole. If some of the frag-
ments attain negative energy, there will be a swarm of
enhanced energy fragments coming out of the ergo-
sphere, feeding the enormous energy requirement of a
quasar.
The Penrose process was indeed a very exciting
and novel energy extraction process with great astro-
physical potential, but soon (in 1972 by Saul Teukol-
sky and his collaborators, and independently in 1974
by Robert Wald) it was realized that it was not astro-
physically viable. That is because for a fragment to
attain negative energy, relative velocity between the
two fragments is required to be greater than half of the
light velocity. There can be no astrophysical mech-
anism that could accelerate fragments to such a high
velocity almost instantaneously.
In 1977 Roger Blandford and Roman Znajek pro-
posed an electromagnetic process of energy extraction
facilitated by the existence of negative energy states in
the ergosphere. It was broadly based on the phenom-
ena of twisting of magnetic field lines produced by the
accretion disk around the hole. This results in produc-
ing a potential difference between pole and equatorial
plane, and as the potential difference discharges, en-
ergy is driven out. This was how the rotational energy of a black hole could be mined and utilised for powering
high energy sources. The maximum efficiency of the
Penrose process turns out to be about 21 percent, and
compared to that the Blandford-Znajek (B-Z) mecha-
nism is much more efficient.
1.6 Magnetic Penrose process
In 1985, a magnetic version of the Penrose pro-
cess was proposed by Sanjay Wagh, Sanjeev Dhurand-
har and Naresh Dadhich for a rotating black hole sitting
in a magnetic field. It was then envisaged that the two
fragments of the split in the ergosphere have opposite
electric charges. In such a case, the energy required for
a fragment to attain negative energy does not have to
entirely come kinematically, it can also arise from the
electromagnetic interaction between the charged parti-
cles, which would be dominant. Thus the formidable
velocity limit of greater than \(c/2\) is no longer valid,
and that marked the revival of the Penrose process as
the magnetic Penrose process for astrophysical ap-
plications. Here the rotational energy of the black hole is
mined with the help of a magnetic field which plays the
role of a facilitating agent. This process can become
highly efficient; the efficiency could even exceed 100%,
depending upon the magnetic field strength.
Let me end the discussion by saying that it is the
existence of the ergosphere caused by a separation be-
tween static surface and horizon that plays the key role
in extracting energy from a black hole. The full-fledged
magnetohydrodynamic simulations of the process be-
ing studied by various groups find that the magnetic
Penrose process is the most favoured mechanism for
powering high energy sources like quasars and active
galactic nuclei, and could also accelerate ultra-high
energy cosmic rays.
About the Author
Professor Naresh Dadhich is an emeritus
Professor at Inter University Centre for Astronomy
and Astrophysics. He is a former director of the In-
ter University Centre for Astronomy and Astrophysics
(IUCAA), Pune, and is a renowned cosmologist. In
collaboration with IUCAA, he pioneered astronomy
outreach activities from the late 80s to promote as-
tronomy research in Indian universities. The "Speak with an Astronomer" monthly interactive program
to answer questions based on his article will allow
young enthusiasts to gain profound knowledge about
the topic.
How do we know the Sun’s temperature?
by Linn Abraham
airis4D, Vol.2, No.8, 2024
www.airis4d.com
2.1 Introduction
It is often humbling to think how far humanity has
reached in its efforts to understand the cosmos. Using
the systematic knowledge building methodology that
we call Science, we humans have been able to find dis-
tances to the Sun, Moon and planets; weigh the Earth while living on just a tiny sliver of its surface called the crust; and weigh the Sun and find the temperature and pressure in its interior. Most of this was done
before the time of rockets and satellites with just pen,
paper and the equations of science. In this article let
us retrace the path that has been taken by several great
thinkers and scientists who contributed to the repos-
itory of knowledge that we have today. The primary
question we want to address is: how hot is the Sun? We have seen iron rods glowing red and white when heated. How much hotter, then, must the Sun be? What is the temperature at its surface as well as in its core?
2.2 How far away is the Sun?
From Kepler's third law of planetary motion, we know that
\[
T^{2} \propto a^{3}
\]
where \(T\) is the orbital period of a planet around the Sun and \(a\) is the semi-major axis of the orbit. By taking ratios one can convert this to an equation:
\[
\frac{T_{1}^{2}}{T_{2}^{2}} = \frac{a_{1}^{3}}{a_{2}^{3}}
\]
Now let us put Earth as the second planet and Venus
as the first planet. We define the Earth’s orbital period
around the Sun as one year and its semi-major axis as
one astronomical unit (AU). This allows us to calculate
the distance from Venus to Sun in terms of AU if we
know the orbital time period of Venus.
\[
T_{1}^{2} = a_{1}^{3}
\]
For the purposes of this discussion, let me call the
semi-major axis length itself as the distance. Since the
orbital period of Venus can be found by observations,
we can find the distance between Venus and Sun in
terms of AU. One minus this distance would then give
the distance between Earth and Venus in terms of AU.
Now consider what happens during the Venus transit,
that is, when the Venus comes directly between the
Sun and Earth. If at this time, we measure the actual
distance from Earth to Venus, we can find out the length
of one AU. Such a distance measurement can be done
using the parallax method. Doing all the calculations
we arrive at an answer of 150 million kilometers for
the length of one AU.
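Putting rough modern numbers into this chain of reasoning (these figures are illustrative, not quoted from the article): Venus orbits the Sun in about 0.62 years, so \(a_{\mathrm{Venus}} \approx (0.62)^{2/3} \approx 0.72\) AU and the Earth-Venus separation at transit is roughly \(1 - 0.72 = 0.28\) AU; a parallax measurement of that separation yields about 42 million km, and hence \(1~\mathrm{AU} \approx 42/0.28 \approx 150\) million km.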
2.3 How big is the Earth’s radius?
One of the best arguments for the round-earther’s
claim of a spherical Earth is that the shape of the
Earth's shadow on the Moon is circular during any lunar eclipse. During a lunar eclipse, the Sun illuminates the Earth from behind and its shadow falls on the Moon. Regardless of when or where this happens, the shadow is always circular, and a sphere is the only shape that fits such a scenario. This fact was known in the time of Eratosthenes, who lived more than two thousand years ago.
We know that the Sun is so far away that the light rays
coming from it are parallel. Eratosthenes also assumed this to be true and went about calculating the Earth's radius based on this assumption. He found that at the time when the Sun is directly overhead at Syene in Egypt, it casts a shadow inclined about 7 degrees from the vertical at Alexandria. By knowing the actual distance from Syene to Alexandria and this angular difference of about 7 degrees, the circumference, and hence the radius, of the Earth can be calculated. The radius works out to about 6,371 km.
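In rounded modern figures (used here only to illustrate the arithmetic): an angle of about 7.2 degrees is one-fiftieth of a full circle, and the Syene-Alexandria distance is roughly 800 km, so the circumference is about \(50 \times 800 = 40{,}000\) km and the radius is \(40{,}000/(2\pi) \approx 6{,}400\) km.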
Henry Cavendish (1731-1810) is the man credited
with weighing the Earth for the first time. He did this
by measuring the gravitational constant that appears in Newton's Law of Universal Gravitation. He was able
to do this because the radius of the Earth was already
known by that time.
2.4 How do we know the Sun’s mass?
The Earth goes around the Sun in a roughly circular orbit. The centripetal acceleration is provided by the Sun's gravitational attraction on the Earth. The centripetal acceleration is by definition \(v^2/r\), where \(v\) is the speed of the Earth's motion around the Sun. This speed is trivial to find once we know the circumference of the Earth's orbit.
We divide that quantity by the time taken for one full
revolution which is one year. By equating this to the
force of gravitation between Sun and Earth, we can
find out the mass of the Sun, since we already know the gravitational constant and the Sun-Earth distance (the Earth's mass cancels out of the equation).
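Written out explicitly, the balance described above reads
\[
\frac{GM_{\odot}m}{r^{2}} = \frac{mv^{2}}{r} \quad\Rightarrow\quad M_{\odot} = \frac{v^{2}r}{G},
\]
and with \(r \approx 1.5 \times 10^{11}\) m and \(v = 2\pi r/(1~\mathrm{yr}) \approx 3 \times 10^{4}\) m/s this gives \(M_{\odot} \approx 2 \times 10^{30}\) kg (the numerical values are rounded modern figures, added here for illustration).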
2.5 Blackbody temperature of the
Sun
A blackbody temperature or effective temperature
of an object is the surface temperature that the object
would have if it were a perfect blackbody. This effective
temperature can be found if one knows the luminosity
of the blackbody and its surface area, using the Stefan-Boltzmann law. The luminosity is defined as the total
power radiated by the object. A related term, apparent
brightness or received energy flux, is the power per unit
surface area of the detector. This can be computed by
dividing the total power incident on your detector by the
total surface area of your detector which would depend
on your aperture size etc. This apparent brightness is
affected by the true luminosity of the object as well as
the distance to the object. For an isotropically radiating
object this relationship can be expressed as,
\[
\text{Apparent brightness} = \frac{L}{4\pi d^{2}}
\]
where \(L\) is the luminosity and \(d\) is the distance to the
object.
To calculate the effective temperature for the Sun,
we need to know its linear size, that is the Sun's diam-
eter. Since the distance to the Sun is already known,
by knowing the angular size of the Sun, i.e. the angle
subtended by the Sun on the sky, we can compute its
linear size. Since the Sun is very far away, the angles
involved are quite small so that we can use the small-
angle approximation to find this. Doing all this we get
the Sun's effective surface temperature to be 5,800 K.
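For completeness, the Stefan-Boltzmann step looks like this, with rounded modern values for the Sun's luminosity and radius inserted for illustration:
\[
L = 4\pi R_{\odot}^{2}\sigma T^{4} \quad\Rightarrow\quad T = \left(\frac{L}{4\pi R_{\odot}^{2}\sigma}\right)^{1/4},
\]
and with \(L \approx 3.8 \times 10^{26}\) W, \(R_{\odot} \approx 7 \times 10^{8}\) m and \(\sigma = 5.67 \times 10^{-8}~\mathrm{W\,m^{-2}\,K^{-4}}\) one indeed finds \(T \approx 5{,}800\) K.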
2.6 What is the Sun’s core
temperature?
By knowing the Sun's mass and volume we can
estimate the pressure in the Sun. By doing the calcu-
lations we find that the pressure is so high that matter
can only exist in the plasma state. That is, atoms are stripped of all their electrons. Such a plasma is however electrically neutral on the whole, and the inter-particle distances are large enough for it to show perfect gas be-
haviour. This allows us to estimate the temperature of
the plasma from the perfect gas law provided we first
find out the pressure at the core. The perfect gas law
states that \(P = nkT\), where \(P\) is the pressure of the gas, \(n\) is the number density of particles and \(T\) is the temperature. Using the law we find the temperature at the Sun's core to be about \(1.5 \times 10^{7}\) K, or 15 million kelvin.
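A common back-of-the-envelope version of this estimate (an order-of-magnitude check added here, not the calculation the article performs) balances the thermal energy of a particle against the gravitational potential energy per particle:
\[
kT \sim \frac{GM_{\odot}m_{H}}{R_{\odot}} \quad\Rightarrow\quad T \sim \frac{GM_{\odot}m_{H}}{kR_{\odot}} \approx 2 \times 10^{7}~\mathrm{K},
\]
which is the same order of magnitude as the value quoted above.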
References
[1] Arnab Rai Choudhuri. Nature's Third Cycle: A
Story of Sunspots. Oxford University Press, Oxford
; New York, 2015. ISBN 978-0-19-967475-6.
[2] Frank H. Shu. The Physical Universe: An In-
troduction to Astronomy. A Series of Books in
Astronomy. University Science Books, Sausalito, Calif.,
9. print edition, 1982. ISBN 978-0-935702-05-7.
[3] Eric Priest. Magnetohydrodynamics of the Sun.
[4] https://www.scientificamerican.com/article/how-do-scientists-measure/
[5] https://en.wikipedia.org/wiki/Cavendish_experiment
About the Author
Linn Abraham is a researcher in Physics,
specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN based
Computer Vision tools for prediction of solar flares
from images of the Sun, morphological classifica-
tions of galaxies from optical image surveys and ra-
dio galaxy source extraction from radio observations.
Part III
Biosciences
Advancements in Brain-Machine Interfaces,
Understanding Neurodegenerative Diseases,
and Cognitive Neuroscience
by Geetha Paul
airis4D, Vol.2, No.8, 2024
www.airis4d.com
1.1 Introduction
The field of neuroscience has witnessed signifi-
cant advancements in recent years, particularly in the
areas of brain-machine interfaces (BMIs), neurodegen-
erative disease research, and cognitive neuroscience.
These breakthroughs enhance our understanding of the
human brain and pave the way for innovative treatments
and technologies that can drastically improve the qual-
ity of life. Advancements in cognitive neuroscience
have significantly deepened our understanding of the
brain's structure and function, elucidating the neu-
ral mechanisms underlying cognition, behaviour, and
emotion. Neuroimaging techniques such as functional
magnetic resonance imaging (fMRI), positron emis-
sion tomography (PET), and diffusion tensor imaging
(DTI) have revolutionised our ability to observe brain
activity and connectivity in real time. fMRI, for exam-
ple, maps brain regions associated with specific cog-
nitive functions by detecting changes in blood flow,
while PET scans provide insights into metabolic pro-
cesses and neurotransmitter effects. DTI enhances our
understanding of neural pathways by visualising white
matter tracts in the brain [13].
Brain stimulation methods like transcranial mag-
netic stimulation (TMS) and deep brain stimulation
(DBS) have also contributed to cognitive neuroscience.
TMS modulates brain activity non-invasively to estab-
lish causal links between brain regions and cognitive
functions and is being explored as a treatment for de-
pression and other neuropsychiatric disorders. DBS,
which involves implanting electrodes in specific brain
areas, treats movement disorders such as Parkinson's
disease and has potential applications in treating psy-
chiatric conditions [4].
Research on neuroplasticity has shown that the
brain can reorganise itself by forming new neural con-
nections throughout life, crucial for learning, memory,
and recovery from brain injuries. Studies on critical
periods have highlighted developmental times when
the brain is particularly receptive to learning and en-
vironmental influences, impacting education and reha-
bilitation strategies. Advances in genomics and epige-
netics have deepened our understanding of how genetic
and environmental factors influence brain development
and function, revealing the genetic basis of cognitive
disorders and potential therapeutic targets. Research
on neurotransmitter systems has clarified their roles in
regulating mood, cognition, and behaviour, paving the
way for new psychiatric medications [11].
Additionally, computational neuroscience, includ-
ing brain-computer interfaces (BCIs) and artificial in-
telligence (AI), is advancing our understanding and
treatment of neurological disorders. BCIs enable di-
rect communication between the brain and external
devices, offering new possibilities for prosthetics and
communication aids. AI and machine learning are be-
ing used to analyse complex neural data, model brain
functions, and develop new diagnostic and therapeu-
tic tools [15]. These advancements are enhancing our
understanding of the brain and driving innovations in
medical treatments, educational strategies, and tech-
nologies to improve human health and performance
[16].
1.2 Brain-Machine Interfaces (BMIs)
BMIs are systems that enable direct communica-
tion between the brain and external devices. They
have enormous potential in both medical and non-
medical applications, ranging from restoring lost sen-
sory and motor functions to augmenting human ca-
pabilities. BMIs work by detecting neural signals,
typically through electrodes placed on the scalp (non-
invasive) or implanted directly into the brain (invasive),
and translating these signals into commands that can
control external devices such as computers, robotic
limbs, or other assistive technologies. The primary
goal is to restore lost functions in individuals with neu-
rological impairments, such as spinal cord injuries,
stroke, or neurodegenerative diseases like ALS.
In medicine and neurorehabilitation, BMIs are be-
ing developed to help patients regain control over their
movements and improve their quality of life. For in-
stance, BMIs can enable individuals with paralysis to
control prosthetic limbs and wheelchairs, or even com-
municate through computer interfaces. Research has
shown promising results, with some patients achieving
remarkable levels of control and dexterity with their
assistive devices.
Additionally, BMIs have potential applications in
enhancing human cognitive abilities and interactions
with technology. For example, they can be used to
develop advanced communication systems for individ-
uals with severe speech or motor impairments, allow-
ing them to interact with their environment more ef-
fectively. In the field of gaming and virtual reality,
BMIs could offer more immersive and responsive ex-
periences by directly linking a user's brain activity to
the virtual environment.
However, despite their promising potential, BMIs
face several challenges. The accuracy and reliability of
signal detection and interpretation need continuous im-
provement, particularly in non-invasive systems, which
often suffer from lower resolution compared to inva-
sive methods. Invasive BMIs, while offering higher
precision, come with risks associated with surgical im-
plantation and the long-term biocompatibility of the
devices.
1.2.1 Current Developments
Recent advancements in BMIs have led to remark-
able achievements, particularly in assisting individuals
with severe disabilities. Researchers at the University
of Pittsburgh have developed a BMI that allows paral-
ysed individuals to control robotic limbs with their
thoughts [5]. This technology utilises implanted elec-
trodes that record neural signals, which are then de-
coded by a computer to control the movement of a
prosthetic limb.
Figure 1: Illustration of a brain-machine interface
(BMI) system for stroke neurorehabilitation training.
Bio-signals associated with attempted movements of
the paralysed hand and fingers are translated into online
feedback and brain-state-dependent transcranial elec-
tric stimulation to augment neuroplasticity facilitating
motor recovery.
Moreover, non-invasive BMIs are gaining trac-
tion. Electroencephalography (EEG)-based systems,
which record electrical activity from the scalp, have
shown promise in enabling communication for patients
with locked-in syndrome [3]. These systems, although
less precise than invasive BMIs, offer a safer and more
accessible alternative.
1.2.2 Future Prospects
The future of BMIs looks promising with ongo-
ing research focused on improving signal accuracy and
developing more user-friendly interfaces. Advances
in neural recording technologies, such as optogenetics
and high-density electrode arrays, are expected to en-
hance the precision of BMIs. Additionally, integrating
artificial intelligence (AI) for better signal decoding
could significantly improve the performance and us-
ability of these systems.
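As a toy illustration of what AI-based signal decoding involves at its simplest, the sketch below classifies short synthetic EEG epochs into "movement intended" versus "rest" from band-power features; the signals, frequency bands and labels are invented for the example and do not correspond to any BMI system described above.

```python
# Minimal sketch of AI-assisted neural signal decoding: classify 2-second,
# single-channel EEG epochs using band-power features and logistic regression.
# All signals here are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 250                        # sampling rate in Hz
rng = np.random.default_rng(0)

def make_epoch(active):
    """One synthetic epoch; 'active' epochs get an extra 10 Hz oscillation."""
    t = np.arange(2 * fs) / fs
    signal = rng.normal(0, 1, t.size)
    if active:
        signal += 1.5 * np.sin(2 * np.pi * 10 * t)
    return signal

def band_power(epoch, low, high):
    """Average spectral power of the epoch between 'low' and 'high' Hz."""
    freqs, psd = welch(epoch, fs=fs)
    return psd[(freqs >= low) & (freqs <= high)].mean()

labels = rng.integers(0, 2, size=200)           # 1 = movement intended, 0 = rest
epochs = [make_epoch(lab) for lab in labels]
features = np.array([[band_power(e, 8, 12),     # alpha band
                      band_power(e, 13, 30)]    # beta band
                     for e in epochs])

clf = LogisticRegression()
print("cross-validated accuracy:",
      cross_val_score(clf, features, labels, cv=5).mean())
```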
1.3 Understanding
Neurodegenerative Diseases
Neurodegenerative diseases are a diverse group
of disorders characterised by the progressive degener-
ation of neurons in the nervous system. These diseases
affect various aspects of cognitive and motor function,
leading to a gradual decline in quality of life. They can
be broadly categorised based on their specific symp-
toms, underlying mechanisms, and pathological fea-
tures.
Figure 2: The path to cognitive decline in neurode-
generation. Amyloid-beta (Aβ) monomers clump to-
gether to form oligomers of variant structures. Sub-
sequently, the oligomers aggregate to form Aβ fibres,
which misarrange to form Aβ plaques. Plaque forma-
tion induces an inflammatory response which includes
the formation of tau aggregates leading to the con-
version of healthy neurons to diseased neurons. The
presence of more diseased neurons triggers another in-
flammatory response leading to more neuron loss and
a subsequent loss in brain function as well as cognitive
decline.
1.3.1 Common Neurodegenerative Diseases
1.3.1.1 Alzheimer’s Disease
Alzheimer’s Disease is the most common form
of dementia, marked by a gradual decline in memory,
cognition, and behaviour. Patients often begin with
difficulty forming new memories, which progresses
to severe cognitive impairment. The pathology of
Alzheimer’s is characterised by the buildup of amy-
loid plaques and tau tangles in the brain, which disrupt
normal neuronal function.
1.3.1.2 Parkinson’s Disease
Parkinson's Disease primarily affects motor con-
trol, leading to symptoms such as tremors, stiffness,
and difficulty with movement and balance. The dis-
ease is associated with the loss of dopamine-producing
neurons in the substantia nigra, a critical area for coor-
dinating movement. This dopamine deficiency results
in the characteristic motor symptoms of Parkinson’s.
1.3.1.3 Huntington’s Disease
Huntington's Disease is a genetic disorder that
usually manifests in mid-adulthood with symptoms
including involuntary movements (chorea), cognitive
decline, and psychiatric disturbances. The disease is
caused by a genetic mutation leading to the progres-
sive degeneration of neurons in the basal ganglia and
cortex.
1.3.1.4 Amyotrophic Lateral Sclerosis (ALS)
ALS is characterised by the progressive loss of
motor neurons, the nerve cells responsible for control-
ling voluntary muscle movements. This degeneration
leads to muscle weakness and atrophy, eventually pro-
gressing to paralysis. ALS affects both the brain and
spinal cord, impairing the ability to initiate and control
muscle movements. As the disease advances, indi-
viduals experience severe difficulties with movement,
speech, swallowing, and breathing. The loss of motor
neurons disrupts communication between the brain and
muscles, resulting in the progressive decline of motor
function.
1.3.1.5 Multiple Sclerosis (MS)
Multiple Sclerosis is an autoimmune condition
where the immune system attacks the myelin sheath of
nerve fibres, causing inflammation and damage. Symp-
toms vary widely and can include fatigue, difficulty
walking, numbness, and vision problems. The disease
can present with relapsing or progressive patterns.
1.3.2 Key Discoveries
A breakthrough in Alzheimer’s research was the
identification of amyloid-beta plaques and tau tangles
as key pathological features [9]. Recent studies have
expanded our understanding of how these protein ag-
gregates impair neural function and contribute to cog-
nitive decline. Researchers are also investigating the
roles of neuroinflammation and genetic factors in the
progression of the disease [10].
These studies have increasingly emphasised the
central role of amyloid-beta (Aβ) and tau proteins in
Alzheimer’s disease (AD). The disease is marked by
the formation of Aβ-containing plaques and neurofib-
rillary tangles (NFTs) composed of hyperphosphory-
lated tau proteins. These pathological features dis-
rupt hippocampal circuitry, which impairs the ability to
consolidate short-term memories into long-term ones.
AD leads to significant neuronal loss, compromised
synaptic connections, and damage to neurotransmit-
ter systems essential for cognitive functions, including
memory.
As a result, selective memory impairment is often
the first clinical symptom of AD. Additionally, func-
tions dependent on the hippocampus and medial tem-
poral lobes, such as declarative episodic memory, are
frequently affected. Early in the disease, impairments
in executive functions, such as judgement and problem-
solving, also become apparent.
Key discoveries in Parkinson's disease (PD) have
profoundly shaped our understanding and treatment
of the condition. James Parkinson's seminal work
of 1817, An Essay on the Shaking Palsy, provided
the first comprehensive clinical description of the dis-
ease, establishing the foundation for subsequent re-
search [12]. A critical breakthrough came in the
mid-20th century with Arvid Carlsson's discovery
of dopamine's role in motor control, which earned him
a Nobel Prize in 2000. This research clarified that
Parkinson's disease is characterised by the significant
loss of dopaminergic neurons in the substantia nigra, a
brain region crucial for motor function.
The introduction of levodopa (L-DOPA) in the
1960s and 1970s revolutionised the management of PD
by replenishing depleted dopamine levels and alleviat-
ing many motor symptoms. In the 1990s, further break-
throughs identified genetic mutations associated with
familial forms of Parkinson's disease, including those
in the SNCA gene (which encodes alpha-synuclein), as
well as LRRK2 and PARK7. Research in 1997 revealed
that alpha-synuclein forms aggregates known as Lewy
bodies, a hallmark of the disease [14]. Lewy bodies are
abnormal deposits of the protein alpha-synuclein that
build up inside certain neurons in the brain.
Recent research into stem cell therapy holds promise
for replacing damaged neurons and modifying disease
mechanisms. Additionally, there is increasing focus on
non-motor symptoms, such as cognitive decline, mood
disorders, and autonomic dysfunction, leading to more
comprehensive treatment approaches. The discovery
of alpha-synuclein aggregates and their spread across
different brain regions has been pivotal, and ongoing
efforts aim to understand their impact on neuronal
health and explore gene therapy and neuroprotective
strategies to slow disease progression [?].
1.4 Brain Stimulation Methods
Transcranial magnetic stimulation (TMS) and tran-
scranial direct current stimulation (tDCS) are non-
invasive brain stimulation techniques that have shown
promise in modulating cognitive functions. TMS, for
instance, has been used to enhance working memory
performance by stimulating the dorsolateral prefrontal
cortex [6]. These methods offer potential therapeutic
applications for cognitive impairments associated with
psychiatric and neurological disorders.
1.5 Cognitive Enhancement
The pursuit of cognitive enhancement through pharma-
cological and non-pharmacological means is a growing
area of interest. Nootropics, or cognitive enhancers,
are being researched for their potential to improve
memory, attention, and learning [?]. Additionally,
neurofeedback, which involves training individuals to
modulate their brain activity, is being explored as a
tool for enhancing cognitive performance. Technolog-
ical advancements, such as brain-computer interfaces
(BCIs) and virtual reality (VR), are emerging as inno-
vative tools for cognitive enhancement. BCIs enable
direct communication between the brain and external
devices, potentially aiding in the rehabilitation of cog-
nitive functions in individuals with neurological im-
pairments. VR technology is being used to create im-
mersive environments for cognitive training, offering
engaging and effective ways to improve mental skills.
1.6 Conclusion
The intersection of BMIs, neurodegenerative dis-
ease research, and cognitive neuroscience represents
a dynamic and rapidly evolving field. The advance-
ments in these areas deepen our understanding of the
brain and hold immense potential for developing inno-
vative treatments and technologies. As research pro-
gresses, the integration of cutting-edge technologies
and interdisciplinary approaches will likely lead to sig-
nificant breakthroughs, ultimately improving the lives
of individuals affected by neurological conditions and
enhancing human cognitive capabilities.
References
[1] https://www.niehs.nih.gov/research/supported/health/neurodegenerative
[2] Illustration-of-a-brain-machine-interface
[3] Bacher, D., Jarosiewicz, B., Masse, N. Y.,
Simeral, J. D., Newell, K., Oakley, E. M., ...
& Donoghue, J. P. (2015). Neural point-and-
click communication by a person with incom-
plete locked-in syndrome. Neurorehabilitation
and Neural Repair, 29(5), 462-471.
[4] Benabid, A. L., Chabardes, S., Mitrofanis, J., &
Pollak, P. (2009). Deep brain stimulation of the
subthalamic nucleus for the treatment of Parkin-
son's disease. The Lancet Neurology, 8(1), 67-81.
[5] Collinger, J. L., Wodlinger, B., Downey, J. E.,
Wang, W., Tyler-Kabara, E. C., Weber, D. J.,
... & Schwartz, A. B. (2013). High-performance
neuroprosthetic control by an individual with
tetraplegia. The Lancet, 381(9866), 557-564.
[6] Fregni, F., Boggio, P. S., Nitsche, M. A., Rig-
onatti, S. P., & Pascual-Leone, A. (2005). Cog-
nitive effects of repeated sessions of transcranial
direct current stimulation in patients with depres-
sion. Depression and Anxiety, 22(3), 166-170.
[7] Follett, K., Klein, C., & Park, S. (2010). Deep
brain stimulation for Parkinson's disease. New
England Journal of Medicine, 362(27), 2519-
2529.
[8] Gualtieri, C. T., & Morgan, D. W. (2008). The fre-
quency of cognitive impairment in patients with
anxiety, depression, and bipolar disorder: An un-
accounted source of variance in clinical trials.
Journal of Clinical Psychiatry, 69(7), 1122-1130.
[9] Hardy, J. A., & Higgins, G. A. (1992).
Alzheimer’s disease: the amyloid cascade hy-
pothesis. Science, 256(5054), 184-185.
[10] Heneka, M. T., Golenbock, D. T., & Latz, E.
(2015). Innate immunity in Alzheimer’s disease.
Nature Immunology, 16(3), 229-236.
[11] Kolb, B., & Gibb, R. (2011). Brain plasticity and
behaviour in the developing brain. Journal of the
Canadian Academy of Child and Adolescent Psy-
chiatry, 20(4), 265-276.
[12] Parkinson, J. (1817). An Essay on the Shaking
Palsy. Sherwood, Neely, and Jones.
[13] Miller, E. K., & Cohen, J. D. (2001). An integra-
tive theory of prefrontal cortex function. Annual
Review of Neuroscience, 24(1), 167-202.
[14] Spillantini, M. G., Schmidt, M. L., Lee, V. M.
Y., et al. (1997). α-Synuclein in Lewy bodies.
Nature, 388(6645), 839-840.
[15] Sejnowski, T. J. (2020). The unreasonable ef-
fectiveness of deep learning in artificial intelli-
gence. Proceedings of the National Academy of
Sciences, 117(48), 30033-30038.
[16] Wolpaw, J. R., Birbaumer, N., McFarland, D.
J., Pfurtscheller, G., & Vaughan, T. M. (2002).
Brain-computer interfaces for communication
and control. Clinical Neurophysiology, 113(6),
767-791.
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular Bi-
ology to Environmental Sciences, Odonatology, and
Aquatic Biology.
Bead Chip Technology: Simplifying Genetic
Analysis
by Jinsu Ann Mathew
airis4D, Vol.2, No.8, 2024
www.airis4d.com
Genes are the fundamental units of heredity, car-
rying the instructions for the development, function-
ing, growth and reproduction of all living organisms.
Much like words in a sentence, genes are composed of
sequences of nucleotides that form the genetic code.
This code, written in the four-letter alphabet of ade-
nine (A), cytosine (C), guanine (G), and thymine (T),
orchestrates the complex symphony of life.
In genetics, understanding this code is crucial.
Just as a single letter can change a word’s meaning,
small changes in genes can significantly affect an or-
ganism’s traits. These changes can be as small as a
single nucleotide alteration or as large as structural
changes, influencing everything from physical charac-
teristics to disease risk.
DNA microarray technology emerges as a power-
ful tool in deciphering this genetic language. It enables
researchers to read, analyze, and interpret the vast array
of genetic variations with high precision and efficiency.
Imagine having a comprehensive dictionary that not
only defines each word but also reveals its contextual
usage and implications—this is what DNA microarray
offers to genomics.
2.1 What is a DNA Microarray?
A DNA microarray, also known as a DNA chip or
biochip, is a collection of microscopic DNA spots at-
tached to a solid surface. Scientists use DNA microar-
rays to simultaneously measure the expression levels
of many genes or to genotype multiple regions of a genome.
(Image courtesy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4011503/)
Figure 1: Simplified view of a DNA array.
A DNA array is usually used to test a solution
with labeled nucleic acids. When these labeled nucleic
acids (targets) bind to the DNA spots on the array, the
array measures the amounts of each nucleic acid in the
solution. This process helps to determine gene expres-
sion levels. Figure 1 shows a simplified view of a DNA
array.
2.2 Principle of DNA Microarray
The principle of DNA microarray technology is
centered on the concept of hybridization, where com-
plementary strands of DNA bind to each other. This
technology uses a small glass or silicon chip onto
which thousands of specific DNA sequences, known
as probes, are fixed in a grid pattern. Each probe is de-
signed to match a particular DNA sequence of interest,
enabling the detection and analysis of those sequences
in a sample.
The process begins with the preparation and la-
beling of the sample. DNA is extracted from the cells
being studied and labeled with fluorescent dyes. The
labeled DNA sample is then applied to the microarray chip.
(Image courtesy: https://en.wikipedia.org/wiki/DNA_microarray)
Figure 2: Hybridization of the target to the probe.
When the sample is added to the chip, the labeled
DNA sequences will bind, or hybridize, to the comple-
mentary probes on the chip, similar to how a key fits
into a lock (Figure 2).
After hybridization, the chip is washed to remove
any unbound or loosely bound DNA. This ensures that
only the sequences that have specifically hybridized
to the probes remain on the chip. The chip is then
scanned using a laser, which excites the fluorescent
dyes attached to the hybridized DNA. The scanner de-
tects the intensity of the fluorescence at each spot on the
chip, which corresponds to the amount of hybridized
DNA.
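As a rough illustration of how scanned fluorescence is turned into numbers, the short sketch below subtracts an estimated local background from each spot's intensity and log-transforms the result to give a per-probe signal. The intensity values and probe names are invented for this example; real scanner software and analysis suites (and two-colour designs) involve considerably more processing.

# Toy quantification of scanned microarray spots (all values are invented).
# Each spot's fluorescence is background-corrected and log2-transformed,
# giving a rough per-probe signal proportional to the hybridised DNA.
import numpy as np

spot_intensity = np.array([1520.0, 310.0, 8800.0, 95.0])   # mean foreground pixels
local_background = np.array([110.0, 120.0, 105.0, 90.0])   # pixels around each spot
probes = ["geneA", "geneB", "geneC", "geneD"]               # hypothetical probe IDs

corrected = np.clip(spot_intensity - local_background, 1.0, None)  # avoid log of <= 0
log_signal = np.log2(corrected)

for probe, raw, sig in zip(probes, spot_intensity, log_signal):
    print(f"{probe}: raw={raw:8.1f}  log2 signal={sig:5.2f}")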
The data collected from the fluorescent signals
are analyzed to determine the presence and quantity of
specific DNA sequences in the sample. This allows
scientists to measure gene expression levels or identify
genetic variations. The principle of complementary
base pairing, where adenine (A) pairs with thymine (T)
and cytosine (C) pairs with guanine (G), is fundamental
to this process, ensuring that only matching sequences
hybridize effectively.
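The complementary-pairing rule just described is simple enough to express directly in code. The sketch below, using made-up probe and target sequences, builds the reverse complement of a probe and checks whether a labelled target would hybridise to it; real hybridisation chemistry of course also depends on temperature, mismatches, and sequence length.

# Minimal illustration of Watson-Crick pairing as used in hybridisation:
# a target binds a probe if it matches the probe's reverse complement.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIR[base] for base in reversed(seq))

probe = "ATGCGT"                       # hypothetical spot on the chip
target = reverse_complement(probe)     # a perfectly complementary labelled target

print(target)                                   # ACGCAT
print(target == reverse_complement(probe))      # True  -> would hybridise
print("AAAAAA" == reverse_complement(probe))    # False -> washed away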
2.3 Types of Microarray
2.3.1 Spotted Arrays on Glass
Spotted arrays on glass are a fundamental type
of DNA microarray technology. In these arrays, pre-
synthesized DNA sequences, known as probes, are
deposited onto a solid surface, typically glass slides.
This process involves the use of small robotic machines
called microarray spotters, which precisely place tiny
droplets of DNA solution onto specific locations on the
glass slide. Each spot contains multiple copies of a specific
DNA sequence, creating an ordered array of DNA
probes ready to interact with sample DNA (Figure 3).
(Image courtesy: https://www.researchgate.net/figure/Simplified-pictorial-representation-of-a-spotted-DNA-microarray-Unique-nucleic-acid_fig2_248555514)
Figure 3: Simplified pictorial representation of a spotted DNA microarray.
One of the key advantages of spotted arrays is
their relatively low cost and the flexibility they offer in
customizing probes. Researchers can choose or create
probes specific to their studies, making this technol-
ogy highly adaptable. However, spotted arrays also
have some disadvantages. There can be variability in
spot size and DNA concentration, which may affect re-
producibility and the quality of the data obtained. This
variability can introduce noise and reduce the accuracy
of the measurements, necessitating careful quality con-
trol and data normalization procedures.
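One common normalisation step is simply to rescale each array so that its median spot intensity matches a common reference, which removes part of the array-to-array variability introduced by spotting and dye incorporation. The sketch below shows such a median normalisation on invented log-intensity data; production pipelines typically use more sophisticated methods, such as quantile or loess normalisation.

# Toy median normalisation across spotted arrays (log2 intensities are invented).
# Each array is shifted so its median matches the overall median, reducing
# array-to-array differences caused by spotting and labelling variability.
import numpy as np

# rows = arrays (hybridisations), columns = probes
log_intensity = np.array([
    [8.1, 9.4, 11.2, 7.8],
    [9.0, 10.1, 12.0, 8.6],
    [7.5, 8.9, 10.8, 7.1],
])

array_medians = np.median(log_intensity, axis=1, keepdims=True)
grand_median = np.median(log_intensity)
normalised = log_intensity - array_medians + grand_median

print("medians before:", np.median(log_intensity, axis=1))
print("medians after: ", np.median(normalised, axis=1))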
2.3.2 Self-Assembled Arrays
Self-assembled arrays represent an innovative ap-
proach to DNA microarray technology where DNA-
coated beads spontaneously organize themselves into
a predefined pattern on a substrate. This type of ar-
ray leverages the unique properties of microbeads and
their ability to self-assemble, providing a high-density
platform for genetic analysis.
In self-assembled arrays, each bead is coated with
a specific DNA sequence, known as a probe. These
beads are then randomly distributed onto a patterned
substrate, which may contain wells or other features
that facilitate the organization of the beads into a pre-
cise arrangement (Figure 4). The pattern of the
substrate is crucial as it allows the identification and lo-
calization of each bead type, ensuring that each bead’s
position correlates with its specific DNA sequence.
One of the main advantages of self-assembled ar-
rays is their ability to achieve high-density configura-
tions, allowing for a large number of genetic sequences
to be analyzed simultaneously.
(Image courtesy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4011503/)
Figure 4: Self-assembled arrays.
This density is facilitated
by the small size of the beads and the precision
of the self-assembly process. Additionally, the flex-
ibility in bead design enables customization for dif-
ferent types of genetic studies, enhancing the array’s
versatility. However, there are challenges associated
with this technology. The manufacturing process can
be complex, particularly in ensuring the accurate self-
assembly and identification of beads. Potential issues
include bead misplacement or difficulties in identifying
specific beads within the array.
2.3.3 In-Situ Synthesized Arrays
In-situ synthesized arrays represent a cutting-edge
technology in the field of DNA microarrays, where
DNA probes are directly synthesized on the array sur-
face. Unlike traditional methods that require pre-synthesized
DNA to be spotted onto the array, in-situ synthesis con-
structs the DNA sequences base by base directly on the
substrate. This method offers unparalleled precision
and uniformity, making it ideal for high-density ge-
netic analysis.
The synthesis process typically involves techniques
such as photolithography or inkjet printing. Photolithog-
raphy uses light to selectively activate specific areas on
the array surface, allowing the addition of nucleotide
bases in a controlled manner. This step-by-step ad-
dition of nucleotides builds up the desired DNA se-
quences directly on the array. Inkjet printing, on the
other hand, employs a printing mechanism to deposit
nucleotide solutions onto the array surface, creating
DNA sequences in a similar incremental fashion (Figure 5).
In-situ synthesized arrays are particularly advan-
tageous for applications requiring high probe density,
such as gene expression profiling, genotyping, and single
nucleotide polymorphism (SNP) detection.
(Image courtesy: https://www.sciencedirect.com/science/article/pii/S266654252300084X)
Figure 5: Schematic diagram of inkjet printing-based high-throughput DNA synthesis.
The
ability to create millions of probes on a single array
allows for extensive genetic analysis with high reso-
lution. The uniformity and precision of in-situ syn-
thesis ensure that each probe is consistent in length
and sequence, reducing variability and improving data
quality.
One of the significant benefits of this technology
is the customization it offers. Researchers can design
arrays with specific probes tailored to their experimen-
tal needs, enabling targeted analysis of particular genes
or genetic regions. This customization, combined with
the high density of probes, makes in-situ synthesized
arrays a versatile tool for various genetic studies.
2.4 Conclusion
DNA microarray technology has revolutionized
the field of genomics by enabling the simultaneous
analysis of thousands of genes. Through the principles
of hybridization and the innovative designs of spotted
arrays, self-assembled arrays, and in-situ synthesized
arrays, researchers can explore gene expression, detect
genetic variations, and understand complex biological
processes with unprecedented precision and efficiency.
As this technology continues to advance, it promises
to further deepen our understanding of genetics and
propel the development of personalized medicine, of-
fering new insights and solutions to some of the most
pressing challenges in healthcare and biology.
References
Introduction to DNA Microarrays
DNA microarrays: Types, Applications and their
future
Glass slides to DNA microarrays
Inkjet printing-based high-throughput DNA syn-
thesis
DNA Microarray- Definition, Principle, Proce-
dure, Types
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical Infor-
matics. Her interests include basic scientific research
in computational linguistics, practical appli-
cations of human language technology, and interdis-
ciplinary work in computational physics.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology, creating new knowledge and skills, has become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, or knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated in a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by solar energy and has a rainwater-harvesting facility that can provide sufficient
water supply for up to three months of drought. The computing facility on the campus is accessible from anywhere
through dedicated optical fibre internet connectivity, 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that
the site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants
that can feed birds and maintains water bodies to help wildlife survive the drought.