Cover page
M87's Central Black Hole in Polarized Light (Image source: https://apod.nasa.gov/apod/ap210331.html)
M87 has a supermassive black hole at its centre that accretes matter; the matter spins around at relativistic velocities before falling into the black hole. The Event Horizon Telescope (EHT) is a virtual radio telescope with an effective size equivalent to that of the Earth. This allows astronomers to see the finer details as well as the bulk behaviour of the objects they observe. To know more, see https://eventhorizontelescope.org/blog/wobbling-shadow-m87-black-hole
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamootil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Airis4D’s latest issue features a diverse set of topics that cater to readers interested in machine learning,
astronomy, biosciences, and climate. The articles provide valuable insights into the parameters and hyperpa-
rameters of machine learning algorithms, language models, black holes, radio waves, stem cells, and cloud
formation.
The article on machine learning emphasizes the importance of selecting the optimal algorithm, using the
right amount and quality of data, and evaluating the model’s performance using various measures. Parameters
and hyperparameters are two critical variables in machine learning models that can significantly affect their
performance and generalization ability. The text also provides examples of hyperparameters in different machine
learning models and techniques for hyperparameter tuning.
The article on language models delves into the two types of language models and the statistical techniques they use to predict the likelihood of the next word or phrase based on context and previous words. The text highlights the importance of choosing an appropriate value of n to capture more context without increasing sparsity, and techniques like smoothing and backoff to address the sparsity issue.
The articles on astronomy and biosciences provide valuable insights into black holes, radio waves, and the properties of stem cells and their applications in healthcare and regenerative medicine. The article on cloud formation discusses the different types of clouds and their impact on the Earth's climate.
The issue’s final section offers a fiction piece called Cranky the Scientist, providing a unique twist to the
conventional format of academic journals.
Overall, Airis4D's latest issue offers informative and engaging articles that cater to a diverse set of readers interested in various fields, while keeping the content accessible. In conclusion, the Airis4D Journal provides valuable insights into many scientific fields, making it a great source of information for researchers, students, and enthusiasts alike, and its articles offer a good starting point for exploring these topics further.
An End Note: ChatGPT has been widely accepted today as one of the most advanced and capable language
models for natural language processing. It has the ability to understand human language, generate responses, and
learn from its interactions with users. Its advanced AI capabilities have made it a valuable tool for a wide range
of applications, including chatbots, virtual assistants, language translation, and more. Its compatibility with various programming languages and frameworks, along with its easy access through an API, has made it usable by developers and businesses of all sizes, leading to its widespread adoption. ChatGPT's acceptance and adoption
are expected to continue to grow as AI technology advances and more use cases for language processing emerge.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Nuts and Bolts of Machine Learning - Part 2 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Parameters and Hyperparameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Language Models: Enabling Machines to Understand and Generate Human Language 7
2.1 What is a language model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Statistical Language Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Neural Language Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
II Astronomy and Astrophysics 12
1 The Mass of Black Holes 13
1.1 A Black Hole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.2 The Properties of a Black Hole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 The Black Hole Zoo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.4 Supermassive Black Holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5 The Supermassive Object at the Centre of our Galaxy . . . . . . . . . . . . . . . . . . . . . . 19
1.6 The Orbit of Star S2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.7 Historical Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 Is the Compact Object a Supermassive Black Hole? . . . . . . . . . . . . . . . . . . . . . . . 25
2 The Little Green Men in Space 26
2.1 What are radio waves? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Why did radio waves have to be discovered? . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.3 What is electromagnetic radiation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4 What are the mechanisms that create EM waves? . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5 A radio universe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.6 The story of the Little Green Men . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.7 Why do we need radio astronomy? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.8 The challenges in radio astronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3 Pulsating Variable Stars 32
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2 Why do they pulsate? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3 Different types of pulsating variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
III Biosciences 40
1 The Avatars of Stem Cells and its applications in healthcare 41
1.1 What is the procedure behind stem cell therapy? . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.2 Applications of stem cells in modern research . . . . . . . . . . . . . . . . . . . . . . . . . . 45
1.3 Promises of Stem Cells In Regenerative Medicine . . . . . . . . . . . . . . . . . . . . . . . . 47
IV General 49
1 Beyond the Fluffy: The Enchanting World of Clouds 50
1.1 Cloud Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.2 Types of Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.3 What Causes Rain? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
1.4 How Clouds Affect the Earths Climate? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
V Fiction 54
1 Cranky the Scientist 55
Part I
Artificial Intelligence and Machine Learning
Nuts and Bolts of Machine Learning - Part 2
by Blesson George
airis4D, Vol.1, No.4, 2023
www.airis4d.com
1.1 Introduction
Machine learning is defined as the use and development of computer systems that can learn and adapt without being explicitly programmed. Although ML learns without explicit instructions, it is not true that the system will generate a model without human intervention. A common misunderstanding is that, given the required data, a system will automatically develop a model. In this article, we describe the role of users/humans in model construction.
The initial task of the user is to determine the optimal algorithm for the programme. Consider a classification job as an illustration. There are a variety of methods, such as SVM, logistic regression, decision tree, random forest, and naive Bayes, for performing the task. The choice of algorithm for a specific problem is left to the discretion of the user, who must make a decision based on a variety of considerations. For instance, if the user has to choose between speed and accuracy, there are several options. Kernel SVM, random forest, and neural networks are methods that provide greater accuracy than other methods; however, the user may select linear SVM, logistic regression, or the naive Bayes approach for speed. Again, the user must evaluate whether or not explainability is needed. Logistic regression and decision trees are approaches with superior explainability compared to others.
People with domain experience can give useful insights on the sorts of data and patterns that are likely to
be there. This information can assist in narrowing down the list of candidate algorithms to those that are most
suited to solving the problem. Furthermore, users may evaluate the effectiveness of various machine learning
(ML) algorithms using measures such as accuracy, precision, recall, and F1 score. This evaluation can assist in
determining which algorithm is the most effective solution for the given problem.
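As a minimal sketch of how such measures are computed from a confusion matrix (the predictions below are invented for a hypothetical binary classifier):

```python
# Toy ground truth and predictions from a hypothetical binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion-matrix counts: true/false positives and negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)                   # 0.75
precision = tp / (tp + fp)                           # 0.75
recall = tp / (tp + fn)                              # 0.75
f1 = 2 * precision * recall / (precision + recall)   # 0.75
```

In practice a library such as scikit-learn provides these metrics directly, but the definitions above are what it computes.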
User participation in ML model creation is not limited to algorithm selection. The user is responsible for ensuring a sufficient amount and quality of data for training. Users may test and assess machine learning models to offer feedback on their performance. This input may be utilised to enhance the model and increase its utility for the intended application. Users can discover possible use cases for machine learning (ML) models and offer comments on how the models might be enhanced to better meet their needs. Users may also assist in identifying the most essential features and parameters to include in machine learning models. This can help guarantee that the models are centred on the most pertinent and valuable data.
1.2 Parameters and Hyperparameters
Assume that we have identified the optimal method for our problem. The work is not yet complete. The
user must provide some parameters of the algorithm. Before proceeding, let’s define the two sorts of variables
in the ML algorithm: parameters and hyperparameters.
1.2.1 Parameters
The internal variables of a model that are learned from the training data are referred to as parameters in machine learning. These parameters are applied to new data to generate predictions. The objective of machine learning is to discover the model parameters that reliably predict the output given the input.
For instance, the parameters of a linear regression model are the slope and intercept of the line that best
fits the data. The parameters of a neural network are the weights and biases of the network’s nodes. During
training, the algorithm modifies the values of these parameters to reduce the disparity between the expected and
actual output.
Once the model has been trained, the learnt parameters are applied to new data to generate predictions.
Notably, the quality of the predictions is dependent on the quality and amount of the training data, as well as
the model’s complexity and its ability to generalise to new data.
1.2.2 Hyperparameters
In machine learning, hyperparameters are parameters that cannot be directly learned from the training data and are instead defined before the training process. These settings govern the learning algorithm's behaviour and dictate how the model is trained. As they influence the model's bias and variance, hyperparameters play a significant role in determining a model's performance.
The term "hyper" is included in the name because these parameters control the training process and determine the model's parameters. The values of the hyperparameters are fixed prior to the start of training and are not altered throughout the training process.
1.2.3 A few illustrations of ML parameters
1.2.3.1 Linear regression
Imagine a linear regression model that predicts an output based on an input value. The machine learning model is represented by the equation of a line, given as

h_θ(x) = θ_0 + θ_1 x    (1.1)

θ_0 and θ_1 represent the model's parameters. The linear regression model does not have explicit hyperparameters. However, linear models like Ridge and Lasso have the loss function, dropout rate, learning rate, etc. as hyperparameters. For nonlinear regressions like SGD regression, the degree of the polynomial features can be a user-specified hyperparameter.
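A quick way to see the parameters of equation (1.1) being learned is a closed-form ordinary-least-squares fit; the data below are invented for illustration, and a library such as scikit-learn would do the same job:

```python
# Fit y = theta0 + theta1 * x by ordinary least squares (closed form).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of (x, y) divided by variance of x.
theta1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
# Intercept: chosen so the line passes through the mean point.
theta0 = mean_y - theta1 * mean_x

print(theta0, theta1)  # ~0.05 and ~1.99: the learned parameters
```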
Figure 1: Linear regression fit of city population vs. profit; the learned values are θ_0 = -3.62 and θ_1 = 1.17. Image courtesy: https://gtraskas.github.io/post/ex1/

1.2.3.2 Support Vector Machine (SVM)
A Support Vector Machine is a supervised machine learning approach that is applicable to both classification and regression tasks. It uses a technique known as the kernel trick to transform the data and, based on these transformations, identifies the optimal boundary between the possible outputs: the separating hyperplane that maximises the margin on the training data. SVM parameters include the support vector indices, the intercept, and the number of support vectors for each class, among others. User-supplied hyperparameters include the kind of kernel used to generate the hyperplane, the regularisation parameter, the degree of the polynomial kernel function, the kernel coefficient, etc.
Figure 2: Effect of the regularisation parameter on the choice of support vectors.
Image courtesy: https://www.codingninjas.com/codestudio/library/svm-hyperparameter-tuning-using-gridsearchcv
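As a small illustration of one such user-supplied hyperparameter, the kernel coefficient gamma of the RBF kernel controls how quickly similarity decays with distance. This is a sketch of the kernel function only, not a full SVM:

```python
import math

def rbf_kernel(x, z, gamma):
    """RBF (Gaussian) kernel; gamma is a user-chosen hyperparameter."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

a, b = [0.0, 0.0], [1.0, 1.0]
# A larger gamma makes the kernel more local: similarity drops faster
# with distance, which typically changes which points end up as
# support vectors.
print(rbf_kernel(a, b, gamma=0.1))  # ~0.82
print(rbf_kernel(a, b, gamma=10))   # close to 0
```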
1.2.3.3 Convolutional Neural Networks
Convolutional Neural Networks (CNNs) are a specialized type of neural network that is commonly used for
image classification and recognition tasks. Like all neural networks, CNNs have parameters and hyperparameters
that are critical to their performance. Optimizing Convolutional Neural Network (CNN) hyperparameters is a
difficult task for many academics and practitioners. To obtain hyperparameters with improved performance,
specialists must manually configure a set of hyperparameter options.
The parameters optimized during the training process include filters or kernels (the feature extractors in CNNs), weights (parameters that determine the strength of the connections between neurons in the network), and biases (parameters added to the output of each neuron).
Hyperparameters are the parameters determined by the user. In the case of a CNN there are two types:
Hyperparameters that determine the network structure: the size of the filter, the kernel type (e.g., edge detection, sharpen), the stride at which the kernel passes over the input, padding, the number of layers between the input and output layers, and the type of activation function.
Hyperparameters that determine how the network is trained: the learning rate, momentum, the number of epochs, the batch size, etc.
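One concrete way to see the parameter/hyperparameter split: the filter size and the number of filters are hyperparameters chosen by the user, and together they determine how many trainable parameters a convolutional layer has. A small sketch:

```python
def conv2d_param_count(kernel_h, kernel_w, in_channels, out_channels):
    # Each filter has kernel_h * kernel_w * in_channels weights,
    # plus one bias per filter.
    weights = kernel_h * kernel_w * in_channels * out_channels
    biases = out_channels
    return weights + biases

# Hyperparameters (fixed by the user before training):
kernel_size, filters = (3, 3), 32
# Resulting number of trainable parameters for a 3-channel (RGB) input:
n_params = conv2d_param_count(kernel_size[0], kernel_size[1], 3, filters)
print(n_params)  # 896
```

This matches what a deep learning framework would report for such a layer: the user picks the hyperparameters, and the 896 weights and biases are what training then optimizes.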
The parameters and hyperparameters of a few networks have been presented in detail. The user must configure such settings for each network, and doing so requires domain expertise. Hyperparameter tuning is a method for determining appropriate values of the hyperparameters.
1.2.4 Hyperparameter Tuning
Hyperparameter tuning is the process of selecting the optimal set of hyperparameters for a machine learning
algorithm. Hyperparameters are parameters that are not learned by the algorithm itself, but set by the user
before the training process begins. Examples of hyperparameters include the learning rate, batch size, number
of hidden layers, and number of neurons in each hidden layer of a neural network.
The performance of a machine learning model is highly dependent on the choice of hyperparameters.
Therefore, hyperparameter tuning is an important step in the machine learning workflow. The goal of hyperpa-
rameter tuning is to find the set of hyperparameters that results in the best performance of the model on a given
task.
There are several techniques that can be used for hyperparameter tuning, including grid search, random
search, and Bayesian optimization. Grid search involves trying every combination of hyperparameters in a
pre-defined search space. Random search involves sampling random combinations of hyperparameters from
the search space. Bayesian optimization involves modeling the performance of the model as a function of the
hyperparameters and using this model to guide the search for the optimal set of hyperparameters.
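Grid search, the simplest of these, can be sketched in a few lines; the search space and scoring function below are hypothetical stand-ins for real training and validation:

```python
from itertools import product

# Hypothetical search space for a neural network.
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

def evaluate(learning_rate, batch_size):
    # Stand-in for training plus validation; a real score would come
    # from fitting the model with these hyperparameters. This toy
    # objective peaks at learning_rate=0.01, batch_size=32.
    return -(learning_rate - 0.01) ** 2 - (batch_size - 32) ** 2 / 1e4

best_score, best_params = float("-inf"), None
# Try every combination in the pre-defined search space.
for values in product(*search_space.values()):
    params = dict(zip(search_space.keys(), values))
    score = evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # {'learning_rate': 0.01, 'batch_size': 32}
```

Libraries such as scikit-learn package this loop (with cross-validation) as `GridSearchCV`; random search and Bayesian optimization replace the exhaustive loop with sampling or a surrogate model.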
Hyperparameter tuning can be time-consuming and computationally expensive, but it is essential for
achieving the best performance of a machine learning model.
1.3 Conclusion
Understanding parameters and hyperparameters is vital for the development of effective machine learning models. Parameters are internal variables learned by the model during training, whereas hyperparameters are settings that affect the learning algorithm's behaviour. Each of these variables has a substantial effect on how well the model performs, and making the right choices may result in a model that is more accurate, efficient, and easily interpretable, as well as one that generalizes well to new data.
References
1. Hyperparameter tuning for machine learning models
2. Simple Linear Regression Parameter Estimates Explained.
3. InDepth: Parameter tuning for Decision Tree
4. Parameters and Hyperparameters in Machine Learning and Deep Learning
About the Author
Blesson George is currently working as Assistant Professor of Physics at CMS College Kottayam,
Kerala. His research interests include developing machine learning algorithms and application of machine
learning techniques in protein studies.
Language Models: Enabling Machines to
Understand and Generate Human Language
by Jinsu Ann Mathew
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Have you ever noticed the 'text prediction' feature on mobile phones, which suggests the next word or phrase that the user might type based on the context and the words they have already entered (Figure 1)? This is one of the many use cases of language models in Natural Language Processing (NLP). But what is a language model? To answer that question, we first need to clarify the concept of a model and its usage in machine learning.
Models are used to represent complex and confusing real-world systems in a simplified way within a
specific field of interest, also known as a domain. For example, consider financial models. They are used to
represent complex financial systems and forecast future financial outcomes based on historical data and current
trends. These models may be used for a variety of purposes, such as predicting stock prices, estimating the
future value of an investment, or analyzing the financial impact of a business decision.
Machine learning models are much the same. They can be thought of as programs that are trained to detect patterns in data and make predictions; essentially, they are algorithms that can recognize patterns or make predictions on previously unseen datasets. Unlike rule-based programs, these models do not require explicit coding and can evolve over time as new data is incorporated into the system. Initially, these models are trained
Figure 1: Example suggestions provided by the 'text prediction' feature for the typed query prefix "Th". (Image courtesy: https://www.samsung.com/ie/support/mobile-devices/how-can-i-personalise-and-turn-predictive-text-on-and-off-on-my-samsung-galaxy-device/)
on a specific set of data and provided with an algorithm to analyze and extract patterns from the input data.
Once the models have been trained, they can then be used to predict outcomes on new, unseen datasets.
2.1 What is a language model
A language model is a probability distribution over words or word sequences. It is a type of statistical model designed to analyze and understand the patterns of human language. It helps to predict the likelihood of the next word or sequence of words in a given sentence, based on the context and previous words. Generally, a set of example sentences serves as input to the language model, which generates a probability distribution over sequences of words.
The modeling of language through probabilistic approaches can take various forms depending on the intended purpose of the language model. From a technical perspective, the types differ in the quantity of text data analyzed and the mathematical techniques employed for analysis. For example, a language model designed to provide autocomplete suggestions for an email client may use different mathematical techniques and analyze text data differently than a language model designed for sentiment analysis of social media posts. The former may focus on predicting the next word in a sentence based on the user's writing history, while the latter may be more concerned with identifying the emotional tone and sentiment expressed. There are primarily two types of language models:
Statistical language model
Neural language model
2.2 Statistical Language Model
Statistical language modeling (SLM) aims to capture the underlying patterns and structure of natural
language in order to enhance the performance of various language-based applications. SLM relies on established
statistical techniques such as N-grams, Hidden Markov Models (HMM), and linguistic rules to learn the
probability distribution of words in textual data.
One of the most important features of statistical language models is their ability to predict the next word
in a sequence based on the words that came before it. This is done by using the preceding words as context
to compute the probability of each word appearing in the sequence. To accomplish this, statistical language
models often represent each word in the lexicon as a unique numerical value. These values are derived from the
training data and capture the underlying patterns and relationships between words. In this way, SLM enables
language-based applications to accurately predict and generate text based on the patterns and relationships found
in the training data.
Statistical Language Models are further classified into:
2.2.1 N-Gram model
A relatively simple type of language model is the N-gram model. The n-gram model calculates the
probability of a given sequence of words by estimating the probability of each word based on its previous n-1
words. For example, in a bigram model (n=2), the probability of a word depends only on the preceding word.
In a trigram model (n=3), the probability of a word depends on the two preceding words (Figure 2).
The choice of n in the n-gram model depends on the task and the size of the corpus. A larger value of n captures more context and leads to a more accurate model, but it also increases the sparsity of the data, making it more difficult to estimate probabilities. Conversely, a smaller value of n leads to a simpler model with fewer parameters, but it may not capture enough context to be useful for some tasks.

Figure 2: Example of unigram, bigram and trigram. (Image courtesy: https://tejinderpal-singh17.medium.com/language-models-a-brief-overview-91631f8f1e13)

Figure 3: Bidirectional language model
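A bigram (n=2) version of this counting scheme fits in a few lines; the corpus below is a toy example:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams (adjacent word pairs) and unigrams in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev_word, word):
    # P(word | prev_word) = count(prev_word, word) / count(prev_word)
    return bigrams[(prev_word, word)] / unigrams[prev_word]

print(bigram_prob("the", "cat"))  # 2/3: "the" occurs 3 times, followed by "cat" twice
```

The sparsity problem is visible even here: any pair never seen in the corpus gets probability zero, which is what smoothing and backoff techniques are designed to fix.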
2.2.2 Bidirectional model
A bidirectional language model is a type of language model that takes into account both the left and right
context of a word or sentence when predicting the next word in a sequence (Figure 3).
In a traditional language model, the probability of the next word depends only on the preceding words in
the sequence. In a bidirectional language model, however, the probability of the next word depends on both the
preceding words and the following words. This allows the model to better capture the context and meaning of
the text, as it can take into account both the preceding and following words to make a prediction.
One advantage of bidirectional language models is that they can capture long-range dependencies in the
text, as they consider both the preceding and following words. However, they are typically more computationally
expensive than traditional language models, as they require processing the text in both directions.
2.2.3 Exponential model
The exponential model is a type of statistical language model that uses the principle of maximum entropy to estimate the probability distribution of words in a language; it is also called the maximum entropy model. The principle of maximum entropy is a statistical method used to make predictions based on incomplete information.
The basic idea behind this principle is to choose the probability distribution that maximizes the entropy,
subject to constraints that are known or assumed. In the case of a language model, the constraints are the
observed frequencies of words in a training corpus, and the goal is to estimate the probability distribution of
words that best fits these constraints. The maximum entropy language model achieves this by choosing the
probability distribution that maximizes the entropy subject to the observed frequencies of words.
One advantage of the maximum entropy language model is that it can handle a wide range of features and
constraints, such as n-gram frequencies, syntactic structures, and semantic information. This flexibility makes
it useful in many natural language processing tasks, such as machine translation, speech recognition, and text
classification.
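The entropy-maximising intuition is easy to check numerically. This small sketch (not a full maximum entropy model) shows that, absent constraints beyond normalisation, a uniform distribution has higher entropy than a skewed one:

```python
import math

def entropy(p):
    """Shannon entropy of a probability distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# With no constraints other than summing to 1, the uniform
# distribution over four outcomes maximises the entropy.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform))  # log(4) ≈ 1.386
print(entropy(skewed))   # ≈ 0.940, lower: it commits to more structure
```

A maximum entropy language model applies the same idea with extra constraints (e.g. matching observed n-gram frequencies), choosing the least-committal distribution consistent with them.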
2.3 Neural Language Model
A neural network language model (NNLM) is a language model that uses neural networks to learn the probability distribution of words in a language. It is trained on large amounts of text data to predict the probability distribution of the next word in a sequence.
The basic idea behind the neural network language model is to use a neural network to map a sequence of
words to a probability distribution over the next word in the sequence. The neural network typically consists of
an input layer that receives the words of the sequence, one-hot encoded or as word embeddings, and one or more
hidden layers that perform nonlinear transformations on the input. The output layer produces the probability
distribution over the vocabulary of possible next words.
During training, the neural network is optimized to maximize the probability of the next word in the
sequence given the previous words. This is typically done using backpropagation and gradient descent to adjust
the weights of the neural network. The trained neural network can then be used to generate new text or to predict
the next word in a sequence.
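The mapping from a context word to a probability distribution over next words can be sketched with untrained, randomly initialised weights; a real model would learn `emb` and `out` by backpropagation, and would use a longer context:

```python
import math
import random

random.seed(0)
vocab = ["the", "cat", "sat", "mat"]
dim = 4

# Toy "network": a random embedding vector per word and a random
# output weight vector per word. These are untrained, for illustration.
emb = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}
out = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}

def softmax(scores):
    # Numerically stable softmax: normalise scores into probabilities.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word_distribution(context_word):
    # Map the context word to a hidden vector, score every vocabulary
    # word, and normalise the scores into a probability distribution.
    h = emb[context_word]
    logits = [sum(hi * wi for hi, wi in zip(h, out[w])) for w in vocab]
    return dict(zip(vocab, softmax(logits)))

probs = next_word_distribution("the")
# probs is a valid probability distribution over possible next words.
```

Training adjusts the embedding and output weights so that the probability assigned to the actual next word in the training text is maximised.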
One advantage of the neural network language model is that it can capture complex relationships between words, such as syntactic and semantic dependencies, and can generalize to unseen sequences of words. It has been shown to outperform traditional n-gram language models in many natural language processing tasks, such as language modeling, machine translation, and text generation.
Summary
In conclusion, a language model is a fundamental concept in natural language processing that enables
machines to understand and generate human language. It is a statistical model that estimates the probability
distribution of words in a language, and it can be used for a variety of tasks, such as speech recognition, machine
translation, and text generation. N-gram language models are the simplest and most widely used type of language
models, which estimate the probability of a word given its preceding n-1 words. However, they suffer from
the sparsity problem when dealing with large vocabularies, and they do not capture long-range dependencies
between words. To address these limitations, more advanced language models have been developed, such as the
maximum entropy language model and the neural network language model. The maximum entropy language
model uses the principle of maximum entropy to estimate the probability distribution of words, while the neural
network language model uses neural networks to learn the probability distribution of words. Neural network language models have been shown to outperform traditional n-gram language models in many natural language processing tasks, due to their ability to capture complex relationships between words and generalize to unseen
sequences of words. Overall, language models play a critical role in natural language processing, and their
development and refinement will continue to improve the ability of machines to understand and generate human
language.
References
Machine Learning Models, Javatpoint
What Is a Language Model?, Tuana Çelik, deepset, July 2022
Building Language Models in NLP, Koushiki Dasgupta Chaudhuri, Analytics Vidhya, January 2022
A Beginner's Guide to Language Models, Mór Kapronczay, builtin, December 2022
Language Models - A Brief Overview, Tejinderpal Singh, Medium, June 2021
Neural net language models, Yoshua Bengio, Scholarpedia
What are Language Models in NLP?, Daffodil, July 2020
Understanding Language Models in NLP, Jammie Sandy, Medium, February 2023
Language Models in AI, Dennis Ash, Medium, February 2021
About the Author
Jinsu Ann Mathew is a research scholar in Natural Language Processing and Chemical Informatics.
Her interests include applying basic scientific research on computational linguistics, practical applications of
human language technology, and interdisciplinary work in computational physics.
Part II
Astronomy and Astrophysics
The Mass of Black Holes
by Ajit Kembhavi
airis4D, Vol.1, No.4, 2023
www.airis4d.com
1.1 A Black Hole
The term Black Hole is now well established in the public mind. A black hole is known as an object from which no light escapes, making it all black, and therefore invisible. The real nature of a black hole is rather more bizarre than this simple understanding of it. It is an object which has mass, which can be very large, and yet it has zero size. Since the mass is packed into zero volume, the density of the object is infinitely large, which is not usually acceptable as a possible physical situation. And the infinite density is not the only problem; there is an even greater one.
In Newton's theory of gravity there can in principle be objects from which light cannot escape. But these objects inevitably involve very strong gravitational fields, which can be correctly described not by Newton's theory but by Albert Einstein's general theory of relativity. In Einstein's theory, the force of gravity is related to the structure of space-time, which consists of the three dimensions of space and time combined into a single entity. Unlike the space and time of our usual experience, this space-time is dynamic in nature, which means that its properties are influenced by the presence of matter and energy, in a manner which is determined by Einstein's equations of gravitation. The greater the mass of an object, the greater the curvature it produces in space-time, which manifests itself as a stronger gravitational force. Because a black hole has non-zero mass but zero size and therefore infinitely large density, the curvature it produces is infinitely large too. This leads to a complete breakdown of the space-time structure as we understand it. At the black hole, space and time cease to have the meaning we intuitively associate with them. And yet black holes are real objects whose behaviour is predictable and observable using current telescopes and instruments.
1.2 The Properties of a Black Hole
A black hole is a very pristine object with just two properties, mass and spin. The spin is similar to the
rotation of the Earth round its own axis, and provides angular momentum to the black hole. Given the extreme
conditions, the spin produces interesting effects not seen around normal rotating bodies. Theoretically, a black
hole can also have electric charge, but that is not expected in real black holes because the matter from which a
black hole is generally formed has zero total electric charge.
A black hole with zero spin is the simplest as it has just one property, mass. The mathematical form for
such a black hole was discovered by Karl Schwarzschild in 1916, just a year after Einstein announced his general
theory of relativity in 1915. But the name black hole was first used in the 1960s, and was made famous by
the great physicist John Wheeler. The Schwarzschild black hole has an imaginary spherical surface associated
with it known as the event horizon. This has a radius which is proportional to the mass and is known as the
Schwarzschild radius R_S:

R_S = 2GM/c².

In the equation, M is the mass of the black hole, G is Newton’s constant of gravitation and c is the speed of
light. For a black hole of the mass of the Sun, the Schwarzschild radius is about 3 km, while for a supermassive
black hole with a mass 10^8 times the mass of the Sun, the radius would be about 300 million km. A black hole
is shown schematically in Figure 1.
Figure 1: A black hole with mass, but no spin. The spherical event horizon, which is not a real surface, is
indicated.
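The Schwarzschild radius formula R_S = 2GM/c² can be checked numerically. The following is a minimal sketch in Python, using rounded values of the physical constants:

```python
# Quick numerical check of the Schwarzschild radius R_S = 2GM/c^2.
G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg

def schwarzschild_radius_km(mass_kg):
    """Return the Schwarzschild radius in kilometres."""
    return 2 * G * mass_kg / c**2 / 1000.0

print(schwarzschild_radius_km(M_sun))        # about 3 km for one Solar mass
print(schwarzschild_radius_km(1e8 * M_sun))  # about 3e8 km for 10^8 Solar masses
```

Because R_S is proportional to M, scaling the mass by 10^8 simply scales the radius by the same factor.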
No particles, light rays or signals can emerge to the outside world from within the event horizon. Any
matter or light ray or radiation of any kind entering the event horizon is lost forever. The name black hole is
thus fully justified. But then how does one detect such a black hole at all? The gravitational field of the black
hole can be felt by the stars and any other matter surrounding it, which can lead to subtle or dramatic effects
which reveal the presence of the black hole.
The mathematical form of a black hole with mass as well as spin was first discovered by Roy Kerr in
1963. This too has an event horizon, which is more complicated in shape than the simple spherical shape for
the Schwarzschild black hole. There is also another surface associated with the Kerr black hole known as the
ergosphere, which is outside the event horizon. Any object entering the ergosphere is dragged into rotation by
the effect of the spinning black hole.
1.3 The Black Hole Zoo
The black hole properties described above are valid for all black hole masses; the theory does not change
with the mass. In practice, some ranges of black hole mass are important because various astrophysical processes
lead to the formation of such black holes. For example, the evolution of stars with large mass can lead to the
formation of a class of black holes, known as stellar mass black holes, with mass in the range of a few times the
Solar mass to about one hundred times the Solar mass. A Solar mass is the mass of our Sun, which is 2x10
30
Kg.
It is convenient to use this mass as the unit for astronomical objects like stars and galaxies. Black holes which
power active galaxies are much more massive, with mass in the range of a few million Solar masses to more than
a billion Solar masses. These are known as supermassive black holes. Another class is intermediate mass black
holes with mass in the range of about one hundred Solar masses to about a hundred thousand Solar masses. These
can be associated with low power active galaxies, globular clusters, ultra-luminous X-ray sources etc. A quite
different class consists of primordial black holes which were first hypothesized by Stephen Hawking. These
have mass as small as 10^-5 Solar mass and can lose their mass through quantum processes. In the following, we
will describe stellar mass and supermassive black holes, but not the other kinds.
Stellar Mass Black Holes: Black holes in this group have a mass in the range of a few times the Solar
mass to about 85 times the Solar mass. How such black holes are formed is well understood, especially for
black holes with mass less than about 20 Solar masses. Stars generate energy by nuclear fusion in their central
regions. When a star with a mass greater than about 25 Solar masses exhausts all its nuclear fuel, the inner
region of the star implodes, while the outer parts are thrown out in a gigantic explosion known as a supernova.
The imploding inner part collapses to form a black hole, whose mass depends on the mass of the
initial star (the implosion of stars with mass less than 25 Solar masses leads to the formation of a white dwarf
or a neutron star, depending on the initial mass). Such black holes are expected to be common in our Galaxy
and are known as stellar mass black holes.
Figure 2: Schematic of a black hole X-ray binary. The oval object on the right is a star which has expanded due
to evolution, with its original spherical shape distorted by gravitational and centrifugal forces. Matter flows from
the star to the black hole and an accretion disc is formed around the black hole due to the angular momentum
of the inflowing matter. The disc and a surrounding cloud of energetic electrons are responsible for the X-ray
emission. Twin jets emanating from a region close to the black hole are seen.
We cannot easily detect solitary stellar mass black holes, but we can observe their effects when they are in
a binary system. Such a system consists of two astronomical objects revolving around each other, each attracted
by the other’s gravitational field. It is possible in such a system for one object to be a star, while the other is a
black hole. When the right conditions are met, matter flows from the star onto the black hole, forms an accretion
disc around it and gradually enters the event horizon. The disc is at high temperature and releases X-rays and
other forms of high energy electromagnetic radiation. The radiation can be boosted to even higher energy
through interaction with high energy electrons which are present around the accretion disc. All these processes
occur outside the event horizon, so much of the radiation does not fall into the black hole and escapes to great
distances. The radiation can be detected using suitable telescopes and instruments carried by satellites in orbit
around the Earth. About 20 such X-ray emitting binary systems are known, where there is high confidence that
one component of the binary is a black hole. These black holes have a mass which ranges from about five Solar
masses to about 20 Solar masses.
There are X-ray binary systems in which the companion to the star is a neutron star rather than a black hole.
There are many similarities between neutron star and black hole binaries, so it is not always easy to discriminate
between the two. There are some differences too; for example, a neutron star binary can show pulsations in its
X-ray emission, while a black hole binary cannot have such pulsations. The most important discriminant is of
course the mass. Neutron stars cannot have a mass greater than about three Solar masses. So, if the compact
object in an X-ray binary has a mass significantly greater than three Solar masses, then it has to be a black
hole. A difficulty in using this argument is that it is often difficult to accurately estimate the mass. The main
cause of the difficulty is that the mass measurement depends on the inclination of the plane of the binary orbit
to the line of sight, which is generally not known. Given the measurements, a range of mass is possible for the
compact object in a binary, and it is not always possible to say with certainty whether the object is a neutron star
or a black hole. But when the mass is much larger than the neutron star mass limit, it is extremely likely that the
object is indeed a black hole. So, it is safe to say that black holes have been detected in X-ray binary systems.
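The lower limit on the compact object's mass can be made quantitative with the binary mass function, f(M) = P K³ / (2πG), computed from the companion star's orbital period P and radial-velocity semi-amplitude K. Whatever the unknown inclination, f(M) never exceeds the compact object's true mass. A sketch with illustrative numbers (not measurements of any particular system):

```python
import math

# The binary "mass function" f(M) = P * K**3 / (2*pi*G) is a strict
# lower limit on the mass of the unseen compact object, independent
# of the (usually unknown) orbital inclination.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg

def mass_function_solar(period_days, K_km_s):
    """Lower limit (in Solar masses) on the unseen object's mass, from
    the companion's orbital period and radial-velocity semi-amplitude."""
    P = period_days * 86400.0   # period in seconds
    K = K_km_s * 1000.0         # semi-amplitude in m/s
    return P * K**3 / (2 * math.pi * G) / M_sun

# Illustrative values: a 5-day orbit with K = 300 km/s already forces
# the compact object well above the ~3 Solar mass neutron star limit,
# so it would have to be a black hole.
print(mass_function_solar(5.0, 300.0))   # about 14 Solar masses
```

When the inclination and companion mass can be constrained, the true mass is larger than this limit, which is why black hole identifications based on a mass function far above three Solar masses are considered secure.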
Any remaining doubt about the existence of black holes was set aside by the detection of gravitational
waves in 2015. An important prediction made by Albert Einstein in 1916 from his new theory is the existence of
gravitational waves. These waves are emitted when large masses undergo large accelerations, i.e. their already
high velocities change very rapidly. Such a circumstance is obtained when two black holes form a binary system
and go around each other very rapidly; the binary then becomes a strong emitter of gravitational waves. The
emitted waves carry away energy from the system, so that it shrinks in size, the black holes move closer and
go around each other even more rapidly, leading to even greater emission of gravitational waves. The result is
that the black holes rapidly spiral towards each other and finally merge together forming a single black hole,
which settles down to its pristine form through a short phase known as a ringdown. This was first predicted by
Professor C. V. Vishveshwara in 1970 (the ringdown phase is named after the ringing down of a bell, which
happens when a bell is struck and allowed to vibrate freely).
Such a spiral-in and merger of two black holes was observed by the two LIGO gravitational wave detectors
in the USA in 2015, after a heroic effort spanning more than two decades. From the pattern of the waveform
observed it can be unambiguously concluded that the two objects which spiraled in, as well as the object formed
from the merger are black holes. It is also possible to accurately measure the masses of the black holes, which
are 36 and 29 Solar masses respectively for the two black holes before the merger and 62 Solar masses for the
resultant black hole after the merger. The difference of three Solar masses between the merged mass and the
pre-merger masses was radiated away as energy in the form of gravitational waves. The observation led to the
award of the Nobel Prize for 2017 to Professors Barry Barish, Kip Thorne and Rainer Weiss. The existence of
stellar mass black holes is now firmly established due to the many merging black holes which have been found
with gravitational wave detectors. The range of stellar mass black holes currently detected is shown in Figure
3. The large black hole masses detected through merging events cannot easily be explained by the usual stellar
evolution theory and establishing how they formed is a very active area of research.
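The energy budget of that first detected merger can be checked with E = Δm c², where Δm is the mass lost between the pre-merger and post-merger black holes:

```python
# Energy radiated in the first detected merger: the difference between
# (36 + 29) Solar masses before and 62 Solar masses after left the
# system as gravitational waves, with E = (delta m) * c**2.
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # Solar mass, kg

delta_m = (36 + 29 - 62) * M_sun   # 3 Solar masses
E = delta_m * c**2                  # joules
print(E)
```

The result is of order 5×10^47 joules, released in a fraction of a second, which is why the event briefly outshone (in gravitational waves) the light output of all the stars in the observable Universe combined.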
Figure 3: Black hole masses determined from X-ray binary related observations and from mergers detected
by the LIGO and Virgo gravitational wave detectors. The blue circles indicate the masses of black holes in
binaries before the merger and after the merger. The purple circles indicate black hole masses estimated from
electromagnetic (EM) observations at X-ray wavelengths. The yellow circles are masses of neutron stars in
X-ray binaries, again measured through EM means, and the orange circles indicate masses of neutron stars
which were one or both components in a merging binary which was gravitationally detected. Finally, the
white circles are the black hole masses which arose out of mergers involving at least one neutron star. Image
courtesy LIGO-Virgo/ Frank Elavsky & Aaron Geller (Northwestern)
1.4 Supermassive Black Holes
The other main kind of black holes are those known as supermassive black holes which are believed to be
located at the centres of galaxies. A galaxy consists of a collection of a large number of stars, gas and dust.
The most familiar kind are the spiral galaxies, whose images showing great spiral arms are familiar to everyone.
But a significant number of galaxies appear to have elliptical shapes (these are projections on the sky of ellipsoidal
objects), while others have irregular shapes. A normal galaxy, like our own Milky Way which is a spiral galaxy,
has about a hundred billion to a thousand billion stars, and is about a hundred thousand light years in diameter
(a light year is the distance travelled by light in one year, travelling at a speed of 300,000 km per second, which
is about nine trillion km). The light coming to us from a galaxy is the sum total of the emission from the
stars and gas in it, with the colour of the light altered when dust is present in the galaxy. It is believed that
galaxies were first formed sometime after the Big Bang in which the Universe was created, and have undergone
significant evolution in the billions of years after they were formed.
About 10 percent of all galaxies are active galaxies: they emit radio, X-ray and other radiations which
cannot be ascribed to the stars, and which must have a different source. The energy emitted in this form by some
of the brighter active galaxies can be as high as 10^46 erg/sec or more, which is far greater than the energy emitted
by all the stars in a galaxy like the Milky Way, which is about 10^44 erg/sec. Such radiation often varies rapidly
with time, which tells astronomers that much of it must be coming from a very compact source at the centre of
the galaxy. Some of the radio radiation can come from magnificent large structures which are much bigger than
the galaxy, and are fed energy by narrow jets emanating from the centre. Most intriguingly, small parts of the
radio structure at the centre of the galaxy are observed to be moving away from each other with speeds exceeding the
speed of light. This seems to contradict Einstein’s special theory of relativity, according to which nothing
can move faster than light. This was a great puzzle in the 1960s when the high speeds were first measured, until
a young research student in Cambridge, now Lord Martin Rees, explained that the observed high speeds were
an illusion due to radio waves being emitted by objects moving close to the line of sight, at speeds approaching,
but not exceeding, the speed of light.
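Rees’s resolution can be illustrated with the standard formula for the apparent transverse speed of a blob ejected at true speed β (in units of c) at angle θ to the line of sight: β_app = β sin θ / (1 − β cos θ). A sketch with illustrative numbers, not values for any particular source:

```python
import math

# Apparent transverse speed (in units of c) of a blob moving at true
# speed beta (units of c) at angle theta to the line of sight:
#     beta_apparent = beta * sin(theta) / (1 - beta * cos(theta))
# For beta close to 1 and small theta this exceeds 1: the motion
# *appears* faster than light even though nothing really is.
def apparent_speed(beta, theta_deg):
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1.0 - beta * math.cos(theta))

print(apparent_speed(0.99, 10.0))   # about 7c: apparently superluminal
```

The effect arises because the blob nearly chases its own light: successive signals reach us at greatly compressed intervals, so the motion on the sky seems far faster than it really is.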
Figure 4: The radio galaxy 3C348, also known as Hercules A. At the centre of the image an elliptical galaxy
is seen. The image at optical wavelengths was taken with the Hubble Space Telescope. The coloured extended
structure, much larger than the host elliptical galaxy, is from observations made with the Very Large Array at
radio wavelengths. The colour has been introduced for illustrative purposes only; it traces the emission at radio
wavelengths. Two narrow radio jets emerge from the galaxy to power the large radio lobes. The source of the
energy is believed to be a supermassive black hole at the centre of the galaxy. Image Credit: NASA, ESA,
STScI.
All the above observations point to the existence, at the centre of active galaxies, of a very massive and
compact object, with mass in the range of millions to billions of Solar masses. There are reasons to believe
that these objects are black holes, which are known as supermassive black holes. What is interesting is that
supermassive objects, which could very well be supermassive black holes, are also being found in the centre of
normal galaxies. These objects are discovered using a variety of effects that their gravitational field produces
on stars and gas in the vicinity of the object. It seems possible that supermassive black holes exist at the centre
of all galaxies. In a small fraction, these black holes receive a regular supply of matter in the form of stars and
gas from their vicinity, resulting in the formation of an accretion disc around the black holes from which matter
gradually falls into the black hole, releasing energy which makes the galaxy active.
We saw above that stellar mass black holes are formed in the evolution of large mass stars. How are
supermassive black holes formed? That is still an open question, but the understanding is that massive black
holes were formed in the early epochs of the Universe,
or in the early stages in the formation of galaxies, which merged to form even greater mass black holes.
The mass can also increase when there is a regular supply of stars and gas to be swallowed by the black hole.
Black holes with mass intermediate between the stellar mass black holes and supermassive black holes are also
known.
The mass of supermassive black holes in distant galaxies can be estimated using a variety of techniques.
One technique is to use the motion of stars or gas around the centre of a galaxy, from which the mass can be
inferred using gravitation theory and dynamics. This has been done in the most direct way for our own Milky
Way galaxy, which is described below. There are more indirect techniques like reverberation mapping and
measuring the shapes of emission lines in the X-ray part of the spectrum. These methods provide estimates of
the supermassive black hole mass which can be used in further considerations. We will now describe how the mass of the
supermassive black hole at the centre of our Galaxy, the Milky Way or Akashganga, is determined from detailed
observation of stars in the central region of the Galaxy.
The work leading to the identification of a supermassive object was done independently by Professor
Reinhard Genzel and his group in Germany, and Professor Andrea Ghez and her group in the USA. Genzel and
Ghez were awarded half the Nobel Prize for Physics in 2020 for their efforts. The other half of the prize was
awarded to Professor Roger Penrose, for his theoretical work on black holes.
1.5 The Supermassive Object at the Centre of our Galaxy
The Milky Way is a spiral galaxy with more than a hundred billion (10^11) stars. A large fraction of these
stars is distributed in a relatively thin disc, with the stars rotating around the galactic centre while remaining in
the plane of the disc. The disc has prominent spiral arms which give spiral galaxies their name. The diameter
of the disc is about a hundred thousand light years.
The second prominent structure of the galaxy is known as the bulge. It is nearly spherical in shape, with a
diameter smaller than that of the disc. Each star in the bulge moves around the centre in a plane, but
the orbits are not all coplanar. The orbital planes are distributed around the centre in such a way that the bulge
stars appear to be distributed nearly spherically around the centre. In addition to the bulge and the disc, there is
a much larger, nearly spherical structure known as the Galactic Halo. This consists of old stars and globular
clusters. In addition, the halo contains invisible dark matter, which in fact constitutes about 95 percent of the
total mass of the galaxy. The nature of the dark matter is presently not known.
Figure 5: This is a panoramic view of our galaxy in near-infrared light obtained by mosaicking images from
the 2MASS survey. The disc and the bulge are seen, though not the spiral arms, because we have only an
edge-on view of our own galaxy. The halo is not bright enough to be seen in the image (Image Courtesy
IPAC/CALTECH).
A panoramic view of our Galaxy is shown in Figure 5. We cannot see the spiral arms in Figure 5 because the
image is an edge-on view of our Galaxy. The spiral arms are shown in the sketch in Figure
6. The shape of the arms is determined through optical, near-infrared and radio observations. Some prominent
arms have been named.
Figure 6: The arms of the Milky Way Galaxy are shown in this diagram. The structure of the arms has been
determined using optical, radio and near-infrared observations. The thick linear structure at the centre to which
the spiral arms are attached is known as a bar. Image Courtesy: NASA/JPL-Caltech/R. Hurt (SSC/Caltech)
For the purposes of determining the mass of a possible compact object at the centre, we will be concerned
only with stars very close to the centre. If the motion of such stars could be observed, it would be possible to
determine the shape of the orbits of the stars. Then, using Newton’s law of gravitation and mechanics, the mass
of the object can be determined.
The bulge of our Galaxy, which includes the central region, is located in the constellation of Sagittarius
and can be seen as a large, faint nebulosity with dark patches in it. The central region is a complex environment
with a high density of stars in a cluster around the centre, stars which are being formed, exploding stars, gas
and dust. A source emitting radio waves has for long been known in the central region. In 1974 Bruce Balick
and Robert Brown discovered a very compact component of the radio source, which was later named Sgr A*.
Observations have shown that its size is smaller than the distance between the Sun and the Earth, which is about
150 million km, a size which is very tiny compared to the scale of the galaxy. Sgr A* shows no motion, which
suggests it is very massive, and is located at the centre of the galaxy.
As mentioned above, the mass of the central object can be measured by determining the orbits of the stars
in the galaxy. Observing individual stars in this region is very difficult for two reasons: First, the centre is
about 25,000 light years away from the Sun, which makes the stars very faint. This dimming is made worse by
the presence of dust in the centre, which obscures the stars, so they appear to be much fainter than would be
warranted by their distance from us. Second, stars in the centre are crowded together, so they seemingly blend
with each other, and are difficult to distinguish as individual objects. To get over these problems, it is necessary
to use the largest telescopes and to observe in the near-infrared region of the spectrum, where the obscuration
due to dust is low. It is also necessary to use specialised techniques like adaptive optics and speckle
interferometry to increase the resolution of the observations, so that individual stars can be distinguished. All
this high technology makes it possible for the motion of individual stars close to the centre of the Galaxy to be
followed over a period of time, establishing the shapes of their orbits. Stars in the central region of the Galaxy are
shown in Figure 7.
Figure 7: The image on the left shows stars in the central region of our Galaxy. It was made with the Keck
10 metre telescope, and is a composite of three images in different bands of the near-infrared region of the
spectrum. Speckle interferometry and adaptive optics were used to improve image resolution so that individual
stars in the crowded field could be seen separately. The central region is marked by a rectangle. The bar in the
upper left-hand corner of the image shows that the size of the box is a few light months. This is a tiny fraction
of the 100,000 light year diameter of our Galaxy. The image on the right shows an expanded version of the
box. The position of Sgr A* is indicated by a star. Image courtesy W. M. Keck Observatory.
1.6 The Orbit of Star S2
Determining the path of a star requires its position to be known at different times over a long period.
Imagine determining the path of a planet like Mars or Jupiter in the sky. The position of the planet at given times
can be determined by comparison with the positions of neighbouring stars in the sky. The stars can themselves
be taken to be stationary, since they are much further away from us than the planets, and their motion is
negligible compared to the motion of the planets. In practice, over centuries various coordinate systems have
been developed relative to which the motion of planets, and even the tiny motions of stars can be determined
with extreme accuracy.
Such measurements are much more difficult in the galactic centre, as the stars there are in incessant motion
and it is extremely hard to determine the position of any one star over a period which stretches to years. Several
reference stars used for the purpose are known to be radio emitters. Their position at a given time can be
determined very accurately using an array of radio telescopes spread over the Earth. With this method, the
position of a star can be determined to an accuracy of about a ten-millionth of a degree.
Reinhard Genzel’s group used the Very Large Telescope (VLT; there are four unit telescopes, each with a primary
mirror 8.2 metres in diameter) and the New Technology Telescope (3.58 metre primary) of the European
Southern Observatory (ESO), located in Chile, for their observations. Andrea Ghez and her group used the Keck
Telescopes (there are two of these, each with a primary of 10 m diameter) located in Hawaii. Both groups
followed the motion of many stars in the cluster, with the best results obtained for a star designated as S2, which
during its motion very closely approaches the central object. We will describe the results obtained for this star
over the years, beginning with results based on about a decade of observations during 1992-2002 and published
by the two groups in 2002-03. Shown in Figure 8 are the results on the motion of star S2 obtained by Genzel’s
group.
Figure 8: The red image on the left shows a number of stars from the region around Sgr A*. The positions of S2 at
various times from 1992 to 2002 are shown on the right of the figure, along with the position of Sgr A*. The short
bars on each point indicate the extent of the possible errors in the position. The path of the star over the years
is indicated by the ellipse, which is obtained as the best representation of the data. It is found that Sgr A* is
located at one focus of the rather elongated ellipse (every ellipse has two foci). Figure courtesy ESO.
From Newton’s theory of gravitation, it follows that a large mass located at the focus, coincident with Sgr
A*, should be exerting a gravitational force on star S2, which makes it follow the elliptical orbit. Calculations
showed that the mass of the object is close to 3.3 million times the mass of the Sun. The time taken by S2 to
complete one orbit, which is known as the period of the orbit, was about 15.6 years, and the closest distance of
approach of S2 to Sgr A* was about 0.73 light days, that is, about 19 billion kilometres. The estimated mass has
increased somewhat as further data have accumulated; it is now 4.3 million times the mass of the Sun, and the
period is 16.02 years.
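The mass estimate follows from Kepler's third law, M = 4π²a³/(G P²). A minimal sketch, assuming a semi-major axis of about 1030 AU for the orbit of S2 (an illustrative value consistent with published orbit fits, not a quoted measurement):

```python
import math

# Kepler's third law for a small body orbiting a much larger mass:
#     M = 4 * pi**2 * a**3 / (G * P**2)
G = 6.674e-11           # m^3 kg^-1 s^-2
AU = 1.496e11           # astronomical unit, metres
YEAR = 3.156e7          # seconds in a year
M_sun = 1.989e30        # kg

def central_mass_solar(a_au, period_years):
    """Central mass in Solar masses from semi-major axis and period."""
    a = a_au * AU
    P = period_years * YEAR
    return 4 * math.pi**2 * a**3 / (G * P**2) / M_sun

# ~1030 AU and a 16.02 year period give a few million Solar masses,
# consistent with the value quoted in the text.
print(central_mass_solar(1030.0, 16.02))
```

The simplicity of this calculation is striking: once the orbit of a single star is known, Newtonian dynamics alone pins down the mass of the invisible central object.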
The two groups of astronomers continued to monitor S2 beyond 2003. The later observations provided
data points which covered the whole ellipse, as they were taken over a span of time longer than one orbital
period. A complete orbit is shown in Figure 9.
Figure 9: The figure shows data points independently determined by the group of Reinhard Genzel (in blue)
and Andrea Ghez (in red). There is close agreement between the results of the two groups. An ellipse is seen
to provide a very good representation of the data points, except for a small deviation seen at the top, which is
discussed further in the text. Figure source: Gillessen et al., Astrophysical Journal, 707, L114, 2009.
A close look at Figure 9 shows that the ellipse is not quite perfect. At the top of the figure, it is seen that the
ellipse does not quite close on itself; there is a small gap. The gap indicates that there is a slight rotation of the
ellipse from orbit to orbit. We know that Sgr A* is a very compact source, so it should appear as a point in the
diagram. But the source appears to be slightly extended, as can be seen near the bottom of the figure. This is
interpreted as a slight motion of the gravitating mass. Such a motion would produce a deviation from a perfect
elliptical orbit, as is observed, and seemed to provide the right explanation. But later observations showed that
the deviation is due to a quite different effect.
Figure 10: The trajectory of S2, including new points from the Gravity experiment. The figure is explained in
the text. Figure source Gravity Collaboration, Astronomy and Astrophysics, 636, L5, 2020.
In Einstein’s theory of gravitation, the orbit of a body around another gravitating body is nearly elliptical,
but not exactly so. Over successive orbits, the effect of the theory is to produce a slight rotation of the ellipse. This
is known as the precession of the perihelion, because the shift in the ellipse means that the point at which the
two bodies are closest shifts from orbit to orbit. A result of the shifting ellipse would be the kind of gap which
is observed.
Recently, the accuracy of the measurements of the position of S2 over time has improved more than
tenfold, because of the availability of a new instrument called GRAVITY, which combines the optical beams
from the four VLTs. In Figure 10 are shown (Figure Courtesy ESO) the results obtained by the GRAVITY team,
combined with earlier observations.
Figure 11: The trajectory of star S2 over many years. Each orbit around the supermassive object takes about
16 years and is a nearly perfect ellipse. Over each orbit the ellipse swings through about 0.2 degrees. The time
taken by the ellipse to complete a full circle is about 28,800 years, as is explained in the text. Image Credit:
ESO/L. Calçada.
The grey curve, which is a near-perfect ellipse except for the small deviation near the top as before, has been
obtained by calculations which use Einstein’s theory. The observed points are very well aligned with the curve,
which shows how precise the observations are. The black cross towards the bottom of the ellipse indicates the
position of the massive compact object. The red crosses indicate infrared flashes observed around Sgr A*. In
this new data set, there is no evidence that the object is moving; it is stationary. So, the deviation of the ellipse
from a perfect form is wholly explained as being due to the effect of precession predicted by Einstein’s theory.
This is a remarkable result, and is the first observed precession in the context of a supermassive object. Such
precession has previously been observed in the orbit of Mercury around the Sun and in binary pulsar systems.
Because of the precession, the trajectory of S2 over many years looks like an ellipse which keeps rotating in
space. This is shown schematically in Figure 11. The precession is about 0.2 degrees per orbit. Since the orbital
period of S2 is about 16 years, the ellipse will complete one full rotation through 360 degrees in about 28,800
years.
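The general relativistic (Schwarzschild) precession per orbit is Δφ = 6πGM / (c²a(1−e²)). A rough check, using the 4.3 million Solar mass value from the text and illustrative orbital elements for S2 (semi-major axis ~1030 AU, eccentricity ~0.88, values consistent with published fits but used here only for illustration):

```python
import math

# Schwarzschild precession per orbit (radians):
#     delta_phi = 6 * pi * G * M / (c**2 * a * (1 - e**2))
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
AU = 1.496e11           # metres
M_sun = 1.989e30        # kg

def precession_deg_per_orbit(mass_solar, a_au, e):
    """Relativistic perihelion advance per orbit, in degrees."""
    M = mass_solar * M_sun
    a = a_au * AU
    dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))
    return math.degrees(dphi)

# Gives roughly 0.2 degrees per orbit, as quoted in the text; at 360/0.2
# orbits of ~16 years each, the ellipse turns fully in ~28,800 years.
print(precession_deg_per_orbit(4.3e6, 1030.0, 0.88))
```

The same formula, applied to Mercury's orbit around the Sun, yields the famous 43 arcseconds per century that provided an early test of general relativity.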
1.7 Historical Note
The notion of the presence of a supermassive black hole at the centre of galaxies became popular after the
discovery of quasars, active galactic nuclei, superluminal motion etc. pointed to the existence of such objects at
the centres of galaxies. In the mid-1960s, before many of these observations were made, Fred Hoyle and Jayant
Narlikar had considered the existence of objects, with mass about a billion times the mass of the Sun,
around which matter would condense to form a galaxy. They predicted from their model that there would be
highly concentrated points of light in the centres of elliptical galaxies. This is probably the first mention of a
supermassive object at the centre of galaxies.
1.8 Is the Compact Object a Supermassive Black Hole?
What is the nature of the compact object? We have seen that the closest distance of approach of star S2
to the gravitating object is about 19 billion kilometres, so the size of the object should be much smaller. If the
compact mass is a black hole, then it would have zero radius. But if it is not, it would have a non-zero small
radius. What could such an object be? The object is not emitting much light, so if it is not a black hole it must be
made up of a large number of small dark objects, like stars of very low mass, neutron stars, or even stellar mass black holes
which are the remnants of stars. But calculations show that any such aggregate would have very high density at
the centre, and would collapse under its own gravity, or simply dissipate, in less than a million years. This time
is far, far shorter than the known age of the galaxy, which is more than 13 billion years. There are more exotic
possibilities, like a dark mass made up of particles such as neutrinos, gravitinos or axinos, or a ball of bosons.
But it is presently not possible to produce robust models of such objects consistent with known
theory and observations. It appears therefore that most of the mass of the compact object at the centre of our
galaxy is the mass of a supermassive black hole, with the remaining mass being due to a star cluster around it.
The GRAVITY instrument and other future instruments will enable us to penetrate deeper towards the object,
possibly revealing effects which could be unambiguously attributed to a supermassive black hole.
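As a rough illustration, a short Python sketch compares the Schwarzschild radius (the event-horizon size scale) of such a black hole with S2's closest approach of about 19 billion kilometres. The mass of about 4 million solar masses is the commonly quoted value for the compact object at the Galactic centre; it is assumed here for illustration rather than taken from this section:

```python
# Schwarzschild radius r_s = 2GM/c^2, compared with S2's closest approach.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 4.0e6 * M_sun                  # ~4 million solar masses (assumed value)
r_s = 2 * G * M / c**2             # Schwarzschild radius, in metres

closest_approach_m = 19e9 * 1e3    # 19 billion km, converted to metres
ratio = closest_approach_m / r_s

print(r_s / 1e9)   # Schwarzschild radius in millions of km (about 12)
print(ratio)       # the closest approach is ~1600 Schwarzschild radii
```

The star therefore never comes anywhere near the event horizon itself, which is why its orbit can be described by a slowly precessing ellipse rather than by strong-field effects.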
Acknowledgment
Much of the material in this article first appeared in The Edition (inthedition.wordpress.com). I thank Mr.
Kaushik Bhowmik for permission to use it. This article will serve as the basis for further articles on black holes
in airis4D.
About the Author
Professor Ajit Kembhavi is an emeritus Professor at Inter University Centre for Astronomy and
Astrophysics and is also the Principal Investigator of the Pune Knowledge Cluster. He is a former director
of the Inter University Centre for Astronomy and Astrophysics (IUCAA), Pune, and a former vice president of the
International Astronomical Union. In collaboration with IUCAA, he pioneered astronomy outreach activities from the
late 1980s to promote astronomy research in Indian universities. The monthly interactive programme Speak with an
Astronomer, which answers questions based on his articles, allows young enthusiasts to gain a deeper knowledge of
the topic.
The Little Green Men in Space
by Linn Abraham
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Radios, apart from being a fun and useful piece of technology, are critical to almost all communication
in our daily lives. Did you know that cellphones use radio waves for communicating with each other? Or that
radio waves are used in WiFi or Bluetooth communications? If you have ever used a traditional radio receiver
for tuning into radio stations, you will have noticed that between the designated stations which broadcast programs, all
you get is noise. Why, then, do we use radio waves in astronomy? Do we expect radio stations to be run by the
denizens of galaxies near or far? It turns out that we did find such a broadcast when we listened very carefully to
the depths of space using powerful radio receivers. Let's find out what that was all about, and more interesting
things about radio astronomy, in this article.
2.1 What are radio waves?
Radios were all the rage in the not so distant past. Radio waves revolutionized communication and entertainment
in the early 20th century. During World War II, radios played a critical role in military communication.
In the 1960s and 1970s, FM radio gained popularity for its superior sound quality. Although radio has today
mostly fallen out of fashion in the entertainment industry, radio waves remain critical to most of our modern communication.
Television broadcasting, satellite communications and even the GPS system used for navigation all make use
of radio waves. Modern methods of wireless communication such as cellphone communication, WiFi and
Bluetooth would be unthinkable without radio waves. Radars that make use of radio waves are used for various
applications. Medical imaging is another area which makes use of radio waves. Radio Frequency Identification
(RFID) tags are used in a variety of places and are fast becoming common place.
2.2 Why did radio waves have to be discovered?
Life that evolved on earth developed mechanisms to detect visible light, the things that we call eyes,
probably because of the interplay of various factors. The visible spectrum corresponds to wavelengths that
are easily absorbed and reflected by the materials found in nature, such as water, plants, and animals. This
allows organisms to perceive and distinguish objects in their environment, such as food, predators, and potential
mates. Furthermore, the Sun emits a large amount of radiation in the visible portion of the spectrum, which
provides the energy for photosynthesis and drives the Earth's climate and weather patterns. The other parts
of the electromagnetic spectrum remained largely hidden from us. After Maxwell predicted the existence
of electromagnetic waves in 1864, Heinrich Hertz produced radio waves in the late 1880s to experimentally
demonstrate their existence. The discovery of other kinds of EM waves such as X-rays soon followed. The
discovery of radio waves sparked a revolution in science and technology. The telegraph which was the primary
means of long distance communication now had a wireless alternative. Other uses of these newly discovered
waves soon followed. Before we learn more about radio waves it is essential to have a good understanding of
what electromagnetic radiation is and how it is produced both naturally and by artificial means.
2.3 What is electromagnetic radiation?
Stationary electric charges produce electric fields, whereas moving electric charges produce both electric
and magnetic fields. When a charge accelerates or undergoes a change in its magnitude (charge or discharge) a
disturbance in the electromagnetic field is produced. This is often a pulse of electromagnetic radiation that can
self propagate outwards at the speed of light. This is very much like how you create a ripple on the surface of a
still pond by throwing a stone into it. In order to create a steady wave as opposed to just a pulse you need to have
a source that oscillates regularly. The frequency of such a wave is decided by the frequency of the source of the
disturbance and it is this frequency that decides the properties of the radiation. The frequency also decides the
energy of the radiation and often our ability to detect the radiation. Thus electromagnetic radiation falls on a
broad spectrum depending on the frequency from low frequency radio waves to the highest frequency gamma
rays.
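The two relations at work here, the wavelength relation c = fλ and the photon-energy relation E = hf, can be made concrete with a small Python sketch (the 100 MHz FM frequency and the 10^20 Hz gamma-ray frequency are illustrative values, not taken from the text):

```python
# Frequency fixes both the wavelength (c = f * wavelength) and the
# photon energy (E = h * f) of electromagnetic radiation.
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

def wavelength_m(freq_hz):
    return c / freq_hz

def photon_energy_j(freq_hz):
    return h * freq_hz

fm_radio = 100e6   # a typical FM broadcast frequency, 100 MHz
gamma_ray = 1e20   # a representative gamma-ray frequency

print(wavelength_m(fm_radio))      # about 3 metres
# A gamma-ray photon carries vastly more energy than a radio photon:
print(photon_energy_j(gamma_ray) / photon_energy_j(fm_radio))
```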
2.4 What are the mechanisms that create EM waves?
The different ways in which charges can be accelerated lead to the different sources of electromagnetic
radiation. In general there are thermal and non-thermal sources of electromagnetic waves. Examples of
thermal radiation include continuous spectrum emission related to the temperature of the object or material, and
emission at specific frequencies from neutral hydrogen and other atoms and molecules. Examples of non-thermal
mechanisms include synchrotron radiation and amplified emission from astrophysical
masers. In a hot object, be it solid, liquid or gas, the constituent particles are always in accelerated motion, either
due to vibration or collisions. As a result electromagnetic radiation is emitted at all possible frequencies in the
spectrum but the amount of energy in each frequency depends on the temperature of the object.
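This temperature dependence is captured by Wien's displacement law, lambda_peak = b/T, a standard physics result not stated in the text but useful here: hot stars peak in or near the visible, while cold interstellar material peaks at far-infrared to sub-millimetre wavelengths.

```python
# Wien's displacement law: the wavelength at which a thermal (blackbody)
# emitter radiates most strongly is inversely proportional to its temperature.
b = 2.898e-3  # Wien's displacement constant, metre kelvin

def peak_wavelength_m(temperature_k):
    return b / temperature_k

sun_peak = peak_wavelength_m(5778)   # ~5.0e-7 m: visible light
dust_peak = peak_wavelength_m(10)    # ~2.9e-4 m: sub-millimetre regime

print(sun_peak, dust_peak)
```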
2.5 A radio universe
We have already seen several man-made sources of radio waves. What are some of the natural sources of radio
waves? On Earth, the discharge of electricity in lightning is a source of radio waves. A bigger source of radio
waves is the Sun. Solar flares and coronal mass ejections can produce intense bursts of radio waves that can
be a threat to our communication and navigational systems. These radio waves are produced through a process
called plasma emission. Plasma, which is a highly ionized gas, can produce radio waves when it is disturbed
by energetic charged particles. In the case of solar flares and CMEs, the charged particles accelerated by the
events can create disturbances in the plasma that lead to the emission of radio waves.
In 1932, Karl Jansky, a radio engineer at the Bell Telephone Laboratories, was assigned to study radio
interference from thunderstorms. During his investigations he found static that he concluded to be of extraterrestrial
origin, not related to the static from thunderstorms. Upon further investigation he discovered the source of
the static to be the Milky Way galaxy, thereby opening up radio astronomy as a field of study. Typical astrophysical
sources of radio waves are dark dust clouds (see Figure 4). Visible stars radiate a lot of electromagnetic energy by
Figure 1: Full-size replica of the first radio telescope, Jansky’s dipole array of 1932, preserved at the US Green
Bank Observatory in Green Bank, West Virginia. Image Credit: Wikipedia
Figure 2: The first parabolic ”dish” radio telescope by Grote Reber, Wheaton, Illinois, 1937. Image Credit:
Wikipedia
definition in the visible region of the spectrum. Part of the energy has to be in the microwave (short wave radio)
part of the spectrum, and that is the part astronomers study using radio telescopes. Despite their temperatures,
however, most visible stars do not make good radio-frequency sources. We can detect stars at radio frequencies
only if they emit by non-thermal mechanisms, or if they are in our solar system (that is, our sun), or if there is
gas beyond the star which is emitting (for example, a stellar wind).
2.6 The story of the Little Green Men
In 1967, Jocelyn Bell, a postgraduate student at Cambridge University, was working with her supervisor,
Antony Hewish, on a radio telescope designed to study quasars. While analyzing the data from the telescope,
she noticed a strange signal that was pulsing regularly. The signal differed from the generally chaotic nature of
most cosmic phenomena. Additionally, the signal was at a very specific radio frequency, whereas most natural
sources typically radiate across a wider range. These things led the research team to contemplate the possibility
Figure 3: The LOFAR core (”superterp”) near Exloo, Netherlands. The bridges give an idea of the scale.
Image Credit: Wikipedia
Figure 4: Table showing different astrophysical sources of radio waves
Figure 5: Graphical depiction of Pulsars. Image Credit: Space.com
of having found an artificially created signal from an extraterrestrial intelligent species. They even dubbed the
source LGM-1, for Little Green Men 1. However, their discovery later turned out to be the first detection of a
pulsar. Pulsars spin rapidly, while simultaneously radiating opposing beams of radio waves out into space. The
setup is similar to a lighthouse that spins around one up-and-down axis and radiates two beams of light from
a second axis. Antony Hewish and Martin Ryle received the Nobel Prize in Physics in 1974 for their work in
radio astronomy and pulsars.
2.7 Why do we need radio astronomy?
Radio astronomy allows us to detect and study phenomena that emit little or no visible light, such as
pulsars and black holes, and in this way acts as a complementary source of information. Radio waves can
pass through clouds of dust and gas, allowing astronomers to observe objects that are obscured from view
in optical telescopes. They can travel vast distances without being absorbed, making it possible to study
objects at the farthest reaches of the universe. Finally, radio telescopes can collect data continuously, day and
night, providing astronomers with a wealth of information around the clock. Apart from the discovery
of pulsars, radio astronomy has resulted in several other remarkable discoveries. The discovery
of the cosmic microwave background radiation was made using a radio telescope in New Jersey. It provided
irrefutable evidence for the Big Bang model of the origin of the universe. More recently in 2019, the Event
Horizon Telescope project produced the first-ever image of a black hole. The image was created using data from
a network of radio telescopes around the world.
2.8 The challenges in radio astronomy
Radio astronomy is also a very technically demanding field. The building of large radio telescopes involves
complex engineering and construction challenges, and maintaining and operating these instruments requires
highly trained professionals. Radio waves are relatively weak and can be easily overwhelmed by noise from
various sources, including terrestrial sources such as cell phone towers and other electronic devices. Thus
radio telescopes must be carefully designed and located in remote areas to minimize interference. Scattering
Figure 6: The proposed Lunar Crater Radio Telescope (LCRT) on the Far-Side of the Moon. Image
Credit:NASA
and distortion of radio signal by the atmosphere is another area of concern. Radio astronomers must carefully
calibrate their instruments and develop sophisticated data analysis techniques to separate the desired signal from
the background noise. These and other efforts in radio astronomy often involve analyzing vast amounts of data,
which requires sophisticated computer algorithms and data processing techniques.
2.9 References
1. Little Green Men? Pulsars Presented a Mystery 50 Years Ago
2. Cosmic microwave background
3. PSR B1919+21
4. Basics of Radio Astronomy by Diane Fisher Miller
About the Author
Linn Abraham is a researcher in Physics, specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN-based computer vision tools for the classification of astronomical
sources from PanSTARRS optical images. He has used data from several large astronomical surveys, including
SDSS, CRTS, ZTF and PanSTARRS, for his research.
Pulsating Variable Stars
by Sindhu G
airis4D, Vol.1, No.4, 2023
www.airis4d.com
3.1 Introduction
A variable star is a star whose brightness as seen from Earth varies with time. As we have observed, the
brightness fluctuations exhibited by variable stars range from a thousandth of a magnitude to twenty magnitudes
and occur over a time frame that can vary from a fraction of a second to several years, depending on the type of
star. From the previous article “Variable Stars” we get an idea about what variable stars are. This article will
explore the fascinating world of pulsating variable stars, a type of intrinsic variable stars. Intrinsic variables
are characterized by changes in the physical properties of the star, resulting in variations in brightness. The
brightness of pulsating variable stars varies due to a physical change within the star. The surface layers of
pulsating variables expand and contract periodically. This means the star periodically grows and shrinks in size.
As these stars expand and contract, their luminosity and spectral
characteristics change. Pulsations can be divided into two categories: radial pulsations, in which the entire star expands
and contracts as a unit, and non-radial pulsations, where certain parts of the star expand and others contract. A
star that pulsates radially stays spherical, whereas a star that experiences non-radial pulsations
may periodically deviate from a sphere.
Astronomers find the study of pulsating variables to be of great importance. By analyzing the light curves
of these variables, they are able to gain insight into the interior processes in stars. The most beneficial aspect of
many types of pulsating variables is their period-luminosity relationship, which allows astronomers to measure
the distance to such stars.
3.2 Why do they pulsate?
If the outward pressure of a star exceeds the inward-acting gravitational force, its outer layers will expand
outward. As the star expands, its gravitational force weakens while its outward pressure drops at an even higher
rate. This leads to a point where gravity and pressure are in equilibrium. However, the outward-moving layers
have momentum that resists a change in motion. This momentum carries the layers past the equilibrium position.
As the gravitational force acts on the layers, they slow down. At a certain point, the outward gas and radiation
pressure is no longer strong enough to counter the inward-acting gravitational force, and the expansion stops. Due to
an imbalance of forces, the outer layers of the star start to collapse inwards. This causes gravity to increase, but
the pressure increases at a higher rate. With the outward pressure being greater than the inward gravitational
force, the collapsing layer slows down and eventually stops. This brings us back to the start, where the outward
Figure 1: Basic Hertzsprung - Russell diagram (Image Courtesy: NASA)
pressure is stronger than the gravitational force, thus beginning the pulsation cycle once more.
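The overshoot described above is, in its simplest idealisation, the behaviour of a mass on a spring: a layer displaced from equilibrium is pulled back, overshoots because of its momentum, and oscillates. The following Python sketch illustrates only that idealised picture; real stellar pulsations involve driving and damping mechanisms that this toy model ignores.

```python
import math

# Toy model: a stellar layer as a simple harmonic oscillator about its
# equilibrium radius. x is the displacement of the layer, v its velocity.
omega = 1.0      # natural frequency, arbitrary units
dt = 0.001       # time step
x, v = 1.0, 0.0  # start displaced outward, momentarily at rest

positions = []
for _ in range(int(2 * math.pi / (omega * dt))):  # integrate one period
    a = -omega**2 * x   # restoring acceleration toward equilibrium
    v += a * dt         # semi-implicit Euler update
    x += v * dt
    positions.append(x)

# The layer swings through equilibrium (x = 0) to the other side and back,
# just as the expanding and contracting surface layers do.
print(min(positions), max(positions))
```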
3.3 Different types of pulsating variables
The different types of pulsating variables can often be distinguished by their pulsation period, mass,
evolutionary status, and the characteristics of their pulsations. Cepheids, RR Lyraes, and Long Period
Variables are pulsating variable stars and occupy regions on the H-R diagram called instability strips. The
Hertzsprung–Russell(H-R) diagram is a scatter plot of stars showing the relationship between the star’s absolute
magnitudes or luminosities versus their stellar classifications or effective temperatures (Figure 1) and (Figure
2).
The term instability strip (Figure 3) usually refers to a region of the Hertzsprung–Russell diagram largely
occupied by several related classes of pulsating variable stars. The instability strip intersects the main sequence
in the region of A and F stars and extends to G and early K bright supergiants. The vast majority of stars located
above the main sequence in the instability strip are variable. At the point where the instability strip meets the
main sequence, the majority of stars are usually stable, but certain variable stars, such as roAp stars, may exist
there as well. Most stars are classified according to their temperatures (spectral type) from the hottest to the
coolest: O, B, A, F, G, K, and M. These categories are further divided into subclasses from the hottest (0) to
the coolest (9) (Figure 4). For example, the hottest B stars are B0 and the coolest are B9, followed by A0. Each
major spectral type has its own unique spectra. The most common stellar classifications seen on H-R diagrams
are O, B, A, F, G, K, and M, but there are also additional and extended spectral classes like Wolf-Rayet stars
(W), cool dwarfs (L), brown dwarfs (T), carbon stars (C), and stars with zirconium oxide lines that are between
M and C stars (S). Some examples of types of pulsating variables are given below:
3.3.1 Cepheids(CEP)
Cepheids (Figure 5) are pulsating variables which are very luminous and have large masses, and they exhibit
periodic variations in brightness over periods ranging from 1 to 70 days. They take their name from
Figure 2: H-R diagram of luminosity vs temperature (Image Courtesy: The European Southern Observatory)
Figure 3: An HR diagram with the instability strip (Image Courtesy: ATNF)
34
3.3 Different types of pulsating variables
Figure 4: Some examples of Harvard classification stellar spectra (Image Courtesy : NASA)
Delta Cephei, the first such pulsating variable star, discovered by John Goodricke in 1784. Cepheid light curves
are typically characterized by a sharp increase in brightness followed by a gradual decline, resembling a shark
fin. The amplitude of these light curves usually ranges from 0.1 to 2 magnitudes. Cepheids have a high luminosity and
are spectral class F at maximum, and G to K at minimum. The later the spectral class of a Cepheid, the longer its
period. Cepheids have a fixed relationship between period and absolute magnitude, as well as a relation between
period and mean density of the star. Henrietta Leavitt (Figure 6) was the first to discover the period-luminosity
relationship of Cepheids, which made them very effective for measuring the distance to galaxies within
the Local Group and even outside of it. Edwin Hubble then used this method to confirm that the spiral nebulae
were actually distant galaxies. There are two types of Cepheids: Type I or Classical Cepheids and Type II, or W
Virginis Cepheids. Both of these types are located in the Instability Strip of the Hertzsprung-Russell Diagram.
3.3.1.1 Classical Cepheid variables
Classical Cepheids, also known as Delta Cephei variables, are yellow supergiants belonging to population
I. They are young, massive, highly luminous stars. These stars exhibit regular pulsations that occur
over a period of several days to months. The majority of classical Cepheids display a period that falls within
the range of 5 to 10 days, and their amplitudes in visible light range from 0.5 to 2.0 magnitudes. These
fluctuations are less noticeable when observed in infrared wavebands. These stars are known to be 1.5 to 2
magnitudes more luminous than Type II Cepheids. A well-defined relationship between period and luminosity
exists among Classical Cepheids. The longer the Cepheid’s period, the more luminous it is intrinsically. This is
a significant finding, as it enables Classical Cepheids to be utilized as standard candles to determine distances.
Type I Cepheids are located on the Instability Strip of the Hertzsprung-Russell (HR) diagram.
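The way the period-luminosity relation turns a Cepheid into a standard candle can be sketched in Python. The calibration constants below are one commonly quoted approximation for classical Cepheids in the visual band, and the 10-day, 15th-magnitude star is entirely hypothetical; both are assumptions for illustration, not values from the text:

```python
import math

def absolute_magnitude(period_days):
    # Approximate P-L relation for classical Cepheids:
    # M_V ~ -2.81 * log10(P / days) - 1.43  (one common calibration)
    return -2.81 * math.log10(period_days) - 1.43

def distance_pc(apparent_mag, absolute_mag):
    # Distance modulus: m - M = 5 * log10(d / 10 pc)
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid: 10-day period, apparent magnitude 15.
M = absolute_magnitude(10.0)   # about -4.24
d = distance_pc(15.0, M)       # about 7e4 parsecs (roughly 70 kpc)
print(M, d)
```

Measure the period from the light curve, read off the intrinsic luminosity from the relation, and the difference between intrinsic and observed brightness gives the distance; this is the chain of reasoning Hubble used for the spiral nebulae.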
3.3.1.2 Type II Cepheids
Type II Cepheids, historically known as W Virginis stars, exhibit extremely regular light pulsations, and
their luminosity relationship is comparable to that of Delta Cephei. These stars belong to population II stars
and are older than Type I Cepheids. Type II Cepheids typically have lower metallicity, lower mass, and slightly
lower luminosity than Type I Cepheids. Their period versus luminosity relationship is slightly offset from that
of Type I Cepheids.
Figure 5: RS Puppis, one of the brightest known Cepheid variable stars in the Milky Way galaxy (Image Courtesy
: Hubble Space Telescope)
3.3.2 RR Lyrae variables
RR Lyrae variables (Figure 7) are older pulsating white giants that exhibit low metallicity. They are common
in globular clusters. RR Lyrae stars belong to Population II and are older than Type I Cepheids. However,
they have lower mass than Type II Cepheids. They also exhibit a well-established relationship between their
period and luminosity, making them useful for determining distances as well. RR Lyrae stars are characterized
by their brief pulsation cycles, typically ranging from 1.5 hours to a day, and exhibit a brightness range of 0.3
to 2 magnitudes. These stars typically belong to spectral classes A7 to F5. Sub-types of RR Lyrae variables are
classified according to the shape of their light curves. RR Lyraes fit on the Instability Strip of the HR diagram.
3.3.3 RV Tauri variables
RV Tauri variables are yellow supergiant stars. Their light curves show alternating deep and shallow
minima. The double-peaked variation commonly observed in RV Tauri variables is typically characterized by
periods ranging between 30 to 100 days and amplitude ranges of 3 to 4 magnitudes. At maximum brightness,
the spectra of RV Tauri variables are categorized as type F or G, while at minimum brightness, they are type K
or M. These stars are situated close to the instability strip, displaying cooler temperatures than Type I Cepheids
but higher luminosity than Type II Cepheids.
3.3.4 Long period variables
LPVs (long-period variables) refer to red giants or supergiants that pulsate with periods ranging between
30 to 1000 days. Typically, these stars have spectral types of M, R, C, or N. There are two categories of LPVs:
Mira and Semiregular.
Figure 6: Henrietta Swan Leavitt (Image Courtesy : Popular Astronomy)
Figure 7: RR Lyrae (Image Courtesy : Digitized Sky Survey - STScI/NASA, Colored & Healpixed by CDS)
Figure 8: Pulsating mira variable star (Chi Cyg) (Image Courtesy : SAO/NASA)
3.3.4.1 Mira variables
Mira variables (Figure 8) are red giants that exhibit periodic pulsations. Mira, in particular, displays a
cycle lasting 331 days, during which its brightness varies by nearly 6 magnitudes in the visible waveband. Its
effective temperature ranges from 1,900 K to 2,600 K. Mira is also a visual binary, with its companion star being
a variable as well. The Mira-type stars are characterized by long periods that range between approximately 80
to 1,000 days, and their visual brightness varies by 2.5 to 10 magnitudes.
3.3.4.2 Semiregular variables
Semiregular variables are giants and supergiants that display periodicity along with intervals of semi-
regular or irregular light changes. Typically, these stars exhibit periods ranging from 30 to 1000 days, with
amplitude variations generally less than 2.5 magnitudes.
3.3.5 Delta Scuti variables
Delta Scuti variables share similarities with Cepheids, but they are notably dimmer and exhibit much
shorter periods. These stars typically display an amplitude range of 0.003 to 0.9 magnitudes and a period
ranging from 0.01 to 0.2 days. Their spectral type generally falls between A0 and F5.
3.3.6 Rapidly oscillating Ap variables
These stars, which belong to spectral type A and occasionally F0, are a subclass of Delta Scuti variables
situated on the main sequence. They exhibit remarkably rapid variations, with periods of a few minutes
and amplitudes of a few thousandths of a magnitude.
3.3.7 Beta Cephei variables
Beta Cephei variables undergo pulsations of short periods, typically ranging from 0.1 to 0.6 days. These
pulsations have an amplitude of 0.01 to 0.3 magnitudes. Many stars of this kind exhibit multiple pulsation
periods.
References
Understanding Variable Stars, John R Percy, Cambridge University Press.
Pulsating Variable Stars
GCVS variability types, Samus N.N., Kazarovets E.V., Durlevich O.V., Kireeva N.N., Pastukhova E.N.,
General Catalogue of Variable Stars: Version GCVS 5.1, Astronomy Reports, 2017, Vol. 61, No. 1,
pp. 80-88 (2017ARep...61...80S)
Pulsating Variable Stars and The Hertzsprung-Russell Diagram
Types of Variable Stars: A Guide for Beginners
Variable star
About the Author
Sindhu G is a research scholar in Physics doing research in Astronomy & Astrophysics. Her research
mainly focuses on the classification of variable stars using different machine learning algorithms. She also works
on period prediction for different types of variable stars, especially eclipsing binaries, and on the study of optical
counterparts of X-ray binaries.
Part III
Biosciences
The Avatars of Stem Cells and its applications
in healthcare
by Geetha Paul
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Stem cells are undifferentiated cells that can develop into various types (avatars) of specialised cells
in the body. They have the unique ability to divide and self-renew, making them essential in developing,
growing, and repairing tissues in the body. Self-renewal is the ability of cells to proliferate without the loss of
differentiation potential and without undergoing senescence (biological ageing). By creating patient-specific
stem cell derivatives, researchers can study the effects of drugs on a specific individual. Because stem cells can
differentiate into any cell type in the body, they can also be used to generate different types of cells
and tissues for research purposes. Thus stem cell avatars have attracted significant attention in modern research
in view of their potential applications in medicine and biotechnology.
Figure 1: Origin of Embryonic stem cells from the zygote (oocyte+sperm) to totipotent cells forming the
Blastocyst. Cells from the inner lining of the blastocyst (blastomeres) are separated and cultured as embryonic
stem cells. [Image courtesy:https://www.dreamstime.com]
There are two main types of stem cells: embryonic stem cells (ESCs) and adult stem cells (ASCs).
Embryonic stem cells (ESCs) are derived from the inner cell mass of a blastocyst, which is a very early
stage (3 to 5 days old) of embryonic development. At this stage, an embryo is called a blastocyst and has about
150 cells.
These are pluripotent stem cells, meaning they can divide into more stem cells or can become any type of
cell in the body. This versatility allows embryonic stem cells to regenerate or repair diseased tissue and organs.
Adult stem cells (ASCs): There are two types of adult stem cells. One type comes from fully developed
tissues such as the brain, skin, and bone marrow. They are found in small numbers in most adult tissues
throughout the body. The second type is induced pluripotent stem cells (iPSCs). These adult stem cells are
derived from somatic cells isolated from various tissues and cultured in vitro. The cells are exposed to a
combination of transcription factors, a process called 'reprogramming', to achieve the pluripotent state. Once reprogramming is
completed and the cells express specific pluripotency markers, the resulting cells are induced pluripotent stem
cells (iPSCs). Compared with embryonic stem cells, adult stem cells have a limited ability to give rise to various
body cells.
Stem cells are classified according to their origin, possibly from the embryo, umbilical cords or adult
tissue. Cord and ES cells are generally said to be equivalent to pluripotent blastomeres, while adult
stem cells (also found in children and adults) span a range of plasticity from
pluripotent to multipotent (Table 1).
Table 1: A Comparison of the Different Types of Stem Cells in Terms of Plasticity

Level of plasticity | Type of cells | Cell types obtainable
Totipotent | Zygote | All cell types; develops into an embryo.
Pluripotent | Embryonic stem cells (ES cells) | All cell types; do not develop into an embryo.
Pluripotent | Umbilical stem cells | Same as ES cells.
Pluripotent to multipotent | Adult stem cells (AS cells): (a) haematopoietic stem cells; (b) mesenchymal stem cells; (c) others (collection methods still not well characterised); (d) transdifferentiation (under special conditions) | (a) red and white blood cells, immune cells, others; (b) lipocytes, cartilage, bone, tendon and ligaments, myocytes, skin cells, neurons, etc.; (c) neurons, hepatocytes, and pneumocytes; (d) crossing of germ layers.
Totipotent stem cells are capable of giving rise to all other types of cells. This includes the fertilised egg
and the subsequent new cells formed until approximately day 4 of development.
Pluripotent stem cells refer to embryonic stem cells (ESCs) and induced pluripotent stem cells. These
cells can give rise to all types except those that form the amniotic sac and the placenta. ESCs (embryonic
stem cells) come from the inner cell mass in the blastocyst, while iPSCs (induced pluripotent stem cells) are
generated from somatic cells.
Multipotent stem cells, also called adult stem cells, are cells that can develop into closely related
cell types.
Figure 2: Culture of adult stem cells (iPSCs). The adult somatic cells are cultured, reprogrammed
into induced pluripotent stem cells, and finally differentiated into specified organ cells.
[Image courtesy: https://images.app.goo.gl]
1.1 What is the procedure behind stem cell therapy?
Figure 3: The origin of totipotent, pluripotent, and multipotent types of stem cells.
[ Image courtesy: https://theory.labster.com/stem-cell-types/]
1.1 What is the procedure behind stem cell therapy?
The procedure behind stem cell technology depends on the type of stem cells used and the specific
application. However, some general steps are often involved in the process:
Source of Stem Cells: The first step is to obtain a source of stem cells. This could be embryonic stem
cells derived from a blastocyst or adult stem cells obtained from bone marrow or adipose tissue.
Isolation and Culturing: Once the stem cells are obtained, they are isolated and cultured in the laboratory.
This involves growing the cells in a special culture medium that provides the necessary nutrients and growth
factors for the cells to divide and differentiate.
Differentiation: Depending on the application, the stem cells may need to be induced to differentiate into
specific cell types. This can be done by exposing the cells to specific growth factors or chemicals that stimulate
differentiation.
Transplantation: If the goal is to use stem cells for regenerative medicine, the differentiated cells can be
transplanted into the patient's body. This can be done through various methods, such as injecting or implanting
the cells into the damaged tissue.
Monitoring and Follow-up: After transplantation, the patient’s progress is monitored to ensure that the
cells function as intended and that there are no adverse effects.
In some cases, additional treatments or follow-up procedures may be required.
Figure 4: Steps involved in stem cell therapy.
[Image courtesy: https://www.drkraghu.in/home]
1.2 Applications of stem cells in modern research
Stem cells have wide applicability in medical science, drug development, and patient care. In drug discovery,
they enable patient-specific drugs based on the patient's own stem cells: candidate drugs can be developed and
tested on these cells even before being administered to the patient.
Researchers can test how new drugs interact with specific cell types by growing stem cells in the laboratory
and inducing them to differentiate into those cell types.
Figure 5: A schematic diagram of the use of stem cells in drug discovery.
Stem cell therapy also plays a significant role in Disease Modelling. Stem cells can be used to create
models of human diseases, which can help researchers study the underlying mechanisms of these diseases and
develop new treatments.
Figure 6: Somatic cells from the patient generate diseased iPSCs. These diseased iPSCs may be repaired by
gene therapy and used to create healthy somatic cells to be transplanted into the patient. They may also be
used to produce unrepaired somatic cells for disease modelling or drug screening.
[Image courtesy: https://www.frontiersin.org/articles/10.3389/fcell.2015.00002/full]
The use of iPSCs (induced pluripotent stem cells) for disease modelling is based on the fact that they are
capable of self-renewal and can differentiate into all cell types of the human body, which can be utilised to
prepare models of different diseases for study. Moreover, patient-specific iPSCs could be widely used in
developing specific therapeutic regimens and drugs.
Regeneration of damaged or diseased tissues is a major application of stem cell therapy. Its potential in
treating conditions such as spinal cord injuries, heart muscle damage due to heart disease, and diabetes opens
various avenues for further research.
1.3 Promises of Stem Cells In Regenerative Medicine
Figure 7: Promises of stem cells in regenerative medicine: the six classes of stem cells, that is, embryonic stem
cells (ESCs), tissue-specific progenitor stem cells (TSPSCs), mesenchymal stem cells (MSCs), umbilical cord
stem cells (UCSCs), bone marrow stem cells (BMSCs), and induced pluripotent stem cells (iPSCs), have many
promises in regenerative medicine and disease therapeutics.
[Image courtesy: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4969512/]
Tissue engineering using stem cells is a popular approach that opens new methods for growing tissues for organ
transplantation. This could eliminate the need for organ donors and reduce the risk of implant rejection.
Overall, stem cells have the potential to revolutionise medicine and significantly improve our ability to treat
a wide range of diseases and conditions. It's important to note that stem cell technology is still in the early
stages of development, and much research is still needed to fully understand its potential and limitations.
Additionally, there are ethical concerns surrounding the use of embryonic stem cells, which have led to the
development of alternative sources of stem cells, such as induced pluripotent stem cells (iPSCs).
References:
1. Agius, C.M., and R. Blundell (2007), Medwell Journals, International Journal of Molecular Medicine and Advance Sciences, 3(4): 145-150.
2. https://stemcellres.biomedcentral.com/articles/10.1186/s13287-019-1165-5
3. https://www.stanfordchildrens.org/en/topic/default?id=what-are-stem-cells-160-38
4. https://www.researchgate.net/publication/261041118
5. https://www.nature.com/articles/s41392-022-01134-4
6. https://www.dreamstime.com/%D0%BE%D1%81%D0%BD%D0%BE%D0%B2%D0%BD%D1%8B%D0%B5-rgb-image163320490
7. https://images.app.goo.gl/tMDcDzvo5Sbo9CVt6
8. https://theory.labster.com/ips
9. https://theory.labster.com/stem-cell-types/
10. https://www.drkraghu.in/home
11. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4969512/
12. https://www.frontiersin.org/articles/10.3389/fcell.2015.00002/full
13. https://stemcellres.biomedcentral.com/articles/10.1186/s13287-019-1455-y
About the Author
Geetha Paul is one of the directors of airis4D. She leads the Biosciences Division. Her research
interests extend from Cell & Molecular Biology to Environmental Sciences, Odonatology, and Aquatic Biology.
Part IV
General
Beyond the Fluffy: The Enchanting World of
Clouds
by Robin Jacob Roy
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Clouds in the sky are one of the most beautiful and fascinating natural phenomena that we can witness in
our everyday lives. These masses of condensed water vapor float in the sky, taking on a variety of shapes and
sizes that can be both enchanting and awe-inspiring.
Clouds form when water vapor in the atmosphere condenses into tiny water droplets or ice crystals; the type
of cloud that forms depends on the altitude, temperature, and humidity of the air. Cloud formation starts when
the air contains as much water vapor (gas) as it can hold; this is called the saturation point. Saturation can
be reached in two ways. Either the air accumulates moisture until it reaches its maximum capacity, or the
temperature of the moisture-laden air is reduced, lowering the amount of moisture it can hold. Once the air is
saturated, further moisture condenses into visible water droplets in the form of fog or clouds.
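The cooling route to saturation can be illustrated numerically with the Magnus approximation for saturation vapour pressure (the coefficients are the commonly quoted ones; the 20 hPa vapour content is an assumed example, not a figure from this article):

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

# Assumed example: air holding 20 hPa of water vapour is cooled step by step.
# At 25 C it is unsaturated, but by 15 C the saturation pressure falls below
# 20 hPa and the excess moisture must condense into droplets.
e = 20.0
for t in (25, 20, 15):
    es = saturation_vapor_pressure(t)
    state = "saturated -> condensation" if e >= es else "unsaturated"
    print(f"{t} C: e_s = {es:.1f} hPa, {state}")
```

This shows the second route to saturation in action: the vapour content never changes, only the air's capacity to hold it.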
Clouds play a crucial role in the Earth's climate system, regulating the amount of incoming solar radiation
that reaches the Earth's surface and controlling the planet's energy balance. Clouds can indicate the likelihood
of precipitation and help regulate the temperature of the Earth's surface. They can also create stunning displays
of color during sunrise and sunset, as light interacts with the water droplets in the clouds.
1.1 Cloud Formation
Cloud formation can occur in several ways. One of the most common ways is through the process of
adiabatic cooling. When warm, moist air rises, it expands and cools due to the lower atmospheric pressure at
higher altitudes. As the air cools, it can no longer hold as much moisture, so water vapor begins to condense
into tiny water droplets or ice crystals, forming clouds.
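A widely used rule of thumb (Espy's equation) turns this adiabatic-cooling picture into a quick estimate of cloud-base height; the rates in the comments are approximate textbook values, and the example temperatures are assumptions:

```python
def cloud_base_height_m(temp_c, dew_point_c):
    """Espy's rule of thumb for the lifting condensation level, in metres.

    A rising parcel cools at roughly 9.8 C/km (the dry adiabatic lapse
    rate) while its dew point drops far more slowly, so temperature and
    dew point converge at about 8 C/km, i.e. ~125 m per degree of spread.
    """
    return 125.0 * (temp_c - dew_point_c)

# Surface air at 30 C with a 22 C dew point: cloud base near 1 km.
print(cloud_base_height_m(30.0, 22.0))
```

The closer the dew point is to the air temperature, the lower the cloud base, which is why humid days produce low, heavy-looking clouds.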
Another way clouds can form is through the process of frontal lifting. When two air masses of different
temperatures and humidities meet, the warmer, moister air is forced to rise over the cooler, drier air, causing the
moisture to condense into clouds.
1.2 Types of Clouds
Clouds are named based on two criteria. One is their location in the sky: some clouds are high up, while others
are closer to the Earth's surface. Low clouds can even touch the ground and are known as fog,
Figure 1: Major cloud types occurring at various altitudes. Source: Extraction, classification and visualization
of 3-dimensional clouds simulated by cloud-resolving atmospheric model, Daisuke Matsuoka, International
Journal of Modeling Simulation and Scientific Computing, April 2017.
while middle clouds are found between low and high clouds.
Another way of naming clouds is based on their shape. High clouds are called cirrus clouds, which
resemble feathers. Middle clouds are referred to as cumulus clouds, which look like large cotton balls in the
sky. Stratus clouds, on the other hand, are low clouds that cover the sky like bed sheets.
Here are some of the most common types of clouds which are distinguished based on their altitude, shape,
and composition:
1. Cumulus clouds: These clouds are white, fluffy, and usually have a flat base with a dome-like top. They
form when warm, moist air rises and cools, causing water vapor to condense into visible clouds.
2. Stratus clouds: Stratus clouds are low-lying and usually cover the entire sky in a uniform layer. They
can be gray or white and can form in both warm and cold air masses.
3. Cirrus clouds: These clouds are high up in the atmosphere and are thin and wispy in appearance. They
are made up of ice crystals and are often associated with fair weather.
4. Cumulonimbus clouds: These are large, towering clouds that are often associated with thunderstorms.
They have a flat base and a dome-like top and can reach up to 60,000 feet in height.
5. Altostratus clouds: These clouds are mid-level clouds that form in the middle of the troposphere. They
are often gray or blue-gray in appearance and can produce light precipitation.
6. Stratocumulus clouds: These clouds are low-lying and often appear as rounded masses or rolls. They
can be white, gray, or dark in appearance.
7. Nimbostratus clouds: These clouds are dark and usually cover the entire sky. They are often associated
with heavy precipitation and can form at any altitude.
Understanding the various types of clouds can help in predicting weather patterns and making weather forecasts.
Major cloud types are shown in figure 1.
1.3 What Causes Rain?
Rain and snow are both forms of precipitation that form when water vapor in the atmosphere condenses
into liquid or solid water particles and falls back to the Earth's surface.
Here’s the process of how rain and snow form from clouds:
1. Water vapor: The process begins with the presence of water vapor in the atmosphere. Water vapor is an
invisible gas created when water evaporates from the Earth's surface and rises into the atmosphere.
2. Cloud formation: As water vapor rises, it cools and condenses into tiny water droplets or ice crystals,
forming clouds. Clouds can form at different altitudes, depending on the temperature and humidity of the
air.
3. Precipitation formation: As the water droplets or ice crystals in the clouds grow larger and heavier, they
eventually fall back to the ground as precipitation. The type of precipitation that forms depends on the
temperature of the atmosphere at different levels.
4. Rain formation: Rain forms when the temperature in the lower atmosphere is above freezing, causing the
water droplets to remain in a liquid state as they fall to the ground.
5. Snow formation: Snow forms when the temperature in the lower atmosphere is below freezing, causing
the water droplets to freeze into ice crystals. The ice crystals may then continue to grow as more water
vapor freezes onto them. The snowflakes become heavier and eventually fall to the ground.
6. Other forms of precipitation: Other forms of precipitation can also form, such as sleet or freezing rain,
when the temperature in the lower atmosphere is near or just above freezing, causing the precipitation to
partially melt before it reaches the ground.
Overall, the process of rain and snow formation from clouds is a complex interaction between temperature,
humidity, and air pressure in the atmosphere.
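The temperature dependence in steps 4 through 6 can be sketched as a toy decision rule; the thresholds below are illustrative assumptions, since, as the text notes, real precipitation type depends on the temperature profile of the whole atmospheric column, not the surface value alone:

```python
def precipitation_type(surface_temp_c):
    """Toy precipitation classifier from surface temperature alone.

    Thresholds are illustrative: real forecasts consider the temperature
    at every level the falling particle passes through.
    """
    if surface_temp_c <= 0.0:
        return "snow"                  # droplets freeze into ice crystals
    if surface_temp_c < 2.0:
        return "sleet/freezing rain"   # partial melting near freezing
    return "rain"                      # droplets stay liquid on the way down

for t in (-5.0, 1.0, 10.0):
    print(t, "->", precipitation_type(t))
```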
1.4 How Clouds Affect the Earth’s Climate?
Clouds have a significant impact on the Earth's climate. They reflect incoming solar radiation back into
space, which helps to keep the planet cool; this is known as the albedo effect. Clouds also absorb and re-radiate
heat energy, which can trap heat in the Earth's atmosphere and contribute to the greenhouse effect. The net effect
of clouds on the Earth's climate depends on a number of factors, including their type, altitude, and coverage.
Low-level clouds, such as stratus clouds, tend to have a cooling effect on the Earth's climate because they
reflect more sunlight than they absorb. This is particularly true for bright, white clouds, which have a high
albedo. Cumulus clouds, on the other hand, can have a warming effect on the Earth's climate because they
absorb more sunlight than they reflect; cumulus clouds are often darker than stratus clouds, and
thus have a lower albedo.
High-level clouds, such as cirrus clouds, also have a warming effect on the Earth's climate because
they absorb and re-radiate heat energy more effectively than they reflect sunlight. This can lead to a net warming
effect, particularly at night, when the clouds trap heat in the Earth's atmosphere.
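The albedo effect can be quantified with the standard zero-dimensional radiative-balance estimate (a textbook sketch, not a result from this article): absorbed sunlight equals emitted thermal radiation, so a brighter, cloudier planet ends up cooler.

```python
S = 1361.0        # solar constant at Earth, W/m^2
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature_k(albedo):
    """Effective temperature from the zero-dimensional energy balance:
    absorbed sunlight (S/4)(1 - albedo) equals emitted radiation sigma*T^4."""
    return ((S * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25

# Earth's mean albedo (~0.30) gives the familiar ~255 K effective temperature;
# raising the albedo (e.g. more bright cloud cover) lowers it.
print(round(effective_temperature_k(0.30)))
print(round(effective_temperature_k(0.35)))
```

This simple model captures only the reflective (cooling) side of clouds; the greenhouse-trapping side described above requires a model with an atmosphere layer.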
In addition to their impact on the Earth's climate, clouds also play an important role in the water cycle.
They are responsible for distributing moisture around the planet, which is essential for the growth of plants
and the survival of animals. When clouds release their moisture as precipitation, it can recharge groundwater
reserves and replenish rivers, lakes, and other bodies of water.
Overall, clouds are an important factor in the Earth's climate system, and their properties and behavior are
a subject of ongoing research and study by climate scientists.
References:
1. Cloud Formation
2. Cloud Development
3. Cloud Introduction
4. Cloud Types
5. Extraction, classification and visualization of 3-dimensional clouds simulated by cloud-resolving atmospheric model, Daisuke Matsuoka, International Journal of Modeling Simulation and Scientific Computing, April 2017.
About the Author
Robin is a researcher in Physics specializing in the applications of machine learning for remote
sensing. He is particularly interested in using computer vision to address challenges in the fields of biodiversity,
protein studies, and astronomy. He is currently working on classifying satellite images with Landsat and Sentinel
data.
Part V
Fiction
Cranky the Scientist
by Ninan Sajeeth Philip
airis4D, Vol.1, No.4, 2023
www.airis4d.com
Cranky is not his real name. Nobody knows his real name. People call him Cranky because he comes up
with crazy ideas now and then for global problems. He is a wanderer with no domestic issues, which has helped
him focus on global problems like climate change, earthquakes, etc.
But this time, Cranky is not a subject of laughter anymore. He has been nominated for the world
distinguished scientist award. The headlines in the newspapers surprised everyone. Many even turned the
newspaper over to verify the date, to confirm that it was not April first! How could he, among so many
well-known scientists, be nominated for the world distinguished scientist award?
Whether the news is good or bad, all newspapers follow the same policy of crafting a captivating story. It was
no different in Cranky's case. His ancestral history, his childhood stories, and even the things everyone used
to joke about him appeared, with charm and courage sprinkled like salt and pepper on the morning toast: an
overnight transformation of everything he had been ridiculed for.
What is it that he has done? That question was heard from every corner of every room where the paper
was read.
Yes, the newspapers sold out like hot pancakes. Curious eyes swam across the lines like fish dropped into
a new water tank. "Oh my God" was the phrase almost every mouth uttered.
All the newspapers said that Mr Cranky had developed a permanent solution to earthquakes, for which he is
being honoured.
Scientists know that earthquakes are caused by plates that move beneath the surface of the earth's outer
crust. When they move and push over one another, tension grows with time. How much this tension can grow depends
on the strength of each plate that gets pressed. It is like a fight between superpowers, where a cold war
continues until, at one point, it bursts, doing unexpected damage to everyone. Tiny plates break easily, and no
one notices the hundreds of minor earthquakes they cause, which only sensitive instruments can detect.
The scientific model of plate tectonics, or the movement of the tectonic plates in the earth’s lithosphere, is
globally accepted as the cause of earthquakes. It is believed that continents drift apart due to the movement
of the tectonic plates on which they rest. Thus, for example, South America fits perfectly into the lower-left
part of Africa, indicating that they were once part of the same landmass. When these
moving plates detach and push into other floating continents, they sometimes cause the plates to uplift the soil
above and slide over one another. Mount Everest is said to be the consequence of the Indian continent pushing
over the tectonic plates of the Asian continent, causing it to rise so high above the surface. The finding of
fossils of sea species on the cliffs of Everest further substantiates this claim.
What if the plates do not slip over each other and produce tectonic uplift, as in the case of Mount Everest?
It might have taken hundreds of thousands of years to push the plates to the height of Everest. A simple
calculation of the potential energy from high school science will tell us the enormous amount of energy involved
in lifting so much soil to the height of Everest.
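That high-school estimate can be sketched numerically; every number below is an illustrative assumption chosen only to get an order-of-magnitude figure, not a geological measurement:

```python
G = 9.8          # gravitational acceleration, m/s^2
RHO = 2700.0     # typical rock density, kg/m^3 (assumed)
VOLUME = 1.0e12  # assumed block of rock, roughly 10 km x 10 km x 10 km, m^3
LIFT = 4000.0    # assumed average rise of the centre of mass, m

mass = RHO * VOLUME           # ~2.7e15 kg of rock
energy_j = mass * G * LIFT    # potential energy E = m g h
print(f"{energy_j:.1e} J")    # on the order of 1e20 joules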
Imagine this much energy being built up in the plates without the so-called tectonic uplift. There are two
possibilities. One is that the plates slip over one another and continue to move without pushing each other.
The other possibility is catastrophic. The two plates compete with each other, demonstrating their strengths
over hundreds of thousands of years, storing the entire energy to lift Everest as potential energy in the form of
internal stress.
How long can these plates withstand this stress? It is like a wrestling show: when both parties are of
comparable strength, the fight continues until both fall! Likewise, the plates grow their stress until they
reach the breaking point and release their energy into the land around them.
Imagine someone lifting Everest and dropping it into the sea! It would cause the sea to rise in all directions
and flood everywhere. A tsunami is caused when such an eruption happens under the sea. If it happens on land,
it shakes the ground so heavily that the ground behaves like water! That is why buildings fall and trees get
uprooted in no time.
This has all been known for ages, so what is so special about Cranky? He developed the technology for efficiently
releasing the energy stored in the plates without dispersing it to the surroundings. That's clever, isn't it? Yes,
it is, and that's what made him so famous.
What did Cranky do? Well, that is what we call genius. With no formal education or even a family to
support him, Cranky was an orphan who took shelter under shop roofs after the shops closed down for the day.
He has been an eyewitness to several events that many had no time or interest to bother about. Though he was
poor owing to obvious circumstances, he would save every penny he could to spend on anything that fascinated him.
He would be a silent listener to every conversation and every piece of news played on public television. He would
not comment publicly, having become an introvert, one of the hard lessons life had taught him to remain safe.
One day, the TV announced that a building constructed on the coast in a protected zone would be demolished
the following week. Cranky turned out all his pockets, picked up every penny he could gather, and was determined
to witness the demolition technique the TV described in detail. It said it would all be done with perfect timing,
making the building melt down into a pile of dust within the blink of an eye.
Cranky was on time at the building site, which was already flooded with onlookers, as at a festival show
where people wait for the crackers to blast into the sky. The sirens went off one by one, while all could hear
their hearts beat. Lo, the first cracker burst, followed by the second, third and fourth, until it was just
sound alone that none could count. The building collapsed without a shake; as promised, it was powder-thin as
it rolled from top to bottom, making no tremor or sound other than the people's cries.
While it brought tears to everyone, it set off sparks in Cranky's brain. He rushed to the nearest shade and
started replaying all he saw. It was not like a bomb that bursts, shaking everything around, but the silent fall
of a tower without anyone even feeling it. How could that be? That was his thought, and he broke out of his shell
with this wonderful idea of breaking down those tectonic plates. He had no formal training in science but had a
brain that was a genius's own. He could relate it to someone who once said that the best way to destroy an enemy
is to let him destroy himself! He was sure the crackers were not like bombs, though they were strong and loud.
Indeed, their timing and strategic placement worked to unbind the bonds that held the building high and convert
its potential energy into dust.
Eureka! It struck his brain. Here was what would save the world; let no one cry anymore from the wrath
nature bestows without warning. That was his turning point, for he was overheard by a passer-by who was there
on a holiday trip but was himself a famous physicist. No one knew where Cranky went,
Figure 1: Usually, when energy is released through plate tectonics, it is transmitted to all parts of the globe
through the surrounding media and other plates. The strategically guided triggering of plate breaks made this
impossible, and the total energy could be confined to the plates without escaping. This energy was spent
breaking down the plates, turning them into a pile of sand confined to the same volume.
nor did anyone care much where he went. However, they often spoke of him in their casual talks.
Cranky had gone to Japan, where the gentleman had taken him from the shore to work on a project for nature care.
The newspapers carried the news from Tokyo, quoting the famous scientist Cranky, a genius in line with Einstein
and Newton. He had given them the clue to release the hundred thousand years of stress and to break down
the plates beneath, powdering them by their own strength without the land around ever noticing! They all
remarked that the logic is simple, but Cranky had stoned Goliath on his forehead, saving the land we all live in.
Had he not done it so well, the growing power in those plates would have drowned all life in Japan on the day
they finally burst. They had indeed succeeded in crunching all of it to dust and grain without even the
sensitive seismometers noticing, which is to the credit of Cranky, the scientist. Cranky's scientific
contribution is the following. Usually, when energy is released through plate tectonics, it is transmitted to
all parts of the globe through the surrounding media and other plates. The strategically guided triggering of
plate breaks made this impossible, and the total energy could be confined to the plates without escaping. This
energy was spent breaking down the plates, turning them into a pile of sand confined to the same volume.
This is how he managed to release the tension that had prevailed for centuries beneath the surface.
About the Author
Professor Ninan Sajeeth Philip is a Visiting Professor at the Inter-University Centre for Astronomy
and Astrophysics (IUCAA), Pune. He is also an Adjunct Professor of AI in Applied Medical Sciences [BCMCH,
Thiruvalla] and a Senior Advisor for the Pune Knowledge Cluster (PKC). He is the Dean and Director of airis4D
and has a teaching experience of 33+ years in Physics. His area of specialisation is AI and ML.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the era of the 4th Industrial Revolution, which operates on cyber-physical production systems.
Cutting-edge research and development in science and technology, which create new knowledge and skills, have
become the key to the new world economy. Most of the resources for this goal can be harnessed by integrating
biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial
intelligence Research and Intelligent Systems pursue this vision and look for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist the talents to materialise a design into a product, a service, a knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated on a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company
for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by solar power and has a rainwater harvesting facility that provides a sufficient
water supply for up to three months of drought. The computing facility on the campus is accessible from anywhere
through dedicated 24×7 optical fibre internet connectivity.
There is a freshwater stream that originates in the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that
the site has consistently been environment-friendly and rich in biodiversity. airis4D is also growing fruit plants
that can feed birds, and maintains water bodies to help them survive the drought.