Cover page
This James Webb Space Telescope image shows the gravitational lensing produced by the galaxy cluster SMACS
0723, which is at a distance of 4.6 billion light years from us. Image credit: NASA, ESA, CSA, and STScI
Light travels in straight lines, and that's why we can see the images of objects in their proper proportion by
focusing them in our eyes. Now imagine looking at the reflection of the trees and sky in your yard on a cut-glass
vessel kept on a table: the same trees and sky would look distorted. Something similar happens when massive galaxies
curve and distort the space-time around them. In the image, distant galaxies appear distorted, as if they are all
elongated or curved. Albert Einstein predicted this in his General Theory of Relativity about a hundred years
ago. Fascinating science, isn't it?
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamootil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.1, No.2, 2023
www.airis4d.com
Immersed Education and Learning
Introduction
The gulf between academia and industry is the theme that most industrialists and academicians dwell on
in convocation lectures and in education and industry policy discussions. Of late, "innovation" and
"sustainability" have become the main subjects of debate for industry and academia in the post-Covid era. An
industry that does not switch to continuous innovation tends to fall behind and gradually ceases
to exist. Global warming, shortage of natural resources, rapid advancement of technologies, scarcity of skilled
human resources, resistance to adapting and reskilling for the future, and similar issues all affect the innovation
and sustainability of industry. The situation of education and the ends of learning institutions are also changing
dramatically: new sources of knowledge, and access to them, have been revolutionised in an unprecedented way. The aim of education
has slowly evolved from seeking a job towards entrepreneurship and start-ups. Thus, innovation and
sustainability are buzzwords, targets, missions, and aspirations for both industry and the academy.
Immersed Education
An immersed learning ecosystem bridges the gap between industry and academia. Immersed education
or learning is familiar to many American universities in the context of the Industry-Institute-Interaction discussion.
Its core aim is to innovate collaboratively, bringing together the young minds of academia and the professionals of
industry to sustain our shared future.
New Resources
Immersed learning is also essential in finding new resources for humanity and industry to survive. The United
Nations proposed a triple bottom line (TBL) for the post-colonial development and economies of many countries. The
colonies were rich resources for the development of the colonisers; colonisation also impoverished the colonies and began
depleting their natural resources. Our planet has always been a rich resource for our survival and for the growth of
industry, but contemporary global issues make the search for new resources urgent and alarming. Many countries
and institutions need a vision and a plan to find the necessary resources.
First, there should be a vision that can make our planet secure and safe for humanity. Secondly, rather
than sustainability, humanity should emphasise well-being. Thirdly, society should realise that creating wealth
and profit is not a sin. Additionally, acquiring reasonable and responsible prosperity by individuals provides
self-esteem, and it will build a society of generous and responsible citizens.
This triple bottom line (TBL) of development may give way to the quadruple bottom line (QBL), adding
the word plenitude (Planet, People, Prosperity and Plenitude). The planet and people are looking for new
resources; otherwise, humanity and the earth cannot sustain themselves. The third industrial revolution, or
information revolution, gave rise to a knowledge society and economy. Besides the old economy's labour and capital
(land), knowledge has become the primary means of production. Virtual resources and money are vital for
industry. Human resources and skilling become essential to a country's ability to create new resources. In
this context, protecting the planet, building people as responsible citizens, and earning profit also need new
resources. Reinvesting in education and students is fundamental for society and, in particular, for industry.
Industry should shift from maximising profits, the usual mantra for success, to investing in immersed education for
innovation in existing enterprises. Immersed learning is beneficial, profitable and productive for both industry
and academia.
Plenitude
Human ingenuity can create plenitude or abundance through science, technology, and new resources.
Technological advancements are dramatically changing the world. The scientific community can develop,
manage and protect abundant resources for the world; Covid vaccines and their management are excellent examples of
development at both the micro and macro levels. The world has started working on new issues: "water, water everywhere but not a drop
to drink", "the Sun gives plenty of energy, more than humanity needs", "Recycle, Reduce, Reuse", and so on.
SPICE
The immersed learning methodology is part of the Triple I (Industry-Institute-Interaction) collaboration
programme with M. G. University, Kottayam, and the Central Travancore Chamber of Commerce and Industry.
airis4D is an implementing agency of the Central Chambers for this model. The model showcases and shares
possibilities with industry and academia. As an immersed calling, the project provides exposure and experience
for students and professors, while industry collaborates on research into advancements in
science and technology.
Showcasing (Sharing) Possibilities and Immersed Calling for Experience (SPICE) Programme intends:
1. to develop a new knowledge-based business ecosystem;
2. to showcase MSME products and services;
3. to identify new resources (natural, knowledge, and HR);
4. to pin down prospects and possibilities of the existing enterprises; and
5. to encourage new people, especially students willing to invest or set up start-ups.
SPICE Programme Showcases
1. Students as a Source of Ideas and Inspiration
2. Industry as a Meeting Point of Challenges
3. University as Strength and Knowledge
4. Entrepreneurs as the Growth of a Country and
5. The Central Travancore region as a Land of Opportunity and Resources.
The SPICE Programme is planned for an ”Immersed Calling for Experience” in and with the existing
industry and business enterprises (Immersed Learning or Education).
In the context of India's jobless growth, "what we need is not mass production, but production by the
masses", as Mahatma Gandhi envisioned. The MSME sector needs to modernise with the help of digital
technology, professional management, and a better scale of operations. The Indian MSME sector has many
hidden treasures waiting for this makeover. Take the case of Indian and Kerala restaurants and cuisine run with
limited resources, which can be revived and taken to the world stage by branding them as global businesses in scale and
sophistication. Taiwan's model of small enterprise assembly units can be incorporated into Kerala's
family and self-help-group (Kudumbashree) networks as an opportunity space. Digitisation, partnership, branding, and recognition,
with the help of the Government and the corporates, will create a win-win situation. The Government of India's
new e-commerce platform, ONDC (Open Network for Digital Commerce), is an ambitious and strategic vision
that could unleash India's entrepreneurial dividend. (C. Sarat Chandran, LSE, The Hindu, Sep. 14, 2022)
The SPICE Programme can provide hands-on training and immersed experience or learning in upcoming
areas such as AI, Robotics, Automation, Nanotechnology, IoT, Wearable Medical Devices, Bio 3D printing, etc.,
for students, entrepreneurs, and MSMEs seeking to upgrade their enterprises.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Data Driven learning 2
1.1 Data 2
1.2 5Vs of Big Data 3
1.3 Significance of Data in Machine Learning 4
1.4 Types of Data 4
1.5 Generating data using GAN 6
1.6 Data for ChatGPT 6
1.7 Conclusion 7
2 Machines that can understand Human Languages 8
2.1 Lexical Analysis 8
2.2 Syntactic analysis 10
2.3 Semantic Analysis 11
2.4 Discourse Integration 12
2.5 Pragmatic analysis 12
3 AI over a Coffee 14
3.1 The dumb and powerful computers of the past 14
3.2 New-Age computers that can learn 14
3.3 A.I. in science and engineering 15
3.4 Primitive learning algorithms 16
3.5 Artificial Neural Networks 16
II Astronomy and Astrophysics 18
1 Applications of Satellite Imaging 19
1.1 Use cases of Satellite imaging 20
1.2 Challenges 21
2 Eclipsing Binaries Part- 2 23
2.1 Introduction 23
2.2 Classification according to the physical characteristics of the components 23
2.3 Classification based on the degree of filling of inner Roche lobes 23
2.4 Some examples of Eclipsing binaries 25
III Biosciences 27
1 An introduction to Molecular Biology 28
1.1 Central Dogma 28
1.2 How does the Gene expression connected to the Central Dogma of Molecular Biology? 30
IV Computer Programming 32
1 Remote Sensing and Artificial Intelligence in Agriculture 33
V Fiction 36
1 Curves and Curved Trajectories 37
Part I
Artificial Intelligence and Machine Learning
Data Driven learning
by Blesson George
airis4D, Vol.1, No.2, 2023
www.airis4d.com
According to The Economist, the world’s most precious resource in the last century was oil but now it is data.
The significance of data in business and industry can be understood from the fact that 97 percent of organisations
worldwide utilise it to fuel their commercial opportunities and 76 percent use it as a vital component of their
business plans. Moreover, data has become a tradable commodity that can be purchased and sold. The primary
source of revenue for IT behemoths such as Google and Facebook is the data of their customers. The fourth
industrial revolution (Industry 4.0), commonly referred to as the "Data Revolution", involves integrating new
technologies into manufacturing processes and operations, including the Internet of Things (IoT), cloud
computing and analytics, artificial intelligence, and machine learning. The main product or outcome of this
revolution thus far has been Big Data, the vast amounts of information that are now available. In
today's world, data is the true wealth. According to the deliberations at the World Economic Forum, whoever
controls data will have influence over the world in the future. Consequently, data has become the most important
factor in the modern world: the life of humankind is increasingly shaped by the control of data, and the data we
generate controls or influences our decision-making.
1.1 Data
Data may be described as a formalised representation of facts, concepts, or instructions that is appropriate
for communication, interpretation, or processing by a human or a computer. It includes the information obtained
via observations, measurements, study, or analysis. They may include numbers, names, images, videos, and
even descriptions of objects.
1.1.0.1 Big Data
Big data refers to larger, more complex data collections, particularly those derived from novel data
sources. These data sets are so extensive that conventional data processing software is incapable of handling
them. Big Data is characterised by several Vs.
1.2 5Vs of Big Data
1.2.1 Volume
By 2025, it is anticipated that the volume of data created, consumed, duplicated, and stored will exceed
180 zettabytes (one zettabyte is 10^21, or 1,000,000,000,000,000,000,000, bytes). The total quantity of data produced and consumed
in 2020 was 64.2 zettabytes. In the past eleven years, the amount of data created, captured, duplicated, and
utilised on a global scale has increased by roughly 5,000 percent, a substantial rise in data use from
1.2 trillion gigabytes to 59 trillion gigabytes.
1.2.2 Velocity
The transition to digital business is creating a tremendous increase in the volume of data produced, used,
and stored by enterprises. Additionally, it is accelerating the velocity of data, or the rate at which data is
changing. The fast generation and storage of vast quantities of data in computer systems is known as data
explosion. Businesses worldwide have already accumulated more data than they can handle, in large part
because of how easily information can be shared across linked devices.
Figure 1.1: Graph illustrating the rise in the creation of worldwide data. The sudden rise in the data is referred to
as a data explosion. Image courtesy: Kleiner Perkins
1.2.3 Variety
Variety refers to the many types of data that are available. Data is mostly found in two forms:
structured and unstructured. Structured data refers to information that is concise, factual, and well organised;
it is simple to search and evaluate, follows a standard format, and is comprehensible to machines.
Unstructured data, in contrast, lacks a preset structure or model. It
demands a large amount of storage space and is difficult to secure. It cannot be presented in a data model or
schema. Therefore, it is difficult to manage, analyse, or search unstructured data. It exists in several formats,
including text, photos, audio and video files, and so on. Most data analytics databases cannot analyse unstructured data
because they are designed for structured data and are not suited to handle unstructured data. Working with
unstructured data therefore presents various obstacles, such as the difficulty of finding pertinent, high-quality data
and the need for additional processing.
In recent years, two other Vs have emerged: value and veracity. Data has inherent worth. However, it is
useless until its worth is revealed. Equally vital are the veracity and dependability of your data.
1.3 Significance of Data in Machine Learning
Machine Learning can be ultimately defined as the process of creating learned models. A model is a set
of algorithms and parameters that are trained on a dataset in order to make predictions or decisions about new
data. Training and prediction are two important steps in machine learning.
In machine learning, training a model involves using a dataset to adjust the model’s parameters so that it
can accurately predict the outputs for the training data. The process typically involves feeding the model a set
of input-output pairs, and adjusting the parameters so that the difference between the model’s predicted output
and the actual output is minimized.
Once the model is trained, it can be used for prediction. This process involves providing the model with
new input data and having the model return a predicted output. For example, if the model is trained on a dataset
of images and their corresponding labels (e.g. ”dog” or ”cat”), the model can then be used to predict the label
of a new, unseen image.
It’s important to note that a model’s performance on the training data is not necessarily indicative of its
performance on new, unseen data. To evaluate a model’s ability to generalize to new data, a separate dataset
called the validation set is used. The model is trained using the training set and then the performance is evaluated
using the validation set.
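As a concrete illustration of the training, prediction, and validation steps described above, here is a minimal sketch using scikit-learn (the library choice is an assumption; the article does not name one). The iris dataset is only a stand-in for any labelled feature set.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative data: features (X) and labels (y) for a small classification problem.
X, y = load_iris(return_X_y=True)

# Hold out part of the data as a validation set to estimate generalisation.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)          # training: adjust parameters on input-output pairs
val_pred = model.predict(X_val)      # prediction: apply the fitted model to unseen data

print("validation accuracy:", accuracy_score(y_val, val_pred))
```

The validation accuracy, not the training accuracy, is the number that hints at how the model will behave on genuinely new data.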
A sufficient quantity of high-quality data is required to obtain a successful ML model. ”Sufficient data”
refers to having enough data to train a model so that it can accurately make predictions or decisions about new,
unseen data. The amount of data required for a model to be considered sufficiently trained can vary depending
on the complexity of the model and the variability of the data. A simple model, such as linear regression, may
require less data to be sufficiently trained than a more complex model, such as a deep neural network. Similarly,
a dataset with low variability may require less data than a dataset with high variability. In general, the more data
a model is trained on, the better it will perform on new, unseen data. However, at a certain point, adding more
data will not significantly improve the model's performance. This is known as the "law of diminishing returns"
for data. Additionally, having a diverse dataset is also important to make sure that the model generalizes well
to new, unseen data rather than just memorizing the training data.
1.4 Types of Data
Machine Learning is often divided into three categories based on the training data: learning from labeled, partially
labelled, and unlabeled data is referred to as supervised, semi-supervised, and unsupervised learning, respectively.
In each of these methods, a machine detects statistical patterns or regularities within the data.
In supervised learning, data is typically divided into two parts: the feature set and the label. The feature set
represents the characteristics or attributes of the data samples, while the label represents the class or category
that the data sample belongs to.
Figure 1.2: Data can often be labeled, unlabelled, or a combination of both. Based on the kind of data
labeling, machine learning algorithms may be divided into three basic categories: supervised, unsupervised, and
reinforcement learning. In supervised learning, the aim is also to discover an appropriate label for each data point,
whereas, in unsupervised learning, data points are categorized into several groups. In reinforcement learning,
training is based on rewarding desired and penalizing undesired behaviors. Image Courtesy: https://viso.ai/deep-learning/deep-learning-vs-machine-learning
For example, in an image classification task, the feature set would be the pixel values of the image, and the
label would be the class of the object in the image (e.g. "dog", "cat", "car", etc.).
During the training phase, the supervised learning algorithm uses the feature set and the corresponding
labels to learn the relationship between the features and the classes. Then, during the testing phase, the algorithm
uses the learned relationship to predict the labels of new, unseen data samples based on their feature set. In
classification tasks, we approximate a mapping between discrete input variables (X) and discrete output variables
(y). In contrast, the output in a regression task is a continuous variable.
There are many different types of data and labels depending on the task and the domain. Here are a few
examples:
Image classification: The data is typically images, and the labels are the classes of objects in the images
(e.g. "dog", "cat", "car", etc.).
Sentiment analysis: The data is typically text, and the labels are the sentiment of the text (e.g. "positive",
"negative", "neutral").
Speech recognition: The data is typically audio, and the labels are the transcribed speech (e.g. "hello",
"goodbye", "yes", "no").
Object detection: The data is typically images or videos and the labels are the bounding boxes and class
of the objects in the images or videos.
Time series forecasting: The data is typically a sequence of values over time, and the labels are the future
values.
Anomaly Detection: The data is typically a sequence of values over time, and the labels are the indicator
of normal or abnormal values.
Figure 1.3: Image demonstrating the different types of labels for various image segmentation algorithms. In
the image recognition task (top-left), the image as a whole is labelled to identify the animals it contains, whereas
in the object detection technique (bottom-left) the coordinates of the objects inside the image are the target labels.
Each pixel of the image is assigned a label (color) in the semantic segmentation method (top-right), so that objects of
the same class share the same color. Instance segmentation (bottom-right), on the other hand, treats all objects
as unique and assigns them separate labels. Image Courtesy: https://www.reddit.com/r/learnmachinelearning/comments/kt0hov/difference-in-image-classification-semantic
1.5 Generating data using GAN
A Generative Adversarial Network (GAN) is a type of deep learning model that is used for generating new,
previously unseen data samples that are similar to a given training dataset.
A GAN consists of two main components: a generator and a discriminator. The generator is trained to
generate new data samples that are similar to the training data, while the discriminator is trained to distinguish
between the generated samples and the real training data.
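To make the two-network setup concrete, here is a minimal, hedged sketch of a GAN in PyTorch (the framework choice and all layer sizes are my assumptions; the article does not specify any). It works on flattened 28x28 grey-scale images and shows a single adversarial update step rather than a full training pipeline.

```python
import torch
import torch.nn as nn

latent_dim = 64  # size of the random noise vector fed to the generator

generator = nn.Sequential(            # maps noise to a fake 28x28 image (flattened)
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores how "real" a (flattened) image looks
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images has shape (batch, 784), values in [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Update the discriminator on real and generated samples.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Update the generator so that its samples fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example call with random tensors standing in for a batch of real training images:
d_loss, g_loss = train_step(torch.rand(32, 28 * 28) * 2 - 1)
```

Repeating this step over many batches pushes the two networks into the competition described above, with the generator gradually producing samples the discriminator can no longer tell apart from real data.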
The data used for GANs can be any type of data that can be modeled as a probability distribution, such as
images, videos, audio, text, etc. For example, a GAN can be trained on a dataset of images of faces and then
used to generate new, previously unseen images of faces that are similar to the training data. The generated
images can be used in a variety of applications, such as image synthesis, image-to-image translation, and data
augmentation.
1.6 Data for ChatGPT
ChatGPT is a variant of GPT-3, which is a large-scale language model developed by OpenAI. Since its
release, GPT-3 and ChatGPT have received significant attention in the media and among researchers, developers,
and the general public. It is considered one of the most powerful language models to date, and it has been
widely used in a variety of applications, such as natural language understanding, text generation, question
answering, and more.
The training data for GPT-3 includes a wide variety of sources such as web pages, books, articles, and
more. The corpus of text used to train GPT-3 is estimated to be around 570GB of text, which includes a diverse
range of text styles, formats, and genres. The training data is also filtered to remove any sensitive information
or personally identifiable information (PII) to protect users' privacy.
The data is used to train the model to generate natural language text that is similar to the training data, and
the model is capable of understanding and responding to a wide variety of natural language inputs, including
questions, statements, and prompts.
1.7 Conclusion
Data is the foundation of machine learning and having high-quality data is crucial for training accurate
models. Without enough data, it can be difficult to train a model that generalizes well to new data, and with
poor-quality data, it can be difficult to train a model that makes accurate predictions.
There are several limitations in handling big data, including:
Storage: Storing and managing large amounts of data can be costly and challenging, especially when
dealing with structured and unstructured data.
Processing power: Processing and analyzing big data requires significant computational resources, which
can be expensive and time-consuming.
Data quality: Ensuring the quality and accuracy of big data can be difficult, as it often includes incomplete,
inconsistent, and duplicate information.
Data security: Protecting and securing big data from unauthorized access and breaches is a major concern,
especially when dealing with sensitive information.
Data privacy: Ensuring compliance with data privacy regulations, such as GDPR and CCPA, can be
difficult when handling big data.
Data integration: Integrating and combining different types of data from various sources can be challeng-
ing, and may require significant effort and specialized skills.
Scalability and Elasticity: As data grows, it’s important that the infrastructure should be able to grow
with it. This is difficult to achieve with traditional infrastructure and can be costly.
Despite all of these challenges, emerging technologies such as quantum computing, cloud computing, and
distributed computing, as well as new breakthroughs in hardware technologies such as smart materials for
artificial intelligence, will aid in overcoming these problems.
References
1. The Economist, The world's most valuable resource
2. World Economic Forum Annual Meeting, 2023
3. Statista, Worldwide Data Created
4. Medium.com, Data Science: The 5 V's of Big Data
5. Amazon AWS, What is data labeling for machine learning?
6. Chandan Gaur, Top 6 Big Data Challenges and Solutions to Overcome
About the Author
Blesson George is currently working as Assistant Professor of Physics at CMS College Kottayam,
Kerala. His research interests include developing machine learning algorithms and application of machine
learning techniques in protein studies.
Machines that can understand Human
Languages
by Jinsu Ann Mathew
airis4D, Vol.1, No.2, 2023
www.airis4d.com
Phases of Natural Language Processing
Humans possess an extraordinary ability to perform complex tasks with ease. However, what comes
naturally to us can often prove to be a challenge for machines to learn. One of the most fundamental aspects
of human communication is the use of natural language, which includes both spoken and written forms.
Natural Language Processing (NLP), also known as text analytics, involves making natural language usable
for computational purposes. This is of great significance in today’s world due to the vast amount of text
data generated globally. The process of Natural Language Processing (NLP) is divided into five main stages
or phases (Figure 2.1). These phases start with basic word processing and continue with the identification of complex
phrase meanings. This article will provide a brief overview of each phase and give examples of how they are
applied in information retrieval.
2.1 Lexical Analysis
Lexical analysis, also known as lexing or tokenization, is the first phase of natural language processing
(NLP) in which raw text is processed into a sequence of meaningful elements called tokens. Tokens are usually
words, but they can also be punctuation marks, numbers, or other types of symbols that carry meaning in the
text. The goal of lexical analysis is to identify and extract the tokens from the input text and to provide a stream
of tokens that can be used as input for further analysis. This involves breaking the text into words, identifying
punctuation marks and symbols, and removing any irrelevant characters or whitespace. The process of lexical
analysis can be broken down into several steps:
Character Stream
This is the process of converting input text into a stream of characters.
input text —- "This is an example of character stream."
output —- ['T', 'h', 'i', 's', ' ', 'i', 's', ' ', 'a', 'n', ' ', 'e', 'x', 'a', 'm', 'p', 'l', 'e', ' ', 'o', 'f', ' ', 'c', 'h', 'a', 'r', 'a', 'c', 't', 'e', 'r', ' ', 's', 't', 'r', 'e', 'a', 'm', '.']
(image courtesy: https://www.analyticsvidhya.com/blog/2021/05/natural-language-processing-step-by-step-guide/)
Figure 2.1: Phases of Natural Language Processing
Tokenization
This is the process of breaking a string of text into individual words or tokens.
input text —- "This is an example of tokenization."
Output —- ['This', 'is', 'an', 'example', 'of', 'tokenization.']
Normalization
This is the process of converting text into a standard format to improve its consistency and to make it easier
to analyze. Usually words are normalized by converting them into lower case. This is an important step in NLP
as it helps to reduce the dimensionality of the data, by reducing the number of distinct words, and to improve
the consistency and accuracy of the analysis.
input text —- "This is an Example of Normalization"
Output —- "this is an example of normalization"
Stemming
This is the process of reducing words to their base form, typically by removing inflectional endings such
as -ed, -ing, -ly, etc. Stemming does not always produce a valid word, it is more of a heuristic process and may
not always produce the correct root word.
Suppose we have the following words: "computation", "compute", "computer", "computing", "computed".
All these words will be stemmed to "comput".
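The steps above can be strung together in a few lines. The sketch below uses NLTK (my assumption; the article does not prescribe a toolkit) to tokenize, normalize, and stem the example sentences, and assumes the standard 'punkt' tokenizer models have been downloaded.

```python
import nltk
from nltk.stem import PorterStemmer

# nltk.download('punkt')  # one-time download of the tokenizer models (assumed available)

text = "This is an Example of Normalization"

tokens = nltk.word_tokenize(text)           # tokenization: split the text into word tokens
tokens = [t.lower() for t in tokens]        # normalization: convert every token to lower case

stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in tokens]   # stemming: reduce each token to a base form

print(tokens)
print(stems)
# Related word forms collapse to a common (not always valid) root:
print([stemmer.stem(w) for w in ["computation", "compute", "computer", "computing", "computed"]])
```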
Lexical analysis is a crucial step in the NLP pipeline because it provides the basic building blocks for
further analysis.
(image courtesy: https://www.researchgate.net/figure/3-Syntactic-analysis-for-the-white-cat-sat-under-the-table_fig2_271764570)
Figure 2.2: Syntactic analysis for "the white cat sat under the table"
2.2 Syntactic analysis
Syntactic analysis, also known as parsing, is the second phase of natural language processing (NLP) in
which the structure of a sentence is analyzed and represented in a formal format such as a parse tree or a
dependency graph. The goal of syntactic analysis is to understand the grammatical structure of a sentence and
to identify the relationships between the words in the sentence. The syntactic analysis of the sentence "the
white cat sat under the table" is shown in Figure 2.2.
The process of syntactic analysis can be broken down into several steps:
Part-of-speech tagging
This is the process of marking each word in a text with its corresponding grammatical category, such as
noun, verb, adjective, adverb, etc. Part-of-speech tagging is a common step in natural language processing
(NLP) and is used to analyze the grammatical structure of a sentence and to identify the relationships between
words.
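As a quick, hedged illustration, NLTK's default tagger (again an assumed choice of tooling) can tag the example sentence from Figure 2.2:

```python
import nltk

# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')  # assumed available

tokens = nltk.word_tokenize("the white cat sat under the table")
print(nltk.pos_tag(tokens))
# The output should resemble:
# [('the', 'DT'), ('white', 'JJ'), ('cat', 'NN'), ('sat', 'VBD'),
#  ('under', 'IN'), ('the', 'DT'), ('table', 'NN')]
```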
Parsing
Parsing is the process of analyzing the grammatical structure of a sentence and representing it in a formal
format such as a parse tree or a dependency graph. The goal of parsing is to understand the syntactic structure
of a sentence and to identify the relationships between the words in the sentence.
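The parse tree in Figure 2.2 can be reproduced with a toy context-free grammar. The grammar below is my own minimal sketch, written only to cover this one sentence, and is parsed with NLTK's chart parser:

```python
import nltk

# A tiny hand-written grammar that covers only the example sentence.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det Nom
Nom -> Adj Nom | N
VP -> V PP
PP -> P NP
Det -> 'the'
Adj -> 'white'
N -> 'cat' | 'table'
V -> 'sat'
P -> 'under'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the white cat sat under the table".split()):
    tree.pretty_print()   # draws the parse tree as ASCII art
```

Real parsers, of course, learn much broader grammars (or dependency models) from large annotated corpora rather than relying on hand-written rules.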
Syntactic analysis is an important step in the NLP pipeline because it provides a deeper understanding
of the meaning of a sentence and allows for more advanced analysis such as semantic analysis and discourse
analysis.
Figure 2.3: Example of Relationship Extraction
2.3 Semantic Analysis
Semantic analysis is the process of extracting meaning from text. It enables computers to comprehend and
interpret sentences, paragraphs, or entire documents by examining their grammatical structure and identifying
the connections between individual words within a specific context. The task of a semantic analyzer is to
evaluate the text for meaningfulness. While lexical analysis also deals with the meaning of words, it differs from
semantic analysis in that it focuses on smaller units of text, such as individual words and phrases, while semantic
analysis looks at larger chunks of text, such as sentences and paragraphs. In other words, lexical analysis is
concerned with the individual meanings of words, while semantic analysis is concerned with how those words
relate to each other to convey meaning in a larger context. In order to understand the meaning of a sentence, the
following are the major processes involved in Semantic Analysis:
Word sense Disambiguation
Word sense disambiguation (WSD) is the process of determining the intended meaning of a word in context,
as words can have multiple meanings depending on the context in which they are used. WSD is an important
task in natural language processing (NLP) as it allows for a better understanding of the meaning of the text.
For example, consider the sentence "I am going to wind the clock". In this sentence, "wind" could refer to the
movement of air or to the act of tightening or loosening something by turning it. A dictionary-based approach
might look up the definition of "wind" in a pre-existing dictionary and find that it has multiple meanings,
including "the movement of air" and "the act of tightening or loosening something by turning it". The context of
the sentence, "I am going to wind the clock", indicates that "wind" is being used to refer to the act of tightening
or loosening something by turning it, not the movement of air. Thus, the ability of a machine to overcome the
ambiguity involved in identifying the meaning of a word based on its usage and context is called Word Sense
Disambiguation.
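As a rough illustration of the dictionary-based approach described above, NLTK ships a simplified Lesk implementation that picks the WordNet sense whose definition overlaps most with the surrounding words. The library choice is an assumption, and simplified Lesk will not always select the intended sense; the sketch only shows the mechanism.

```python
from nltk import word_tokenize
from nltk.wsd import lesk

# Assumes the 'punkt' and 'wordnet' NLTK resources have already been downloaded.
sentence = "I am going to wind the clock"
sense = lesk(word_tokenize(sentence), "wind")

if sense is not None:
    print(sense.name(), "->", sense.definition())   # the WordNet sense chosen for "wind"
else:
    print("no sense found")
```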
Relationship Extraction
Relationship extraction is the task of identifying and extracting relationships between entities mentioned
in a text. These relationships can be between people, organizations, locations, and other types of entities. As an
example consider the sentence, Isaac Newton is best known for developing laws of motion” . In this sentence,
the entities ‘Isaac Newton and ‘laws of motion are mentioned and relationship between them is developed1.3.
In conclusion, semantic analysis is the process of drawing meaning from text. It is an important step in
natural language processing (NLP) that allows computers to understand and interpret sentences, paragraphs, or
whole documents. The end goal of semantic analysis is to extract the meaning of the text and to make natural
language usable for computational tasks. This enables various applications such as text summarization, question
answering, machine translation and more.
2.4 Discourse Integration
Discourse integration is a process in which information from various sources is brought together and
presented in a cohesive and organized manner. The goal of discourse integration is to make the information more
easily accessible and understandable for the intended audience. It emphasizes the importance of understanding
the context and structure of communication. Discourse integration uses the relationship between
preceding and succeeding statements to generate the meaning of the current statement. This approach is guided
by a set of predefined rules and follows an organized methodology. For example, the sentence "Tom met with
an accident" can suggest that Tom was the victim of an unfortunate event, but adding "because he was over
speeding" implies that the accident was caused by his own actions. Without the context of him over speeding,
the sentence only states that he met with an accident, which could have been caused by other factors or parties. This
illustrates how the context and structure of language can affect the interpretation of a statement.
2.5 Pragmatic analysis
Pragmatic analysis is the examination of how language is used in real-world situations, taking into account
the practical and logical aspects of communication. This level of language processing involves utilizing real-
world knowledge and understanding how it shapes the meaning of the communication. It delves into the
contextual dimension of the text, uncovering deeper levels of meaning that require an extensive understanding
of the world and context. For example, consider the sentence "I'm just kidding". Here, the speaker is
indicating that the previous statement was not meant to be taken seriously and is intended to be interpreted as
a joke. The meaning of this sentence is derived from the context in which it is used, such as the speaker's tone
of voice and nonverbal cues, as well as the listener's understanding of the conversation.
In conclusion, pragmatic analysis is a crucial aspect of natural language processing (NLP) that studies how
language is used in context. It enables computers to understand the intended meaning of sentences, beyond
their literal meanings, by taking into account the intentions of the speaker, the background knowledge of the
listener, and the social and cultural norms that influence communication. Pragmatic analysis can involve several
tasks such as speech act recognition, implicature, presupposition, deixis, and sarcasm and irony detection.
This approach is important for NLP applications such as sentiment analysis, text summarization, and dialogue
systems, as it allows for a deeper understanding of how language is used and provides insight into the social and
cultural factors that influence communication.
Natural Language Processing (NLP) is a complex field that involves several stages or phases to analyze
and understand human language. These phases include lexical analysis, syntactic analysis, semantic analysis,
pragmatic analysis, discourse analysis, and others. Each phase is designed to extract specific information from
text and build upon the information gathered in previous stages. By using a combination of these phases, NLP
allows computers to understand, interpret, and generate human language. This is crucial for a wide range of
applications such as text summarization, sentiment analysis, question answering, machine translation and many
more. Understanding these different phases of NLP is essential for anyone looking to work with or develop
natural language processing systems.
References
Natural Language Processing Step by Step Guide, Analytics Vidhya, Amruta Kadlaskar, May 2021
Natural Language Processing - Semantic Analysis, Tutorials Point
A Guide To The 5 NLP Phases, Algoscale, May 2022
Understanding Semantic Analysis Using Python NLP, Daksh Trehan, Roberto Iriondo, Towards AI, May 2021
Introduction to Different Levels of Natural Language Processing, Data Science Prophet, Vrutti Tanna, March 24, 2021
About the Author
Jinsu Ann Mathew is a research scholar in Natural Language Processing and Chemical Informatics.
Her interests include applying basic scientific research on computational linguistics, practical applications of
human language technology, and interdisciplinary work in computational physics.
AI over a Coffee
by Linn Abraham
airis4D, Vol.1, No.2, 2023
www.airis4d.com
This article is a coffee table discussion between two friends, Annie and Marty. Annie is an inquisitive
learner who just met her long-time friend Marty, who is doing his PhD. in artificial intelligence. Annie is amused
by the sudden rise of Artificial Intelligence and wants to understand the underlying concepts or principles that
drive its development. She also wants to be abreast of the developments in the field that have led to the current
state of affairs. The conversation is divided into sections for the reader to follow along easily.
3.1 The dumb and powerful computers of the past
Annie: Hi Marty. Can you explain to me what this whole A.I. thing is? What is all the buzz about?
Marty: Ya sure. We are probably on the brink of another industrial revolution. The invention of the
steam engine is considered to have brought about the first industrial revolution. The harnessing of electricity was
another big step that radically changed the nature of machines. The invention of the transistor marked the third
industrial revolution, or what we call the digital revolution. These transistors power the I.C.s or chips found in
most electronics nowadays, including modern computers. The new revolution we are talking about has to do
with the age of intelligent machines, and A.I., as you might know, stands for Artificial Intelligence.
Annie: Oh! Intelligent machines, that's what it is about. But I was under the impression that computers
were already pretty intelligent. I saw this movie recently, "The Imitation Game", where they talk about the
history of computers and how they were developed during the Second World War era to break the secret code
used by the Germans. Is that true?
Marty: Yes, it is very accurate. It is also true that the computers of yesteryear were indeed very capable.
They have helped astronauts land on the moon, enabled meteorologists to predict the weather, and cosmologists to
simulate the universe's evolution. As late as 1997, a chess-playing computer called Deep Blue was developed,
which beat the reigning world champion Garry Kasparov at his own game. But all of this was done using
brute-force techniques and not by intelligent machines.
3.2 New-Age computers that can learn
Annie: Oh yeah, I remember hearing about Deep Blue beating Kasparov in the news. So do you mean to
say that it was not an intelligent computer that did that?
Marty: Yes.
Annie: So what are these intelligent machines capable of doing that the previous-generation computers
couldn't?
Marty: Even the most sophisticated machines of the previous generation failed spectacularly at tasks that
even a five-year-old kid could do. These are things like image recognition, speech recognition, and natural
language processing. Today’s machines are considered intelligent because they are able to do a lot of such tasks
that were considered too hot to handle for the older generation of computers.
Annie: Oh burn! So how do the new-age computers shine where previous-generation computers failed?
Marty: Traditional computers were powerful because of two significant things: brute strength and explicit
algorithms. The brute strength of a computer comes from its mind-boggling speed; a modern computer is
capable of executing about 5 billion instructions per second, and it's able to crunch large numbers for hours on
end without tiring out or getting brain fatigue. An algorithm is something like the recipe you use for cooking
a shakshouka: a set of rules or instructions given to the computer to solve a specific problem. When it is
written out in a language that the computer can understand, we call it computer code. With new-age computers,
however, we don't need such algorithms. Instead, we have something called learning algorithms.
Annie: So why are learning algorithms important for creating intelligent machines?
Marty: If we only had computers that relied on explicit algorithms, that would mean that computers could
only solve problems that humans knew how to solve in the first place. When it comes to intelligence, you see
that although human intelligence has a lot to do with evolution and our genes, one defining characteristic of
intelligence is the ability to learn from experience. A child’s brain is somewhat of a clean slate that has just
the ability to learn from its environment, hard-wired into it. Everything else that makes it intelligent is learned
during the course of its life, either from its own experiences or from the experiences of others.
Annie: But even if they could achieve these tasks, wouldn't that make these computers only as bright as a
five-year-old?
3.3 A.I. in science and engineering
Marty: You may be wrong about that. The ability to learn, paired with the brute strength of computers,
results in a deadly combination. Learning to do even the most mundane tasks that people do every day can spark
a revolution. The reason for all these fancy applications and the hype that A.I. has been generating recently has
to do with people trying to build learning capabilities in their respective domains. Self-driving cars, language
translators, spam detection, recommendation engines, etc., are all such examples. But the same techniques can
be applied in scientific research, leading to even more exciting applications.
Annie: Ohh. Using A.I. for research? That sounds exciting. What are the research problems that can be
tackled using A.I.?
Marty: Glad you asked. There are lots of such applications. For instance, in medical research, they are
used for cancer diagnosis. Using C.T. scans from both people identified with cancer and similar scans from
healthy people, an intelligent machine can be trained to identify patterns in the data that evade even the best
oncologists. A.I. is used for drug discovery to identify new drug candidates and predict how they will interact
with biological systems. A.I. systems have been used to analyze large amounts of genetic data to identify genetic
variants associated with specific diseases and predict an individual’s risk of developing a disease. A key part
of such research involves the 3D structure identification of proteins from their 2D images, and A.I. is used
to do exactly that. In astrophysics, you might remember that the Nobel prize was given for the discovery of
gravitational waves.
Annie: Yeah I remember that. Those were theoretically predicted by Albert Einstein but not observed
until recently, right?
Marty: Yes. And even there, A.I. was used to identify the true signals buried under a lot of noise. They
are also used to analyze large amounts of telescope data to study the properties of stars, galaxies, and other
celestial objects.
3.4 Primitive learning algorithms
Annie: Wow. All these sound exciting. So if these ”learning algorithms” are the workhorses behind all of
these things that A.I. is capable of achieving, what are they actually, and how do they work?
Marty: Instead of having domain-specific rules hard-coded into the machines, the learning algorithms
enable computers to make blunders, evaluate the cost of their errors by comparing them with true values, and
keep on iterating until the accuracy becomes sufficiently high. In most of these cases, the computers try to
arrive at rules by finding patterns in the data shown to them. Remember those questions that often come up
in competitive exams where a number is omitted from a sequence of numbers, and we are asked to predict the
missing one?
Annie: Yup. Those are really hard at times.
Marty: Ya. But those are a piece of cake if you use learning algorithms to solve them. The curve fitting
you learn in mathematics is an example of such a learning algorithm.
Annie: Isn’t curve fitting what we do when you take observations on graph paper during a science
experiment and later try to fit a curve that accommodates all these data points?
Marty: Exactly.
Annie: So are these the kind of algorithms that A.I. nowadays uses?
Marty: Actually, curve-fitting is one of the simplest and most primitive learning algorithms. Many years
of research in this field have led to better learning algorithms.
Annie: But what do you mean by a better algorithm?
Marty: Curve-fitting is mostly used in cases where the form of the relationship between the variables is
known a priori. More often than not, it is used in cases where the relationship is linear. Additionally, these
methods are sensitive to the presence of outliers or noise in the data.
3.5 Artificial Neural Networks
Annie: So what better algorithms have people come up with until now?
Marty: The state-of-the-art in artificial intelligence is what we call artificial neural networks or ANN. It is
a kind of bio-mimicry where the algorithm is derived from the working of the human brain, which is basically
a large network of neurons. Although our knowledge about the brain is quite limited, the general idea we have
is that the learning process in the brain is related to what we call neuroplasticity.
Annie: And what do you mean by neuroplasticity in our brains?
Marty: Our brain is a large collection of neurons. The term "large" would be an understatement;
typically, there are about 100 billion neurons with about 100 trillion synapses, which are the connections
between the neurons.
Annie: Wow! That's such a mind-boggling number.
Marty: Yes it is. Each time a child sees an image, say that of a dog, some of these neurons fire together.
The neurons that fire together become well-connected. The connection strength between certain other neurons
might decrease. In this way, the idea of a dog is stored in our brain in the strength of the synapses between
individual neurons.
Annie: Oh. So that is probably the reason why different areas of the brain store different memories and
have different functions.
Marty: Yes. And as more and more data about dogs comes in through images, the brain can learn a very
strong pattern that will enable it to identify all kinds of dogs in the future, even those it hasn’t seen yet.
Annie: So why does my one-year-old nephew mistake the moon for a ball?
Marty: Yes. That can happen in the initial phases of development when the brain is slowly learning to
identify certain features that can help it to distinguish between objects. As more and more data flows into the
brain, it can identify objects at a distance and quickly conclude that it's more likely to be something in the
sky rather than a ball.
Annie: Oh, this conversation has been very interesting. When can we talk more about this?
Marty: Soon enough I guess.
Annie: Then it’s a bye for now. Bye, Marty.
Marty: Bye Annie.
About the Author
Linn Abraham is a researcher in Physics, specializing in A.I. applications to astronomy. He is
currently involved in the development of CNN based Computer Vision tools for classifications of astronomical
sources from PanSTARRS optical images. He has used data from several large astronomical surveys, including
SDSS, CRTS, ZTF and PanSTARRS, for his research.
Part II
Astronomy and Astrophysics
Applications of Satellite Imaging
by Robin Jacob Roy
airis4D, Vol.1, No.2, 2023
www.airis4d.com
Satellite remote sensing is the scientific field that combines the knowledge, analysis and correlation of
natural phenomena with measurements of the electromagnetic energy that comes from, or reflects off, the Earth's
surface or atmosphere. Sensors onboard satellites take measurements of a large number of areas on the Earth's
surface and present the data in the form of imagery. Satellite images play a crucial role in understanding and
observing the Earth. They can be used to track the physical environment (water, air, land, vegetation), and
also to measure, identify and track human activity.
Satellites are placed in orbit around the Earth, at various altitudes and angles, depending on the specific
mission of the satellite and they collect information about the earth using sensors or cameras mounted on them.
Satellite sensors/cameras measure the amount of reflected or emitted energy in a specific wavelength of light
along the electromagnetic spectrum to obtain information about Earth’s atmosphere, land or ocean. When the
photons strike the satellite sensors, they generate an electrical charge in the detector that is proportional to the
amount of incoming light. The sensor electronics read the value of the electrical charge and convert it to a
digital signal. Information from these sensors is encoded into radio waves and transmitted to antennas on the
ground. These radio signals are then processed and analyzed further to create detailed maps and images
of the Earth's surface. Figure 1.1 explains the various steps involved in the data capture and processing of the
Geostationary Operational Environmental Satellite (GOES), operated by the United States National Oceanic
and Atmospheric Administration (NOAA).
Figure 1.1: Process for translating satellite data into imagery.
Figure 1.2: Deforestation over the Amazon Rainforests. (Source: NASA Earth Observatory website)
Figure 1.3: A view of the eastern part of the Sundarbans in Bangladesh showing seasonally flooded river basins. (Source: World Bank Blogs)
1.1 Use cases of Satellite imaging
Satellite imagery has a wide range of use cases, from environmental monitoring and natural resource
management to urban planning and disaster response. One of the most important uses of satellite imagery is
environmental monitoring and natural resource management. Satellites can be used to monitor the health of
forests, wetlands, and other ecosystems, as well as track changes in land use and land cover. This information can
be used to identify areas that are at risk of degradation, and to plan and implement conservation and restoration
efforts. In addition, satellite imagery can be used to monitor the health of crops, which is important for food
security and sustainable agriculture. Figure 1.2 shows the extent of deforestation in the Amazon
rainforest over a period of time.
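As one concrete example of how vegetation health is monitored, analysts commonly compute the Normalised Difference Vegetation Index (NDVI) from the red and near-infrared bands of a scene. The snippet below is a minimal sketch with made-up pixel values; in practice the bands would be read from satellite imagery with a library such as rasterio.

```python
import numpy as np

# Hypothetical reflectance values standing in for the red and near-infrared bands of a scene.
red = np.array([[0.10, 0.25],
                [0.30, 0.05]])
nir = np.array([[0.60, 0.30],
                [0.35, 0.50]])

# NDVI = (NIR - Red) / (NIR + Red); values close to +1 indicate dense, healthy vegetation,
# values near 0 indicate bare soil, and negative values typically indicate water or clouds.
ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
print(ndvi)
```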
Another important use of satellite imagery is urban planning and infrastructure management. Cities are
constantly growing and changing, and satellite imagery can be used to track these changes and plan for future
growth. For example, satellite imagery can be used to map and monitor urban expansion, track changes in land
use, and identify areas that are at risk of flooding or landslides. Figure 1.3 shows the regions of the Sundarbans
delta which are at a risk of flooding. In addition, satellite imagery can be used to monitor the condition of
infrastructure such as roads, bridges, and buildings, which is important for maintenance and repair.
Satellite imagery can also be used in disaster response and humanitarian aid. For example, satellites can
be used to map the extent of damage from natural disasters such as floods, earthquakes, and hurricanes. This
information can be used to plan and coordinate response efforts, and to provide assistance to those in need. In
addition, satellite imagery can be used to track the movements of refugees and internally displaced persons,
which is important for providing humanitarian aid.
Satellite imagery is also used in security, intelligence and military operations. Satellites can be used
for reconnaissance and surveillance, providing information about the location and activities of military units,
weapons systems, and other strategic assets. It can also be used for target identification and battle damage
assessment. Figure 1.4 shows the Chineese invasion at the Indian border of Arunanchal Pradesh. There are
many other use cases of Satellite imaging, including:
Weather forecasting: Satellites can be used to gather data on atmospheric conditions, such as temperature,
pressure, and wind patterns, which can be used to improve weather forecasting.
Transportation: satellite imagery can be used to track traffic and monitor the movement of vehicles on
roads.
Figure 1.4: Chinese enclave constructed within Indian territory in Shi Yomi district of Arunachal.
Oil and gas exploration: Satellites can be used to detect oil spills and monitor offshore drilling platforms.
Mining: Satellite imagery can be used to identify areas of mineral deposits and monitor mining operations.
Archaeology and anthropology: Satellite imagery can be used to identify ancient ruins and track the
movement of people and resources over time.
Overall, satellite imagery plays a vital role in many different fields and has a wide range of use cases. It provides
valuable information for environmental monitoring, urban planning, disaster response, and security operations.
As technology continues to improve, the capabilities of satellite imagery will continue to expand, making it an
increasingly important tool for understanding and managing our world.
1.2 Challenges
There are several challenges associated with taking satellite imagery, including:
Weather conditions: Cloud cover and atmospheric conditions can limit the ability of satellites to capture
clear images. This can be particularly challenging in areas that are frequently cloudy or have high levels of
atmospheric moisture.
Resolution and image quality: Satellites are typically limited by the resolution and quality of their
cameras or sensors. This can make it difficult to capture highly detailed images of small or distant objects.
Cost: Launching and operating satellites can be expensive, which can limit the number of images that can
be captured and the frequency with which images are updated.
Data storage and transmission: The large amount of data generated by satellite imaging can be challenging
to store and transmit, requiring high-capacity storage systems and reliable communications networks.
Privacy and security: Satellites can capture images of sensitive areas such as military bases, government
buildings, and private property, raising concerns about privacy and security.
Geo-location and image registration: Satellites capture images at different times, altitudes, and angles,
which can make it difficult to accurately locate and compare different images of the same area (a minimal
alignment sketch follows this list).
Interpreting and analyzing images: The images captured by satellite need to be interpreted and analyzed,
which can be a complex and time-consuming process, requiring specialized knowledge and expertise.
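To make the registration challenge concrete, the short sketch below aligns two scenes of the same area that differ by a small shift. It is a minimal illustration rather than a production workflow: it assumes the scikit-image and SciPy libraries are available, it corrects only a pure translation (real pipelines also handle rotation, scale and terrain distortion), and the file names are placeholders.

# Minimal sketch of co-registering two satellite scenes of the same area
# that differ by a small translational offset. File names are hypothetical.
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

reference = io.imread("scene_2022.tif", as_gray=True)  # hypothetical file
moving = io.imread("scene_2023.tif", as_gray=True)     # hypothetical file

# Estimate the (row, column) shift between the two images.
offset, error, _ = phase_cross_correlation(reference, moving)
print(f"Estimated offset (rows, cols): {offset}, error: {error:.4f}")

# Apply the estimated shift so the two scenes share a common pixel grid.
aligned = nd_shift(moving, shift=offset)

Once the scenes share a common grid, pixel-by-pixel comparison, such as change detection between two dates, becomes meaningful.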
Despite these challenges, satellite imagery has become an essential tool for a wide range of applications,
from monitoring crop yields and natural disasters to detecting changes in land use and tracking the movement of
wildlife. Advances in technology, such as high-resolution cameras, improved data storage and transmission, and
machine learning, are helping to overcome these challenges and unlock new possibilities for satellite imagery.
References:
Transforming energy to imagery
Fifty Years of Earth Observation Satellites, Andrew J. Tatem, Scott J. Goetz, Simon I. Hay, American
Scientist, Volume 96, Number 5, 2008, 390
Applications of remote sensing
Mapping the Amazon
Rapid flood response using Satellites
Major Limitations of Satellite images, Firouz A. Al-Wassai, N.V. Kalyankar
About the Author
Robin is a researcher in Physics specializing in the applications of machine learning for remote
sensing. He is particularly interested in using computer vision to address challenges in the fields of biodiversity,
protein studies, and astronomy. He is currently working on classifying satellite images with Landsat and Sentinel
data.
Eclipsing Binaries Part- 2
by Sindhu G
airis4D, Vol.1, No.2, 2023
www.airis4d.com
2.1 Introduction
Eclipsing binaries are one type of variable star. We know that eclipsing binaries are binary stars in which
the components of the system eclipse each other as seen from Earth. In the previous article, "Eclipsing
Binaries", we explained what eclipsing binaries are and briefly described the change in brightness of
eclipsing binaries caused by eclipses. In that article, we also discussed the three main types of eclipsing
binaries based on the shape of the light curves. There is one more important category: EP-type stars, which
show eclipses by their planets. There are two other classifications of eclipsing binaries besides the one based
on the shape of the light curves. The first is based on the physical characteristics of the components, and the
second on the degree of filling of the inner Roche lobes. They are described below.
2.2 Classification according to the physical characteristics of the components
Based on the physical characteristics of their components, eclipsing binaries are classified into
different categories. In GS-type eclipsing binaries, one or both components are giants or supergiants; one
component may also be a main sequence star. PN systems have, among their components, nuclei of planetary
nebulae. WD systems contain white dwarf components. WR eclipsing binaries contain Wolf-Rayet stars
among their components. Another important type is the RS Canum Venaticorum systems. The spectra of
these systems contain strong Ca II H and K emission lines of variable intensity, indicating enhanced
chromospheric activity of the solar type. These systems exhibit radio and X-ray emission, and the light
curves of some systems show quasi-sinusoidal variations outside eclipse.
2.3 Classification based on the degree of filling of inner Roche lobes
Eclipsing binaries are also classified into different classes based on the degree of filling of inner Roche
lobes. To classify the physical configuration of a binary, we have to determine whether it is a detached,
semi-detached or contact system. If both components lie inside their Roche lobes, the system is detached:
the components are physically separate and non-interacting. In a semi-detached system, one of the
components fills its Roche lobe, and matter flows through the inner Lagrangian point onto the other star.
In a contact binary, both components fill or overfill their Roche lobes; their surfaces are in physical contact
and they share a common outer envelope.
Figure 2.1: Eclipsing Binaries (Image Courtesy: Steve Howell/Pete Marenfeld/NOAO)
Figure 2.2: Roche Lobe (Image Courtesy: Cosmos)
AR types are detached systems. Both components are subgiants that do not fill their inner equipotential
surfaces. In DM systems, both components are main sequence stars; they do not fill their inner Roche lobes
and are therefore detached. DS systems contain a subgiant as one of their components; the subgiant does not
fill its inner critical surface, so these are also detached systems. DW systems are similar to W UMa systems
in their physical properties, but the components are not in contact as in W UMa systems. KE systems are
contact systems of early spectral type (O-A), with both components close in size to their inner critical
surfaces. KW types are contact systems of the W UMa type, with ellipsoidal components of spectral type
F0-K; the primary components are main sequence stars, while the secondary components lie below and to
the left of the main sequence in the (M_V, B-V) diagram. SD systems are semi-detached systems in which
the surface of the less massive component is close to its inner Roche lobe. Combinations of these three
classification systems also occur.
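The classification above hinges on whether a component fills its inner Roche lobe, which in practice means comparing the stellar radius with an estimate of the Roche-lobe radius. The article does not quote a formula; a widely used estimate is the Eggleton (1983) approximation, sketched below with purely illustrative numbers.

# Minimal sketch of the Eggleton (1983) approximation for the
# volume-equivalent Roche-lobe radius; the numbers used are illustrative.
import math

def roche_lobe_radius(q, a):
    """Roche-lobe radius of a star with mass ratio q = M_star / M_companion
    in a binary of separation a (result is in the same unit as a)."""
    q13 = q ** (1.0 / 3.0)
    q23 = q13 * q13
    return a * 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q13))

a = 5.0  # orbital separation in solar radii (illustrative value only)
print(roche_lobe_radius(1.25, a))        # lobe of the more massive star
print(roche_lobe_radius(1.0 / 1.25, a))  # lobe of the less massive star

Comparing each component's radius with this estimate tells us whether the system is detached, semi-detached or in contact.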
2.4 Some examples of Eclipsing binaries
2.4.1 RZ Cassiopeiae
RZ Cassiopeiae is visible with binoculars. Primary eclipses are 1.5 magnitudes deep, while secondary eclipses
are only 0.1 magnitude deep. It belongs to the class EA+DSCT, with a period of 1.1952503 days. It is an
active semi-detached Algol system and shows complex features in its light curve. One of its components is a
pulsating star of the δ Scuti type.
2.4.2 U Cephei
U Cephei is an Algol-type eclipsing binary. Its brightness ranges from 6.7 to 9.3 magnitude, with an orbital
period of 2.493 days. Eclipses last for approximately 9 hours. The primary is a main sequence star and the
secondary is a giant star.
2.4.3 RT Andromedae
RT Andromedae is an eclipsing binary with a long observation record. It belongs to the RS CVn class of
binaries and consists of a G-type and a K-type main sequence star of about 1.1 and 0.8 solar masses. Flaring
and spot activity are high, driven by the magnetic effects of the system's rapid rotation. Its apparent visual
magnitude is 9.83 at minimum brightness and 8.97 at maximum, and its period is 0.6289216 days.
2.4.4 TU Bootis
It is a short-period system with a period of about 0.324 days, classified as a W UMa-type eclipsing binary.
2.4.5 UU Lyncis
It is a barely detached EA-type system with components of about 1.4 and 0.6 solar masses. Each component
is critically close to filling its Roche lobe.
2.4.6 MY Cygni
A well-detached EA-type system consisting of two nearly identical stars of about 1.8 solar masses each. The
period is about four days.
2.4.7 KR Persei
A detached system consisting of two nearly equal components of spectral type F5V, that is, two mid-F
main-sequence stars. The period is almost exactly one day.
2.4.8 RU Eridanis
The RU Eridanis system has a configuration that is nearly overcontact: each component is either filling or
close to filling its Roche lobe. It is a short-period system with a period of about 0.63 days.
2.4.9 YY Ceti
YY Ceti is a semi-detached Algol binary with an orbital period of approximately 19 hours (0.79 days). Both
components are gravitationally distorted, and the cooler, less massive secondary has filled its Roche lobe.
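Each of the examples above is characterised by its orbital period, and a common first step in studying such a system is to fold the observed brightness measurements on that period so that the eclipse profile becomes visible. The sketch below is a minimal illustration and not part of the original discussion: the observation arrays are dummy values, and only the period quoted above for RZ Cassiopeiae is taken from the text.

# Minimal sketch of phase-folding a light curve on a known orbital period.
import numpy as np

period = 1.1952503   # days, the period quoted above for RZ Cassiopeiae
t0 = 0.0             # reference epoch (hypothetical)

times = np.array([0.30, 1.70, 2.45, 3.90, 5.10])  # observation times in days (dummy)
mags = np.array([6.40, 6.50, 7.80, 6.40, 6.50])   # magnitudes (dummy)

# Phase in [0, 1): fractional position of each observation within the cycle.
phase = ((times - t0) / period) % 1.0

# Sorting by phase lines the points up along a single eclipse cycle.
order = np.argsort(phase)
for p, m in zip(phase[order], mags[order]):
    print(f"phase {p:.3f}  mag {m:.2f}")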
References
Observations and Models of Eclipsing Binary Systems, Jeffrey L. Coughlin
Eclipsing Binary Observing Guide, British Astronomical Association
Binary star, Wikipedia
GCVS variability types, Samus N.N., Kazarovets E.V., Durlevich O.V., Kireeva N.N., Pastukhova E.N.,
General Catalogue of Variable Stars: Version GCVS 5.1, Astronomy Reports, 2017, vol. 61, No. 1,
pp. 80-88, 2017ARep...61...80S
The International Variable Star Index, AAVSO.
About the Author
Sindhu G is a research scholar in Physics doing research in Astronomy & Astrophysics. Her research
mainly focuses on the classification of variable stars using different machine learning algorithms. She also
works on period prediction for different types of variable stars, especially eclipsing binaries, and on the
study of optical counterparts of X-ray binaries.
Part III
Biosciences
An introduction to Molecular Biology
by Geetha Paul
airis4D, Vol.1, No.2, 2023
www.airis4d.com
Cells are the fundamental building units of living organisms. They contain organelles such as the nucleus,
mitochondria, chloroplasts, the endoplasmic reticulum, ribosomes and vacuoles. Of these, the nucleus is the
most important organelle because it carries DNA, or deoxyribonucleic acid, the genetic material of all
living organisms. DNA carries the instructions for growth, development, functioning and reproduction. In this
article, we will explore how a living organism reads DNA information and uses it to synthesise the thousands
of different proteins that make life possible. This is known as the Central Dogma of molecular biology.
1.1 Central Dogma
In molecular biology, the central dogma describes the flow of genetic information within a biological system.
DNA is found in every cell and acts as a read-only memory: it permanently stores all the information the
living organism requires. This is why it is possible, in principle, to reconstruct a living organism if its DNA
information can be recovered. Those who have seen the film Jurassic Park might recollect this as the method
adopted to bring back dinosaurs that went extinct millions of years ago. How is this possible? DNA is a code,
and several parts of it are masked from being used. The exposed regions of the DNA are called genes, and it is
the code in the genes that is used for the creation of proteins. The reading and transfer of this information
are carried out by RNA. The RNA that reads, decodes and transfers the information for protein synthesis
in the ribosome is called mRNA, or messenger RNA; it is in these ribosome factories that protein synthesis
happens. This central dogma of molecular biology was first enunciated by Francis Crick in 1958 and restated
later in a Nature paper published in 1970.
Figure 1.1: Illustration of the central dogma, the flow of genetic information in a biological system. The
data from the DNA is read (transcription), the mRNA decodes it (translation), and finally the protein is
synthesised. (Source: https://biologydictionary.net/central-dogma/)
Let us look at the DNA in more detail. DNA, or deoxyribonucleic acid, is a long-chain molecule that plays
a central role in life on Earth. The information encoded in the strands of DNA controls the genetic makeup of
organisms. DNA is considered a pinnacle of biology, so powerful and remarkable that the scientists who
discovered its structure won the Nobel Prize.
DNA has an amazing double-helical structure, and it determines the characteristics of who we are. DNA
decides the colour of our eyes, the tone of our skin, the likelihood of developing certain diseases and even some
aspects of our personality. DNA is passed down from one generation to the next, and with it, all the traits we
see in plants, people and animals. By studying DNA, we have cloned sheep, improved our crops, discovered
new species and helped investigators identify the culprit of a crime; perhaps, like in Jurassic Park, we could
even bring the dinosaurs back.
Is it not amazing that every creature, big or small, is built from the code carried in a molecular chain only
about 2.5 nanometres across? A nanometre is one billionth (1/1,000,000,000) of a metre!
Having understood what DNA is, let us lay out the basics of how DNA is involved in creating something that
we can see in real life. Like other processes inside our cells, it is a complicated set of interrelated steps.
Scientists invented a term to describe the basic flow of genetic information from DNA to our heritable traits:
it is called the central dogma.
I shall try to explain it with a simple story that I found helpful in understanding it. Once, I made a trip to
my aunt's. While there, she made me one of my favourite snacks (protein): a fish sandwich! Obviously, I can
get fish sandwiches anywhere, but there is something special about my aunt's recipe. I really missed her
sandwich, so I wondered if I could learn to make it on my own. I asked her to give me the recipe, and she got
out her giant recipe book (DNA). She opened it to the right page (gene expression) and handed me a pen and
a recipe card so I (mRNA) could copy the recipe down. Once I got home (ribosome), I took out the card and
made my own fish sandwich. I wasn't using my aunt's recipe book, but it didn't matter. I still had the recipe,
copied word for word, and I made my breakfast taste just like my aunt's!
Figure 1.2: Protein synthesis in a cell, illustrated by the analogy of the cookbook, the recipe card and the fish sandwich.
The story is actually very similar to how DNA generates the proteins that make life possible in all living
creatures: animals, plants and people. Just like my aunt's recipe book, DNA is a set of instructions on how
to make something. That something, in the case of my story, was the fish sandwich; in the case of DNA,
it is a protein. Protein is the product, the biological molecule that DNA provides instructions for.
Now, this story has an important first part. I had to copy down only the essential part of my aunt's recipe;
that was all I needed to prepare my favourite dish. In the same way, the mRNA had to copy only a very small
part of the codebook (the DNA) to be carried to the ribosome to synthesise the protein. In fact, the DNA,
like my home, is at a different place, relatively far from the ribosome where the protein is made. To be more
specific, the DNA 'lives' inside the nucleus of a cell, while proteins are made outside the nucleus, in the
ribosomes of the endoplasmic reticulum in the cytoplasm. So, just like my recipe had to be copied, the
instructions in DNA have to be copied by mRNA and carried from the nucleus to the site where proteins are
made.
Figure 1.3: From the double-stranded DNA, the genetic information is transcribed to mRNA by transcription,
and from mRNA the messages are decoded by the translation process to form the protein molecules.
(Source: https://biologydictionary.net/central-dogma/)
RNA is another kind of nucleic acid. It’s similar to DNA, but instead of being a double-stranded molecule,
it’s just a single strand of nucleotides. You can imagine it as one half of a ladder. RNA plays the role of my
recipe card by carrying the genetic code outside of the cell’s nucleus. Once in the cytoplasm, the instructions
for making proteins are read by the cell’s protein-building molecules.
Figure 1.3 illustrates the actual steps involved in the transfer of genetic information from the DNA by the
RNA to the ribosome's protein-building molecules.
The genetic information (base sequence) of a segment of DNA is first copied to the RNA molecule. The
RNA molecules that have the encoded messages for the synthesis of proteins are called the mRNA, and the
process is called transcription. The mRNA molecules then leave the cell nucleus and enter the cytoplasm,
where they participate in protein synthesis by specifying the particular amino acids that make up individual
proteins, and the process is called translation.
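The two steps just described can be mimicked at the level of the letter code itself. The toy sketch below is only an illustration of the idea, not a biological simulation: it treats a gene as its coding-strand sequence, swaps T for U to obtain the mRNA (transcription), and then reads the mRNA one codon (three letters) at a time using a deliberately partial codon table (translation).

# Toy sketch of the central dogma at the level of the letter code.
# The gene sequence and the partial codon table are for illustration only.
GENE = "ATGGCTTGGAAATAA"  # coding strand: Met-Ala-Trp-Lys-Stop

# Transcription: the mRNA carries the same message with U in place of T.
mrna = GENE.replace("T", "U")

# Translation: read the mRNA codon by codon (partial codon table).
CODON_TABLE = {
    "AUG": "Met", "GCU": "Ala", "UGG": "Trp", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

protein = []
for i in range(0, len(mrna) - 2, 3):
    amino_acid = CODON_TABLE[mrna[i:i + 3]]
    if amino_acid == "STOP":      # the ribosome releases the finished chain
        break
    protein.append(amino_acid)

print("mRNA    :", mrna)               # AUGGCUUGGAAAUAA
print("protein :", "-".join(protein))  # Met-Ala-Trp-Lys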
1.2 How is Gene Expression Connected to the Central Dogma of Molecular Biology?
Gene expression is the process by which the information encoded in a gene is turned into a function.
Genes encode proteins, and proteins dictate
cell function. Therefore, the thousands of genes expressed in a particular cell determine what that cell can do.
Moreover, each step in the flow of information from DNA to RNA to protein provides the cell with a potential
control point for self-regulating its functions by adjusting the amount and type of proteins it manufactures.
The central dogma shows how information is transferred from DNA to RNA to protein; when the cell
receives a signal that a gene must be expressed, RNA polymerase is recruited to the region of DNA where that
gene is located. It makes an RNA copy of that region of DNA in a process called transcription. This RNA is
then transported out of the nucleus of the cell and is translated into a protein by molecular machines called
ribosomes, where the proteins are synthesised.
Figure 1.4: The central dogma describes how a gene is ultimately expressed. (Source: https://socratic.org/
questions/how-is-gene-expression-related-to-the-central-dogma-of-molecular-biology)
References
https://doi.org/10.1371%2Fjournal.pbio.2003243
https://www.nature.com/articles/227561a0/
https://biologydictionary.net/central-dogma/
https://www.genome.gov/genetics-glossary/Central-Dogma
https://socratic.org/questions/how-is-gene-expression-related-to-the-central-dogma-of-molecular-biology
https://www.nature.com/scitable/topicpage/gene-expression-14121669/
About the Author
Geetha Paul is one of the directors of airis4D. She leads the Biosciences Division. Her research
interests extend from Cell & Molecular Biology to Environmental Sciences, Odonatology, and Aquatic Biology.
Part IV
Computer Programming
Remote Sensing and Artificial Intelligence in
Agriculture
by Chandrashekharan C P
airis4D, Vol.1, No.2, 2023
www.airis4d.com
Water and air are indispensable for our sustenance; equally, life without agriculture is unimaginable.
Agriculture is a primary source of income and plays a vital role in national revenue. Many major companies
in our country rely on agriculture for their raw materials. Agricultural development is one of the most powerful
tools to end extreme poverty, boost shared prosperity and feed a projected 10 billion people by 2050.
However, agriculture is facing significant setbacks. There is increasing pressure from climate change,
biodiversity loss and soil erosion. Apart from this, a lack of skilled labour, pests, diseases and changes in the
growing season pose their own challenges. One-third of the food produced globally is either lost or wasted;
addressing this is critical to improving food and nutrition security, meeting climate goals and reducing
environmental stress. There are more and more mouths to feed as the population increases at a fast pace.
This article uses a personal story to explore modern technologies such as remote sensing and artificial
intelligence as promising solutions to these challenges.
Drones are a breakthrough in the agricultural sector, bringing remote sensing technology into precision
agriculture. Some typical agricultural applications for UAVs include pest control, plant-health monitoring,
soil analysis and aerial surveys.
Crops are susceptible to pathogens, fungi, insects and fast-spreading infections. These can be monitored
efficiently in real time, and selective application of weedicides, fertilisers and micro-nutrients can also be made.
Artificial intelligence applications are also revolutionising the agricultural sector. Various applications
identify pests and diseases in crops with almost 98% accuracy. When a mobile phone camera is waved over a
crop showing a symptom, the nature of the problem pops up on the image with a bounding box around it. Best
management practices can then be adopted based on the diagnosis. Similar apps can be developed to assess
crops and to detect weeds, deficiencies of fertilisers and micro-nutrients, and pest attacks. The interesting
fact is that these apps can work even without the internet. Using such techniques, one can monitor diseases
and plant growth over time and ensure ideal crop productivity. Another advantage of these technologies is
that production can be increased without consuming more land, reducing the impact of climate change.
Being a farmer in Wayanad, the author has firsthand experience with the challenges farmers face today.
Keeping pace with modern technology, and having done a course in Drones for Agriculture and various
courses in AI, the author developed an interest in researching these technologies and applying them to the
plantation so that the future generation can manage the farm hassle-free and with interest.
Initially, using a drone (DJI Mavic Mini), around 200 images of 2.5 acres of a friend's plantation were taken
and stitched into a single image using a photogrammetry tool. Vegetation indices like Excess Green (EG) and
NGRDI indicate the health and growth of vegetation. A vegetation index is a spectral calculation performed
on the spectral bands of each pixel of the drone image. Drones with near-infrared (NIR) cameras can measure
biomass and plant stress and detect living vegetation.
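For readers who want to reproduce the maps shown below, both indices can be computed pixel by pixel from the red, green and blue bands of the stitched image. The sketch assumes the common formulations EG = 2g - r - b (on bands normalised by total brightness) and NGRDI = (G - R)/(G + R); the file name and the details of the scaling are assumptions, not settings taken from this study.

# Minimal sketch of computing Excess Green (EG) and NGRDI maps from an
# RGB drone orthomosaic. File name and band scaling are assumptions.
import numpy as np
from skimage import io

rgb = io.imread("plantation_orthomosaic.png").astype(float)  # hypothetical file
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Normalise each band by total brightness to reduce illumination effects.
total = R + G + B + 1e-6
r, g, b = R / total, G / total, B / total

eg = 2 * g - r - b                  # Excess Green index
ngrdi = (G - R) / (G + R + 1e-6)    # Normalised Green-Red Difference Index

print("EG range   :", eg.min(), eg.max())
print("NGRDI range:", ngrdi.min(), ngrdi.max())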
Figure 1.1: Stitched image of the plantation.
Figure 1.2: NGRDI map.
Figure 1.3: Excess Green(EG) map.
In a follow-up study, using the DJI Mavic Mini camera, a complete project on detecting areca nut and coconut
seedlings in a mixed plantation was carried out.
The images were recorded to a micro SD card by the drone flying about 30 metres above the plantation, then
transferred to a laptop and labelled for training. The model was trained using YOLOv8 in Google Colab for
100 epochs.
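For readers wishing to repeat a similar experiment, the Ultralytics package exposes the YOLOv8 training step in a few lines. The sketch below describes an assumed setup: the dataset YAML, the model size and the image size are not stated in the article, and only the figure of 100 epochs is taken from the text.

# Minimal sketch of training a YOLOv8 detector on labelled drone images.
# Dataset YAML, model size and image size are assumptions; only the
# 100 epochs figure comes from the study described above.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # small pretrained model as a starting point
model.train(
    data="seedlings.yaml",     # hypothetical dataset description file
    epochs=100,                # as used in the study described above
    imgsz=640,
)

# After training, run detection on a new drone image (hypothetical path).
results = model.predict("new_flight_image.jpg")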
Figure 1.4: Precision-Recall Curve.
The above model can be improved to detect various diseases in areca nut, coconut and similar crops, and to
count diseased and healthy plants.
Figure 1.5: Detection of coconut and areca nut trees.
As millions of people from rural areas migrate to cities yearly, these modern technologies can inspire our
future generation to remain in agriculture.
About the Author
Chandrashekharan C P is an engineer by training but took up farming as his career. He is being
mentored at airis4D and has been instrumental in developing various application software using machine
learning.
Part V
Fiction
Curves and Curved Trajectories
by Ninan Sajeeth Philip
airis4D, Vol.1, No.2, 2023
www.airis4d.com
This article is a thought experiment on a virtual world with imaginary creatures capable of doing fictitious
activities.
Episode 3
Characters:
1. Dot: a zero-dimensional creature
2. Angel the Light: a one-dimensional Angel
3. Mittu: a two-dimensional ant
4. Albert: a 3D boy.
Albert is an extraordinary young boy with infinite energy, say his teachers. He is found to be doing
something all the time. His games are often a puzzle for his parents, and even his teachers often find it difficult
to answer his questions. He is a good boy, his teacher once said, except that his interests are anchored to the
wrong goals. It is not just his teachers; Albert hardly has any friends. He doesn't care. He is in his own world
and plays alone almost all the time in his imaginative universe.
Albert's father is a craftsman. Though not well educated, and unconcerned about what others said about his
son, he always invested some time in Albert, even during his busy schedule. Spinning tops were his masterpiece.
He would make a variety of them and sell them like hot cakes in the streets to win bread for the family. He
would also have something special for Albert every time: a spinning top with sophisticated craftsmanship. Since
everyone else at home blames the father for spoiling the child with his silly toys, exchanging gifts is a secret
between the father and the son. Albert would thus hide the toy and wait for the opportunity to vanish into
solitude to play with the fascinating toy his father had brought.
This time his father has gifted him a wobbling top that has small pellet holes and a dozen lead pellets which
can be rearranged to create different wobbling patterns as the top spins. Albert would play for days changing the
pellet combinations and observing the magical changes in the wobble, spin and stability of the top. He would
return home, hiding his toys in his secret drawer in the basement; no one appreciated his craze for toys. His
mind would always be playing with those toys, and he would occasionally twist and turn like them, unknowingly
laughing and clapping his hands. His mother would cry and run to light candles at the altar in the church to
rescue her demon-possessed son, while the father and son would exchange a wink. His father knew that those
were times when Albert would be crafting his mental picture for a new order of pellets that would make his toy
perform magic as it spun.
As usual, that afternoon after school and the evening snacks at home, Albert ran down with his toy to the
basement. The spinning wheel he had set spinning the previous day was still turning with the tiny box at its
edge. Albert sat down with his new spinner and carefully stopped the spinning wheel with the tiny box.
Oh again! He could hear cries coming from the box. It was Light and Dot crying out at the unknown, hurtful
effect they were experiencing. What happened? Albert looked in. He saw Mittu and asked why they were
crying. Mittu clarified that they were experiencing some strange feelings when Albert stopped the wheel. What
feelings? asked Albert.
I think the effect of stopping the spinning wheel, a consequence of the change in their state of motion,
explained Mittu.
Oh, really? But how can they experience it? They don't have any mass... Albert got confused.
That’s the point, said Mittu. Light is a one-dimensional creature and hence cannot visualise curved paths.
So for him, it is not possible to see any change of state, and the experience gives a strange demonic feeling. For
Dot, it is even worse as he can't even see what Light is able to do. Because I know that you have stopped the
spinning wheel, I realise it is a change in my state and can understand the Physics involved.
That’s interesting, said Albert. I wonder how you might feel on my wobbling spinner, he added.
What's that? asked all three. Instead of answering, Albert detached and tied the box to the edge of his
wobbling spinner. With a few attempts with the lead shots to balance the weight of the box, he let the spinner
wobble as it went around.
This time, all three, including Mittu, experienced a strange feeling.
Oh my! Mittu cried out. What have you done? Why am I having strange feelings?
What do you see? asked Albert.
I see that I am going around as before, but I have this strange new feeling that was not there before. I feel
like being pulled down and pushed out of myself, as if my world has started behaving strangely, he said.
You are also moving along a third axis, clarified Albert.
Third axis? There is no third axis! exclaimed Mittu.
Yes, there is! said Albert.
A third axis? cried out Light. I can't imagine the second axis Mittu was telling me about a while ago. He told
me to extrapolate my straight-line movement from every dot it makes to a new orthogonal axis. Absurd, but it
seemed to make sense logically, to explain the strange feelings I had. But now, for the same feeling, why do
you want me to imagine a third axis? Light objected.
No, no, this is different. The second axis is still here, and this is something strange, said Mittu.
Alright, friends! Albert interfered. Mittu, did you tell Light to imagine lines from every dot on his line?
He asked Mittu.
Yes, I did, said Mittu.
Cool! exclaimed Albert. He continued. Now imagine a dot orthogonal to every point on your surface. That
would produce a new surface like the one you already have.
Wow! But that's just a multiverse. Like what I experience when I cross a wormhole. That does not explain
a new dimension. I am sure that there can't be anything beyond two dimensions! objected Mittu.
Wait, Mittu, please! said Albert. Imagine lines drawn instead of a single dot orthogonal to every dot on
your surface, insisted Albert.
That will all form lines on these two dimensions I already have. There can't be anything orthogonal,
objected Mittu.
Apply the same logic you told Light to generalise his one dimension, suggested Albert. It might be difficult
but mathematically feasible.
Yeah, that's right. Mathematics always helps us penetrate the darkness of our mental inability, Mittu
agreed. So, you are telling me that this torture I experience is due to my travel through the new dimension?
Why then do we all experience it?
Good point, said Albert. Every new dimension brings you a similar experience, but looking more
intensively helps you see the difference. They are not identical. The jerk in one dimension is not identical to
the centripetal force in two dimensions. Some are unidirectional, some have polarity, and some are universal,
concluded Albert.
You seem to have an answer for everything. Mittu shared his appreciation for Albert.
Not really, said Albert. I am trying to understand the mystic powers in my three-dimensional world. It
seems that as we move up the ladder of dimensions, we can distinguish many more things and get even more
confused, Albert expressed his concern. For Dot, everything is time-like. Light experiences the same, but
realises the cause of what Dot experiences; yet he encounters an additional entity that puzzles him. And you,
Mittu, experience everything that both Light and Dot experience, but come across a third entity that is similar
to, yet different from, the other two. I am wondering how higher dimensions might influence my world!
That's a great thought, supported Mittu. Maybe you can help me demystify some of the challenging puzzles,
like the many worlds and wormholes in my world, he said.
Oh sure, Albert assured. The wormhole is just a slot I have put on this wheel to set the lead shots. It is like
a break in a line. When you asked Light to imagine a surface as lines attached to every dot in a line, you should
think of how the surface would look if there were a break, with no dots at some part of the line. It would create
a blank region. Just as Light may be able to turn around at the break in the line and walk back along the same
line while thinking that he is moving forward, your multiverse is just the same surface but seen as inverted!
However, the third dimension I told you about is formed by adding dots from all the dots on your surface
and extending them to form new lines and surfaces. That creates an additional surface distinct from the one you
are familiar with! Though not realised by you, it is the same world!! There is nothing like many worlds. You
are just moving to a different surface in the world.
Oh, it’s too complex, said Mittu. I need time to think about it.
It is simple. The complexity is only because of the limitation of our brain to visualise something that has
a higher dimension, comforted Albert.
“So, for Dot, the world is a point. For Light, it is a line, and for me, a surface. And what is it for you?” asked
Mittu.
“It's a cube or a sphere”, said Albert. “A cube if I extrapolate it over a surface, and a sphere if I assume
that there is no preferred direction and that I can be equally far from the edges of every point in my world”, he
clarified.
“If I make such a consideration, my world should be circular!” said Mittu.
“Yes, indeed”, affirmed Albert. But Light and Dot did not understand what those mathematicians were
talking about. Their sleepy eyes, too, wobbled with the spinning top.
“Did you say that Light had a strange feeling while on the spinning wheel?” asked Albert.
“Yes”, affirmed both Mittu and Light.
“That’s fascinating news”, exclaimed Albert. “I could understand if a massive object experiences a force
when it moves along a curved path. But if you could experience it, I may need to change my entire concept of
mass!” he said.
“Oh, yes, I got your point”, said Mittu. “A massless object can't experience centripetal or centrifugal
force!” he clarified.
“Exactly!” affirmed Albert. “What he experienced was the consequence of the curved trajectory of his
space or rather space-time!”
“Oh my God!” said Mittu. “What is space-time?”
“Oh, it is simply considering space and time together”, said Albert.
“No, I did not understand.” Mittu raised his objection. “I know what space is, but for me, time is an abstract
quantity. How can one consider them together?”
“Correct”, said Albert. “Your definition of time is based on observed variance in space. You define time
as the interval between two identically repeated events. It could be the movement of a pendulum or the interval
between two consecutive passes of the wheel about a fixed point. Nevertheless, your experience can be precisely
stated by specifying the space and time coordinates. So when you say that the sun rises at 6 am, you have
specified the location, which is your spatial coordinates on the planet and the time on your clock. Unless you
specify both, the sentence makes no sense.”
“Okay,” said Mittu, “so you are telling me that though time by itself does not mean anything, it is required
to precisely specify events and observations.”
“I did not say that time does not mean anything”, clarified Albert. “I only mean to say that time and space
have independent roles when we consider objective reality.”
“All right”, agreed Mittu. “What is it that you are getting at?” he asked.
“Light was always travelling along his straight-line universe, agreed?” asked Albert.
“No, but his box was on the spinning wheel”, objected Mittu.
“That’s for you. For him, there is no second dimension!” clarified Albert. “So the strange experience he
had was due to the curving of his space-time, something that you and I can perceive better than him”, Albert
clarified.
“Oh, my!”, exclaimed Mittu.
“More interestingly, the experience he has is the same irrespective of his position on the line. In other
words, for him this experience is universal”, said Albert. “While an up-and-down linear movement of his linear
dimension along the second-dimension axis may differ in polarity depending on the direction of the movement,
the curved trajectory results in a unipolar, universal experience.”
“Oh, my! I need a break!”, said Mittu while he dipped his face into a drop of water on the table and sucked
it in fully.
[Will continue..]
About the Author
Professor Ninan Sajeeth Philip is a Visiting Professor at the Inter-University Centre for Astronomy
and Astrophysics (IUCAA), Pune. He is also an Adjunct Professor of AI in Applied Medical Sciences [BCMCH,
Thiruvalla] and a Senior Advisor for the Pune Knowledge Cluster (PKC). He is the Dean and Director of airis4D
and has a teaching experience of 33+ years in Physics. His area of specialisation is AI and ML.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the era of the 4th Industrial Revolution, which operates on cyber-physical production systems.
Cutting-edge research and development in science and technology to create new knowledge and skills has
become the key to the new world economy. Most of the resources for this goal can be harnessed by integrating
biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently and responsibly realities and resources are used for abundance and wellness.
Artificial Intelligence Research and Intelligent Systems pursues this vision and looks for the best actions that
ensure an abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist talents in materialising a design into a product, a service or knowledge that benefits the community
and the planet.
Deploy
To realise, and to educate humanity, that knowledge which is not deployed makes no difference by its absence.
Campus
Situated on a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices
of the SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit
company for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by solar energy and has a rainwater harvesting facility that provides a sufficient
water supply for up to three months of drought. The computing facility on the campus is accessible from
anywhere, 24×7, through a dedicated optical fibre internet connection.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for a biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that
the site has consistently been environment-friendly and rich in biodiversity. airis4D is also growing fruit plants
that can feed birds, and provides water bodies to help them survive the drought.