Cover page
Image Name: Vitis vinifera. Vitis vinifera, the common grape vine, is a woody, deciduous, climbing plant
native to the Mediterranean, Central Europe, and southwestern Asia, and is widely cultivated on every continent
except Antarctica. Although between 5,000 and 10,000 varieties existed as of 2012, only a small number are
commercially significant for wine and table grape production. Its grapes can be eaten fresh, dried into raisins,
sultanas, or currants, or processed into juice, wine, and vinegar. Grape leaves are also used in many cuisines.
Varieties of Vitis vinifera form the foundation of the global wine industry, with all familiar wine types produced
from this species. Photo: Geetha Paul
Managing Editor: Ninan Sajeeth Philip
Chief Editor: Abraham Mulamoottil
Editorial Board: K Babu Joseph, Ajit K Kembhavi, Geetha Paul, Arun Kumar Aniyan, Sindhu G
Correspondence: The Chief Editor, airis4D, Thelliyoor - 689544, India
Journal Publisher Details
Publisher : airis4D, Thelliyoor 689544, India
Website : www.airis4d.com
Email : nsp@airis4d.com
Phone : +919497552476
Editorial
by Fr Dr Abraham Mulamoottil
airis4D, Vol.3, No.8, 2025
www.airis4d.com
This edition starts with the article by Arun Aniyan.
Need for AI Regulations: A Layperson’s Guide.
He emphasises the urgent need for comprehensive
regulations to govern the rapidly evolving field
of Artificial Intelligence (AI). While AI holds
immense promise—transforming healthcare, education,
transportation, and climate solutions—it also poses
significant risks if left unchecked. Key concerns
include algorithmic bias, privacy invasion, lack of
accountability, job displacement, misinformation, and
the ethical dilemmas surrounding autonomous weapons.
The article argues that just as rules exist for cars and
medicine, AI requires similar guardrails to ensure
safety, fairness, and accountability. Rather than stifling
innovation, thoughtful regulation can guide ethical
development, foster public trust, and ensure AI serves
the collective good.
The article Ambiguity, Redundancy, and
Predictability: An Entropic View of Human
Language by Jinsu Ann Mathew explores how
the concepts of entropy from information theory
help explain three key features of natural
language—ambiguity, redundancy, and predictability.
Though these features may seem contradictory, they
work together to make human communication both
expressive and reliable. Ambiguity introduces
uncertainty and flexibility, increasing entropy but
enriching meaning; redundancy reduces entropy
by reinforcing information, improving clarity and
robustness; and predictability allows efficient
processing by lowering uncertainty in communication.
Rather than flaws, these characteristics are strategic
tools that language uses to balance clarity and creativity.
The article argues that the power of human language
lies not in minimising entropy, but in managing it
intelligently to achieve a dynamic balance between
expressiveness and comprehensibility.
Ajit Kembhavi’s article Black Hole Stories-20:
The LIGO Gravitational Wave Detectors provides
an in-depth yet accessible explanation of how LIGO
(Laser Interferometer Gravitational-Wave Observatory)
detects gravitational waves using advanced Michelson
interferometry. Gravitational waves slightly alter the
distances between mirrors in the interferometer, causing
detectable changes in the interference patterns of
laser beams. LIGO’s detectors—featuring 4 km-long
arms, high-powered stabilised lasers, ultra-reflective
mirrors, and extreme vacuum systems—are engineered
to measure minuscule mirror displacements as small
as one ten-thousandth the diameter of a proton. The
article describes how noise from seismic activity,
thermal effects, and air molecules is minimised using
sophisticated technology, allowing LIGO to achieve the
sensitivity necessary to detect distant cosmic events.
The current version, Advanced LIGO (aLIGO), is most
effective in the 100–300 Hz frequency range and has
already made groundbreaking detections, paving the
way for future discoveries and upcoming facilities like
LIGO-India.
The article Is Every Crash a Firework?
Observing Starbursts in Interacting Galaxies by
Robin Thomas explores how interactions between
galaxies affect star formation, using ultraviolet (UV)
and radio (HI) observations of three nearby galaxy pairs.
While such encounters can trigger dramatic bursts of
star formation, the study finds that the outcomes vary
widely depending on factors like encounter geometry,
gas content, and especially the mass ratio between
galaxies. Near-equal-mass encounters tend to produce
stronger star formation enhancements, whereas more
unequal pairs show only modest effects. Interestingly,
galaxy separation alone does not correlate well with
star formation activity. The research also identifies
candidates for tidal dwarf galaxies—new galactic
entities forming from stripped material, highlighting
that interactions can both disrupt and create galaxies.
These findings support a nuanced view of galaxy
evolution, advocating for future studies combining
simulations with molecular gas and spectral data to
better understand how cosmic collisions shape the
universe.
Sindhu G’s article Understanding the Johnson
Magnitude offers a comprehensive overview of the
Johnson photometric system, a foundational method
in observational astronomy for measuring stellar
brightness. Developed in the 1950s by Harold
Johnson and William Morgan, the system introduced
standardised broadband filters—U (ultraviolet), B
(blue), and V (visual), later expanded to R
(red) and I (infrared)—which remain central to
astrophysical research. The system revolutionised
photometry by enabling accurate, reproducible
brightness measurements and colour indices like (B–V),
which help determine stellar temperatures, classify
stars, correct for interstellar extinction, and estimate
distances. Despite newer systems like SDSS and CCD-
adapted filters, the Johnson system endures due to
its historical continuity, simplicity, and widespread
calibration legacy. It remains a cornerstone in both
research and education, linking past and present
astronomical observations.
The article Protein Folding and Its Vital
Role in Biology by Geetha Paul explains how
proteins—essential molecules responsible for nearly
all cellular functions—must fold into precise three-
dimensional shapes to function properly. This folding
process begins as a linear chain of amino acids is
assembled during translation and proceeds through
stages, forming secondary structures (e.g., alpha
helices, beta sheets), tertiary structures, and sometimes
quaternary structures. Molecular chaperones facilitate
proper protein folding and prevent errors under stress.
Misfolded proteins can aggregate, leading to serious
diseases like Alzheimer's, Parkinson's, and cystic
fibrosis by disrupting cellular function and triggering
toxicity, inflammation, and impaired clearance systems.
The article emphasises that understanding protein
folding and misfolding is crucial for advancing
treatments in neurodegenerative and other protein-
related diseases.
Contents
Editorial ii
I Artificial Intelligence and Machine Learning 1
1 Need for AI Regulations: A Layperson’s Guide 2
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 What Exactly is AI Regulation? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 The Promise of AI: Why We're Excited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 The Perils of Unregulated AI: Why We Should Be Concerned . . . . . . . . . . . . . . . . . . . . 3
1.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Ambiguity, Redundancy, and Predictability: An Entropic View of Human Language 7
2.1 Ambiguity: When One Form Has Many Meanings . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Redundancy: Saying More to Ensure Understanding . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Predictability: Anticipating What Comes Next . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 Striking the Balance: Why Language Needs All Three . . . . . . . . . . . . . . . . . . . . . . . . 9
II Astronomy and Astrophysics 11
1 Black Hole Stories-20
The LIGO Gravitational Wave Detectors 12
1.1 A Simple Gravitational Wave Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2 Laser Interferometric Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 The Advanced LIGO Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 Is Every Crash a Firework? Observing Starbursts in Interacting Galaxies 17
2.1 Introduction: Galaxies in Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2 Observing in Two Languages: Ultraviolet and Radio . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Trends in the local star formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Global Context: Placement on the Star-Forming Main Sequence . . . . . . . . . . . . . . . . . . 19
2.5 Limitation of the study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6 Future scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3 Understanding the Johnson Magnitude System in Astronomy 22
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.2 The Historical Background and Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Structure of the Johnson Photometric System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Applications of Johnson Magnitudes in Astrophysics . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.5 Comparison with Other Photometric Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Limitations and Calibration Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.7 Legacy and Continuing Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
III Biosciences 26
1 Protein Folding and its Vital Role in Biological Function 27
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.2 Translation and Primary Structure Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.3 Folding to Secondary and Tertiary Structure: The Search for Native Conformation . . . . . . . . 28
1.4 Quality Control, Quaternary Structure Assembly, and Functional Verification . . . . . . . . . . . 28
1.5 Mechanisms by Which Protein Aggregates Disrupt Cellular Function . . . . . . . . . . . . . . . 29
Part I
Artificial Intelligence and Machine Learning
Need for AI Regulations: A Layperson’s
Guide
by Arun Aniyan
airis4D, Vol.3, No.8, 2025
www.airis4d.com
1.1 Introduction
Artificial Intelligence (AI) is rapidly transforming
our world. From the helpful suggestions on our
smartphones to the complex algorithms driving self-
driving cars, AI is no longer a futuristic concept but
an integral part of our daily lives. While AI offers
immense potential for progress and innovation, its
rapid advancement also brings significant challenges
and risks. Just as we have rules and laws for new
technologies like cars or medicine, we need a clear set
of guidelines and regulations for AI. This article will
explain, in simple terms, why AI regulations are not just
a good idea but an absolute necessity for our collective
future.
1.2 What Exactly is AI Regulation?
Before we dive into why we need them, let’s
understand what AI regulations are. Think of
regulations as a rulebook. For cars, we have traffic
laws, seatbelt regulations, and manufacturing standards
to ensure safety. For medicine, there are strict rules
about testing, approval, and prescription to protect
public health. AI regulations aim to do the same for AI.
Set Standards: Define what is considered
acceptable and safe AI behaviour.
Establish Responsibilities: Determine who is
accountable when AI makes a mistake or causes
harm.
Protect Rights: Safeguard individual privacy,
prevent discrimination, and ensure fairness.
Foster Trust: Build public confidence in AI
technologies by ensuring they are developed and
used responsibly.
Guide Innovation: Provide a clear framework for
developers and companies, encouraging ethical
and beneficial AI development while preventing
harmful applications.
The burgeoning field of artificial intelligence presents
both immense opportunities and significant challenges.
Therefore, the discussion around AI regulations is
not an attempt to hinder technological advancement
or stifle the groundbreaking work being done by
innovators. On the contrary, the core objective of
developing comprehensive AI regulations is to establish
a framework that ensures the benefits of AI are widely
distributed across society, promoting inclusivity and
equitable access to these benefits. Simultaneously,
it's crucial to proactively mitigate potential negative
consequences, preventing the inadvertent creation of
new societal, ethical, or economic problems that could
arise from the unchecked development and deployment
of AI systems. This thoughtful and balanced approach is
crucial for harnessing the full potential of AI responsibly
and sustainably.
1.3 The Promise of AI: Why We're Excited
To understand the need for regulation, it's
important to appreciate the incredible promise of AI.
AI can:
Revolutionise Healthcare: Assist in diagnosing
diseases earlier, developing personalised
treatments, and even speeding up drug discovery.
Imagine AI helping doctors identify subtle signs
of illness that human eyes might miss.
Transform Transportation: Lead to safer and
more efficient self-driving vehicles, reducing
accidents and traffic congestion.
Boost Economic Growth: Automate repetitive
tasks, increase productivity, and create new
industries and job opportunities.
Address Global Challenges: Help us tackle
climate change, optimise energy consumption,
and manage natural resources more effectively.
Enhance Education: Provide personalised
learning experiences for students, adapting to
their individual needs and pace.
With such vast potential, it's easy to be optimistic. The
future may hold advancements in healthcare, education,
and environmental sustainability, ultimately leading
to a more prosperous and equitable world. Imagine
AI-driven diagnostics that identify diseases at their
earliest stages, personalised learning platforms that
adapt to each student’s unique needs, or intelligent
systems that optimise energy consumption and waste
reduction. These are just a few glimpses of the positive
transformations AI promises.
However, without proper guardrails, this promise
can quickly turn into peril. The very power that allows
AI to solve complex problems also presents significant
risks if not managed responsibly. Unregulated AI could
lead to widespread job displacement, algorithmic bias
perpetuating societal inequalities, or even autonomous
systems making critical decisions without human
oversight. The potential for misuse, such as in
surveillance or the spread of misinformation, is also a
serious concern. Therefore, establishing robust ethical
frameworks and regulatory measures is not merely an
option, but an urgent necessity to ensure AI serves
humanity’s best interests.
1.4 The Perils of Unregulated AI:
Why We Should Be Concerned
While the benefits are clear, the risks of
unregulated AI are equally significant and, in some
cases, truly alarming.
1.4.1 Bias and Discrimination: The ”Garbage In, Garbage Out” Problem
AI systems learn from the data they are fed. If this
data reflects existing human biases—whether conscious
or unconscious—the AI will learn and perpetuate those
biases. This is often referred to as ”garbage in, garbage
out.”
Some examples are:
Hiring Algorithms: If an AI hiring tool is trained
on historical data where certain demographics
were less likely to be hired for specific roles, it
might unfairly screen out qualified candidates
from those groups, even if they are equally or
more capable.
Facial Recognition: Studies have shown that
some facial recognition systems are less accurate
at identifying individuals with darker skin
tones or women, leading to higher rates of
misidentification and potential wrongful arrests.
This is because the training data for these systems
was predominantly composed of lighter-skinned
individuals and men.
Loan Approvals: An AI system used by banks
to approve loans could deny loans to individuals
from certain neighbourhoods or backgrounds
if its training data links those demographics
to higher default rates, even if the individual
applicant is creditworthy.
Regulations can mandate fairness audits, require
transparency in data collection, and impose penalties for
discriminatory outcomes, forcing developers to actively
identify and mitigate bias in their AI systems.
1.4.2 Privacy Invasion: The All-Seeing Eye
AI systems often require vast amounts of data to
function effectively. This data can include personal
information, online behaviours, location data, and even
biometric details. Without strict privacy regulations,
this data can be misused or exposed.
Targeted Advertising: While often seen as
harmless, extreme personalisation based on AI
analysis of personal data can lead to manipulative
advertising practices or even psychological
targeting.
Surveillance: Governments and corporations
could use AI-powered surveillance systems to
track citizens' movements, monitor their online
activities, and analyse their behaviours without
consent, leading to a chilling effect on civil
liberties.
Data Breaches: AI systems storing massive
amounts of personal data become prime targets
for cyberattacks. A breach could expose sensitive
information, leading to identity theft, financial
fraud, and other serious consequences.
Regulations like GDPR (General Data Protection
Regulation) in Europe are pioneers in this area, giving
individuals more control over their data. Future AI
regulations need to expand on this, ensuring data
minimisation (collecting only what’s necessary), secure
data storage, and strict rules around consent and data
usage.
1.4.3 Accountability and Liability: Who’s to
Blame?
When an AI system causes harm, who is
responsible? Is it the developer who coded the
algorithm, the company that deployed it, or the user who
interacted with it? This question becomes incredibly
complex as AI systems become more autonomous. The
following examples illustrate the case.
Self-Driving Car Accidents: If an autonomous
vehicle causes an accident, is it the car
manufacturer, the software developer, or the
owner of the vehicle who is liable for damages
or injuries?
AI in Healthcare: If an AI diagnostic tool
provides an incorrect diagnosis that leads to
adverse patient outcomes, who is accountable?
The hospital, the AI vendor, or the doctor who
relied on the AI’s output?
Automated Trading Systems: A malfunction
in an AI-powered financial trading system could
cause significant market disruptions or financial
losses. Pinpointing responsibility in such a
complex chain of events is challenging.
Regulations can establish clear frameworks for
accountability, defining legal responsibilities for AI
developers, deployers, and users. This can involve
requiring human oversight, independent audits, and
robust testing before deployment.
1.4.4 Job Displacement and Economic
Inequality: The Automation Dilemma
As AI and automation become more sophisticated,
they will inevitably impact the job market. While
new jobs will be created, many existing jobs may
be automated, potentially leading to widespread job
displacement and exacerbating economic inequality if
not managed carefully.
Manufacturing and Logistics: AI-powered
robots and automated systems can perform tasks
historically done by human workers in factories
and warehouses.
Customer Service: Chatbots and AI assistants
are increasingly handling customer inquiries,
reducing the need for human customer service
representatives.
Data Entry and Clerical Work: Many
administrative tasks can be automated by AI.
While regulations can’t stop technological progress,
they can influence its trajectory. Governments
can implement policies like retraining programs,
universal basic income (UBI) experiments, or incentives
for companies to invest in job creation alongside
automation. Regulations could also require impact
assessments before large-scale AI deployments to
understand potential societal effects.
1.4.5 Algorithmic Manipulation and
Misinformation: The Echo Chamber
Effect
The pervasive influence of AI algorithms,
particularly those employed by social media platforms,
stems from their fundamental design: to maximise
user engagement. This objective, while seemingly
benign, often leads to the prioritisation of content
that affirms a user’s pre-existing beliefs. This
phenomenon, commonly referred to as the creation
of ”echo chambers,” has profound implications for
individual perception and societal discourse.
Within these algorithmic echo chambers, users
are consistently exposed to information that reinforces
their current viewpoints, while dissenting or alternative
perspectives are systematically filtered out. This
selective exposure can lead to a distorted understanding
of reality, as individuals become less aware of the
complexities and nuances of various issues. The
constant validation of existing beliefs can also foster
an increased sense of certainty and an unwillingness to
engage with opposing arguments.
A significant consequence of this algorithmic
design is a heightened susceptibility to misinformation
and manipulation. When users are primarily exposed
to content that aligns with their biases, their critical
thinking skills can be dulled. They may become less
adept at discerning factual information from fabricated
narratives, as the content they consume consistently
validates their existing worldview. This vulnerability
makes individuals more susceptible to propaganda,
conspiracy theories, and other forms of deceptive
content, which can be strategically disseminated within
these echo chambers. The long-term effects include a
polarisation of opinions, a breakdown in civil discourse,
and a diminished capacity for collective problem-
solving.
Political Polarisation: AI algorithms can
reinforce existing political views by showing
users only content that aligns with their ideology,
leading to increased polarisation and reduced
civil discourse.
Spread of Fake News: Malicious actors can
use AI to generate highly convincing fake news
articles, images, and videos (deepfakes), which
can spread rapidly and influence public opinion,
elections, and even incite violence.
Erosion of Critical Thinking: Constant
exposure to algorithmically curated content
can diminish individuals' capacity for critical
thinking and discernment between fact and
fiction.
Regulations could require greater transparency from
platforms about how their algorithms work, impose
stricter rules on content moderation, and mandate fact-
checking initiatives. They could also hold platforms
accountable for the spread of harmful misinformation.
1.4.6 Autonomous Weapons Systems: The
”Killer Robots” Dilemma
One of the most alarming and ethically fraught
concerns surrounding artificial intelligence is the
proliferation and development of fully autonomous
weapons systems. These are sophisticated AI-powered
weapons, often colloquially referred to as ”killer robots,”
designed to independently select and engage targets
without human intervention or oversight in the decision-
making process. The very notion of machines making
life-or-death decisions raises profound moral, legal, and
ethical questions, striking at the core of human dignity
and accountability.
The potential implications of killer robots are vast
and terrifying. In a conflict scenario, the deployment
of such systems could lead to an accelerated pace of
warfare, reducing the time for human deliberation and
potentially escalating conflicts beyond control. There
are also concerns about the potential for unintended
consequences, as the AI’s programming might not fully
account for the complexities of real-world situations,
leading to civilian casualties or misidentification of
targets. Furthermore, the absence of a human in
the loop blurs the lines of accountability, making it
difficult to assign responsibility when errors or atrocities
occur. The potential for these weapons to fall into the
wrong hands, or to be used in violation of international
humanitarian law, adds another layer of grave concern
to an already complex issue.
The following are the main concerns in this regard.
Loss of Human Control: Handing over life-
and-death decisions to machines raises profound
ethical and moral questions.
Escalation of Conflict: Autonomous weapons
could accelerate conflicts and reduce the
threshold for war.
Accountability Gap: If an autonomous weapon
commits a war crime, who is responsible? The
programmer, the manufacturer, or the commander
who deployed it?
Many experts and organisations are calling for AI
regulation to ensure its responsible development and
deployment. This is driven by a number of concerns,
including the potential for job displacement, ethical
dilemmas surrounding autonomous decision-making,
the spread of misinformation, and privacy violations.
Proponents of regulation argue that clear guidelines can
foster public trust, encourage innovation within safe
boundaries, and prevent a ”race to the bottom” where
companies prioritise speed over safety. Additionally,
regulation can address issues of accountability when
AI systems cause harm and ensure equitable access to
the benefits of AI technologies.
1.5 Conclusion
The incredible potential of Artificial Intelligence
is undeniable, promising advancements that can enrich
our lives and solve complex global challenges. However,
as this guide has outlined, the path to fully realising
these benefits is fraught with significant risks if left
unregulated. From the insidious spread of bias and
the erosion of privacy to the complex questions of
accountability and the profound ethical dilemmas posed
by autonomous weapons, the perils of unchecked AI
are too great to ignore.
Just as societies have historically established rules
and frameworks for other transformative technologies, a
comprehensive and proactive approach to AI regulation
is not merely advisable but essential. Such regulations
are not intended to stifle innovation, but rather to guide
it responsibly, ensuring that AI development remains
ethical, transparent, and aligned with human values.
By setting clear standards, defining accountability,
protecting fundamental rights, and mitigating societal
disruptions, well-crafted AI regulations can foster public
trust and pave the way for a future where AI serves
humanity’s best interests, rather than undermining them.
The time to act is now, to ensure that the transformative
power of AI is harnessed for the collective good,
securing a safer and more equitable future for all.
1.6 References
https://gdpr-info.eu/
https://www.brookings.edu/articles/algorithmic-bias-how-data-discriminates/
https://www.un.org/disarmament/topics/lethal-autonomous-weapons-systems/
https://digital-strategy.ec.europa.eu/en/policies/ethical-guidelines-trustworthy-ai
About the Author
Dr. Arun Aniyan leads the R&D for
artificial intelligence at DeepAlert Ltd, UK. He comes
from an academic background and has experience
in designing machine learning products for different
domains. His major interest is knowledge representation
and computer vision.
Ambiguity, Redundancy, and Predictability:
An Entropic View of Human Language
by Jinsu Ann Mathew
airis4D, Vol.3, No.8, 2025
www.airis4d.com
In the previous articles, we explored how entropy
helps quantify uncertainty in language—from the
fundamental definitions in information theory to specific
measures like lexical entropy, n-gram entropy, and
cross-entropy in language models. Building on that
foundation, we now turn to three essential characteristics
of natural language: ambiguity, redundancy, and
predictability.
These features often seem contradictory.
Ambiguity introduces multiple meanings, which could
confuse a listener. Redundancy repeats information
that might appear unnecessary. Predictability makes
language easier to process but potentially less
informative. Yet, together, these features form a delicate
balance that allows human communication to be both
expressive and reliable.
Viewed through the lens of entropy, we can
better understand why language behaves this way.
Ambiguity reflects areas of high entropy—where
multiple interpretations are possible. Redundancy
works in the opposite direction, lowering entropy
to make messages more robust against noise or
misunderstanding. Predictability, meanwhile, allows
both humans and machines to process information
efficiently by reducing uncertainty in upcoming words
or phrases.
This article explores how entropy provides a useful
framework for analyzing these three aspects of language.
We’ll see how ambiguity can be measured using lexical
entropy, how redundancy contributes to communication
reliability, and how predictability shapes both sentence
processing and model performance. Rather than treating
these features as limitations, we’ll understand them as
essential design elements of human language—and key
considerations in computational models that aim to
understand or generate text.
2.1 Ambiguity: When One Form Has
Many Meanings
Ambiguity occurs when a word, phrase, or
sentence can be interpreted in more than one way.
It adds uncertainty to communication and is a key
contributor to higher entropy in language. There are
several forms:
Lexical ambiguity
A single word has multiple meanings.
Example: “He swung the bat.”
The word “bat”
could refer to an animal (a flying mammal) or a piece
of sports equipment. Without more context, it's unclear
which one is meant.
Syntactic ambiguity
A sentence allows multiple grammatical
interpretations.
Example: “Visiting relatives can be annoying.”
This can mean either (1) the act of visiting relatives
is annoying, or (2) the relatives who are visiting are
annoying.
Semantic ambiguity
The overall meaning is unclear even if the grammar
is correct.
Example: “He saw the man with the telescope.”
It's unclear whether he used a telescope to see the
man, or whether the man had the telescope.
In all these cases, ambiguity increases
uncertainty—and therefore entropy—because multiple
interpretations are possible. However, in real
communication, context usually helps resolve the
intended meaning. For example, in “The bat flew
across the cave,” the context makes it clear that “bat”
refers to the animal.
In computational linguistics, this kind of
uncertainty is often measured using lexical entropy.
Words that are used evenly across multiple senses tend
to have higher entropy, indicating greater ambiguity.
Systems like search engines or translation tools must
estimate the most likely meaning based on surrounding
words.
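As a rough illustration of how lexical entropy quantifies this, the short Python sketch below computes the Shannon entropy of a word's sense distribution for a word such as "bat"; the sense labels and probabilities are invented for illustration, not taken from any corpus.

import math

def shannon_entropy(probs):
    # Shannon entropy (in bits) of a discrete probability distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical sense distributions (illustrative, not corpus-derived):
bat = {"animal": 0.5, "sports equipment": 0.5}            # senses used evenly
bank = {"financial institution": 0.9, "river bank": 0.1}  # one dominant sense

print(shannon_entropy(bat.values()))   # 1.00 bit  -> more ambiguous
print(shannon_entropy(bank.values()))  # ~0.47 bit -> less ambiguous

A word whose uses are spread evenly over its senses comes out with higher entropy, and hence greater ambiguity, than one dominated by a single sense.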
2.2 Redundancy: Saying More to
Ensure Understanding
Redundancy in language refers to the inclusion of
extra information that may not be strictly necessary
to convey the core message, but helps make
communication more clear, reliable, and error-resistant.
From an information theory perspective, redundancy
works by reducing entropy—it makes language more
predictable and easier to process, especially when
there’s a chance of miscommunication.
While redundancy might seem wasteful, it's
actually a key feature of natural language. It helps
listeners understand a message even when parts of it
are unclear, misheard, or missing. This is particularly
useful in noisy environments, fast speech, or casual
conversation.
Consider this sentence:
“I saw it with my own eyes.”
The phrase “with
my own eyes” is technically unnecessary—“I saw it”
already conveys the full meaning. But the repetition
serves to emphasize the speaker’s certainty and make
the message more forceful and clear. If the sentence
were spoken in a noisy room, the added redundancy
also increases the chance that the core meaning is still
understood.
Another example:
“The reason he left is because he was tired.”
This sentence contains overlapping phrases: “the
reason” and “because” both express causality. One
could be removed without changing the basic meaning,
but including both makes the structure more familiar
and easier to follow.
Redundancy is a natural strategy for error
correction in human communication. Just like in
digital systems, where repeated signals can help correct
transmission errors, redundant elements in language
help listeners recover meaning when the input is
incomplete or distorted.
In short, redundancy may reduce efficiency, but it
improves clarity, emphasis, and robustness—qualities
that are essential in real-world communication.
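The parallel with digital error correction can be made concrete. The minimal Python sketch below, an illustration rather than anything from this article, encodes a message with a three-fold repetition code and recovers it by majority vote even after one symbol is corrupted, much as a redundant phrase lets a listener recover a word lost to noise.

from collections import Counter

def encode(bits):
    # Send every bit three times: redundancy that lowers entropy per symbol.
    return [b for b in bits for _ in range(3)]

def decode(received):
    # Majority vote over each group of three received bits.
    groups = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [Counter(g).most_common(1)[0][0] for g in groups]

message = [1, 0, 1]
sent = encode(message)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                     # noise corrupts one transmitted bit
print(decode(sent) == message)  # True: the message is still recovered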
2.3 Predictability: Anticipating What
Comes Next
Predictability in language refers to how easily a
listener or reader can guess what word or phrase is likely
to come next. It plays a central role in how we process
and understand language in real time. When a sentence
follows familiar patterns or uses common phrases, it
becomes more predictable and easier to comprehend.
From an information theory perspective,
predictability is closely related to low entropy. The
more predictable a word is in its context, the lower its
entropy. This means the listener needs less effort to
understand it, since it causes less surprise.
For example:
“She drank a cup of...”
Most people would expect the next word to
be something like “tea” or “coffee.” These are high-
probability continuations in everyday language, so they
are highly predictable and carry less information. If
the sentence ended with “vinegar” instead, it would be
unexpected, and therefore more informative—but also
more surprising.
Predictability is not just a linguistic curiosity—it’s
central to how our brains handle language. Studies
in psycholinguistics show that people read predictable
words faster, fixate on them for less time, and remember
them differently. Our minds are constantly generating
expectations based on what we've heard or read so far.
Language models work in a similar way. They
calculate the probability of each possible next word
and use that to generate fluent sentences. The more
predictable a word is in its context, the higher its
probability—and the lower its contribution to the
model’s calculated entropy.
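As a simple illustration of this relationship, the Python sketch below computes the surprisal, -log2 p, of several continuations of "She drank a cup of ..."; the probabilities are invented for illustration and do not come from any particular language model.

import math

def surprisal_bits(p):
    # Information content (surprisal) of an event with probability p, in bits.
    return -math.log2(p)

# Assumed probabilities for the next word after "She drank a cup of ..."
continuations = {"tea": 0.40, "coffee": 0.35, "water": 0.20, "vinegar": 0.001}

for word, p in continuations.items():
    print(f"{word:8s} p = {p:<6} surprisal = {surprisal_bits(p):.2f} bits")
# "tea" costs about 1.3 bits; "vinegar" costs about 10 bits: informative, but surprising.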
In practice, predictability helps make
communication efficient and smooth. Speakers
can rely on shared knowledge and familiar structures,
while listeners use context to fill in gaps. This
makes language faster to process and easier to follow,
especially in everyday conversation.
2.4 Striking the Balance: Why
Language Needs All Three
Language is not designed to eliminate
entropy—but to manage it intelligently. Ambiguity,
redundancy, and predictability each affect the level
of entropy in different ways, and the effectiveness of
language lies in how it balances these forces rather than
avoiding them entirely.
Ambiguity increases entropy—on purpose
Ambiguity introduces uncertainty into a message,
raising entropy. A word with multiple meanings
or a sentence open to multiple interpretations can
make communication less certain on the surface.
But this isn't a flaw—it's a feature. Ambiguity
makes language compact and flexible, allowing us
to reuse words and structures in different contexts.
It enables metaphor, humor, creativity, and even
politeness. Without ambiguity, language would be rigid
and unnecessarily long. But without limits, it would
become unintelligible.
Redundancy reduces entropy—strategically
Redundancy helps bring entropy down by
repeating or reinforcing information. This makes
messages more robust and recoverable, especially
in noisy or uncertain environments. From a
communication theory standpoint, redundancy serves
as a buffer against loss: even if part of the signal is lost,
the message can still be understood. Redundancy lowers
entropy—but at the cost of efficiency. So language uses
it selectively, where precision and clarity matter most.
Predictability controls entropy—efficiently
Predictable patterns in language reduce entropy by
making it easier to anticipate the next word or structure.
This speeds up processing and lightens the cognitive
load. Language relies on familiar phrases, grammar
rules, and contextual cues so that not every word needs to
be consciously analyzed. High predictability means low
entropy—but too much predictability makes language
dull or repetitive.
The Real Power: Balancing Entropy, Not
Erasing It
The genius of human language is in how it balances
entropy, rather than trying to minimize or maximize it
outright.
A poem may allow more ambiguity (higher
entropy) for emotional or aesthetic depth.
A legal document may use redundancy (lower
entropy) to avoid misinterpretation.
Everyday conversation often favors predictability
to enable fast and fluent exchange.
Natural language operates in this sweet spot—not
too chaotic, not too rigid. It allows just enough entropy
to be expressive, but controls it through redundancy
and predictability to remain clear and usable.
References
Maximum Entropy Models For Natural Language
Ambiguity Resolution
Entropy, prediction and the cultural ecosystem of
human cognition
The Word Entropy of Natural Languages
Prediction during language comprehension: what
is next?
The communicative function of ambiguity in
language
Excess entropy in natural language: Present state
and perspectives
Redundancy can benefit learning: Evidence from
word order and case marking
About the Author
Jinsu Ann Mathew is a research scholar
in Natural Language Processing and Chemical
Informatics. Her interests include applying basic
scientific research on computational linguistics,
practical applications of human language technology,
and interdisciplinary work in computational physics.
Part II
Astronomy and Astrophysics
Black Hole Stories-20
The LIGO Gravitational Wave Detectors
by Ajit Kembhavi
airis4D, Vol.3, No.8, 2025
www.airis4d.com
In this story we will describe the LIGO
gravitational wave detectors, which are based on the
principle of the Michelson interferometer.
1.1 A Simple Gravitational Wave
Detector
We have seen in BHS-19 how the concept
of a Michelson interferometer provides a way to
detect gravitational waves. We reproduce here, for
convenience, the corresponding figure.
Figure 1: A schematic representation of a Michelson
interferometer for detecting gravitational waves.
Credit: Kaushal Sharma and Ajith Parameswaran.
The functioning of the interferometer has been
described in BHS-19. To summarise, half of a laser
beam reaching the semi-reflecting mirror M passes
through to mirror M1, while the other half is reflected
to mirror M2. The returning beams meet at the detector.
The distances D1 and D2 are equal, so the beams, which
start from a single source, reach the detector in phase
and combine to produce a bright spot at the centre of
the detector. Now suppose a gravitational wave passes
through the circle of particles, so that it is stretched to
form an ellipse as shown on the right of the figure. The
mirror M1 is now farther from M than it was in the
absence of the wave, while M2 has moved closer to M, so
D1 is now greater than D2. The distances travelled by the two
beams are therefore different and for a certain difference
in the path length, equal to an odd integral multiple
of half the wavelength of the laser, the interference is
destructive and a dark spot is seen. As the mirrors move
back and forth the interference changes periodically,
which can be detected by the changing intensity of the
spot. The changing interference becomes a signature
of the passage of a gravitational wave. The idea is
schematically represented in the lower half of Fig. 1.
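The idea can be captured in a few lines of code. The Python sketch below is purely illustrative: it applies the ideal two-beam interference law, in which the output intensity varies as the square of the cosine of half the phase difference, and uses the 1064 nm laser wavelength quoted later in this article; all the refinements of the real detector are ignored.

import math

WAVELENGTH = 1064e-9  # metres; the aLIGO laser wavelength quoted later in the article

def detector_intensity(path_difference, i0=1.0):
    # Ideal two-beam interference: output intensity varies as cos^2 of half
    # the phase difference 2*pi*delta/lambda between the returning beams.
    phase = 2 * math.pi * path_difference / WAVELENGTH
    return i0 * math.cos(phase / 2) ** 2

print(detector_intensity(0.0))                 # 1.0 -> bright spot for equal path lengths
print(detector_intensity(WAVELENGTH / 2))      # ~0  -> dark spot at half a wavelength
print(detector_intensity(3 * WAVELENGTH / 2))  # ~0  -> or any odd multiple of it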
1.2 Laser Interferometric Detectors
These are based on the principle of the Michelson
interferometer described above. To be able to detect the
very weak gravitational waves reaching the Earth from
distant cosmic events, the detectors have very long arm
lengths. The present detectors of this type are (1) the
detectors near Livingston, Louisiana and near Hanford,
Washington State, which are both part of LIGO, the
Laser Interferometer Gravitational-Wave Observatory,
each with 4 km long arms; (2) the VIRGO detector
near Pisa with 3 km arms; (3) the GEO600 detector
near Hanover with 600 m arms and (4) the KAGRA
underground detector at Kamioka in Japan with 3 km
long arms. The detectors undergo periodic upgrades in
technology to make them more sensitive. The current
version of the LIGO detector is known as Advanced-
LIGO (aLIGO), and the following description pertains
to this version of the detector.
Translating from laboratory scale interferometers
to the LIGO scale with each arm being 4 km long
is obviously rather difficult. The difficulty is greatly
compounded by the accuracy that is required for the
detection of gravitational waves. In the laboratory,
an interferometer is used, for example, to measure
the wavelength of light. This requires an accuracy
of about 10⁻¹² m in very precise measurements. But
the displacement caused by a gravitational wave in the
LIGO mirrors can be as small as 10⁻¹⁹ m, which is about
1/8500 of the radius of a proton. This extremely small
displacement has to be observed against a background
of much larger displacements caused by various effects.
The required sensitivity is reached by adapting a
series of technical innovations designed to eliminate
or minimise spurious vibrations, which we can term as
noise. Some of the measures are (1) using a powerful
laser as the source of the light beams; (2) using extra
mirrors to increase the intensity of light and the distance
traversed by the beam through repeated reflections; (3)
placing the entire interferometer in a high vacuum
enclosure and (4) using very sophisticated suspensions
for holding the mirrors. The details that we describe
below are for aLIGO, but similar systems are used in
the other detectors.
1.3 The Advanced LIGO Detector
Michelson interferometers originally used
conventional light at optical wavelengths, whereas
gravitational wave detectors use lasers. The laser
provides a very intense, steady and non-diverging
beam of light at a precisely determined wavelength.
The complex laser device used in the aLIGO detector
produces a laser beam at a wavelength of 1064 nm, with
an output power of 200 watts. It is one of the steadiest
and most powerful lasers operating at the chosen wavelength,
which is in the near-infrared part of the spectrum. The
laser is stabilised to a level of one part in ten billion
in intensity and one part in a billion in frequency.
The power produced by the laser needs further great
amplification to serve the needs of the detector, which
is achieved by recycling the beam between mirrors
designed for the purpose. The power finally achieved is
about 750 kilowatts.
The length of each arm of LIGO, from the beam
splitter to the reflecting mirror at the end of the arm is 4
km. The longer this length, the higher is the sensitivity
of the detector. A much longer length is effectively
achieved by placing an additional mirror, called a signal
recycling mirror, in each arm exactly 4 km from the
mirror at the end of the arm. The 4 km long space is
known as a Fabry-Perot cavity. The beam in each cavity
is reflected back and forth about 300 times, effectively
increasing the arm length to about 1200 km, and also
increasing the light intensity in the arms, which helps
in reducing noise.
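A rough back-of-the-envelope sketch in Python, using only the figures quoted in this section together with the mirror displacement of 10⁻¹⁹ m mentioned earlier, shows how the folding of the beam multiplies the optical path change, and hence the measurable phase shift, by the number of round trips.

import math

ARM_LENGTH = 4_000.0     # metres: each LIGO arm
ROUND_TRIPS = 300        # approximate number of reflections in the Fabry-Perot cavity
WAVELENGTH = 1064e-9     # metres: aLIGO laser wavelength
MIRROR_SHIFT = 1e-19     # metres: the displacement scale quoted earlier in the article

effective_length = ARM_LENGTH * ROUND_TRIPS
print(f"Effective arm length: {effective_length / 1000:.0f} km")   # about 1200 km

# Optical path change accumulated over all round trips, and the phase shift
# it produces relative to the other arm.
extra_path = 2 * ROUND_TRIPS * MIRROR_SHIFT
phase_shift = 2 * math.pi * extra_path / WAVELENGTH
print(f"Path change: {extra_path:.1e} m, phase shift: {phase_shift:.1e} rad")
# Folding the beam ~300 times makes the signal ~300 times larger than a single pass.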
The arrangement of the optics is such that
when the arm lengths are equal, a dark spot is
produced at the detector. As the arm lengths change,
the interference changes, and the spot gets brighter.
Conventionally, equal arm lengths would lead to
constructive interference and a bright spot would be
seen at the detector. But in the LIGO interferometer, a
dark spot is chosen by adjusting the optics, because it
is easier to judge the brightening of a dark spot, rather
than the dimming of a bright spot. A schematic of
the aLIGO interferometer is shown in Figure 2. The
functions of the various mirrors are indicated in the
figure. The arrangement in the real interferometer is far
more complex, of course.
The mirrors in the LIGO interferometer are used
to set the beam paths as described above, but they also
have another crucial function: they act as free particles
which respond to the passage of a gravitational wave.
We have seen in BHS-19 how a circle of free particles
oscillates between circular and elliptical forms as a
gravitational wave passes through. In the gravitational
wave interferometer, the two mirrors at the end of the
beam paths (M1 and M2 in Figure 1, and marked as
end mirrors in Figure 2) can be considered to be two
Figure 2: A schematic diagram of the Advanced LIGO
interferometer.
Image Credit: D.V. Martynov et al. Phys.Rev.D 93, 112004, 2016.
free particles on a circle. The distance of these mirrors
from the centre is equal, but becomes different as a
gravitational wave passes through. For this to happen,
the mirrors must be free, in the sense that no net
force, other than the gravitational force produced by
the wave, acts on them. If such forces exist, they
would dominate the extremely tiny effect produced. The
mirrors are suspended from above, so that in the vertical
direction the downwards gravitational force of the Earth
is balanced by the upwards tension in the suspension.
The mirrors can move freely in the horizontal direction
and the effect of a gravitational wave would be to change
the horizontal distances.
It is necessary to minimise the effects of any other
forces which would result in the unwanted horizontal
movement of the mirrors, which is considered to
be noise, since it can conceal the tiny effect due to
gravitational waves. That requires advanced mirror
isolation technology, which has evolved over the years.
In spite of the advanced technology used for noise
reduction, there is residual noise which can cause small
displacements in the mirror. There is seismic noise due
to ground vibrations or seismic disturbance caused by
human activities, winds, tidal motions in the ground
caused by the Sun and the Moon similar to tides in the
oceans, and ocean waves which can cause disturbance
even when the coastline is far away. Disturbances can
also be caused by slight heating of the mirrors and their
suspensions by the laser beams, and by mechanical loss
in the reflective coatings of the mirrors. This increases
the random motion of the particles in them, leading to
thermal noise. The effect of these sources of noise on
the sensitivity of the aLIGO detector has to be taken
into account.
The mirrors which constitute the free particles
have a diameter of 34 cm, a thickness of 20 cm, and weigh
40 kg each, the large mass helping to hold them still.
They are made of extremely pure and homogeneous
fused silica. The mirrors have very precisely defined
shapes which are accurate to better than 10⁻¹² m, which
is far greater precision than is required for mirrors used
in large optical telescopes. The surfaces of the mirrors
are so highly reflective that they absorb only about one
out of 2.3 million photons which reach the mirrors. Any
absorbed photons lead to heating of the mirror. Due to
the very high intensity of the light, in spite of the small
absorption, the heat produced is sufficient to produce a
slight distortion of the mirror surface. To avoid that, a
system is used, which monitors the shape of the surface
and makes the necessary corrections.
The entire LIGO interferometer is operated in
a high vacuum. The laser beams pass through two
steel tubes each of which is 4 km long and the mirrors
are enclosed by end stations. The whole evacuated
assembly has a volume of about 10 million litres. The
only larger evacuated volume than LIGO's is that of the Large
Hadron Collider at CERN in Switzerland, which was
used to discover the Higgs boson. The vacuum is
so good that the pressure of the residual gases in the
volume is only a trillionth of the air pressure at sea level.
The high vacuum is required for various reasons. The
molecules of air are in incessant motion due to their
heat energy. The molecules collide with the mirrors
to produce tiny motions in them. If the pressure is not
low enough, the jitter produced in the mirrors would
produce noise at an unacceptable level. When light
propagates through air, its direction of travel can be
slightly altered depending on the density of the air.
There are always small disturbances in air which can
cause the beam to randomly deviate from its straight
path, which changes slightly the length travelled by the
beam, adversely affecting the interference pattern. The
air can contain dust particles which scatter the beam
Figure 3: The sensitivity of the aLIGO detectors as a
function of frequency. See text for a description of the
figure.
Image Credit: B. P. Abbott et al. Phys. Rev. Letters 116, 131103, 2016.
from its straight line path. These effects are all very
small and would not matter at all in other situations.
But because of the very small changes produced by
gravitational waves the effects can become important in
comparison. The very high vacuum is needed to reduce
such effects to an acceptable level.
The sensitivity of the aLIGO detector to
gravitational waves and to noise in the system as a
function of the gravitational wave frequency is shown
in Fig. 3. The frequency at which the measurement is
made is shown on the horizontal axis, while a quantity
known as the strain noise is shown on the vertical
axis. This quantity is indicative of the ratio of the
net displacement produced in the two arms to the total
distance traversed by the beam in the multiple reflections
discussed above. The curve in dark red shows the strain
noise for the aLIGO detector at Hanford during the
first observing run O1 in 2015 during which the first
detection of a gravitational wave source was made by
aLIGO. The detection will be described in the next story.
The curve in light red is for the detector at Livingston.
The near coincidence of the two curves shows that the
sensitivity of the two detectors has nearly the same
dependence on the frequency.
The green curve shows the sensitivity of an earlier
version of the LIGO detector, while the blue and cyan
curves are the planned sensitivities of future versions
of the detector. A gravitational wave source can be
detected by aLIGO only if the strain produced by the
source is above the curves. How much confidence we
can place in the detection depends upon how much above
the curve the source is located.
Figure 4: The LIGO observatory in Hanford,
Washington State, USA. The two 4 km long arms are
seen. The central part of the detector is in the buildings
at the intersection of the two arms.
Therefore, the lower a
curve is on the diagram, the better the detector would
be for observing faint sources. Using this criterion, it
is seen that the aLIGO detectors are most sensitive in
the frequency range of 100-300 Hz. The strong lines at
various points along the red and green curves are due to
specific contributions to the noise from the suspensions,
from AC power lines and due to signals injected for
calibration purposes. The lines are accounted for in the
analysis of the data.
An aerial view of the LIGO facility at Hanford in
Washington State and a close-up of its Northern arm
are shown in Figure 4. The LIGO-India facility, to be
completed over the next several years, will be similar
in nature.
Further detail about the LIGO detectors and
observatory, and LIGO-India, can be found in the book
Gravitational Waves: A New Window to the Universe,
by Ajit Kembhavi & Pushpa Khare, Springer 2020.
In the next few stories we will describe some of the
gravitational wave sources detected by LIGO, all but
one of which were black hole binaries which merged
together to form a single black hole. The one exception
was a neutron star binary which again merged to form a
black hole.
I thank Dr. Joe Jacob for careful reading of the
story and suggesting corrections.
About the Author
Professor Ajit Kembhavi is an emeritus
professor at the Inter-University Centre for Astronomy
and Astrophysics (IUCAA), Pune, and is also the Principal
Investigator of the Pune Knowledge Cluster. He is a former
director of IUCAA and a former vice president of the
International Astronomical Union. In collaboration
with IUCAA, he pioneered astronomy outreach
activities from the late 80s to promote astronomy
research in Indian universities.
Is Every Crash a Firework? Observing
Starbursts in Interacting Galaxies
by Robin Thomas
airis4D, Vol.3, No.8, 2025
www.airis4d.com
2.1 Introduction: Galaxies in Motion
Galaxies evolve within a constantly changing
gravitational landscape. When two massive galaxies
pass sufficiently close to one another, mutual tidal
forces torque their stellar disks and, more importantly,
redistribute their gaseous reservoirs. Such interactions
can channel gas toward galactic centres, ignite nuclear
starbursts, or expel material into tidal bridges and tails
where it may cool and condense into off-disk star-
forming regions. However, observational surveys and
simulations increasingly demonstrate that the outcome
is not uniform: while some encounters trigger dramatic,
system-wide increases in star formation, others lead
to only modest enhancements—or even temporary
suppression—depending on encounter geometry, gas
fraction, orbital parameters, and, critically, the mass
ratio of the pair. Our study was designed to quantify this
diversity by examining both the local (kiloparsec-scale)
and global (galaxy-integrated) star-formation response
in a small but well-characterised sample of nearby
interacting systems.
To explore how different dynamical configurations
influence star formation, we targeted three nearby, nearly
face-on interacting pairs: NGC 2207/IC 2163, NGC
4017/4016 (ARP 305), and NGC 7753/7752 (ARP
86). These systems span a range of interaction stages
and morphologies—from near-grazing encounters
with pronounced ocular features to bridge–satellite
configurations—allowing us to isolate the role of
geometry and timing. Their proximity permits high
Figure 1: Colour composite images from the HiPS
survey and corresponding UVIT image for the sample
of galaxies are shown in the top and bottom panels,
respectively. The galaxy pairs in our sample are labelled
in panels (a), (b) and (c). The emission in FUV1 for
NGC 2207/IC 2163 and NGC 4016/4017 and in FUV2
for NGC 7752/7753 is shown in panels (d), (e) and
(f), respectively. The red bars in the bottom panels
represent an extent of 20 kpc at the distance to the
galaxy pairs. The foreground stars in the FOV were
confirmed and removed using Gaia DR3 catalogue
(Gaia Collaboration et al., 2021).
spatial resolution in the ultraviolet (UV), enabling us
to resolve individual star-forming clumps and compare
them directly with the underlying gas distribution
traced by neutral hydrogen (HI). By studying different
“snapshots” of the interaction sequence, we can extract
trends that would be obscured in a heterogeneous, poorly
resolved sample.
2.2 Observing in Two Languages:
Ultraviolet and Radio
Young, massive stellar populations emit strongly
in the far- and near-ultraviolet, making UV imaging an
excellent proxy for recent (≲100 Myr) star formation.
We exploited AstroSat/UVIT’s sub-arcsecond point
spread function (1.2–1.4″) to isolate discrete star-forming
knots across disks, bridges, and tidal debris.
After standard CCDLAB reduction and photometric
calibration using established zero-points, we derived
star formation rates (SFRs) and surface densities (
Σ
SFR
)
for each knot. To map the fuel reservoir and dynamical
response of the gas, we combined these UV maps with
archival or newly reduced HI 21-cm data from the VLA
and GMRT. Neutral hydrogen is both spatially extended
and dynamically sensitive to tidal perturbations; thus, its
column-density distribution provides a direct measure
of where gas has been compressed or displaced. The
juxtaposition of UV and HI maps therefore reveals not
only where stars are forming, but also how efficiently
the disturbed gas is being converted into stars.
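As a concrete illustration of this step, the sketch below shows how a UV flux measurement can be converted into an SFR using the widely quoted FUV calibration of Kennicutt (1998); the calibration and the example numbers are illustrative assumptions, not necessarily those adopted in our analysis.

```python
import numpy as np

def fuv_sfr(flux_jy, distance_mpc):
    """Convert an FUV flux density (Jy) at a given distance (Mpc) into a star
    formation rate using the Kennicutt (1998) calibration,
    SFR [M_sun/yr] = 1.4e-28 * L_nu [erg/s/Hz].
    Illustrative only; the study's exact calibration may differ."""
    d_cm = distance_mpc * 3.086e24                  # Mpc -> cm
    l_nu = flux_jy * 1e-23 * 4.0 * np.pi * d_cm**2  # Jy -> erg/s/cm^2/Hz -> erg/s/Hz
    return 1.4e-28 * l_nu

def sigma_sfr(sfr, area_kpc2):
    """SFR surface density of a knot in M_sun/yr/kpc^2."""
    return sfr / area_kpc2

# Hypothetical knot: 0.5 mJy in the FUV at a distance of 35 Mpc, area 1 kpc^2.
sfr = fuv_sfr(5e-4, 35.0)
print(f"SFR ~ {sfr:.2e} M_sun/yr, Sigma_SFR ~ {sigma_sfr(sfr, 1.0):.2e} M_sun/yr/kpc^2")
```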
2.3 Trends in the local star formation
2.3.1 NGC 2207/IC 2163: Triggered Star
Formation and a New TDG Candidate
In NGC 2207/IC 2163, the UVIT data reveal
a broad range of Σ_SFR values (roughly 10^−3–10^−2 M_⊙ yr^−1 kpc^−2) distributed both along the primary
interaction interface and on the far side of the main
disk, indicating that tidal perturbations propagate across
the system rather than being confined to the direct
overlap region. A particularly notable result is the
first detection of ongoing star formation within the
northwestern gas complex N2207-NW1. Its coincident
HI column density and systemic velocity make it a
strong candidate for a tidal dwarf galaxy (TDG), a
self-gravitating system formed from tidally stripped,
chemically enriched gas rather than from primordial
collapse. This detection underscores the capacity of
interactions not merely to rearrange existing stellar
material, but to seed the formation of entirely new
galactic entities in debris fields.
2.3.2 ARP 305 (NGC 4017/4016): Distributed, Moderate Enhancement
The ARP 305 system exhibits widespread but moderate star formation distributed over spiral arms, tidal debris, and bridges. The spatial distribution suggests
a significant prior pericentric passage that stirred the
gas on large scales yet did not funnel it into a single
dominant nuclear starburst. Instead, the interaction
appears to have created numerous local pockets of
compression suitable for star formation. This pattern is
consistent with scenarios in which the strongest tidal
torques have subsided, leaving behind a “fossil record”
of triggered clumps that continue to form stars as long
as sufficient gas remains locally unstable.
2.3.3 ARP 86 (NGC 7753/7752): A Gas-Rich
Satellite and Another Debris-Born
Candidate
In ARP 86, the smaller companion NGC 7752
is exceptionally gas-rich, with an HI-to-total mass
fraction around 0.17, which is substantially higher
than typical values reported for classical starbursts.
UV imaging shows moderate to high Σ_SFR within
the satellite, whereas the main galaxy and connecting
bridge exhibit more modest levels. Embedded
within an HI bridge of column density exceeding
10^21 cm^−2 lies 2MASXJ23470758+2926531, which
also hosts detectable star formation. Its location and
properties make it another compelling TDG candidate.
Collectively, these findings illustrate how tidal bridges
can serve as nurseries for new dwarf systems, provided
that the displaced gas remains sufficiently dense and
self-gravitating.
The identification of active star formation in
debris-bound condensations such as N2207-NW1 and
2MASXJ23470758+2926531 strengthens the evidence
that interactions can spawn long-lived, self-gravitating
tidal dwarf galaxies. TDGs inherit chemically enriched
gas from their progenitors. Their prevalence has
implications for the metallicity distribution in group
environments and the census of dwarf populations
in the nearby universe. By identifying ongoing star formation in these structures, our work adds to the growing recognition that galaxy interactions can both dismantle and create galaxies in the same event.
Figure 2: The SFMS of galaxies is presented. Circles represent the xCOLD GASS sample (Saintonge et al., 2017), colour mapped to represent their molecular gas fraction. The main sequence obtained from Dave (2008) is represented by blue dotted lines, with the red dotted line representing a 0.3 dex scatter. The main galaxies in our sample are also colour mapped to represent the gas fraction. Due to the absence of relevant estimates of the molecular gas mass in NGC 7753, its molecular gas fraction is not represented. We observe an enhancement in the global SFR in our sample of main galaxies.
2.4 Global Context: Placement on the Star-Forming Main Sequence
To assess the global impact of these interactions,
we located each primary galaxy on the star-forming
main sequence (SFMS), the empirical SFR–M_⋆ relation
for star-forming disks. Using literature stellar masses
and SFRs from xCOLDGASS (Saintonge et al.,
2017), we find that all three main galaxies lie above
the canonical SFMS (e.g., Dave (2008)), indicating
statistically significant but moderate enhancements in
their integrated SFRs relative to isolated counterparts.
Such moderate offsets are increasingly recognised as
characteristic of many interacting systems, highlighting
the need to move beyond a binary “starburst vs.
quiescent” framework when interpreting galaxy-galaxy
encounters.
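One simple way to express this offset quantitatively is ΔMS = log SFR − log SFR_MS(M_⋆). The sketch below implements this with an illustrative linear main-sequence fit; the slope and normalisation are placeholders, not the Dave (2008) relation used in Figure 2.

```python
def delta_ms(log_mstar, log_sfr, slope=0.65, norm=-6.9):
    """Offset (in dex) from an assumed linear star-forming main sequence,
    log SFR_MS = slope * log M_star + norm.
    The coefficients are illustrative placeholders only."""
    log_sfr_ms = slope * log_mstar + norm
    return log_sfr - log_sfr_ms

# A galaxy with log M_star = 10.5 and log SFR = 0.4 sits ~0.5 dex above this
# illustrative main sequence: enhanced relative to the assumed relation.
print(f"Delta MS = {delta_ms(10.5, 0.4):.2f} dex")
```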
To quantify the physical parameters that might govern the observed enhancement, we study the star-formation enhancement as a function of the stellar pair mass ratio and the pair separation.
2.4.1 Pair mass ratio as a metric
By quantifying SFR enhancement as the ratio of each galaxy's SFR to that of a mass-, morphology- and distance-matched control sample, we isolate the effect
of interaction parameters. A clear inverse correlation
emerges between enhancement and pair mass ratio:
encounters between near-equal-mass galaxies maximise
tidal torques and gas inflows, whereas highly unequal
pairs often generate only weak responses in the more
massive member. This empirical result corroborates
hydrodynamic simulations (e.g., Cox et al., 2008;
Hani et al., 2020), which predict that mass symmetry
enhances dissipation and central gas accumulation.
Consequently, mass ratio should be treated as a first-
order predictor when estimating the star-forming impact
of a given interaction.
2.4.2 Pair separation as a limiting parameter
In contrast, projected pair separation shows no
statistically significant linear correlation with SFR
enhancement (Pearson r ≈ 0.05) across our systems
and in the larger comparison sample. While there
is an obvious physical limit—galaxies too widely
separated cannot tidally influence one another—the
absence of a tight trend within interacting pairs implies
that separation alone cannot predict the magnitude
of the response. Projection effects, differing orbital
phases, encounter velocities, and time lags between
gas compression and observable star formation all
contribute to the scatter. Thus, separation should be
interpreted as a contextual parameter, not a deterministic
one.
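To make the statistical test behind these statements concrete, the snippet below computes a Pearson correlation coefficient in the same spirit as the r ≈ 0.05 quoted above; the arrays are placeholder values standing in for measured separations and enhancements, not the actual sample.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: projected pair separation (kpc) and SFR enhancement
# (ratio to matched controls). These are NOT the values from the study.
separation_kpc = np.array([12.0, 25.0, 40.0, 55.0, 70.0, 90.0])
sfr_enhancement = np.array([2.1, 1.4, 1.9, 1.2, 1.8, 1.5])

r, p_value = pearsonr(separation_kpc, sfr_enhancement)
print(f"Pearson r = {r:.2f}, p = {p_value:.2f}")
# A |r| near zero with a large p-value indicates no significant linear trend,
# which is the behaviour reported for separation versus enhancement.
```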
2.5 Limitation of the study
Several caveats accompany our analysis. Projected
separations may underestimate true three-dimensional
distances, diluting any intrinsic trends with tidal
strength. UV-based SFRs are sensitive to dust
attenuation and recent star-formation histories. Even though the UV fluxes were corrected for extinction, residual uncertainties can remain without full spectral energy distribution fitting. Finally, although our three systems are illustrative, they cannot capture the full diversity of interaction outcomes, hence the necessity of the larger statistical cross-check.
Figure 3: (Left) SFR enhancement studied as a function of the pair mass ratio. We find an inverse correlation between the SFR enhancement and the mass ratio. (Right) SFR enhancement studied as a function of the pair separation. Star-formation enhancements for interacting galaxies from Knapen et al. (2015) are also plotted as a function of pair separation. We find an absence of correlation, with a Pearson coefficient of 0.05.
2.6 Future scope
A natural next step is to couple our observational
constraints with tailored N-body/hydrodynamical
simulations that recover each system’s orbital history
and gas inflow patterns. High-resolution CO mapping
with ALMA or NOEMA would directly link dense
molecular gas to the UV-bright knots, clarifying
the efficiency of the HI → H_2 conversion into stars in disturbed environments. Integral-field spectroscopy
can disentangle photoionization from shock excitation
in extraplanar regions and bridges, providing a more
complete diagnostic of the physical processes at work.
Finally, deeper, wider-field UV and radio observations
will reveal fainter tidal structures and nascent TDGs,
enabling a more complete census of interaction-induced
substructure.
2.7 Conclusion
Galaxy–galaxy interactions influence the morphology, gas dynamics, and star-formation properties of the galaxies involved, all of which depend sensitively on the physical context of each encounter. In the three
nearby pairs examined here, we find that tidal forces
redistribute gas and ignite patchy, kiloparsec-scale
bursts of star formation across disks, bridges, and
debris, while the galaxies' integrated SFRs rise only moderately above the star-forming main sequence.
Among the parameters we probed, the stellar
mass ratio emerges as the most predictive lever:
encounters between near-equal-mass partners yield
stronger tidal torques, deeper gas inflows, and larger
SFR enhancements than highly unequal pairs. By
contrast, present-day projected separation shows no
tight correlation with enhancement, underscoring that
proximity alone is a blunt proxy once a system is
already interacting. These empirical trends corroborate
hydrodynamic simulations and argue for including mass
symmetry, orbital geometry, and gas fraction as first-
order inputs in models of interaction-triggered activity.
Taken together, these results advocate for a multi-
scale, multi-phase approach to interaction studies.
Future work that combines tailored simulations with
molecular gas and integral-field spectroscopy will be
essential to reconstruct the dynamical histories of such
systems and to quantify, with greater physical fidelity,
how often and how efficiently interactions reshape the
star-forming lives of galaxies.
Correspondence: robinthomas546@gmail.com
Reference: Thomas et al. (2024), MNRAS, 534, 1902
Data Sources: AstroSat UVIT; VLA; uGMRT
References
Cox, T. J., Jonsson, P., Somerville, R. S., Primack, J. R., Dekel, A., 2008, MNRAS, 384, 386
Dave, R., 2008, MNRAS, 385, 147
Gaia Collaboration et al., 2021, A&A, 649, A1
Hani, M. H., Gosain, H., Ellison, S. L., Patton, D. R., Torrey, P., 2020, MNRAS, 493, 3716
Knapen, J. H., Cisternas, M., Querejeta, M., 2015, MNRAS, 454, 1742
Saintonge, A., et al., 2017, ApJS, 233, 22
About the Author
Dr Robin Thomas is currently a Project Scientist at the Indian Institute of Technology Kanpur. He completed his
PhD in astrophysics at CHRIST University, Bangalore,
with a focus on the evolution of galaxies. With a
background in both observational and simulation-based
astronomy, he brings a multidisciplinary approach to his
research. He has been a core member of CosmicVarta,
a science communication platform led by PhD scholars,
since its inception. Through this initiative, he has actively
contributed to making astronomy research accessible to
the general public.
Understanding the Johnson Magnitude
System in Astronomy
by Sindhu G
airis4D, Vol.3, No.8, 2025
www.airis4d.com
3.1 Introduction
In the field of observational astronomy, the precise
measurement of stellar brightness is fundamental to
our understanding of the universe. From determining
stellar distances and compositions to studying the
evolution of galaxies, photometric measurements
underpin nearly every branch of astrophysical research.
Over the years, various magnitude systems have been
developed to quantify the brightness of stars and other
celestial objects. Among these, the Johnson-Morgan
photometric system, more commonly referred to as the
Johnson system, stands as one of the most influential
and widely adopted standards.
Developed in the early 1950s by Harold Lester
Johnson and William Wilson Morgan, the Johnson
system was the first comprehensive photoelectric
photometric system. It replaced older visual methods
with more accurate and reproducible measurements
across well-defined filter bands. The system
standardized the use of color indices and helped lay
the groundwork for modern multi-band photometric
surveys. Even today, many modern photometric systems
are calibrated using Johnson's original UBV filters or
derived from them.
3.2 The Historical Background and
Evolution
The concept of stellar magnitude dates back
to antiquity, when Greek astronomer Hipparchus
introduced a six-magnitude scale to classify the
brightness of visible stars. This logarithmic idea was
formalized in the 19th century when Norman Pogson
defined a first magnitude star as 100 times brighter than
a sixth magnitude star. However, this classification was
based on visual observation and lacked instrumental
precision.
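Pogson's choice amounts to the relation m1 − m2 = −2.5 log10(F1/F2), so that a factor of 100 in flux corresponds to exactly five magnitudes. A minimal numerical check, in Python:

```python
import math

def delta_mag(flux_ratio):
    """Pogson's relation: magnitude difference m1 - m2 = -2.5 log10(F1/F2)."""
    return -2.5 * math.log10(flux_ratio)

# A star 100 times brighter than another is brighter by exactly 5 magnitudes
# (its magnitude is numerically smaller by 5), as in Pogson's definition.
print(delta_mag(100.0))   # -> -5.0
```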
With the advent of photoelectric photometry in
the early 20th century, astronomers gained tools to
measure brightness with unprecedented accuracy. It
was in this context that Harold Johnson and William
Morgan developed the UBV photometric system. In
1953, they published a landmark paper introducing a
system based on three filters: U (ultraviolet), B (blue), and V (visual), centered on specific wavelength bands. This was soon extended to include R (red) and I
(infrared) filters.
The Johnson system offered a major advancement
in standardizing observations between observatories,
enabling large-scale comparison of stellar properties.
For the first time, astronomers had a reproducible and
calibrated method for measuring and comparing stellar
magnitudes across different wavelength regions.
3.3 Structure of the Johnson
Photometric System
The original Johnson system is characterized by a
set of broadband filters with well-defined wavelength
ranges. These filters are designed to correspond
to different parts of the electromagnetic spectrum,
capturing the continuum light of stars rather than
individual spectral lines. The most commonly used
filters include:
Filter   Name             Central λ (nm)   Bandwidth (nm)
U        Ultraviolet      360              60
B        Blue             440              100
V        Visual (Green)   550              90
R        Red              700              150
I        Infrared         900              150
Table 3.1: Broadband filters in the Johnson photometric system.
The UBV system was the earliest and remains the
most frequently cited. The color indices derived from
this system, such as (B − V) and (U − B), provide
valuable information about the temperature, reddening,
and spectral classification of stars.
(B–V) Index: A measure of the star's temperature; hotter stars are bluer and have lower (B − V) values.
(U–B) Index: Sensitive to both temperature and interstellar reddening.
These indices are differential magnitudes, calculated by subtracting the magnitudes measured in two filters:

(B − V) = m_B − m_V    (3.1)

where m_B and m_V are the magnitudes in the B and V bands, respectively.
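A minimal sketch of Equation 3.1 in code; the solar magnitudes used for illustration (M_B ≈ 5.48, M_V ≈ 4.83) are approximate textbook values, not measurements from this article.

```python
def color_index(m_b, m_v):
    """Johnson (B - V) colour index: B-band magnitude minus V-band magnitude (Eq. 3.1)."""
    return m_b - m_v

# Approximate absolute magnitudes of the Sun give (B - V) ~ 0.65,
# the G-type value quoted in Section 3.4.1.
print(color_index(5.48, 4.83))
```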
3.4 Applications of Johnson
Magnitudes in Astrophysics
3.4.1 Stellar Classification and Temperature
Determination
The Johnson system plays a critical role in spectral
classification of stars. The (B–V) color index is tightly correlated with the effective temperature of a star: hotter stars have smaller (bluer) values.
This relationship enables astronomers to classify stars
along the Hertzsprung–Russell diagram, distinguishing
between main-sequence stars, giants, and white dwarfs.
For example:
O-type stars (very hot): (B − V) ≈ −0.3
Sun-like stars (G-type): (B − V) ≈ 0.65
Cool red stars (M-type): (B − V) > 1.5
3.4.2 Extinction and Reddening Studies
Starlight traveling through interstellar dust is
absorbed and scattered, causing interstellar extinction
and a phenomenon known as reddening. Since dust
affects shorter wavelengths more strongly, blue light
is absorbed more than red light, making stars appear
redder than they are.
By comparing observed color indices with intrinsic
(theoretical or known) indices, astronomers can estimate
the amount of dust between the star and Earth. The
color excess E(B − V) is calculated as:

E(B − V) = (B − V)_observed − (B − V)_intrinsic    (3.2)

This value helps derive the visual extinction A_V, often using:

A_V = R_V × E(B − V)    (3.3)

with R_V ≈ 3.1 for the Milky Way.
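A short sketch of Equations 3.2 and 3.3; the observed and intrinsic colours below are invented illustrative numbers.

```python
def color_excess(bv_observed, bv_intrinsic):
    """Colour excess E(B - V) = (B - V)_observed - (B - V)_intrinsic (Eq. 3.2)."""
    return bv_observed - bv_intrinsic

def visual_extinction(e_bv, r_v=3.1):
    """Visual extinction A_V = R_V * E(B - V) (Eq. 3.3), with the Milky Way
    average R_V ~ 3.1 as the default."""
    return r_v * e_bv

# Illustrative case: observed (B - V) = 0.95 for a star whose spectral type
# implies an intrinsic colour of 0.65, i.e. E(B - V) = 0.30 and A_V ~ 0.93 mag.
e_bv = color_excess(0.95, 0.65)
print(e_bv, visual_extinction(e_bv))
```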
3.4.3 Distance Measurements
Johnson magnitudes also support the
determination of distances through the distance
modulus:
m − M = 5 log_10(d) − 5 + A_V    (3.4)

where:
m = apparent V magnitude
M = absolute V magnitude
d = distance in parsecs
A_V = extinction in the V band
Thus, photometry in the Johnson system allows
astronomers to estimate distances to stars and stellar
clusters, provided extinction and intrinsic luminosity
are known.
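Inverting Equation 3.4 for the distance gives d = 10^((m − M − A_V + 5)/5) parsecs; a brief sketch with invented example values:

```python
def distance_pc(m_apparent, m_absolute, a_v=0.0):
    """Distance in parsecs from the distance modulus m - M = 5 log10(d) - 5 + A_V (Eq. 3.4)."""
    return 10.0 ** ((m_apparent - m_absolute - a_v + 5.0) / 5.0)

# Illustrative star: apparent V = 10.0, absolute M_V = 5.0, A_V = 0.3 mag.
# Ignoring extinction would give 1000 pc; correcting for it gives ~870 pc.
print(distance_pc(10.0, 5.0, a_v=0.3))
print(distance_pc(10.0, 5.0))
```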
23
3.5 Comparison with Other Photometric Systems
3.4.4 Color–Magnitude Diagrams (CMDs)
Color–Magnitude Diagrams (CMDs) are essential
tools in stellar population studies. Plotting the
V magnitude against the (B − V) color, for example,
reveals the distribution of stars by brightness and
temperature. These diagrams are used to study:
Open and globular clusters,
Star formation history, and
Stellar evolution stages.
Because of the Johnson system’s widespread
adoption, a vast library of CMDs exists throughout
the literature for objects across the Galaxy and the
Local Group.
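For readers who want to reproduce such a plot, the sketch below builds a CMD from synthetic photometry; the catalogue is randomly generated for illustration and stands in for real cluster data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic (B - V) colours and V magnitudes for ~300 stars, arranged along a
# crude main-sequence-like locus. These are random numbers, not real photometry.
rng = np.random.default_rng(42)
b_minus_v = rng.uniform(-0.2, 1.6, 300)
v_mag = 4.0 * b_minus_v + 10.0 + rng.normal(0.0, 0.4, 300)

plt.scatter(b_minus_v, v_mag, s=5)
plt.gca().invert_yaxis()              # brighter stars (smaller magnitudes) at the top
plt.xlabel("(B - V)")
plt.ylabel("V magnitude")
plt.title("Colour-magnitude diagram (synthetic data)")
plt.show()
```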
3.5 Comparison with Other
Photometric Systems
While the Johnson system was foundational, newer
photometric systems have been developed to suit
specific instruments and scientific objectives. Notable
examples include:
Cousins System (RI): Modified the redder filters (R and I) to better match CCD detector response.
Strömgren System: Utilizes narrow-band filters for high-precision determination of stellar parameters such as temperature, gravity, and metallicity.
Sloan Digital Sky Survey (SDSS): Employs five filters (u, g, r, i, z) optimized for modern CCDs and large-area sky surveys.
Despite these advancements, the Johnson system
remains highly relevant. Many space- and ground-
based observations still rely on transformations between
modern photometric systems and Johnson UBVRI
magnitudes to maintain consistency and comparability
with historical datasets.
3.6 Limitations and Calibration
Challenges
Despite its strengths, the Johnson system has
several limitations:
Filter Variability: Different observatories may
use slightly different filter transmission curves,
introducing systematic errors in the measured
magnitudes.
CCD Incompatibility: Originally designed for
photoelectric photometers, Johnson filters are
not ideally matched to the spectral response of
modern CCD detectors.
Atmospheric Dependence: UV measurements,
particularly in the U band, are highly sensitive
to atmospheric extinction and therefore require
careful calibration.
To mitigate these issues, astronomers have
developed transformation equations and networks of
secondary standard stars to cross-calibrate data obtained
from different instruments and photometric systems.
3.7 Legacy and Continuing Use
The Johnson photometric system’s legacy is visible
in almost every corner of stellar astrophysics. Its color
indices are still quoted in the latest catalogues, from
Gaia to 2MASS, and its filters remain a reference
point for calibration.
Moreover, the system provides a historical link
between older observations and modern surveys.
Many photometric databases, such as Simbad, Vizier,
and AAVSO, list Johnson magnitudes as baseline
photometric data.
In education, the Johnson system is often the
first photometric system taught to students because
of its clarity, simplicity, and central role in stellar
astrophysics.
3.8 Conclusion
The development of the Johnson magnitude system
represented a transformative moment in astronomical
measurement. By providing a standardized, repeatable
method for quantifying stellar brightness across multiple
wavelengths, Johnson and Morgan ushered in a new
era of precision photometry. The system’s influence
continues to shape the way astronomers measure,
classify, and understand stars and galaxies.
24
3.8 Conclusion
While newer systems have evolved to meet the
demands of digital detectors and large-scale surveys, the
Johnson UBVRI filters remain deeply embedded in the
fabric of observational astronomy. For both historical
continuity and practical application, the Johnson system
continues to serve as a vital photometric standard, a testament to its enduring utility in unveiling the cosmos.
References:
Fundamental Stellar Photometry for Standards of Spectral Type on the Revised System of the Yerkes Spectral Atlas
The Measurement of Starlight: Two Centuries of Astronomical Photometry
Infrared Stellar Photometry
UBV Photoelectric Photometry Catalogue (1986): I. The Original Data
About the Author
Sindhu G is a research scholar in the
Department of Physics at St. Thomas College,
Kozhencherry. She is doing research in Astronomy
& Astrophysics, with her work primarily focusing
on the classification of variable stars using different
machine learning algorithms. She is also involved
in period prediction for various types of variable
stars—especially eclipsing binaries—and in the study
of optical counterparts of X-ray binaries.
Part III
Biosciences
Protein Folding and its Vital Role in Biological
Function
by Geetha Paul
airis4D, Vol.3, No.8, 2025
www.airis4d.com
1.1 Introduction
Proteins are the molecular workhorses of all living
cells, responsible for various biological tasks, from
catalysing metabolic reactions to providing structural
support and transmitting signals. Yet, the functionality
of these remarkable molecules depends not merely on
their chemical composition, but critically on how they
fold into unique three-dimensional shapes. Protein
folding is the intricate process by which a chain of
amino acids, synthesised in a linear sequence according
to the instructions encoded in DNA, adopts a specific
conformation that allows it to perform its designated
function. This process is essential for life: every
movement, thought, and heartbeat relies on properly folded proteins within the body.
The significance of protein folding is underscored
by its central role in cellular biology. The journey from
gene to functional protein is more than a matter of
assembling the correct amino acids; it enables these
polymers to self-assemble into complex architectures,
precisely tailored for tasks as diverse as carrying oxygen,
repairing DNA, or defending against pathogens. This
intricate folding occurs under the crowded and dynamic
conditions typically found within cells. Despite the
potential for error, most proteins consistently fold into
their functional forms within seconds or minutes, a feat
made possible by the delicate balance of chemical and
physical forces acting along the polypeptide chain.
Disruptions in protein folding can have dramatic
consequences. Even a single error in this process
can transform a helpful protein into a harmful entity,
leading to the formation of aggregates that are hallmarks
of diseases like Alzheimer's, Parkinson's, and cystic
fibrosis. Understanding how proteins fold, and what
happens when folding goes awry, is one of the grand
challenges of modern biology. Recent innovations in
research, ranging from uncovering the fundamental
principles that direct protein folding to advancing
computational approaches for structure prediction, are
having far-reaching impacts. These breakthroughs
are transforming medicine, enabling the development
of more precise drugs, and accelerating progress in
biotechnology, establishing protein folding as both a
cornerstone and a cutting-edge field in modern science.
1.2 Translation and Primary
Structure Formation
Protein folding begins at the molecular level by
synthesising a polypeptide chain, a process called
translation. During this step, ribosomes read the
genetic code carried by messenger RNA (mRNA) and
link amino acids in a precise order, producing the
linear primary structure of the protein. This sequence
determines the intrinsic chemical properties of the future
protein by dictating how amino acids with different side chains (hydrophilic, hydrophobic, acidic, or basic) will interact. Remarkably, the primary sequence
alone encodes all the information required for the
protein to adopt its native, functional structure. The
growing protein chain may start folding into preliminary
structures even before synthesis is complete, a process
known as co-translational folding.
1.3 Folding to Secondary and Tertiary Structure: The Search for Native Conformation
As the newly synthesised chain is released from the
ribosome, it undergoes rapid structural transitions. The
first level is the formation of local secondary structures,
alpha helices and beta sheets, stabilised by hydrogen
bonding patterns within the backbone. These early
structures form rapidly, providing a framework for
further folding.
The polypeptide chain then experiences a process
called hydrophobic collapse, where water-fearing
(hydrophobic) residues tuck inside, away from the
surrounding aqueous environment, leading to a compact
“molten globule” state. This intermediate is more
ordered than the unfolded chain but still flexible.
Folding continues as further interactions occur between
distant regions of the chain, such as hydrophobic
contacts, salt bridges, disulfide bonds, and van der Waals
forces, driving the conversion into the tertiary structure,
which is the unique, three-dimensional form required for
biological activity. Proteins usually reach their functional state by traversing an “energy landscape”, avoiding misfolded intermediates or kinetic traps.
Some proteins, especially larger or more complex
ones, require assistance to fold correctly. Molecular
chaperones act as folding facilitators, preventing
improper contact and aggregation, particularly under
conditions prone to causing errors, such as heat shock
or cellular stress. Chaperones do not determine the final
structure but provide an environment in which correct
folding can occur efficiently.
1.4 Quality Control, Quaternary
Structure Assembly, and
Functional Verification
Once the tertiary structure is achieved, some
proteins interact with other polypeptide chains or
subunits, assembling into quaternary structures: complexes that function as multimeric machines (e.g., haemoglobin, which consists of four subunits).
Stability and correct assembly are ensured by the same
types of non-covalent interactions that govern earlier
folding stages.
Cellular machinery constantly monitors proteins
for incorrect or incomplete folding. Misfolded proteins
are recognised and tagged for degradation by quality-
control systems such as the ubiquitin-proteasome
pathway or autophagy. This process helps prevent the
accumulation of potentially toxic protein aggregates,
which are implicated in neurodegenerative and systemic
amyloid diseases if allowed to persist.
Thus, the journey from a linear amino acid
sequence to a finely tuned functional protein is a
tightly regulated, multi-step process essential to cellular
function and survival.
Protein aggregates are a hallmark of several protein-misfolding diseases, such as Alzheimer's, Parkinson's, and cystic fibrosis, and play a central role in their pathology. These aggregates typically consist of misfolded proteins that have adopted abnormal conformations rich in β-sheet structures. This
conformation exposes hydrophobic regions normally
buried inside the protein, promoting oligomerisation
and fibril formation, which leads to insoluble deposits
in cells or extracellular spaces.
In Alzheimer's disease, protein aggregates include extracellular amyloid plaques formed mainly from β-amyloid peptides and intracellular neurofibrillary tangles composed of hyperphosphorylated tau protein. Parkinson's disease is characterised by Lewy bodies, intracellular aggregates primarily of α-synuclein protein.
Similarly, cystic fibrosis involves misfolding and
aggregation of the CFTR protein, affecting its function.
Figure 1: Competing reactions of protein folding and aggregation. (Image courtesy: https://doi.org/10.1038/nature10317)
Figure 2: Protein aggregation and its affecting mechanisms from molecule to molecule.
1.5 Mechanisms by Which Protein
Aggregates Disrupt Cellular
Function
Toxicity to neurons: Aggregates interfere
with cellular processes, causing oxidative stress,
mitochondrial dysfunction, and inflammation,
ultimately leading to cell death.
Impaired protein clearance: Dysfunction of
cellular quality control systems, such as the ubiquitin-
proteasome system and autophagy-lysosomal pathway,
contributes to the accumulation of aggregates. The
aggregated proteins can further impair these clearance
pathways, creating a vicious cycle.
Seeding and spreading: Aggregates can propagate
by inducing misfolding of normal proteins in
neighbouring cells, contributing to disease progression.
The causes of aggregation are multifactorial,
including genetic mutations, environmental stressors,
ageing, and failure of molecular chaperones and quality
control machinery. Deficiencies in protein homeostasis
(proteostasis) exacerbate the accumulation of toxic
aggregates, which are pathogenic in these diseases.
Understanding these aggregates is critical, as
their formation marks the onset and progression of
neurodegeneration and is a primary target for therapeutic
intervention.
References
1. Dobson, C. M. (2003). Protein folding and misfolding. Nature, 426(6968), 884–890. https://www.nature.com/articles/nature02261
2. Onuchic, J. N., & Wolynes, P. G. (2004). Theory of protein folding. Current Opinion in Structural Biology, 14(1), 70–75.
3. https://www.annualreviews.org/doi/10.1146/annurev.biophys.35.040405.102117
4. Hartl, F. U., Bracher, A., & Hayer-Hartl, M. (2011). Molecular chaperones in protein folding and proteostasis. Nature, 475(7356), 324–332. https://www.nature.com/articles/nature10317
5. Chiti, F., & Dobson, C. M. (2017). Protein misfolding, amyloid formation, and human disease: A summary of progress over the last decade. Annual Review of Biochemistry, 86, 27–68. https://www.annualreviews.org/doi/10.1146/annurev-biochem-061516-045115
About the Author
Geetha Paul is one of the directors of
airis4D. She leads the Biosciences Division. Her
research interests extend from Cell & Molecular
Biology to Environmental Sciences, Odonatology, and
Aquatic Biology.
About airis4D
Artificial Intelligence Research and Intelligent Systems (airis4D) is an AI and Bio-sciences Research Centre.
The Centre aims to create new knowledge in the field of Space Science, Astronomy, Robotics, Agri Science,
Industry, and Biodiversity to bring Progress and Plenitude to the People and the Planet.
Vision
Humanity is in the 4th Industrial Revolution era, which operates on a cyber-physical production system. Cutting-
edge research and development in science and technology to create new knowledge and skills has become the key to
the new world economy. Most of the resources for this goal can be harnessed by integrating biological systems
with intelligent computing systems offered by AI. The future survival of humans, animals, and the ecosystem
depends on how efficiently the realities and resources are responsibly used for abundance and wellness. Artificial Intelligence Research and Intelligent Systems pursues this vision and looks for the best actions that ensure an
abundant environment and ecosystem for the planet and the people.
Mission Statement
The 4D in airis4D represents the mission to Dream, Design, Develop, and Deploy Knowledge with the fire of
commitment and dedication towards humanity and the ecosystem.
Dream
To promote the unlimited human potential to dream the impossible.
Design
To nurture the human capacity to articulate a dream and logically realise it.
Develop
To assist talents in materialising a design into a product, a service, or knowledge that benefits the community
and the planet.
Deploy
To realise and educate humanity that a knowledge that is not deployed makes no difference by its absence.
Campus
Situated on a lush green village campus in Thelliyoor, Kerala, India, airis4D was established under the auspices of SEED Foundation (Susthiratha, Environment, Education Development Foundation), a not-for-profit company for promoting Education, Research, Engineering, Biology, Development, etc.
The whole campus is powered by solar energy and has a rain-harvesting facility that provides sufficient water supply for up to three months of drought. The computing facility on the campus is accessible from anywhere through dedicated optical fibre internet connectivity, 24×7.
There is a freshwater stream that originates from the nearby hills and flows through the middle of the campus.
The campus is a noted habitat for the biodiversity of tropical fauna and flora. airis4D carries out periodic and
systematic water quality and species diversity surveys in the region to ensure its richness. It is our pride that the
site has consistently been environment-friendly and rich in biodiversity. airis4D also grows fruit plants that can feed birds and provides water bodies to help them survive the drought.