Causes and Effects of Climate Change

Reading Time: 15 minutes

The world is going through a period of global warming unlike any before. Changes in rain and snow patterns, rising sea levels, and more frequent and intense droughts, wildfires, storms, tornadoes, and hurricanes are all effects of global warming. These effects, now plainly visible, grow more severe every year and are likely to reshape our lives and the lives of our children and grandchildren. Climate change is one of the biggest threats humanity faces today.

The greenhouse effect is the main factor contributing to the planet’s warming. Feedback mechanisms, such as increased evaporation of water from the oceans and the loss of albedo as polar ice sheets shrink, make the situation worse, leading to further warming and possibly, in the not-too-distant future, runaway warming. In this article, we discuss the causes of climate change (mainly the greenhouse effect) and some of its impacts.

Greenhouse Effect

The greenhouse effect was first proposed by Joseph Fourier in the 1820s. While investigating the origins of the historic glaciers and ice sheets that once covered much of Europe, he reasoned that something in the earth’s atmosphere must regulate the temperature at its surface. Decades later, John Tyndall took up Fourier’s idea and, using an experimental setup devised by Macedonio Melloni, showed that CO2 absorbs far more heat than most other gases. This supported Fourier’s hypothesis and identified CO2 as the atmospheric component he had been looking for. Many researchers later tried to measure CO2 and warn the world about its rising concentration, but it was only in the 1960s that C.D. Keeling’s measurements showed that atmospheric CO2 was rising quickly and that anthropogenic activities were to blame.

The greenhouse effect of water vapour is significantly greater than that of carbon dioxide. The amount of water vapour in the air is also roughly a hundred times greater than that of CO2; as a result, water vapour is responsible for more than 60% of the greenhouse warming effect. However, temperature determines how much water vapour the air can hold. When the concentration of CO2 rises, the global temperature rises by only a small amount, but that is enough to evaporate more water from the oceans into the air. This feedback mechanism has the biggest impact on the world’s temperature. In effect, the amount of water vapour in the atmosphere is controlled by the concentration of CO2, which in turn sets the global average surface temperature. Indeed, without greenhouse gases in the atmosphere, the planet’s surface would be about 33°C colder than it is now.

The sun radiates energy onto the earth at wavelengths ranging from about 0.3 to 5 μm. This energy heats the atmosphere and everything on Earth’s surface. At night, much of this heat is radiated back into space, but at longer wavelengths, in the infrared range from about 4 to 50 μm. According to Planck’s law of blackbody radiation, the temperature of a body determines the spectrum of the radiation it emits. As this outgoing energy leaves the Earth, it heats the molecules of greenhouse gases (such as H2O, CO2, and CH4) in the air. Consider CO2 and H2O as examples. The heating occurs because the outgoing infrared radiation resonates with the natural vibrational frequencies of the carbon-oxygen bonds in CO2 (the asymmetric stretching mode at 4.26 μm and the bending mode at 14.99 μm) and the oxygen-hydrogen bonds in H2O. The CO2 and H2O molecules heat up as their bond vibrations are amplified, and through collisions they transfer this energy to the other molecules in the atmosphere (N2, O2), keeping the Earth’s temperature relatively steady. The O-O bond in oxygen and the N-N bond in nitrogen have vibrational frequencies that do not match the outgoing radiation, so these molecules are largely unaffected by the infrared radiation leaving Earth at night.
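These two wavelength bands follow directly from blackbody physics. As a quick sanity check, Wien’s displacement law puts the emission peak at λ = b/T; taking roughly 5,800 K for the Sun’s surface and 288 K for Earth’s (approximate values assumed for illustration):

```python
# Wien's displacement law: a blackbody's emission peaks at lambda = b / T.
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

T_sun = 5800.0      # approximate solar surface temperature, K
T_earth = 288.0     # approximate global mean surface temperature, K

lam_sun = WIEN_B / T_sun      # peak of solar emission, in metres
lam_earth = WIEN_B / T_earth  # peak of terrestrial emission, in metres

print(f"Solar emission peaks near {lam_sun * 1e6:.2f} um")
print(f"Terrestrial emission peaks near {lam_earth * 1e6:.1f} um")
```

The solar peak lands near 0.5 μm, inside the 0.3–5 μm band, while the terrestrial peak lands near 10 μm, inside the 4–50 μm band, right between the 4.26 μm and 14.99 μm absorption modes of CO2.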

Global warming

There is overwhelming evidence from the scientific community that human activities are to blame for the increasing concentration of carbon dioxide (CO2) in the atmosphere, and thus for the resulting global warming. This view is shared by every scientific group and research organisation focusing on climate change. The current rise in global temperature has been triggered by an almost 50 percent increase in atmospheric CO2 concentration, from 280 ppm (before the industrial revolution) to 417 ppm in May 2020. Total atmospheric CO2 and its concentration are the most reliable measures of global warming we have right now. In 1960, CO2 was increasing at less than 1 ppm per year; today the rate is about 2.4 ppm per year.
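The “almost 50 percent” figure follows directly from the two concentrations quoted above:

```python
# Relative increase in atmospheric CO2 concentration.
co2_preindustrial = 280.0   # ppm, before the industrial revolution
co2_may_2020 = 417.0        # ppm, May 2020

increase_pct = (co2_may_2020 - co2_preindustrial) / co2_preindustrial * 100
print(f"Increase: {increase_pct:.1f}%")   # just under 49%
```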

This rate of change is the best indicator of whether we are making progress in slowing global warming. At the moment, there are no signs that we are; in fact, the opposite is true. Even if we stopped burning fossil fuels today, it would take a long time for CO2 levels to fall, because CO2 persists in the upper atmosphere for hundreds of years. The most convincing evidence that rising CO2 is the likely cause of global warming comes from graphs showing how atmospheric CO2 and global average temperature have changed over the past several decades (see Fig. 1, Fig. 2). Over the past 60 years, the global average temperature has followed a trend similar to that of CO2 levels. Average global temperatures from 2010 to 2022, compared to a baseline average from 1951 to 1980, are shown in Fig. 3.

Figure 1: Carbon dioxide concentration level.

Source: NASA satellite observations.

Figure 2: Global temperature variation.

Source: NASA satellite observations.

Figure 3: Average global temperatures from 2010 to 2022 compared to a baseline average from 1951 to 1980.

Source: NASA data.

Impact of climate change

Climate change, and how to minimize the damage it causes, is one of the most pressing challenges confronting humanity today. It is multifaceted, so addressing it will require expertise across many disciplines: science, economics, society, governance, and ethics. The consequences of global warming will be felt for generations, if not centuries. While it is impossible to stop global warming completely, its rate of growth is within our control. As global temperatures continue to rise, they will harm the world’s economy, energy supply, environmental quality, and health.

So far, some of the effects of climate change include:

  • Earth is getting warmer: As temperatures rise, days of extreme heat that used to occur once every 20 years may now occur every 2 or 3 years on average. 2016 was the warmest year on record, with every month from January to September except June setting a record for that month (NASA, 2020c). Ten of the warmest years in the 140-year record have occurred since 2005, and six of the hottest years on record occurred in the past six years (IPCC, 2018).
  • Oceans are getting warmer: Over 90% of the warming on Earth in the past 50 years has occurred in the oceans (NASA, 2020c). Rising sea levels, marine heat waves, coral bleaching, severe storms, changes in marine ecosystems, and the melting of glaciers and ice sheets around Greenland and Antarctica are all driven by warmer oceans. Last year the oceans were warmer than at any time since systematic measurement of ocean temperature began more than 60 years ago.
  • Ice sheets are shrinking: Between 1993 and 2016, the Greenland ice sheet lost an average of 286 billion tonnes of ice per year, while the Antarctic ice sheet lost about 127 billion tonnes per year. In the last decade, the rate of ice mass loss in Antarctica has tripled (NASA).
  • Glacial retreat: Most of the world’s glaciers are melting, including those in Africa, Alaska, the Alps, Andes, Himalayas, and the Rocky Mountains. Most of the sea level rise in the last few decades has been caused by glaciers and ice sheets melting. The melting of glaciers is a major threat to ecosystems and water supplies for people in many parts of the world.
  • Sea level rise: Sea level rises as the oceans warm and glaciers and ice sheets melt. Warmer ocean water also expands, raising sea level further. Over the last 100 years, global sea level rose about 20 cm. In the last two decades, the rate of rise was twice that of the last century, and it is still accelerating. Flooding is becoming worse and more frequent in many places.
  • Increased frequency of extreme hydrological and meteorological events: Since the middle of the last century, record high temperatures and heavy rainfall events have become more common. Since the early 1980s, hurricanes have been getting stronger, more frequent, and longer-lasting. As the oceans continue to warm, hurricanes will grow stronger and their rainfall rates will increase.
  • Oceans are getting more acidic: Since the start of the Industrial Revolution, the surface waters of the oceans have become about 30% more acidic. The cause of this increase is that humans are releasing more carbon dioxide into the atmosphere, which causes more of it to be absorbed by the oceans. Carbon dioxide is being taken up by the top layer of the oceans at a rate of about 2 billion tonnes per year.
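That “30% more acidic” figure is worth unpacking, because the pH scale is logarithmic: a small drop in pH means a much larger relative increase in hydrogen-ion concentration. A sketch using commonly cited surface-ocean values of pH ≈ 8.21 before the Industrial Revolution and ≈ 8.10 today (these specific values are assumptions for illustration):

```python
# pH is -log10 of the hydrogen-ion concentration, so the relative change
# in acidity is 10 raised to the size of the pH drop.
ph_preindustrial = 8.21   # assumed pre-industrial surface-ocean pH
ph_today = 8.10           # assumed present-day surface-ocean pH

h_ion_ratio = 10 ** (ph_preindustrial - ph_today)
increase_pct = (h_ion_ratio - 1) * 100
print(f"Hydrogen-ion concentration up by about {increase_pct:.0f}%")
```

A drop of just 0.11 pH units yields an increase of close to 30%, in line with the figure quoted above.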

Future Scenario

According to reports from the Intergovernmental Panel on Climate Change (IPCC), the average global temperature is on track to rise by 3°C by the end of this century. The target is a maximum of 1.5°C, but reaching it will require “rapid, far-reaching, and unprecedented changes in all parts of society.” To limit warming to 1.5°C, greenhouse gas emissions will need to be cut by 2030 to 45 percent below 2010 levels. And, as noted above, even if all emissions stopped right now, the world’s temperature would still rise for decades because of the long-lasting effects already stored in the atmosphere and oceans. Climate change affects the quality of our environment, our food supplies, our susceptibility to diseases and other health problems, and our livelihoods. Most of these effects are already being felt, will intensify in the future, and will sadly fall more heavily on the poor than on the rich.

Information security

Reading Time: 5 minutes

First, what do we mean by “security”?

“The quality or state of being secure—to be free from danger.” A successful organization should have multiple layers of security in place:

• Physical security: safeguarding personnel, hardware, software, networks, and data against physical actions and events that could cause significant loss or damage to an enterprise, agency, or institution.

• Operations security: protecting the details of a particular operation or activity.

• Communications security: protecting an organization’s communication media, technology, and content.

• Network security: protecting networking components, connections, and contents.

• Information security: protecting information and its critical elements.

Information Security

Information security refers to securing the data or information and information systems from unauthorized access, unauthorized use, misuse, destruction, or alteration. It plays a vital role in protecting the interests of individuals who depend on information or data. Information security aims to protect the confidentiality, integrity, and availability of information.

Elements of Information Security

Confidentiality:

Information is accessible only to those who are authorized to view it.

Integrity:

Information, especially while being communicated, is protected against unauthorized modification.

Availability:

Information is invulnerable to attacks or is recoverable in a secured way, i.e., it is available (only to authorized) when it should be.

Non-Repudiation:

The sender of information cannot later deny having sent it.
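As an illustration of the integrity element, a common building block is a message authentication code. Below is a minimal sketch using Python’s standard `hmac` module; the key and message are hypothetical placeholders. (Note that an HMAC verifies integrity between parties who share a key; true non-repudiation requires asymmetric digital signatures, since either party holding the shared key could have produced the tag.)

```python
import hashlib
import hmac

key = b"shared-secret-key"                      # hypothetical shared secret
message = b"transfer 100 units to account 42"   # hypothetical message

# Sender computes an authentication tag over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                 # untouched message verifies
print(verify(key, b"transfer 9999 units", tag))  # modified message fails
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` comparison can leak timing information that helps an attacker forge tags.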

The Need for Information Security

-Protecting the functionality of the organization

-Enabling the safe operation of applications

-Protecting the data that the organization collects and uses

-Safeguarding technology assets in organizations

The Importance of Information Security

Companies need confidence that their data security is strong and that they can protect against cyber-attacks, unauthorized access, and data breaches. Weak data security can lead to key information being lost or stolen and creates a poor experience for customers; if a company fails to implement sufficient protections over customer data and hackers exploit the weaknesses, the result can be lost business and reputational harm. Solid information security reduces the risk of attacks on information technology systems, implements security controls to prevent unauthorized access to sensitive data, prevents service disruption caused by cyber-attacks such as denial-of-service (DoS) attacks, and much more.

Information Security Attack Vectors

An attack vector is a pathway or method used by a hacker to illegally access a network or computer to exploit system vulnerabilities. Hackers use numerous attack vectors to launch attacks that take advantage of system weaknesses, cause data breaches, or steal login credentials.

Cloud Computing Threats:

Cloud computing is the on-demand delivery of IT capabilities, in which the sensitive data of organizations and their clients is stored.

A flaw in one client’s cloud application can allow attackers to access other clients’ data.

Advanced Persistent Threats:

An APT is an attack that focuses on stealing information from the victim’s machine without the user being aware of it.

Viruses and Worms:

Viruses and worms are the most prevalent networking threats that are capable of infecting a network within seconds.

Mobile Threats:

The focus of attackers has shifted to mobile devices due to the increased adoption of mobile devices for business and personal purposes and comparatively lower security controls.

Botnet:

A botnet is a huge network of compromised systems used by an intruder to perform various network attacks.

Insider Attack:

It is an attack on a corporate network or a single computer performed by a trusted person (an insider) who has authorized access to the network.

Information Security Threat Categories

Network Threats

-Information gathering

-Sniffing and eavesdropping

-Spoofing

-Session hijacking and Man-in-the-Middle attack

-DNS and ARP Poisoning

-Password-based attacks

-Denial-of-Service attack

-Compromised-key attack

-Firewall and IDS attack

Host Threats

-Malware attacks

-Footprinting

-Password attacks

-Denial-of-Service attacks

-Arbitrary code execution

-Unauthorized access

-Privilege escalation

-Backdoor attacks

-Physical security threats

Application Threats

-Improper data/input validation

-Authentication and Authorization attacks

-Security misconfiguration

-Information disclosure

-Broken session management

-Buffer overflow issues

-Cryptography attacks

-SQL injection

-Improper error handling and exception management
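Several of the application threats above, SQL injection in particular, come down to improper input handling. Below is a minimal sketch of the standard defence, parameterized queries, using Python’s built-in `sqlite3` module; the table and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload supplied as "user input".
user_input = "alice' OR '1'='1"

# Building the query by string concatenation would let this payload
# rewrite the WHERE clause and return every row. The ? placeholder
# instead binds the input as a literal value, so it matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)   # empty list: the injection attempt fails
```

The same placeholder idiom (with driver-specific syntax) applies to any SQL database, and complements, rather than replaces, server-side input validation.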

Cybersecurity

Cybersecurity is the protection of internet-connected systems such as hardware, software, and data from cyber threats. The practice is used by individuals and enterprises to protect against unauthorized access to data centres and other computerized systems.

Distinctions between Cybersecurity and Information security

Cybersecurity is meant to protect against attacks in cyberspace: on data, storage sources, devices, and so on. In contrast, information security is intended to protect data from any form of threat, whether analogue or digital. Cybersecurity usually deals with cybercrime, cyber fraud, and law enforcement; information security, on the contrary, deals with unauthorized access, disclosure, modification, and disruption.

Cybersecurity is handled by professionals trained specifically to deal with advanced persistent threats (APTs). Information security professionals, on the other hand, lay the foundation of data security and are trained to prioritize resources before eradicating threats or attacks.

Information Security Laws and Standards

• Payment Card Industry Data Security Standard (PCI-DSS)

• ISO/IEC 27001:2013

• Health Insurance Portability and Accountability Act (HIPAA) 1996

• Sarbanes-Oxley Act (SOX) 2002

• The Digital Millennium Copyright Act (DMCA) 1998

• Federal Information Security Management Act (FISMA) 2002

• Cyber Laws

Conclusion

In conclusion, the value of data has reached a critical point: it has become one of the most important assets a company can possess, while collecting, processing, transmitting, and storing it has become increasingly complex.

Information security is designed to protect the confidentiality, integrity, and availability of computer systems and physical data from unauthorized access, whether with malicious intent or not.

Confidentiality, integrity, and availability are referred to as the CIA triad.

Every information security program is concerned with protecting the CIA triad while maintaining organizational productivity.

“The goal of information security is not to bring residual risk to zero; it is to bring residual risk into line with an organization’s comfort zone or risk appetite.”

GitHub Copilot or Nopilot?

Reading Time: 3 minutes

What’s the answer to most of your coding questions? Some will say Stack Overflow, while others will say a co-programmer. What if we had a combination of both? Life would be simpler, right? That is where GitHub Copilot comes in.

Introduction

GitHub Copilot is an AI (Artificial Intelligence) pair programmer that gives autocomplete-style suggestions. It is like a co-programmer sitting next to you who can suggest code. AI already supports us in daily life: it helps us write emails by suggesting sentence completions, automatically generates photo albums, and, of course, powers Google Assistant, the digital assistant that helps with a variety of tasks. Yet developing software is still largely a manual job. To change this, GitHub introduced GitHub Copilot.

What exactly does it do and how?

GitHub Copilot is built by GitHub, one of the most prominent open-source platforms, where people contribute to a huge range of projects. As a result, a vast amount of code is publicly available on GitHub. Meanwhile, an AI system is a program that uses a large dataset and a learning algorithm to discover patterns and features in the data it analyzes.

To put it another way, imagine GitHub Copilot as a very senior developer who has seen billions of functions across countless projects. If you ask them a question about your code, there is a high probability of getting a correct answer.

GitHub Copilot uses this large body of data to offer suggestions and autocomplete code.

For instance, suppose you want to write a Python function that takes two numbers and prints their sum.

If you write the comment “Function to print the sum of two numbers,” GitHub Copilot may suggest:

def total(a, b):
    result = a + b
    print(a, "+", b, "=", result)

Isn’t that cool? It can certainly save a lot of time and effort when writing code.

But it also has some disadvantages. Let’s dive in.

Disadvantages

  1. Many people think this program has the potential to replace human developers, meaning it could take away jobs. As it improves over time, there is a real chance it will displace developers whose work involves little logic-building.
  2. Is it credible? Maybe not right now. It is still at an early stage, and users have reported incorrect code suggestions.
  3. It is available to all developers, but generally at $10/month or $100/year, which may be unaffordable for some people.
  4. It is free for verified students, so if students grow accustomed to Copilot and start to rely on it, that could be harmful to their future.

Conclusion

GitHub Copilot is software that enhances the coding experience and makes it easier to develop software. It offers suggestions and can help coders work faster. It also has the disadvantages mentioned above. Overall, it is an innovative and useful piece of software, but only if used appropriately.

Time travel

Reading Time: 5 minutes

Time travel is commonly defined by David Lewis’ criterion: an object time travels if and only if the difference between its departure and arrival times, as measured in the surrounding world, does not equal the duration of the journey undergone by the object.

People ask: is time travel a philosophical construct, a hypothesis, or something mathematically demonstrated? The answer is that it is a combination of all three!

Can we travel in time?

Yes, we can! In fact, we are all traveling in time right now, at a steady rate of one second per second. We are commonly taught that there are only three dimensions, and that moving through time is nothing like moving between spatial dimensions. But according to Einstein's theory of relativity, we live in a four-dimensional continuum, space-time, in which space and time are interwoven.

Many of you may have heard of the novel "The Time Machine" by H.G. Wells, published in 1895. One of the earliest works in this genre, it introduced the concept of time travel using a device or machine to travel deliberately backward or forward in time.


Today, Wikipedia lists over 400 titles in the category 'Movies about time travel'. In films and series like "Back to the Future", "Doctor Who", and "Star Trek", characters climb into some wild vehicle to blast into the past or spin into the future.

Many of us are fascinated by the idea of changing the past or looking into the future, but no person has ever demonstrated the kind of back-and-forth time travel depicted in science fiction, nor has anyone proposed a method of sending a person through time without destroying them.

But that also doesn't mean time travel is not happening. We know from Einstein's theory of relativity that the faster we move, the slower we experience time. A clock on a jet airplane will run slightly behind a clock on the ground.

What many of us don't realize is that we rely on the fundamentals of time travel in many day-to-day activities.


Time Travel Applications

We use GPS technology to search for locations across the globe. NASA also uses a high-accuracy version of GPS to keep track of where satellites are in space.

Surprisingly, this whole system relies on time-travel calculations: we and the satellites are traveling into the future at very slightly different rates. GPS satellites orbit Earth quickly, at about 8,700 miles per hour. This slows GPS satellite clocks down by a small fraction of a second each day (just like the jet airplane clock above).


However, the satellites are also orbiting Earth about 12,550 miles above the surface. This actually speeds up GPS satellite clocks by a slightly larger fraction of a second.

Here’s how: Gravity is much weaker at the height where these satellites orbit.

Einstein's theory also says that gravity slows down the passage of time, so clocks run faster where gravity is weaker.

The combined result is that the clocks on GPS satellites experience time at a rate slightly faster than 1 second per second. Luckily, scientists can use math to correct these differences in time.
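
The two competing effects can be estimated with a few lines of arithmetic. This sketch uses standard textbook approximations (v²/2c² for the velocity effect, GM·Δ(1/r)/c² for the gravitational effect) with rounded constants, so the numbers are approximate:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
R_EARTH = 6.371e6    # Earth's radius, m
R_GPS = 2.6571e7     # GPS orbital radius (~12,550 mi altitude), m
DAY = 86_400         # seconds per day

# Special relativity: orbital speed makes the satellite clock run slow
v = (G * M / R_GPS) ** 0.5                 # orbital speed, ~3,900 m/s
slow_us = v**2 / (2 * c**2) * DAY * 1e6    # microseconds lost per day

# General relativity: weaker gravity makes the satellite clock run fast
fast_us = G * M * (1 / R_EARTH - 1 / R_GPS) / c**2 * DAY * 1e6

net_us = fast_us - slow_us   # net gain: roughly 38 microseconds per day
```

The gravitational speed-up (about 46 microseconds/day) outweighs the velocity slow-down (about 7 microseconds/day), which is why GPS satellite clocks are deliberately tuned before launch to come out right in orbit.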

If these corrections were ignored, GPS maps would place your home nowhere near where it actually is!

Time travel into future

If you want to advance through the years a little faster than the next person, you'll need to exploit space-time. As we saw above, GPS satellites pull this off every day, accruing an extra few dozen millionths of a second daily.

You wouldn’t be able to notice minute changes in the flow of time, but a sufficiently massive object would make a huge difference — say, like the supermassive black hole at the center of our galaxy. Here, the mass of 4 million suns exists as a single, infinitely dense point, known as a “Singularity”. Circle this black hole for a while (without falling in) and you’d experience time at half the Earth rate. In other words, due to time dilation you’d round out a five-year journey to discover an entire decade had passed on Earth.
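
The "half the Earth rate" figure follows from the Schwarzschild time-dilation factor √(1 − r_s/r) for a hovering observer (gravitational effect only, ignoring orbital speed). A quick sketch with rounded constants:

```python
G = 6.674e-11            # gravitational constant
c = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg
M_BH = 4e6 * M_SUN       # mass of a Sagittarius A*-scale black hole

# Schwarzschild radius: the point of no return
r_s = 2 * G * M_BH / c**2          # ~1.2e10 m, about 12 million km

def clock_rate(r):
    """Clock rate at radius r relative to a distant observer."""
    return (1 - r_s / r) ** 0.5

# Hovering at 4/3 of the Schwarzschild radius, clocks tick at half speed
rate = clock_rate(r_s * 4 / 3)     # ~0.5
```

So spending five years at that radius would indeed correspond to roughly a decade passing far away, matching the thought experiment above.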

Quite a similar thing was shown in the movie "Interstellar", directed by Christopher Nolan.

In the movie, the crew spends a few hours on a planet orbiting a supermassive black hole, but because of time dilation, observers on Earth experience those hours as a matter of decades.

In this sense, we are traveling into the future. But what about the past? Could the fastest starship imaginable turn back the clock?


Time Travel into the past (Changing history!)

A big question on all of our minds: can we change our history? Can engineers take back their decision to pursue engineering?

A glance into the night sky should supply an answer. The Milky Way galaxy is roughly 100,000 light-years wide, so the light from its more distant stars can take thousands upon thousands of years to reach Earth. Glimpse that light, and you’re essentially looking back in time.

But can we do better than this?

There’s nothing in Einstein’s theory that precludes time travel into the past, but the very premise of pushing a button and going back to yesterday violates the law of causality. One event happens in our universe, and it leads to yet another in an endless one-way string of events. In every instance, the cause occurs before the effect.

What happens if you go back in time and kill your parents before you are born? How can you be born? This is the classic grandfather paradox.

To travel back in time, some scientists have proposed traveling at a speed greater than the speed of light. After all, if time slows down for an object approaching the speed of light, then exceeding that limit might cause time to flow backward. But as an object nears the speed of light, its relative mass increases until it becomes infinite at light speed.

Accelerating an infinite mass any faster than that is impossible. Maybe that’s the reason we can’t move backward in time.


Conclusion

We may conclude that time travel is possible, but probably not in the way we see it in science fiction and movies. There are still many cards unturned and a lot more to learn about this vast topic. Hopefully, this blog gave you a head start!

Web3.0, Blockchain and Dapps

Reading Time: 6 minutes

We have already entered the era of Web3.0, which is the next evolution of the internet…. Wait! Web3.0? Are you serious? Why can we not have just a single web? What’s the matter with these different versions of the web? Oh wait, I got it! It’s all about breaking the monotonous “Web” into different versions just to sound cool, yeah?

Well, I wouldn’t deny the fact that, yes, it indeed “sounds cool”, but there is a whole story behind it…. want to know about why there have been different versions of the web and what makes web3.0 the buzzword in the community? Well, ride with me to explore this journey ahead.

  • Introduction


Do you like to create content? Writing articles and blogs, sharing information with people surfing the internet… well, this is what Web1.0 was all about. Pioneered by Tim Berners-Lee, it consisted only of static pages serving the content created by users. Between roughly 1991 and 2004, only a few people created content, and the rest simply consumed it over the internet.

Then came Web2.0, the web we have been using, which increased interactivity by not just serving users content but enabling them to interact with each other over the internet. Do you want food and groceries delivered to your place, want to book a ride, or would you like to get feedback on your service? This phase of the web addresses all of these needs.

Now let me come back to where I dropped off… We have already entered the "Era of Web3.0", the next evolution of the internet, which is based on openness (the content and the code are not controlled by a small group of organizations but built on open-source platforms, so anyone in the world with internet access can use them) and decentralization (no permission is needed from a central authority to post anything on the web, and users can interact with services without central authorization).

As a result, Web3.0 applications will run on blockchains, and such decentralized apps are referred to as dApps (discussed below). Web3.0 will also use machine learning to make effective use of data, giving valuable insights to companies and better results to consumers. And it doesn't stop there: the applications of Virtual Reality are also immense, exposing people to virtual environments in domains such as the military, sports, and medicine before they go into the real world.

  • Blockchain


The heart of Web3.0, blockchain, is a distributed database or ledger shared among the nodes of a computer network (nodes are the devices or computers that participate in a blockchain network and continuously exchange the latest transactions on the blockchain).

The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto, who is known as Bitcoin’s pseudonymous creator. In a research paper introducing the digital currency, he (they) referred to it as “a new electronic cash system that’s fully peer-to-peer, with no trusted third party.” After this event, there was no going back.

As a database, a blockchain stores information electronically in digital format. But unlike a normal database, which usually stores data in tables (think of Excel sheets, with data organized in rows and columns), a blockchain structures its data into blocks, distributes them over the network of nodes, and does not allow them to be edited. In this way, a blockchain is the foundation for immutable ledgers: records of transactions that cannot be altered, deleted, or destroyed.

Distributing the data among network nodes at various locations not only creates redundancy (multiple copies of the same data exist across the network) but also maintains the fidelity of the data: if somebody alters a record at one instance of the database, the other nodes are unaffected, and the tampering can be detected.

If one user tampers with a record of transactions, all other nodes cross-reference each other and easily pinpoint the node with the incorrect information. This system helps establish an exact and transparent order of events, so no single node within the network can alter information held within it.
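
The hash-chaining idea behind this tamper detection can be sketched in a few lines of Python. This is a toy model (real blockchains add consensus, signatures, and proof-of-work), but it shows why editing an old block breaks every link after it:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including the previous block's hash
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64           # genesis block links to all-zeros
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:   # broken link => tampering detected
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob 5", "bob pays carol 2"])
ok_before = verify(chain)                # True: links are intact
chain[0]["data"] = "alice pays bob 500"  # tamper with an early record
ok_after = verify(chain)                 # False: the next block's link fails
```

Because each block stores the hash of the one before it, changing any historical record invalidates every later link, which is exactly what the other nodes would notice.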


Because of this, the information and history of the data stored on the blockchain are irreversible. Blockchain finds applications in various places, such as:

  • Secure sharing of medical data
  • NFT (non-fungible token) marketplace
  • Cross-border payments
  • Supply chain and logistics monitoring
  • Anti-money laundering tracking system
  • Voting mechanism
  • Crypto exchanges
  • Secure IoT (internet of things) network

  • Dapps

What Are dApps?

Decentralized applications (dApps) are digital applications or programs that exist and run on a blockchain or network of computers directly connected with each other on the network (P2P or peer-to-peer network) instead of a single computer. These are outside the purview and control of a single authority.

Well, that was a rather broad definition; to understand it better, let's look at an example…

A standard web app, such as Ola, Uber, Twitter, or Amazon, runs on a computer system owned and operated by an organization, giving it full authority over the app and its workings. There may be many users on one side, but the backend is controlled by a single organization. dApps, by contrast, run on a P2P network or a blockchain network.


A peer-to-peer (P2P) service is a decentralized platform whereby two individuals interact directly with each other, without intermediation by a third party. The buyer and the seller communicate directly, with no other party or organization between them. Also, since a dApp runs on a blockchain network, data once registered on it cannot be deleted by anyone.

dApps use smart contracts (pieces of code embedded in a blockchain that carry out computations and transactions; the Ethereum blockchain, for example, uses the Solidity language to write, store, and run code on-chain) to complete transactions between two anonymous parties without relying on a central authority.

A smart contract is a self-executing contract with the terms of the agreement between buyer and seller being directly written into lines of code.
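
That "terms written into code" idea can be illustrated with a toy escrow in Python. Real contracts are written in a language like Solidity and executed on-chain by every node; this sketch (the class and method names are ours, purely illustrative) only mirrors the self-executing behaviour:

```python
class EscrowContract:
    """Toy escrow: funds release automatically once the coded terms are met."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = 0
        self.delivered = False
        self.paid_out = False

    def deposit(self, amount):
        self.deposited += amount   # buyer locks funds inside the contract

    def confirm_delivery(self):
        self.delivered = True      # delivery confirmed (by buyer or an oracle)
        self._settle()

    def _settle(self):
        # The self-executing clause: no bank or court is needed
        if self.delivered and self.deposited >= self.price and not self.paid_out:
            self.paid_out = True   # funds transfer to the seller

contract = EscrowContract("alice", "bob", price=100)
contract.deposit(100)
contract.confirm_delivery()
# contract.paid_out is now True: the agreement enforced itself
```

The key point is that neither party can stop or reinterpret the payout once the coded conditions are satisfied, which is what removes the need for a trusted middleman.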


Ethereum dApps use smart contracts for their logic. They are deployed on the Ethereum network and use the platform’s blockchain for data storage. Ethereum provides flexible dApps development, hence helping in the rapid deployment of dApps for a variety of industries including banking and finance, gaming, social media, online shopping, and many more.

  •  Conclusion

We can finally see the story behind the different versions of the Web, and how blockchain and decentralization have already been bringing a revolution to the tech domain. The digital world is changing rapidly, and companies need to keep up with the swiftly moving tech environment. Along with the survival of the fittest, it's also about the survival of the swiftest! You can beat your competition in the market if your company not only adopts modern technology but also adjusts to the modern environment faster than the others.

Spiralling fuel prices and their implications on the Indian economy

Reading Time: 4 minutes

With oil prices at all-time highs, consumers and businesses alike are feeling the pinch, and India's sputtering economy has been struggling to cope with the increased cost of fuel. Find out how a complicated web of factors is affecting the economy, and what will happen if fuel prices continue to rise.

Introduction

India imports about 85% of its oil requirements. While global demand for crude oil is continuously rising, geopolitical tensions add a sudden push to its price. The direct effect of increased prices is inflation. To curb inflation, the Reserve Bank of India has hiked interest rates, which in turn has made it difficult for many industries to grow. This has led to a slowdown in the economy and has adversely affected the country's growth prospects.

Current situation

The government has been trying to mitigate the effects of high fuel prices by providing subsidies and increasing domestic production. However, these measures have not been very effective in reducing the overall burden on the economy, and they have often led to higher fiscal deficits. War-like scenarios also push up fuel prices: after the open conflict between the USA and Iran (Iran being one of the world's major oil producers), fuel prices spiked as Iran's exports became costlier due to lower production during the 2020 conflict. The Russia-Ukraine conflict also plays a hand in this shuffled deck of problems, and India suffers the fallout to some degree. Similarly, OPEC+ (the Organization of the Petroleum Exporting Countries and its partners, including Russia) has been unable to export enough crude oil to meet demand due to low production.


 Impact on the Indian Economy

Just from February to March of 2022, crude prices jumped from $90 to a soaring $120 per barrel. As the price per barrel rises, the Indian rupee weakens; it currently stands at Rs. 78.29 per dollar and is expected to fall to Rs. 80 per dollar. This increases India's spending to keep up with consumer demand, not only for crude oil but also for edible oils. According to a report by Kotak Institutional Equities, an additional burden of $70 billion is estimated on the country's economy in FY23 compared with the FY22 level.

However, the approach taken by the government is questionable. Even as the second wave of Covid-19 hit India and left a trail of economic devastation, the central and state governments hiked taxes on fuels, pushing retail fuel prices to record highs in most of the country. Fuel sales attract central and state taxes on top of refining costs, and these taxes contribute substantially to state treasuries; the central government also counts on a significant contribution from fuel sales to the annual budget. Even when fuel prices softened over 2020-21, the benefit was not passed on to the Indian consumer. Whatever officials may say, neither the state governments nor the central government wants to reduce its revenues.

More recently, fuel pumps in several states have reportedly run dry of diesel, and a shortage has appeared. HPCL and BPCL (leading petroleum corporations) have restricted the supply of petrol and diesel to only 33 percent of demand, leading to a crisis. Does the unchanged fuel retail price have anything to do with this? Yes: despite the rising trend in crude oil prices, retail fuel prices have been kept stagnant since April, so consumers haven't felt the heat of increased prices, while the oil marketing companies are incurring huge losses and have therefore reduced supply.


Conclusion

The Indian economy feels the pinch of increased pricing, which puts pressure on consumers and businesses alike in the form of higher transportation costs, leading to inflationary pressures. The knock-on effect of transportation costs is felt across the economy: the cost of moving everything from raw materials for industrial projects to daily goods and essentials has risen, squeezing businesses of all sizes and even household budgets. While per capita earnings have grown, average spending has increased drastically this decade, so consumers bear the brunt of higher retail fuel prices. The government is under immense heat; it would have to reduce taxes to keep fuel prices in check and, in turn, reduce the burden on consumers and itself. Changes in policy are imminent and expected!

Hyperverse

Reading Time: 3 minutes

Hyperverse is a game inspired by the concept of the metaverse. It combines many metaverses linked to form a single component, opening the door for a revolutionary virtual experience.

In the Hyperverse, players, also known as voyagers, can connect with friends, experience different cultures, create tokenized items, run businesses, and explore the universe.

Hyperverse is a New York-based company that offers a virtual reality experience distribution platform. It was founded by Roman Mikhailov and Arsen Avdalyan in January 2016.

Features:

  • Virtual Experience: It allows players to clone themselves and experience the virtual life that Hyperverse offers.
  • Tokenized in-game items: Virtually anything within the Hyperverse can be traded as NFTs.
  • Space expedition: Form groups for interstellar voyages to explore unknown space and planets.
  • Decentralized: Trade tokenized real-world stocks, options, contracts, and ETFs.

ADVANTAGES OF USING CRYPTO AS IN-GAME CURRENCY:


Digital proof of ownership:

By owning a wallet with access to your private keys, you can instantly prove ownership of an activity or an asset on the blockchain. A wallet is one of the most secure and robust methods for establishing a digital identity and proof of ownership.

Digital collectability:

Just as we can establish who owns something, we can also show that an item is original and unique. Through NFTs, we can create unique objects that can never be forged.

Transfer of value:

In-game currencies in multiplayer games are less secure than crypto on a blockchain. If users spend a lot of time in the metaverse and earn money there, they will need a reliable currency.

Governance:

In real life, we can have voting rights in companies and elect leaders and governments. The metaverse will also need ways to implement fair governance, and blockchain takes care of that.

Accessibility:

Creating a wallet is open to anyone around the world on public blockchains. Unlike a bank account, you don’t need to pay any money or provide any details. This is a considerable advantage in managing your finances effectively.

HOW HYPERVERSE DEVELOPERS FUND THE DEVELOPMENT OF HYPERVERSE:

The developers promised people who invest in Hyperverse that their money would be tripled. Investors' USDT (a stablecoin) is converted into HU, the in-game currency known as the Hyper Unit. Hyperverse has raised over $530k in total.

WHAT HYPERVERSE REALLY IS:

It's one of the most well-thought-out scams, hiding its true intention very effectively. It's a Ponzi scheme, closely related to a pyramid scheme. Investors can easily fall into it without realizing, because of its well-designed system.


To enroll in Hyperverse, you need to purchase a membership that looks harmless to the average eye but is designed to hide the Ponzi scheme the investor is getting into.


The investor is promised that their investment will triple within 600 days. How it triples is kept unclear, and once investors fall into the greed trap, they are locked into a 600-day commitment.
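
Why must such a scheme collapse? Because tripling old money requires ever more new money. The sketch below is deliberately simplified (one cohort per cycle, with payouts funded solely by the next cohort), but it shows the required inflows growing geometrically:

```python
def required_deposits(first_cohort, multiplier=3.0, cycles=5):
    """Deposits each later cohort must contribute so the previous
    cohort can be paid `multiplier` times its money."""
    needed, owed = [], first_cohort * multiplier
    for _ in range(cycles):
        needed.append(owed)   # this cycle's payouts = new money required
        owed *= multiplier    # and the new cohort must be paid too
    return needed

# A $1,000 first cohort promised 3x: each cycle needs triple the inflow
print(required_deposits(1000))  # [3000.0, 9000.0, 27000.0, 81000.0, 243000.0]
```

Recruitment can't triple forever, so the last (and largest) cohorts inevitably lose their money.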


They have lured investors with reviews and success stories from Hyperverse members, with comments like: "I know two people who invested in HYPERVERSE, and they're doing great. One put in 400 k, and he's paying for a 15-million-dollar estate. The other put in 30k, and he's already pulled out his initial investment. I'm doing great myself going well so far."


Hyperverse is a scam by Ryan Zou and Sam Lee, who have been involved in disgusting scams like these several times.

Conclusion:

With the rapid growth of cryptocurrency, it has become easier for scammers to scam people. Investors should be cautious about putting money into crypto-based projects without fully understanding how and why their investment is supposed to grow by the promised amount, and should consult experts before investing.

Space Travel- The New Era of Tourism

Reading Time: 6 minutes

SPACE… the word amazes us to this day. We all dream of going to space and seeing the bewildering sight from high above, but most of us could not follow that dream; only the few who became astronauts got a real opportunity to do so.

For those who couldn’t go, science came to the rescue and has made space travel for civilians possible.

Even if the concept of space tourism seems fresh, it was back in mid-2001 that Dennis Tito, an American businessman, became the first space tourist and spent nearly 8 days in space.

Where have we reached till now?

The advent of space tourism occurred at the end of the 1990s, when the Russian company MirCorp and the American company Space Adventures Ltd came together and decided to sell a trip to Dennis Tito in order to generate revenue for the maintenance of the Mir space station, which MirCorp was in charge of. Tito, the first paying passenger, paid $20 million for a roughly 8-day expedition.

Orbital space tourism continued to grow following Tito's mission, with flights to the ISS by South African computer millionaire Mark Shuttleworth in 2002, Gregory Olsen in 2005, Anousheh Ansari in September 2006, and many more.

(Dennis Tito's space exploration)

Different types of space flights:

We can classify the type of space flights passengers can enjoy into two divisions, namely suborbital and orbital.

1)Suborbital Spaceflight

These flights aim for an astonishing altitude of over 300,000 feet, crossing the Kármán line, where outer space begins. Currently, there are two major companies targeting this type of flight: Virgin Galactic, part of Richard Branson's empire, and Blue Origin, run by Amazon's billionaire founder Jeff Bezos.

2)Orbital Spaceflight

Orbital flights are completely different from suborbital ones, which last only a few minutes. These spaceflights aim to give the passenger an exotic journey of a few days to a week in space, at altitudes above 1.3 million feet. The final quarter of 2021 is likely to be huge for orbital space tourism, with two major companies, Space Adventures and Axiom Space, announcing up to nine seats to orbit available for purchase by individuals or organizations.

But orbital spaceflight has its own challenge: accommodating passengers in space for several days. Currently, the International Space Station (ISS) is the only habitable structure in orbit, though many companies are working on a breakthrough.

Why Space tourism?

It is easy to dismiss space tourism as a luxury only the rich can afford, but it has certain advantages that every part of society might experience.

1)It is a dream project to work on.

Since the onset of space tourism, the frequency of spaceflights has only grown. This will automatically create a boom, with job opportunities in every sector. And who wouldn't like to work on a project related to SPACE!

2)The cost will definitely go down

Even if space tourism seems costly now, we will surely see a decrease in cost, because spacecraft with greater capacity can spread the fixed cost of a launch over more passengers.

Compare this with airplanes: once a mode of travel for the upper class, but as the aviation sector boomed, ticket prices fell significantly, and today the common man can afford to fly.
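
The economics of that comparison are simple division: a launch carries a large fixed cost, so more seats means a lower ticket price. The figures below are entirely hypothetical, chosen only to illustrate the trend:

```python
def ticket_price(fixed_launch_cost, seats):
    """Spread a fixed launch cost evenly over the passengers."""
    return fixed_launch_cost / seats

launch_cost = 60e6                             # hypothetical $60M per launch
small_craft = ticket_price(launch_cost, 4)     # 15 million dollars per seat
big_craft = ticket_price(launch_cost, 40)      # 1.5 million dollars per seat
```

Reusable rockets push the same lever from the other side, shrinking the fixed cost itself rather than the divisor.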

3)Space tourism will not be a complete disaster

Most of us are still concerned about the environmental imbalance that space tourism may cause. Fortunately, Blue Origin's New Shepard produces only water vapour as exhaust, avoiding the greenhouse emissions of conventional rocket fuels. Also, Elon Musk has declared his intention to produce methane fuel directly from the atmosphere using solar power, which would make the fuel cycle carbon neutral.

4)Increase in space exploration and experiments held in space

When Dennis Tito went to space, he conducted a scientific experiment on behalf of a lab. This shows that space tourism will not only support scientific exploration but also open avenues for advanced technology, driven by the demand created by this massive boom.

Challenges Confronting Space Tourism

1)Cost:

Cost is one of the biggest challenges: so far, space travel by tourists has been limited to the rich. OECD space agencies have spent approximately $1 trillion since 1961. Unless costs fall as far as technology allows, space agencies' role in the future development and exploration of space is likely to shrink progressively. Ultimately, reducing costs will let the development of space travel lead to the permanent and progressive expansion of human culture into space.

2)Lack of understanding:

Another challenge space tourism faces is that it is still not well understood by the general public, which poses a marketing challenge: promotion largely relies on what is essentially word-of-mouth advertising.

3)Environmental challenges:

Space tourism poses a grave environmental challenge to mankind. The pollution caused by launching space shuttles and rockets is enormous, and as launches multiply, pollution will rise and the amount of debris in space will reach a dangerous level. This will contribute to global warming, and its harmful effects will be borne by poorer people who have no stake in the development of space tourism.

(space junk)

What’s next?

Currently, space tourism is on course to being developed as a model of space adventure, though with some potential concerns. Space tourism is in its pioneering phase where customers will be very few, and the cost will still be fairly high. As companies like SpaceX test reusable rocket technology to make spaceflight more affordable and accessible for humans, other private firms, including Virgin Galactic and Blue Origin, are investing in suborbital space tourism to take Earthlings into the very edge of space and back. There are prospects in the future for the start of sub-orbital passenger space flight operations from newly developed commercial spaceports. More next-generation engineers will enter the space tourism sector for the scope of opportunities and innovation, eventually decreasing the barriers to entry that will increase competition, lower the costs, and ultimately democratize space travel for everyday citizens. Some companies have their sights set on venturing even further, with aspirations of building the first orbital space hotel. While only uber-wealthy passengers and private researchers will have access to space tourism in the immediate future, the long-term holds promise for ordinary citizens.


Conclusion

It can be concluded that space tourism is at a nascent stage, and many challenges must be solved before the industry can flourish. Certainly, the industry's progress over the last several years has been uneven, and arguably slower than hoped for. The industry may have made some mistakes and lost some opportunities along the way, but it is equally necessary to refine its procedures so that people understand the prospective market niche and depth of space tourism. Recognizing the technologies that will be required is equally vital, considering that the industry is a multi-million-dollar investment. The growth of space tourism will have a huge economic and cultural effect, widening human horizons, as is appropriate for the 21st century. Tourism isn't just going to be a small part of future space activity; it is going to be the mainstream space activity. The sweet escape to the stars can eventually manifest the awe-inspiring potential of space exploration while also giving us a better appreciation of our home.

The Combat Uniform

Reading Time: 4 minutes

As you may know, the Army's combat uniform was changed recently, on January 15, which was Army Day. Here we give some insight into the new uniform.

  • The New Combat Uniform has been designed by the National Institute of Fashion Technology (NIFT) in close coordination with the Army.
  • The uniform was designed by a team of 12 people which included seven professors, three students, and two alumni. 
  • The Combat Uniform was created through a consultative process with the Army, keeping in mind the “4Cs” — comfort, climate, camouflage, and confidentiality.

Purpose:

The uniform has been designed to serve two requirements: 

a) protection against harsh climatic conditions, including extreme heat and cold

b) providing soldiers' outfits with field camouflage, so as to increase battlefield survivability.

The digital disruptive pattern: The new trend in camouflage design is to use many small random squares instead of large random squiggles. Any camouflage pattern created with the help of a computer is referred to as "digital camouflage". You feed the computer the colours to use and their percentages, and it generates a pattern, which is then refined repeatedly until it is attractive to the eye.
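
That colours-and-percentages process can be sketched in Python. This is our own illustrative toy (the function name and palette are made up, and real pattern design involves far more refinement), but it shows the "feed in colours and percentages, get a blocky pattern" step:

```python
import random
from collections import Counter

def digital_camo(width, height, colours, percentages, block=4, seed=1):
    """Fill a grid with colour blocks chosen by the given percentages.

    Choosing a colour per block (not per pixel) is what produces the
    'little random squares' look of digital camouflage."""
    rng = random.Random(seed)
    grid = [[None] * width for _ in range(height)]
    for top in range(0, height, block):
        for left in range(0, width, block):
            colour = rng.choices(colours, weights=percentages)[0]
            for y in range(top, min(top + block, height)):
                for x in range(left, min(left + block, width)):
                    grid[y][x] = colour
    return grid

# An olive-and-earth palette in rough percentage shares
pattern = digital_camo(32, 32, ["olive", "earth", "khaki", "black"],
                       [40, 30, 20, 10])
counts = Counter(cell for row in pattern for cell in row)
```

A designer would then iterate on block size, palette, and percentages until the pattern breaks up a silhouette at the intended viewing distances.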


History:

Digital camouflage provides a disruptive effect through the use of pixellated patterns at a range of scales, meaning that the camouflage helps to defeat observation at a range of distances. Such patterns were first developed during the Second World War, when Johann Georg Otto Schick designed a number of patterns for the Waffen-SS, combining micro- and macro-patterns in one scheme. The German Army and the Soviet Army developed the idea further.

Later, US Army officer Timothy R. O’Neill suggested that patterns consisting of square blocks of colour would provide effective camouflage. By 2000, O’Neill’s idea had been combined with patterns like the German Flecktarn to create pixellated patterns such as CADPAT and MARPAT. Battledress in digital camouflage patterns was first designed by the Canadian Forces. The “digital” refers to the coordinates of the pattern, which are digitally defined. The term is also used for computer-generated patterns like the non-pixellated Multicam and the Italian fractal Vegetato pattern. Pixellation does not in itself contribute to the camouflaging effect; it does, however, simplify design and ease printing on fabric.


Improvements:

The main changes in the new uniform, compared to the old one in use since 2008, concern the camouflage pattern, the design, and the use of a new material. While the new camouflage pattern retains the same combination of colours (olive green and earthen shades), the pattern is now digital. It has been designed keeping in mind the many operational conditions in which soldiers function, from deserts and high-altitude areas to jungles and plains. The new fabric makes the uniform lighter, sturdier, more breathable, and better suited to the different terrains in which soldiers are posted. The cotton-to-polyester ratio is 70:30, making the uniform quicker to dry, lightweight, and more comfortable in hot and humid conditions. According to the Army, it is an ergonomically designed, operationally effective, new-generation camouflage combat uniform. The fabric is 15 per cent lighter and has 23 per cent more tear strength than the current uniform's.

The ergonomic features allow for long-hour use and comfort, and micro features are inbuilt for the use of the wearer in field conditions. The new uniform has a mix of Olive Green and earthy colours for better camouflage and the fabric is durable, sturdier and lighter than the ones used earlier. 

The pattern of the Indian Army’s new uniform is widely used by troops abroad. It is engineered to withstand tough conditions such as harsh temperatures, explosive bursts, and fluctuating air pressures, and is meant to give soldiers more comfort. The new combat uniform is also eco-friendly. It resembles a modified version of the Canadian CADPAT camouflage pattern that was rolled out to the US Army, US airmen in the Middle East, and Air Force Global Strike Command security forces.

James Webb Space Telescope

Reading Time: 12 minutes

INTRODUCTION

The curiosity about what existed 13.5 billion years ago and the search for habitable planets might finally be satisfied. On 25 December 2021, NASA launched its massive 10-billion-dollar endeavor, which will help humans look back at what was there and discover what surprises the universe holds for us. The expensive James Webb Space Telescope, simply called Webb, is named after James E. Webb, who served as the second administrator of NASA during the 60s and oversaw U.S. crewed missions throughout the Mercury and Gemini programs.
 

The JWST, or Webb, is a space telescope developed by NASA in collaboration with the European Space Agency and the Canadian Space Agency. It will complement the Hubble Space Telescope and is optimized for wavelengths in the infrared region. The JWST is 100 times more powerful than Hubble. The diameter of Webb’s optical mirror is 6.5 meters, making its collecting area 6.25 times larger than Hubble’s. Webb consists of 18 hexagonal adjustable mirrors made of gold-plated beryllium, with just 48.2 grams of gold, about the same weight as a golf ball. Since the telescope operates in the infrared region, the temperature around it needs to be very low to prevent the sensors from being overwhelmed by heat from the Sun, the Earth, and the telescope’s own parts. To achieve this, a special material called Kapton, coated with aluminum, is used, such that the side facing the sun and earth sits at around 85 degrees Celsius while the other side stays at 233 degrees Celsius below zero. The problem of keeping the instruments at an optimal temperature is also solved by using liquid helium as a coolant. The telescope has 50 major deployments and 178 release mechanisms that must all work for the satellite to function smoothly. Webb was launched on an Ariane 5 from Kourou in French Guiana; it will take six months to become fully operational and is expected to work for 10 years.

 

The JWST project was in planning for 30 years and faced many delays and cost overruns. The first planning was carried out in 1989, with the main mission being to “think about a major mission beyond Hubble”. There were many cost overruns, project delays, and budget changes throughout the making of the telescope. The original budget for the telescope was US$1.6 billion, which had grown to an estimated US$5 billion by the time construction started in 2008. The JWST project almost got shelved because of the ballooning budget, until in November 2011 Congress reversed the plan to discontinue JWST and capped its funding at US$8 billion.

 

The telescope has been launched to study the earliest planets and galaxies formed after the Big Bang. It will also help us understand how new planets and galaxies form.

ORBIT OF THE TELESCOPE

Being an infrared telescope, the position of the telescope in space is crucial to its desired operation. The telescope has to be far enough from the sun that the sun’s infrared rays don’t interfere with the telescope’s instruments, while not being so far from the earth that it cannot stay in contact with NASA at all times. So NASA decided to put the telescope at Lagrange point 2 of the sun–earth system. This raises the question of what a Lagrange point is and why it matters. Let’s go back and learn how Lagrange and Euler discovered these points in space. The Lagrange points are points of equilibrium for small-mass objects under the influence of two massive orbiting bodies. Mathematically, this involves solving the restricted three-body problem, in which two bodies are very much more massive than the third. These points are named after the Italian-French mathematician and astronomer Joseph-Louis Lagrange, who discovered the Lagrange points L4 and L5 in 1772; the first three points had been discovered earlier by the Swiss mathematician and astronomer Leonhard Euler.

 

Joseph-Louis Lagrange was an Italian mathematician and astronomer. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. In 1766, on the recommendation of the Swiss Leonhard Euler and the French d’Alembert, Lagrange succeeded Euler as the director of mathematics at the Prussian Academy of Sciences in Berlin, where he stayed for over twenty years, producing volumes of work and winning several prizes of the French Academy of Sciences. Lagrange’s treatise on analytical mechanics, written in Berlin and first published in 1788, offered the most comprehensive treatment of classical mechanics since Newton and formed a basis for the development of mathematical physics in the nineteenth century.

Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He extended the method to include possible constraints, arriving at the method of Lagrange multipliers. Lagrange invented the method of solving differential equations known as variation of parameters, applied differential calculus to the theory of probabilities, and worked on solutions for algebraic equations. In calculus, Lagrange developed a novel approach to interpolation and Taylor's theorem. He studied the three-body problem for the Earth, Sun, and Moon (1764) and the movement of Jupiter’s satellites (1766), and in 1772 found the special-case solutions to this problem that yield what are now known as Lagrangian points. Lagrange is best known for transforming Newtonian mechanics into a branch of analysis, Lagrangian mechanics, presenting the mechanical “principles” as simple results of the variational calculus.

 

Normally, the two massive bodies exert an unbalanced gravitational force at a point, altering the orbit of whatever is at that point. At the Lagrange points, the gravitational forces of the two large bodies and the centrifugal force balance each other. This can make Lagrange points an excellent location for satellites, as few orbit corrections are needed to maintain the desired orbit. L1, L2, and L3 lie on the line through the centers of the two large bodies, while L4 and L5 each act as the third vertex of an equilateral triangle formed with the centers of the two large bodies. L4 and L5 are stable, which implies that objects can orbit around them in a rotating coordinate system tied to the two large bodies. The magic of the L2 point is that it lies behind the earth as seen from the sun, so if we want to view the night sky without the earth’s interference, we can do it from this point. Since a spacecraft at L2 orbits at the same rate as the earth, it can stay in continuous communication with the earth through the Deep Space Network, using three large ground antennas located in Australia, Spain, and the USA. It can uplink command sequences and downlink data up to twice per day, and it uses minimal fuel to stay in orbit, increasing the lifespan of the mission.

 

The telescope will be 1.5 million km away from the earth and will circle about the L2 point in a halo orbit, inclined with respect to the ecliptic, with a radius of approximately 800,000 km, taking about half a year to complete. Since L2 is just an equilibrium point with no net gravitational pull of its own, a halo orbit is not an orbit in the usual sense: the spacecraft is actually in orbit around the Sun, and the halo orbit can be thought of as controlled drifting to remain in the vicinity of the L2 point. It will take the telescope roughly 30 days to reach the start of its orbit at L2.
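The quoted 1.5 million km figure can be recovered from the standard first-order approximation for the distance of L2 from the smaller body, r ≈ R·(m/3M)^(1/3), where R is the Earth–Sun distance, m the Earth's mass, and M the Sun's. The sketch below just plugs in rounded textbook values:

```python
# First-order approximation for the Sun-Earth L2 distance.
M_sun   = 1.989e30   # mass of the Sun, kg
m_earth = 5.972e24   # mass of the Earth, kg
R       = 1.496e8    # Earth-Sun distance, km (1 astronomical unit)

r_L2 = R * (m_earth / (3 * M_sun)) ** (1 / 3)
print(f"L2 is roughly {r_L2:,.0f} km from Earth")  # about 1.5 million km
```

The same expression (to first order) gives the L1 distance on the sunward side, which is why L1 and L2 are often both quoted as 1.5 million km.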

 

Unlike the Hubble telescope, which can be serviced in case of damage, the James Webb Space Telescope cannot be repaired or serviced because of its distance of 1.5 million km from earth. That is far beyond the greatest distance astronauts have ever traveled: during the Apollo 13 mission, the crew flew around the far side of the moon, approximately 400,000 km from earth. This is therefore one of the riskiest missions in human history, with 344 single points of failure, any one of which could end the mission and send years of research and the hard work of thousands of scientists down the drain.

PARTS OF TELESCOPE

NIRCam:

INTRODUCTION:

NIRCam (Near-Infrared Camera) is an instrument on the James Webb Space Telescope. Its main tasks are, first, to serve as an imager from 0.6 to 5 microns in wavelength, and second, to act as a wavefront sensor that keeps the 18 mirror segments functioning as one. It is an infrared camera with ten mercury-cadmium-telluride (HgCdTe) detector arrays, each of 2048 × 2048 pixels. NIRCam also has coronagraphs, which are normally used for collecting data on exoplanets near stars. NIRCam should be able to observe objects as faint as magnitude +29 with a 10,000-second exposure (about 2.8 hours). It makes these observations in light from 0.6 microns (600 nm) to 5 microns (5000 nm) in wavelength.
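To get a feel for what magnitude +29 means: the astronomical magnitude scale is logarithmic, with a difference of 5 magnitudes corresponding to a factor of 100 in brightness, so the flux ratio between magnitudes m1 and m2 is 10^(0.4·(m2 − m1)). Comparing against the commonly quoted naked-eye limit of about magnitude +6 (an assumed round value here):

```python
# Flux ratio between two astronomical magnitudes: 10**(0.4 * delta_m).
naked_eye_limit = 6.0    # rough magnitude of the faintest star visible unaided
nircam_limit    = 29.0   # NIRCam limit quoted above for a ~10,000 s exposure

ratio = 10 ** (0.4 * (nircam_limit - naked_eye_limit))
print(f"A magnitude +29 source is ~{ratio:.1e} times fainter "
      "than the faintest naked-eye star")  # on the order of a billion times
```

This is why both a large collecting area and hours-long exposures are needed to reach such faint targets.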

 

COMPONENTS:

The main components of NIRCam are: coronagraph, first fold mirror, collimator, pupil imaging lenses, dichroic beam splitter, longwave focal plane, shortwave filter wheel assembly, shortwave camera lens group, shortwave fold mirror, and shortwave focal plane.

 

DESIGN:

NIRCam was designed by the University of Arizona, Lockheed Martin, and Teledyne Technologies, in cooperation with the U.S. space agency, NASA. NIRCam has been designed to be efficient for surveying through the use of dichroics.

 

WORKING:

The Near Infrared Camera (NIRCam) is Webb’s primary imager that will cover the infrared wavelength range of 0.6 to 5 microns. NIRCam will detect light from the earliest stars and galaxies in the process of formation, the population of stars in nearby galaxies, as well as young stars in the Milky Way and Kuiper Belt objects.  NIRCam is equipped with coronagraphs, instruments that allow astronomers to take pictures of very faint objects around a central bright object, like stellar systems. NIRCam’s coronagraph works by blocking a brighter object’s light, making it possible to view the dimmer object nearby – just like shielding the sun from your eyes with an upraised hand can allow you to focus on the view in front of you. With the coronagraphs, astronomers hope to determine the characteristics of planets orbiting nearby stars.


NIRSpec:

INTRODUCTION:

The NIRSpec (Near-Infrared Spectrograph) is one of the four instruments flown on the James Webb Space Telescope. The main purpose of the NIRSpec is to learn more about the origins of the universe by observing infrared light from the first stars and galaxies. It will also allow us to look further back in time and to study the so-called Dark Ages, about 150 to 800 million years after the Big Bang, during which the universe was opaque.

 

COMPONENTS:

Coupling optics, fore optics TMA, calibration mirrors 1 and 2, calibration assembly, filter wheel assembly, refocus mechanism assembly, micro shutter assembly, integral field unit, fold mirror, collimator TMA, grating wheel assembly, camera TMA, focal plane assembly, SIDECAR ASIC, and optical assembly internal harness.

 

MICROSHUTTER:

Micro shutters are arrays of tiny windows with shutters, each measuring 100 by 200 microns, about the size of a bundle of only a few human hairs. The micro shutter device can select many objects in one viewing for simultaneous high-resolution observation, which means much more scientific investigation can be done in less time, and it is programmable for any field of objects in the sky. The micro shutter array is a key component of the NIRSpec instrument.

 


FINE GUIDANCE SENSOR:

INTRODUCTION:

The Fine Guidance Sensor (FGS) is an instrument aboard the James Webb Space Telescope that provides high-precision pointing information as input to the observatory’s attitude control system (ACS). During on-orbit commissioning of the JWST, the FGS will also provide pointing error signals during activities to achieve alignment and phasing of the segments of the deployable primary mirror.

 

COMPONENTS:

The FGS does not have a very complex structure. Its main component is a large structure housing a collection of mirrors, lenses, servos, prisms, beam-splitters, and photomultiplier tubes.

 

WORKING:

The FGS performs three main functions on the telescope:

1) To obtain images for target acquisition. Full-frame images are used to identify star fields by correlating the observed brightness and position of sources with the properties of cataloged objects selected by the observation planning software.

2) To acquire pre-selected guide stars. During acquisition, a guide star is first centered in an 8 × 8 pixel window.

3) To provide the ACS with centroid measurements of the guide stars at a rate of 16 times per second.

 

DESIGN:

 


MIRI:

The Mid-Infrared Instrument (MIRI) handles the long-wavelength end of the James Webb Space Telescope's range. It uses both a camera and a spectroscope, and detects radiation from 5 to 28 microns. To observe such a large range of wavelengths, it uses detectors made of arsenic-doped silicon; these detectors are termed focal plane modules and have a resolution of about 1024 × 1024 pixels. The MIRI system needs to be cooler than the other instruments to measure such long wavelengths, so it is provided with a cryocooler consisting of two elements, a pulse tube precooler and a Joule–Thomson loop heat exchanger, which cool MIRI down to 7 K while operating. It contains two types of spectroscopes:

 

  • Medium-resolution spectroscope: the main spectroscope, which uses dichroics and gratings.

  • Low-resolution spectroscope: enables slitless and long-slit spectroscopy with the help of double prisms to obtain spectra over the 5 to 12 micrometer range. It uses germanium and zinc sulfide prisms to disperse the light.


SUNSHIELD:

To detect faint heat signals, the JWST must be extremely cold. The sunshield protects the telescope from heat and light from the sun, as well as from the heat of the observatory itself; it maintains a thermally stable environment and helps cool the telescope to about 50 K.

The sunshield is made of a material named Kapton coated with aluminum, and the two hottest layers facing the sun also have a doped-silicon coating to reflect the sun's heat and light. These materials have high heat resistance and are stable over a wide range of temperatures.

The number and shape of the layers play an important role in the shielding process. Five layers are used to protect the telescope, and the vacuum between each sheet acts as thermal insulation. Each layer is incredibly thin, and the layers are curved from the center.
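As a rough illustration of why stacking layers helps, a standard idealized heat-transfer result says that N radiation shields placed between two black surfaces cut the radiative heat transfer by a factor of N + 1. This is only a textbook toy model; the real sunshield performs far better thanks to its low-emissivity coatings and the vacuum gaps between its curved layers.

```python
# Idealized radiation-shield model: N black shields between two black
# surfaces reduce radiative heat transfer by a factor of N + 1.
def shield_reduction(n_layers: int) -> float:
    """Fraction of the unshielded radiative heat flux that gets through."""
    return 1.0 / (n_layers + 1)

for n in (1, 5):
    print(f"{n} layer(s): heat flux reduced to {shield_reduction(n):.0%}")
```

Even in this crude model, five layers already block over 80% of the radiative heat flow, and each added layer gives a further reduction.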



 

Some quick facts regarding the JWST:

  • The Webb’s primary mirror is 6.5 meters wide. No mirror this large has been launched into space before.

  • It will help humans understand the dark ages before the time when the first galaxies formed.

  • As of now, the JWST is fully deployed in space and is cooling down so that its instruments can work at an optimal level. So let’s hold our breath for the wonderful and exciting discoveries that are yet to come.

CEV - Handout