Digital Image Processing

Reading Time: 6 minutes

In the very first Wisdom Week – LUMIÈRES, conducted by CEV, Dr. Jignesh N Sarvaiya of the Electronics Engineering Department gave the students some really interesting insights into Digital Image Processing. Here is a brief summary of the topics he covered.

What is a digital image?

A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels. Pixel values typically represent gray levels, colours, heights, opacities, etc. Digitization implies that a digital image is an approximation of a real scene. Common image formats include black and white, grayscale and RGB images.

What is Digital Image Processing (DIP)?

Digital Image Processing means processing digital images by means of a digital computer. It uses computer algorithms to enhance images or to extract useful information from them.
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes, which are explained below.
Low-level processes: both the input and the output are images. Examples include noise removal and image sharpening.
Mid-level processes: the input is an image and the output is a set of attributes extracted from it. Examples include object recognition and segmentation.
High-level processes: the input is a set of attributes and the output is understanding. Examples include scene understanding and autonomous navigation.
Representing Digital Images
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.
A digital image can be represented as an M × N numerical array. The discrete intensity interval is [0, L-1], where L = 2^k and k is the number of bits per pixel.
The number of bits (b) required to store an M × N digitized image is given by b = M × N × k.
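As a quick illustration of this formula, here is a minimal Python sketch (the image size and bit depth are made-up example values):

```python
def image_storage_bits(M, N, k):
    """Bits required to store an M x N image with k bits per pixel: b = M * N * k."""
    return M * N * k

# Example: a 1024 x 1024 image with k = 8 (L = 2**8 = 256 gray levels)
b = image_storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")  # 8388608 bits = 1048576 bytes
```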

Why do we need DIP?

Image processing is a subclass of signal processing concerned specifically with pictures; it improves image quality for human perception and/or computer interpretation.
It is motivated by major applications such as the improvement of pictorial information for human perception, image processing for autonomous machine applications, and efficient storage and transmission.
DIP employs methods capable of enhancing information for human interpretation and analysis, such as noise filtering, content enhancement, contrast enhancement and deblurring, and it is applied in fields such as remote sensing.

Fields Using DIP

    • Radiation from the electromagnetic spectrum
    • Acoustic
    • Ultrasonic
    • Electronic in the form of electron beams used in electron microscopy
    • Computer synthetic images used for modelling and visualisation


DIP in Medicine

Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues.
For example, we can take an MRI scan of a canine heart and find the boundaries between different types of tissue: we use images whose gray levels represent tissue density and apply a suitable filter to highlight the edges.

OVERALL CONCEPT


Key Stages in DIP

Let us understand these stages one by one.

  1. Image Acquisition: An image is captured by a sensor, such as a monochrome or colour camera, and digitized. If the output of the sensor is not in digital form, it is digitized with an analog-to-digital converter. A camera contains two parts: a lens, which collects the appropriate radiation and forms a real image of the object, and a semiconductor sensor, which converts the irradiance of that image into an electrical signal. A frame grabber provides the circuitry needed to digitize the electrical signal from the imaging sensor into a computer's memory.
  2. Image Enhancement: It is used to bring out obscured details or highlight the features of interest in an image. It is commonly used to improve quality and remove noise from images.
  3. Image Restoration: It is the operation of taking a corrupt/noisy image and estimating the clean, original image. Corruption may come in many forms, such as motion blur, noise and camera misfocus.
  4. Morphological Processing: Morphological operations apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbours.
  5. Segmentation: It is the process of partitioning a digital image into multiple segments, changing the representation of the image into something more meaningful and easier to analyse (a minimal code sketch follows this list).
  6. Object Recognition: Object recognition is a technique for identifying objects in digital images. It is a key output of deep learning and machine learning algorithms.
  7. Description and Representation: After an image is segmented into regions, the resulting aggregate of segmented pixels is represented and described for further computer processing. Representing a region involves two choices: in terms of its external characteristics (the boundary) or its internal characteristics (the pixels comprising the region).
  8. Image Compression: It is applied to digital images to reduce the cost of storing or transmitting them.
  9. Colour Image Processing: A digital colour image includes colour information for each pixel. The characteristics of a colour image are distinguished by its brightness and saturation.
  10. Knowledge Base: Knowledge about the problem domain is coded into the image processing system in the form of a knowledge database.
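To make the segmentation and morphological-processing stages concrete, here is a minimal sketch assuming NumPy and OpenCV (cv2) are available; the filename and threshold value are placeholders:

```python
import cv2
import numpy as np

# Load a grayscale image (the filename is a placeholder)
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "input.png not found"

# Segmentation: global thresholding partitions the image into foreground/background
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Morphological processing: opening with a 5x5 structuring element removes small
# noise specks; each output pixel is decided by comparing the corresponding
# input pixel with its neighbours under the structuring element
kernel = np.ones((5, 5), np.uint8)
cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

cv2.imwrite("segmented.png", cleaned)
```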

Types of Digital Images

  • Intensity image or monochrome image: Each pixel corresponds to light intensity normally represented in gray scale.
  • Color image or RGB image: Each pixel contains a vector representing red, green and blue components.
  • Binary image or black and white image: Each pixel contains one bit, 1 represents white and 0 represents black.
  • Indexed image: Each pixel contains an index number pointing to a colour in a colour table (palette).

Image Resolution

Resolution refers to the number of pixels in an image. The resolution required depends on the amount of detail we are interested in. We will now take a look at the spatial and intensity resolution of a digital image.
Spatial resolution: It is a measure of the smallest discernible detail in an image. Vision specialists state it in dots (pixels) per unit distance; graphic designers state it in dots per inch (dpi).
Intensity Level Resolution: It refers to the number of intensity levels used to represent the image. The more intensity levels used, the finer the level of detail discernible in an image. Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.
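As a small illustration of intensity level resolution (a NumPy sketch; the bit depth is arbitrary), requantizing an 8-bit image to k bits leaves only 2^k gray levels:

```python
import numpy as np

def quantize(img, k):
    """Requantize an 8-bit image down to 2**k intensity levels."""
    levels = 2 ** k
    step = 256 // levels
    return (img // step) * step  # map each pixel to the floor of its bin

img8 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in 8-bit image
print(quantize(img8, 2))  # only 4 distinct gray levels remain
```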

Computer Vision: Some Applications

  • Optical character recognition (OCR)
  • Face Detection
  • Smile Detection
  • Vision based biometrics
  • Login without password using fingerprint scanners and face recognition systems
  • Object recognition in mobiles
  • Sports
  • Smart Cars
  • Panoramic Mosaics
  • Vision in space

Hope you got some insights into digital image processing and computer vision. Thanks for reading!

WASHING MACHINE TEAR-DOWN

Reading Time: 11 minutes

GENERAL WORKING PRINCIPLE OF A WASHING MACHINE:

We put our clothes in, pour in some water and then some detergent. The wash cycle begins: the mixture is strongly agitated, the dirt is dissolved by detergent action, and then the dirty water is drained. Fresh water is poured in and agitation begins again, so the clothes get cleaner as the cycle is repeated a few times.

Afterwards, the clothes are transferred to the spin section, where a fast, unidirectional motor spins them to squeeze the water out. The clothes come out much drier, and finally natural or forced evaporation gives us clean, dry and fragrant clothes.

Welcome: this is a tear-down of a semi-automatic washing machine, an Electrolux 8 kg top-load model.


SEMI-AUTOMATIC MACHINES:

Working:

A semi-automatic machine relies on the user to manually carry out several steps of the process:

  1. Water and detergent are poured in by the user.
  2. The timer and the number of repetitions for the wash cycle are set by the user.
  3. The user has to transfer the clothes from the wash section to the spin section.

Going through this general principle, we see that the machine has two major circuits: electrical circuitry to generate the mechanical force of rotation, and plumbing circuitry to regulate the flow of water.

To have a controlled motor operation we require the following components:

  1. Electrical circuit:
    1. Timer
    2. Wash motor
    3. Spin motor
  2. Plumbing circuit (the elements needed for efficient cleansing action):
    1. Basic frame
    2. Spin gasket
    3. Agitator or impeller
    4. Filter

PLUMBING CIRCUIT

  1. The basic frame:
The wash and spin cabin

This frame provides two separate cabins for the wash and spin sections. The inner side of the wash section has smooth projections for rubbing the clothes, and the spin cabin is simply there to collect the water ejected from the spin gasket.

2. Spin Gasket:


The spin cabin is used for drying the clothes to some extent after washing and rinsing.

The higher-RPM spin motor rotates the laundry inside the spin gasket. The spinning flings the water outward through the small holes in the gasket (the familiar centrifugal effect), and the water collects in the outer spin cabin and is drained out.

3. AGITATOR Vs IMPELLER:

One finds the agitator or impeller arrangement in top-load machines only. The two work in broadly similar ways but affect the fabric differently.

The agitator is a screwed, finned vertical shaft driven by the wash motor to rotate back and forth. Clothes rub against the agitator and get cleaned.
Agitator

The impeller is a low-profile cone or disc, driven by the wash motor, that rubs the clothes against each other to produce the cleansing action. The holes on the disc drain the water after the wash or rinse cycle is over.

Impeller

4. Filter:

ELECTRICAL CIRCUIT

Technically, various motor technologies are used in these machines, such as DC series motors and highly efficient BLDC motors, but the most commonly used motor today is still the single-phase induction machine, yes, the same machine that runs fans, water coolers, exhausts and small water pumps.

For washing machine purposes we need to have various controlled characteristics:

  1. The motor must produce a high starting torque,
  2. it must reverse its direction of rotation after a fixed time period, and
  3. it must be efficient.

Of all these points, direction reversal is fundamentally the most important: if the motor ran in only one direction, the clothes and water would be spun out and very poor cleansing action would result. To understand how the direction is reversed, we first need the working principle of a single-phase motor.

This fan, for example, is a single-phase motor; however we connect it to the plug, it always rotates in the same direction.

So how do we obtain the reversal?

Full notes on the single-phase motor:


https://photos.app.goo.gl/9aje1F1DEzBbuzCm6

These motors have a very simple structure and need little maintenance, but their efficiency is poor, around 50-60%. They will slowly become obsolete over the coming 5-10 years.

TIMER:

The TIMER handles both direction reversal and timekeeping. A normal semi-automatic washing machine uses a mechanical timer device to control the motor operation.

So let’s understand the timer first.

A timer, as shown, has four terminals:

  1. Black: the mains wire.
  2. Brown: the buzzer wire.
  3. Yellow and red: two output wires; each becomes the live terminal for half of the timer cycle, with a little delay in between.
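To make the switching pattern concrete, here is a toy simulation in Python (purely illustrative; the period and dead-time values are made up, not taken from the actual timer):

```python
import itertools
import time

def wash_timer(period=5.0, gap=0.5):
    """Alternately energize the yellow and red outputs, with a short dead time
    between half-cycles, mimicking the mechanical timer's behaviour."""
    for live in itertools.cycle(["yellow", "red"]):
        print(f"{live} output is live")   # this half-cycle drives one winding path
        time.sleep(period)
        print("both outputs off")         # the little delay before reversing
        time.sleep(gap)

# wash_timer()  # runs forever; press Ctrl+C to stop
```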


So here is a demonstration circuit with the timer driving two bulbs alternately.


DIRECTION REVERSAL:

Let us now use the timer to reverse the motion of our washing machine motor, represented here by this simple single-phase table fan: the same kind of machine, just with a lower power rating.

Idea: The direction of rotation of the motor can be reversed if the currents in the two windings (main and auxiliary) are made to lag and lead each other alternately. This can be done by connecting the capacitor in series with the main winding in one cycle and with the auxiliary winding in the next, which amounts to simply exchanging the live connection across the capacitor terminals. And we already know that the timer exchanges the live wire between its two output terminals periodically.

  1. Using the capacitor with winding 1, suppose the motion to be clockwise:


2. Using the capacitor with winding 2, the direction of rotation of the magnetic field is reversed:


3. Now simply superimpose both of them by using the wash timer:


4. Finally, we obtain an electrical circuit for direction reversal:


And here is the actual demonstration model used in the tear-down event:


TOP LOAD AND FRONT LOAD:

In a top-load machine like this one, the clothes and water are fed into a fixed, vertically mounted wash cabin with an agitator or impeller at the centre, driven by a single-phase induction motor.


In the case of a front-load machine, the clothes are fed into a horizontally mounted drum. Here the whole water-tight drum rotates back and forth, and plastic paddles mounted on the drum slosh the clothes. Most semi-automatic washing machines are top-load, while fully-automatic machines come in both top-load and front-load variants.

Which is better in case of a fully-automatic washing machine?

TOP-LOAD | FRONT-LOAD
Cheaper | More clothes can be washed in one cycle, as the drum can be filled to the top
Ergonomic | Less water consumption
 | Less energy consumption
Central agitator turns the clothes roughly | Gentle, gravity-aided wash
Clothes can be put in mid-cycle | Rugged construction
 | Less noisy
 | More features

 

FULLY-AUTOMATIC:

Here come the wonders of the 21st century: ELECTRONICS. In general, when any electrical system is accompanied (or controlled) by electronics, we get a highly efficient, flexible, almost magical and very user-friendly product. A fully automatic machine gives you an elite washing experience at very low cost in terms of water, energy and effort, and delivers ultra-clean fabrics. The whole machine operates by just selecting a few options; you finish washing clothes without even getting your hands wet.

The striking difference between semi- and fully automatic machines is the ability of a fully automatic machine to operate completely on its own once you have selected the fabric type. An embedded micro-controller operates the various components in a controlled and synchronized way according to programs saved in its memory.

Look at the dashboard of a typical fully automatic washing machine (IFB, 7.5 kg):


 

You have a total of 10 different programs available, plus the flexibility to set your own. The machine automatically sets the wash, rinse and spin cycles (including the time and RPM), the amount of water and detergent, and the water temperature, and even offers a delayed-start option.

A TRIAL RUN:

Suppose you selected Smart Sense and pressed the start button; the machine then goes through the following routine:

  1. To determine the time and RPM, it is necessary to know the amount of fabric in the machine, so the sensors first sense the weight and the timers are set accordingly.
  2. The micro-controller then turns on the heating element and the inlet water valve, and doses the detergent as required.
  3. The wash cycle begins: the flappers continuously rub the fabrics to remove stains and dust.
  4. The dirty water drains out.
  5. The rinse cycle begins and is repeated several times based on the load.
  6. The tub is then rotated at exceedingly high speed to force the water out through the holes in the inner drum.
  7. The machine sounds a buzzer indicating the wash is over.

The circuit design of fully automatic machines is beyond the scope of this discussion, and, frankly, beyond the writer's understanding as well.

THE GREAT INDIAN MARKET:

Our country has shown pretty healthy economic growth over the past 10-15 years.

  • We were the fifth-largest economy and also the fastest-growing major economy in 2018 (however, the current slowdown has dragged us down to seventh position in both categories).
  • Consumer durables had built a massive market of around US$31.48 billion by 2017.
  • We are the world's second-largest smartphone market.
  • We are the world's third-largest television market.

Now, all the range of electrical and electronics products that you see in a typical household are broadly classified into two categories:

  1. WHITE GOODS: in other words, consumer appliances. This category includes heavy-duty major equipment, for example refrigerators, washing machines, induction cookers, microwave ovens, electric fans, etc.
  2. BROWN GOODS: consumer electronics. Relatively small electronic goods like smartphones, TVs, radios, digital cameras, audio systems, etc.

Goods like TVs and refrigerators have reached an inflexion point (saturation), with 75% and 30% of Indian homes owning them respectively, while products like ACs and washing machines still hang at relatively low penetration, around 4% and 11% respectively. Countries like China, by contrast, have AC and washing machine penetration of up to 60% and 40%; clearly India still has huge untapped potential for these goods.

The year 2018 saw sales of 6.5 million washing machine units in India. To analyse the Indian washing machine market, let us look at a few parameters: demand drivers, major manufacturers, technology types and new developments.

Demand drivers

  • Urban India:

Urban India accounts for the major share of demand in the consumer durables market.

Washing machine users in particular are concentrated in a few regions; a third of all users live in just six major cities.

The reason for this demand pattern is the high living standard and rising disposable income of the middle and upper classes in first- and second-tier cities. Reduced costs and better technology have also shortened replacement cycles.

  • Rural India:

Rural India does not show such great numbers, but with increasing modernisation, consumer durables have begun to be accepted as a necessity rather than a luxury. The following opportunities exist here:

    • Electrification: Government initiatives to strengthen the rural electricity system will surely play a decisive role in the future.
    • Changing lifestyle: Rural India has also witnessed changing lifestyle patterns. There has been a significant increase in nuclear families, greater awareness and easier access to services; together these make a perfect cocktail for consumer durables to gain ground.
    • Working women: Women in rural India have also begun stepping out to work, which will generate an immediate need for laundry systems.
    • Growing disposable income: The current slowdown has hit rural India the hardest: rural wages have stagnated for a while, FMCG sales have performed poorly and overall demand has shown a negative slope. Still, there is hope of recovery, which would ultimately bring the Indian economy back on track with more thrust.


Major manufacturers:

  • Multinational: Foreign players like South Korea's LG and Samsung, Sweden's Electrolux, China's Haier, Japan's Hitachi and Panasonic, and America's Whirlpool have already filled the Indian urban market with their cutting-edge, smart-technology laundry systems, which urban Indians find especially appealing.
  • Indian players: They are equally capable of producing advanced washing machines but lag in brand popularity. Giants like Videocon and IFB have also shifted their focus to the rural market.

Technology type:

  • Fully automatic: For a minimal increase in cost over their semi-automatic counterparts, fully automatic washing machines offer a far better and more elegant user experience. Fully automatic machines therefore contribute significantly to overall washing machine sales.
  • Semi-automatic: With the advantages of low cost and simplicity, they find their place in rural regions, where demand is not as explosive as in urban regions.

New developments and updates:

  1. In October 2017, Flipkart launched its consumer appliances label, MARQ, to sell products like ACs and washing machines.


       2. In 2018, Godrej set out a Rs 400 crore expansion plan for its washing machine business, increasing production capacity at its Mohali manufacturing unit from 4 lakh to 6 lakh units.

NEW FEATURES:

Here comes the most interesting portion of the blog. In this globalised world, companies face fierce competition to win the consumer's heart, and this is the engine fuel of modern innovation: always keeping one's product a step ahead of the others. Fortunately, we consumers get to see modern marvels in the process. These days (2019), fully automatic machines have become quite smart: along with their conventional benefits of water economy, energy efficiency, child-lock mechanisms and user-friendly service, they now provide exciting features, though at the cost of a significant amount of money.

To name a few:

  1. The LG Twin-Wash, introduced in 2015, has both a front-load and a top-load drum, for washing two different types of fabric at the same time. (60,0000 INR)
  2. The Samsung AddWash lets you add fabric during a wash cycle and offers WiFi connectivity with your smartphone, so you can operate the machine remotely and check its status. (46,000 INR)
  3. The LG Signature offers extremely low noise and vibration, an automatic detergent dosing system and a touch-screen control panel.
  4. Finally, the Panasonic Sustainable Maintainer washes, dries and folds clothes; however, the product has only been shown at a trade show in Berlin and is yet to hit the market.

You can also explore some of the newest washing machines entering the market using the links below:

Websites:

  1. https://www.samsung.com/in/washing-machines/front-loading-ww80k54e0wwtl/WW80K54E0WWTL/
  2. https://www.lg.com/in/washing-machines/lg-FH6G1BAPK22_FH8G5XDNK3
  3. https://www.samsung.com/us/explore/flex-wash/ 
  4. https://www.whirlpoolindia.com/washing-machines/fully-automatic-top-load#gref

Product film:

  1. https://youtu.be/PuTaiW7tj8c
  2. https://youtu.be/IN9pU2eVJdY
  3. Panasonic presents a washing machine that folds your clothes and a fridge that comes when called – YouTube

References:

  1. https://www.ibef.org/download/Consumer-Durables-Report-Jan-2018.pdf
  2. https://www.livemint.com/Industry/V7wX0BAvKiko83S4atfV0I/ConsumerdurablesmarketgrowingrapidlyData.html
  3. https://www.researchandmarkets.com/reports/4114874/india-washing-machine-market-outlook-2022
  4. https://en.wikipedia.org/wiki/Washing_machine
  5. https://economictimes.indiatimes.com/industry/cons-products/durables/godrej-increases-manufacturing-capacity-at-mohali-plant/articleshow/64678101.cms?from=mdr

PPT used in the presentation:

https://docs.google.com/presentation/d/15sUvByQxmIpn4fWaHUnSRkltF5YflbG23wiX6s3ON-8/edit?usp=sharing

Keep reading, keep learning

TEAM CEV!!

AUGMENTED REALITY: More than what we see!!

Reading Time: 10 minutes

The picture depicts what Augmented Reality can be like!!!

What would you do in a foreign place whose native language you don't know? How would you read the signs? Would you feel worried?

Well, you don't have to worry. With Google Translate's AR function, you can easily scan text using your phone's camera and have it translated into any language. Cool, right? But hold on, what is AR?

How does it work? 

What are its applications? 

Just relax and read on to find out everything you need to know about this cool technology.

So let’s get started with the definitions…


 

WHAT IS AUGMENTED REALITY (AR)?

According to a dictionary, to augment something means to make it more effective by adding something to it.

Moving onto a technical definition, augmented reality is the technology that enhances our physical world by superimposing computer-generated perceptible information on the environment of a user in real-time. 

This integrated information may be perceived by one or more senses and enhances one’s current perception of reality with dazzling visuals, interactive graphics, amazing sounds and much more. (Exciting!)


 

You must have played the popular AR game Pokemon GO, which revolutionized the gaming industry and remains a huge success, still making around 2 million dollars per day. Pokemon GO uses a smartphone's GPS to determine the user's location. The phone's camera scans the surroundings, and the game digitally superimposes its fictional characters onto the real environment.

Some other popular examples of AR apps include Quiver, Google Translate, Google Sky Map, Layar, Field Trip and Ingress, and who doesn't know about the cool Snapchat filters!

I KNOW ABOUT VIRTUAL REALITY…HOW IS IT DIFFERENT?

Augmented reality is often confused with virtual reality. Although both these technologies offer enhanced or enriched experiences and change the way we perceive our environment, they are different from each other.

The most important distinction between augmented reality and virtual reality is that Virtual reality creates the simulation of a new reality which is completely different from the physical world whereas augmented reality adds virtual elements like sounds, computer graphics to the physical world in real-time.


A virtual reality headset uses one or two screens that are held close to one’s face and viewed through lenses. It then uses various sensors in order to track the user’s head and potentially their body as they move through space. Using this information, it renders the appropriate images to create an illusion that the user is navigating a completely different environment.

Augmented reality on the other hand, usually uses either glasses or a pass-through camera so that the user can see the physical environment around them in real-time. Digital information is then projected onto the glass or shown on the screen on top of the camera feed. 

WHERE DID IT ALL START?

In 1968, Ivan Sutherland, a Harvard professor, created "The Sword of Damocles" with his student Bob Sproull. The Sword of Damocles was a head-mounted display that hung from the ceiling; through it the user would experience computer graphics that made them feel as if they were in an alternate reality.

In 1990, the term “Augmented Reality” was coined for the first time by a Boeing researcher named Tom Caudell.

In 1992, Louis Rosenberg of the USAF Armstrong Research Lab created the first fully operational augmented reality system, named Virtual Fixtures: a robotic system that overlays information on the worker's environment to increase efficiency, much as AR systems do today.

The technology has progressed significantly since then. (Now keeping aside the further details in history so that you don’t get bored!)

For details of history and development of augmented reality, check out the link given below.

https://www.youtube.com/watch?v=2PaJ_safMIo 

TYPES OF AR

1. Marker-based AR (or Image Recognition)

It produces a 3D image of the object detected by the camera when the camera is scanned over a visual marker such as a QR code. This enables the user to view the object from various angles. (A minimal detection sketch for this type of AR follows the list of types below.)

2. Markerless AR

This technology uses the location-tracking features of smartphones. It works by reading data from the phone's GPS, digital compass and accelerometer to provide content based on the user's location, and is quite useful for travellers.

3. Projection-based AR

If you are thinking that this technology has something to do with projection, then kudos you are absolutely correct! This technology projects artificial light onto surfaces. Users can then interact with projected light. The application recognizes and senses the human touch by the altered projection (the shadow).

4. Superimposition based AR

As the name suggests, this AR provides a full or partial replacement of the object in focus by replacing it with an augmented view of the same object. Object recognition plays a vital role in this type of AR.
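Returning to marker-based AR (type 1 above), here is a minimal detection sketch assuming OpenCV's aruco module is installed (opencv-contrib-python); note that the aruco API was reorganized in OpenCV 4.7, and this follows the older cv2.aruco.detectMarkers interface:

```python
import cv2

# Dictionary of predefined 4x4 fiducial markers (similar in spirit to QR codes)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)                  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find marker corners and IDs; an AR renderer would anchor 3D content here
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker-based AR", frame)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```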

 

HOW DOES AR WORK? 

Now that you know something about AR, your technical minds must be wondering how the technology works. Here is a brief technical explanation of the supercool technology.

AR is achieved by overlaying the synthetic light over natural light, which is done by projecting the image over a pair of see-through glasses, which allow the images and interactive virtual objects to form a layer on top of the user’s view of reality. Computer vision enhances the reality for users in real-time.

Augmented reality can be displayed on several devices, including screens, monitors, handheld devices, smartphones and glasses. It involves technologies like SLAM (simultaneous localization and mapping), which enables it to recognize 3D objects and track physical location in order to overlay augmented content, and depth tracking (briefly, sensor data used to calculate the real-time distance to the target object). AR has the following components:

1. Cameras and sensors

They are usually on the outside of the augmented reality device. A sensor collects information about the user's real-world interactions, while a camera visually scans the surroundings to gather data and passes it on for processing. The device uses this information to determine where surrounding physical objects are located and then formulates the desired 3D model. For example, the Microsoft Hololens uses specific cameras for specific duties, such as depth sensing, while the megapixel cameras in common smartphones can also capture the information required for processing.

2. Processing: 

Augmented reality devices basically act like mini-supercomputers, requiring significant processing power and utilizing many of the same components as our smartphones: a CPU, a GPU, flash memory, RAM, Bluetooth/WiFi and a global positioning system (GPS) microchip, among others. Advanced augmented reality devices such as the Microsoft Hololens utilize an accelerometer to measure acceleration, a gyroscope to measure tilt and orientation, and a magnetometer to function as a compass, providing a truly immersive experience.

3. Projection:

This refers to a miniature projector found on wearable augmented reality headsets. The projector can turn any real surface into an interactive environment. As mentioned earlier, the data taken in by the camera is used to examine the surrounding world and is processed further; the digital information is then projected onto a surface in front of the user, which may be a wrist, a wall, or even another person. The use of projection in AR is still at the development stage. With further advances, playing a board game on a table might become possible without using a smartphone.

4. Reflection: 

Augmented reality devices have mirrors to help your eyes view the virtual image. Some AR devices have an array of many small curved mirrors; others have a simple double-sided mirror that reflects light to the camera and to the user's eye. In the case of the Microsoft Hololens, the "mirrors" are holographic lenses that use an optical projection system to beam holograms into your eyes. A so-called light engine emits light towards two separate lenses, each consisting of three layers of glass for the three primary colours. The light hits these layers and enters the eye at specific angles, intensities and colours, producing the final image on the retina.

 

AR: CURRENT APPLICATIONS

AR is still in the developing stage yet it has found applications in several fields from simple gaming to really important fields like medicine and military. Here are some of the current applications of AR (the list is not exhaustive).

GAMING:


The gaming industry is evolving at an unprecedented rate. Developers all over the world are thinking of new ideas, strategies and methods to design and develop games to attract gamers all across the globe. There are a wide variety of AR games available in the market ranging from simple AR indoor board games to advanced games which could include the players jumping from tables to sofas to roads. AR games such as Pokemon Go have set a benchmark in the gaming industry. Such games expand the field of gaming as they attract gamers who easily develop an interest in games that involve interaction with their real-time environment.

ADVERTISING:


AR has seen huge growth in the advertising sector over the past few years and is becoming popular among advertisers trying to win more customers by making engaging ads with AR. Buyers tend to retain information conveyed through virtual ads, and AR ads provide an enjoyable 3D experience that gives users a better feel for the product. For example, the IKEA Place app lets customers see exactly how furniture items would look and fit in their homes. AR ads establish a connection between the consumer and the brand through real-time interaction, making consumers more likely to buy. Many researchers consider AR similar to other digital technologies; however, its interactive features set it apart.

EDUCATION:


Classroom teaching is rapidly changing. With the introduction of AR into traditional classrooms, boring lectures can become extremely interesting! Students can understand complex concepts more easily and remember information better, as it is easier to retain information from audio-visual stimulation than from traditional textbooks. Today's teens increasingly own smartphones and other gadgets that they use for games and social media, so why not use AR in education as well? AR provides an interactive and engaging platform that makes learning enjoyable. As AR develops, not just classroom teaching but also distance learning can become more effective, giving students deeper insight into the subjects they study. Google Translate now offers an augmented reality function with which students can point the camera at text and have it translated in real time.

 

MEDICINE AND HEALTHCARE:


Augmented reality can help doctors diagnose symptoms accurately and treat diseases effectively. It is helpful to surgeons performing invasive surgeries involving complex procedures: surgeons can detect and understand problems in the bones, muscles and internal organs of patients and decide which medication or procedure would best suit them. For example, AccuVein is a very useful augmented reality application used to locate veins. In emergency operations, surgeons can save time with smart glasses that give instant access to the patient's medical information, so they need not shift their attention elsewhere in the operating theatre. Medical students can gain practical knowledge of all parts of the human body without having to dissect one.

 

WHAT’S IN THERE FOR THE FUTURE?

AR has captured our imagination like no other technology. From something seen only in science fiction films to an integral part of our lives, it has come a long way and has found success in many fields.

Ever since the introduction of AR-enabled smartphones, the number of smartphone users has increased. The fastest-growing technologies, AI and ML, can be combined with AR to further enhance the mobile user experience.

Augmented reality saw record growth in 2018, and it is positioned for strong commercial support, with big tech names like Microsoft, Amazon, Apple, Facebook and Google making heavy investments. It is expected that by 2023 the installed user base for AR-supporting products like mobile devices and smart glasses will surpass 2.5 billion people, with industry revenue hitting $75 billion. Industry players in the augmented reality world expected 2019 to be a year marked by a rapid increase in the pace of industrial growth.

The future of AR is bright and it is expected that its growth will increase further with more investments from big tech companies that are realizing the potential of AR.

That’s all for this blog! 

Thanks for reading and I hope this blog gave you some new information and insights about augmented reality. Please give your valuable feedback.

-By Moksha Sood (2nd year, CHEM DEPT)

KEEP READING, KEEP LEARNING

TEAM CEV!!!!!

Why do Rockets love to fail?

Reading Time: 8 minutes

Author
Deepak Kumar
Propulsion Engineer, Dept. of Propulsion, STAR

“Rockets, they really don’t wanna work, they like to blow up a lot”

 

         – Elon Musk

If you take a look at the List of spaceflight-related accidents and incidents on Wikipedia, you'll realize there have been countless failures. That's the answer to "how many".

 

Rockets can fail anytime. Moreover, a rocket isn't a simple machine at all: a massive structure with around 2.5 million parts is likely to fail anytime if any one of them says, "I can't do this anymore, I'm done".

 

Coming to some well-known rocket failures, these will help you learn how rockets fail!

 

1. The Space Shuttle Challenger Disaster


The Space Shuttle Challenger was carrying a crew of seven when it disintegrated over the Atlantic Ocean. The disintegration was caused by the failure of one of the Solid Rocket Boosters (SRBs) during lift-off.

 

The SRB failure was caused by an O-ring. An O-ring is a mechanical gasket used to create a seal at an interface; here, the interface was between two fuel segments. The O-ring was designed to prevent the escape of gases produced by the burning solid fuel. But in the extremely cold weather on the morning of the launch, the O-ring became stiff and failed to seal the interface.


This malfunction caused a breach at the interface. The escaping gases impinged upon the adjacent SRB aft field joint attachment hardware (the hardware joining the SRB to the main structure) and the fuel tank. This led to the separation of the right-hand SRB's aft field joint attachment and the structural failure of the external tank.


In the video below, the speaker mentions the weather being chilly that morning, with icicles forming on the launch pad. One of the SRBs is clearly visible making its own way after the failure.



2. The Space Shuttle Columbia Disaster

Unlike the failure above, this one occurred during re-entry. But again, the story traces back to the launch: during lift-off, a piece of foam broke off the external fuel tank and struck the left wing of the orbiter.


This is an image of the orbiter's left wing after being struck by the foam. The foam actually broke off the bipod ramp that connects the orbiter to the fuel tank.


The foam hit the wing at nearly 877 km/h, damaging the thermal protection on the orbiter's left wing. The piece of foam that broke off the external fuel tank was nearly the size of a suitcase and likely created a hole 15-25 cm in diameter.


The black portion you see below the nose is the orbiter's carbon heat shield.

On Feb 1, 2003, during re-entry at an altitude of nearly 70 km, the temperature of the wing edge reached 1650 °C and hot gases penetrated the wing of the orbiter. The immense heat caused extensive damage. At an altitude of nearly 60 km, the sensors started to fail, radio contact was lost, Columbia went out of control and the left wing of the orbiter broke. The crew cabin broke up and the vehicle disintegrated.

 

 

You can clearly see the vehicle disintegrating. The video is a long one, hang tight. 😉

 

3. The N1 Rocket Failure

Not many people know about this programme. It was run by the Soviets, with its first launch in 1969. The N1 remains one of the largest rockets ever built, and it had its last launch in 1972. During this tenure there were four launches, and all of them failed. Yes, you heard it right: ALL OF THEM FAILED.


Before discussing the failures, there is one thing I never forget to mention about this rocket. Rockets rely on TVC (Thrust Vector Control) to change the direction of the thrust: the nozzle direction is changed to alter the direction of thrust.


This is TVC. But in the case of the N1 rocket, there was something called static thrust vectoring. There were 30 engines in stage 1, 8 engines in stage 2, 4 engines in stage 3 and 1 in stage 4.


Of the 30 first-stage engines, 24 were on the outer perimeter and the remaining 6 around the centre.

To change the direction of the rocket, the thrust of individual engines was varied accordingly (differential throttling); the engines did not move as in TVC at all.

Now coming to the failed launches:

Launch 1:

The engines were monitored by KORD (Control of Rocket Engines). During the initial phase of flight, a transient voltage caused KORD to shut down engine #12; simultaneously, engine #24 was shut down to maintain the stability of the rocket. At T+6 seconds, pogo oscillation (a longitudinal vibration in which thrust oscillations couple with the propellant feed system, damaging the engine) in the #2 engine tore several components off their mounts and started a propellant leak. At T+25 seconds, further vibrations ruptured a fuel line and caused RP-1 to spill into the aft section of the booster. When it came into contact with the leaking gas, a fire started. The fire then burned through wiring in the power supply, causing electrical arcing which was picked up by sensors and interpreted by KORD as a pressurization problem in the turbopumps.

Launch 2:

Launch took place at 11:18 PM Moscow time. For a few moments, the rocket lifted into the night sky. As soon as it cleared the tower, there was a flash of light, and debris could be seen falling from the bottom of the first stage. All the engines instantly shut down except engine #18. This caused the N-1 to lean over at a 45-degree angle and drop back onto launch pad 110 East. Nearly 2300 tons of propellant on board triggered a massive blast and shock wave that shattered windows across the launch complex and sent debris flying as far as 6 miles (10 kilometers) from the center of the explosion. Just before liftoff, the LOX turbopump in the #8 engine exploded (the pump was recovered from the debris and found to have signs of fire and melting), the shock wave severing surrounding propellant lines and starting a fire from leaking fuel. The fire damaged various components in the thrust section leading to the engines gradually being shut down between T+10 and T+12 seconds. The KORD had shut off engines #7, #19, #20, and #21 after detecting abnormal pressure and pump speeds. Telemetry did not provide any explanation as to what shut off the other engines. This was one of the largest artificial non-nuclear explosions in human history.

Launch 3:

Soon after lift-off, due to unexpected eddy and counter-currents at the base of Block A (the first stage), the N-1 experienced an uncontrolled roll beyond the capability of the control system to compensate. The KORD computer sensed an abnormal situation and sent a shutdown command to the first stage, but as noted above, the guidance program had since been modified to prevent this from happening until 50 seconds into launch. The roll, which had initially been 6° per second, began rapidly accelerating. At T+39 seconds, the booster was rolling at nearly 40° per second, causing the inertial guidance system to go into gimbal lock and at T+48 seconds, the vehicle disintegrated from structural loads. The interstage truss between the second and third stages twisted apart and the latter separated from the stack and at T+50 seconds, the cutoff command to the first stage was unblocked and the engines immediately shut down. The upper stages impacted about 4 miles (7 kilometers) from the launch complex. Despite the engine shutoff, the first and second stages still had enough momentum to travel for some distance before falling to earth about 9 miles (15 kilometers) from the launch complex and blasting a 15-meter-deep (50-foot) crater in the steppe.

 

Launch 4:

The start and lift-off went well. At T+90 seconds, a programmed shutdown of the core propulsion system (the six center engines) was performed to reduce the structural stress on the booster. Because of excessive dynamic loads caused by a hydraulic shock wave when the six engines were shut down abruptly, lines for feeding fuel and oxidizer to the core propulsion system burst and a fire started in the boat-tail of the booster; in addition, the #4 engine exploded. The first stage broke up starting at T+107 seconds and all telemetry data ceased at T+110 seconds.

Besides mechanical failures, rockets can also fail due to a minute discrepancy in software, as in the case of Ariane 5.

Ariane 5: 37 seconds after launch, the rocket flipped 90 degrees in the wrong direction, and less than two seconds later, aerodynamic forces ripped the boosters apart from the main stage at a height of 4 km. This triggered the self-destruct mechanism, and the spacecraft was consumed in a gigantic fireball of liquid hydrogen.

The fault was quickly identified as a software bug in the rocket's Inertial Reference System. The rocket used this system to determine whether it was pointing up or down, a quantity formally known as the horizontal bias, or informally as the BH value. This value was represented by a 64-bit floating-point variable, which was perfectly adequate.

However, problems began when the software attempted to stuff this 64-bit variable, which can represent billions of potential values, into a 16-bit integer, which can only represent 65,536 potential values. For the first few seconds of flight, the rocket's acceleration was low, so the conversion between these two types succeeded. However, as the rocket's velocity increased, the 64-bit value grew too large to fit in a 16-bit integer. It was at this point that the processor encountered an operand error and populated the BH variable with a diagnostic value.
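To see why the conversion fails, here is a small illustrative Python sketch (the actual flight software was written in Ada, where the out-of-range conversion raised the unhandled operand error; the value 40,000 below is just an example that exceeds the signed 16-bit range):

```python
bh_value = 40000.0                          # fits easily in a 64-bit float

INT16_MIN, INT16_MAX = -2**15, 2**15 - 1    # a signed 16-bit int holds -32768..32767

def to_int16(x):
    """Convert to a 16-bit integer, failing loudly when x is out of range,
    analogous to the operand error raised in flight."""
    if not (INT16_MIN <= x <= INT16_MAX):
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return int(x)

print(to_int16(1200.0))  # early in flight: small values convert fine
print(to_int16(bh_value))  # later: raises OverflowError, as the real conversion did
```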

That's your answer to "why". Rockets can fail anytime due to even a small malfunction in one of those 2.5 million parts, or even a small programming error.

Hope you enjoyed the writings up there!

Thank You!

Source: Google and Wikipedia

 

 

Looking forward to excelling in rocket building?

Check out this link Space Technology and Aeronautical Rocketry- STAR

SPACE SHUTTLES: The Ultimate Vehicles

Reading Time: 11 minutes

WARMING-UP……..

You are probably about to witness the greatest technological feat of human civilization, one that marks not just technological advancement but bespeaks one of the greatest accomplishments of humankind as a whole.

Moreover, on a special note, I would like you to consider that it is a core human tendency to break our own records, each of which seems, at the time, to be the final update. I would not make claims about the future, but compared with the past, this masterstroke certainly ranks first.

This whole story showcases some of the most distinguishing characteristics of human society: a massive vehicle taking off from the womb of mother earth on enormously heavy rocket boosters and touching down back on earth with its elegant astronauts inside. It signifies the greatest courage, dedication and commitment, and above all international cooperation and brotherhood.

I assure you that none of my blogs can swing your mind the way this one can. If you want to experience the complete thrill, read this blog after you go through the ones on the Higgs boson and nuclear fusion on earth.

So that was the warm-up part of the blog, let us uncover the bottom line of it.

FROM STAR-GAZING TO MARS MISSION

Humans once slept under open skies, and the chance to gaze at the vast, boundless space, the world of stars and planets, ignited in them the urge to understand those worlds and visit them someday.

From those days of dreaming, history records the development of theories of the motion of heavenly bodies by Galileo Galilei and Isaac Newton, the launch of the first liquid-fuelled rocket by the father of rocketry, Robert Goddard, then the first human in space from Russia and the Moon landing by America. Skimming through those pages, we see a story of great ups and downs, and we get to know how all those audacious and beautiful things were accomplished.

These achievements are not just scientific fantasy; they aim to provide exceptional services in communication, aviation and information technology as immediate outcomes. They also mark humankind's very first step towards becoming an interplanetary species, so as to escape the danger of extinction should the earth one day turn hostile.

We have talked a lot about the topic in general; let us now turn to the technical aspects: the design, launch, manoeuvring, re-entry and landing of space shuttles. NASA has done a whole lot of work on the space shuttle, so we will talk specifically about the American space shuttles and the major timeline events.

DESIGNING

This topic covers the aspects of the basic aerodynamics, fuel system, and the thermal protection system.

Requirements:

  1. Light weight: There are a whole lot of sensors, equipment and satellites on board, which make the whole system very heavy. The gross weight of a typical space shuttle system at launch reaches 4.4 million pounds.
  2. Structural integrity: The shuttle burns about 1.99 million kg of propellant in 8.5 minutes, pushing it from rest to about 7,850 m/s in orbit, an average acceleration of roughly 15 m/s², with peaks limited to about 3g (29.4 m/s²); see the quick check after this list.
  3. Reusability: Economics is the other side of the coin that determines the fate of any technology. Making the space shuttle partially reusable was also a major challenge.
  4. Thermal protection: The skin temperature of the space shuttle varies from -156 °C in space to 1650 °C on re-entry. Advanced thermal systems are required to keep the highly explosive cryogenic propellant below its required temperature, and also to prevent the crew inside from being singed by the extreme heat or getting frostbite in space.
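A quick back-of-the-envelope check of the figures above (plain Python; the 3g cap is the well-known Shuttle structural limit):

```python
delta_v = 7850.0        # m/s, approximate orbital velocity reached
burn_time = 8.5 * 60.0  # s, main-engine burn duration
avg_accel = delta_v / burn_time
print(round(avg_accel, 1), "m/s^2")  # ~15.4 m/s^2 on average; peaks capped near 3g (29.4 m/s^2)
```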

Basic components:


So this is a typical space shuttle, comprising three basic units:

i) the orbiter, ii) the orange-coloured external fuel tank (ET), and iii) two solid rocket boosters (SRBs).

Let us explore each element one by one and see what engineering challenges it presented and how they were solved.

THE ORBITER:


This is the most significant part of the whole system: the reusable element of the space shuttle, designed for up to 100 missions with minimal maintenance.

It is exposed to extreme temperature variations, from -150 °C in space when shadowed by the earth to 1600 °C on re-entry. Moreover, it must produce massive accelerating forces with its own engines, and thus requires high structural integrity to withstand those crushing forces.

The most challenging part is an advanced reliable thermal protection system.

So, what’s the hack?

As is so often the case, the most complex problem is best addressed by the simplest solution, and that is the case here.

So what is the best way to tackle heat?

The answer is INSULATION. Simply don’t allow the heat to enter the orbiter.

Engineers turned to simple silica sand for an insulation material that could operate at 1600 degrees: an ultralight, highly porous block manufactured from silica, consisting of about 90% air and 10% special-grade sand. These segments are called tiles. There are over 27,000 of them on the shuttle, in intricate shapes and designs, each just as important as the next. The tiles are not mechanically bolted to the body of the shuttle but glued to the aluminium skin with an ordinary silicone adhesive. Segmentation into tiles allows reusability: small damaged segments are simply replaced after each mission.


They are extremely good at dissipating heat. A tile taken from a 2,300 °F oven can be immersed in cold water without damage. The surface dissipates heat so quickly that an uncoated tile can be held by its edges with an ungloved hand seconds after removal from the oven, while its interior still glows red.


Also, the temperature is not distributed uniformly over the orbiter during re-entry: the underside of the craft sees much higher temperatures than the top. Hence, tiles of different compositions are used on different parts of the orbiter. The leading edges of the wings experience the highest temperatures of all, touching nearly 3000 °F (around 1650 °C), so they are specially made of a composite material called reinforced carbon-carbon.


The probability of tile failure is required to be no greater than 1 in 10^8. Accomplishing this magnitude of system reliability while still minimizing weight did not come free. It was only after the Columbia shuttle disaster of 1 Feb 2003 that investigations revealed the vulnerability of the ultralight tiles to being punctured by orbital debris. India lost its daughter, Kalpana Chawla, in this disaster.

It was in the aftermath of that unfortunate disaster that NASA pushed harder to develop a highly secure and reliable heat shield, including the reinforced carbon-carbon composite for the wing edges, whose damage had been the reason the shuttle melted on re-entry.

The STS-107 crew includes, from the left, Mission Specialist David Brown, Commander Rick Husband, Mission Specialists Laurel Clark, Kalpana Chawla and Michael Anderson, Pilot William McCool and Payload Specialist Ilan Ramon. (NASA photo) 


This shot focuses on the bottom of an orbiter named DISCOVERY.

These short videos neatly summarize the concept of the thermal protection system (TPS) using the tiles.

 

As for electrical power for the instrumentation and other important operations, it is supplied by three hydrogen-oxygen fuel cells fed from cryogenic storage tanks installed on the orbiter. They can generate 21 kW at 28 V DC, which is then converted to 115 V, 400 Hz, three-phase AC power for the orbiter and payloads.

Amazingly, the byproduct of the fuel cells is water, which is made available to the crew on board.

Now comes the backbone of the space shuttle at launch, the most massive part of this giant: the external fuel tank (ET).

THE EXTERNAL FUEL TANK:


This ET, about 50 m high and 8 m in diameter, provides fuel to the three main engines and structural integrity at launch; hence it is known as the backbone of the space shuttle.

  • The ET carries the cryogenic propellants, liquid oxygen and liquid hydrogen, for combustion in the orbiter's three main engines, in two separate compartments divided by an unpressurised intertank that holds all the electrical components needed for proper operation.
  • An empty ET weighs around 35,500 kg, and it holds about 1.6 million pounds of propellant, a volume of about 2 million litres (enough to fuel 1,000 average cars for a whole year).
  • The ET supplies fuel to the main engines on the orbiter through two feed lines 43 cm in diameter. The pressurised LO2 can flow at up to 66,600 litres per minute and the LH2 at up to 179,000 litres per minute.
  • The ET is jettisoned after a burn time of around 510 seconds, after which it falls back to earth along a predetermined trajectory and comes down in a remote stretch of ocean.

Physical structure:

  • The front chamber carries liquid oxygen at 250 kPa and −182.8 °C in a tank volume of 559.1 m³.
  • The intertank houses all the operational instruments and also receives and distributes the thrust from the SRBs.
  • The aft chamber carries liquid hydrogen at 300 kPa and −252.8 °C in a tank volume of 1,514.6 m³.
  • Although the hydrogen tank is 2.5 times larger than the oxygen tank, it weighs only one-third as much when filled to capacity; the reason for the difference in weight is that liquid oxygen is 16 times denser than liquid hydrogen.
  • Each fuel chamber also includes an internal slosh baffle and a vortex baffle to damp fluid slosh caused by vibrations.
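As the quick consistency check promised above: multiplying the quoted maximum flow rates by the roughly 8.5-minute burn time lands within about 1% of the quoted tank volumes. A minimal sketch, using only numbers from this post:

```python
# Consistency check of the ET figures quoted above (all values from this post).
BURN_TIME_MIN = 510 / 60                            # ~8.5 minutes of main-engine burn

FLOWS_L_PER_MIN = {"LO2": 66_600, "LH2": 179_000}   # quoted max flow rates
TANKS_M3 = {"LO2": 559.1, "LH2": 1_514.6}           # quoted tank volumes

for fuel, flow in FLOWS_L_PER_MIN.items():
    drained_m3 = flow * BURN_TIME_MIN / 1000        # litres -> cubic metres
    print(f"{fuel}: {drained_m3:.0f} m3 drained at max flow vs {TANKS_M3[fuel]} m3 tank")
# LO2: 566 m3 vs 559.1 m3; LH2: 1522 m3 vs 1514.6 m3 -- the flow rates, burn
# time and tank volumes quoted in this post are mutually consistent.
```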

SPACE SHUTTLES: The Ultimate Vehicles

Engineering challenge:

The thermal protection system is also critical for the ET, so as to maintain proper fuel temperatures during the 8.5-minute ascent. Moreover, ice that freezes on the skin of the ET at standby, caused by the highly chilled cryogenics inside, can later break off as debris and impact the orbiter's windshield or damage the tiles, so it must be kept in check.

The ET is covered with a 1-inch (2.5 cm) thick layer of polyisocyanurate foam insulation, which also gives it its distinctive orange colour. The insulation keeps the fuels cold, protects them from the heat that builds up on the ET skin in flight, and minimizes ice formation at standby.

The proper choice of foam material only came after the loss of the Columbia space shuttle, in which insulating foam broke off the ET and damaged the left wing of the orbiter, ultimately causing Columbia to break up on re-entry.

SPACE SHUTTLES: The Ultimate Vehicles

This is a close view of the insulating foam on the ET.

THE SOLID ROCKET BOOSTERS:

These reusable boosters provide about 70% of the thrust for liftoff of the space shuttle from the launch pad. They are 45 m high, 3.5 m in diameter and weigh up to 1.3 million pounds. The solid propellant consists of atomized aluminium (16%, the fuel), ammonium perchlorate (70%, the oxidiser), iron oxide powder (0.2%, a catalyst), polybutadiene acrylonitrile (12%, the binder) and epoxy resin (2%, the curing agent).

SPACE SHUTTLES: The Ultimate Vehicles

They also bear the whole weight of the shuttle on the launch pad. After a burn time of 127 seconds, they are jettisoned, parachuted into the ocean, located by signalling devices, recovered and reused.

This element, however, did not prove kind to NASA: a technical fault in the O-rings led to the disastrous loss of the Challenger shuttle in 1986, killing all the crew on board.

SPACE SHUTTLES: The Ultimate Vehicles

The STS-51L crew members are, in the back row from left to right: Mission Specialist Ellison S. Onizuka, Teacher-in-Space Participant Sharon Christa McAuliffe, Payload Specialist Greg Jarvis and Mission Specialist Judy Resnik; in the front row from left to right: Pilot Mike Smith, Commander Dick Scobee and Mission Specialist Ron McNair.

 

MISSION PROFILE:

 

LAUNCHING AND MANEUVERING

SPACE SHUTTLES: The Ultimate Vehicles

REENTRY AND LANDING

SPACE SHUTTLES: The Ultimate Vehicles

POST LANDING PROCESSES

SPACE SHUTTLES: The Ultimate Vehicles

BRIEF HISTORY

The credit for the science of the space shuttle taking humans to space goes unrivalled to NASA. The astronauts, scientists and engineers who worked there deserve a standing ovation from all of human society.

The U.S. started its visionary program called the Space Transportation System (STS) and launched the first mission, STS-1, on 12 April 1981, when the space shuttle Columbia successfully completed its orbital test flight. A fleet of five space shuttles (Challenger, Endeavour, Columbia, Discovery and Atlantis) executed a total of 135 missions. Two of them, Challenger (STS-51-L) and Columbia (STS-107), failed, leading to the loss of 14 crew members; the rest of the missions were successfully scripted into the pages of history. The space shuttle Atlantis flew the last mission, STS-135, launching on 8 July 2011.

SPACE SHUTTLES: The Ultimate Vehicles

By the end of this program, the world had received the valuable gifts of the ISS (International Space Station), the Hubble Telescope, GPS technology, mobile communication, and many scientific experiments conducted in space. All of these technologies form the backbone of 21st-century human civilization and raise in us the hope of successful interplanetary human transportation someday!

SPACE SHUTTLES: The Ultimate Vehicles

The biggest achievement of space missions: THE ISS

CONCLUSION

Humans might have used this science to attack enemy nations from space, which could well have erased the existence of the whole planet; fortunately, it instead became the cradle of a technology that is now an indispensable part of our lives.

Besides being a modern engineering marvel, it also contributed to the end of the Cold War between the giants, the US and the Soviet Union, which might otherwise have escalated into a third world war and wiped out the planet.

FOR READABILITY, THE "MISSION PROFILE" TOPIC WILL BE COVERED IN THE NEXT BLOG.

Finally, a beautiful clip to experience the thrill of a space shuttle launch…

Thanks for your kind attention and valuable time!!!

Stay tuned for the next blog; till then, your doubts and thoughts are most welcome.

Keep reading, Keep learning!

TEAM CEV!!

Future Coating Technologies : A REVIEW PAPER

Reading Time: 18 minutes

Author: Sanidhya Somani, ECE, 2nd Year

Abstract

For decades we have been hearing that the chemical industry, and with it the coatings industry, needs to break free from its dependency on oil, because oil resources are finite. Renewable raw materials are constantly under discussion. The paint and coatings industry is focused on innovation and on being "green". Green means a smaller carbon footprint, low VOC content, high renewable content and green processes: using renewable ingredients to reduce the carbon footprint, eliminating hazardous materials, introducing bio-renewables, incorporating recycled materials, lowering VOC emissions, decreasing energy consumption and reducing waste, while proving it can all be accomplished cost-effectively. The reduction of environmental damage done by coatings sometimes begins before manufacturing even starts. Research into the carbon footprint of coating materials, i.e. the overall amount of climate-affecting carbon dioxide produced in their manufacture, application, transport and disposal, shows that some coatings simply use fewer resources throughout their life cycles. This paper discusses advances in the use of renewable resources in formulations for various types of coatings. Developments in the application of (new) vegetable oils and plant proteins in coating systems are discussed here.

Introduction

As the climate continues to change, human population continues to grow, and our natural resources continue to diminish, industries have seen a global shift, placing greater importance on green design and sustainable business practices. However, green design is less about following a popular trend than it is about simply respecting our limited natural resources. The architectural coating industry is no exception to this trend, as building and construction regulations continue to evolve and incorporate higher standards for environmentally friendly practices.

Suppliers to the coating industry offer an increasing range of bio-based raw materials, for instance a green hardener with a high carbon content from renewable resources; raw materials from renewable resources have been one of the major trends. Manufacturers are working harder than ever to develop high-performance coatings that lessen the negative impact on the environment. To do this, coating developers have created innovative manufacturing techniques that protect air and water quality while reducing the unnecessary consumption of natural resources. They focus on eliminating the use of hazardous materials, introducing bio-renewables, incorporating recycled materials, lowering VOC emissions, decreasing energy consumption and reducing waste, while proving it can all be accomplished cost-effectively.

In the past few years, consumer and industrial interest in environmentally friendlier paints and coatings has grown tremendously. This trend has been spurred not only by the realization that the supply of fossil resources is inherently finite, but also by a growing concern for environmental issues, such as volatile organic solvent emissions and recycling or waste-disposal problems at the end of a resin's economic lifetime. Furthermore, developments in organic chemistry and fundamental knowledge of the physics and chemistry of paints and coatings have enabled some problems encountered earlier in vegetable oil-based products to be solved. This has resulted in the development of coating formulations with much-improved performance that are based on renewable resources.

A Look-Back


Coating manufacturers around the world have worked tirelessly to create paints that eliminate adverse environmental implications and push the industry towards a more sustainable future. Volatile organic compounds (VOCs) have long been part of the coating industry, as their properties aid in the application of coatings. Recognized as a component of the common aroma of paint fumes, VOCs are believed to contribute to the formation of ground-level ozone and urban smog, which in turn may contribute to adverse health effects. After truly understanding the effects of VOCs, coating manufacturers directed their focus to creating formulations that lessen the need for solvents. They achieved this by using a higher percentage of solids in their formulations, so that less of the coating volatilizes into the air.
The next step is to look at the coating process. Even the way coil coatings are applied to the metal used for wall and roofing panels has been enhanced for better environmental performance. Coil coating, where the paint is rolled onto the metal in a factory setting, is a fairly energy-efficient technique. When coil-coating metal panelling, the VOC gases released during the process are returned to the system and, through the use of a thermal oxidizer (also known as a thermal incinerator), become fuel for the curing process.

A view on Low VOC coatings

VOC is a general term referring to any organic substance with an initial boiling point less than or equal to 250 °C (the European Union definition) that can be released from the paint into the air, and thus may cause atmospheric pollution. VOCs can be naturally occurring (such as ethanol) or chemically synthesised. The VOC content in water-based paints may be a very small amount of solvent or trace levels of additives needed to enhance performance.

Paint is made up of a number of components. Some may be of natural origin (such as minerals, chalk, clays or natural oils); others (such as binders, pigments and additives) are more often synthetically derived from industrial chemical processes. All these components need to undergo some degree of washing, refinement, processing or chemical treatment before they can be used to make paint. These production steps necessitate the use of different process aids, including substances classed as VOCs. Although every effort is made to remove these VOCs through drying and purifying, trace amounts will remain in the finished raw materials used to make the paint and the tinting pastes. Therefore, there is no such thing as a truly 100% VOC-free or zero-VOC paint, as all paints contain very small (trace) amounts of VOCs through their raw materials.

There are several key contributors to the environmental footprint of household paint: the extraction and production of the raw materials, the cost of transporting paint from factory to retail outlet to your home, and how long the painted surface lasts until it needs repainting, i.e. how durable the paint film is. This last aspect is of particular interest, as a durable, longer-lasting paint is better for the environment. Many paints which claim 'zero-VOC / VOC-free' credentials are based on natural clays and oils rather than synthetic binders such as vinyl or acrylic. This affects how resistant the paint film is to water or damage: generally, synthetic-binder-based paints provide a much more durable and resistant film, so they would be expected to last longer than a clay paint. Walls painted with clay paints may therefore need repainting more often, and clay paints would not score so well when viewed from an overall environmental footprinting approach. Thus, perversely, 'zero-VOC' clay paints may actually be more harmful to the environment than standard synthetic-binder-based paints, due to this increased maintenance cycle.

Protein and vegetable oil-based coatings

An increasing interest is observed in the development of more environment-friendly paints and coatings; here, recent developments in the application of vegetable oils and plant proteins in coating systems are addressed. Regarding vegetable-oil-based binders, current research focuses on an increased application of oils from conventional as well as new oilseed crops. A very interesting new class of vegetable oils, for example, originates from crops such as Euphorbia lagascae and Vernonia galamensis, which have high contents (>60%) of an epoxy fatty acid (9c,12,13-epoxy-octadecenoic acid, or vernolic acid) that can be used as a reactive diluent. Another interesting new oil is derived from Calendula officinalis, or marigold. This oil contains >63% of a C18 conjugated triene fatty acid (8t,10t,12c-octadecatrienoic acid, or calendic acid), analogous to the major fatty acid in tung oil. Presently, research aims at evaluating the film-forming abilities of these oils and of their chemical derivatives, both in solvent-borne and water-based emulsion systems. In research on industrial applications of plant proteins, corn gluten, but particularly wheat gluten, has been modified chemically to obtain protein dispersions with excellent film-forming characteristics and strong adhesion to various surfaces. Wheat gluten films especially have very interesting mechanical properties, such as an extensibility of over 600%. Gas and moisture permeabilities were found to be easily adjustable by changing the exact formulation of the protein dispersion.

Wheat gluten coatings

In developing non-food applications of proteins, various proteins such as soy protein, corn gluten, wheat gluten and pea proteins are being studied. Based on its unique functional properties, wheat gluten can be distinguished from the other industrial proteins. Examples are its insolubility in water, its adhesive/cohesive properties, viscoelastic behaviour, film-forming properties and barrier properties against water vapour and gases. Wheat gluten shows, like other amorphous polymers, a glass transition temperature (Tg). Below the Tg, gluten films are brittle. To obtain rubbery gluten coatings, the addition of plasticizers is required.

Vegetable oil-based coatings

In the past, many seed oils have been applied in various coating formulations. In the 1950s the most common plant oil in trade-sales paint formulations was linseed oil, with a share of 50%. Since then, not only has the total volume of fats and oils used in drying-oil products declined, but the relative position of linseed oil has also slowly fallen to less than 30% of the plant oils used. Simultaneously, the share of soybean oil has increased such that soybean oil is now the predominant oil used in this area; the use of soybean fatty acids in 'soybean-modified' alkyds is obviously a contributing factor.

Water-borne emulsion coatings

The major advantage of water-borne emulsion coatings is the reduction in volatile organic compound emissions upon drying of the film. In the past, research focused on the emulsification behaviour of pure linseed oil. The application of waterborne paints to coat wood, metal, plastics or mineral substrates has increased considerably over the last ten years. The share of the market held by waterborne coatings varies greatly between countries, as it does between coating market segments. The global coatings market can be categorized broadly into decorative coatings and industrial coatings. Waterborne silicate paints combine high permeability to water vapour and carbon dioxide with a very useful minimal soiling tendency. The term 'functional paint surface' is understood to imply improvements such as the avoidance of algal and fungal growth through the introduction of nanoparticulate silver to replace biocidal compounds, and improved soiling resistance and degradation of air pollutants through the use of photocatalytically active nano-titanium dioxide. More than 80% of the sealant systems for parquet floors and solid wood flooring are water-based, often using a combination of water-based polyurethane dispersions and self-crosslinking polyacrylate dispersions.

100% Renewable Ethoxylated Surfactants

Bio-based ethylene oxide (EO) will meet the demand for fully renewable surfactants by enabling the synthesis of various ethoxylated surfactants and emulsifiers which are 100% bio-based. Ethoxylation is a common process used to generate a range of products for emulsification and wetting, including ethoxylated alcohols, carboxylic acids and esters. While the hydrophobic portions of many of these surfactants are already naturally sourced from plant oils, only petrochemical-derived EO has been available so far. With the production of bio-based EO in the near future, ethoxylated products can be produced from 100% bio-based content, allowing customers to choose fully renewable products without sacrificing performance. In addition, by incorporation into synthetic base materials, the bio-based content can be significantly increased, allowing formulators to meet challenging new targets. Alkyl polyglucosides are an example of a surfactant class based on renewable raw materials. Other bio-based options include some betaines and proteins, but these are rarely used in the coatings market. Fermentation is used to make some production processes more environmentally friendly, and bio-catalysis is also being actively researched. Far more abundantly available and widely used renewable sources are the natural oils from animal fats or plant seeds; some of their derivatives are oleochemicals. The fatty content of the oils can be separated by distillation into products containing chains of 12 to 18 carbon atoms in saturated or unsaturated form. For example, lauryl, cetyl, stearyl and oleyl alcohols are commonly available and have appropriate hydrocarbon chain lengths to function as the hydrophobic tail group in surfactants. Many renewable ionic surfactants can be made by this route, including quaternary ammonium salts, amine oxides and alcohol sulphates.

Biorenewable sources for the manufacture of polyurethane (PU) adhesives have been used extensively over the last few decades, replacing petrochemical-based PU adhesives due to their lower environmental impact, easy availability, low cost and biodegradability. Biorenewable sources, such as vegetable oils (palm oil, castor oil, jatropha oil, soybean oil), lactic acid, potato starch and others, constitute a rich source for the synthesis of polyols, which are being considered for the production of "eco-friendly" PU adhesives.

Ultraviolet curable coating technology

Advances in ultraviolet (UV)-curable coating technology aim to develop high-performance coating systems with zero discharge of volatile organic compound (VOC) emissions and no hazardous waste generation. Included in the research was the incorporation of certain proprietary, non-toxic, corrosion-inhibiting pigments into the coating formulations. One of several problems is that pigments in UV-curing formulations absorb the UV light, preventing it from curing the paint. The pigments also increase the viscosity of the paint and make it more brittle than it would be unpigmented. Low-viscosity polymers are available, but they invariably have a low molecular weight, which makes for low resistance to chemicals. Higher-molecular-weight polymers resist chemicals and solvents better but are invariably more brittle. It was an ever-present challenge to balance each coating's UV curability against its viscosity and brittleness, and its chemical resistance against its brittleness.

Spray booth technology

A scrubbing system has been unveiled which utilizes a regenerative dry filtration process to separate wet paint overspray from spray-booth process air. The process allows significant reductions in paint spray-booth energy usage and emissions. Spray booths are the leading energy consumer at most large-volume paint-finishing operations. By recirculating a substantial portion of the exhausted air from the spray booth back into the painting chamber, the quantity of air that must be fully conditioned is significantly reduced. The dry system operates by directing paint-laden process air into scrubber chambers located directly below and on either side of the painting chamber. Each scrubbing chamber contains an array of porous plastic filter elements. To protect the filter elements from becoming fouled with tacky paint particles, a process referred to as pre-coating is utilized. The pre-coat process extends the life of the filter elements to a minimum of 15,000 hours.

Replacement of Commercial Silica by Rice Husk Ash in Epoxy Coating

Since epoxy resins are used as composite matrices with excellent results, and silica is one of the fillers most often employed, rice husk ash (RHA) has been studied as a filler to replace high-purity silica in epoxy composites. RHA and silica exhibited similar mechanical and water-absorption characteristics, indicating that rice husk ash may be a suitable replacement for silica. Good filler dispersion and distribution in the polymer matrix were observed, highlighting the more effective adhesion interface between RHA particles and the matrix. RHA behaved similarly to crystalline silica, so it can be used as a replacement for silica with little loss of properties. The tensile strength and water-absorption values were of the same order of magnitude, though RHA composites exhibited better values in general. SEM analysis showed that the filler particles were well distributed in the polymer matrix. The adhesion interface between filler particles and polymer matrix was more effective when RHA was used, though some voids associated with the porosity of this material were observed. Viscosity measurements revealed that the viscosity of mixtures prepared with RHA increases exponentially with the proportion of filler added (60%), pointing to the risk of problems in processing operations, depending on the application of the composites. In this sense, alternative methods to control and reduce viscosity should be considered when high proportions of RHA are used. Overall, lower amounts of RHA (20% and 40%) produce composites with properties comparable to those prepared with commercial silica as filler.

Nanomaterials applications in “green” functional coatings

The global coating market is huge, worth over US$100 billion annually, with applications for physical and chemical protection, decoration and various other functions. In the last decade, the trend has definitely pointed toward the replacement of traditional VOC (volatile organic chemical)-based paints and polluting processes like electroplating with environmentally friendly materials and technologies. Nanomaterials play a significant role in the new generation of "green" functional coatings by providing specific functionalities to the base coating. For the replacement of electroplated metal coatings, a multilayer coating stack providing anticorrosion, mirror-like reflective and antiscratch functions was developed. Nanosized metal and ceramic particles are used to achieve these functions without the use of any polluting chemicals or the release of the heavy-metal contamination typical of electroplating processes. Furthermore, a multifunctional environmental paint was developed for wood surfaces. The key ingredient in this water-based paint is mesoporous silica nanoparticles, which offer high water resistance and a short drying time. This versatile material also offers high chemical tunability, which allows the incorporation of various additives to achieve multiple functions including antibacterial action and resistance to fire, household chemicals and UV (ultraviolet) exposure.


Powder coatings

Coatings such as water-based paints and finishes applied by the powder-coating process, in which powdered material is sprayed onto a surface and then baked on to form a tough protective barrier, have lower carbon footprints (and consequently lower environmental impact) than coatings that must be thinned with chemical solvents before being sprayed or painted onto the surface. Simply choosing a lower-impact alternative like these is an instant way to improve the greenness of a project or product.

Likewise, advances in powder coating have made these finishes tougher, meaning that the new-generation coatings can be applied in thinner layers than their predecessors. Thus, less material gets used in the process; not only does this reduce the amount of overspray (excess powder that doesn't adhere to the surface and has to be cleaned up afterward), but it also saves money in situations that involve coating large surfaces, such as the metal sides of shipping containers.


Solar Reflective pigments


When the strong rays of the sun strike the roof and exterior of a building, the absorbed infrared light is converted to heat, which leads to a rise in interior temperature. Within an urban sprawl, this problem compounds with smog, asphalt and a lack of vegetation, creating a phenomenon known as the "heat island effect". This effect can dramatically increase costly air-conditioning and electricity expenditures for building owners.

To help mitigate the heat island effect, manufacturers turned to solar reflective pigments that reflect infrared radiation while still absorbing the same amount of visible light. Through the incorporation of these pigments, manufacturers created solar reflective coatings that stay much cooler than their non-reflective counterparts. Solar reflective coatings not only help lower energy costs without sacrificing durability, performance or beauty, but also provide an array of colour options in shades that previously absorbed considerably higher amounts of infrared light.

CNSL: an environment friendly alternative for the modern coating industry

Considering the ecological and economic issues facing the new generation of coating industries, the maximum utilization of naturally occurring materials for polymer synthesis is an obvious option. In this line, one of the promising candidates for substituting petroleum-based raw materials, partially and to some extent totally, with equivalent or even enhanced performance properties, is Cashew Nut Shell Liquid (CNSL). This dark brown viscous liquid obtained from the shells of the cashew nut can be utilized for a number of polymerization reactions due to its reactive phenolic structure and a meta-substituted unsaturated aliphatic chain. Therefore, a wide variety of resins can be synthesized from CNSL, such as polyesters, phenolic resins, epoxy resins, polyurethanes, acrylics, vinyls, alkyds, etc. The present article discusses the potential of CNSL and its derivatives as an environment-friendly alternative to petroleum-based raw materials as far as the polymer and coating industries are concerned.

CNSL, one of the major sustainable resources, mainly extracted by hot-oil and roasting processes, contains a number of useful phenolic derivatives like cardol, cardanol, 2-methyl cardol and anacardic acid, with a meta-substituted unsaturated hydrocarbon chain (chain length C15). The combination of a reactive phenolic structure and an unsaturated hydrocarbon chain makes CNSL a suitable starting material for synthesizing various resins like epoxies, alkyds, polyurethanes, acrylics, phenolic resins, etc. In addition, a number of other useful products, such as modifiers (flexibilizers and reactive diluents), adhesives, laminating resins, antioxidants, colorants and dyes, have also been developed from CNSL and its derivatives. So, considering the high depletion rate of petroleum-based stocks and the range of possible applications, CNSL can be accepted as a greener and sustainable approach for future expansion in the modern coating industry.

 

How has the industry managed with all of the uncertainty related to being green? More or less all the leading coatings manufacturers have sustainability manifested in their corporate values and strategies, and have implemented teams to steer the process toward more ecological solutions, from the sourcing of raw materials to the development of new products and the optimization of manufacturing processes. At the same time, raw material suppliers are mainly focusing on the use of renewable raw materials, on products without hazardous labelling, and on energy efficiency across the value chain.

Impact of additives in green coating

Additives have a significant impact on performance and functionality, although they represent only a small fraction of the total content of a paint or coating formulation. The impact of additives on a formulation can vary depending on the application, but every component makes a difference. Green biocidal additives should be characterized by favourable human toxicology profiles; they should not cause substantial impact to the environment at use levels and should not be sensitizers. Silicone additives are necessary components of greener formulations: they can improve the longevity of paint, thus reducing the repainting frequency, and they are often multifunctional, making it possible to replace two or three current additives with one. Surfactants and related additives (defoamers, dispersants, and other compounds that affect performance based on surface chemistry) are the easiest class of additives to target for developing green alternatives, since their chemical structures are well suited to synthesis from naturally derived materials.

COST AND PERFORMANCE COME FIRST

While nearly everyone across the coatings value chain, including end users, agrees that environmentally friendly products are desirable, there is a disconnect when it comes to paying for a more sustainable profile: if current coating solutions are working for their customers and there is no regulatory drive to switch, then most end users will stay with the current technology. Cost is a crucial factor, but many also have concerns about the long-term availability of greener or more sustainable materials; they want assurance that the new green products will be available for the expected lifetime of the products in which they will be used. Therefore, as green products are developed, they need to meet, or surpass, existing performance levels without adding cost. That can be an issue, because green products may be perceived as having the same, or worse, performance than other products while costing more.

There is real demand for products that help manufacturers reduce energy consumption by requiring shorter bake times and lower curing temperatures. There has, in fact, been significant pressure from manufacturers on suppliers to make raw materials greener and more benign without compromising performance. Demand for products with "zero-VOC" or "low-VOC" labels has continued to grow. There were initially performance challenges with low-VOC formulations, such as microfoam, blocking, compatibility, freeze/thaw resistance, open time, tack, dirt pickup, scrub resistance and more, but there is a much greater range of available technologies today that help formulators improve the performance of paints and coatings while also meeting regulations and consumer demand for a small environmental footprint. These include new generations of low/zero-VOC products, high-performing coalescents, pigment dispersions, resins, reactive modifiers, tougheners, defoamers and others, some of which may also incorporate biorenewable feedstocks such as plant-derived oils, fatty acids and esters, or may avoid particular substances of increasing concern (formaldehyde, APE, bisphenol A, phthalates, etc.).

Sustainable innovation is, in fact, occurring at both the process and product levels. Wet-on-wet processes are an ideal example of a new method that improves customer operations. Other examples include technologies aimed at reclaiming the water used during paint production; the formulation of higher-solids waterborne paints to reduce water consumption in products, as well as the costs and CO2 emissions associated with shipping latex; and improving dirt-pickup resistance to reduce the need for washing and repainting, which can also help achieve water-conservation goals. Across general finishing applications (metal, wood, composites), new resin developments have enabled the reduction of air emissions and hazardous waste and the elimination of chemicals that may potentially harm applicators. New polyurethane formulations, for example, not only provide a way for end users to meet environmental standards, but also reduce energy consumption and inventory levels. In fact, advances in product and process environmental profiles have typically led to the need for advances in other technologies to maintain or achieve greater performance properties.

Conclusion

Coating manufacturers across the globe are continuously looking for new innovations that will push the industry towards a greener future. Renewables and recycled materials are crucial elements in the implementation of a green agenda; examples include bio-renewable materials that remain in backer coatings and a bio-renewable polyester resin system for interior coil applications. These use recycled bio-renewables like vegetable oil, an effective substitute for fossil fuels; another example is the use of both virgin vegetable oil and recycled or used oil. Such products contain a resin system composed of up to 30 percent bio-renewable material, resulting in a sustainable finished product that does not lose its bio-renewable content during the curing process. They are used predominantly in coatings developed for backers, where giant coils of sheet metal are turned into all types of pre-painted construction products. These materials are not only eco-friendly and sustainable, but this can be achieved without any significant added cost to the coating material.

Coating manufacturers are also going green by changing the processes they use to handle coating-related waste. In anodizing, for example, manufacturers can use chemical flocculants that bind the toxic, waterborne aluminium hydroxide into a solid that can be compacted and handled more easily. They may also employ advanced drying technology to remove most of the water from the sludge created by the flocculant. In some cases, this leftover material contains so much aluminium, which would otherwise go into a landfill or leach into the environment, that it can be recycled and used in the production of other aluminium products.

And recycling plays other roles in helping coatings be more earth-friendly. There are, however, still questions about how the impacts of raw materials and coating products should be measured. If, for example, an oil-based chemical is replaced with a renewable-based chemical, is it better for the environment? The answer is: that depends on the energy footprint of making the renewable-based chemical relative to the oil-based chemical, and it also depends on how the renewable process impacts other uses of the same resources. "It is very important to the industry that all of these considerations be accounted for to ensure that we truly make the right choice for a sustainable future."

References

Paint and Coatings Industry: https://www.pcimag.com/articles/100363-sustainability-in-the-coatings-industry

British Coatings Federation: https://www.coatings.org.uk/next-generation-raw-materials_seminar.aspx

Hydrocarbon Magazine, Oct 2014, and Scientific Design

European Coatings: https://www.european-coatings.com/Publications/Blog/The-future-is-green-for-the-coatings-industry-tool

Coatings World: https://www.coatingsworld.com/issues/2018-08-01/view_features/100-renewable-ethoxylated-surfactants

ResearchGate: https://www.researchgate.net/publication/311364037_Composite_Coatings_Based_on_Renewable_Resources_Synthesized_by_Advanced_Laser_Techniques

ScienceDirect: https://www.sciencedirect.com/science/article/pii/0926669094000392

ResearchGate: https://www.researchgate.net/publication/260174804_P24_Paints_based_on_renewable_materials

ResearchGate: https://www.researchgate.net/publication/293772650_Lesquerella_renewable_resource_for_industrial_coatings_and_polyurethane_foams

ResearchGate: https://www.researchgate.net/publication/323476264_Synthesis_and_Characterization_of_Renewable_Resource_Based_Green_Epoxy_Coating

ResearchGate: https://www.researchgate.net/publication/27349117_Resins_and_additives_for_powder_coatings_and_alkyd_paints_based_on_renewable_resources

 

Team CEV,

By : Sanidhya Somani,

ECE Department (2nd Year)

 

CAVITATION: An Extraordinary Phenomenon

Reading Time: 7 minutes

You can skip the first paragraph if you have come for the sole purpose of exploring the topic. CEV publishes its authentic blogs mainly to discuss important topics, phenomena and scientific theories, to spread a sense of wonder and appreciation for the feats of the heroes of science. Our formal education lacks the element of astounding a student, and instead just focuses on swallowing such beautiful phenomena without tasting them; we are trying to hammer at that.

Come on a journey to experience the wonders of nature.

So firstly let me set the aura for the blog.

Matter (the elements) on earth exists in various states. Water in the liquid state, for example, is the rarest in the part of the universe humans have explored so far. It is only here that matter has groomed itself into consciousness.

Water is one of the forms of matter whose role in sustaining life on earth is ineffable. H2O shows very flexible behaviour under the conditions found on earth: it can boil to the vapour state, freeze to the solid state and flow in the liquid state. In winter, water freezes to ice, which floats, whereas for almost any other substance the solid is denser than the liquid. This strange behaviour creates an insulating layer on top and keeps most of the water in water bodies unfrozen, thereby sustaining aquatic life. In summer, water evaporates, condenses to form clouds and falls back as rain, another life-sustaining behaviour. The change of state of water is thus an extremely important phenomenon, and cavitation is just one aspect of it.

Cavitation is like boiling, but the method is different; more technically, the thermodynamic path followed is different.

The state (phase) diagram of water

Consider this graph: water can change state from liquid to vapour either by moving horizontally, i.e. increasing the temperature (boiling), or by moving vertically down, i.e. lowering the external pressure (cavitation). A variety of causes can produce this drop in pressure in water.

Encapsulating…

“Cavitation is a phenomenon in which liquid bursts into vapour inside the bulk of the liquid due to a sudden drop in pressure below the vapour pressure.”
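To make this threshold concrete, here is a minimal sketch that checks the cavitation condition against the vapour pressure of water, estimated with the classic Antoine correlation (the constants are a standard published set for water, assumed here purely for illustration):

```python
import math

# Antoine correlation for water: log10(P[mmHg]) = A - B / (C + T[degC]);
# constants are a standard published set, valid roughly from 1 to 100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def water_vapour_pressure_kpa(temp_c):
    p_mmhg = 10 ** (A - B / (C + temp_c))
    return p_mmhg * 0.133322              # mmHg -> kPa

def cavitates(local_pressure_kpa, temp_c):
    """Cavitation onset: local pressure falls below the vapour pressure."""
    return local_pressure_kpa < water_vapour_pressure_kpa(temp_c)

print(f"{water_vapour_pressure_kpa(20):.2f} kPa")  # ~2.34 kPa at 20 degC
print(cavitates(101.3, 20))   # False: water at atmospheric pressure stays liquid
print(cavitates(1.5, 20))     # True: a strong local pressure drop boils the liquid
```

Boiling walks the horizontal path by raising the temperature until the vapour pressure reaches the local pressure; cavitation walks the vertical one by dragging the local pressure down to meet the vapour pressure.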

First, let’s have a general discussion which will subsequently lead us to advanced topics.

Once the bubbles are generated, they soon travel to a higher-pressure region and collapse back into liquid. Obviously, energy equivalent to the latent heat of vaporization is released. This energy may take various forms: shock waves in the water, sometimes heat, sometimes even light!

Since the phenomenon concerns fluids, it becomes extremely important for mechanical and civil engineers to consider the effects of cavitation in fluid transportation. Pumps, propellers, valves, engines, spillways, etc. all suffer wear and tear due to cavitation. Let's look at the extent of it.

As far as general events are considered cavitation doesn’t seem to be desirable:

In propellers and pumps, the shock waves produced by the cavitating bubbles cause the material to lose its hardness and degrade under the continuous implosions. The formation of bubbles in pumps also reduces efficiency. And the crackling sound of collapsing bubbles makes submarines more vulnerable to detection by enemies.

In the spillways of dams and canals, irregularities present on the surface may cause a local pressure drop at high fluid velocity, forming cavitating bubbles which can travel downstream and damage even concrete.

Diesel engines are also found to suffer cavitation damage, for example on the coolant side of cylinder liners.

This series of damaging effects is not the whole story; we have also harnessed cavitation for benefits.

Engineering applications:

  1. Biomedical: The principle is being used to dissolve stones in kidneys, hence treating stones without a single stitch! The technique is called shock-wave lithotripsy. It has also been suggested that the sound of cracking knuckles comes from the collapse of cavitation bubbles in the fluid within the joint.
  2. Chemical: This industry uses cavitation in profound ways to mix, dissolve, homogenize, etc. Water purification systems have been designed using it.
  3. Cavitating bubbles can reach very high temperatures if special techniques are used; this could be employed to meet the extreme temperature requirements of nuclear fusion plants, and research is continuously being done to actually implement it.
  4. Supercavitation may in the near future be employed to make super-fast submarines; supercavitating torpedoes are already in use. Points 3 and 4 are discussed at the end.

You would be surprised to know about the scope of cavitation in nature, here are the most amazing applications.

Cavitation in nature:

TREES AND PLANTS: It is the difference in pressure in the xylem that makes water rise from the roots to the leaves. Sometimes the pressure drops below the vapour pressure and the liquid bursts into vapour. As the plant tries to redissolve the vapour, the conversion back to liquid comes with a shock wave, which damages the plant's conducting tissues. This explains the audible sound in some trees in summer, when the transpiration rate is at its highest. Here is a beautiful video to understand it better:

 

LIFE IN THE OCEAN: The animal kingdom in the ocean is also affected by cavitation. The upper limit of the speed at which sharks, tuna and other marine life can travel is set by this phenomenon. Basically, high speed causes a huge pressure drop in the region behind the fins and tail; sometimes the pressure falls below the vapour pressure, so that water flashes into vapour, and soon the normal pressure causes the bubbles to collapse back to liquid. This reversal is a painful event for the animals and limits them. Still, a speed range of 50-110 km/h is achieved; some of them, like tuna, have evolved fins without nerve endings, yet evidence of cavitation damage can still be seen on their fins.
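Where does such a speed limit come from? A back-of-the-envelope Bernoulli estimate reproduces it surprisingly well: a body moving at speed v lowers the local pressure behind its fins by roughly ½ρv², and cavitation begins once this drop uses up the margin between ambient pressure and vapour pressure. A minimal sketch, assuming near-surface swimming in roughly 20 °C water:

```python
import math

RHO_SEAWATER = 1025.0    # kg/m^3 (assumed)
P_AMBIENT = 101_325.0    # Pa, near the surface (assumed)
P_VAPOUR = 2_339.0       # Pa, vapour pressure of water at ~20 degC

# Cavitation onset: 0.5 * rho * v^2 >= p_ambient - p_vapour
v_onset = math.sqrt(2 * (P_AMBIENT - P_VAPOUR) / RHO_SEAWATER)
print(f"{v_onset:.1f} m/s = {v_onset * 3.6:.0f} km/h")
# ~13.9 m/s ~ 50 km/h: right at the lower end of the 50-110 km/h range above.
# Swimming deeper raises the ambient pressure and pushes the onset speed higher.
```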

Life is the most wonderful event in the universe; just imagine, life is nothing but matter groomed into consciousness! You know, pistol shrimps are so advanced that they employ shock waves to kill prey. The shrimp snaps a pair of clamping outgrowths shut at a closing speed of around 115 km/h. This shoots out a small jet of water moving at around 97 km/h, creating a pressure of 80 kPa at 4 cm from the shrimp and leaving behind a pressure lower than the vapour pressure, hence causing bubbles. These bubbles immediately collapse under the surrounding pressure. The small area and tiny timescale make for a very high energy density: the implosion causes the temperature of the water to reach limits of around 4,700 degrees Celsius (and you know the surface temperature of the sun is around 5,500 degrees). This delivers a coup de grâce to the prey, with no chance of escaping death. Witness the event in this short video:

Time has come to move to advanced topics in which someone might do research in.

Sonoluminescence:

We discussed that light is a possible form of energy that can be released by a cavitating bubble. This phenomenon of the production of photons is called sonoluminescence. This transduction of sound into light cannot be described fully by the present-day equations of fluid dynamics, and thus pushes fluid dynamics beyond its limits.

CAVITATION: An Extraordinary Phenomenon

The mechanics of the process is uncertain, but a description of the event can be given. A bubble inside a liquid behaves differently when subjected to different acoustic waves. At a particular frequency in the ultrasonic range, the gas bubble can collapse so violently that it emits a burst of light. The expansion and collapse of the bubble can be made periodic using a suitable driving frequency, and a constant level of light emission can then be achieved.
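For the curious, the textbook starting point for modelling this driven bubble is the Rayleigh-Plesset equation. Below is a minimal numerical sketch that neglects viscosity and surface tension; the bubble radius, driving amplitude and frequency are typical sonoluminescence-experiment values assumed for illustration, not figures from this post:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative single-bubble parameters (assumed, typical of sonoluminescence
# experiments).
R0 = 5e-6        # equilibrium bubble radius, m
P0 = 101_325.0   # ambient pressure, Pa
PA = 1.2 * P0    # acoustic driving amplitude, Pa
F = 26.5e3       # ultrasonic driving frequency, Hz
RHO = 998.0      # water density, kg/m^3
KAPPA = 5 / 3    # polytropic exponent of the trapped gas

def rayleigh_plesset(t, y):
    """Rayleigh-Plesset equation without viscosity and surface tension:
    R*R'' + 1.5*R'^2 = (p_gas(R) - p_far(t)) / rho
    """
    r, rdot = y
    p_gas = P0 * (R0 / r) ** (3 * KAPPA)          # polytropic gas pressure
    p_far = P0 - PA * np.sin(2 * np.pi * F * t)   # oscillating far-field pressure
    rddot = ((p_gas - p_far) / RHO - 1.5 * rdot ** 2) / r
    return [rdot, rddot]

# Integrate over two acoustic cycles: the bubble swells during the rarefaction
# half-cycle, then collapses violently when the pressure swings back up.
sol = solve_ivp(rayleigh_plesset, (0.0, 2 / F), [R0, 0.0],
                max_step=1e-8, rtol=1e-8)
print(f"max radius ~ {sol.y[0].max() / R0:.1f} R0, "
      f"min radius ~ {sol.y[0].min() / R0:.2f} R0")
```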

CAVITATION: An Extraordinary Phenomenon

When a bubble is collapsing, the acceleration of its surface is enormous; the pressure inside the bubble keeps increasing, and so does the temperature. Theories say the temperature goes so high that the gas inside the bubble ionizes to form a plasma, and the charged particles then produce the light (in single-bubble cavitation the temperature reaches around 20,000 K). Physicists describe this phenomenon as "a star in a jar"!

Supercavitation:

Another interesting topic: just as a fish's speed is limited by cavitation, the speed of submarines and torpedoes is also limited by cavitation and, more significantly, by skin-friction drag.

This is where supercavitation comes in. The principle is to enclose the whole object in a zone of vapour, a single large bubble, so that the frictional effect is greatly reduced, allowing far greater speed. A supercavitating object therefore has an arrangement at the front to create a bubble that encloses the whole object.

A simulated view of a supercavitating torpedo:


The original shot:

CAVITATION: An Extraordinary Phenomenon

The Russian VA-111 Shkval diverts some of its hot exhaust to the nose to evaporate the water and form a bubble (notice the nose).


I hope some of you now have a slightly clearer vision of a possible direction for your field of research after reading this blog.

Thanks!

Keep reading, Stay blessed!

TEAM CEV.

NUCLEAR FUSION: How much it takes to mimic a star ?

Reading Time: 10 minutes

The greatest stuff that matters more than anything else for humans is the source of energy to run their dear machines; in fact, man is far more concerned about food for his machines than for himself. This hunger has led him to exploit coal, gas and petroleum, and to some extent nuclear and renewable sources, to meet his ever-increasing needs. There is absolutely no doubt that the standard of living of every human on earth should be equal and magnificent, and this comes at the cost of energy expenditure. The point of concern is that the ways he has devised have many issues associated with them, of which the limited supply and the harmful effects on the environment are the most depressing. The coal mines and oil wells will surely be exhausted in the near future. Imagine where we would stand with no source of energy for our hungry electric motors, IC engines, etc. The conventional production of energy is fast approaching its end, and along the way it is causing hostile climate change, a scary nightmare for us.

Now just imagine the deployment of a source that could meet the energy demand of humans for thousands, in fact millions, of years of service, and that also has an almost negligible effect on our environment. Just imagine the peace and prosperity of humans on earth: there would be heaven, no wars, no one deprived.

And I am not just asking you to imagine it; believe it. If humans accomplish this feat, it will surely be their greatest achievement.

You have guessed it right: NUCLEAR FUSION, creating miniature stars on earth. Yes, this surely seems impossible; we have been failing at it for decades, but this is the reason why humans rule the earth, because "he believes in believing and achieving".

NUCLEAR FUSION: How much it takes to mimic a star ?

Basically, this blog intends to discuss the phenomenon behind the nuclear furnace of the universe and the endeavour required to mimic it on earth. We will discuss in some detail the classical and quantum physics behind the process, what exactly we need to do to get the sun here on earth, the advantages, the work done so far and the reasons for failure till now, thereby quenching our curiosity regarding this topic.

THE PHYSICS OF NUCLEAR FUSION

Atoms are the fundamental building blocks of matter; they consist of a nucleus and the electrons orbiting around it. The nucleus is, volume-wise, very small, like a grain in a room if compared, but it carries more than 99.9% of the mass of the whole atom. The nucleus is made up of particles called hadrons: the protons and neutrons. Refer to the Standard Model for particle classification.

Protons carry unit positive electric charge and neutrons are neutral. Since the protons are very close together, the electrostatic force between them must be repulsive and of huge magnitude; but we know nuclei do exist, so we can conclude that some stronger attractive force must bind them together. This force must be enormous as well as very short-ranged, otherwise there would be no discrete matter at all. It is called the nuclear force, and it is a characteristic property of both protons and neutrons.

As a result of the work done by the strong nuclear force against electrostatic repulsion, a large amount of energy is released when a nucleus is formed; the energy released in forming a nucleus is called its binding energy. The more nucleons there are, the more energy is released, but this holds true only for nuclei containing fewer than about 56 nucleons, as the nuclear force weakens exponentially with increasing distance.

NUCLEAR FUSION: How much it takes to mimic a star ?

So, to tap this enormous binding energy we have to synthesize nuclei, and there are two techniques to make this happen:

  1. Either you make a naturally available bigger nucleus unstable by some process, forcing it to decay into daughter nuclei and hence release the corresponding binding energies of the more stable daughters; this method is called NUCLEAR FISSION, the concept behind current nuclear power plants.
  2. The other, more difficult way is to force already highly stable smaller nuclei together to form an unstable nucleus, which again decays into more stable nuclei, releasing significant binding energy; this method is called NUCLEAR FUSION, the concept behind future nuclear power plants.

The complication in the second case can easily be spotted. Making a bigger nucleus unstable is far easier than forcing two smaller stable nuclei together to form an unstable nucleus: nuclei with more than 56 nucleons have less binding energy per nucleon, so firing a neutron or something similar at them can easily make them lose their stability. Forcing two nuclei to fuse, on the other hand, is certainly a tedious job, on earth at least. We have to overcome the great electrostatic force as the nuclei approach each other to form a new nucleus; but the unstable nucleus then decays into products with much higher binding energy, as the binding-energy curve rises steeply up to a mass number of about 56.

WHAT QUANTUM SAYS?

Before I start this section let me make you one thing clear, strictly saying “NO ONE ON THIS EARTH IS ALLOWED TO ASK THE QUESTION WHY, WHEN PHYSICISTS ARE DESCRIBING HOW NATURE WORKS!!!”, we have to just follow through their theories because it is the way it is. We just have to check if their theories actually fit into the phenomenon or not, that is the aim.

The quantum description of nature is seriously hard to understand, and far harder to explain to others. However weird and abstract quantum theory appears, it is the reason behind this technological era and also behind its future, the quantum computers! But let's not go off topic.

Quantum mechanics does not describe a particle as a definite point. Instead, it says that particles are mere disturbances in their corresponding fields, and describes them mathematically as wave functions or probability amplitudes. Classical mechanics strictly forbids fusion, since the electrostatic repulsion tends to infinity as the distance tends to zero; but quantum mechanics says it can happen, there is a probability. This is called the quantum tunneling effect.

Wikipedia says: “Quantum tunneling is the phenomenon where a particle passes through a potential barrier that is classically unsurmountable ”.

When the Schrödinger equation is solved and analyzed, it indicates that a nucleus has a finite probability of fusing with another nucleus, and that this probability increases as the kinetic energy of the colliding nuclei increases. Physicists explain it as the overlapping of the wave functions that represent the two nuclei.
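How steep is this energy dependence? For s-wave tunnelling through the Coulomb barrier, the penetration probability scales as exp(−√(E_G/E)), where E_G = 2·m_r·c²·(π·α·Z₁·Z₂)² is the Gamow energy of the colliding pair. The sketch below evaluates this standard textbook formula for the deuterium-tritium pair; the formula and constants are physics background assumed here for illustration, not something from the original post:

```python
import math

# Gamow tunnelling factor exp(-sqrt(E_G / E)) for the D-T pair (Z1 = Z2 = 1).
ALPHA = 1 / 137.036            # fine-structure constant
M_D, M_T = 1875.6, 2808.9      # deuteron / triton rest energies, MeV

m_reduced = M_D * M_T / (M_D + M_T)                 # ~1125 MeV
e_gamow = 2 * m_reduced * (math.pi * ALPHA) ** 2    # ~1.18 MeV

for e_kev in (5, 10, 20, 50):
    prob = math.exp(-math.sqrt(e_gamow / (e_kev / 1000)))
    print(f"{e_kev:>3} keV -> barrier penetration factor ~ {prob:.1e}")
# The factor climbs by orders of magnitude as the ion energy rises, which is
# why fusion plasmas must be heated to tens of keV (hundreds of millions of K).
```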

Again, you have to agree with this description, because here is a series of examples of applications of quantum tunneling: nuclear fusion in stars, tunnel junctions, tunnel diodes, tunnel field-effect transistors, quantum conductivity, scanning tunneling microscopes, quantum biology, etc.

PHENOMENON IN STARS

Nuclear fusion can be called the ultimate source of energy in the universe. Stars are powered by it, and virtually all the atoms (elements) are produced in this process, which is also called stellar nucleosynthesis. The type of fusion reaction followed depends on the mass of the star and the pressure and temperature of its core.

If we consider our own star, the sun, it runs on the nuclear fusion of hydrogen into helium at a core temperature of 14 million kelvin. Some 620 million metric tons of hydrogen fuse each second into about 616 million metric tons of helium. The net rate of mass-energy conversion is 4.26 million metric tons per second, which produces about 3.846×10^26 W; this is really hugeeeee!!!
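That power figure is just E = mc² applied to the quoted mass-conversion rate; a one-line check:

```python
# E = m c^2 applied to the quoted solar mass-conversion rate.
C = 2.998e8            # speed of light, m/s
MASS_RATE = 4.26e9     # kg converted to energy per second (4.26 million t/s)

print(f"{MASS_RATE * C**2:.2e} W")   # ~3.83e26 W, matching the quoted output
```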

NUCLEAR FUSION: How much it takes to mimic a star ?

STAR MIMIC :

THE DEUTERIUM-TRITIUM REACTION:

NUCLEAR FUSION: How much it takes to mimic a star ?

We are looking at this reaction because it is the most feasible, and its reactants are easily and abundantly available. Considering the need to fuel smaller atoms with kinetic energy to break the Coulomb barrier, scientists have chosen single-proton species (hydrogen isotopes) to bang together, and the deuterium-tritium pair seems to be the perfect choice.

NUCLEAR FUSION: How much it takes to mimic a star ?

The D-T reaction rate peaks at a lower temperature (about 70 keV, or 800 million kelvin) and at a higher value than the other reactions commonly considered for fusion energy on earth.

FUELS:

  1. Deuterium: 1 in 5,000 hydrogen atoms in seawater is deuterium (a total of about 10^15 tons). Viewed as a potential fuel for a fusion reactor, a gallon of seawater could produce as much energy as 300 gallons of gasoline.
  2. Tritium: this one is a little problematic. It is radioactive with a half-life of about 12.3 years, hence no natural stockpile of tritium exists on earth; it is obtained by breeding from lithium, which is abundant. Sometimes I am simply awestruck by the efforts and adventures of humans: it has even been proposed to establish mining facilities on the moon and transport helium-3 to earth via rockets in the future.

INPUT AND OUTPUT ENERGY:

In this reaction, the deuterium and tritium isotopes of hydrogen are first ionized to become bare nuclei. The calculated Coulomb barrier is about 0.1 MeV. After crossing this barrier, the immediate result of fusion is the unstable 5He nucleus, which instantly ejects a neutron carrying 14.1 MeV. The recoil energy of the remaining 4He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was needed to overcome the energy barrier, hence the result is a net energy output.
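Summing this energy budget up, the reaction pays back the invested barrier energy roughly 176-fold; a minimal tally of the numbers quoted above:

```python
# The D-T energy budget, using only the numbers quoted above.
COULOMB_BARRIER_MEV = 0.1   # energy invested to push D and T together
E_NEUTRON_MEV = 14.1        # carried away by the ejected neutron
E_ALPHA_MEV = 3.5           # recoil of the remaining helium-4 nucleus

e_out = E_NEUTRON_MEV + E_ALPHA_MEV   # 17.6 MeV in total
print(f"output = {e_out} MeV, ~{e_out / COULOMB_BARRIER_MEV:.0f}x the barrier energy")
```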


PROCESS STEPS:

  1. Stage one heating: once the atoms are heated above their ionization energy, their electrons get stripped away, leaving behind ions.
  2. Stage two heating: heating continues until the Coulomb barrier is reached. The result is an extremely hot cloud of ions and electrons, known as another state of matter: plasma. This state is electrically and magnetically responsive because of its separated charges, and many devices take advantage of this to control the plasma.

In bulk, plasma is modeled using the science of magnetohydrodynamics (MHD), which combines the equation governing fluid motion, the Navier-Stokes equation, with the equations governing the behavior of electric and magnetic fields, Maxwell's equations.
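
For a flavor of what that combination looks like, here is a standard textbook sketch of the ideal-MHD momentum and induction equations (only two equations of the full system):

\[
\rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right) = -\nabla p + \mathbf{J}\times\mathbf{B}, \qquad \frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),
\]

where ρ is the plasma density, v its velocity, p its pressure, J the current density, and B the magnetic field. The J × B term is exactly the magnetic force that confinement devices exploit.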

Now the problem is how to confine the extremely hot plasma. No material exists that can stand firm at 300 million kelvin without degrading under the constant bombardment of high-energy neutrons and other particles.

WHAT IS THE MECHANISM?

There are two famous approaches to doing this:

1. MAGNETIC CONFINEMENT

2. INERTIAL CONFINEMENT

  1. Magnetic confinement: we know that plasma is charged, so with the help of strong superconducting magnets it can be suspended in a vacuum without ever touching the walls of the container. One such concept is called magnetic mirroring.

 

ITER (International Thermonuclear Experimental Reactor; "iter" also means "the way" in Latin) is a prestigious collaboration headquartered in France and established in 2007; its members include India, the US, Russia, China, Japan, Korea, and the European Union through Euratom. Its purpose is to research and demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. ITER employs a mega-machine called a tokamak; the stellarator is another device of this kind. Here is an awesome video to make the mega monster easy to understand:

  2. Inertial confinement: fusion is achieved by compressing and heating a fuel pellet. Intense laser beams rapidly heat the inside surface of a hohlraum. The fuel is compressed by the rocket-like blow-off of its outer surface material. Finally, the fuel is ignited at a temperature of about 100,000,000 degrees Celsius, and NUCLEAR FUSION occurs, producing far more energy than the laser system put in at the beginning.


Just as ITER pioneers magnetic confinement, the NIF (National Ignition Facility, California, US) leads the way in inertial confinement fusion. Here is a short clip on that:

DISADVANTAGES:

  1. Tritium is radioactive and also difficult to retain completely. Hence some amount of tritium would be continually released. The health risk posed is much lower than that of most radioactive contaminants, because of tritium’s short half-life (12.32 years) and very low decay energy (~14.95 keV). It also doesn’t accumulate in the body (its biological half-life is 7-14 days).
  2. The research costs involved are enormous; if we invested in proven technologies instead, it would be the surer bet. Investment in nuclear fusion is surely a billion-dollar gamble.
  3. In my opinion, just as the discovery of vaccines increased the average human lifespan and made the population of the Earth explode in numbers, limitless energy would in the same way cause a steep rise in population; the economy would change entirely, because markets would no longer depend on oil prices.

ADVANTAGES:

  1. The half-life of the radioactive waste is quite short compared to fission wastes, which have half-lives of thousands of years; moreover, it is less toxic than the emissions from burning fossil fuels.
  2. The energy supply would be uninterrupted and provide service to humans for millions of years!
  3. No greenhouse gas emissions, no global warming, and no environmental concerns of any kind (air, water, or land).
  4. No hikes in the cost of the energy supply over the fiscal year.
  5. Important processes that are highly energy-intensive, like the desalination of seawater for fresh drinking water, could be carried out at low cost.

CONCLUSION:

Here is a quote from one of the greatest physicists of the 20th century:


So there is no doubt that it was the feats of our earlier scientists and engineers that led us to where we are today, and it is we who will decide the future of human civilization. We must be happy that we are putting in these efforts at an incredible rate, and in the meantime we have to make sure that we too, at the individual level, are playing our assigned role in this universe of the "ALMIGHTY".

“You have patience level 10 if you have read the whole article, because it is quite lengthy, although keeping it short has been tried.” - Rahul

Thanks for your time and patience.

Keep reading, stay blessed!

TEAM CEV

HIGGS BOSONS: Giving Universe the Mass

Reading Time: 8 minutes

In the introduction to this massive ~1,600-word blog, I must let you know that it intends to ignite or thrill some interested minds by discussing how we humans have understood the universe so far, and how the path-defining experiments have led us to understand the universe as it is. This blog is about the Nobel-prize-winning theory of the existence of the Higgs field, the experiment that cost $13.25 Bn, the results of the experiments, the conclusions, and the aftermath.


To understand this science of the universe as clearly as 2 + 2 = 4, we have to start with the Standard Model and the flaw it had in the 1960s.

So let’s start with the standard model.


THE STANDARD MODEL :

All the particles in this universe can be divided into just two categories, namely elementary and composite particles. Elementary particles are the fundamental ones, meaning that, as far as we know today, they are not made up of any more basic particles. Combining them, we get composite particles.

Elementary particles are then categorized into particles having half-integer spin, the fermions, and particles having integer spin, the bosons.

Now the fermions are further divided into two groups on the basis of whether or not they interact through a fundamental force called the strong interaction.

You probably know that there are only four fundamental forces in nature: the strong interaction, the weak interaction, electromagnetism, and gravitation.

So all the fermions that interact via the strong force are called quarks, and those that don't are called leptons. In the quark category we have six flavors: up, down, top, bottom, charm, and strange. In the lepton category we have the electron, the muon, and the tau, together with their corresponding neutrinos.

Now come the bosons: the elementary particles having integer spin, also called force particles, because they mediate the forces between particles. The strong interaction is mediated by gluons, the weak interaction by the W and Z bosons, and the electromagnetic interaction by photons, while (still hypothetical) gravitons are thought to be responsible for gravitation.

I have set the scene for the story; now let's begin it. The quarks, which interact through the strong interaction by exchanging gluons, come together to form composites like protons (uud) and neutrons (udd), which can be electrically charged or neutral. These protons and neutrons bind together by virtue of the residual strong force to form a nucleus, which is positively charged; negatively charged leptons like electrons are then attracted to it and form orbits around it. This is the way atomic physics works.

But one thing could not be explained by the Standard Model: why do particles like quarks, electrons, and the W and Z bosons have mass, while particles like the photon, the gluon, and (in the original model) the neutrino are massless?

THE HIGGS FIELD THEORY:

A pioneering team of physicists, Peter Higgs among them, came up with an extension to the Standard Model, a new type of boson called the HIGGS BOSON, to explain why some particles have mass whereas some don't. According to the theory, mass is not a fundamental property of all particles; instead, there is a ubiquitous field permeating the whole universe that gives the effect of mass, or inertia, to some particles.

This purely hypothetical field was given the name of the Higgs field. Particles like electrons, quarks, protons, and neutrons interact with this field strongly and exhibit what we feel as inertia and mass. And here we go: the more strongly an object's particles interact with the field, the more massive it becomes.

One more concept from quantum theory: "every particle is a disturbance of its corresponding field", which is why Schrodinger talked of probability distributions. For example, gluons are disturbances of the gluon field, electrons are disturbances of the electron field, and, by analogy, Higgs bosons are disturbances of the Higgs field.

I know this might all sound very abstract, but if you have reached this far, you must not leave before the end.

EXPERIMENT OF THE MILLENNIUM: THE LARGE HADRON COLLIDER

So the experimental physicists and engineers at the European Organization for Nuclear Research, CERN (the acronym derives from Conseil européen pour la recherche nucléaire), in Geneva, Switzerland, began building the setup for an experiment that was expected to establish the Higgs field theory by confirming the existence of the Higgs boson.


Here comes the major part of this blog :

AIM OF EXPERIMENT:

The scientists had the theory of the Higgs field with them; to prove it correct, they had to show the existence of the Higgs boson. Considering Einstein's energy-mass equation, E = mc^2, if we can make elementary particles cruise at speeds close to that of light and make them collide, then there is a chance that their kinetic energy may convert into new elementary particles never seen before, like the Higgs boson.

After detailed study and simulations of the collisions, quantum mechanics indicated that about 90% of all the Higgs bosons created in collisions would be produced by the gluon-fusion process. Gluon-fusion predictions differ by around 20%, which can be taken as the theoretical uncertainty in the gluon structure, and the chance that two colliding gluons give rise to a Higgs boson is about 1 in 2 billion. However, in this experiment approximately 1 billion proton collisions occur every second, hence the production rate for Higgs bosons is roughly one every two seconds, which is not at all discouraging (see the quick arithmetic below).
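
As a sanity check on that rate, here is the arithmetic (my own, using the figures quoted above):

\[
R_{\mathrm{Higgs}} = \frac{10^{9}\ \text{collisions/s}}{2 \times 10^{9}\ \text{collisions per Higgs}} = 0.5\ \mathrm{s}^{-1},
\]

i.e. roughly one Higgs boson every two seconds.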

THE GIGANTIC NUMBERS OF LHC :

The world's largest and most powerful machine made by humans has spectacular numerical parameters associated with it. The protons need to travel at 0.99991c, so the chambers must hold a near-perfect vacuum to avoid collisions with stray gas molecules. Data flows literally every microsecond and must be stored and analyzed, and all of this requires engineering at a phenomenal level. The different sections, including powering, accelerating, steering and focusing, cooling, storage, and computing, work in sync, which surely earns the LHC the tag of the greatest effort by human beings to understand mother nature.


1. The machine consists of a tunnel 27 kilometers in circumference, buried as deep as 175 meters, located at the France-Switzerland border.


2. The energy requirement is pretty high: an estimated 800,000 megawatt-hours (MWh) is consumed annually, costing around $30 million per year, which is enough energy to power 300,000 homes throughout the year.


3. The magnets are very large, weighing several tons. They steer protons traveling at 99.99% of the speed of light. They are cooled down to 1.9 K (-271.25 degrees Celsius), which is colder than the vacuum of outer space!


4. This machine has six observation points, each loaded with detectors that act as giant microscopes and digital cameras; the largest of these detectors measures about 45 m long and 25 m tall and weighs about 7,000 tons.


5. Nearly 150 million sensors collect data during an experiment, generating data at about 700 megabytes per second (MB/s). On a yearly basis, 15 petabytes (15 million GB) of data are stored at CERN. It was to tackle exactly this kind of data-sharing problem at CERN that Sir Tim Berners-Lee, together with engineer Robert Cailliau, invented the World Wide Web in 1989, giving universities and labs around the globe a platform to store, analyze, discuss, and share the data.

THE EXPERIMENT BEGINS :

  1. Hydrogen atoms enter the source chamber of the linear accelerator at a very precisely controlled rate. Under a high-strength electric field, they are stripped down to bare hydrogen nuclei: protons.
  2. To intensify the beam and accelerate it further, the protons are directed into circular boosters, since linear accelerators cannot be made long enough.
  3. Using an oscillating electric field, kinetic energy is pulsed into the beam, while a perpendicular magnetic field keeps it rotating in a circular path.
  4. The beam goes through the Proton Synchrotron (PS) and the Super Proton Synchrotron (SPS) to get further intensified and energized. By the time it enters the 27 km circular tubes, it has a velocity of 99.99% of "c" and an energy of 450 GeV!
  5. Sophisticated kicker magnets make the two beams exit the SPS and travel through the two 27 km tubes in opposite directions. The pulsed electric field continues to add energy to them, and large superconducting magnets bend these particles around the ring at such speeds.
  6. The velocity just before the moment of collision is only about 10 km/h less than the speed of light! The collision confines a very large amount of energy (2 × 7 TeV = 14 TeV) in a very tiny volume of space. The energy has no option but to convert into mass, E = mc^2 comes into action, and a wide range of subatomic particles are formed out of this huge energy (see the worked numbers below).
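
To get a feel for those numbers, here is a rough conversion (my own arithmetic, taking 1 eV = 1.602 × 10^-19 J):

\[
E = 14\ \mathrm{TeV} \approx 2.2 \times 10^{-6}\ \mathrm{J}, \qquad m = \frac{E}{c^{2}} \approx \frac{2.2 \times 10^{-6}\ \mathrm{J}}{(3 \times 10^{8}\ \mathrm{m/s})^{2}} \approx 2.5 \times 10^{-23}\ \mathrm{kg},
\]

the energy equivalent of roughly 15,000 proton masses packed into a single collision.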

Result analysis:

On 14 March 2013 CERN confirmed that:

“CMS and ATLAS have compared a number of options for the spin-parity of this particle, and these all prefer no spin and even parity [two fundamental criteria of a Higgs boson consistent with the Standard Model]. This, coupled with the measured interactions of the new particle with other particles, strongly indicates that it is a Higgs boson.” ~CERN

The Higgs boson, as predicted, was highly unstable and decayed within about 10^-22 seconds into pairs of photons (γγ), W and Z bosons (WW and ZZ), bottom quarks (bb), and tau leptons (ττ), which was perfectly consistent with the Standard Model! This led to confirmation of the existence of the "HIGGS BOSON"!

The following are extended results offering strong support, for further reading:

https://docs.google.com/document/d/1V8HkMjAcDEVukqUUuROC8qZCT-m04LImsmBBFCAOoPs/edit?usp=sharing

To get a better feel for the LHC, follow this beautiful video:

Conclusion: First of all, the effort and initiative taken by this organization, CERN, should be appreciated. This feat can also serve as a great source of inspiration and motivation for the new generation of engineers and physicists. This experiment's home gave us the World Wide Web (www) and opened up a huge source of knowledge for us. Moreover, the results of this experiment help us understand mother nature more closely, and these results might serve the advancement of science in the future, here on EARTH!
In the end, a huge thanks for your time and patience.
The writer will feel appreciated if you follow up with a question!
Thank you.
TEAM CEV