
BLACKOUTS: Facing The Outrageous

Reading Time: 14 minutes

INTRO:

Let us first put on the table why blackouts deserve our immediate attention, by analysing the destruction they are capable of causing.

We are guided by one question: What is the core reason for blackouts?

Are they technical faults, or are they caused intentionally by a few malicious minds to wreak havoc on a whole society or country?

The answer is both.

There have been many cases where the electrical system itself failed to keep up, whether due to overloading or faults like birds tangling in lines, trees falling on them, and so on.

And there have also been cases where a group of wicked hackers, armed with nothing more than a laptop, caused the whole system to collapse and brought modern urban society to its knees. This terrifying act of spreading chaos across cities, parts of countries or even a whole country is called a “CYBER ATTACK”. Some nations consider cyber attacks more dangerous than ballistic or even nuclear missile attacks.

A widespread shutdown of power plants across a nation can typically lead to the following consequences:

In the first hours of the outage, all electrified transport comes to an immediate halt. Trains and metros stranded in remote locations, on bridges, inside tunnels and everywhere else leave their passengers in turmoil.

Industries like manufacturing and packaging are brought down, causing financial losses to mount at an exponential rate. Soon houses exhaust their battery backups, and essential appliances like fridges, fans, coolers, ACs, lights and computers all go dead.

Important and critical systems like communication networks and major cooling systems come to a halt. Services such as water supply, sewage treatment plants, incineration plants, hospitals, ATMs and banks go offline one by one.

Now imagine yourself stuck in this situation, what would you do?

Can you imagine the loss of lives, property and more? In this way, a city can be turned into a cremation ground in a matter of days.

History is the greatest proof of the destruction blackouts can cause. The Ukrainian blackout of 2015, the Indian blackouts of 2012 and 2001, the American and Italian blackouts of 2003 and the 1999 blackout in Brazil are a few of the names. Nations around the globe treat the grid as a delicate string of the urban lifeline and thus show huge concern for it.


We all know that the world will never see another Great Fire of London like that of 1666, but we cannot forget it. The urban fire-safety system is the aftermath of that catastrophe; yet by revisiting the event, we might come up with solutions more viable and promising than the existing ones. Similarly, let us take the two most instructive cases of blackouts and try to analyse the incidents.

CASE 1: North Indian grid failure of 2012

At the end of July 2012, the nation witnessed the two largest back-to-back blackouts in its history. Those two blackouts exposed the weak and overstretched condition of the Indian power grid in managing rising load demand, and on the other hand opened up opportunities to work on the grid's vulnerability to failure in the future.

WHAT HAPPENED?

30th July 2012:

  • On 30 July 2012, at 0233 Hrs, a disturbance in the Northern Regional grid led to a blackout covering Delhi, Uttar Pradesh, Uttarakhand, Rajasthan, Punjab, Haryana, Himachal Pradesh, Jammu and Kashmir, and Chandigarh.
  • Nearly 300 million people, then about 25% of the Indian population, were affected.
  • Monday morning hours were accompanied by non-operational trains and traffic signals. Several hospitals, water-treatment plants and refineries were shut.
  • Supply to railway stations and airports was restored only by 0800 Hrs.
  • The Northern Regional system was fully restored by 1600 Hrs.

31st July 2012:

  • Another disturbance, at 1300 Hrs on 31 July 2012, affected the Northern, Eastern and North-Eastern Regional grids, causing blackouts in 21 states.
  • Nearly 600 million people, about 50% of the Indian population, were without power in the peak heat season.
  • More than 300 trains were cancelled, around 200 miners were trapped in eastern India as lifts failed, vehicular traffic jammed, and many critical services were affected.
  • Power supply to emergency loads like railways, metros and airports was mostly restored by about 1530 Hrs.
  • The system was fully restored by about 2130 Hrs on 31 July 2012.

The apex power-sector body, the CEA (Central Electricity Authority), came up with reports on what caused the failure and suggested measures to prevent such cases in future. All of them have been well implemented, and our power grid has since become more resilient. A blog merely restating them is not needed, as every minute detail can be accessed online; the point here is to analyse the event and extract lessons for ourselves.

Here are the antecedent conditions of the three main elements of the grid, which give us clues to what triggered the shutdown on 31 July.

Load Quantum and distribution:

2012 saw a failed monsoon in northern India. Also, the end of July marks peak summer, and in these seasons each section of the grid works at its maximum capability to supply farmers' heavy-duty submersible pumps and households' ACs, coolers and fans.

Conditions were similar in the Eastern Regional grid. However, the coastal regions of the Western and Southern Regional grids were at low to moderate loads.

Transmission:

By 2006 the NR, ER and NER grids had been synchronised at 50 Hz to form the so-called NEW grid, and in 2012 the grid authorities were preparing to synchronise the Southern grid with the NEW grid under the government's 'One Nation, One Grid' scheme. The grid was in a phase of expansion and transformation, the laid-down regulations were loosely followed, and the grid was not capable of significant power transfer among the different regional grids.

Also, many major lines were under construction or offline for other reasons, for example the 400 kV Bina-Gwalior (2) line. The list of all transmission links out of service that day is given in report no. 4 in the references at the end.

Generation:

As the grid was under expansion, some new power stations were running under test, like the Sipat Thermal Power Station in the Western Regional grid. In fact, the western region had surplus generation due to the low load demand in the west.

Meanwhile, some thermal and gas power stations in the NR were in outage due to unavoidable reasons like coal/gas shortage or under-requisition by stakeholders. Some hydro stations were in outage due to high silt during the summer, e.g. Vishnuprayag HPP.

Following is the generation and load demand at 1257 Hrs:


POWER FLOW AT 1257 HOURS:

The antecedent status of load, transmission and generation resulted in the power flow given below:


The scheduled load demand in the NR was 2022 MW, but the actual withdrawal was unexpectedly high at 4454 MW. The huge difference was drawn entirely from the power-surplus regions, enormously stressing the inter-regional transmission links.
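From the figures above, the scale of the overdraw is easy to quantify; a quick sketch:

```python
# NR load figures at 12:57 Hrs, taken from the text above.
scheduled_mw = 2022   # scheduled NR withdrawal
actual_mw = 4454      # actual NR withdrawal

overdraw_mw = actual_mw - scheduled_mw
overdraw_pct = round(overdraw_mw / scheduled_mw * 100)
print(overdraw_mw)    # 2432 MW drawn over schedule
print(overdraw_pct)   # about 120% above the scheduled drawal
```

More than double the scheduled quantum was being pulled through links that were never meant to carry it.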

AND THE LARGEST EVER “BLACKOUT” BEGINS…….

  • At 13:00:13 Hrs the important high-capacity 400 kV Bina-Gwalior link, directly transferring power from the WR to the NR grid, tripped. This event was followed by the greatest drawback of any synchronised grid: “the cascading effect”.
  • With the Gwalior region separated from the WR grid, it continued to receive supply via the 400 kV Agra-Gwalior link. This changed the direction of power flow, and massive power was diverted to the NR from the WR via the WR-ER-NR route, immediately stressing those links and tripping them as well.
  • Following these inter-regional trippings, power transfer was hammered.
  • The power-surplus Western Regional grid's frequency immediately overshot to 51.46 Hz, triggering regional tripping. The over-speeding alternators isolated from the grid one by one, and the WR suffered a blackout.
  • The enormously power-deficit Northern Regional grid's frequency immediately settled at 48.12 Hz. The slowed alternators were cut out by the control and protection systems, and this grid also experienced a blackout. The ER and NER met the same fate.
  • The Southern Regional grid, importing 2000 MW from the NEW grid through the Talcher-Kolar HVDC bipole link, saw its frequency decline from 50.46 Hz to 48.88 Hz and also saw many alternators going offline.

And by 13:00:19 Hrs all of this had happened, and India witnessed a massive blackout covering 21 states and affecting 600 million consumers.
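The opposite frequency excursions in the surplus and deficit islands follow directly from the swing equation: an uncontrolled island drifts at a rate proportional to its generation-load imbalance. A minimal sketch, with illustrative inertia and capacity figures (not values from the report), shows the direction and speed of the drift:

```python
# Minimal frequency-excursion sketch based on the swing equation:
#   df/dt = f0 * delta_P / (2 * H * S_base)
# All numeric inputs below are illustrative assumptions.

F0 = 50.0  # nominal frequency, Hz

def frequency_after(delta_p_mw, s_base_mw, h_sec, t_sec, steps=1000):
    """Integrate the uncontrolled frequency drift of an islanded grid.

    delta_p_mw : generation minus load (negative = deficit)
    s_base_mw  : total rated capacity of the island (assumed)
    h_sec      : aggregate inertia constant in seconds (assumed)
    """
    f = F0
    dt = t_sec / steps
    for _ in range(steps):
        f += F0 * (delta_p_mw / s_base_mw) / (2 * h_sec) * dt
    return f

# A surplus island's frequency rises, a deficit island's falls:
print(round(frequency_after(+2000, 40000, 5.0, 6.0), 2))  # surplus island: rises above 50 Hz
print(round(frequency_after(-2400, 40000, 5.0, 6.0), 2))  # deficit island: falls below 50 Hz
```

In six seconds, which is roughly how long the cascade took, an unchecked imbalance of a few per cent is enough to push frequency past the trip limits of the alternators.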

FAILURES:

Let us analyse what exactly went wrong and what could have been done to avoid it:

  1. Firstly, load forecasting went severely inaccurate for the NR and ER grids, creating situations that required immediate fallback measures like load shedding.
  2. Various defence mechanisms, like the UFR (Under-Frequency Relay) and the rate-of-change-of-frequency relay, failed to execute the load shedding that could have averted the blackout.
  3. On the other hand, the surplus-generating WR grid kept injecting the demanded power through the inter-regional links, and continued to do so even when the loading on the links far exceeded their ratings.
  4. Various power plants, like the under-test Sipat TPS, continued injecting unscheduled power even after the WRLDC issued verbal and written messages to lower plant output.
  5. The investigating team also suggested significant reforms in the communication and SCADA systems.
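The load shedding that failed in point 2 is normally automated by under-frequency relays. Here is a minimal sketch of a staged UFLS (under-frequency load shedding) scheme; the thresholds and percentages are illustrative assumptions, not the Indian grid's actual settings:

```python
# Illustrative UFLS scheme: as frequency falls through successive
# thresholds, successive blocks of load are shed to re-balance
# generation and demand. Stage settings below are assumed.
UFLS_STAGES = [
    (49.2, 10),  # at 49.2 Hz shed 10% of load
    (49.0, 15),  # at 49.0 Hz shed a further 15%
    (48.8, 20),  # at 48.8 Hz shed a further 20%
]

def load_to_shed(frequency_hz):
    """Return the cumulative percentage of load a UFLS relay would shed."""
    return sum(pct for threshold, pct in UFLS_STAGES if frequency_hz <= threshold)

print(load_to_shed(49.5))   # 0  -> healthy, nothing shed
print(load_to_shed(48.12))  # 45 -> every stage triggered at the NR's 48.12 Hz
```

Had relays like these acted on 31 July, enough load would have been dropped in the deficit regions to arrest the frequency collapse before the alternators tripped.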

CONCLUSION:

Grid indiscipline, the failure to strictly follow the laid-down regulations, inaccurate forecasting and the failure of the protection systems were the reasons for the blackout.

The investigation team came up with measures to prevent the recurrence of such events, like reviewing the transfer capability of important inter-regional links, special protection schemes, effective enforcement of rules and strict action against rule-breakers.

THE BLACKOUT OCCURRED IN 2012 AND THE ISSUES HAVE ALREADY BEEN FIXED, SO WHY ON EARTH DID WE STUDY THIS??????

And certainly, this is the most urgent question to answer.

The most crucial lesson to understand is operational discipline. This event set an example of how defying institutional rules and regulations can be followed by a catastrophe of such scale. So engineers, workers and executives must understand the responsibility of their jobs, and the consequences if they fail to perform.

The technical lesson to be extracted is that all engineering systems, like electric grids, the internet and transportation systems, need to remain under a cycle of continuous improvement and evolution to become better and safer. Maybe the SPS needs to be reviewed again considering current load demand, link capacities need to be increased, and the rules need to be applied more strictly.

CASE 2: Ukrainian Grid Cyber Attack of 2015:

 

INTRODUCTION TO CYBER ATTACKS ON GRID:

Accommodating renewable energy and continuously meeting the dynamic load demand push the grid towards modernisation, enabling remote-access capability to centrally and effectively distribute power from surplus to deficit regions. The recent fast-paced development of electronics has opened up opportunities to equip the grid with superpowers like HVDC, SCADA and many other IP-based digital technologies.

But this is no time to relax and chill. Smart-grid technologies litter the grid with security vulnerabilities and expose it to numerous threats, presenting very complicated problems to deal with. Hackers have a whole quiver of techniques to get into the system and bring it down.

The world was aware of all these threats, but on 23 December 2015 in Ukraine it witnessed for the first time the ravages of a cyber attack on an electric grid causing a power outage.


The attack was professionally planned, well resourced, highly synchronised, multistage and multisite, and it had long-lasting aftermaths. (Each term is explained below.)

WHAT HAPPENED?

On 23 December 2015, at 3:35 PM local time, three power distribution companies were targeted, and the adversaries succeeded in de-energising seven 110 kV and twenty-three 35 kV substations for three hours, leaving 225,000 end users without power in the December-end winter. The outage was the result of third-party entry into the companies' computer systems, SCADA, ICS (Industrial Control System), etc.

Three other organizations, from other critical infrastructure sectors, were also breached but did not experience operational impacts. 

ATTACK WALK THROUGH:

Following is the timeline of the attack, which gives us an idea of the vulnerabilities a smart grid is subject to:

  1. RECONNAISSANCE: Intelligence gathering, planning and preparation for the attack are estimated to have been well under way in or before May 2014. Active reconnaissance, like spying and direct interaction with employees, and passive reconnaissance, like open-source intelligence gathering, yielded the attackers information such as the type of technology deployed, its associated vulnerabilities, possible attack vectors, hardware models, the operating systems on workstations, etc.
  2. WEAPONIZATION: Analysing the vulnerabilities of the system, an appropriate malware was developed and a delivery mode selected. The attackers weaponised MS Word and MS Excel documents by embedding the malware named BlackEnergy-3, which installed itself on operator machines when they enabled macro scripts to view the content.
  3. DELIVERY: Spear-phishing was used in the campaign to deliver the malware on such a large scale. Reports say various other malware, like GCat and Dropbear, were also discovered in other sectors such as railways.
  4. ESTABLISH A CONNECTION: BE-3 works by modifying internet settings to establish a persistent command-and-control connection to an external server, giving unauthorised access to system data.
  5. HARVEST CREDENTIALS: BE-3 could receive a range of commands from the external server: download, upload, install, uninstall, execute, update configuration, etc. The attackers used BE-3 to acquire employee credentials by a wide range of methods, like keylogging and targeting password managers.
  6. LATERAL MOVEMENT: BE-3 also helped the attackers with internal reconnaissance, enabling them to identify targets, discover the whole network and move from the corporate network into the ICS network, which gave them access to the HMI (Human-Machine Interface) and field-device control.
  7. VPN GATEWAY: Access to VPN (Virtual Private Network) credentials allowed the attackers to gain remote access to the corporate and ICS networks along multiple paths, while also reducing the visibility of their malicious activity.
  8. TARGET IDENTIFICATION: HMI workstations, data-centre servers, control-centre UPSs, serial-to-Ethernet converters and the substation breakers were selected to be attacked sequentially.
  9. ATTACK PREPARATION: At this stage the attackers began developing malicious firmware to replace the healthy firmware already installed on the ICS devices, coded to execute after a reboot or on some trigger. Investigators clearly stated that such successful execution of high-grade malicious firmware updates is possible only with continued capability testing, which requires the attackers' own assessment systems; this indicated a state-sponsored cyber attack.
  10. DATA-DESTRUCTION MALWARE: Alongside the firmware update to trip the breakers, they loaded the networked systems with malware called KillDisk to erase all history logged before the attack, to avoid being traced and to hinder analysis of how it happened.
  11. TDOS ATTACK: In the telephony denial-of-service, the attackers generated fake calls and flooded the service centre, effectively stopping affected customers from reporting the outage to the company.
  12. SCHEDULED UPS SHUTDOWN: In this Ukrainian distribution system the UPSs were also networked, so the attackers hijacked them too, stopping critical systems like telephony and the data centres from operating after the tripping and thereby increasing the time for recovery.
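A first line of defence against the weaponization and delivery stages above is screening inbound attachments for macro-capable Office formats before they reach an operator's inbox. A minimal sketch, assuming a simple filename-extension check (the list is illustrative; real mail gateways also inspect file contents, not just names):

```python
# Macro-capable Office formats commonly abused for malware delivery.
# The list is an illustrative assumption; legacy .doc/.xls files can
# also carry macros, which is why they appear here.
SUSPICIOUS_EXTENSIONS = {".docm", ".xlsm", ".pptm", ".doc", ".xls"}

def flag_attachment(filename):
    """Return True if the attachment should be quarantined for review."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS)

print(flag_attachment("invoice.xlsm"))  # True  -> quarantine for review
print(flag_attachment("report.pdf"))    # False -> pass through
```

A check this crude would not have stopped a determined state-sponsored crew, but it raises the cost of the cheapest step in their kill chain.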


AND THE ATTACK BEGINS…..

After prolonged reconnaissance, clandestine access and so much preparation, on 23 December 2015 at 1515 Hrs local time the attackers took complete remote control of the substation systems of three Ukrainian DISCOMs. In some cases the updated firmware began tripping the breakers automatically, while in others a phantom mouse was used: the operator in the control room watched the cursor trip breakers without ever touching his own mouse. The TDoS attack eliminated any chance of learning what was happening on the ground, as the telephone lines were jammed with fake calls. After the breakers in 57 substations had been tripped, KillDisk came into action, erasing the systems' data. Before wrapping up, the passwords of some users were changed to keep them from ever re-entering the system, and the malicious firmware left some of the devices impossible to ever recover. The UPSs went down as scheduled, and the operators, left with no way to regain remote access to their field devices, had to close the breakers manually one by one, which took nearly six hours.

No doubt this attack has been logged in the history of the power-grid industry as one of the most evil professionally planned, well-resourced, highly synchronised, multistage, multisite attacks.

FAILURE POINTS:

  1. No active network-security monitoring to check for malicious activity once an intruder got into the system; this could easily have ruled out the nearly year-long reconnaissance that was the core of the intelligence gathering and final execution.
  2. No two-factor authentication for VPN access into the ICS network.
  3. No frequent password changes by employees, who were also unaware of being trapped by spear-phishing.
  4. No two-step verification even for critical firmware updates.
  5. Critical data exposed in open sources like the company website.
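Failure point 2, the missing second factor on VPN access, is cheap to fix. Below is a minimal sketch of a time-based one-time password (TOTP) generator of the kind standardised in RFC 6238, using only the Python standard library; the shared secret shown is purely illustrative:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, built on the RFC 4226 HOTP core)."""
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A VPN gateway would demand this code in addition to the password:
secret = b"demo-shared-secret"  # illustrative; real secrets are random and per-user
print(totp(secret))             # a 6-digit code that changes every 30 seconds
```

With a second factor like this in place, the harvested passwords alone would not have opened the ICS network to the attackers.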

TAKEAWAYS:

What is astonishing is not the professionalism of the attackers in planning and executing each step (spear-phishing, malware and firmware development, harvesting credentials, scheduling the UPS shutdown, TDoS, etc.), but their ability to study the whole system by reconnaissance, aided by BE-3, without being traced for so long. What happened in Ukraine was a perfect combination of very odd possibilities that the world might never see again. But the attack surely shows us how a system that appears impenetrable can be hijacked through minute elementary flaws, human negligence and unprofessionalism.

LESSONS FOR INDIAN GRID:

Here comes the part for which we followed through these two mammoth disaster analyses. Our grid has undergone many reforms since the 2012 failure and has become more resilient through a push for:

  1. Synchronisation of all regional grids.
  2. A smart grid, which gives remote-operation capability and significantly increases
    1. Reliability: by lowering the time for fault detection and hence recovery.
    2. Efficiency: by making better use of renewables.
    3. Economy: through better resource management.

However, it would be immature to talk about the benefits of the 'smart grid' without considering the threats it creates. Also, the level of skill and sophistication shown by the attackers in the 2015 attack alarms grid operators across the world to stay a step ahead in preparedness.


In the 2015 attack, distribution utilities were the target; had it been generation or transmission, the aftermath would certainly have been exponentially more disastrous. In the American test experiment called the Aurora Vulnerability, malicious computer firmware tripped a circuit breaker and instantaneously reclosed it, pulling the plant's generator out of phase. And in a synchronised grid like India's, nothing could be more fatal than the generator of even one plant going out of phase with the grid: immediate destruction and cascading failure would be the consequence.
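The standard safeguard against this out-of-phase closing is a synchronism-check relay, which permits a breaker to close only when both sides are sufficiently aligned. A minimal sketch with illustrative limits (actual relay settings vary by utility and machine):

```python
# Illustrative synchronism-check logic: allow a breaker to close only
# if voltage magnitude, frequency and phase angle on the two sides are
# close enough. The limits below are assumptions, not standard settings.
MAX_ANGLE_DEG = 10.0     # maximum phase-angle difference
MAX_FREQ_DIFF_HZ = 0.1   # maximum slip frequency
MAX_VOLT_DIFF_PU = 0.05  # maximum voltage difference (per unit)

def may_close(angle_deg, freq_a_hz, freq_b_hz, volt_a_pu, volt_b_pu):
    """Return True only if closing the breaker will not shock the machine."""
    return (abs(angle_deg) <= MAX_ANGLE_DEG
            and abs(freq_a_hz - freq_b_hz) <= MAX_FREQ_DIFF_HZ
            and abs(volt_a_pu - volt_b_pu) <= MAX_VOLT_DIFF_PU)

print(may_close(5.0, 50.0, 50.02, 1.0, 1.01))  # True  -> safe to close
print(may_close(40.0, 50.0, 50.3, 1.0, 1.0))   # False -> Aurora-style reclose blocked
```

The Aurora experiment worked precisely because the compromised controller bypassed this check; a relay that cannot be overridden in firmware closes that door.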

LAST WORDS:

Unfortunate but true: there is no such thing as absolute security. Against a capable attacker with the right means and motive, the targeted utility can never be protected enough. Total elimination of risk may never be possible, but damage can surely be kept to a minimum.

Strong communication protocols, multifactor authentication, continuous network-security monitoring and cyber-threat awareness are some of the areas that must keep evolving with time to keep the grid safe and the customers happy!

REFERENCES:

Numerous reports were taken into account while studying what happened on 31 July 2012 and on 23 December 2015:

  1. https://ics.sans.org/media/E-ISAC_SANS_Ukraine_DUC_5.pdf
  2. http://web.mit.edu/smadnick/www/wp/2016-22.pdf
  3. https://www.boozallen.com/content/dam/boozallen/documents/2016/09/ukraine-report-when-the-lights-went-out.pdf
  4. http://www.cercind.gov.in/2012/orders/Final_Report_Grid_Disturbance.pdf

 

Keep reading, keep learning!


TEAM CEV!!

How to Build a Forest in your Backyard – The Miyawaki Method

Reading Time: 8 minutes

by-
Naman Mathur – CEV Member
Mechanical Engineering (NIT Surat)

Let us start this blog with something a little off-topic.

Have you seen the growing trend of JCB Construction…?
Have you ever wondered why there is such hype around something that is pretty usual to find?

It’s just a machine. Memes on JCB construction do nothing but expose the dark side of our earth: we have damaged our environment greatly with the introduction of the concrete world. Cities and nations tend to wear new skyscrapers and structures as a badge of honour of development. We too acknowledge cities like Singapore, New York and Mumbai as ‘very developed cities’, feeding the misconception that development means only giant concrete construction. But there needs to be a change in thinking, one that associates development with the construction of forests.

‘Construction of a forest’ seems like a foolish concept only because we have caught the wrong definition of a forest. People think a forest is an isolated piece of land where animals live together. But I believe the woods can be an integral part of urban existence. For me, a forest is a place so dense with trees that you can’t just walk into it. It doesn’t matter how big or small it is: it can be a big spread of land with acres and acres of trees, or just a patch in your backyard. Size doesn’t matter to the functioning of a forest. Most of the world we live in today was once forest; this was, of course, before human intervention. We built up our cities on those forests, forgetting that this motherland belongs to us just as it does to the other 8.4 million species on the planet. We constructed a concrete world on our forests, and we all know the consequences: global warming, climate change, depletion of groundwater, soil erosion and what not!

 

Conferences and discussions are often held on the issues of global warming and climate change, with experts stating the fate the human world will face if we don’t do something to avert the problem. But one of the most crucial pressure points of this problem, one even the experts fail to reflect on, is the sense of self-motivation towards a better environment. With the growing number of events on environmental and social awareness, a considerable number of people have got the gist of the issue. But a huge percentage still fail to feel the drive to work for a better environment. We humans are born a little selfish, and thus we are very slow to act on environmental issues.

 

So I think that to avert the wrath of nature we need to think out of the box, beyond the conventional thought process. Just as our ancestors built a concrete world on our beautiful natural land, we now need to build the natural world back onto this concrete world. The Miyawaki method of afforestation is one such measure which, if implemented and maintained adequately, can be a stepping stone to a better environment. In this method, we construct forests on the concrete world. It also promotes the idea of the natural development of a place.

ORIGIN OF MIYAWAKI METHOD

The Miyawaki forest was pioneered by the Japanese botanist Akira Miyawaki, a world-renowned specialist working towards the restoration of natural vegetation on degraded land. Miyawaki showed that the natural Japanese temperate forest should be composed mainly of deciduous trees, while in practice conifers often dominate. Deciduous trees are still present around tombs and temples, where they are protected from exploitation for religious and cultural reasons.


 

As his research progressed, he found that the forest vegetation of Japan had declined due to the introduction of alien species by man. He immediately felt the need to take charge and stop this human interference in nature.

 

Difference between Miyawaki Method and Conventional Method of Afforestation

Consider the Miyawaki method a more advanced version of the conventional method of plantation. The Miyawaki method takes into account features like the nutrients in the land, the native vegetation of the area and other scientific background of the site. The conventional method plants just one tree per square foot, as opposed to thirty trees per square foot; limited species are used in the old method, while 25-50 species go into the multi-plantation Miyawaki method. Miyawaki forests, also known as multi-layer forests, become self-sufficient after 3-4 years, whereas extensive maintenance is necessary for growth with the usual way of plantation. So clearly the Miyawaki method, though a little complex, can be very useful and better than the conventional method.

 

Process of Miyawaki Method of Afforestation

This article shares the primary steps to create small forests in small urban spaces, as little as 30 square feet. If followed effectively, these steps can ensure you create a natural, wild, maintenance-free, native forest.


 

STEP-1 Soil Analysis and Quantifying Biomass

Soil analysis helps in getting information about properties like water-holding capacity, water infiltration, root-perforation capacity and nutrient retention. Also check whether the texture is sandy, loamy or clayey.


Requirements for the soil
  1. Nutrients are essential in the ground for the healthy growth of the forest, preferably from organic fertilisers like cow manure and vermicompost. Cow manure is found easily at dairy farms, while vermicompost provides small amounts of nutrients over a long period.

  2. This method of afforestation requires a considerable amount of water, so it is essential to maximise the utilisation of water. We therefore use water-retaining materials like coco-peat or dry sugarcane stalk.

  3. Proper aeration is essential for the roots to grow, as they are the base of the trees. So perforation is a requirement in the Miyawaki method; rice husk, wheat husk, etc. can be used as perforator materials.

  4. At the age of 6-8 months, when the plants are young, direct sunlight can dry out the soil and make conditions difficult for the young saplings. So mulch is used to insulate and protect the soil; options include rice straw or corn stalk.

 

STEP-2 Selection of Tree Species
  • Plants are area- and climate-specific in nature. So in this method we first need to study the native plant species of the given area.

  • The forest must contain trees of different heights, ages and nutrient needs. The ideal sapling height varies from 50 to 80 cm.

  • Major species: 5 native species commonly found in the area should be identified; these will constitute 50-60 per cent of the forest.

  • Supporting species: other common species of the area will constitute 25-40 per cent, and other minor species will make up the rest.
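The species mix above translates directly into a sapling budget for a plot. A small sketch, assuming an illustrative planting density of 3 saplings per square metre and example percentages chosen from within the stated ranges:

```python
# Illustrative sapling budget for a Miyawaki plot. The density and the
# exact percentage split are assumptions for the sake of the example.
DENSITY_PER_SQM = 3  # saplings per square metre (assumed)
MIX = {"major": 55, "supporting": 33, "minor": 12}  # per cent (assumed)

def sapling_plan(area_sqm):
    """Split the total sapling count for a plot across species groups."""
    total = area_sqm * DENSITY_PER_SQM
    return {group: total * pct // 100 for group, pct in MIX.items()}

print(sapling_plan(100))  # {'major': 165, 'supporting': 99, 'minor': 36}
```

So even a modest 100 sq m plot calls for 300 saplings, which is what gives a Miyawaki forest its density.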

 

STEP-3 Forest Designing
  • Proper planning is essential to increase the efficiency of the growing trees and to maximise the use of resources.

  • A master plan is designed to identify the exact area for afforestation and to get an idea of the materials required for the method.

  • Also, to ensure that we don't waste water in the process, we need to plan water usage based on daily water access, backed by borewells and tanks.

  • If the project is big enough, we also need to identify spaces for materials, saplings, equipment, etc.
 
STEP-4 Area Preparation
  • The site should be inspected effectively to ensure the feasibility of the project. Proper fencing should be constructed to make sure that no cattle can damage the growing forest.

  • Weeds and debris should be removed and disposed of effectively. Ensure pulled-out weeds are disposed of away from the site, or else they may regrow.

  • A facility for watering the plants should be installed. The water requirement is around 5 litres per square metre per day.

  • The site should get proper sunlight for a minimum of 8-9 hours a day for better growth of the trees.

  • The slope of the land should be such that water and nutrients spread evenly across it.

  • If the site is big, the area can be divided into parts for proper monitoring and maintenance of the forest.
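The watering figure above scales linearly with plot area; a quick sketch (the plot size is just an example):

```python
WATER_PER_SQM_LITRES = 5  # litres per square metre per day, from the text

def daily_water_litres(area_sqm):
    """Daily watering requirement for a Miyawaki plot of the given area."""
    return area_sqm * WATER_PER_SQM_LITRES

need = daily_water_litres(100)  # a 100 sq m plot
print(need)       # 500 litres per day
print(need * 30)  # 15000 litres per month -> size the tank/borewell for this
```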

 

STEP-5 Tree Plantation
  • This might be the most important step of the Miyawaki method of afforestation.

  • First, dig the soil to a depth of 1 m. Then put half of the dug soil back into the pit uniformly. This increases the perforation in the soil and loosens it too.

  • Also, mix the biomass with the soil to increase the nutrient count in the land.

  • In the Miyawaki method all the saplings are planted together on the mound, rather than in individual pits dug for each sapling as in the conventional method.

  • To ensure the forest grows in three different layers (shrub, sub-tree and canopy), we need to plant the saplings in a specific manner.


  • Try not to place two similar trees next to each other, and ensure no specific pattern forms while planting. Remember, the goal is a random plantation resulting in a dense group of trees.

  • Materials like perforators and water retainers should be well mixed into each mound.

  • After the saplings are planted, mulch should be laid out evenly on the soil in a 6-8 inch layer. The mulch needs to be tied down with bamboo ropes so that it doesn't blow around.

  • As discussed earlier, watering should be performed effectively, at about 5 litres per square metre.

 

STEP-6 Maintenance and Monitoring for the Forest
  • Plants are very sensitive at a young age, so the saplings should be monitored for at least the first 8-12 months, once every 1-2 months. If any changes are required in the early stages, improvise immediately.

  • Watering every day is the base of the method. If there are some issues in the watering process, the whole project can be jeopardised.

  • It is very important to keep the forest weed-free for the first 2-3 years; after that it is self-sufficient at keeping the weeds away. Keep the forest clean and free of plastic, paper, etc., as we are growing a natural forest.

  • Also, one important rule of the Miyawaki method of afforestation is that absolutely no chemical pesticides or fertilisers should be used to kill the pests. Leave the pests untouched; the forest will slowly build its own mechanism to keep itself healthy.

  • Never remove the fallen leaves from the land, as they feed the soil microbes and increase the nutrient level of the forest land. Never cut the forest in any manner.


 

Conclusion

The Miyawaki Methodology has been hugely successful, with over 17 million trees planted across 1,700 locations around the globe. Such forests are multi-layered and mimic the densest parts of native undisturbed forests. Compared with conventional plantations, a Miyawaki forest can grow ten times faster, be 30 times denser and 100 times more biodiverse.


 

‘Construction of a Forest’ doesn’t seem so vague now after all. If the Miyawaki Methodology is executed effectively, we can grow a forest in our backyard or on any other suitable land in this urban concrete world, and take a step closer to a better world.

AUGMENTED REALITY: More than what we see!!

Reading Time: 10 minutes

The picture depicts what Augmented Reality can be like!

What would you do in a foreign place whose native language you don’t know? How would you read the signs? Would you feel worried? 

Well, you don’t have to worry. With Google Translate’s new AR function, you can easily scan text using your phone’s camera and have it translated into any language. Cool, right? But hold on, what is AR?

How does it work? 

What are its applications? 

Just relax and read on to find out everything you need to know about this cool technology.

So let’s get started with the definitions…


 

WHAT IS AUGMENTED REALITY (AR)?

According to a dictionary, to augment something means to make it more effective by adding something to it.

Moving onto a technical definition, augmented reality is the technology that enhances our physical world by superimposing computer-generated perceptible information on the environment of a user in real-time. 

This integrated information may be perceived by one or more senses and enhances one’s current perception of reality with dazzling visuals, interactive graphics, amazing sounds and much more. (Exciting!)


 

You must have played the popular AR game Pokemon GO, which revolutionized the gaming industry and is a huge success, making 2 million dollars per day even now. Pokemon GO uses a smartphone’s GPS to determine the user’s location. The phone’s camera scans the surroundings, and the game digitally superimposes its fictional characters onto the real environment.

Some other popular examples of AR apps include Quiver, Google Translate, Google Sky Map, Layar, Field Trip, Ingress, etc. And who doesn’t know about the cool Snapchat filters!

I KNOW ABOUT VIRTUAL REALITY…HOW IS IT DIFFERENT?

Augmented reality is often confused with virtual reality. Although both these technologies offer enhanced or enriched experiences and change the way we perceive our environment, they are different from each other.

The most important distinction between the two is that virtual reality creates the simulation of a new reality, completely different from the physical world, whereas augmented reality adds virtual elements like sounds and computer graphics to the physical world in real-time.


A virtual reality headset uses one or two screens that are held close to one’s face and viewed through lenses. It then uses various sensors in order to track the user’s head and potentially their body as they move through space. Using this information, it renders the appropriate images to create an illusion that the user is navigating a completely different environment.

Augmented reality on the other hand, usually uses either glasses or a pass-through camera so that the user can see the physical environment around them in real-time. Digital information is then projected onto the glass or shown on the screen on top of the camera feed. 

WHERE DID IT ALL START?

In 1968, Ivan Sutherland, a Harvard professor, created “The Sword of Damocles” with his student, Bob Sproull. The Sword of Damocles was a head-mounted display that hung from the ceiling, where the user would experience computer graphics that made them feel as if they were in an alternate reality.

In 1990, the term “Augmented Reality” was coined for the first time by a Boeing researcher named Tom Caudell.

In 1992, Louis Rosenberg of the USAF Armstrong Research Lab created the first real operational augmented reality system, named Virtual Fixtures: a robotic system that places information on workers’ work environments to increase efficiency, similar to what AR systems do today.

The technology has progressed significantly since then. (Now keeping aside the further details in history so that you don’t get bored!)

For details of history and development of augmented reality, check out the link given below.

https://www.youtube.com/watch?v=2PaJ_safMIo 

TYPES OF AR

1. Marker-based AR (or Image Recognition)

When the camera is scanned over a visual marker such as a QR code, it produces a 3D image of the detected object. This enables the user to view the object from various angles.

2. Markerless AR

This technology uses the location-tracking features of smartphones. It works by reading data from the phone’s GPS, digital compass and accelerometer to provide information based on the user’s location, and is quite useful for travellers.

3. Projection-based AR

If you are thinking that this technology has something to do with projection, then kudos you are absolutely correct! This technology projects artificial light onto surfaces. Users can then interact with projected light. The application recognizes and senses the human touch by the altered projection (the shadow).

4. Superimposition based AR

As the name suggests, this AR provides a full or partial replacement of the object in focus by replacing it with an augmented view of the same object. Object recognition plays a vital role in this type of AR.

 

HOW DOES AR WORK? 

Now that you know something about AR, your technical minds must be wondering how the technology works. Here is a brief technical explanation of the supercool technology.

AR is achieved by overlaying the synthetic light over natural light, which is done by projecting the image over a pair of see-through glasses, which allow the images and interactive virtual objects to form a layer on top of the user’s view of reality. Computer vision enhances the reality for users in real-time.
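The heart of that overlay step is deciding where on the screen a virtual object should be drawn. As a toy illustration (not any particular AR library’s API), the sketch below uses the standard pinhole-camera model; the focal length and image-centre values are assumed, not taken from any real device.

```python
def project_point(x, y, z, f=800.0, cx=640.0, cy=360.0):
    """Pinhole-camera projection: map a 3D point in camera space
    (metres, z pointing forward) to 2D pixel coordinates.
    f = focal length in pixels; (cx, cy) = image centre
    (assumed values for a 1280x720 display)."""
    if z <= 0:
        return None  # point is behind the camera, nothing to draw
    u = cx + f * x / z
    v = cy + f * y / z
    return (u, v)

# A virtual label floating 2 m ahead and 0.5 m to the right of the user:
print(project_point(0.5, 0.0, 2.0))  # -> (840.0, 360.0)
```

A real AR system repeats this for every frame, using the pose estimated by tracking (e.g. SLAM) to transform virtual objects into camera space before projecting them.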

Augmented Reality can be displayed on several devices, including screens or monitors or handheld devices or smartphones or glasses. It involves technologies like S.L.A.M. (simultaneous localization and mapping) which enables it to recognize 3D objects and track physical location to overlay augmented content, depth tracking (briefly, a sensor data calculating the real-time distance to the target object). AR has the following components:

1. Cameras and sensors

They are usually on the outside of the augmented reality device. A sensor collects information about the user’s real-world interactions, while a camera visually scans the surroundings to gather data and communicates it for processing. The device takes this information, determines where surrounding physical objects are located, and then formulates the desired 3D model. For example, the Microsoft HoloLens uses specific cameras for specific duties, such as depth sensing. The megapixel cameras in common smartphones can also capture the information required for processing.

2. Processing: 

Augmented reality devices basically act like mini-supercomputers: they require significant processing power and utilize many of the same components that our smartphones do. These include a CPU, a GPU, flash memory, RAM, Bluetooth/Wi-Fi, and a global positioning system (GPS) microchip. Advanced devices such as the Microsoft HoloLens additionally use an accelerometer to measure speed, a gyroscope to measure tilt and orientation, and a magnetometer to function as a compass, providing a truly immersive experience.

3. Projection:

This refers to a miniature projector found on wearable augmented reality headsets. The projector can turn any real surface into an interactive environment. As mentioned earlier, the data taken in by the camera is used to examine the surrounding world, is processed further, and the digital information is then projected onto a surface in front of the user, which could be a wrist, a wall, or even another person. The use of projection in AR is still in the developing stage; with further advances, playing a board game on a table without a smartphone might become possible.

4. Reflection: 

Augmented reality devices have mirrors to assist your eyes to view the virtual image. Some AR devices have “an array of many small curved mirrors”, others have a simple double-sided mirror to reflect light to the camera and the user’s eye. In the case of Microsoft Hololens, the use of “mirrors” involves holographic lenses that use an optical projection system to beam holograms into your eyes. A so-called light engine emits the light towards two separate lenses, which consists of three layers of glass of three different primary colours. The light hits these layers and enters the eye at specific angles, intensities, and colours, producing the final image on the retina. 

 

AR: CURRENT APPLICATIONS

AR is still in the developing stage, yet it has found applications in several fields, from simple gaming to really important ones like medicine and the military. Here are some of the current applications of AR (the list is not exhaustive).

GAMING:


The gaming industry is evolving at an unprecedented rate. Developers all over the world are thinking of new ideas, strategies and methods to design and develop games to attract gamers all across the globe. There are a wide variety of AR games available in the market ranging from simple AR indoor board games to advanced games which could include the players jumping from tables to sofas to roads. AR games such as Pokemon Go have set a benchmark in the gaming industry. Such games expand the field of gaming as they attract gamers who easily develop an interest in games that involve interaction with their real-time environment.

ADVERTISING:


AR has seen huge growth in the advertising sector over the past few years and is becoming popular among advertisers who are trying to win more customers by making engaging ads with AR. Buyers tend to retain information conveyed through virtual ads, and AR ads provide an enjoyable 3D experience that gives users a better feel of the product. For example, the IKEA Place app lets customers see exactly how furniture items would look and fit in their homes. AR ads establish a connection between the consumer and the brand through real-time interaction, which makes consumers more likely to buy a product. Many researchers believe that AR is similar to other digital technologies; however, its interactive features set it apart.

EDUCATION:


Classroom teaching is rapidly undergoing changes. With the introduction of AR in traditional classrooms, boring lectures can become extremely interesting! Students can easily understand complex concepts and remember information better, as it is easier to retain information from audio-visual stimulation than from traditional textbooks. Today, teens increasingly own smartphones and other electronic gadgets that they use for games and social media, so why not use AR in education! AR provides an interactive and engaging platform that makes the learning process enjoyable. With the development of AR, not just classroom teaching but also distance learning can become more efficient, giving students greater insight into the subjects they study. Google Translate now offers an augmented reality function with which students can point the camera at text and have it translated in real-time.

 

MEDICINE AND HEALTHCARE:


Augmented reality can help doctors diagnose symptoms accurately and treat diseases effectively. It is helpful to surgeons performing invasive surgeries involving complex procedures: they can detect and understand problems in patients’ bones, muscles and internal organs and decide which medication or injection would best suit the patient. For example, AccuVein is a very useful augmented reality application used to locate veins. In emergency operations, surgeons can save time with smart glasses that give instant access to the patient’s medical information, so they need not shift their attention elsewhere in the operation theatre. Medical students can gain practical knowledge of all parts of the human body without having to cut one open.

 

WHAT’S IN THERE FOR THE FUTURE?

AR has captured our imagination like no other technology. From something seen only in science-fiction films to something that has become an integral part of our lives, it has come a long way and has found success in many fields.

Ever since the introduction of AR-enabled smartphones, the number of smartphone users has increased. The fastest-growing technologies, AI and ML, can be combined with AR to enhance the experience of mobile users.

Augmented reality saw record growth in 2018. AR is positioned for strong commercial support, with big tech names like Microsoft, Amazon, Apple, Facebook, and Google making heavy investments. It is expected that by 2023, the installed user base for AR-supporting products like mobile devices and smart glasses will surpass 2.5 billion people, and industry revenue should hit $75 billion. Industry players in the augmented reality world expect 2019 to be a year marked by a rapid increase in the pace of industrial growth.

The future of AR is bright and it is expected that its growth will increase further with more investments from big tech companies that are realizing the potential of AR.

That’s all for this blog! 

Thanks for reading and I hope this blog gave you some new information and insights about augmented reality. Please give your valuable feedback.

-By Moksha Sood (2nd year, CHEM DEPT)

KEEP READING, KEEP LEARNING

TEAM CEV!!!!!

Getting Emotionally Intelligent !!!

Reading Time: 11 minutes

All of us, throughout our lives, have to go through situations where we feel our heart in our throat, our ears hear our own heartbeat, the head sweats as if it were a 45-degree scorching summer while at the same time the legs shiver as if it were a -5-degree freezing winter, the hands lose the strength to hold even a 5-gram pen, and the mouth stutters to speak a word!

We get a snapshot of all of these when we try to recollect what happened when:

  1. We were about to tell our parents of very low grades.
  2. We were about to recite our poem in the school assembly.
  3. We were about to start our presentation in college.
  4. We came to know about the loss or accident of our close one.

But not all people lose their minds that way: we have all seen people confessing calmly in front of authorities, we have heard the flawless poem of the first-prize winner, and we have witnessed impressive presentations in our college.

So don’t you wonder how some people around us are so cool even in the worst conditions?  

I believe that it’s not only technical knowledge and expertise that make them different from the ordinary; it is also the ability to not fall apart under extreme stress. They don’t lose their temper easily; they think calmly and then come to a wise decision. This is also the most basic skill in the corporate world, and it is called “emotional intelligence”.

I was introduced to the concept of “emotional intelligence” by a friend when I failed to score above the cutoff in the JEE practice exams. He made me aware of how emotional intelligence comes into the picture by influencing the psychological state of mind and body in situations where the stakes are high. This motivated me to discuss with you a criterion that significantly determines your probability of success when the stakes are high.

We have heard about many monks and saints who had incredible control over their emotions and sentiments.  They’re excellent decision makers, and they know when to trust their intuition. Regardless of their strengths, however, they’re usually willing to look at themselves honestly. They take criticism well, and they know when to use it to improve their performance.

We live in a world where each of us sees the world in a different way and interprets it in a different way. So it is very essential in the corporate world to understand each other and accept others’ views. Most big leaders have this ability.

We will talk about the different aspects of emotional intelligence and discuss how we can achieve it. Deep down, you already know much of what this blog covers, but many of us don’t apply these things in our lives. This blog intends to make you realize the necessity of these ideas using the most effective way of learning, i.e. learning by questioning. So I will answer three path-defining questions about this concept of EI:

    1. What makes a leader?

I have found, however, that the most effective leaders are alike in one crucial way: They all have a high degree of what has come to be known as emotional intelligence. It’s not that IQ and technical skills are irrelevant. They do matter, but mainly as “threshold capabilities”; that is, they are the entry-level requirements for executive positions. But my research, along with other recent studies, clearly shows that emotional intelligence is the necessary condition of leadership. Without it, a person can have the best training in the world, an incisive, analytical mind, and an endless supply of smart ideas, but he still won’t make a great leader.

It isn’t IQ or technical skills, says Daniel Goleman. It’s emotional intelligence: a group of five skills that enable the best leaders to maximize their own and their followers’ performance. When senior managers at one company had a critical mass of EI capabilities, their divisions outperformed yearly earnings goals by 20%.

Big companies are now introducing “competency models” for their employees to aid in identifying, training, and promoting likely stars in the leadership firmament.

According to the Harvard Business Review, there are 5 components of EI.

  • Self-awareness
  • Self-regulation
  • Motivation
  • Empathy
  • Social skills


Self-awareness:-

In the book “The Monk Who Sold His Ferrari”, Robin Sharma says that you can’t love others unless you love and completely understand yourself. If you know who you are and what your desires are, then and only then can you understand other people’s points of view. Self-awareness is all about recognizing your emotions, strengths, limitations and actions, and understanding how these affect others around you. By analysing yourself, you increase the likelihood of handling and using constructive feedback effectively. You can also improve your organization’s performance; for example, you can hire an appropriate person for a post in which you struggled. Now, how can we achieve self-awareness?

The simple answer is keeping a diary, in which you write about the situations that have triggered disruptive emotions in you, such as anger, and your thoughts and behaviours during those situations.

People who have a high degree of self-awareness recognize how their feelings affect them, other people, and their job performance. Thus, a self-aware person who knows that tight deadlines bring out the worst in him plans his time carefully and gets his work done well in advance. Another person with high self-awareness will be able to work with a demanding client. She will understand the client’s impact on her moods and the deeper reasons for her frustration. “Their trivial demands take us away from the real work that needs to be done,” she might explain. And she will go one step further and turn her anger into something constructive. There is a self-awareness model which is called the Johari window. Check it out. click here

 

Self-regulation:-

Imagine a situation: you are the leader of a group which has to give a presentation on your company’s new product to the CEO and board members. The members made many mistakes and the show flopped. What will your reaction to your colleagues be after this mess? You might find yourself tempted to pound the table in anger or kick over a chair. You could leap up and scream at the group. Or you might maintain a grim silence, glaring at everyone before stalking off.

But if you have a gift for self-regulation, you will choose a different approach. You will pick your words carefully, acknowledging the team’s poor performance without rushing to any hasty judgment. You will then step back to consider the reasons for the failure.

Are they personal—a lack of effort?

Are there any mitigating factors?

What was your role in the debacle?


After considering these questions, you will call the team together, lay out the incident’s consequences, and express your feelings about it. You will then present your analysis of the problem and a well-considered solution.

See, this ability can help you work patiently even in the worst conditions.

So, what are the ways to develop this regulation in yourself?

First of all, try to understand the situation. Don’t act immediately; hold yourself back. Figure out what mistakes were on your side, admit them, and make a commitment to face the consequences. Your ability to self-regulate as an adult has roots in your development during childhood; learning how to self-regulate is an important skill that children acquire both for emotional maturity and for later social connections.

One good way to regulate yourself is to write what’s on your mind on a piece of paper and then tear it up and throw it away. It might seem a bit crazy, but it’s a really nice way to calm down. Deep breathing exercises will also help you gain self-regulation.

 

Motivation:-


If there is one trait that virtually all effective leaders have, it is motivation. They are driven to achieve beyond expectations—their own and everyone else’s. The key word here is “achieve”. Plenty of people are motivated by external factors, such as a big salary or the status that comes from having an impressive title or being part of a prestigious company. By contrast, those with leadership potential are motivated by a deeply embedded desire to achieve for the sake of achievement.

In the Harry Potter series by J.K. Rowling, there’s a lesson on how to beat your fear: think about the best memories of your life. Likewise, whenever you feel low, you should think about the victories you have had in the past. Motivation together with self-regulation makes for the perfect state of mind. Another way to keep yourself motivated is to take short breaks during your work hours.

We might all find it very boring, but if we want to keep growing and stay motivated, we should keep a diary in which we write our strengths and weaknesses, and read it daily. I used to keep a diary in high school to record my mistakes. I read them daily, and it eventually made me better and better in my studies. I think it’s the best way to keep yourself on track.

The thing is, we don’t spend time on ourselves, and that is what pulls us back. Whenever we are free, instead of wasting that time on the phone and other silly things, we should think about ourselves. Imagine what you want to become, whatever that may be. Think about what the outcomes of your activities will be. By doing this, you will find yourself more concentrated and satisfied.

Thoughts are vital, living things, little bundles of energy. Most people don’t give any thought to the nature of their thoughts and yet, the quality of your thinking determines the quality of your life. Thoughts are as much part of the material world as the lake you swim in or the street you walk on. Weak minds lead to weak actions.

Empathy:-


Of all the dimensions of emotional intelligence, empathy is the most easily recognized. We have all felt the empathy of a sensitive teacher or friend; we have all been struck by its absence in an unfeeling coach or boss. But when it comes to business, we rarely hear people praised, let alone rewarded, for their empathy. The very word seems unbusinesslike, out of place amid the tough realities of the marketplace.

But empathy doesn’t mean a kind of “I’m OK, you’re OK” mushiness. For a leader, that is, it doesn’t mean adopting other people’s emotions as one’s own and trying to please everybody. That would be a nightmare—it would make action impossible. Rather, empathy means thoughtfully considering employees’ feelings—along with other factors—in the process of making intelligent decisions.

Empathy is particularly important today as a component of leadership for at least three reasons: the increasing use of teams; the rapid pace of globalization; and the growing need to retain talent.

 

Social skills:-


The first three components of emotional intelligence are self-management skills. The last two, empathy and social skill, concern a person’s ability to manage relationships with others. As a component of emotional intelligence, social skill is not as simple as it sounds. It’s not just a matter of friendliness, although people with high levels of social skill are rarely mean-spirited. Social skill, rather, is friendliness with a purpose: moving people in the direction you desire, whether that’s agreement on a new marketing strategy or enthusiasm about a new product.

Socially skilled people tend to have a wide circle of acquaintances, and they have a knack for finding common ground with people of all kinds—a knack for building rapport. That doesn’t mean they socialize continually; it means they work according to the assumption that nothing important gets done alone. Such people have a network in place when the time for action comes.

Is social skill considered a key leadership capability in most companies? The answer is yes, especially when compared with the other components of emotional intelligence. People seem to know intuitively that leaders need to manage relationships effectively; no leader is an island. After all, the leader’s task is to get work done through other people, and social skill makes that possible. A leader who cannot express her empathy may as well not have it at all. And a leader’s motivation will be useless if he cannot communicate his passion for the organization. Social skill allows leaders to put their emotional intelligence to work.

Go through this video; it briefly covers the above. click here

 

    2. Why is it so hard to be fair?

When people feel hurt by their companies, they tend to retaliate. And when they do, it can have grave consequences. A study of nearly 1,000 people in the mid-1990s, led by Duke’s Allan Lind and Ohio State’s Jerald Greenberg, found that a major determinant of whether employees sue for wrongful termination is their perception of how fairly the termination process was carried out. Only 1% of ex-employees who felt that they were treated with a high degree of process fairness filed a wrongful termination lawsuit versus 17% of those who believed they were treated with a low degree of process fairness. To put that in monetary terms, the expected cost savings of practicing process fairness is $1.28 million for every 100 employees dismissed. That figure—which was calculated using the 1988 rate of $80,000 as the cost of a legal defence—is a conservative estimate, since inflation alone has caused legal fees to swell to more than $120,000 today. So, although we can’t calculate the precise financial cost of practicing fair process, it’s safe to say that expressing genuine concern and treating dismissed employees with dignity is a good deal more affordable than not doing so.
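The arithmetic behind that $1.28 million figure can be checked in a few lines of Python:

```python
# Back-of-the-envelope estimate from the study quoted above:
# 17% of "low process fairness" dismissals sue, vs 1% of "high",
# at roughly $80,000 per legal defence (the 1988 figure).
employees_dismissed = 100
lawsuit_rate_low, lawsuit_rate_high = 0.17, 0.01
cost_per_defence = 80_000

# Extra lawsuits attributable to low process fairness, per 100 dismissals:
extra_lawsuits = (lawsuit_rate_low - lawsuit_rate_high) * employees_dismissed
savings = extra_lawsuits * cost_per_defence
print(f"${savings:,.0f}")  # -> $1,280,000
```

As the article notes, this is conservative: redoing the sum with today’s higher legal fees only widens the gap.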

Many executives turn to money first to solve problems. But HBR research shows that companies can reduce expenses by routinely practicing process fairness. Think about it: asking employees for their opinions on a new initiative, or explaining to someone why you’re giving a choice assignment to his colleague, doesn’t cost much money. Of course, companies should continue to offer tangible assistance to employees as well. By practicing process fairness, however, companies could spend a lot less money and still have more satisfied employees.

 

    3. How to build the emotional intelligence of groups?

Indeed, the concept of emotional intelligence had a real impact. The only problem is that so far emotional intelligence has been viewed only as an individual competency, whereas the reality is that most work in organizations is done by teams. And if managers have one pressing need today, it’s to find ways to make teams work better.

No one would dispute the importance of making teams work more effectively. But most research about how to do so has focused on identifying the task processes that distinguish the most successful teams—that is, specifying the need for cooperation, participation, commitment to goals, and so forth. The assumption seems to be that, once identified, these processes can simply be imitated by other teams, with similar effect. It’s not true. By analogy, think of it this way: a piano student can be taught to play Minuet in G, but he won’t become a modern-day Bach without knowing music theory and being able to play with heart. Similarly, the real source of a great team’s success lies in the fundamental conditions that allow effective task processes to emerge—and that cause members to engage in them wholeheartedly.

I hope we all would become more emotionally intelligent after continuous implementation of these measures in our daily life.

Thanks for reading, and I would feel appreciated if this is followed by your questions!

By Rushiraj Gohil

Keep reading, Keep learning

TEAM CEV!!!!

ELECTRICAL POWER SYSTEM : The Indian Frame

Reading Time: 11 minutes

“The system is designed to give ultimate plug and play convenience, seemingly as dependable as the sun rising in the morning.” – Thomas Overbye

The electrical power systems are among the world’s largest machines, yet they go unnoticed every day, and that marks their success.

If you just plugged your laptop into a socket in the wall, then congrats, you are now part of the world’s third largest machine.

First of all, thanks to the power system of India, which has provided the electrical energy for you to read this blog and for me to write it.

So, the Indian Power System, technically called the Indian Power Grid, is the third largest in the world in terms of generation and consumption of electrical power, after China and the USA (as of 2019). It has an installed capacity of 356.100 GW as on 31 March 2019, which is hard to visualise using common examples.

What does this statement actually mean?

It states that the national grid is capable of delivering 356.100 GJ of real electrical energy every second. In the fiscal year 2017-18, a total of 1486.5 TWh of generation was recorded on the grid; again, there is no practical analogy for that much energy.

In a country of 1,366.65 million people, this gives an average electrical energy consumption of 1149 kWh per capita per year. Though we rank 3rd in power generation, we rank very low (around 140th) in gross electricity consumption per capita, whereas China stands far better (close to 50th) despite being the most populated nation in the world.
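These figures are easy to sanity-check. A quick Python sketch (the population value is approximate, and the official per-capita figure is computed from a broader gross-generation number, so the naive division below lands slightly lower than the quoted 1149 kWh):

```python
# Quick sanity checks on the grid figures quoted above.
installed_gw = 356.100          # installed capacity, 31 March 2019
generation_twh = 1486.5         # generation recorded, FY 2017-18
population_million = 1366.65    # approximate population (assumption)

# A watt is one joule per second, so 356.1 GW delivers 356.1 GJ each second.
gj_per_second = installed_gw
print(gj_per_second)            # -> 356.1

# Naive per-capita consumption in kWh per year:
per_capita_kwh = generation_twh * 1e9 / (population_million * 1e6)
print(round(per_capita_kwh))    # -> 1088
```

The gap between ~1088 and the official 1149 kWh comes from how the official statistics account for generation, but the order of magnitude checks out.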

Nevertheless, this massive engineering system comprises thousands of generators in hundreds of power plants across the landmass, delivering power through millions of miles of transmission and distribution lines to a hungry load of 1486.5 TWh spread over an area of 3.28 million square km. NOT A CAKE WALK!!!!

HOW DO THEY DO THAT???

In this blog, we will discuss, from India’s perspective, how an electrical power system operates round the year to provide customers in 29 states with a reliable, secure, economical and quality power supply.

Let us break the whole system down into its major components and understand it comprehensively.

REQUIREMENTS

So the expectations and standards are quite high, although not impractical, because you can witness them being met every day; the irony is that we fail to appreciate it.

“Consumers demand a reliable, secure, economical and quality power supply.”

   1. Reliability:

Reliability allows customers to have a continuous, uninterrupted supply; in other words, minimal power cuts. So, how do we make the system reliable?

One thing is for sure: various components will fail at some point, or at least go in for servicing. It is beyond our control to completely prevent equipment failures. But to provide continuous power, the system should see negligible impact even if a few major components collapse at a given time; failing that, recovery should be very fast (which depends heavily on the fault type).

Power system reliability comes from the heavily interconnected national grid. Numerous resources spread across the country are pooled into forming a common national grid, in which the power can be shared in all possible permutations and other resources can carry on even if some component fails.

So, when the summer load peaks in North India, a generator sitting in South India can provide the power; at other times, the very large, economical generators in the thermal power stations of western and eastern India provide the power for the bulk of consumption.

Sooner or later you will come to see that the bigger a power system becomes, the greater its reliability.

        2. Security:

Power security allows customers to depend on the power system for their critical assets as well: hospitals, sewage treatment plants, the transport industry, nuclear power plant cooling systems, laboratories, etc. Security assures a customer of the future availability of power, so that they can depend on it.

Now how to introduce the concept of security in the power system? In other words, how can a power plant ensure supply in future?

This characteristic requires the power system to be able to predict the future load; only then can

  1. proper fuel stocks be obtained for the short term (a few days or weeks);
  2. proper plans for new plants and transmission and distribution lines be prepared for the long term (a few years or a decade).

This involves massive-scale forecasting, scheduling and planning, handled by a hierarchical energy control system formed by the area, state and national load dispatch centres.

      3. Economical:

Cheap, affordable power is also an important factor in the success of the power industry: the average annual income of an Indian is about ₹1,13,000, which is extremely low for leading a decent life, so a common man does not have room in his budget for massive electricity bills. Only proper management of power generating resources across the nation can produce economical power; “how” is discussed in the upcoming sections.

      4. Quality

Power quality has three facets which we have already discussed in the blog named Electrical Power Quality.

Electrical Power Quality

We will discuss the reliability and economy aspects in this blog, and power security in the next blog, LOAD DISPATCH CENTERS!!

 

Overview of the Power System


RELIABILITY

 

INTERCONNECTION

By now you have at least come to know that reliability requires interconnection.

Let us understand the reason behind it.

We can get a gist of the concept of interconnection by a simple real-life example:

In a well planned and developed city, we see a web of roads, which gives a person numerous options to move from any starting point to any destination. If a particular road becomes unavailable, he can still reach his destination using other roads without much delay.

Similar is the case with the power system. Various parties pool their resources in such a way that the failure of any major component (be it a power station, a transmission line, or the distribution transformer itself) barely affects the system until the component is pulled back into service; the remaining healthy system is able to bear the load of the loss. In this way we ensure reliability, and the consumer gets an uninterrupted, continuous power supply.

Interconnection comes with many other essentials:

  1. Improved economy: different power plants have different economics. Large coal-fired thermal power plants require a huge initial investment but are highly economical for very large power generation, whereas gas- or diesel-fired plants are economical only at small loads. If we pool all types of power plants, satisfactory economic generation can be achieved: for example, supplying the large, constant base load from large, efficient thermal or nuclear stations and using gas/diesel plants only at times of peak demand. Think of the situation had we not chosen to interconnect (every individual system would have to run its own uneconomical sources to meet its peak loads).
  2. Environmental impact can be reduced by mostly using the more eco-friendly sources, like the efficient large thermal power plants and the hydro plants. It is the same as running the most efficient pump for the larger share of the time.
  3. Penetration of renewables is only possible with interconnection: for example, the power generated by windmills in Kutch, Gujarat is not fully utilised in Kutch; instead it is transmitted via the interconnected grid to load centres where there is a power deficit.

But pooling of resources in a power system does not come easy. There are a whole lot of technical glitches and drawbacks, and it requires continuous, sophisticated monitoring and control to keep the interconnected grid up.

SYNCHRONISATION OF ALTERNATORS:

The very first requirement of an interconnected system is that the generating workhorses (three-phase AC alternators) must run in synchronism with all the other generators in the system. It is like all the engines pulling a train in one direction; operating out of synchronism is like engines pulling the coaches in opposite directions.

We will see later that the whole nation is now synchronised.

So, an alternator in the thermal power plant at Jamnagar, Gujarat is in synchronisation with the super thermal plant at Farakka, West Bengal; similarly, the nuclear alternators at Kalpakkam are in sync with the TPS alternators at Rupnagar, Punjab.

You can refer for complete notes on paralleling of alternators below:

https://photos.app.goo.gl/6AJqPmVvdTXNw9aFA

TRANSMISSION NETWORK:

Once proper paralleling, or synchronisation, is done, the power is ready to flow along the desired paths from the different power plants to the hungry loads. Extra-high-voltage transmission lines carry the massive power from the generating stations to the distribution centres, from where it is dispatched to the load areas.

This is the physics of the phenomenon, let’s move to the engineering.

The engineering aim of the electrical power system is to keep the grid up throughout the year. Here the concept of roads, which we discussed earlier, returns: we have to build a network of heavily interconnected lines in such a way that power continues to flow even if some lines become faulty.

So let us directly take a worked out example: The western grid of India.

This is the power map of the western grid taken from http://www.cea.nic.in dated 31 Dec 2018.


To analyse this here is a list of:

  1. Major generating stations (>1000 MW)

[Tables: major generating stations (>1000 MW) of the western grid]

 

2. Major transmission lines:

1. 765 kV lines: Majorly used for very long distance transmissions (inter-regional)

    1. VIDHYACHAL – SATNA – BINA – INDORE – VADODARA – DHULE – AURANGABAD – SHOLAPUR – RAICHUR
    2. VIDHYACHAL – SATNA – BINA – JABALPUR – BHOPAL – INDORE (Bifurcates at Jabalpur towards Ranchi and Orai)
    3. VIDHYACHAL – SATNA – BINA- GWALIOR – AGRA
    4. SIPAT – SEONI – WARDHA – AURANGABAD – PADGHE (Bifurcates at Wardha towards Hyderabad)

2. 400 kV lines: form three major evacuation corridors from the east side to the west side of the western grid

  1. Upper links:
      1. VIDHYACHAL – JABALPUR – ITARSI – KHANDWA – DHULE (Line bifurcates at Itarsi towards Indore and Satpura also)
      2. VIDHYACHAL – SATNA – BINA – BHOPAL – ITARSI – INDORE

2. Middle links: BHILAI – KORADI – BHUSAWAL – ABHALESHWAR – PADGHE

3. Lower links: KORBA – RAIPUR – BHADRAWATI – PARLI – LONIKHAND – PADGHE

3. HVDC BIPOLES lines:

      1. CHANDRAPUR – PADGHE, ±500 kV; 1500 MW
      2. CHAMPA – KURUKSHETRA, ±500 kV
      3. MUNDRA – MAHENDRAGARH, ±800 kV

So the whole western grid, which includes MP, Maharashtra, Gujarat, Chhattisgarh, Goa and Daman & Diu, is linked by AC as well as DC links, and the whole grid runs in synchronism. We saw how the major load area of the region, i.e. the west part, sees a net flow of power from the east side, which has super power stations like Vidhyachal and Korba but no major load centres.

The pooling of resources has brought self-sufficiency, and synchronism keeps the grid stable at an operating frequency of 50 Hz.

But if we look at the bigger picture and consider the whole Indian peninsula, the situation changes.

ECONOMY

India is a country of extreme diversity in seasons, terrain, ethnicity, culture and many other factors, which results in highly dispersed loads, in both time and space. To add to this, the major energy resources are concentrated in a few regions.

We have huge coal deposits in central and east India, where there are no major load centres. South India is the electronics and heavy-industry hub but has very limited power resources (mainly coastal and hydel). North India is rich in hydel power, which sometimes meets the light demand of winter and monsoon, but becomes heavily deficit in hot summers as the reservoirs dry up.

The following picture depicts the story:

  1. Look at the major energy sources:

[Map: major energy sources]

2. And now a rough view of major consumer location:

[Map: major consumer locations]

Can you locate major cities like Delhi, Mumbai, Hyderabad, Ahmedabad, Bengaluru, Kolkata, etc.?

So we need not just a transmission system capable of transmitting power at low resistive loss, but a system of major arteries of electrical power that can also move power from surplus regions to deficit regions across this diversity of space and time; and on top of that, we want it to be highly reliable.

All the qualities mentioned above can be satisfied if we introduce the concept of one nation, one grid.

India is divided into five regions, each having its own regional grid namely:

  1. The Northern Grid;
  2. The North-Eastern Grid;
  3. The Southern Grid;
  4. The Western Grid;
  5. The Eastern Grid;

[Map: the five regional grids of India]

All the transmission lines end up at distribution substations from where power is transformed into a useable form, the 220 volt supply.

NOTE: Regional grids are networked such that power can be supplied to end loads through more than one transmission line and distribution substation, so that power remains available even if one transmission line goes down.

Moreover, these regional grids need to be interconnected, since not all of them are self-reliant in power generation the way the western grid is.

This whole system did not come up overnight. Here is an excerpt from a report of Power Grid Corporation of India Limited that describes the evolution of our national grid:

Evolution of National Grid

  • Grid management on a regional basis started in the sixties.
  • Initially, State grids were inter-connected to form regional grid and India was demarcated into 5 regions namely Northern, Eastern, Western, North Eastern and Southern region.
  • In October 1991 North Eastern and Eastern grids were connected.
  • In March 2003 WR and ER-NER were interconnected.
  • August 2006 North and East grids were interconnected thereby 4 regional grids Northern, Eastern, Western and North Eastern grids are synchronously connected forming central grid operating at one frequency.
  • On 31st December 2013, Southern Region was connected to Central Grid in Synchronous mode with the commissioning of 765kV Raichur-Solapur Transmission line thereby achieving ‘ONE NATION’-‘ONE GRID’-‘ONE FREQUENCY’.


We have already noticed the 765 kV transmission lines and HVDC links heading from the various mega power stations in the western grid towards other grids’ states, like Hyderabad (Southern), Agra & Orai (Northern), and Ranchi (Eastern).

This further interconnects the various grids, and we get a very large national grid in which power can be shared to improve the overall economy.

The following graphs indicate the inter-regional links: operating, under construction and proposed.

[Graphs: inter-regional links (operating, under construction and proposed)]

Good engineering must be complemented by good cooperation among the interconnected systems. This requires extensive data sharing, joint modelling and clear communication, which paves the way for a hierarchical system of regulatory organisations called the LOAD DISPATCH CENTERS.

Thanks for your time.

Keep Reading, Keep Learning

TEAM CEV!!!!

 

Getting Started With Competitive Programming

Reading Time: 5 minutes

Hey, are you interested in starting competitive programming?

 

Here is a guide, and maybe some motivation, for that.

 

Why Competitive Programming?

  • Competitive programming is a mind sport for the software/IT industry. It improves problem-solving in an interactive way and helps you develop algorithms for particular topics. Many tech companies use a competitive-programming round as the first round to filter out candidates, and many data structure and algorithm questions are asked in interviews. It simply helps you become a better problem solver. Some companies even offer direct interviews to candidates who have excelled in certain competitive programming competitions.

 

Here is a guide if you want to start.

 

The Language:

  • You can use any programming language (no, HTML is not a programming language). Just check whether it is an ICPC official language.
  • The most preferred are C, C++ and Java. Sometimes I see people running Python code and not getting the green tick (meaning all answers right) because of the time limit (Python has a longer run time than C++ or Java).
  • But in good competitions, you might not face any such problem.
  • Google Hash Code finalist Andrei Margeloiu says about Java in his article: “It’s slow. But it has a BigInteger class, even if there are very few problems that require using it. If the time limit is tight, you will get Time limit exceeded. Java is not accepted in all competitions.”

Now the choice is yours.

Firstly, write your code like an artist: it should be easy for another person reading your code to debug it.

Starting

  • Cement your basics: do at least 30 problems per topic (50 for arrays) on implementation; in any competition you can find one or two problems based on implementation.
  • For any language the basic topics are
    1. Data types,
    2. Branching,
    3. Looping (for practice, pattern programming is best),
    4. Time complexity,
    5. Arrays (for C, C++) (highly recommended),
    6. Structures (for C), classes (Java, C++ or any object-oriented language),
    7. Bit manipulation (highly recommended)

Some basic tricks:

You can always write

while(t--)

       {

       }

instead of

for(int i=0;i<t;i++){...}

 

Some basic algorithms

Sorting: bubble sort, quick sort, merge sort (implement them yourself)

Searching: linear search, binary search, ternary search (again, implement them yourself)

 

Now you can start competing in any competition. Your rank might be low at first, but you can still try, just to get into the environment.

 

Data Structure:

  • Arrays
  • linked list
  • Trees
  • Graphs
  • Stack
  • Queue

 

Parallelly,

 

  • You can start learning the libraries’ implementations

 

  • For C++ there are standard template libraries like
    • vector (mostly used),
    • map (mostly used),
    • set (mostly used),
    • queue,
    • deque,
    • stack,
    • list (haven’t used it since I learned it).
  • In every language there exist such libraries.

 

Library functions like sort, find, reverse, gcd, etc. are good, but it is better if you practice implementing them yourself first.

For competitions, you can use (my preference): HackerEarth, CodeChef, Codeforces.

  • HackerEarth: good for basics, or for a particular topic.
  • CodeChef: for ICPC-region-style competitions as well as long-style competitions; 3 per month.
  • Codeforces: better for good (efficient) approaches; at least 6 competitions per month.

Now there are several more things to learn, like efficient tree and graph algorithms (depth-first search, breadth-first search), Kadane’s algorithm, shortest-path algorithms for graphs, and segment trees.

All of this is sufficient for getting a job, but there are more topics still.

 

Typing Speed:

  • In CP, typing speed can make a difference, especially when a question is easy and everyone knows the approach; keep it at a decent 55 to 60 WPM.
  • Do not focus too much on it, but it can be a tiebreaker.

 

Competitions:

GOOGLE HASH CODE, ACM ICPC, GOOGLE CODE JAM, CODECHEF SNACKDOWN, and several competitions on HackerEarth, CodeChef and Codeforces.

 

GOOGLE HASH CODE: Hash Code is a team programming competition organized by Google for students and professionals around the world. You pick your team and programming language, and Google picks an engineering problem for you to solve. The contest kicks off with an Online Qualification Round, where your team can compete from wherever you’d like, including from one of the Hash Code hubs. Top teams are then invited to a Google office for the Final Round. More details

 

ACM ICPC (Association for Computing Machinery – International Collegiate Programming Contest): The ACM ICPC is considered the “Olympics of Programming Competitions”. It is, quite simply, the oldest, largest, and most prestigious programming contest in the world.

More Details

 

Google CODEJAM: Code Jam is Google’s longest running global coding competition, where programmers of all levels put their skills to the test. Competitors work their way through a series of online algorithmic puzzles to earn a spot at the World Finals, all for a chance to win the championship title and $15,000. More details

 

CodeChef SNACKDOWN: SnackDown is a global programming event that invites teams from all over the world to take part in India’s most prestigious multi-round programming competition. Hosted by CodeChef, SnackDown is open to anyone with a knack for programming and began in the year 2009. More details

Monthly

These competitions are held every month on a specific date/week/time. These competitions help you boost your profile on the respective website by ranking you based on your performance.

Long Challenge

  • CodeChef Long Challenge is a 10-day monthly coding contest where you can show off your computer programming skills. The significance being – it gives you enough time to think about a problem, try different ways of attacking the problem, read the concepts etc. If you’re usually slow at solving problems and have ample time at hand, this is ideal for you. CodeChef

Monthly CookOff

  • CodeChef Cook-Off is a two and half hour coding contest where you can show off your computer programming skills. CodeChef

Monthly Easy

  • A 3-hour challenge conducted in the first week of every month, comprising 6 algorithmic programming problems, held between 21:30 IST and 00:30 IST. Hackerearth

Monthly Circuits

  • Circuits take place during the third and fourth week of every month. The objective of Monthly Circuits is to challenge the talented and creative minds in competitive programming with some interesting algorithmic problems. The participants will be challenged by Multiple Problem Setters with 8 problems of varying difficulty levels in a duration of 9 days. Hackerearth

 

The Last Lesson:

The last lesson is: don’t get demotivated by CP (competitive programming). There may be stretches where you show no progress for 3 months. Try as many approaches as you can for a particular problem. And understand one thing: CP is not the only thing in computer science; if you are not interested in it, you can skip it.

That’s all from my side.

 

Golden Rule:

It is practice, practice, and practice. But don’t neglect any topic for too long.

 

Experience:

I started on 4th December 2017. After eleven months, my team got fifth rank in our college and first among our year, with an ICPC rank of 534 in the India region; I also got first rank in 2nd year, and respectable ranks in several CP competitions held at our college, NIT Surat. And it is fun to see your rank above your friend’s in a competition, and to get motivation when his/her rank is better than yours.

You can contact me at:

https://www.linkedin.com/in/dhvanil-vadher/

https://www.facebook.com/dhvanil.vadher

WhatsApp: 8780110809

Thanks for your time!!!

Why do Rockets love to fail?

Reading Time: 8 minutes

Author
Deepak Kumar
Propulsion Engineer, Dept. of Propulsion, STAR

“Rockets, they really don’t wanna work, they like to blow up a lot”

 

         – Elon Musk

If you take a look at the List of spaceflight-related accidents and incidents – Wikipedia, you’ll realize there have been countless failures. That’s the answer to “how many”.

 

Rockets can fail anytime. A rocket isn’t a simple machine at all: a massive structure with around 2.5 billion dynamic parts is liable to fail the moment any one of those parts says, “I can’t do this anymore, I’m done”.

 

Coming to some of the well-known rocket failures; these will help you learn how rockets fail!

 

1. The Space Shuttle Challenger Disaster


The Space Shuttle Challenger carried a crew of 7 when it disintegrated over the Atlantic Ocean. The disintegration was caused by the failure of one of the Solid Rocket Boosters (SRBs) during lift-off.

 

The SRB failure was caused by its O-rings. An O-ring is a mechanical gasket used to create a seal at an interface; here, that interface was between two fuel segments. The O-ring was designed to prevent the escape of gases produced by the burning solid fuel. But in the extreme cold on the morning of the launch, the O-ring became stiff and failed to seal the interface.


This malfunction caused a breach at the interface. The escaping gases impinged upon the adjacent SRB aft field joint attachment hardware (the hardware joining the SRB to the main structure) and the fuel tank. This led to the separation of the right-hand SRB’s aft field joint attachment and the structural failure of the external tank.


In the video below, the speaker mentions that the weather was chilly that morning and icicles had formed on the launch pad. One of the SRBs is clearly visible making its own way after the failure.



2. The Space Shuttle Columbia Disaster

Unlike the above failure, this failure occurred during the re-entry. But again, the story traces back to the launch. During the launch, a piece of foam broke off from the external fuel tank and struck the left wing of the orbiter.


This is an image of the orbiter’s left wing after being struck by the foam. The foam actually broke off from the bipod ramp that connects the orbiter and the fuel tank.


The foam hit the wing at nearly 877 km/h, damaging the heat shield below the orbiter. The piece of foam that broke off the external fuel tank was nearly the size of a suitcase and likely created a hole of 15–25 cm in diameter.


The black portion you see below the nose is the orbiter’s carbon heat shield.

On Feb 1, 2003, during re-entry at an altitude of nearly 70 km, the temperature of the wing edge reached 1650 °C and the hot gases penetrated the orbiter’s wing. The immense heat caused a lot of damage. At an altitude of nearly 60 km, the sensors started to fail, radio contact was lost, Columbia went out of control, and the orbiter’s left wing broke off. The crew cabin broke apart and the vehicle disintegrated.

 

 

You can clearly see the vehicle disintegrating. **The video is a big one, hang tight. 😉

 

3. The N1 Rocket Failure

Not many people know about this programme, run by the Soviets, whose first launch took place in 1969. The N1 remains one of the largest rockets ever built. It had its last launch in 1972; during this tenure there were four launches, and all of them failed. Yes, you heard it right: ALL OF THEM FAILED.


Before discussing the failures, there is one thing I never forget to mention about this rocket. Rockets rely on TVC (Thrust Vector Control) to change the direction of thrust: the nozzle is swivelled to alter the thrust direction.


This is TVC. But the N1 rocket used something called static thrust vectoring. There were 30 engines in stage 1, 8 engines in stage 2, 4 in stage 3 and 1 in stage 4.


Of the first-stage engines, 24 were on the outer perimeter and the remaining 6 around the centre.

To change the rocket’s direction, the thrust of individual engines was varied accordingly; the engines did not move like TVC at all.

Now coming to the failed launches:

Launch 1:

The engines were monitored by KORD(Control of Rocket Engines). During the initial phase of flight, a transient voltage caused KORD to shut down the engine #12. Simultaneously, engine #24 was shut down to maintain stability of the rocket. At T+6 seconds, pogo oscillation( a type of combustion instability that causes damage to the engine) in the #2 engine tore several components off their mounts and started a propellant leak. At T+25 seconds, further vibrations ruptured a fuel line and caused RP-1 to spill into the aft section of the booster. When it came into contact with the leaking gas, a fire started. The fire then burned through wiring in the power supply, causing electrical arcing which was picked up by sensors and interpreted by the KORD as a pressurization problem in the turbopumps.

Launch 2:

Launch took place at 11:18 PM Moscow time. For a few moments, the rocket lifted into the night sky. As soon as it cleared the tower, there was a flash of light, and debris could be seen falling from the bottom of the first stage. All the engines instantly shut down except engine #18. This caused the N-1 to lean over at a 45-degree angle and drop back onto launch pad 110 East. Nearly 2300 tons of propellant on board triggered a massive blast and shock wave that shattered windows across the launch complex and sent debris flying as far as 6 miles (10 kilometers) from the center of the explosion. Just before liftoff, the LOX turbopump in the #8 engine exploded (the pump was recovered from the debris and found to have signs of fire and melting), the shock wave severing surrounding propellant lines and starting a fire from leaking fuel. The fire damaged various components in the thrust section leading to the engines gradually being shut down between T+10 and T+12 seconds. The KORD had shut off engines #7, #19, #20, and #21 after detecting abnormal pressure and pump speeds. Telemetry did not provide any explanation as to what shut off the other engines. This was one of the largest artificial non-nuclear explosions in human history.

Launch 3:

Soon after lift-off, due to unexpected eddy and counter-currents at the base of Block A (the first stage), the N-1 experienced an uncontrolled roll beyond the capability of the control system to compensate. The KORD computer sensed an abnormal situation and sent a shutdown command to the first stage, but as noted above, the guidance program had since been modified to prevent this from happening until 50 seconds into launch. The roll, which had initially been 6° per second, began rapidly accelerating. At T+39 seconds, the booster was rolling at nearly 40° per second, causing the inertial guidance system to go into gimbal lock and at T+48 seconds, the vehicle disintegrated from structural loads. The interstage truss between the second and third stages twisted apart and the latter separated from the stack and at T+50 seconds, the cutoff command to the first stage was unblocked and the engines immediately shut down. The upper stages impacted about 4 miles (7 kilometers) from the launch complex. Despite the engine shutoff, the first and second stages still had enough momentum to travel for some distance before falling to earth about 9 miles (15 kilometers) from the launch complex and blasting a 15-meter-deep (50-foot) crater in the steppe.

 

Launch 4:

The start and lift-off went well. At T+90 seconds, a programmed shutdown of the core propulsion system (the six center engines) was performed to reduce the structural stress on the booster. Because of excessive dynamic loads caused by a hydraulic shock wave when the six engines were shut down abruptly, lines for feeding fuel and oxidizer to the core propulsion system burst and a fire started in the boat-tail of the booster; in addition, the #4 engine exploded. The first stage broke up starting at T+107 seconds and all telemetry data ceased at T+110 seconds.

Besides mechanical failures, rockets can also fail due to a minute discrepancy in their programs, as in the case of Ariane 5.

Ariane 5: About 37 seconds after launch, the rocket flipped 90 degrees in the wrong direction, and less than two seconds later aerodynamic forces ripped the boosters apart from the main stage at a height of 4 km. This triggered the self-destruct mechanism, and the spacecraft was consumed in a gigantic fireball of liquid hydrogen.

The fault was quickly identified as a software bug in the rocket’s Inertial Reference System. The rocket used this system to determine whether it was pointing up or down, which is formally known as the horizontal bias, or informally as a BH value. This value was represented by a 64-bit floating variable, which was perfectly adequate.

However, problems began when the software attempted to stuff this 64-bit variable, which can represent billions of potential values, into a 16-bit integer, which can represent only 65,535 potential values. For the first few seconds of flight the rocket’s acceleration was low, so the conversion between the two succeeded. But as the rocket’s velocity increased, the 64-bit value grew beyond what a 16-bit variable can hold. At that point the processor encountered an operand error and populated the BH variable with a diagnostic value.

That’s your answer to “why”. Rockets can fail at any time due to even a small malfunction in one of those 2.5 billion dynamic parts, or even a small programming error.

Hope you enjoyed the writings up there!

Thank You!

Source: Google and Wikipedia

 

 

Looking forward to excelling at rocket building?

Check out this link Space Technology and Aeronautical Rocketry- STAR

GIT and GITHUB: A Layman’s Guide [Part-2]

Reading Time: 5 minutes

Hello peeps!!
If you haven’t read Part-1 of the series, take a look at it first for better understanding.

In Part-1, most of the jargon related to Git and GitHub, along with the basic commands, has already been discussed. Still, there is much more to learn: how to revert changes, what branching is, merging, and so on.

 


 

So hold your coffee and let’s begin. We will try to understand them one by one in the simplest form, and don’t worry, we are not going to deal with any PPT, lol!!

We have used the term master branch in the previous article several times.
So let’s discuss it first.

Branching :

We will try to relate this with a real-life scenario at first and then we will move on to the technical explanation.
Imagine you are working on a team project. In such a project, there are often bugs to be fixed, and sometimes a new feature has to be added. In a small project, it is easy to work directly on the main version, technically the ‘master branch’, but in big projects, if you do so, there is a high probability that you and your teammates will make conflicting changes. The solution to this is ‘branching’ in git.

For proper understanding, you can think of the main git chain as a tree trunk, which is technically called the ‘master branch’. Whenever you want to work on a new feature, you can make a separate branch from the trunk, start committing changes in your new branch, and once you think your feature is ready, merge that branch back into the master branch.

Let’s understand this in a more robust way and also discuss the basic commands related to branching.


A branch in git is a special pointer to one of the commits. Every time you make a commit, the branch pointer moves forward to the latest commit.


Another important point to mention is that git has a special pointer called
HEAD to keep track of the branch you are currently working on.

Let’s create a new branch with the name ‘new-feature’.

git branch new-feature

This will create a new branch (a pointer) at the same commit you were working on.


 

Note that HEAD is still on the master branch. You need to use checkout to switch to the new branch.

 

git checkout new-feature

 


 

NOTE: You can create a new branch and immediately switch to it with:

git checkout -b new-feature

Let’s start working on the new feature and make a new commit.

 

 


 

Now if we check out the master branch again and make a new commit, the new-feature branch will not be affected at all. Let’s do this:

git checkout master

After making some changes and committing them:

 


 

So, you can see how you can work on a new feature without disturbing the master branch, and once you complete your task on the new feature, you can “merge” that branch into the main branch.
Isn’t it amazing that you and your team can work on different features by creating multiple branches and later merging them into master? Hell yeah!!!

Now Let’s discuss a little bit about merging and basic commands related to it.

Merging :

Whenever you make a separate branch for working on a feature, you commit your changes in that branch. But when your task on the feature for which you made the branch is complete, you need to merge that branch back into the main codebase/master branch, and this process is called ‘merging’.

Suppose your task on a new-feature branch is now complete and you want to merge that branch into the master branch. Then firstly checkout to the master branch.

git checkout master

And use the following command:

git merge branchname

*Here, in our case, the branch name is new-feature.

 

This command merges the changes you made in the new-feature branch into the master branch by combining them in a new merge commit that records its two parent commits. See the picture…

 

And here comes the bad part….

Merge Conflicts :

When you merge a branch into the master branch, there is a chance you will run into ‘merge conflicts’.
Basically, merge conflicts arise when you have changed a line of code that someone else has also changed.

In such a situation, you have to decide manually which version of the code you want to keep; that is, you need to resolve the merge conflicts.
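Here is a minimal sketch of a deliberate conflict and its manual resolution, assuming git is installed; the file contents and identity values are made up for the demo.

```shell
# Sketch: both branches edit the same line, so the merge stops and
# asks a human to decide the final content.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name "You"
echo "original line" > file.txt && git add . && git commit -qm "base"
main=$(git symbolic-ref --short HEAD)
git checkout -qb new-feature
echo "feature version" > file.txt && git commit -aqm "edit on new-feature"
git checkout -q "$main"
echo "master version" > file.txt && git commit -aqm "edit on $main"
git merge new-feature || true        # fails: both branches changed the same line
cat file.txt                         # now shows <<<<<<< / ======= / >>>>>>> markers
echo "resolved version" > file.txt   # decide the final content by hand
git add file.txt                     # mark the conflict as resolved
git commit -qm "merge new-feature, resolving the conflict"
```

After `git add` and `git commit`, the merge commit is recorded just like an automatic merge, with both branches as parents.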

That’s All!!! Thanks for reading the article.
The next blog in the series will focus more on using GitHub.
Stay Tuned.

Happy Learning 🙂

TEAM CEV

References: https://git-scm.com/

GIT and GITHUB: A Layman’s Guide[Part-1]

Reading Time: 5 minutes

If you are new to Git and GitHub, and even if you are not from any technical background, this article will clear up all your myths about them.


 

Git :

Git is just software that tracks the changes in the files of your project.
It keeps different versions of your files, hence it belongs to a category of software called Version Control Systems (VCS), so that the different Versions of your Software are in your Control.
So if you are a developer, it can help you in handling situations like:

  • Reverting to an older version of your code
  • Collaborating with your team effectively while working on the same project

Repository :

The sole purpose of git is to track changes in your project’s files and keep different versions of them.
So, where does git store these changes made in your project files? Here comes the concept of a repository: it is just a sub-directory in the root directory of the project you are working on, which stores all the information about changes to your files, plus other useful information such as who made each change and when it was made.

Remote :

Suppose you are working on a team project with several members, each making changes on their own machine. Such a situation can’t be handled easily: great effort would be required to merge everyone’s changes into a single final project, and there would be conflicts between the differing copies of a file stored on individual members’ machines. In this way, we can’t really collaborate on a team project.

Git solves this problem through a ‘remote’. A git remote is a common repository residing on another, central machine, which the whole team can use to collaborate on the project.

By now you know what git, a repository, and a remote are. Another thing we are going to discuss is “GitHub”. First of all, there is always confusion between Git and GitHub: are they the same thing or different? For clarification-

— Git is a version control system, a tool to manage versions of your code
— GitHub is a hosting service for git repositories.

Now another question which may come into your mind is “How is Git going to track and stage all the changes?”. The answer lies in the distinct Git states, so let’s tackle them first before proceeding-

Git States

The basic workflow of git includes the following three stages :

->Modified

It is the state when you have made changes to a file, but the changes are not yet tracked by git.

->Staging

When you have modified a file or files, you have to inform git to look them over and track them (if untracked till now) by taking a snapshot, and this snapshot will go into the next commit. So we can say staging is just the marking of a file to go into the next commit.

->Committed

It is the state when all the staged changes are stored in the local git repository, or, we can say, its database.
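One way to watch a file pass through these states is `git status --short`. This is a sketch assuming git is installed; the file name and identity values are placeholders.

```shell
# Sketch: follow one file from untracked change, to staged, to committed.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name "You"
echo "hello" > notes.txt
git status --short    # "?? notes.txt" : changed, but git is not tracking it yet
git add notes.txt
git status --short    # "A  notes.txt" : staged, snapshot marked for the next commit
git commit -qm "add notes"
git status --short    # no output      : committed, working tree clean
```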


 

With this much covered, we can move on to creating a git repository on your local machine and then pointing it to GitHub. All you need is git installed on your system and a GitHub account.
We will be using Ubuntu for the tutorial, but most of the commands are the same on Windows. Let’s go!!

Step 1: Configuring Git

git --version

*Checks the version of git, to make sure git is installed.

git config --global user.name "github username"
git config --global user.email "github email"

*Replace username and email with your GitHub username and email.

Step 2: Initialising a git repository

git init

Fire this command in your project directory; it will initialize a git repository, and you will see a .git folder in your project directory.

Step 3: Connecting to a repository

a) Create a repository on GitHub for your project

b) Add a remote

git remote add origin url

Replace url with the HTTPS link of your repository.

*In the above command origin is just a remote name, you can also use other names for your remote.

Hola!! All set up.

In case you want to use an existing project hosted on GitHub skip all the above steps and clone the GitHub repository in your preferred directory.

git clone url

Let’s move on to 4 more essential git commands-

git add filename

This command will add the file into the staging area.

*Replace filename with the name of the file you want to add into the staging area.
*To add all the files of the directory replace filename with .[dot]

git commit -m "commit_message"

This will commit all the changes which are currently in the staging area. So this command is a bridge between the staging area and the committed state.

 

git push origin master

This command will push all the commits on the ‘master’ branch of your local repository to the GitHub/central repository with the remote name ‘origin’.

*Ignore the word branch here. We will take a look at it later. So just use master in the above command.

git pull origin master

This will pull all the contents of the master branch stored on GitHub/Central Repository to your local repository.

Other useful commands-

git status

This shows the state of your working directory and helps you see all the files which are untracked by Git, staged or unstaged.

git log

It is used to check the history of commits.

 

 

Thanks for Reading!!

This blog focused mainly on the basics of Git and GitHub. Concepts like branching, merging, and resolving merge conflicts will be covered in detail in Part-2. Stay Tuned 🙂

Keep Reading, Keep Learning.

TEAM CEV!!!

THE SUPREME COURT : The Nation’s Lifeline

Reading Time: 9 minutes

CAUTION: NO INTENTION FOR ANY DISORDER OR DEFAMATION OF ANYBODY. NO INTENTION OF MAKING ALLEGATIONS, BUT TO BRING TO LIGHT THE STORIES OF UNSUNG HEROES: A FEW WORDS IN HONOR OF THEIR DEDICATION AND OUR DEBT TO THEM.

In a world with so little space for tolerance, amid the harsh and fierce words of people, let us talk of brotherhood, humanity, compassion, and sympathy.

The feeling of being safe and secure is the most treasured thing a country can give its countrymen. There are many lines along which this topic could be introduced. Being a serious writer, I will start on a concerned and serious note.

Also, I would ask you to read the whole piece to avoid being misled.

Huuhhhh…

The world is following similar trends. If we look back, from 9/11 and 13/11 to 26/11, from New Zealand’s mosque attack to the latest eight bombings of churches in Sri Lanka, the face of crime, threat, and terror has turned savage, and the limits of barbarism and vulgarity have been crossed by far.

I can agree with you that history has seen even more destructive and barbaric human behaviour. But the question is of the present. What’s done is done; nothing can change it. But we can determine what happens to our future generations. Blaming history is not going to affect us in any way, but surely, by cogitating today, tomorrow can be made a little better than today.

Whatsoever, it seems beautiful to me that while it is humans who commit such crimes, it is not God but humans themselves who bring justice to their fellow victims. In this blog, we are going to talk optimistically about that institution and group, the lifeline and pride of the nation, the judicial system, i.e. the Supreme Court of India.

Some professions are not just professions; they are a little more than that.

The blog’s aim is to share information, spread awareness, and also breed appreciation and respect for judges and lawyers among the readers. I will also point out some of the critical issues and concerns of our judicial system.

Also, I would like to share the various sources of inspiration for this blog. I was motivated by the 2014 and 2016 speeches of former Chief Justice of India T.S. Thakur, the works of Justice Markandey Katju, reports and articles published on the websites daskhindia, The Companion, and PRS Legislative Research, Wikipedia, etc. For statistical data, I have referred to the National Judicial Data Grid (NJDG), e-Courts, and the Department of Law and Justice:

https://njdg.ecourts.gov.in/njdgnew/index.php

https://ecourts.gov.in/ecourts_home/

http://doj.gov.in/

and other government verified sources.

 

WHY…

If not them, whom should we talk about? Not satisfied? Read till the end; I won’t need to answer it again.

HISTORY

The Supreme Court of India came into being on 28 January 1950. The first Chief Justice of India was H. J. Kania. The Supreme Court is one of the four pillars of democracy and has a crucial role to play. The whole judiciary of India has had a fearless and unbiased history, with very bold and commendable judgements, including the invalidation of the candidature of the late Indira Gandhi in 1975, the 2G Spectrum case, the recognition of transgender people as a third gender in law, online ballots for NRIs, etc.

Throughout, the Supreme Court has played a major role in determining the course of the country after independence.

WHAT IS THE ISSUE THEN?

We are a country of 133.92 crore (about 1.34 billion) people, with as many as 10 major religions and over a thousand communities speaking 22 scheduled languages, on a landmass of 3.28 million sq km. Our country is amazingly diverse in terms of culture, religion, and geography. We never tire of taking credit for being the biggest secular country on earth.

For decades since independence in 1947, these feelings of secularism and federalism have been nation-binding forces and the cause of our unity in diversity.

This survey report says one thing very firmly: the common people have in themselves a great secular feeling, and difference of religion is not the major cause of havoc.


Now consider some other numbers and ratios:


Above is a screenshot taken from the NJDG website, which speaks volumes about the state of pending cases in all the courts (the Supreme Court, high courts, and district courts): a total of 30.385 million cases closed in files, waiting to be opened, heard, and disposed of.

Now let us analyse more specifically. Considering the most spoken and heard quote about justice i.e. “JUSTICE DELAYED IS JUSTICE DENIED”, let us see how far this statement itself is justified in the context of Indian Judiciary.

Let us assume that we, the people of India are satisfied if we get justice in 3 years.

First for District and talukas courts:


40% of cases, which account for a total of 3.087 million cases, have been pending for 3-30 years. Would it be fine for you to get justice for your loved one’s murder after 3 years, or for your illegally occupied property after 5 years? Of course not.

Now let us move to the high courts, in case you do not get satisfaction from the district courts.


Again, the situation is critical. Cases in the 3-30-year slots add up to 51 per cent. So, after fighting for 3-30 years in a district court with a probability of 0.40, you may have to fight for another 3-30 years in the high court with a probability of 0.51.


Statewise performance of respective high courts:


And finally, if you approach the Supreme Court, you would be somewhere between 26 and 80 years of age if you had started fighting at the age of 20.

Sadly you wouldn’t be relieved even then.


If you had filed your case on 1 May 2019, your turn would come only after the current 29 judges had passed judgement on the 58,168 cases ahead of yours.

Let us be optimistic: suppose you got justice from the Supreme Court. Even then, you would hardly be able to jump and scream with joy, as you would have crossed your 40s.

Hey, don’t be stressed, have faith!!

WHERE THE HELL IS THE PROBLEM?

The blame for such a huge pendency of cases in the various levels of India’s courts goes largely to one single fact: the extreme shortage of judges at all levels.

A small team of 21,598 courageous judges (sanctioned strength as of December 31, 2015), comprising 20,502 judges in the lower courts, 1,065 high court judges, and 31 Supreme Court judges, serves a country of 1.35 billion people, 17% of the world’s human race. These stats work out to about 19 judges per million countrymen.

Ideally, we would have 50 judges per million; we fall far short of that figure.

How can we dream of becoming a superpower if our judiciary is so weak in comparison to other leading nations?


The graph displays the horrifying condition of the Indian judiciary. Well, you might say the data is from the year 2009. Unfortunately, “Nothing is Moving”, said former CJI T.S. Thakur. The figure has shifted to 19 judges per million people in India in 2019, up from 10.5 in 2009.

JUDGES VACANCIES:

Currently, 12% of the sanctioned strength of the Supreme Court is vacant. The corresponding figure for High Courts is 26%, and for Lower Courts, it is 18%. Among large High Courts, vacant positions in Allahabad HC amount to 45%, followed by 32% in Punjab and Haryana HC.

Here is the press release of the Ministry of Law & Justice, dated 1 May 2019, on the vacancies:

Vacancy (01.05.2019)

Trends in high courts:


Trends in district courts:


 

THE GREAT INDIAN JUDGE:

In 2016, compared to 2006, the number of cases disposed of increased approximately from 57,000 to 76,000 in Supreme Court; from 14.4 lakh cases to 16 lakh cases in High Courts and from 1.6 crore cases to 1.9 crore cases in subordinate courts.

An Indian judge disposes of nearly 2,500 cases each year, which is equivalent to about 7 cases a day. A case file can range from 50 to 100 pages, summing up to some 525 pages a day, which from our perspective (as engineering students) is equivalent to preparing for two end-semester papers a day. We students consider that a hell week, and these judges follow this routine round the year.

This is insane, surely we are harassing them.

If even that doesn’t chatter your teeth, then either you have taken your end-semester exams quite lightly, or you haven’t read what is written above.

This is a one-month report card of the high courts of India. Nearly 1,000 high court judges were able to dispose of 1,31,125 (1.31 lakh) cases in the month of April 2019.

AVG OF 4.37 CASES PER DAY PER JUDGE.


 

WHAT IS THE WAY OUT?

It is astounding that, even with such an exponential rise in the number of cases filed every day, this handful of judges is able to dispose of them, as well as some of the pending cases.

Here is the report of their efforts:


In the past few years, we have seen a drop, though marginal, in the number of pending cases.

But it has come at the cost of injustice to the judges themselves, as the increase in the number of judges is still dangerously low.

There is no other way except to increase the number of judges, for the sake of the welfare of the victims as well as of the judges.

Along with this, there are other reasons contributing to the turmoil. Unawareness is one of them: only a few students dare to take up streams other than science after passing matriculation with flying colours, and in this way a huge pool of potential is diverted away from serving the nation by delivering justice to its people.

Parents don’t want to see their children pursue a legal career because of the perceived dangers and the very poor, stressful environment.

At last, the speech which provoked me, an excerpt from CJI T.S. Thakur’s speech:

“… It is not only in the name of a litigant or people languishing in jails but also in the name of development of the country, its progress that I beseech you to rise to the occasion and realise that it is not enough to criticise. You cannot shift the entire burden on the judiciary”.

 

CONCLUSION:

What have you heard people saying in a typical Indian dispute?

“I will see you in court”.

It is not just a statement made when people are angry; more importantly, it signifies their faith in the judicial system of India.

The people of India have deep faith in their judicial system and its judges, and will continue to have it. But the point is, serious and immediate action is required for the survival of both.

I know very well that this blog is not going to have any impact or heal the situation on the ground, yet I was unable to find a single reason not to write it.

The writer is always ready to address any claims of faulty data, takes responsibility for the same, and would feel pleased if this were followed by questions or arguments.

KEEP READING, KEEP LEARNING!!

TEAM CEV!

 
