The Social Dilemma

Reading Time: 7 minutes

“Nothing vast enters the life of mortals without a curse.”

In 2020, Netflix released “The Social Dilemma,” a docudrama directed by Jeff Orlowski that explores the rise of social media and the damage it has caused to society, focusing on how platforms exploit and manipulate their users for financial gain through surveillance capitalism and data mining. According to recent estimates, approximately 3.8 billion people are active on social media worldwide, which means more people are connected today than ever before. Look at the most-visited apps on your own smartphone and you will see how deeply social media has penetrated our lives. When asked about its impact, the creators of social media said they had never imagined the extent to which their products would affect the lives of ordinary people across the globe. Social media has done a fantastic job of helping people in difficult times: it has helped find organ donors, raised donations for the needy, given students easy access to free study material online, and helped beginners start cooking, among endless other examples. But something has changed over the years. The world is changing at an unprecedented rate, and not in a good direction.


Earlier, social media platforms were used for sharing photos and videos and for connecting with people. The internet was simple at that time. Now platforms like Facebook, Snapchat, Twitter, TikTok, Google, Pinterest, Reddit, and LinkedIn compete for our attention.

Today’s big tech companies build their products with three main goals in mind:


1.) Engagement goal: They want to drive up usage and keep you scrolling on their platforms for as long as possible. But how do they do that? By using machines as persuasive social actors, a practice called persuasive technology. Let me explain with reference to two studies conducted at Stanford University in the mid-1990s, which showed that similarity between computers and the people who use them makes a difference when it comes to persuasion. One study examined similarity in personality, while the other examined similarity in affiliation. Research highlights of both studies are below.


Research Highlights: The Personality Study:

  • Created dominant and submissive computer personalities 
  • Chose as participants people at the extremes of dominance or submissiveness 
  • Mixed and matched computer personalities with user personalities 
  • Result: Participants preferred computers whose “personalities” matched their own. 

Research Highlights: The Affiliation Study:

  • Participants were given a problem to solve and assigned to work on the problem either with a computer they were told was a “teammate” or a computer that was given no label. 
  • For all participants, the interaction with the computer was identical; the only difference was whether or not the participant believed the computer was a teammate. 
  • Result (compared with the responses of other participants): people who worked with a computer labeled as their teammate reported that the computer was more similar to them, that it was smarter, and that it offered better information. These participants were also more likely to choose the problem solutions recommended by the computer.

2.) Growth goal: They want you to connect with your relatives, your friends, even strangers; to explore attractive locations; to crave tasty food; and to invite more people onto the platform, all for one and only one reason: so that you visit their platforms more and more. Let me give you some examples from your daily social media experience. There are two forms of interaction on Facebook: active interaction (liking, sharing, commenting, reacting) and passive interaction (clicking, watching, viewing/hovering).


  • Active interaction: Whenever someone likes your post, or vice versa, it gives a sense of joy that they like us or we like them. It creates a loop: you visit each other’s profiles more often, chat, share memes, react to stories, react to reactions, and ultimately end up spending more time on the platform. It also creates a rat race for likes, which can affect mental health. The more you crave likes, the more time you are expected to spend on social media figuring out how to increase them and gain recognition among your peers. Below is an excerpt from “The social significance of the Facebook Like button,” a study by Veikko Eranti and Markku Lonkila.

The figure suggests, first, that the relationship with the original poster of an object may have an impact on likes: We are more prone to like a post by a close Facebook friend than one by an acquaintance whom we have accepted as our friend somewhat reluctantly. Second, the quality, number, and network structure of previous likers are likely to affect one’s likes. This is probably even truer in the case of a sensitive or contradictory topic (e.g., a post on a political issue). Thus, if F1, F2, and F3 are close friends, F3 is more prone to like a post of controversial nature if F1 and F2 have both already liked it. Third, the imagined audience constructed subjectively by the user of the pool of all Facebook friends (some subset of F1–F4) is likely to influence liking behavior. 

  • Passive interaction: Now recall the times when you were not talking to anybody, not reacting to any story, not commenting on any post, but were still active on social media. What were you doing? You were watching videos and simply scrolling through posts, memes, and reels, hoping for the one post you might find interesting enough to like or comment on, isn’t it? How long did it take to find the post you wanted to see? Probably not long, because your social media platform did not need much time to guess what you wanted. But how? Adam Mosseri, head of Instagram, might answer: “Today we use signals like how many people react to, comment on, or share posts to determine how high they appear in News Feed. With this update, we will also prioritize posts that spark conversations and meaningful interactions between people. To do this, we will predict which posts you might want to interact with your friends about and show these posts higher in the feed. These are posts that inspire back-and-forth discussion in the comments and posts that you might want to share and react to – whether that’s a post from a friend seeking advice, a friend asking for recommendations for a trip, or a news article or video prompting lots of discussions.”
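The kind of feed ranking Mosseri describes can be caricatured in a few lines of Python. The signal names and weights below are invented purely for illustration (real feed-ranking systems learn thousands of features), but the basic shape, scoring each post by predicted engagement and sorting, is the same:

```python
# Toy engagement-based feed ranking. Signal names and weights are
# invented for illustration; real systems use learned models.

WEIGHTS = {"comments": 4.0, "shares": 3.0, "reactions": 1.0, "clicks": 0.5}

def score(post):
    """Weighted sum of a post's engagement signals."""
    return sum(w * post.get(signal, 0) for signal, w in WEIGHTS.items())

def rank_feed(posts):
    """Order posts so the most 'engaging' ones appear first."""
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    {"id": "vacation-photo", "comments": 2, "shares": 0, "reactions": 90},
    {"id": "travel-advice", "comments": 30, "shares": 5, "reactions": 40},
    {"id": "news-article", "comments": 12, "shares": 20, "reactions": 15},
])
# Posts that "spark conversations" (comments, shares) outrank a post
# that merely collected passive reactions.
```

Note how the post with the most raw reactions still ranks last: signals that predict back-and-forth interaction are weighted far more heavily, which is exactly the incentive the quote describes.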

3.) Advertising goal: When two people connect on a social media platform for free, it is obvious that someone else is paying for it. A third party is paying to manipulate those two, and the other two, and every other person communicating through social media. We are in the era of surveillance capitalism, where big tech giants collect massive amounts of data in one place to show personalized ads to their customers and earn the maximum possible money from advertising. It is the gradual, slight, imperceptible change in your behavior and perception that is the product.


“If you’re not paying for the product, then you are the product.”


In one experiment conducted by Facebook, “Experimental evidence of massive-scale emotional contagion through social networks,” the researchers found that for “people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred. These results suggest that the emotions expressed by friends, via online social networks, influence our moods.” This suggests that Facebook can now affect, or rather change, one’s real-life behavior, political viewpoint, and much more. The effects have been felt across the globe in the form of fake news, disinformation, and rumors. Terrorist organizations used the very same formula to brainwash hundreds of thousands into fighting for them and killing innocent people. The same techniques are now used by right-wing hate groups across the globe, such as white supremacist groups. In India, we have seen mob lynchings triggered by rumors spread in an area. It is not just the fake news itself; its consequences are even more dangerous. According to a recent study, fake news is five times more likely to spread than real news. We are moving from the information age to the disinformation age. Democracy is under assault; these tools are eroding the fabric of how society works. If something is a tool, it genuinely just sits there, waiting patiently. If something is not a tool, it demands things from you. It seduces you. It manipulates you. It wants things from you. And today’s big tech giants have moved from a tools-based technology environment to an addiction- and manipulation-based one.


“Only two industries call their customers ‘users’, illegal drugs and software”


Big tech giants, namely Facebook, Amazon, Apple, Alphabet, Netflix, and Microsoft, have grown tremendously over the past years. They have established monopolies in their respective industries, where smaller companies are either wiped out or struggling hard to survive. The reason is the cutting-edge technology these companies develop, which others cannot compete with, along with the unbelievable amount of data they possess, which makes their innovation even more effective.


Steps can be taken to make people aware of social media and its dangers. Chapters or subjects can be introduced at the school level to teach children the difference between social media and social life. Governments can break the companies’ monopolies using anti-trust laws, which would allow more competitors to enter these industries and create a safe and user-friendly environment on social media platforms. And lastly, strict laws should be made on data privacy and data protection.


“Any sufficiently advanced technology is indistinguishable from magic”

Making of 21st Century Solar Cell

Reading Time: 12 minutes

The manufacturing sector has been on the ventilator for a long time…

Despite a market of 1.36 billion people, we import quite a large portion of the products employing moderate- to high-level technology, from electronic toys to smartphones to the high-power induction motors of Indian Railways engines. We have no airliner manufacturing except HAL, and no chip manufacturing even though we are the land of the powerful Shakti microprocessors. How much sadness this fact brings home to us!

Consider the solar cell and module manufacturing industry.

We have a small number of solar module manufacturers who import solar cells, largely from China and Taiwan, paste them onto a tough polymer sheet, add some power electronics, and meet the bulk of India’s solar needs.

Our solar cell capability is even smaller: a few firms that import wafers and own mega turnkey solar line manufacturing units, mostly set up by European companies. You see, we have to be very precise in claiming what is ours and what is not.

We import 80% of our solar cells and solar modules, with a domestic manufacturing capacity of only 3 GW for solar cells. -Source: 

In this blog, let us at least critically understand what goes into the making of a 21st-century solar cell, and try to figure out whether it is really so hard that we need to import end-tailored, billion-euro turnkey lines to get the solar industry flying.

For good assimilation of the content, one needs to be familiar with the solar cell. Answering the following questions earns a temporary check-pass.

  1. How do the charge generation, charge separation, and charge collection phenomena occur in a solar cell?
  2. What is meant by the short-circuit current and open-circuit voltage of a cell?
  3. What is the difference between a solar cell and a solar module?
  4. On what factors does the fill factor depend?

Notice the nature of these questions: they are descriptive and have straightforward answers.

We do not have full freedom here to ask any wild question. For example, one cannot ask what voltage a non-ideal voltmeter would measure across a photocell under no illumination, or whether current would flow through an external resistance connected to an unilluminated solar cell or a regular diode.

The reason is that, from the engineering point of view, we always study an abstract model of a solar cell or p-n junction. Physicists have very smartly built a layer over all the intricate things going on inside the cell; we don’t care much about the exact phenomena inside the device, yet with the help of modified equations we can deduce engineering-relevant parameters like FF, Rsh, Rs, Isc, and Voc, and do clever things like MPPT.

Similarly, using our conventional theory, one cannot explain the presence of intrinsic carriers at room temperature.

A pure silicon crystal has a bandgap of 1.12 eV, while electrons, according to classical theory, have a thermal energy of kT (i.e., 0.026 eV or 26 meV). So intuitive physics would lead us to conclude that at room temperature there should be no electrons in the conduction band. Still, at 25 °C about 10^10 electrons per cubic centimetre are available in the conduction band of a pure silicon crystal, known as the intrinsic carrier density.

Think for a second how would you explain this paradox?

All these questions, wild or sober, can surely be answered satisfactorily (by multiplying and integrating the density of states with the Fermi-Dirac probability distribution), but the point I want to highlight is that they reveal the need for another kind of theory, and that theory is what the world knows as the quantum theory of matter.

Notice the power of our wild questioning: one correct question has enabled us to knock on the door of mighty quantum physics. What a pleasure to discover for ourselves the need for a new theory, the theory the world has been developing for the past 130 years.

On the other hand, if we think we are done with the p-n junction simply by being able to describe the formation of the depletion region and calculate the built-in voltage with a sweet formula, without a taste of the weirdness of quantum physics, then we should really reconsider our beliefs.
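That density-of-states calculation from a couple of paragraphs back can be sanity-checked numerically. The effective densities of states below are standard textbook values for silicon at 300 K, assumed here rather than taken from this article:

```python
import math

# Standard textbook values for silicon at 300 K (assumed):
Nc = 2.8e19   # effective density of states, conduction band (cm^-3)
Nv = 1.04e19  # effective density of states, valence band (cm^-3)
Eg = 1.12     # bandgap (eV)
kT = 0.02585  # thermal energy at 300 K (eV)

# n_i = sqrt(Nc * Nv) * exp(-Eg / (2 kT))
n_i = math.sqrt(Nc * Nv) * math.exp(-Eg / (2 * kT))
print(f"intrinsic carrier density ~ {n_i:.1e} per cm^3")
```

The Boltzmann factor exp(-Eg/2kT) is only about 4e-10, but multiplied by roughly 10^19 available states it still leaves billions of electrons per cubic centimetre, resolving the apparent paradox.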

The flow

Ingot Growth

Wafer Slicing

Saw Damage Etch


Emitter Diffusion

Anti-Reflection Coating

Front Contact

Back Contact


This blog won’t really be spitting out crude information throughout, as it might seem from the flow; rather, it aims to induce self-questioning in readers and thus provoke them to discover for themselves the tight constraints that solar cell manufacturing poses at every stage.

Now, the first input is the silicon wafer. Wafer production takes a whole manufacturing industry in itself, with its own difficulties and its own reasons why India doesn’t have it, so we will not dive deep; rather, we will just walk through it until the solar domain actually begins. You can even jump directly to Saw Damage Etch.


Silicon crystals fall broadly into two categories: monocrystalline silicon and polycrystalline silicon. A monocrystalline crystal has a single, continuous crystal orientation.


A polycrystalline crystal, however, has much less regularity and many grain boundaries. The solar industry is always on its toes to minimize the cost per unit of energy produced, since its competitor is the power outlet in our homes, so it cannot afford a high-priced manufacturing technology at any stage.


Polycrystalline silicon is formed using the Siemens process, a faster and cheaper growth method compared to the Czochralski and float-zone processes used for monocrystalline silicon.


Wafer Slicing

The next obvious step is sawing out the wafers. It is evident from the ingot structure that monocrystalline wafers will be circular and polycrystalline ones square. Slurry-based sawing and diamond-based sawing are the two popular techniques; diamond-based sawing has become much more popular because it is faster and produces more yield, since less silicon dust is created.

No matter which technique is used, the roughness of the surface is far beyond what is acceptable for solar use (or for the IC industry).


Pseudosquare shape to optimize the material requirement

Saw Damage Etch

Enough of the peripheral walk; now we are entering the woods. From here on, we are entering solar manufacturing proper.

To smooth out the scratches and remove the surface contaminants caused by sawing, the p-doped wafers are treated with a strong, hot alkaline bath of NaOH or KOH. We could also leverage the non-uniform surface to increase the probability of light entering the silicon, but this is avoided: any deep crack has a chance of developing into a larger hairline fracture, since silicon is brittle at room temperature, eventually breaking the cell.

The alkaline solution dissolves a 5-10 µm thick layer from both faces, resulting in very fine surfaces and a p-type wafer about 170 µm thick. Precise control of temperature, concentration, and time in the bath is required for the desired outcome.


If the surface were perfectly smooth, reflected light would get no chance to strike the surface again. The greater the number of times light is reflected back onto the surface, the more chances it has to enter the bulk of the silicon. On an adequately rough surface, light reflected from one facet has a good chance of striking another and entering the silicon.


The processes of saw-damage etching and texturing differ only in the concentration and temperature of the alkaline solution. A much lower alkaline concentration is observed to yield pyramid-like structures on the silicon surface, which greatly reduce the reflectivity of the surface.
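A crude way to see why the pyramids help: if a flat surface reflects a fraction R of the incident light, a pyramid-textured surface sends the reflected ray onto a neighbouring facet, so light can escape only after at least two reflections, and the effective reflectance drops to roughly R squared. The 30% figure below is a typical value for bare polished silicon, assumed here for illustration:

```python
R_flat = 0.30  # typical reflectance of bare polished silicon (assumed)

# On a textured surface, the ray reflected off one pyramid facet strikes
# a neighbouring facet, so escaping light is reflected (at least) twice:
R_textured = R_flat ** 2

print(f"flat surface loses {R_flat:.0%}, textured loses {R_textured:.0%}")
```

Roughly 30% of the light lost drops to about 9%, before any anti-reflection coating is even applied.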


A great amount of attention is given to tiny, tricky light-management techniques. Using the principles of optics, the solar cell is optimized to somehow get the maximum number of photons inside (or to increase their path length inside the silicon). These techniques include texturing, anti-reflection coatings, back internal reflection, and so on. In fact, you would be surprised to know that some companies have even attempted to texture the surface of the fingers and busbars to divert the light falling on them toward the silicon.


Emitter Diffusion

The presence of an electric field is indispensable for charge separation once photons knock electrons out of the Si atoms. Thus, next in line is the formation of an n-type region to develop a depletion region (p-n junction) inside the cell, which assists in charge separation.

The process is quite straightforward: heated POCl3 gas is kept inside a chamber, and the correct temperature and vapour density are maintained to allow phosphorus atoms to diffuse into the silicon base.

The trick is deciding the doping density of the emitter layer and its thickness.

High doping density is desirable for good contact (low metal-contact resistance) and low lateral series resistance as charges move along the emitter. However, higher doping density decreases the bandgap of silicon (at extreme doping the crystal becomes highly irregular, shrinking the bandgap), so blue light (high-frequency radiation) is not absorbed well; recombination (a type called Auger recombination) also increases in the emitter, dragging down the open-circuit voltage of the cell and hence its performance.

Now think about the thickness of the emitter. Ideally, the emitter should be narrow so that the wafer spends less time inside the gas chamber, making the process faster and cheaper.

But if it is too narrow, there is a great chance that the metal will leach through it into the p-type base, directly shunting the two and leading to extremely poor-quality cells.

Notice that every piece of solar cell development is a tight problem of optimization.

We require two contrasting qualities of the emitter: narrow and lightly doped for good light response and low recombination, yet deep and heavily doped for good contact and low series resistance.

Selective Emitter is quite a smart way to accommodate both of them.

A shallow, lightly doped emitter is formed first; then, with proper masking, deep, heavily doped contact regions are obtained.


Anti-reflection Coating

This is one more way to increase the probability of light being absorbed in the solar cell. Using a silicon nitride coating, light that would otherwise be reflected away is kept inside the cell.

The process generally used is called PECVD (Plasma-Enhanced Chemical Vapor Deposition).


Silane (SiH4) and ammonia (NH3) are filled into a chamber and excited by high-frequency waves. Obeying the rules of chemistry, and with fine tuning of the process, an extremely thin 70 nm layer of silicon nitride is formed above the emitter junction.

The added benefit is that the hydrogen released in the process bonds with dangling Si bonds, which would otherwise have led to increased recombination; this process of filling the holes is called passivation.


The way in which this anti-reflecting coating works is truly an elegant piece of physics.

They work on the principle of interference. We know that rays of monochromatic light can interfere depending on the (optical) distance travelled, as it causes a change in phase. The famous Michelson experiment produced constructive interference when the path difference was λ, 2λ, or 3λ, and destructive interference for λ/2, 3λ/2, 5λ/2, etc.


Magnified ARC layer

On similar lines, these 70 nm manage to produce destructive interference of the reflected waves, thus suppressing reflection from the surface and constraining the entire intensity to be transmitted.

For normal incidence, the light travels twice the thickness of the ARC, so for destructive interference the optical path length difference between the two reflected waves must be λ/2. Because light slows down inside a higher-refractive-index material, the optical path length increases by a factor of n:

2 × n × d = λ/2, i.e., d = λ / (4n)

where n is the refractive index of the ARC and d its thickness.

Now, solar radiation is not monochromatic, so we can never obtain destructive interference for every wavelength with a single ARC thickness. The thickness is therefore optimized for the wavelength at which the peak of solar radiation occurs, i.e., 2.3 eV (550 nm). Given that silicon nitride has a refractive index of about 2, plugging in the numbers we get:

d = 550 nm / (4 × 2) ≈ 69 nm ≈ 70 nm

This is where we get the golden number of 70 nm that is so popular in the solar cell industry.
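The quarter-wave arithmetic behind that golden number is a one-liner to verify (the 550 nm and n = 2 values come straight from the text above):

```python
def arc_thickness(wavelength_nm, n_arc):
    """Quarter-wave anti-reflection coating: d = lambda / (4 * n)."""
    return wavelength_nm / (4 * n_arc)

# Peak of the solar spectrum (~550 nm) and silicon nitride (n ~ 2):
d = arc_thickness(550, 2.0)
print(f"optimal ARC thickness ~ {d:.1f} nm")  # 68.8 nm, i.e. ~70 nm
```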

Front Contact Printing

This is also one of the typical optimization problems in solar cell design.

For good ohmic contact (low contact resistance) the fingers must be wide, but to maximize the amount of light entering the cell the fingers must be as narrow as possible.

Finger spacing is also a critical design parameter. Small finger spacing is desirable to keep the series resistance low, but it causes a larger portion of the cell area to be shadowed by the front contact; again, an engineering decision has to be made to optimize the net performance.

In fact, the optimization constraint occurs in one more dimension here: the height of the fingers. One would like increased height to enlarge the cross-section for the current, but this too is limited, because when the sun falls slantly, tall fingers cast large shadows.

The same problem applies to the busbars.
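To make the trade-off concrete, here is a minimal sketch with entirely made-up coefficients: it sums a shading loss (proportional to the covered fraction W/s) and a lateral resistive loss (growing roughly as s squared) and scans for the spacing that minimizes the total. A real design would derive both terms from measured sheet resistance, paste resistivity, and illumination:

```python
# Toy model of the finger-spacing trade-off (all numbers illustrative):
W = 0.1    # finger width in mm (assumed)
C = 0.002  # resistive-loss coefficient in 1/mm^2 (assumed)

def fractional_loss(s):
    """Shading loss W/s plus emitter resistive loss C*s^2 at spacing s (mm)."""
    return W / s + C * s * s

# Scan candidate spacings from 0.5 mm to 5.5 mm and pick the minimum:
spacings = [0.5 + 0.01 * i for i in range(500)]
best = min(spacings, key=fractional_loss)
print(f"optimal finger spacing ~ {best:.2f} mm")
```

The scan lands near the analytical optimum (W / 2C)^(1/3), about 2.9 mm with these numbers: narrower spacing wastes light to shadowing, wider spacing wastes power in the emitter.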

However, once the design is optimized, the printing is as easy as T-shirt printing: make a mask, apply the paste, and dry.

Generally, a silver-based paste is used for the purpose.


Back Contact Printing

The back contact seems simple at first sight, but like all solar cell matters, it poses optimization problems of its own. The solar cell is expected to operate over quite large temperature ranges.

Silicon has a lower thermal expansion coefficient than metallic aluminium. If appropriate care is not taken with the thickness of the aluminium back layer, the difference in thermal coefficients might lead to intolerable bending of the cell, and even to separation of the contacts in the extreme case.

A layer of aluminium is developed on the back surface, with a thickness typically in the range of 30 µm.

However, this Al layer has an added benefit: what is called the back-surface field (BSF). Some of the aluminium diffuses into the p-type base, making it p++ type. The field that develops repels minority-carrier electrons away from the back surface, which also reduces recombination at the back.



Technically, this step is called post-deposition high-temperature annealing.

Notice that the front metal does not yet make electrical contact with the emitter. So the cells are finally sent into a furnace with accurately controlled temperature. The heated silver etches through the tough 70 nm ARC and makes just the right contact with the emitter.

This process has to be very finely tuned. If the temperature is too low or the cell spends too little time in the furnace, the contacts will not be firm, resulting in high series resistance. If the temperature is too high or the time too long, the molten silver will breach through the emitter into the base, directly shunting the device, giving rise to extremely small shunt resistance, and again yielding a poor-performance device.




The General Conclusion:

One can conclude that the manufacturing of a solar cell is not as advanced as engineering quantum systems, manipulating qubits, fusing atoms, or replicating the human brain; it is an arena of extreme fine-tuning and very precise control of temperature, concentration, and motion.

The Technical Conclusion:

The solar cell is a prime example of a thoroughly optimized system; in a real commercial scenario, its design takes 30+ parameters into account.

It is also a standing example that the little things in life matter, sometimes even more. Just as a team is only as fast as its slowest member, any engineering system is only as efficient as its least efficient component, so nothing should be considered trivial, irrelevant, or less worthy of attention, and this applies equally to living and non-living systems.

Some cool websites to learn and understand the solar cell in greater depth:



Keep Reading, keep learning,

Team CEV!

Featured Image courtesy

3 Horizons of Innovation

Reading Time: 6 minutes

- by Aman Pandey

Being in a technical club, we often discuss innovation 💡. Anyway, it is not just about being in a tech club 🔧; it is about being a visionary, frequently pondering how an idea comes into existence.

Ever thought about actually making a solution and creating its “actual” value 💸? (Don’t mind, it’s just an emoji.) Value is not always about money; it is about how much, and how great, an effect it has on the lives on this magnificent earth 🌏. Money is just a side effect of creating value.

“A very simple definition of innovation 💡 can be thought of as a match between the SOLUTION 🔑 and the NEED 🔒 that creates some form of VALUE.”

It is all about the game of a good product strategy that turns the solution into value.

Whenever a new solution is launched into society, it reaches different sets of people 👥 👤. In fact, there’s a chart that explains this better than anything else.

You see the first portion of the curve, the Innovators? These are tech enthusiasts 📟 who are crazy about any new technology and just want to innovate. Then come the Early Adopters ☎️, who actually see some value in the solution. These are the Visionaries 📣; they are willing to see through to the business and value of a solution. Then comes the Early Majority, known as the Pragmatists 😷; they are the first adopters of a technology in the mass market, always seeking to improve their companies’ work by obtaining new technology. Next is the Late Majority, popularly known as the Skeptics, who usually wait for recommendations; and finally the Laggards, the last to come around.

So there are certain strategies involved in the phases of transitioning an innovation into a startup and then into a company. This process is known as Customer Development.

Oh wait ⚠️, looks like we forgot something.

You see a little gap 🌉 between the early adopters and the early majority? That is The Chasm. This is probably the hardest and most important bridge a solution needs to cross in order to create its value 💸.

There are many startups that make it to the other side of the chasm, and many that do not. In the most common terms, the early adopters are the first set of customers/buyers of your tech who agree to give your innovation a try.

But, let us keep it for some other time.

Now, the strategy might depend on certain criteria:

  1. There already exists some market and you want to capture that market.
  2. There are several markets, and you want to Re-Segment them according to your innovation.
  3. You don’t have any market, i.e. you create your own for your product.

But that is a talk for some other time; let’s not go deep into it here. We know that there is a market that already has customers, a market that exists but isn’t served, and a market that is still out of existence. You understand the difficulty in all these cases, right? 📈

Baghai, Coley, and White came up with something in 2000 called the Three Horizons of Innovation, more formally known as McKinsey’s Three Horizons of Innovation.

Let us now understand this with a little example from the sleep medicine industry. 💊

According to a study, around 5-10% of the population in America is affected by insomnia, and 2-4% by sleep apnea. So there is already a good market.

Now, the disruption in the sleep medicine industry led to several lines of research 🔎.

One piece of research was super disruptive: the innovation of the transcranial system.

After a lot of research on its subjects, collecting data through fit bands and devices like Beddit, which were kept under the subject’s mattress, the researchers gathered a great deal of data about sleeping patterns. The researchers 🔎 came up with the transcranial system, a device in which changing magnetic fields stimulate the brain’s signals and let you sleep.
Source: Wikipedia

Best of all, this is a non-invasive device, i.e., it need not be implanted inside your brain. How do you think the researchers were able to do this?

Well this is all because of Artificial Intelligence.

  • Wrist bands ⌚️ are used to monitor sleep activity; Fitbit bands alone have accumulated around 7 billion nights of sleep 😪.
  • Beddit devices, kept under the mattress, record your pulse (though they cannot record your oxygen levels).
  • Apple 🍎 watches are so sharp in their tracking that they are sometimes used as medical diagnosis devices.

So what transcranial systems do is track abnormal patterns in the sleep signals and send stimulating signals to let the person sleep comfortably.

Now there's a bigger picture to understand here. If such a solution exists, then why ❓ is it not being used?

To understand this, let us now look at the Three Horizons of Innovation:

The horizontal axis shows how new the innovation is, and the vertical axis shows the novelty of the market, i.e. whether the users already exist.

-> The transcranial system lies somewhere in the bottom right: we know the existing market, which in this case is the apnea patients, but the tech is still too new to be used.

This makes it a bit difficult to convert this innovation into a company. 🎬

This still needs a lot of research, and finally the makers have to disrupt the already existing market and bring in their device.

Let us take one more example. Suppose you plan to make a device that tracks breathing patterns or pulse rate and sends the data to your mobile phone. This data, after going through a series of AI models, lets the doctor diagnose the severity of the disease and treat you correctly. ⭕️

In this case, you know the solution and exactly what might solve the problem. On top of that, you know the target customers. So it is possible that this product could be shipped as early as next month.

This App lies somewhere in the lower left.

Now, let me clarify something for you.

  • Horizon 1 is considered to be of not much risk ⚪️; these products just need improvements and cost reductions over the item the customer used before (because you are targeting already existing customers).
  • Horizon 2 is the more-risk zone 🔵, and thus should be approached with care.
  • Horizon 3 is the highest-risk zone 🔴, and you never know whether the innovation will even make it to the other side or not. It might take the next 5 years to come into proper existence.

So, looking at the picture from a distance, we get a sense of the patience and effort required to give an innovation its value.

Just like Apple beat BlackBerry by making a device which served more as a personal device, unlike BlackBerry, which focused only on business users. In a short span of time, just 2 years after launching the iPhone in 2007, Apple overtook BlackBerry as the leading mobile phone seller in the world.

You have to be a visionary to understand it.

Thank You.


The Harmonic Analyzer: Catching the Spurious

Reading Time: 10 minutes

“Do you have the courage to make every single possible mistake, before you get it all-right?”

-Albert Einstein

Featured image courtesy: Internet

THE PROJECT IN SHORT: What is this about?

The importance of analyzing harmonics has been sufficiently stressed upon in the previous blog, Pollution in Power Systems.

So, we set out to design a system for real-time monitoring of voltage and current waveforms associated with a typical non-linear load. Our aim was “to obtain the shape of waveforms plus apply some mathematical rigour to get the harmonic spectrum of the waveforms”.   

THE IDEA: How does it work?

Real-time capability in any such system calls for an intelligent microcontroller to perform the tasks, and since this system also demanded an effective visualization setup, we linked the microcontroller with a desktop (interfacing aided by MATLAB). On top of MATLAB, we built a GUI platform that interacts with the user to produce the required results:

  1. The shape of waveforms and defined parameters readings,
  2. Harmonic spectrum in the frequency domain.  

The voltage and current signals are first appropriately scaled down by different resistor configurations; these samples are then conditioned by the analog industry's workhorses, the op-amps, and are fed into the ADC of the microcontroller (Arduino UNO) for digitization. These digital values are accessed by MATLAB, which applies mathematical techniques according to the commands entered by the user at the GUI to finally produce the required outcome on the PC screen.
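To make the last step of this chain concrete, here is a small sketch (in Python, purely illustrative; the 5 V reference and 10-bit depth are the UNO's defaults) of how the ADC quantizes the conditioned signal into integer counts:

```python
# Sketch of the digitization step: the Arduino UNO's 10-bit ADC maps
# the 0-5 V input range onto integer counts 0-1023.
def adc_counts(v, vref=5.0, bits=10):
    levels = 2 ** bits
    # clip to the representable range, then quantize
    return min(levels - 1, max(0, int(v / vref * levels)))

print(adc_counts(2.5))   # mid-scale input
```

A 2.5 V input (the level our clamped waveform sits at) lands at mid-scale, count 512.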


ARDUINO and MATLAB INTERFACING: Boosting the Computation

The Arduino UNO is a microcontroller board with 32 KB of flash memory and 2 KB of SRAM, which limits the functionality of a larger system to some extent. Interfacing the microcontroller with a PC not only allows increased computational capability, but more importantly provides an effective visual tool, the screen, to display the waveforms graphically, import data, save it for future reference, and so on.

TWO WAYS TO WORK: Simulink and the .m

The interfacing can be done in two modes: one is directly building simulation models in Simulink using blocks from the Arduino library, and the second is to write scripts (code in a .m file) in MATLAB by including a specific support package for the given Arduino device (UNO, NANO, etc.).

Only the global variable "arduino" needs to be declared in the program; the rest of the code is as usual. We used the second method as it was more suitable for the type of mathematical operations we wanted to perform.


  1. The first method could also be utilised by executing the required mathematical operations using the available blocks in the library.
  2. Both of these interfacing methods require the addition of two different libraries.

THE GUI: User friendly

Using an Arduino interfaced with a PC gives another advantage: a user-interactive analyzer. Sometimes the visual graphic of the waveform distortion is important, and sometimes the information in the frequency domain is of utmost concern. Using the GUI platform provided by MATLAB to give the user the option to select his choice adds greatly to the flexibility of the analyzer.

The GUI platform appears like this upon running the program.


MatLab gives you a very user-friendly environment to build such a useful GUI. Type guide in the command window, select the blank GUI, and you are ready to go.

Moreover, you can follow this short 8-minute introductory tutorial from the official MatLab YouTube channel:

REAL-TIME PROGRAM: The Core of the System

Once the GUI is designed and saved, a corresponding m-file is automatically generated by MatLab. This m-file contains well-structured code as well as illustrative comments showing how to program further. The GUI is now ready to be filled with the pumping heart of the project, the real code.


The very first task is to start collecting the data points flushing in from the ADC of the microcontroller and save them in an array for later reproduction in the program. This should be executed when the user presses the START button on the GUI.


Since we have shifted the whole signal waveform up by 2.5 V, we have to continuously check for the mid-scale level (2.5 V, i.e. level 127 on an 8-bit scale), which is actually the zero-crossing point, and only then start collecting data.


% --- Executes on button press in start.
function start_Callback(hObject, eventdata, handles)
% hObject    handle to start (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
V = zeros(1,201);
time = zeros(1,201);
vstart = 0;
% wait for the zero-crossing: readVoltage returns volts, and the
% signal is clamped at 2.5 V (level 127 on the 8-bit scale)
while (vstart == 0)
    value = readVoltage(ard, 'A1');
    if (value > 2.4 && value < 2.6)
        vstart = 1;
    end
end
% sample 201 points at 0.1 ms spacing
for n = 1:1:201
    value = readVoltage(ard, 'A1');
    value = value - 2.5;   % remove the DC offset
    V(n) = value;
    time(n) = (n-1)*0.0001;
end


The data points saved in the array now need to be reproduced, and in a way which makes sense to the user, i.e. plotted graphically.



As mentioned previously, we aimed to obtain the frequency-domain analysis of the waveform of concern. The previous blog presented the insights into the mathematical formulation required to do so.

Algorithm: Refer to blog Pollution in power systems


% --- Executes on button press in frequencydomain.
function frequencydomain_Callback(hObject, eventdata, handles)
% hObject    handle to frequencydomain (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% Ns = no. of samples (V and Ns are shared with the start callback)
% a  = coefficients of cosine terms
% b  = coefficients of sine terms
% A  = coefficients (amplitudes) of harmonic terms
% ph = phase angle of harmonic terms wrt fundamental
n = 9;   % no. of harmonics required
% cosine-term integrands; matrix M has order n*Ns
for i = 1:1:n
    for j = 1:1:Ns
        M(i,j) = V(j)*cos(2*pi*(j-1)*i/Ns);
    end
end
% integrate each row with Simpson's 3/8 rule (end weights 1, interior weights 2 and 3)
for i = 1:1:n
    sum = 0;
    for j = 1:1:Ns
        if j==1 || j==Ns
            sum = sum + M(i,j);
        elseif mod((j-1),3)==0
            sum = sum + 2*M(i,j);
        else
            sum = sum + 3*M(i,j);
        end
    end
    a(i) = 3/4*sum/Ns;
end
% sine-term integrands; matrix N has order n*Ns
for i = 1:1:n
    for j = 1:1:Ns
        N(i,j) = V(j)*sin(2*pi*(j-1)*i/Ns);
    end
end
for i = 1:1:n
    sum = 0;
    for j = 1:1:Ns
        if j==1 || j==Ns
            sum = sum + N(i,j);
        elseif mod((j-1),3)==0
            sum = sum + 2*N(i,j);
        else
            sum = sum + 3*N(i,j);
        end
    end
    b(i) = 3/4*sum/Ns;
end
% amplitude and phase of each harmonic wrt the fundamental
for i = 1:1:n
    A(i) = sqrt(a(i)^2 + b(i)^2);
    ph(i) = atan2(a(i), b(i));
end
% plot the harmonic spectrum
x = 1:1:n;
stem(x, A);
hold on;
datacursormode on;
grid on;
xlabel('nth harmonic');


This section appears quite late in the documentation, but ironically it is the first stage of the system. As we saw in the power module, the constraints on the signal input to the ADC of the microcontroller are:

  1. The peak-to-peak signal magnitude should be within 5 V.
  2. The voltage signal must always be positive wrt the reference.

To meet the first constraint, we used a step-down transformer and a voltage-divider resistance branch of the required values to get a peak-to-peak sinusoidal voltage waveform of 5 V.
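As a back-of-envelope sketch of that scaling (the 12 V RMS secondary here is an assumed value, not necessarily the transformer we used):

```python
import math

# Assumed example: a step-down transformer secondary of 12 V RMS,
# to be scaled to a 5 V peak-to-peak signal by a resistive divider R2/(R1+R2).
v_rms = 12.0
v_pp = 2 * math.sqrt(2) * v_rms   # peak-to-peak of the secondary
ratio = 5.0 / v_pp                # required divider ratio
print(round(v_pp, 2), round(ratio, 3))
```

So a 12 V RMS secondary swings about 33.9 V peak-to-peak, and the divider must pass roughly 15% of it.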

Now, current and voltage waveforms would obviously go negative wrt the reference in AC systems.

Think for a second, how to shift this whole cycle above the x-axis.  

To achieve the second constraint, we used an op-amp in a clamping configuration to obtain a voltage-clamping circuit. We selected op-amps due to their several great operational qualities, like accuracy and simplicity.

Voltage clamping using op-amps:


The circuit's overall layout:

IMP NOTE: While taking signals from a voltage divider, always keep in mind that no current should be drawn from the sampling point, as it will disturb the effective resistance of the branch and hence the required voltage division won't be obtained. Always use an op-amp in the voltage-follower configuration to take samples from the voltage divider.

Current waveform (same as the power module setup):

A Power Module


Now, it is always preferable to first model and simulate your circuit and confirm the results, to check for any potentially fatal loopholes. It saves time in correcting errors and saves components from blowing up during testing.

Modelling and simulation become of great importance for larger and relatively complicated systems, like alternators, transmission lines, and other power systems, where you simply cannot afford hit-and-trial methods to rectify issues. Hence, having an upper hand in the skill of modelling and simulation is of great importance in engineering.

For an analog system like this, MatLab is perfect. (We found Proteus did not show correct results; however, it is best suited for simulating microcontroller-based circuits.)


The simulation results confirm a 5 V peak-to-peak signal clamped at 2.5 V.


The real circuit under test:


Case of Emergency:

Sometimes we find ourselves in desperate need of some IC and cannot get it. At such times, our ability to observe might help us find one. Our surroundings are littered with ICs of all types, and the op-amp is one of the most common. Sensors of all types use an op-amp to amplify signals to the required values. These ICs fixed on a chip can be extracted by de-soldering with a soldering iron. If that doesn't seem possible, use whatever gets you the results. For instance, in the power module project we managed to get the three terminals of one op-amp from an IR sensor chip; here we required two op-amps.

First, trace the circuit diagram of the chip by referring to the terminals in the datasheet; you can cross-check all connections by using a multimeter in continuity-check mode. Then use all sorts of techniques to somehow obtain the desired connections.


Reference Voltages

Many times in circuits, different levels of reference voltages are required, like 3.3 V, 4.5 V, etc.; here we require 2.5 V.

One can build a reference voltage using:

  1. a resistive voltage divider (with an op-amp in voltage-follower configuration),
  2. an op-amp directly, giving the required gain to any source voltage level,
  3. a variable voltage supply for a variable reference voltage, like the one we built in the rectifier project using the LM317.


For program testing, we required different typical waveforms, like square and triangle waves. These waveforms can be obtained in two different ways: the analog way and the digital way.

The Analog Way

Op-amps again come to our rescue. Op-amps, when accompanied by resistors, capacitors and inductors, provide seemingly all sorts of functionality in the analog domain: summing, subtracting, integrating, differentiating, voltage sources, current sources, level shifting, etc.

Using Texas Instruments' handbook on op-amps, we obtained the circuit for triangle wave generation as below:

The Digital Way

Another interesting way to obtain all sorts of desired waveforms is by harnessing a microcontroller. One can vary the voltage levels, frequency and other waveform parameters directly in the code.

Here we utilised two Arduinos: one stand-alone Arduino 1, programmed to generate a square wave, and another, Arduino 2, interfaced with Matlab to check the results.


We have already stated the importance of simulation.

So, here for the simulation of Arduino we used “Proteus 8”.

The code is written in the Arduino IDE and compiled, and the HEX file is burnt into the model in Proteus.


The real-circuit:


The results displayed by the Matlab:



To generate waveforms other than the square type, one thing that has to be considered is the PWM mode of operation of the digital pins. On the Arduino UNO, six of the 14 digital pins (3, 5, 6, 9, 10 and 11) can generate PWM.

At 100% duty cycle 5 V is generated at the output terminal.

digitalWrite(PIN, HIGH): this line produces a constant HIGH, i.e. a PWM of 100% duty cycle whose DC value is 5 V.

So, by changing the duty cycle of the PWM we can obtain any average level between 0 and 5 V.

analogWrite(PIN, value): this line generates a PWM of any duty ratio (value from 0 to 255, i.e. 0 to 100%), and hence any desired average voltage level on a PWM pin.

For example:

analogWrite(3, 127): gives an average output of about 2.5 V at D-pin 3 (a PWM-capable pin).
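The arithmetic behind these numbers can be sketched as follows (an illustrative Python helper, not Arduino code):

```python
# Average DC level of an Arduino PWM output for a given analogWrite value.
# analogWrite takes 0-255, so the duty ratio is value/255.
def pwm_avg_voltage(value, vcc=5.0):
    duty = value / 255.0
    return duty * vcc

print(round(pwm_avg_voltage(127), 2))   # analogWrite(pin, 127)
print(round(pwm_avg_voltage(255), 2))   # analogWrite(pin, 255) = always HIGH
```

A value of 127 gives a duty ratio just under 50%, hence an average of about 2.49 V rather than exactly 2.5 V.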

Moreover, the timer functionalities can be utilized for triangle wave generation.


It is very saddening for us not to be able to finally check our results, and to terminate the project at 75% completion, due to unavoidable circumstances created by this COVID thing.

THE RESOURCES: How you can do it too?

A list of the important resources referred to in this project:

  1. MatLab 2020 download:
  2. MatLab official YouTube channel provides great lessons to master MatLab
  3. Matlab and Simulink introduction, free self-paced courses by MatLab:
  4. Simulink simulations demystified for analog circuits:
  5. Proteus introduction:
  6. MatLab with Arduino:
  7. Op-amp cook book: Handbook of Op-amp application, Texas Instruments

THE CONCLUSIONS: Very Important take-away


If we (you and us) desire to take on a venture into the unknown, something never done before, and plan to do it all alone, trust our words: failure is certain. It gets tough when we get stuck somewhere, and it only gets tougher.

We all have to find people who share our vision, share some interests, and whom we love to work alongside. We all must be part of a team, otherwise life won't be easy or pleasing. There is a great possibility of coming out a winner if we get into it as a team, and even if the team fails, at least we don't come out frustrated.

Each member brings their own special individual talent to contribute to the common aim: the ability to write code, to do the math, to simulate, to interpret results, to work on theory and on intuition, etc. Good teamwork is the recipe for building great things that work.

So, we conclude from the project that teamwork was the most crucial reason for the 75% completion of this venture, and we look forward to making it 100% asap.

Team-members: Vartik Srivastava, Anshuman Jhala, Rahul

Thank you ❤ Ujjwal, Hrishabh, Aman Mishra, Prakash for helping us resolve software-related issues.


Team CEV    

Pollution in Power Systems

Reading Time: 14 minutes


The Non-Sinusoids

What’s the conclusion?


THD and Power Factor

Harmonics Generation: Typical Sources of harmonics


Featured image courtesy: Internet


If we were in an ideal world, then we would have all honest people, no global issues like corona or the climate crisis, gas particles would have negligible volume (ideal gas equation), etc., and, in particular, in power systems we would have only sinusoidal voltage and current waveforms. 😅😅

But in this real, beautiful world we have a bunch of dear dishonest people, thousands die of epidemics, the globe keeps getting hotter, and gas particles do have volume; similarly, having pure sinusoidal waveforms is a luxury and an inconceivable feat in any large power system.


We have tried to start from the very beginning, so a strong will to understand is enough; but still, we suggest you go through the power quality blog once, as it will help develop some important insights.

Electrical Power Quality

Let’s go yoooo!!🤘🤘🤘

Now, why are we talking about the shape of waveforms? Well, you will figure it out on your own by the end; for now, let us just tell you that the non-sinusoidal nature of a waveform is considered pollution in an electrical power system, with effects ranging from overheating to whole systems ending up in large catastrophes.

Non-sinusoidal waveforms of currents or voltages are polluted waveforms.

But how can it be that the voltage applied across some load is sinusoidal, yet the current drawn is non-sinusoidal?

Hint: V= IZ

Yes, it is only possible if the impedance plays some tricks. So, the very first conclusion that can be drawn about systems that create electrical pollution is that they don't have constant impedance over one time period of the voltage cycle applied across them; hence they draw non-sinusoidal currents from the source. These systems are called non-linear loads or elements. Like this most popular guy:


The diode

Note that inductive and capacitive impedances are frequency-variant but remain fixed over a voltage cycle at fixed frequency; that's why resistors, inductors and capacitors are linear loads. In this modern era of the 21st century, the power system is cursed to be literally littered with these non-linear loads, and it is estimated that in the next 10-15 years 60% of the total load will be of the non-linear type; the aftermath of COVID-19 has not even been considered.

The list of non-linear loads includes almost all the loads you see around you, the gadgets- computers, TVs, music system, LEDs, the battery charging systems, ACs, refrigerators, fluorescent tubes, arc furnaces, etc. Look at the following waveforms of current drawn by some common devices:


Typical inverter Air-Conditioner current waveform (235.14 V, 1.871 A)

Source: Research Gate  


Typical Fluorescent lamp

Source: Internet


Typical 10W LED bulb

Source: Research Gate  


Typical battery charging system

Source: Research Gate


Typical Refrigerator

Source: Research Gate


Typical Arc furnace current waveform

Source: Internet   

Name any modern device (microwave oven, washing machine, BLDC fans, etc.) and its current waveform is severely offbeat from the desired sine type; given the number of such devices, electrical pollution becomes a grave issue for any power system. Now, pollution in electrical power systems is not a phenomenon of the 21st century; rather, electrical engineers struggled to check non-sinusoidal waveforms throughout the 20th century, and one can find a description of this phenomenon as early as 1916 in Steinmetz's ground-breaking research paper named "Study of Harmonics in three-phase Power System". However, the sources and reasons of power pollution have been ever-changing since then. In the early days transformers were the major polluting devices; now 21st-century gadgets have taken up that role, but the consequences have remained disastrous.

WAIT, WAIT, WAIT…. What’s that “Harmonics”?

Before we even introduce the harmonics, let's just apply our mathematical rigour to analyzing the typical non-sinusoidal waveforms we encounter in the power system.


From the blog on Fourier series, we were confronted with one of most fundamental laws of nature:

FOURIER SERIES: Expressing the alphabets of Mathematics

Any continuous, well-defined periodic function f(x) whose period is (a, a+2c) can be expressed as a sum of sine, cosine and constant components. We call this great universal truth the Fourier expansion; mathematically:

$$f(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi x}{c} + b_n\sin\frac{n\pi x}{c}\right]$$

Where,

$$a_0 = \frac{1}{2c}\int_{a}^{a+2c} f(x)\,dx,\qquad a_n = \frac{1}{c}\int_{a}^{a+2c} f(x)\cos\frac{n\pi x}{c}\,dx,\qquad b_n = \frac{1}{c}\int_{a}^{a+2c} f(x)\sin\frac{n\pi x}{c}\,dx$$

Square-wave, the output of the inverter circuits:


For a square wave of amplitude V, odd symmetry gives $a_0 = 0$ and $a_n = 0$.

For all even n:

$$b_n = 0$$

For all odd n:

$$b_n = \frac{4V}{n\pi}$$

Just for some minutes hold in mind the result’s outline:

$$f(t) = \frac{4V}{\pi}\left[\sin\omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right]$$
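These square-wave coefficients can be cross-checked numerically; the sketch below (amplitude and period chosen arbitrarily) integrates f(t)·sin(nωt) over one period using the midpoint rule:

```python
import numpy as np

# Numerical check of the square-wave Fourier coefficients:
# b_n = 4V/(n*pi) for odd n, and b_n = 0 for even n.
V, T = 1.0, 1.0          # arbitrary amplitude and period
N = 20000
t = (np.arange(N) + 0.5) * T / N          # midpoint sample times over one period
f = np.where(t < T / 2, V, -V)            # +V on the first half-cycle, -V on the second

def b(n):
    # b_n = (2/T) * integral over one period of f(t)*sin(2*pi*n*t/T)
    return 2.0 / T * np.sum(f * np.sin(2 * np.pi * n * t / T)) * (T / N)

for n in range(1, 6):
    print(n, round(b(n), 4))
```

The printed values match 4V/(nπ) for n = 1, 3, 5 and vanish for even n, exactly as derived above.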



We will draw some very striking conclusions.

Now consider a triangular wave:


The function can be described as:

Pollution in Power Systems

Calculating Fourier coefficients:

Pollution in Power Systems

Which again simplifies to zero.

Pollution in Power Systems

So, we have-

Pollution in Power Systems

Applying the integration for each interval and putting in the limits:

Pollution in Power Systems

For even n,


For odd n,




Pollution in Power Systems


For even n:




Are these equations kidding us???

For odd n:

Pollution in Power Systems

So finally, the summary of results for the triangle waveform case is as follows:

$$f(t) = \frac{8V}{\pi^2}\left[\sin\omega t - \frac{1}{9}\sin 3\omega t + \frac{1}{25}\sin 5\omega t - \cdots\right]$$

Did you notice that if these two waveforms were traced on the negative side of the time axis, they could be produced by:

$$f(-t) = -f(t)$$

This property of the waveforms is called odd symmetry. Since the sine wave has this same fundamental property, only sine components are found in the expansion.

Now consider this waveform:

Pollution in Power Systems

Unlike the previous two cases, if the negative side of this waveform had to be obtained, then it must be:

$$f(-t) = f(t)$$

Now this is identified as the even symmetry of a waveform, so which components do you expect, sine or cos???

The function can be described as:

Pollution in Power Systems

Here again,

Pollution in Power Systems

For the cos components:

Pollution in Power Systems

This equation reduces to:

Pollution in Power Systems

For the sine components:

Pollution in Power Systems

This equation reduces to zero for all even and odd "n".

Well, we had guessed it already 🤠🤠.

The summary of coefficients for the triangle waveform, which follows even symmetry, is as follows:

Pollution in Power Systems

Very useful conclusions:

  1. a0 = 0: for all waveforms which inscribe equal areas with the x-axis under the negative and positive half-cycles. This happens because the constant component is simply the algebraic sum of these two areas.
  2. an = 0: for all waveforms which follow odd symmetry. Cosine is an even-symmetric function; it simply can't be a component of a function which is odd-symmetric.
  3. bn = 0: for all waveforms which follow even symmetry. By the same logic, the sine function, which is itself odd-symmetric, cannot be a component of an even-symmetric function.
  4. The fourth, very critical conclusion can be drawn for waveforms which follow this:

$$f\left(t + \frac{T}{2}\right) = -f(t)$$

Where T is time period of waveform.

Then the even-order harmonics aren't present, only odd orders. This property is identified as half-wave symmetry, and it is present in most power-system signals.

Now, these conclusions are applicable to numerous current waveforms in the power system. Most of the devices we began with seem to follow the above properties: they are all half-wave symmetric and either odd or even. These conclusions result in great simplification while formulating the Fourier series of power-system waveforms.

So, consider a typical firing angle Current:

Pollution in Power Systems

So, apply the conclusions drawn to this case, given that the waveform has no half-wave symmetry but is odd-symmetric.

Pollution in Power Systems

The Harmonics

Hope you enjoyed utilizing the greatest mathematical tool and were amazed to break the intricate waveforms into fundamental sines and cosines.

“Like matter is made up of fundamental units called atoms, any periodic waveform consists of fundamental sine and cosine components.”

It is these components of any waveform which we call, in electrical engineering language, the harmonics.

Pollution in Power Systems

Mathematics gives you cheat codes to understand and analyze the harmonics. It simply opens up the whole picture to the very minute details.

So, what are we going to do now, after calculating the components, the harmonics?

First of all, we need to quantify how much harmonic content is present in the waveform. The term coined for this purpose is total harmonic distortion:

THD, total harmonic distortion:

It is a self-explanatory ratio: the ratio of the RMS of all harmonics to the RMS value of the fundamental.

Now, since harmonics are sine or cosine waves only, the RMS of the nth harmonic of amplitude $A_n$ is simply:

$$A_{n,\mathrm{rms}} = \frac{A_n}{\sqrt{2}}$$

By the same definition, the RMS of the fundamental becomes:

$$A_{1,\mathrm{rms}} = \frac{A_1}{\sqrt{2}}$$

So, THD is:

$$\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty} A_n^2}}{A_1}$$
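As a quick sketch, here is the THD computed from the first few harmonic amplitudes of an ideal square wave (relative amplitudes 1, 1/3, 1/5, ...; truncating at the 9th harmonic slightly underestimates the exact value of about 48%):

```python
import math

# THD from harmonic amplitudes: sqrt(sum of squares of harmonics 2..n) / fundamental.
# Amplitudes here are the ideal square-wave values relative to the fundamental.
A = [1.0, 0.0, 1/3, 0.0, 1/5, 0.0, 1/7, 0.0, 1/9]   # A[0] is the fundamental
thd = math.sqrt(sum(a * a for a in A[1:])) / A[0]
print(f"THD = {thd:.1%}")
```

Note the ratio is scale-invariant, so peak or RMS amplitudes give the same THD.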

The next thing we are concerned about is power. So, we need to find the impact of harmonics on power transferred.

Power and the Power Factor

Power and power factor are intimately related; it is impossible to talk about power and not about power factor.

So, the conventional power factor for any load (linear or non-linear) is defined as the ratio of active power to apparent power. It is basically an indicator of how well the load utilizes the current it draws; this statement is consistent with the fact that a high-pf load draws less current for the same real power developed.

$$\mathrm{pf} = \frac{P}{S}$$


  1. Active power is the average of instantaneous power over a cycle:

$$P = \frac{1}{T}\int_{0}^{T} v(t)\,i(t)\,dt$$

Assuming the sinusoidal current and voltage have a phase difference of $\theta$, the integration simplifies to:

$$P = V_{\mathrm{rms}}\,I_{\mathrm{rms}}\cos\theta$$

2. Apparent power is, by its name, simply the VI product; since the quantities are AC, RMS values are used:

$$S = V_{\mathrm{rms}}\,I_{\mathrm{rms}}$$

The pf becomes cos(theta) only when the waveforms are sinusoidal.

NOTE: The assumption must be kept in mind.

So, what happens when the waveforms are contaminated by harmonics:

There are many theories for defining power when harmonics are considered. Advanced ones are very accurate, and older ones are approximate but equally insightful.

Let the RMS values of the fundamental, second, ..., nth harmonic components of the voltage and current waveforms be:

$$V_1, V_2, \ldots, V_n \quad\text{and}\quad I_1, I_2, \ldots, I_n$$

The most accepted theory defines instantaneous power as:

$$p(t) = v(t)\,i(t) = \left[\sum_{n}\sqrt{2}\,V_n\sin(n\omega t + \alpha_n)\right]\left[\sum_{n}\sqrt{2}\,I_n\sin(n\omega t + \beta_n)\right]$$

Expanding and integrating over a cycle cancels all the cross products of sines at different frequencies, and the active power reduces to:

$$P = \sum_{n=1}^{\infty} V_n I_n \cos\theta_n,\qquad \theta_n = \alpha_n - \beta_n$$

Apparent power remains the same mathematically:

$$S = V_{\mathrm{rms}}\,I_{\mathrm{rms}} = \sqrt{\sum_n V_n^2}\;\sqrt{\sum_n I_n^2}$$

Including the definitions of THD for voltage and current, the equation modifies to:

$$S = V_1 I_1\sqrt{1+\mathrm{THD}_V^2}\;\sqrt{1+\mathrm{THD}_I^2}$$

Now this theory uses some important assumptions to simplify the results, which are quite reasonable for particular cases.

  1. Harmonics contribute negligibly little to active power, so neglecting the higher terms:

$$P \approx V_1 I_1 \cos\theta_1$$

2. For most devices the terminal voltage doesn't suffer very high distortion, even though the current may be severely distorted. More on this in the next section, but for now:

$$\mathrm{THD}_V \approx 0 \;\Rightarrow\; S \approx V_1 I_1\sqrt{1+\mathrm{THD}_I^2}$$


$$\mathrm{pf} = \frac{P}{S} \approx \frac{\cos\theta_1}{\sqrt{1+\mathrm{THD}_I^2}}$$


The power factor for a non-linear load thus depends upon two factors: one is cos φ (the displacement factor), and the other is the current distortion factor.

If we wish to draw less current, we need a high overall power factor. Once the cos φ component is maximized to one, the distorted current sets the upper limit for the true power factor. The following data will help you visualize how significant the current distortions are.

Pollution in Power Systems                                           Pollution in Power Systems

Notice the awful THD for these devices; clearly, it severely reduces the overall pf.

However, these dinky-pinky household electronic devices are of low power rating, so the current drawn is not so significant; if they were high-powered, it would have been a disaster for us.

NOTE: For most of the devices listed above, the assumptions are solidly valid.
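A minimal numeric sketch of this two-factor power factor (the device numbers here are made up for illustration):

```python
import math

# True power factor = displacement factor * distortion factor:
# pf = cos(theta1) / sqrt(1 + THD_I^2), assuming negligible voltage distortion.
def true_pf(cos_theta1, thd_i):
    return cos_theta1 / math.sqrt(1.0 + thd_i ** 2)

# Hypothetical gadget: unity displacement factor but 100% current THD
print(round(true_pf(1.0, 1.0), 3))
```

Even with a perfect displacement factor, a current THD of 100% caps the true power factor at about 0.707, which is why capacitor-based correction alone cannot fix a distorted load.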

Are you thinking of adding a shunt capacitor across your laptop or other electronic gadgets to improve the power factor and lower your electricity bill? For God's sake, don't ever try it; your capacitor will be blown into the air. Later we will understand why!!!

These harmonics, through a phenomenon of "harmonic resonance" between the system and the capacitor banks, amplify horribly. Numerous industrial catastrophes have occurred, and still continue to happen, because people ignore harmonic resonance.

Our Prof. Rakesh Maurya had been involved in solving one such capacitor-bank burnout issue with an Adjustable Speed Drive (ASD) at L&T.

Harmonics Generation: Typical Sources of harmonics

Most of the time in electrical engineering transformers and motors are not visualized as:

Pollution in Power Systems    Pollution in Power Systems

Instead, it is preferred to see transformers and electrical motors like this, respectively:

Pollution in Power Systems   Pollution in Power Systems 

These diagrams are called equivalent circuits; these models are simply abstractions developed to let us calculate power flow without considering many unnecessary minute details.

The soul of these models lies in some assumptions which let us ignore those minute details, simplify our lives, and give results with acceptable error.

Try to recall those assumptions we learned in our classrooms.

The reasons for harmonics generation by these beasts lie in those minute details.


It is only under the assumption of "no saturation" that a sinusoidal voltage applied across the primary gives us a sinusoidal voltage at the secondary.

Sinusoidal Pri. Voltage >>> Sinusoidal Current >>> Sinusoidal Flux >>> Sinusoidal Induced Sec. EMF 

With advancements in material science, special core materials are now available which rarely saturate, but the older, conventional cores saturated many times and were observed to generate mainly 3rd harmonics.

Details right now are beyond our team’s mental capacity to comprehend.

Electrical Motors

From the standpoint of the cute equivalent circuit, electrical motors seem so innocent: a simple RL load, certainly not capable of introducing any harmonics. But as stated, this abstraction is a mere approximation to obtain performance characteristics as quickly and reliably as possible.

Remember, while deriving the air-gap flux density it was assumed that the spatial distribution of MMF due to the balanced winding is sinusoidal; more accurately it is trapezoidal, and only the fundamental was considered. Due to this and many other imperfections, a motor is observed to produce largely 5th harmonics.

NOTE: Third harmonics and their multiples are completely absent in three-phase IMs. Refer to the notes.


Disgusting, they don’t need any explanation. 😏😏😏


Power Loss

The most common, though least impactful, effect of power harmonics is increased power loss, leading to heating and decreased efficiency of the non-linear devices that cause them; later we will learn that it also affects the linear devices connected to the synchronous grid.

The Skin Effect:

Lenz's law states that a conducting loop/coil always opposes the change in magnetic flux linked by it, by inducing an EMF which drives a current.

Consider a rectangular two-wire system representing a transmission line, each wire having a circular cross-section and carrying a DC current I.

Now one loop is quite obviously visible, the big rectangular one. The opposition to change in magnetic field linked by this loop gives us transmission line inductance.


At frequencies relatively higher than the 50 Hz power frequency, another kind of current loop begins to magnify, and as we said, this causes another type of inductance.

Look closely: the magnetic field inside the conducting wire is also changing. As a result, loops of current called eddy currents are set up inside the conductor itself, which lead to some dramatic impact.


Consider two current elements dx, at distances r and R from the center. Which current element will face greater opposition from the eddy currents, given their changing nature?

[Image: current elements at radii r and R inside the conductor]

Yes, the element lying closer to the center, since more loop area is available for the eddy currents there. This difference in opposition to different elements causes the current distribution inside the conductor to shift towards the surface, where the eddy-current opposition is least.

A technical account of this skin effect goes as follows:

  1. The flux linked by the current flowing in the central region is more than that linked by the current elements in the outer region of the cross-section;
  2. Larger flux linkage leads to higher reactance in the central area than at the periphery;
  3. Hence the current chooses the path of least impedance, i.e. the surface region.

The eddy current phenomenon is quite prevalent in AC systems. Since AC systems are bound to have changing magnetic fields, eddy currents are induced everywhere, from conductors to transformer cores to motor stators, etc.

Now when higher-frequency harmonic components are present in the current, the skin effect becomes quite magnified: most of the current takes the surface path, as if the central region were not available. This is equivalent to a reduced cross-section, i.e. increased resistance, and hence magnified Joule heating (I²R). Thus heating increases considerably through these layer-upon-layer causes (one leads to another).
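To put rough numbers on this, the textbook skin-depth formula δ = √(ρ / (π f μ)) shows how the useful cross-section shrinks at harmonic frequencies (the copper constants below are standard values; this simple formula ignores proximity effects):

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm*m
MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth(f_hz, rho=RHO_CU, mu=MU0):
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(rho / (math.pi * f_hz * mu))

for n in (1, 3, 5, 7):     # harmonic order on a 50 Hz system
    f_hz = 50 * n
    print(f"harmonic {n} ({f_hz} Hz): skin depth = {skin_depth(f_hz) * 1000:.1f} mm")
```

At 50 Hz the depth is about 9.2 mm, but at the 7th harmonic it drops to about 3.5 mm, so thick conductors lose much of their interior to the harmonic currents.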

Other grave effects include false tripping, unexplained failures due to the mysterious harmonic resonance.

All of this motivated us to build our own harmonic analyzer; follow up in the next blog.

Wonder, Think, Create!!!

Team CEV


Let’s Torrent

Reading Time: 4 minutes

We have all encountered this technology while downloading a favourite movie that wasn't available elsewhere. It is one of the most remarkable data-sharing technologies ever conceived and brought to reality.


“BitTorrent is a communication protocol for peer-to-peer file sharing (P2P) which is used to distribute data and electronic files over the Internet in a decentralized manner.”

The protocol came into existence in 2001 (thanks to Bram Cohen) and is an alternative to the older single-source, multiple-mirror technique for distributing data.

A Few terms

  • BitTorrent or Torrent: Well, BitTorrent is the protocol as per its definition, whereas Torrent is the initiating file which has the metadata(source) of the file. 
  • BitTorrent clients: A computer program that implements the BitTorrent protocol. Popular clients include μTorrent, Xunlei Thunder, Transmission, qBittorrent, Vuze, Deluge, BitComet, and Tixati.
  • Seed: A peer that has a complete copy of the file; to “seed” is to upload it to other peers.
  • Seeding: Uploading the file by a peer after their downloading is finished.
  • Peer: (The downloader) Peer can refer either to any client in the swarm or specifically to a downloader, a client that has only parts of the file.
  • Leecher: Similar to a peer, but these guys have a poor share ratio, i.e. they don't contribute much to uploading and mostly just download files.
  • Swarm: The group of peers.
  • Endgame: an algorithm applied for downloading the last pieces of a file. (Not Taylor Swift's Endgame.)
  • Distributed Hash Tables (DHTs): A decentralized distributed lookup system. In layman's language, a DHT maps keys to values across many nodes, letting peers find each other without a central tracker.


Let’s have the gist of what happens while torrenting.

The following GIF explains this smoothly.

Let’s Torrent

First, the server sends the pieces (colored dots) of the file to a few users (peers). After successfully downloading a piece of the file, they are ready to act as seeders, uploading that piece to other users who need it.

As each peer receives a new piece of the file, it becomes a source (of that piece) for other peers i.e., the user becomes seeder, giving a sigh of relief to the original seed from having to send that piece to every computer or user wishing a copy.

In this way, the server load is massively reduced and the whole network is boosted as well.

Once a peer is done downloading the complete file, it can in turn function as a seed, i.e. start acting as a source of the file for other peers.

Speed comparison:
Regular download vs BitTorrent Download

Download speed for BitTorrent increases as more peers join the swarm. It may take time to establish connections, and for a node to receive sufficient data to become an effective uploader. This approach is particularly useful in the transfer of larger files.

Regular download starts promptly and is preferred for smaller files. Max speed is achieved promptly too.

Benefits over regular download

  • Torrent networking doesn't depend on a single server, the load being distributed among the peers. Data is downloaded from peers, which eventually become seeds.
  • Torrent files are open source and ad-free. An engrossing fact is that TamilRockers use torrents to act as a Robin Hood for pirated movies and songs, which is, of course, an offence.
  • Torrent judiciously uses the upload bandwidth to speed up the network: after downloading, the peers’ upload bandwidth is used for sending the file to other peers. This reduces the load on the main server.
  • A File is broken into pieces that helps in resuming the download without any kind of data loss, which in turn makes BitTorrent certainly useful in the transfer of larger files.

Torrenting or infringing?

Using BitTorrent is legal, though downloading copyrighted material isn't. So torrenting in itself isn't infringing.

Most BitTorrent clients DO NOT support anonymity; the IP addresses of all peers are visible in the firewall program. No need to worry, though: the Indian govt. has clarified that streaming a pirated movie is not illegal.

Talking about the security concerns, each piece is protected by a cryptographic hash contained in the torrent descriptor. This ensures that modification of any piece can be reliably detected, and thus prevents both accidental and malicious modifications of any of the pieces received at other nodes. If a node starts with an authentic copy of the torrent descriptor, it can verify the authenticity of the entire file it receives.
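That per-piece hash check can be sketched in a few lines of Python (the tiny PIECE_LEN and helper names are ours for illustration; real torrents use pieces of 16 KiB and up, with the SHA-1 digests stored in the .torrent metadata):

```python
import hashlib

PIECE_LEN = 4  # toy piece length; real torrents use 16 KiB or larger pieces

def piece_hashes(data: bytes) -> list:
    """Split a file into fixed-size pieces and record each piece's SHA-1,
    as stored in the torrent descriptor."""
    pieces = [data[i:i + PIECE_LEN] for i in range(0, len(data), PIECE_LEN)]
    return [hashlib.sha1(p).digest() for p in pieces]

def verify_piece(index: int, received: bytes, hashes: list) -> bool:
    """A client accepts a downloaded piece only if its hash matches."""
    return hashlib.sha1(received).digest() == hashes[index]

original = b"hello torrent world!"
hashes = piece_hashes(original)

print(verify_piece(0, b"hell", hashes))   # True: authentic piece
print(verify_piece(0, b"hack", hashes))   # False: tampered piece rejected
```

A tampered piece is simply discarded and re-requested from another peer, which is how the swarm stays trustworthy without trusting any individual peer.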

Further Reading:                    

IPFS is not entirely new but is still not widely used.
Read it here on medium.

Written by Avdesh Kumar

Keep Thinking!

Keep Learning!



Reading Time: 5 minutes

IoT Overview

We are living in a world where technology is developing exponentially. You might have heard the word IoT, Internet of Things. You might have heard about driverless cars, smart homes, wearables.

The Internet of Things is a system of interrelated computing devices, mechanical and digital machines provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.

IoT is also used in many places such as farms, hospitals and industries. You might have heard about smart city projects too (in India). We use lots of sensors, embedded systems, microcontrollers and many other devices, connecting them to the internet to use that data and improve our current technology.

Our sensors capture lots of data, which is then used depending on the user or owner. But what if I told you this technology can be harmful too? It may or may not be safe to use. How?

Data travelling over IoT from source to destination can be intercepted in between, and can be altered too. This is harmful when the data is very important. For example, a patient's reports generated using IoT can be intercepted and altered so that the doctor cannot give the correct treatment. Also, some IoT devices used by the army transfer very secret data; if it gets leaked, it can create trouble for the whole country.

The Information-technology Promotion Agency of Japan (IPA) has ranked “Exteriorization of the vulnerability of IoT devices” as 8th in its report entitled “The 10 Major Security Threats”.

So, can we just stop using IoT? No, we can't. We have to secure our data, i.e. encrypt it, so an eavesdropper can never know what we are transferring.

Cryptography Overview :

Cryptography is a method of Protecting information and communications through the use of codes, so that only those for whom the information is intended can read and process it.

There are mainly two types of encryption methods.

  1. Symmetric key
  2. Asymmetric key 

The symmetric method uses the same secret key to encrypt and decrypt data, while the asymmetric method has one public key and one private key. A public key is used to encrypt data and is not a secret; anyone can have it and use it to encrypt data, but only the private key (of the same person whose public key was used) can decrypt the resulting ciphertext.
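As a toy illustration of the symmetric case only (deliberately insecure, for intuition rather than protection), the XOR cipher below shows a single shared key doing both jobs:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key.
    The SAME key both encrypts and decrypts, since XOR is its own inverse.
    For illustration only; never use this in place of a real cipher."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

secret_key = b"shared-secret"
plaintext = b"meter reading: 42 kWh"

ciphertext = xor_cipher(plaintext, secret_key)
recovered = xor_cipher(ciphertext, secret_key)   # same key reverses it
print(recovered == plaintext)  # True
```

A real IoT deployment would use a vetted cipher such as AES or one of the lightweight ciphers discussed below.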

In cryptography, we usually have a plaintext and we use some functions, tables and keys to generate ciphertext, depending on our encryption method. Also, in order to make our data exchange totally secure, we need a good block cipher, a secure key exchange algorithm, a hash algorithm and a message authentication code.


Block cipher – It is a computable algorithm to encrypt a plaintext block-wise using a symmetric key. 

Key Exchange Algorithm – It is a method to share a secret key between two parties in order to allow the use of a cryptography algorithm. 

Hash Algorithm – It is a function that converts a data string into a numeric string output of fixed length. The hash data is much much smaller than the original data. This can be used to produce message authentication schemes.

Message Authentication Code (MAC) – It is a piece of information used to authenticate the message. Or in simple words, to check that the message came from the expected sender and the message has not been changed by any eavesdropper.   
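Both the fixed-length property of hashes and tamper detection with a MAC can be seen with Python's standard hashlib and hmac modules (the key and messages here are made-up examples):

```python
import hashlib
import hmac

key = b"shared-secret-key"
message = b"door opened: count=152"

# The sender attaches a MAC computed over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(len(tag))  # 64 hex chars: digest length is fixed regardless of message size

# The receiver recomputes the MAC; a tampered message no longer matches.
tampered = b"door opened: count=999"
tag2 = hmac.new(key, tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, tag2))  # False: modification detected
```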

NOTE: you might wonder why we don't just send data using key exchange algorithms, since they are reliable for sharing secret keys. You can search for the details, but in short: it is neither reliable nor secure to share data itself using key exchange algorithms.

LightWeight Cryptography:

Encryption is already applied at the data link layer of communication systems such as the cellphone network. Even so, encryption in the application layer is effective in providing end-to-end data protection from the device to the server, and ensures security independently of the communication system. In that case, encryption must run on the processor handling the application, using whatever resources are spare, and hence should desirably be as lightweight as possible.

There are several constraints on achieving encryption in IoT.

  1. Power Consumption
  2. Size of RAM / ROM
  3. Size of the device
  4. Throughput, Delay

Embedded systems are available in the market with 8-bit, 16-bit or 32-bit processors, each with their own uses. Suppose we have implemented a system of automated doors at a bank which open and close automatically, and which also counts how many people entered or left. We want to keep this record secret and store it on the cloud. Using 1 GB of RAM and a 32/64-bit processor with a generous ROM just to ensure the privacy of this data doesn't make sense here: we would need more space to install our setup and spend far more money than we should, when the same thing can be achieved with a cheaper RAM, ROM and processor.

Keeping the above points in mind, implementing in IoT the conventional cryptography used for mobile phones, tablets, laptops/PCs and servers is not possible. We have to develop a separate field, “lightweight cryptography”, which can be used in sensor networks, embedded systems, etc.

Applying encryption to sensor devices means the implementation of data protection for confidentiality and integrity, which can be an effective countermeasure against the threats. Lightweight cryptography has the function of enabling the application of secure encryption, even for devices with limited resources.


Talking about AES: it usually takes 128-bit keys with a 128-bit block size, and uses 10 rounds of steps like SubBytes, ShiftRows, MixColumns and AddRoundKey. Implementing this requires a good amount of space, processing speed and power. We could implement it in IoT with a reduced key length or block size, but then it would take less than 30 minutes to break.


There are many lightweight cryptography algorithms, such as TWINE, PRESENT, HIGHT, etc. Discussing all of them would require a series of blogs, but I am adding a table comparing some lightweight ciphers. You can observe that a change in block size from 64 to 96 can create a huge difference in power consumption and area requirement.

Lightweight cryptography has received increasing attention from both academia and industry over the past two decades. There is no standard lightweight cryptosystem like AES in conventional cryptography; research is still going on. You can get updates on the progress at

The whole idea behind this blog is to discuss lightweight cryptography and overview of it. 🙂

Author: Aman Gondaliya

Keep reading, keep learning!


FPGA – An Overview (1/n)

Reading Time: 7 minutes


Field Programmable Gate Arrays, popularly known as FPGAs, are taking the market by storm. They are widely used nowadays due to their simplicity of reuse and reconfiguration. Simply put, FPGAs allow you flexibility in your designs and are a way to change how parts of a system work without introducing a large amount of cost and risk of delays into the design schedule. FPGAs were first conceptualized and fabricated by Xilinx in the late 80s, and since then other big companies such as Altera (now Intel), Qualcomm and Broadcom have followed suit. From industrial control systems to advanced military warheads, from self-driving cars to wireless transceivers, FPGAs are everywhere around us. With knowledge of digital design and a Hardware Description Language (HDL), such as Verilog HDL or VHDL, we can configure our own FPGAs. Though first thought of as the domain of electronics engineers only, FPGAs can now be programmed by almost anyone, thanks to the substantial leaps in OpenCL (Open Computing Language).

I have tried to lay down the concept in terms of 5 questions, to cover the majority of the spectrum.

What is an FPGA exactly?

An FPGA is a semiconductor device on which any function can be defined after manufacturing. An FPGA enables you to program new product features and functions, adapt to new standards and reconfigure hardware for specific applications even after the product has been installed in the field, hence the term field programmable. Gate arrays are 2-dimensional arrays of logic gates that can be used in any way we wish. An FPGA consists of 2 parts, one customizable (containing programmable logic) and another non-customizable. Simply put, it is an array of logic gates and wires which can be modified in any way, according to the designer.

Customizable Part

As rightfully said by Andrew Moore, you can build almost anything digital with three basic components – wires (for data transfer), logic gates (for data manipulation) and registers (for storage reasons). The customizable part consists of Logic Elements (LEs) and a hierarchy of reconfigurable interconnects that allow the LEs to be physically connected. LEs are nothing but a collection of simple logic gates. From simply ANDing/ORing 2 pulses to sending the latest SpaceX project into space, logic gates, if programmed correctly and smartly, can do anything. 
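A hedged software analogy may help here: the look-up tables (LUTs) inside logic elements are small truth-table memories, so “programming” an LE amounts to filling in a table. This toy 2-input model (ours, ignoring flip-flops and carry chains) shows the same fabric being rewired into a half-adder:

```python
# A 2-input LUT is just a 4-entry truth table; programming the FPGA
# amounts to filling in these entries.

def make_lut(truth_table):
    """truth_table[i] is the output for input bits (a, b), where i = 2*a + b."""
    return lambda a, b: truth_table[2 * a + b]

AND = make_lut([0, 0, 0, 1])
XOR = make_lut([0, 1, 1, 0])

# "Rewire" the same hardware into a half-adder: sum = XOR, carry = AND.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): sum 0, carry 1
```

Refilling the tables (and rerouting the interconnect) gives you an entirely different circuit on the same silicon, which is the essence of reconfigurability.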

Non-customizable Part

The non-customizable part contains hard IPs (intellectual property) which provide rich functionality while reducing power and lowering cost. Hard IP generally consists of memory blocks (like DRAMs), calculating circuits, transceivers, protocol controllers, and even whole multicore microprocessors. These hard IPs free the designer from reinventing these essential functions every time they want to make something, as these things are commodities in most electronic systems.

As a designer, you can simply choose whichever essential functionality you want in your design, and can implement any new functionality from the programmable logic area.

Why are FPGAs gaining popularity?


Electronics are entering every field. Consider the example of a car. Nowadays, every function of a car is controlled by electronics. Drivetrain technologies like the engine, transmission, brakes, steering and tires use electronics to control and monitor essential conditions: the amount of fuel required, optimal air pressure according to usage and surroundings, smooth transmission and even better brakes are all achieved this way. Infotainment in cars is also gaining popularity, with real-time traffic displays, digital controls, and comfort and cruise settings adapted to the driver's conditions, as are modern driving aids like lights, back-up assistance, lane-exit guidance and collision avoidance. We also use sensors like cameras, LASERs and RADARs for optimal driving and parking.

A lot to digest, isn’t it?

All these technologies are implemented on an SoC (System on Chip). But suppose a better way for gear transmission comes out, or a better algorithm for predictive parking, or the government changes its guidelines about the speed limit for cruise control or fuel usage. We can't change the entire SoC just for some versions. Moreover, these “updates” come often, and we can't build a new, custom-made SoC every time: the time required to build one would grow, the design and cost load would increase, and on top of it all, the entire system would have to be replaced.

Our humble FPGA comes to the rescue here. SoC FPGAs can implement changes in specific parts without affecting the other parts, reducing design and time load, and most important of all, allowing reuse of the same hardware by reconfiguring the requisite changes.

FPGAs are gaining popularity because

1. They are reconfigurable in real-time

2. They cost less in the long run compared to ASICs (Application Specific Integrated Circuits). Though ASICs are faster than FPGAs and consume less power, they are not reconfigurable, meaning once made, we can't add/remove or update any functionalities.

3. They reduce the design work and design time considerably due to inbuilt hard IPs

4. You can build exactly whatever you need using an FPGA.

When was the 1st FPGA fabricated?

FPGAs were a product of advances in PROMs (Programmable Read-Only Memory) and PLDs (Programmable Logic Devices). Both had the option of being programmed in batches or in the field (thereby, field-programmable); however, their programmable logic was hardwired between logic gates.

Altera (now Intel) delivered the industry’s first reprogrammable device – the EP300, which allowed the user to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.

Ross Freeman and Bernard Vonderschmidt (Xilinx co-founders) invented the 1st commercially viable FPGA in 1985 – the legendary XC2064. The XC2064 had programmable gates and programmable interconnects between gates, which marked the beginning of new technology and market. 


The 90s saw rapid growth for FPGAs, both in terms of circuit sophistication and volume of production. They were mainly used in the telecommunications and networking industries, due to their reconfigurability, as these industries demanded changes often and sometimes in real time.

By the dawn of the new millennium, FPGAs found their way into consumer, automobile and industrial applications.

In 2012, the first complete SoC (System on Chip) chip was built from combining the logic blocks and interconnects of traditional FPGA with an embedded microprocessor and related peripherals. A great example of this would be Xilinx Zynq 7000 which contained 1.0 GHz Dual Core ARM Cortex A9 microprocessor embedded with FPGA’s logic fabric.


Since then, the industry has never looked back, seeing unforeseen growth and applications in recent years.

Where are FPGAs used?

FPGAs are used wherever there is a need for frequent reconfiguration, or for adding new functions without affecting other functionalities. The car functionalities discussed earlier are a great example in terms of consumer usage.

They are widely used in industries too. Let's take the example of an SoC FPGA for a motor control system, which is used in every industry. It includes a built-in processor that manages the feedback and control signals. The processor reads the data from the feedback system and runs an algorithm to synchronize the movement of the motors as well as control their rotation speeds. By using an SoC FPGA, you can build your own IP that can be easily customized to work on other motor controls. There are several advantages to using an SoC FPGA for motor control instead of a traditional microcontroller, viz. better system integration (remember the customizable areas in FPGAs?), scalable performance (rapid and real-time reconfigurability) and comparatively better functional safety (computing real-time data with industrial regulations in mind).

Any computable problem can be solved using an FPGA. Their advantage lies in that they are significantly faster for some applications because of their parallel nature and optimality in terms of the number of gates used for certain processes.

Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share the computation between the FPGA and a generic processor (Bing, for example, uses FPGAs to accelerate its search algorithm). FPGAs are also seeing increased use as AI accelerators for artificial neural networks in machine learning applications.

How can you configure an FPGA yourself (and why do it anyway)?

As we know, to make any chip out of logic gates we need a Hardware Description Language such as Verilog HDL or VHDL. These languages are generally known only by people with electronics engineering backgrounds, keeping these magnificent pieces of machinery away from other engineers and increasing the need for a heterogeneous environment for exploiting hardware. OpenCL (developed by Apple Inc.), a pioneer in this field, is a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, DSPs, FPGAs and other types of processors. OpenCL includes a language for developing kernels (functions that execute on hardware devices) as well as application programming interfaces (APIs) that allow the main program to control the kernels. OpenCL allows you to develop your code in the familiar C programming language. Then, using the additional capabilities it provides, you can separate your code into normal software and kernels that execute in parallel. These kernels can be sent to the FPGA without you having to learn the low-level HDL coding practices of FPGA designers.

Sounds too much? Let’s simplify the stuff.

Many of you have had experience with Arduino or similar small microcontroller projects. With these projects, you usually breadboard up a small circuit, connect it to your Arduino, and write some code in C to perform the task at hand. Typically your breadboard can hold just a few discrete components and small ICs. Then you go through the pain of wiring up the circuit and connecting it to your Arduino with a bird's nest of jumper wires.

Instead, imagine having a breadboard the size of a basketball court or football field to play with and, best of all, no jumper wires. Imagine you can connect everything virtually. You don’t even need to buy a separate microcontroller board; you can just drop different processors into your design as you choose. Now that’s what I’m talking about!

Welcome to the world of FPGAs!



Electric Vehicle

Reading Time: 9 minutes


The future with fossil fuels isn't kind to human beings; we need electric vehicles!

A talk by

Apurva Randeria

Ashutosh Desai

What is an electric vehicle?

Connect an electric motor to the wheels and supply power from a battery. That's it! Boom!

But what else is there inside an electric vehicle? We will see each part in turn.

Why electric vehicles?

We already have the technology we need to cure our addiction to oil, stabilize the climate and maintain our standard of living, all at the same time. By transitioning to sustainable technologies, such as solar and wind power, we can achieve energy independence and stabilize human-induced climate change.

So whenever we think about electric vehicles, the first thing that comes to mind is Tesla.

Tesla, Inc. (formerly Tesla Motors, Inc.), is an American electric vehicle and clean energy company based in Palo Alto, California.

The company specializes in electric vehicle manufacturing, battery energy storage from home to grid scale and, through its acquisition of SolarCity, solar panel and solar roof tile manufacturing.

Who founded Tesla?


[Image: Tesla's founders]

Tesla Motors was founded in July 2003 by engineers Martin Eberhard and Marc Tarpenning. The company’s name is a tribute to Serbian inventor and electrical engineer Nikola Tesla. Elon Musk was responsible for 98% of the initial funding, and served as chairman of the board.

Which other companies make electric vehicles?

The electric car snowball has been growing rapidly in recent years and, at this point, it's only a matter of time before the trend becomes the norm. While nowadays most carmakers offer some sort of electrification in their lineups, 2020 is expected to bring a sustained push in this direction, with more and more manufacturers joining (or strengthening) the electric bandwagon.

Tesla is the most popular electric car manufacturer. It has made its name with successful cars: the Model S, Model 3, Model X, Model Y and, most recently, the Cybertruck. Besides Tesla, the following companies have also entered the EV market.

These are some of the coolest electric cars we could ever imagine:

Audi E-tron GT (582 hp), Tesla Roadster (0-60 mph (96 km/h) in 1.9 seconds, 0-100 mph (161 km/h) in 4.2 seconds), RIMAC C-Two, BMW Vision Next, Lamborghini Terzo Millennio (the concept car).

[Image: car manufacturers involved in electric vehicles]

These are the most popular car manufacturers across the world involved in electric vehicles.



The following shows the efficiency of an electric vehicle:

[Image: electric vehicle efficiency figures]


Battery technology used in electric vehicles:

In the image, the small units are small batteries connected in series and parallel as per the requirement.

[Image: an EV battery pack]

What does a single battery unit look like?

[Image: a single battery cell]


These batteries are lithium-ion batteries. But why is lithium-ion used?

Lithium-ion batteries are popular because they have a number of important advantages over competing technologies: they're generally much lighter than other types of rechargeable batteries of the same size, which translates into a very high energy density.

John Goodenough, Akira Yoshino and Stanley Whittingham won the 2019 Nobel Prize in Chemistry “for the development of lithium-ion batteries”.

Comparison of lead-acid 🔋 and lithium-ion 🔋 batteries:

[Image: lead-acid vs lithium-ion comparison table]



Mostly lithium-ion 🔋 are used in EVs.

There are challenges too: temperature management, cell failures, state of charge, discharge rate and cell aging.

Lithium-ion batteries have proved to be the battery of choice for electric vehicle manufacturers because of their high charge density and low weight. Even though these batteries pack a lot of punch for their size, they are highly unstable in nature. It is very important that they are never overcharged or over-discharged under any circumstance, which brings in the need to monitor their voltage and current. This gets tougher because a lot of cells are put together to form a battery pack in an EV, and every cell should be individually monitored for safe and efficient operation, which requires a special dedicated system called the Battery Management System. Also, to get the maximum efficiency from a battery pack, we should completely charge and discharge all the cells at the same time at the same voltage, which again calls for a BMS.

So the Battery Management System comes into the picture.

Battery Management System (BMS) :




There are a lot of factors that are to be considered while designing a BMS. The complete considerations depend on the exact end application in which the BMS will be used. Apart from EV’s BMS are also used wherever a lithium battery pack is involved such as a solar panel array, windmills, power walls etc. Irrespective of the application a BMS design should consider all or many of the following factors.

Discharging Control: The primary function of a BMS is to keep the lithium cells within the safe operating region. For example, a typical lithium 18650 cell has an undervoltage rating of around 3V. It is the responsibility of the BMS to make sure that none of the cells in the pack get discharged below 3V.

Charging Control: Apart from discharging, the charging process should also be monitored by the BMS. Most batteries get damaged, or have their lifespan reduced, when charged inappropriately. For lithium batteries a 2-stage charger is used. The first stage is Constant Current (CC), during which the charger outputs a constant current to charge the battery. When the battery is nearly full, the second stage, Constant Voltage (CV), is used, during which a constant voltage is supplied to the battery at a very low current. The BMS should make sure that neither the voltage nor the current during charging exceeds permissible limits, so as not to overcharge or fast-charge the batteries. The maximum permissible charging voltage and charging current can be found in the datasheet of the battery.

State-of-Charge (SOC) Determination: You can think of SOC as the fuel indicator of the EV. It tells us the remaining capacity of the pack in percentage, just like the battery indicator on our mobile phones. But it is not as easy as it sounds. The voltage and charge/discharge current of the pack must always be monitored to predict the capacity of the battery. Once the voltage and current are measured, there are a lot of algorithms that can be used to calculate the SOC of the battery pack; the most commonly used is the coulomb counting method, and we will discuss more on this later in the article. Measuring these values and calculating the SOC is also the responsibility of the BMS.
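A minimal sketch of coulomb counting (the function and sign convention are our illustration; a production BMS also corrects the integration drift against voltage-based estimates):

```python
def coulomb_count(soc_init, capacity_ah, samples):
    """Integrate measured pack current to track state of charge.
    samples: (current_a, dt_s) pairs; positive current means discharge.
    Toy sketch: a real BMS also compensates for sensor offset and drift."""
    charge_ah = soc_init / 100.0 * capacity_ah
    for current_a, dt_s in samples:
        charge_ah -= current_a * dt_s / 3600.0   # Ah removed in this interval
    return max(0.0, min(100.0, charge_ah / capacity_ah * 100.0))

# 50 Ah pack starting at 80 %, discharging 10 A for one hour:
soc = coulomb_count(80.0, 50.0, [(10.0, 3600)])
print(round(soc, 1))  # 60.0
```

One hour at 10 A removes 10 Ah, i.e. 20 % of the 50 Ah pack, taking the SOC from 80 % to 60 %.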

State-of-Health (SOH) Determination: The capacity of the battery depends not only on its voltage and current profile but also on its age and operating temperature. The SOH measurement tells us about the age and expected life cycle of the battery based on its usage history. This way we can know how much the mileage (distance covered on a full charge) of the EV reduces as the battery ages, and also when the battery pack should be replaced. The SOH should also be calculated and tracked by the BMS.
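One very simple way to express SOH is the ratio of the measured full-charge capacity to the rated capacity when new. This is a deliberately simplified sketch with assumed numbers; production BMSes use much richer models that also weigh cycle count and temperature history.

```python
# Hedged sketch of a capacity-ratio SOH estimate (assumed values).

RATED_CAPACITY_AH = 50.0  # assumed nameplate capacity of a new pack

def state_of_health(measured_full_capacity_ah):
    """SOH in percent: how much of the original capacity remains."""
    return 100.0 * measured_full_capacity_ah / RATED_CAPACITY_AH

# A pack that now holds only 42.5 Ah has lost 15% of its capacity
print(state_of_health(42.5))  # 85.0
```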

Cell Balancing: Another vital function of a BMS is to maintain cell balance. For example, in a pack of four cells connected in series, the voltages of all four cells should always be equal. If one cell sits at a lower or higher voltage than the others, it will affect the entire pack. Say one cell is at 3.5V while the other three are at 4V: during charging, those three cells will reach 4.2V while the weak one has only reached 3.7V. Similarly, that cell will be the first to discharge to 3V, before the other three. Because of this single cell, the other cells in the pack cannot be used to their full potential, compromising efficiency.

To deal with this problem the BMS has to implement something called cell balancing. There are many cell balancing techniques, but the commonly used ones are active and passive balancing. In passive balancing, cells with excess voltage are force-discharged through a load such as a resistor until they reach the voltage of the other cells. In active balancing, the stronger cells are used to charge the weaker cells to equalize their potentials.
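The passive-balancing decision can be sketched as follows: bleed any cell that is more than a small tolerance above the weakest cell. The tolerance and helper name are assumptions for illustration.

```python
# Sketch of a passive-balancing decision (assumed tolerance value).

BALANCE_TOLERANCE = 0.01  # volts; bleed cells this far above the minimum

def cells_to_bleed(cell_voltages):
    """Return indices of cells whose bleed resistor should be switched on."""
    lowest = min(cell_voltages)
    return [i for i, v in enumerate(cell_voltages)
            if v - lowest > BALANCE_TOLERANCE]

# The weak 3.5 V cell from the example above stays idle; the 4.0 V cells bleed
print(cells_to_bleed([4.0, 3.5, 4.0, 4.0]))  # [0, 2, 3]
```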

Thermal Control: The life and efficiency of a lithium battery pack greatly depend on the operating temperature. The battery tends to discharge faster in hot climates than at normal room temperature, and drawing high current further increases the temperature. This calls for a thermal management system (mostly oil-based) in a battery pack. This system should not only be able to decrease the temperature but also be able to increase it in cold climates if needed. The BMS is responsible for measuring the individual cell temperatures and controlling the thermal system accordingly to maintain the overall temperature of the battery pack.

Powered from the Battery Itself: The only power source available in an EV is the battery itself, so a BMS should be designed to be powered by the same battery it is supposed to protect and maintain. This might sound simple, but it increases the difficulty of designing the BMS.

Low Idle Power: A BMS should be active and running whether the car is driving, charging or idle. This requires the BMS circuit to be powered continuously, so it is mandatory that the BMS consume very little power to avoid draining the battery. When an EV is left uncharged for weeks or months, the BMS and other circuitry tend to drain the battery by themselves, and the vehicle eventually needs to be jump-started or charged before its next use. This problem remains common even with popular cars like Teslas.

Galvanic Isolation: The BMS acts as a bridge between the battery pack and the ECU of the EV. All the information collected by the BMS has to be sent to the ECU to be displayed on the instrument cluster or the dashboard, so the BMS and the ECU communicate continuously, mostly through a standard protocol such as CAN or the LIN bus. The BMS design should be capable of providing galvanic isolation between the battery pack and the ECU.

Data Logging: It is important for the BMS to have a large memory bank, since it has to store a lot of data. Values like the State-of-Health (SOH) can be calculated only if the charging history of the battery is known, so the BMS has to keep track of the charge cycles and charge times of the battery pack from the date of installation, and retrieve this data when required. This also aids engineers in providing after-sales service or in analyzing a problem with the EV.

Processing Speed: The BMS of an EV has to do a lot of number crunching to calculate values such as the SOC and SOH. There are many algorithms for this, and some even use machine learning to get the task done, which makes the BMS a processing-hungry device. Apart from this, it also has to measure the cell voltage across hundreds of cells and notice subtle changes almost immediately.

Building charging infrastructure:

The big challenge is of course charging infrastructure, which will need to be combined with existing refuelling stations and at alternative locations closer to homes. According to Aryan, improving battery swapping stations will eliminate the wait time for charging, make better use of land, reduce the size of batteries in vehicles and increase the available range.

Further, the country’s charging infrastructure will need to be standardized. EV charging station vendors are currently perplexed about which standard should be adopted for fast charging.

Add-on:

You can watch these videos for visualisation:

1. Learn Engineering

2. Hybrid EV

3. BMW i3

4. Lamborghini Terzo Millennio (YouTube)


Digital Image Processing

Reading Time: 6 minutes

In the very first Wisdom Week – LUMIÈRES conducted by CEV, Dr. Jignesh N Sarvaiya of the Electronics Engineering Department gave the students some really interesting insights into Digital Image Processing. Here is a brief summary of the topics he covered.

What is a digital image?

A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels. Pixel values typically represent gray levels, colours, heights, opacities etc. Digitization implies that a digital image is an approximation of a real scene. Common image formats include black and white images, grayscale images and RGB images.

What is Digital Image Processing (DIP)?

Digital Image Processing means processing a digital image by means of a digital computer. It uses computer algorithms in order to get an enhanced image or to extract some useful information.
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes, which are explained below.
Low Level Process: where the input as well as the output is an image. Examples include noise removal and image sharpening.
Mid Level Process: where the input is an image and the output is an attribute. Examples include object recognition and segmentation.
High Level Process: where the input is an attribute and the output is understanding. Examples include scene understanding and autonomous navigation.
Representing Digital Images
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.
A digital image can be represented as an M × N numerical array. The discrete intensity interval is [0, L-1], where L = 2^k and k is the number of bits per pixel.
The number of bits (b) required to store an M × N digitized image is given by b = M × N × k.
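The storage formula above can be checked with a quick worked example (the image dimensions chosen here are just an illustration):

```python
# Worked example of b = M x N x k: bits needed for a digitized image.

def image_storage_bits(M, N, k):
    """Bits required for an M x N image with 2**k intensity levels."""
    return M * N * k

# A 1024 x 1024 image with 256 gray levels (k = 8):
bits = image_storage_bits(1024, 1024, 8)
print(bits)        # 8388608 bits
print(bits // 8)   # 1048576 bytes, i.e. 1 MB
```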

Why do we need DIP?

Image processing is a subclass of signal processing concerned specifically with pictures. It improves image quality for human perception and/or computer interpretation.
It is motivated by major applications such as the improvement of pictorial information for human perception, image processing for autonomous machine applications, and efficient storage and transmission.
DIP employs methods capable of enhancing information for human interpretation and analysis, such as noise filtering, content enhancement, contrast enhancement, deblurring and remote sensing.

Fields Using DIP

    • Radiation from the electromagnetic spectrum
    • Acoustic
    • Ultrasonic
    • Electronic in the form of electron beams used in electron microscopy
    • Computer synthetic images used for modelling and visualisation


DIP in Medicine

Medical imaging is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues.
For example, we can take an MRI scan of a canine heart and find boundaries between different types of tissues. We can use images whose gray levels represent tissue density and apply a suitable filter to highlight the edges.

Key Stages in DIP

Let us understand these stages one by one.

  1. Image Acquisition: An image is captured by a sensor such as a monochrome or color camera and digitized. If the output of the sensor is not in digital form, it is digitized with an analog-to-digital converter. A camera contains two parts: a lens, which collects the appropriate radiation and forms a real image of the object, and a semiconductor diode, which converts the irradiance of the image into an electrical signal. A frame grabber provides the circuits needed to digitize the electrical signal from the imaging sensor into a computer’s memory.
  2. Image Enhancement: It is used to bring out obscured details or highlight features of interest in an image. It is commonly used to improve quality and remove noise from images.
  3. Image Restoration: It is the operation of taking a corrupt/noisy image and estimating the clean, original image. Corruption may come in many forms, such as motion blur, noise and camera misfocus.
  4. Morphological Processing: Morphological operations apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors.
  5. Segmentation: It is the process of partitioning a digital image into multiple segments to simplify and change the representation of an image into something that is more meaningful and easier to analyze.
  6. Object Recognition: Object recognition is a technique for identifying objects in digital images. It is a key output of deep learning and machine learning algorithms.
  7. Representation and Description: After an image is segmented into regions, the resulting aggregate of segmented pixels is represented and described for further computer processing. Representing a region involves two choices: in terms of its external characteristics (its boundary) or in terms of its internal characteristics (the pixels comprising the region).
  8. Image Compression: It is applied to digital images to reduce the cost of storing or transmitting them.
  9. Colour Image Processing: A digital color image is a digital image that includes color information for each pixel. The characteristics of a color image are distinguished by its brightness and saturation.
  10. Knowledge Base: Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.
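As an illustration of the segmentation stage above, the simplest possible approach is thresholding: every pixel above a chosen intensity is labeled foreground, the rest background. A minimal sketch in pure Python (the tiny image and threshold are made-up values):

```python
# Minimal sketch of threshold segmentation: split a small grayscale
# image (values 0-255) into foreground (1) and background (0).

def threshold_segment(image, threshold):
    """Label each pixel 1 (foreground) or 0 (background)."""
    return [[1 if pixel >= threshold else 0 for pixel in row]
            for row in image]

image = [
    [10,  12, 200, 210],
    [11, 180, 220,  13],
    [ 9,  10,  12,  11],
]
print(threshold_segment(image, 128))
# [[0, 0, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
```

Real segmentation methods (adaptive thresholding, region growing, clustering) build on this same idea of grouping pixels by a similarity criterion.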

Types of Digital Images

  • Intensity image or monochrome image: Each pixel corresponds to light intensity normally represented in gray scale.
  • Color image or RGB image: Each pixel contains a vector representing red, green and blue components.
  • Binary image or black and white image: Each pixel contains one bit, 1 represents white and 0 represents black.
  • Index image: Each pixel contains an index number pointing to a color in a color table.

Image Resolution

Resolution refers to the number of pixels in an image. The amount of resolution required depends on the amount of detail we are interested in. We will now look at the spatial and intensity-level resolution of a digital image.
Spatial resolution: It is a measure of the smallest discernible detail in an image. Vision specialists state it with dots (pixels) per unit distance, graphic designers state it with dots per inch (dpi).
Intensity Level Resolution: It refers to the number of intensity levels used to represent the image. The more intensity levels used, the finer the level of detail discernable in an image. Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.
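The effect of intensity-level resolution can be sketched by re-quantizing an 8-bit pixel (256 levels) down to k bits: the fewer the levels, the coarser the discernible detail. The pixel values below are arbitrary illustrations.

```python
# Sketch of intensity-level resolution: re-quantize 8-bit pixel
# values (0-255) down to 2**k levels, mapped back onto a 0-255 scale.

def requantize(pixel, k):
    """Snap an 8-bit pixel value to the nearest lower of 2**k levels."""
    step = 256 // (2 ** k)
    return (pixel // step) * step

# With k = 1 only two levels survive; with k = 4 there are 16 levels
print(requantize(200, 1))  # 128
print(requantize(200, 4))  # 192
```

Applying this to every pixel of an image with small k produces the banding ("false contouring") effect associated with too few intensity levels.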

Computer Vision: Some Applications

  • Optical character recognition (OCR)
  • Face Detection
  • Smile Detection
  • Vision based biometrics
  • Login without password using fingerprint scanners and face recognition systems
  • Object recognition in mobiles
  • Sports
  • Smart Cars
  • Panoramic Mosaics
  • Vision in space

Hope you got some insights into digital image processing and computer vision. Thanks for reading!

CEV - Handout