The following document records the working and insights of the R&D department of an electrical instrumentation manufacturing company, born and brought up in India in the early 1980s, which eventually extended its roots to 70+ countries and now competes sustainably in the European, Middle Eastern, Russian, South-East Asian, North American, Latin American and, of course, Indian markets.
From advanced Multifunction Meters to legacy Analog Panel Meters, from handheld Multimeters to patented Clamp Meters, from Digital Panel Meters to Temperature Controllers, from 10 kV Digital Insulation Testers to 30 kW Solar Inverters, from Current Transformers to Genset Controllers, from Power Factor Controllers to Power Quality Analyzers, from Battery Chargers to Transducers. From making best-selling products to white-labeling for German, American, Polish and UK tech giants. From being a major supplier of measuring instruments to BHEL, the Railways, NTPC, and big and small manufacturing facilities across India, to sending its devices up in SpaceX rockets. This is not the description of a company located in the tech-savvy Silicon Valley of the world's most advanced nation. This is the description of just one of many such growing companies in the far, obscure industrial regions of our Indian subcontinent.
Purpose of this account:
To introduce and highlight the major working, thinking and organizing methods of the world that awaits the footsteps of hopeful graduates as they step out of the relatively cozy boundaries of their college campuses.
To produce a testimony to the fact that in exactly the same environment, with exactly the same people, backed by exactly the same education system, with the same so-called incompetent Indian working class, a company not only leads a product-based market but also beats its so-called advanced European counterparts, and to bring into collective consciousness descriptions that seriously challenge the conventional assumption of an ailing Indian manufacturing industry.
To reinforce and bear witness to the fact that the truths and advice we all hear from the people around us are not mere variations of pressure in the air; if followed in true spirit, they literally create magic and are destined to bring one to a point of breathtaking, heart-pounding and soul-touching experiences.
Work on Solutions Not on Problems
The key spirit of professional execution is a logical, optimistic, solution-finding approach. The problems in front of all of us are equally compelling and evident, no doubt about that: resources are limited, time is short, skills are moderate, support is absent, and so on. But the point is that the R&D mindset will never accept them. If resources are not there, let's check out the savings or a loan; if time is not there, let's think about multiplexing; if skill is not there, let's talk to an expert and reach out for help; if support is not there, let's start reading ourselves. With optimism you ask: what exactly is the problem, and what needs to be done to counter it? You present yourself with options, select the one with the maximum logical connections, and go do it. If it fails, with the same optimism you ask that same question again. If you take every decision with logical, grounded thinking, you almost always land near the solution, and in that lies the drive for the next try.
For example, in our setup, even on just one section of a product (say, the LCD), as many as 24 revisions are made before you uncover the design that has the best readability with the maximum features, given the space limitations of the mechanical housing; and even at that point the owner of the design will not say no to a 25th revision if it is better than the 24th. The catch is that when someone starts, they can simply say no: it is not possible to accommodate all these things on such a small screen; either readability or extra features can be provided. Logically that is correct, until someone comes along with an optimistic, solution-finding approach and says: let's first accommodate the unavoidable items, then let's try some alignments, some tilts, some symbols, some overlapping.
A striking example of human genius: the screen is smaller than the little finger of a five-year-old, yet it is capable of displaying a great deal of data.
Doing Detailed and Exhaustive Documentation
It is a well-accepted and proven fact that if we work well with documents at the office, we are destined to have a peaceful life at home, as we do not have to remember every odd piece of information. You have flawless access to a kind of time machine: a window to look into your past work and trace any spurious design back to its origin quickly and with far less frustration. Without documentation, a situation that appears perfectly under control can at any point turn into a knife-in-the-windpipe kind of jacked-up mess. So maintaining organized folders, and Read_Me files with timestamps and quick notes, is indispensable.
Organization of Big and Small Things
Organization of assets, and swift, flawless access to our resources, always helps us do the mundane things in a highly efficient manner. Think about it: you are working on your dream project, and the moment you get a breakthrough idea, you spend the next two hours searching for a resistor in the mess you created, never find it, and in a snap the time you had to try out that idea is gone. Life is fast for all of us, so being ever-ready with our tools and hacks at hand is always advantageous.
From the 15K soldering gun to the one-rupee pin that you may use to temporarily replace a fallen button on your shirt, everything shall be at its designated place. With such a degree of organization of everything around us, one feels the readiness and calm to make it through all those massive problems that all of us have.
Organization of assets not just saves time, money and energy but also creates a welcoming environment to step into. And whichever phase of our lives we may be in, a high-school student, a college grad or a professional, we can never isolate our work life from the personal life that has to go alongside it. One may fall ill, have unsettled debates with parents, have problems with food, water or housing, discomfort with neighbors, heavy traffic, over-chilled office spaces, and so on; all that nonsense that always plagues us is anyway an inseparable part of life. What walks you through it is the highest degree of organization of small things, big assets and, of course, the thoughts in your head.
Choice is at Last Always Ours!
There comes a time when we all get stuck. Some problems get resolved after a few hours of debugging, some stretch over a day, and some extend up to a working week. Rare are those problems that walk alongside you for over a month. If someone is sufficiently in tune with what is going on, then most of the time our divine intuition lets us get to the root of the issue in one or two shots.
You find that the EEPROM isn't responding; you take out the datasheet, verify the connections, check the supplies, find a dry capacitor, give it a magic touch with your gun, and boom, the EEPROM rocks.
You find that the device is not measuring current; you take out the circuit and assembly diagrams, verify the components, and find all good. You take out the DSO, plug it across the shunts and find that the resistor is burnt. Replace it with a new one and, boom, that's fixed.
So every time you take the help of logical reasoning: what has to happen to make that happen? That pretty much shows the light: eliminating, one by one, the most obvious reasons for the problem. This doesn't take courage, but the fun starts to fade as we run out of logical possibilities. It is from here that the test of gut begins: when all logical traces have been checked and everything is just as it is expected to be, except the final output.
In those moments of defeat and dead ends, one gets subjected to an entirely new dimension of thinking, which has a seriously humbling effect on a professional's character. When you look back at those times of intense desperation, applying your most forceful efforts and still not hitting the thing, the only thing that comes from within is a great calm and respect for the nature of reality, for being whatever it is.
How would you handle a situation in which you accidentally lock up a plug-in slot on a 5-lakh, high-priority, high-use piece of equipment?
How would you handle the situation in which, after months of work, you are just about to hand over a product to production, and the QA team suddenly reports the most dreaded failure of your product, one expected to drive a long process of iterative tuning?
How would you handle the situation in which you checked, double-checked and triple-checked, and still an error made it into your product's datasheet?
These kinds of situations speed up the blood in your veins, set your head ringing, deal an absolute blow to your spirit, and whatnot. But even in that chaos, things really move based on the choices we make. One can accept the truth as it is, choose to ask what needs to be done, and just take that one next step to address it; or one can accept feeling desolated, beaten and slapped by life like anything.
Choice is ours!
Try out these fundamental methods of organization, thinking and working, and be astounded by their power.
The sudden adoption of Western-inspired course structures in the Indian education system has opened up a humongous range of possibilities for young graduates. A few students find this ideal for their journey of exploration, whereas many struggle to choose what to pick from such a large plate of options. The student needs to anticipate the common and advanced skills in the field of their liking. Getting the intuition behind the theory, equipping oneself with mathematical tools and methods, getting comfortable with open-source environments, getting one's hands fluent in hardware handling, and the ability to document and work in an organized, structured manner: all these skills prove to be an asset for every team member during product development.
The IP rights are preserved; the names of the companies and of the writer remain anonymous.
In the summer of 2019, CEV Aantarak began studying blackouts, namely the Indian blackout of 2012 and the Ukrainian blackout of 2015. At that time we didn't get into too many technical details, rather just grasping the periphery of the events by studying the reports of CEA, POSOCO and other concerned authorities, until one of our team members, Anshuman Singh, secured a research internship at IIT KGP to study the exact phenomenon that triggered the largest blackout in history, the 2012 one. He carried out his preliminary work and finally put it up in the blog Fault Analysis in Power Systems. The approach that he and his colleagues used to solve the problem was indeed a decent one; however, due to the inherent glitches in that particular protection philosophy itself, it didn't fix the problem completely. And finally, now in 2021, we have moved one more step ahead by studying the technology that addresses that old doomed problem, "zone 3 maloperation of the distance relay due to load encroachment", and, more importantly, "the drawbacks of the conventional SCADA system".
Recap: What was Zone 3 Maloperation of Distance Relay?
When any kind of fault occurs in any component of a power system, what basically happens is that a high-potential point gets connected to a low-potential point (typically ground) via a very small resistance path, leading to the flow of dangerously high current by virtue of Ohm's law, and thereby dissipating great thermal energy, as indicated by Joule's law, P = I^2*R.
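To put rough numbers on this, here is a small back-of-envelope sketch; the 11 kV level and the 2-ohm fault path are purely illustrative assumptions, not data from any real event.

```python
# Illustrative numbers only: an 11 kV point faulted to ground through a
# small (2 ohm) fault-path resistance.
V = 11e3          # voltage at the fault point, volts
R_fault = 2.0     # fault-path resistance, ohms

I_fault = V / R_fault             # Ohm's law: I = V / R
P_heat = I_fault**2 * R_fault     # Joule's law: P = I^2 * R

print(I_fault)    # 5500.0 A: dangerously high current
print(P_heat)     # 60500000.0 W (60.5 MW) dissipated as heat
```

A normal feeder at this level would carry a few hundred amperes; the fault pushes it more than an order of magnitude higher, which is exactly why the faulted point must be isolated fast.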
All components, especially high-voltage systems, must be protected against this possibility. Transmission lines, obviously exposed to the external environment, are the most prone to faults.
Relaying is the technical name for the arrangement that protects against the destructive effects of faults. Based on economy and other factors like accuracy and speed, various types of relaying schemes are employed.
Recall the consequences of a fault:
High current
Small impedance (resistance)
Based on these two criteria we have the overcurrent relaying scheme and the distance relaying scheme, respectively. So when the current goes beyond a certain threshold, or when the impedance falls below a certain threshold, the corresponding scheme issues a trip command to the circuit breakers to open up and isolate the faulted point from the healthy system.
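The two trip criteria can be sketched as toy decision functions; every threshold and measurement value here is hypothetical, chosen only to illustrate the logic, not taken from any real relay setting.

```python
def overcurrent_trip(i_measured, i_threshold):
    """Overcurrent relaying: trip when the current rises above a set threshold."""
    return i_measured > i_threshold

def distance_trip(v_measured, i_measured, z_reach):
    """Distance relaying: trip when the apparent impedance V/I falls below the set reach."""
    return (v_measured / i_measured) < z_reach

# Healthy line (hypothetical 11 kV feeder, 400 A of load): neither scheme trips
assert not overcurrent_trip(400.0, 1000.0)
assert not distance_trip(11e3, 400.0, 10.0)    # Z = 27.5 ohm, above the 10 ohm reach

# Faulted line (current jumps to 5500 A): both schemes trip
assert overcurrent_trip(5500.0, 1000.0)
assert distance_trip(11e3, 5500.0, 10.0)       # Z = 2 ohm, below the 10 ohm reach
```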
For strategically significant lines distance relay is technically superior to overcurrent relay.
A distance relay works by categorizing its area of operation into three zones. This is done in order to provide backup protection by introducing increasing time delays for successive zones.
However, the distance relay has its own limitations.
The most prominent of them is maloperation under heavy load conditions.
The relay misidentifies a fault when the line is heavily loaded, and, as Anshuman explained, losing a line when it is heavily loaded is seriously fatal (hint: it leads to cascaded tripping). In simple language, the distance relay works on the principle of sensing impedance and operating when the impedance falls below a threshold. Increased loading also manifests as decreased system impedance (analogy: the smaller the resistance, the more the power dissipation for a given voltage level), thus causing the relay to trip the CBs.
This is the zone 3 maloperation of the distance relay due to load encroachment.
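A rough numeric sketch of load encroachment, assuming a hypothetical 400 kV line and a 120-ohm zone-3 reach, and approximating the impedance magnitude seen by the relay as V^2/S (ignoring power factor and line angle):

```python
def apparent_impedance(v_line_kv, load_mva):
    """Rough impedance magnitude seen by the relay: Z ~ V^2 / S for a balanced load."""
    return (v_line_kv * 1e3) ** 2 / (load_mva * 1e6)

Z3_REACH = 120.0    # hypothetical zone-3 reach setting, ohms

for load_mva in (400.0, 800.0, 1500.0):      # progressively heavier loading
    z = apparent_impedance(400.0, load_mva)  # hypothetical 400 kV line
    print(load_mva, round(z, 1), z < Z3_REACH)
# Only the heaviest loading pulls Z below the 120-ohm reach: the relay then
# sees heavy load exactly as it would see a distant fault.
```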
Anshuman's Solution in Short, and the Problem with It
The distance relay, unfortunately, is not blessed by its masters, i.e., the EEs, with the intelligence to distinguish a fall in impedance due to increased loading from one due to a genuine fault. The relay is more like a pharmacist who gives paracetamol to anyone with a fever.
Anshuman and his team demonstrated a procedure which, though certainly a viable, economical method to avert an impending blackout, is not an all-in-one, consumer-friendly fix.
It directly addresses the cause of the drop in impedance, i.e., the increasing active power consumption. So the idea is to drop some quantum of load off the grid to stop the impedance from falling further.
This leads to an implementation question.
There are hundreds, if not thousands, of buses connected at a transmission line's end. So at which bus shall the load shedding be performed, in order to achieve a certain increment in impedance for the minimum amount of load shed, while also ensuring that we don't push the buses into voltage instability?
However, the issue that remains unresolved by this approach is quite obvious.
The mathematical answers we get from the algorithms may not be practically feasible. That is, the approach does offer a method to distinguish between the VQ sensitivities of the buses, but it doesn't take into account the criticality of the buses, e.g., whether a hospital or a night irrigation facility is connected.
Apart from this zone 3 maloperation problem, we have another setback that significantly threatens the security of the power system in general: the conventional SCADA (Supervisory Control and Data Acquisition) system, which, quite ironically, is deployed to provide control over large grid operations.
The Inherent Problems of conventional SCADA systems
No measurement of voltage and current phase angles: this problem can be better understood in terms of another question.
What is the phase of this signal?
A trash question: phase is a relative quantity, and thus we need to define a reference first.
Undoubtedly, the measurement of the angles of voltage and current phasors in a power system, which rotate at a rate close to 50 Hz, or about 314.16 rad/sec, requires a reference. Considering the vastness of the landscape over which the power system is spread, it becomes a technically challenging task to provide the same reference at all locations. This makes the angular separation between bus voltages unavailable and limits the operator's ability to feel the true nerves of the system (i.e., its transient stability).
Time skew between measurements: RMS voltage measurements made using SCADA don't even have a common time reference, so one has no means of knowing whether the incoming data were measured at the same instant or not.
Low update rate, i.e., large scan cycle time: with this methodology it takes anywhere from a few seconds to a few minutes to get new values of the variables, so the operator lags the system by that much, hence no real-time system awareness. It is exactly like a Mars mission, where you get to know about the touchdown 12 minutes later, owing to the finite speed of light and the delays introduced by the communication equipment; the only difference is that power system engineers have far wider options to trigger preventive measures and avert a catastrophe, if only they get the system parameters on time.
Stringent requirements on control center computational capabilities: since the data streams have so many uncertainties, extracting the useful data and figuring out the true condition of the power system poses a challenging task for the computers. All these problems are more severe and serious than they sound. The North American blackout of 2003 and the European blackout of 2003 were results of the foggy image that SCADA presented to the control centers. The investigative task force committees independently recommended the use of synchrophasor technology for real-time monitoring of the system, which back then was used only in small numbers to store data and conduct post-event analysis.
The Synchrophasor Technology
The inability to measure phase angles, along with the time skew and the slow update rate of voltage measurements, were the prime setbacks of the SCADA system.
Synchrophasor technology comes to address those problems. This method of measurement is significantly more advanced than conventional SCADA. Synchrophasor measurements provide the following services:
Measurement of RMS bus voltages and currents, along with their phase angles with respect to a common reference signal shared by the whole power system.
No time skew: all measurements, voltage magnitudes, phase angles and frequency, are time-synchronized and even time-stamped.
High update rate: from 25 to 50 frames per second depending on the PMU device. All this gives operators wide-area situational awareness in real time and enables them to take much better decisions: shed load or generation, trip a CB, redirect line flows, add capacitor banks, etc.
Accurate measurements thus significantly lower the computational requirements on the state estimators.
The Idea of Phase Angle Measurement: Using the GPS signals
We saw that a common reference signal is inevitable for phase angle measurement.
This system however depends quite heavily on two things:
Accuracy of common reference i.e., the GPS clock
Communication systems reliability
The GPS provides one pulse per second at every location across the entire peninsula. The pulse, received simultaneously by all the measurement units, triggers them to begin their measurement with respect to an imaginary zero-phase sine wave reference.
So, a GPS receiver is required.
Making these measurements successfully places stringent requirements on the waveform being measured itself; a waveform containing harmonics will lead to significant errors.
So filtering is required.
Also, the kind of mathematical operations that have to be performed on the signal require it to be represented in a digital equivalent.
So, analog to digital converter is required.
A Fourier transform can now be carried out on the digital samples, using a commonly available, economical microprocessor, to yield the magnitude and what we may call the absolute phase angle.
So, a microprocessor is required.
The GPS signal carries an incredible amount of other useful data, including the time and date, location coordinates, etc., which can now be stamped onto the power system measurements to be sent to the control center.
So, a secure, reliable, and fast communication terminal is required.
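The phasor-extraction step of the chain above can be sketched as a single-bin DFT over a PPS-aligned sample window; the sampling rate, signal values and the zero-phase cosine reference below are illustrative assumptions, not the internals of any particular PMU.

```python
import math

def phasor(samples, fs, f0=50.0):
    """Single-bin DFT at f0 over a PPS-aligned window: returns (rms, phase_deg)
    measured against an imaginary zero-phase cosine reference starting at the pulse."""
    n = len(samples)
    re = sum(x * math.cos(2 * math.pi * f0 * k / fs) for k, x in enumerate(samples))
    im = -sum(x * math.sin(2 * math.pi * f0 * k / fs) for k, x in enumerate(samples))
    amp = 2.0 * math.hypot(re, im) / n   # peak amplitude of the 50 Hz component
    return amp / math.sqrt(2), math.degrees(math.atan2(im, re))

# Synthesize one 20 ms cycle sampled at 10 kHz: 230 V RMS, shifted by -30 degrees
fs, f0 = 10_000.0, 50.0
wave = [230 * math.sqrt(2) * math.cos(2 * math.pi * f0 * k / fs - math.radians(30))
        for k in range(200)]
rms, ang = phasor(wave, fs)
print(round(rms, 1), round(ang, 1))   # 230.0 -30.0
```

Two units running this on the same PPS-triggered window would report angles against the same reference, so subtracting their results directly yields the angular separation between their buses.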
The PMU: the Device that Executes the Idea of Phasor Measurement
The Basic Schematics:
We have extensively described and executed the first conversion stage, obtaining a 5 V peak sine wave from the 230 V mains supply, in many previous accounts.
In power systems, where voltage levels are of the order of hundreds of kV and currents of the order of kA, potential and current transformers are used to step these values down.
It is said that in analog engineering, 90% of the work is just filtering, 9% is amplification, and the remaining 1% is other nuts and bolts. That gives a clear indication of just how essential filtering is. Filtering is our first line of defense against errors.
Errors in various kinds of signals are generally identified by their characteristic frequency fingerprints. For general power systems, the signals of interest, the voltage and current signals, meander in a narrow band of 49.5 to 50.5 Hz. So a low-pass filter is appropriate to stop most of the measurement noise in the signal. Practically, a filter is implemented using active and passive components.
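As a sketch of the idea (not of any actual PMU front end), a first-order discrete low-pass filter, with a hypothetical 100 Hz cutoff, passes the 50 Hz component while knocking down high-frequency noise:

```python
import math

def lowpass(samples, fs, fc):
    """First-order IIR low-pass (a discrete RC filter): y[n] = a*x[n] + (1-a)*y[n-1]."""
    w = 2 * math.pi * fc / fs
    a = w / (w + 1.0)
    y, out = 0.0, []
    for x in samples:
        y = a * x + (1.0 - a) * y
        out.append(y)
    return out

fs = 10_000.0
t = [k / fs for k in range(2000)]
clean = [math.sin(2 * math.pi * 50 * u) for u in t]                             # 50 Hz signal
noisy = [c + 0.5 * math.sin(2 * math.pi * 3000 * u) for c, u in zip(clean, t)]  # 3 kHz noise
smooth = lowpass(noisy, fs, fc=100.0)    # cutoff comfortably above 50 Hz

print(round(max(noisy[1000:]), 2), round(max(smooth[1000:]), 2))
```

The 3 kHz component comes out attenuated by well over an order of magnitude, while the 50 Hz component passes with only mild attenuation and a small phase lag; a practical front end would use a sharper, higher-order anti-aliasing filter.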
A/D Converter and the GPS:
Here at the ADC, the core PMU function is performed. The key difference between the SCADA and PMU systems is the availability of a common time reference at all terminals. So all the ADCs, every time, begin their measurement at the same instant, and thus the information required to find the relative phase displacement between the signals is captured.
Once we have a faithful digitized replica of the analog voltages and currents, we enter a comfort zone, by virtue of the powers offered by a modern computing platform like a microcontroller. Instead of building physical circuits using passive components, we simply write down our mathematical tricks in a precise language (a programming language, as they call it), burn it onto the uC, and we are done.
Here in MATLAB, we are rescued by readily available setups for performing the DFT: there is both a Simulink block and an in-built function.
FFT function description:
"fft" is MATLAB's in-built function that generates an output vector [1 x n] of complex data points for an input discrete time-domain signal having n samples.
Equivalently, it generates n of what are called bins, each having a corresponding magnitude and phase angle value. This is essentially the frequency-domain representation of the input signal, as each bin corresponds to a particular frequency, depending on the sampling frequency and the length of the signal.
Now, the magnitude and phase angle of a particular bin are related to the actual magnitude and phase angle of the corresponding frequency component in a defined way: for an n-sample signal, bin k corresponds to the frequency k*fs/n, its actual amplitude is recovered as 2*|X[k]|/n (for k other than the DC bin), and its phase is simply the angle of the complex bin value X[k].
The point to be noted here is that we were working in Simulink till now, but to apply the FFT we typically write a script in an m-file. This is one of the greatest advantages of the MATLAB platform. All the stuff we do in simulation is to be implemented practically, so each component has a corresponding hardware counterpart: the CT and PT are electrical systems made of copper and iron (loosely speaking), the ADC and GPS receiver are implemented by dedicated integrated circuits, and for signal processing and data visualization we use microcontrollers, operated by the code burnt onto them. The former is executed in Simulink, and the latter is mimicked, very conveniently, by the m-file.
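The bin-to-frequency mapping and the amplitude recovery can be checked numerically; the following pure-Python sketch uses a direct DFT in place of MATLAB's fft, with an assumed 1 kHz sampling rate and a 3 V, 50 Hz test tone:

```python
import math

def dft(x):
    """Direct DFT: returns the list of complex bin values X[0..N-1]."""
    n = len(x)
    return [sum(x[j] * complex(math.cos(2 * math.pi * k * j / n),
                               -math.sin(2 * math.pi * k * j / n))
                for j in range(n)) for k in range(n)]

fs, n = 1000.0, 100                 # 100 samples at 1 kHz -> bin spacing fs/n = 10 Hz
f0, amp, phi = 50.0, 3.0, math.radians(40)
x = [amp * math.cos(2 * math.pi * f0 * j / fs + phi) for j in range(n)]

X = dft(x)
k = int(f0 * n / fs)                # 50 Hz falls exactly in bin 5
print(round(2 * abs(X[k]) / n, 3))                               # 3.0 (amplitude)
print(round(math.degrees(math.atan2(X[k].imag, X[k].real)), 1))  # 40.0 (phase, deg)
```

Because the capture here spans an exactly integral number of cycles, the tone lands entirely in one bin and the 2|X[k]|/n scaling recovers the amplitude exactly.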
The block used to import data is “To workspace”.
Windowing and zero padding: when we take samples of the voltage and current waveforms using the ADC, and the number of captured waves is not integral, we get what is called spread of the frequency spectrum, which can be seen as side lobes around the central lobe. This leads to a decreased measured magnitude of the fundamental, i.e., an error in the measurement. We could manage to capture an integral number of cycles by fixing the time for which the ADC collects samples of a 50 Hz sinusoidal waveform; however, in the practical world the frequency never settles at 50 Hz and tends to meander around it (in the range 49.5 to 50.5 Hz), as a result of imbalance between instantaneous real power generation and consumption. This in turn causes a non-integral number of waves to be captured. To deal with this, a typical Hanning window is applied to the ADC's digital output signal to suppress the trailing edges of the signal.
%% PERFORMING FFT on Bus 1
fs = 100000;                   % Sample frequency
v1r = out.v1r;                 % Phase-R voltage of Bus 1 ("To Workspace" export; variable name assumed)
% Pre-signal conditioning: Hanning window
% to improve the FFT accuracy for non-integral waves
v1r = v1r.*hanning(length(v1r))';
V1R = [v1r zeros(1, 10000)];   % Zero padding
% Performing FFT
V1R = fft(V1R);
% Obtaining the magnitude and phase values for Bus 1
V1R_mag = abs(V1R);
V1R_phase = angle(V1R);
Notice the absence of any side lobes in the pre-conditioned signal.
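The leakage-versus-windowing effect is easy to reproduce numerically. The sketch below (pure Python, with a direct DFT standing in for MATLAB's fft, and all signal parameters hypothetical) captures 10.5 cycles of a 50 Hz wave and compares the energy leaked into a far-away bin with and without a Hanning window:

```python
import math

def dft_bin(x, k):
    """Magnitude of DFT bin k of sequence x (direct evaluation)."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * k * j / n) for j, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * k * j / n) for j, v in enumerate(x))
    return math.hypot(re, im)

def hanning(n):
    """Symmetric Hanning (Hann) window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * j / (n - 1)) for j in range(n)]

fs, f0, n = 1000.0, 50.0, 210       # 10.5 cycles captured -> non-integral
x = [math.sin(2 * math.pi * f0 * j / fs) for j in range(n)]
xw = [v * w for v, w in zip(x, hanning(n))]

peak = round(f0 * n / fs)           # bin nearest the fundamental
far = peak + 8                      # a bin well away from the fundamental
leak_raw = dft_bin(x, far) / dft_bin(x, peak)
leak_win = dft_bin(xw, far) / dft_bin(xw, peak)
print(leak_raw, leak_win)           # the window slashes the far-bin leakage
```

The windowed capture leaks an order of magnitude less into distant bins, which is exactly the suppression of side lobes described above.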
What we have after the FFT is the magnitude and phase information of each phase at each bus. When unsymmetrical faults occur in the system, we get unsymmetrical phase voltage magnitude readings, as unsymmetrical faults lead to unsymmetrical currents and hence unsymmetrical voltage drops across the generator and transformer windings, and thus unbalanced voltages at the buses. This data cannot be used effectively to interpret the system, let alone detect the fault and trip the correct circuit breakers. C. L. Fortescue, in his ground-breaking mathematical work, showed us an effective way to deal with unsymmetrical systems: the unsymmetrical phasors can be resolved into three sets of balanced components. Based on those components, faults can be identified by their characteristic resolutions.
%% Sequence Analyzer for Bus 1
% Define alpha and alpha squared (b was left undefined in the original listing)
a = -0.5 + 0.866i;
b = a^2;
% Define sample frequency and the bin number of the fundamental
N = length(V1R);
fs = 100000;
bin_max = 10;
% Phase voltage phasor representation (z is a scaling constant defined elsewhere)
vrb_1 = 0.37792*z*V1R_mag(bin_max)*(cos(V1R_phase(bin_max)) + 1i*sin(V1R_phase(bin_max)));
vyb_1 = 0.37792*z*V1Y_mag(bin_max)*(cos(V1Y_phase(bin_max)) + 1i*sin(V1Y_phase(bin_max)));
vbb_1 = 0.37792*z*V1B_mag(bin_max)*(cos(V1B_phase(bin_max)) + 1i*sin(V1B_phase(bin_max)));
% Symmetrical (Fortescue) components
v1_pos  = 0.3333*(vrb_1 + b*vyb_1 + a*vbb_1);
v1_neg  = 0.3333*(vrb_1 + a*vyb_1 + b*vbb_1);
v1_zero = 0.3333*(vrb_1 + vyb_1 + vbb_1);
% Bus 1 voltage plotting
bin_vals = 0:N-1;
fax_Hz = bin_vals*fs/N;
N_2 = ceil(N/100);
subplot(4, 2, 1)
A = 0.37792*z*V1R_mag;
plot(fax_Hz(1:N_2), A(1:N_2))
xlabel('Frequency (Hz)')
ylabel('RMS in kV');
title('Bus 1 Phase Voltage - R Phase');
Apart from visualization of the waveforms in the time and frequency domains, we built a GUI to help see and comprehend the RMS magnitudes, phase angle information, frequency and circuit breaker status in a more easy and convenient way. The App Designer application of MATLAB is used to build the GUI in graphical mode and then automatically generate its m-file, to be embedded within the main code.
How Does It Solve the Zone 3 Maloperation?
The distance relay works on the principle of impedance measurement. For a measured impedance less than the set value, the relay issues a trip command. For zone 3, the relay maloperates because the measured impedance can fall below the threshold either due to a fault or due to overloading (technically called load encroachment). Ideally, the distance relay should operate in the first case but not the second. However, with the relay alone there is no true way to differentiate between the two; unfortunately, we had to resort to load shedding, which merely tends to keep the locus of the impedance seen by the relay from entering zone 3.
Notice that zone 3 protection is backup protection and thus operates with a time delay of about 1 second. Now, this backup protection responsibility can be handed to PMUs. Since there will always be a communication delay of the order of a few milliseconds, PMUs cannot replace the instantaneous primary protection provided by the distance relay. However, by measuring voltage magnitudes and phase angles, we can very well distinguish between a fault and overloading; this distinction is not strictly required for primary protection.
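One possible shape of such PMU-based zone-3 supervision logic is sketched below; the decision criteria and every threshold here are illustrative assumptions, not a standard or field-proven scheme.

```python
def zone3_backup(v_pos_pu, v_neg_pu, z_apparent, z_reach,
                 v_dip=0.8, unbalance=0.05):
    """Supervise a zone-3 element with PMU data: trip only when the low
    impedance coincides with fault signatures that plain load encroachment
    cannot produce (sequence unbalance, or a deep positive-sequence dip)."""
    if z_apparent >= z_reach:
        return "restrain"                 # impedance healthy, nothing to supervise
    if v_neg_pu / v_pos_pu > unbalance:
        return "trip"                     # unbalance -> unsymmetrical fault
    if v_pos_pu < v_dip:
        return "trip"                     # deep dip -> symmetrical fault
    return "block"                        # heavy load: low Z but healthy voltage

print(zone3_backup(1.00, 0.01, 150.0, 120.0))   # restrain (healthy system)
print(zone3_backup(0.97, 0.01, 90.0, 120.0))    # block (load encroachment)
print(zone3_backup(0.55, 0.30, 15.0, 120.0))    # trip (genuine fault)
```

The key point is the middle case: the impedance has entered the zone-3 reach, but the balanced, healthy voltage tells us it is load, not a fault, so the backup trip is blocked.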
The Backup Protection by PMU: A two bus testbed
A simulation of a three-phase bolted fault at bus 2.
The currents are initially unbalanced; depending on the instant of the fault, a particular phase carries the highest peak. As expected, the current increases due to the fault, and since the fault is symmetric, the fault current settles to a balanced steady state.
The frequency-domain information of the currents and voltages from the PMU shows the presence of frequencies other than the fundamental 50 Hz. The time-domain representation accurately captures the R-phase fault current and voltage.
Also notice the presence of significant magnitude of negative and zero sequence voltages and currents, giving a reliable indication of the faulted state of the system.
This blog ventured to show that PMU data can be used very effectively to differentiate a faulted condition from a healthy or heavily loaded system. Unlike distance relay protection, it provides reliable backup protection that is resilient to load encroachment. And since PMU data incurs a delay of a few milliseconds due to communication, it cannot be utilized for primary protection.
Quite evidently, all the usual SCADA problems are effectively handled by PMUs. With phase angle data available, the angular separation between the voltages of different buses gives much better visibility into the true state of the system.
The applications of synchronized measurements are numerous and tremendous in scope. The conventional ways of doing things like fault analysis, tripping event analysis, state estimation, grid monitoring, black start, etc., which were barely and insufficiently carried out by the SCADA system, can now be done easily and accurately using synchronized measurement data. It should further be noted that, some 30 years after their inception and now in an advanced stage of development, PMUs are being deployed for modern applications like renewable integration, voltage instability problems, and highly complex grid monitoring and control.
References:
1. Fault Analysis and Related Technical Problems in Power Systems, Anshuman Singh Jhala
2. Power System Backup Protection in Smart Grid, S. U. Karpe and Prof. M. N. Kalgunde
3. Synchronized Phasor Measurements and Their Applications, A. G. Phadke and J. S. Thorp
4. Synchrophasor Initiative in India, June 2012, POSOCO, India
5. Novel Usage of Synchrophasors for System Improvement, POSOCO, New Delhi, India
CEV had its first practical hands-on with MOSFETs when we tried to implement a primitive inverter circuit. The device used was the IRF540. Back then we didn't find it so fascinating, considering it just one chisel in our toolbox, like resistors, capacitors, inductors, batteries, diodes, etc. Only as we moved forward in our lives did we realize how one single device characteristic, if carefully manipulated, can help us build so many useful things.
If we look at the statistics, the MOSFET is the most widely manufactured electronic device or component in the entire 200 years of human technical endeavour. The number in fact overshadows all the other devices lined up together. Wikipedia says the total number of MOSFETs manufactured since its invention is of the order of 10^22. This is just a number; we don't have anything familiar enough to correlate it with and help understand how big it really is.
Systems like an ordinary radio contain on the order of thousands of MOSFETs, to provide enough gain to EM waves to finally yield audible audio signals; a smartphone on average contains on the order of 10 million; an Intel Core i5 processor contains on the order of 1.5 billion of them; and the power supplies of the electronic gadgets we use utilize another variety, called power MOSFETs. The circuitry (power and control) used in handheld devices like trimmers, hair-dryers, toasters, automatic washing machines, efficient motor assemblies, cars, airplanes, satellites, space shuttles, particle accelerators and whatnot: all of them essentially contain an insane number of MOSFETs, each operating in one of its desired regions of the operating characteristics, depending on whether it is an analog, digital or power device, very silently and calmly doing the job it is supposed to.
MOSFETs single-handedly form the backbone of entire analog and digital electronics. Yes, you heard it right, both analog and digital. They lie at the heart of almost all the basic components which are used to build higher-order circuits and devices.
Wait, wait, we promised ourselves to not take anything for granted so when we say analog and digital electronics what do we mean exactly?
Essentially, analog and digital are two ways of playing with signals (of voltage or current). Playing here might literally mean fun, like playing a song over a speaker, displaying a video on an LCD, LED or CRT, talking with loved ones over a cellular network, enjoying a live broadcast of a soccer match or Capital FM, or even something as simple as using a TV IR remote to frustratingly switch over news channels which spread crap at 9 PM. Oooooooorrrrrrr playing could also mean stakes as high as using an ECG and other biomedical sensors and instruments to save lives, sending and receiving the radio messages of pilots to ATCs, or implementing something as necessary as what we call the WWW.
It is hard to think of all of these sharing anything in common, right? But in all of these cases we are simply manipulating signals all the time, in order to somehow do what we want, using the analog ways or the digital ways, or most of the time both.
Well, it may be hard to picture what signal manipulation exactly means here, nor do we intend to talk about the grudging details, but what we want to first appreciate is the profound immensity and necessity of the things we are going to talk about.
Again, taking nothing for granted, the first question to address is: what exactly would signal manipulation be, the analog way or the digital way?
1. The core requirement of real life: the amplification of signals:
Consider all the different kinds of sensors deployed in the field to measure any physical parameter of interest: a temperature sensor in air conditioners, a metal detector at airports, a strain gauge, an antenna for radio-wave detection, a heart-beat or pulse sensor, etc. In all these cases we exploit natural phenomena to get variations of temperature, strain, EM waves or vibration converted into electrical signals (voltage or current variations). The strength of the converted electrical signal is by nature too weak for any purposeful use, like displaying the value of temperature or beats per minute on some kind of screen, playing the song received on the antenna, etc. The circuits that produce these magical outcomes can't be driven by signals of such feeble power. We need a man-made device which can significantly boost the signal power.
Graphically, amplification looks like this:
2. Filtering is another core requirement of real life:
The electrical signal at the output of any practical sensor by nature carries something called noise. This noise arises from different causes in different systems. To separate the noise from the useful signal, based on the characteristics of the system, we use a signal manipulation technique called filtering, using circuits called filters.
3. Along with these basic kinds of manipulation we have another range of signal manipulation which essentially helps us to do computation: mathematical operations like addition, subtraction, integration, etc. can be achieved using voltage dividers, RC circuits, and so on.
In these cases, we by default assumed that the signal voltage or current can take an infinite number of possible levels between any two finite levels: between 3 V and 4 V, our signal can be 3.11 V, 3.111 V, 3.1111 V, and so on.
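To make the idea of filtering concrete, here is a minimal sketch (in Python, with made-up numbers of our own, not from the text) of a first-order RC low-pass filter chewing through a noisy sensor signal; the signal frequency, noise amplitude and `tau` are illustrative assumptions:

```python
import math
import random

def rc_lowpass(samples, dt, tau):
    """Discrete first-order RC low-pass: v += (dt/(tau+dt)) * (x - v)."""
    alpha = dt / (tau + dt)
    v, out = samples[0], []
    for x in samples:
        v += alpha * (x - v)
        out.append(v)
    return out

random.seed(0)
dt = 1e-4                                                  # 0.1 ms sampling step
t = [i * dt for i in range(2000)]                          # 0.2 s of "measurement"
signal = [math.sin(2 * math.pi * 5 * ti) for ti in t]      # 5 Hz useful signal
noisy = [s + 0.5 * random.uniform(-1, 1) for s in signal]  # sensor noise added
clean = rc_lowpass(noisy, dt, tau=5e-3)                    # cutoff ~ 32 Hz

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
err_raw, err_filt = mse(noisy, signal), mse(clean, signal)
# the filtered trace tracks the slow signal far better than the raw one
```

The same difference equation is exactly what a physical resistor and capacitor compute "for free", continuously and with zero code.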
Why go digital, if we can do it all in analog?
Most of the time in the digital world we first learn how to do it, then do it, and only then understand why we did it. The digital way of doing things is especially advantageous for the computations described in (3).
Going digital means moving from signals with infinite levels to signals with no levels in between, only two levels called high and low. This doesn't make direct intuitive sense unless we study them first.
However, some obvious motivating reasons for moving to the digital way are its inherent noise immunity and simplicity.
The digital world has its own kinds of signal manipulation requirements, like inverting (NOT), ANDing (AND), ORing (OR), etc. In general, the elements which execute these are called gates.
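The noise immunity mentioned above can be demonstrated in a few lines. In this sketch (our own toy model, with an assumed 5 V supply and a 2.5 V decision threshold) an analog value degrades cumulatively through twenty noisy stages, while a digital level is "snapped" back to a clean rail at every stage:

```python
import random

VDD, VTHRESH = 5.0, 2.5          # assumed supply rail and decision threshold

def regenerate(v):
    """A gate 'snaps' its input back to a clean logic level."""
    return VDD if v > VTHRESH else 0.0

random.seed(1)
analog = digital = 3.0           # same starting value, logic-high territory
for _ in range(20):              # 20 noisy stages in cascade
    noise = random.uniform(-0.4, 0.4)
    analog += noise              # analog error accumulates stage after stage
    digital = regenerate(digital + noise)   # digital error is wiped out

analog_error = abs(analog - 3.0)            # drifts, a random walk
digital_ok = (digital == VDD)               # still a clean, unambiguous high
```

As long as the noise per stage cannot push a level across the threshold, the digital chain can be made arbitrarily long without any loss of information, which is exactly why we tolerate the apparent wastefulness of two-level signals.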
Layers upon layers upon layers…
All of this begins by looking at nature, because we are simply restricted to the things she can provide us; there is no other choice. Our role is to observe, modify and manipulate whatever she can offer to make some good use of it for ourselves.
Resistors, capacitors, inductors, batteries and semiconductor switches (diodes and transistors) form the most primitive components, the most basic building blocks. Also in this category we have devices which exploit natural phenomena like the photoelectric effect, the piezoelectric effect, etc. to make sensors like photodiodes, strain gauges, and so on.
Using these components we build slightly higher-order systems, for example a voltage divider (using a battery and resistances), a primitive filter circuit (using resistors, caps and inductors), or, most importantly for this discussion, an amplifier circuit (a resistor, a transistor and a battery).
The next order of systems comprises these little systems as basic blocks, like an operational amplifier, which uses many amplifier circuits and voltage divider bridges. The gates (NOT, NAND and NOR) are also built by twisting the same basic amplifier configuration and adding more switches. This layer also sets forth the two categories we lovingly call analog and digital electronics.
The next layer uses op-amps and gates as its building blocks. For example, in the analog world we can have a comparator, a voltage follower, an integrator, a differentiator, an oscillator, etc., and in the digital world we have sequential logic circuits like flip-flops of the D, T and JK varieties.
Things are getting interesting, right? However, still not that useful.
The next layers use these elements as building blocks. Using comparators, integrators, etc., we can now start making things like trivial voltage, current and frequency measurement units; we can have active filters, a small power supply, and so on. In the digital world the notion of time is introduced via a timing signal (the clock), which is a giant leap.
Now we can have these systems deployed as parts of even bigger layers. In the analog domain we can implement control-system feedback and jillions of other circuits packaged as integrated chips (ICs). The digital world, however, these days goes on building ever more layers of complexity: the layer of assembly languages, and then higher-level languages like C++, all take off right from here. It becomes so far-reaching that an entire branch starts up from here: CS.
Using these same blocks microprocessors are built, and computers follow somewhere as we go on and on. EEs have limits on how far they can go, so we stop here and give the lead to the Comps folks.
Personal computers and smartphones are the most popular examples of highly complex layers upon layers of analog and digital circuits which respond to applied input signals in quite a predictable way. However, the layers of complexity are so magnificent that it is hard to believe that at the core they are made of fundamental components no different from those of a small TV remote or a decent bread-baking automatic toaster. It is analogous to seeing humans and amoebae under one umbrella, both made of strikingly similar fundamental biological concepts.
One can literally draw a single line connecting these basic elements, layer by layer, to all sorts of end technologies.
Where do MOSFETs fit in all of this?
To have a more insightful view consider these examples:
MOSFETs are the fundamental elements used in amplifiers.
MOSFETs are the fundamental elements used in gates.
Amplifiers are themselves the basic building blocks of all analog systems; gates are themselves the building blocks of digital systems.
In this piece, we will see how the MOSFET is single-handedly able to take on fundamental roles in all the above-mentioned systems.
It all began with Mohamed Atalla at Bell Laboratories trying to overcome the bottlenecks of BJTs, namely the higher power dissipation due to base current and hence the low packing density, which made it impossible to build advanced circuits smaller in size.
MOSFET Physical Construction
Now, as engineers we have to be careful about the level of device detail we chase: a complete understanding would require backing up with quantum physics explanations and at least 10 years of dedicated, focused study. The key is to carefully listen to the physicists and ask only for the details which are of our interest.
As far as the device is concerned, what we as engineers need to know are the answers to the hows and the whats only, but strictly no whys.
WHAT is a MOSFET?
A MOSFET is a four-terminal semiconductor device in which the resistance between two of the terminals is determined by the magnitude of the voltage applied at the remaining two terminals. The range of variation in resistance between the two interchangeable terminals, called the source and the drain, is very large, extending from a few milliohms to hundreds of megaohms for relatively small voltage changes at the two other terminals, called the gate and the base (or substrate). For simplicity, manufacturers internally short the source and the base; it thus becomes a three-terminal device, and a voltage across gate and source changes the resistance between the source and the drain. This is not all there is to it: the variation of resistance is not simply linear, it is somewhat weirder, involving several twists and dramas of semiconductor physics.
The gate terminal is a metal plate separated from the body by an intermediate dielectric layer, SiO2.
The source and drain are two regions doped oppositely to the parent base body of the MOSFET.
HOW does it work?
At zero gate-to-source (or base) voltage, the source and drain terminals are essentially open-circuited, as two p-n junctions appear between them back to back.
For an n-channel type MOSFET:
As we begin increasing the gate voltage (positive with respect to source/base), positive charge begins to accumulate on the metal gate. The corresponding electric field penetrates through the intermediate dielectric into the p-type base region between the source and the drain terminals. The exact distribution of the field is currently beyond our strength to explain, but the effect is quite intuitive: the minority carriers (electrons) in the p-type region start accumulating just below the gate. At a certain voltage level the device develops a region so full of electrons that it acts as an n-type doped region, and so it is called the n-channel; this particular voltage is called the threshold voltage. The appearance of the n-channel effectively acts as if the source and drain were connected by a resistance. This 3-D channel's length and width are inherently fixed by the device construction, but its depth is determined by the voltage magnitude: the depth is proportional to the excess of the gate voltage above the threshold voltage. The channel indeed truly acts as a resistor: if the separation is more, the resistance is more (R proportional to length); if the width is more, the resistance is less (R inversely proportional to the area); and similarly for the depth.
Current still won't flow between the source and drain. If we now also begin increasing the drain voltage with respect to the source, the ammeter needle comes alive. Common sense says that if we go on increasing the DS voltage the current will increase linearly, as the channel is an epitome of resistance😂😂😂, but no. The channel depth is proportional to the excess voltage Vgs − Vt. As we go on increasing the drain voltage, this excess voltage, mainly responsible for the depth of the channel, stays constant at the source end but begins to drop at the drain end. At a certain point the channel shuts off at the drain end. It is natural to suspect that the current should drop to zero, but instead the current saturates to a constant value; the phenomenon is catalogued in the literature as pinch-off, and the device is said to have gone into saturation mode.
What are the operating characteristics and relevant equations?
We study the MOSFET characteristics for different values of gate voltage. Until Vgs exceeds Vt, the drain current remains zero for all Vds, as if open-circuited. For some Vgs greater than the threshold voltage, we plot Ids vs Vds: at small values of Vds the current increases almost linearly; then, due to the narrowing of the channel at the drain end with increasing Vds, the current saturates beyond the pinch-off point.
Cut-off: the drain-source path is open-circuit, Ids = 0, for all Vds whenever Vgs < Vt.
Triode: for Vds < Vgs − Vt the source-drain current is given by: Ids = K[(Vgs − Vt)Vds − Vds^2/2]. For small Vds the square term can be neglected and the response is approximately linear: Ids ≈ K(Vgs − Vt)Vds.
Saturation: for all Vds ≥ Vgs − Vt, the current saturates at a fixed value, given by substituting Vds = Vgs − Vt: Ids = (K/2)(Vgs − Vt)^2.
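The three regions can be captured in a few lines of code. A minimal sketch of the square-law model (the specific K and Vt defaults are illustrative, echoing the typical values quoted later in this article):

```python
def ids(vgs, vds, K=1e-3, vt=1.0):
    """Square-law NMOS drain current in amperes; K in A/V^2, vt in volts.

    Cut-off:    vgs <= vt        -> 0
    Triode:     vds <  vgs - vt  -> K*((vgs-vt)*vds - vds**2/2)
    Saturation: vds >= vgs - vt  -> (K/2)*(vgs-vt)**2
    """
    vov = vgs - vt                 # excess ("overdrive") voltage
    if vov <= 0:
        return 0.0
    if vds < vov:
        return K * (vov * vds - vds ** 2 / 2)   # resistor-like region
    return (K / 2) * vov ** 2                   # pinched-off, constant current

# sanity checks against the hand formulas:
# ids(3, 0.1) ≈ K*(2*0.1 - 0.005) = 0.195 mA  (small Vds, near-linear)
# ids(3, 5)   = (K/2)*2^2         = 2 mA      (saturated, Vds-independent)
```

Sweeping `vds` for a few fixed `vgs` values reproduces the familiar family of output characteristics described above.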
"What is the distribution of the electric field? Why does the device still conduct current at pinch-off? Derive the expressions." These are all extremely interesting questions to take up, but as far as engineering is concerned they won't help us design the circuit any better, so we don't mind answering them in our free time.
This is the most repeated circuit pattern of our electrical lives; we can't trace anything down to something more fundamental than this. Right here we see, for the first time, both the gate and the amplifier. Let this pattern dissolve in our blood, be imprinted in our DNA, memorized in our brains and printed on the walls of our hearts. Well, that's how fundamental it is. 😂😂😂
Before directly jumping to equations, let us first build intuition for how this circuit will respond to different applied inputs, which will allow us to flow through the equations smoothly and swiftly.
So, what we need to imagine is the response of the circuit for different applied inputs.
For some applied value of the supply voltage Vdd, we begin increasing the gate voltage slowly. As expected, until it reaches the threshold point, drain and source remain open-circuited, the current through the drain resistor is zero, and hence the output voltage equals Vdd.
As the threshold potential is reached, the device just develops the so-called n-channel. Current just begins to flow and the DS voltage thus starts dropping. Since the excess voltage is still small, the DS voltage is sufficiently large to hold the MOSFET in the saturation region.
If we increase the gate voltage further, the excess gate voltage eventually becomes too much for the DS voltage to keep the MOSFET in the saturation region. With increasing excess voltage the channel deepens, dropping the resistance, increasing the drain-to-source current and thus dropping the drain-to-source voltage; at one point the DS voltage falls below Vgs − Vt and the MOSFET enters the linear region (often called the triode region).
Notice we understood the operating characteristics in reverse order. Visualizing how the MOSFET's operating point moves along the operating characteristics will give a much better idea.
At point 2 the device just turns on, and the large value of Vdd immediately drives the MOSFET into saturation; this holds up to point 3, where the MOSFET starts entering the triode region, rapidly dropping the DS voltage, and thus the output voltage, to a very small value.
Applying KVL, we have: Vout = Vdd − Ids·Rd.
1. For region 1 to 2: Ids = 0, so Vout = Vdd.
2. For region 2 to 3: the current saturates at Ids = (K/2)(Vin − Vt)^2.
Thus, we have: Vout = Vdd − (K·Rd/2)(Vin − Vt)^2.
Parabolic drop confirmed.
3. For region 3 to 4: the current is given by the triode equation Ids = K[(Vin − Vt)Vout − Vout^2/2].
Thus, we have: Vout = Vdd − K·Rd[(Vin − Vt)Vout − Vout^2/2], a rather useless implicit relation. 😀😀😀
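That "useless" implicit relation is trivial for a computer, though. Here is a sketch that traces the whole Vout-vs-Vin transfer curve numerically by solving the KVL constraint with bisection; the K, Vt, Vdd, Rd values are our own illustrative assumptions:

```python
K, VT, VDD, RD = 1e-3, 1.0, 5.0, 10e3   # assumed example values

def ids(vgs, vds):
    """Square-law NMOS drain current (A)."""
    vov = vgs - VT
    if vov <= 0:
        return 0.0
    if vds < vov:
        return K * (vov * vds - vds ** 2 / 2)   # triode
    return (K / 2) * vov ** 2                   # saturation

def vout(vin):
    """Solve Vout = Vdd - Rd*Ids(vin, Vout) by bisection on [0, Vdd].

    f(v) = Vdd - Rd*Ids(vin, v) - v is monotonically decreasing in v,
    so the root is unique.
    """
    lo, hi = 0.0, VDD
    for _ in range(60):
        mid = (lo + hi) / 2
        if VDD - RD * ids(vin, mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# region 1-2: device off, output stuck at Vdd
# region 2-3: saturation, parabolic drop Vdd - Rd*(K/2)*(vin - VT)^2
# region 3-4: triode, output falls to a small "logic low" value
```

For these numbers, `vout(0.5)` stays at 5 V, `vout(1.5)` lands on the parabola at 3.75 V, and `vout(5.0)` collapses to roughly 0.12 V, reproducing the four labelled regions of the curve.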
MOSFETs as GATES:
We know that any kind of combinational logic can be implemented using the fundamental gates, namely NOT, NAND and NOR. How to use this circuit for a NOT operation is quite evident from the transfer curve itself.
For a range of small input voltages, the output lies in a range of high voltage levels, representing digital logic high.
For a range of high input voltages, the output drops down to a range of small voltage levels, representing digital logic low. So all we need to do is set Vdd and strictly define the input and output voltage ranges for low and high logic, and we are done: we have got an inverter (NOT).
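Once one gate family exists, everything else follows by composition. As a small illustration of that universality claim (here using NAND alone, a standard result, rather than the NOT/NAND/NOR trio the text lists), NOT, AND and OR can all be wired from one primitive:

```python
# Build NOT, AND, OR from a single universal primitive, NAND.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)              # NAND with both inputs tied together

def and_(a, b):
    return not_(nand(a, b))        # invert the NAND

def or_(a, b):
    return nand(not_(a), not_(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

# exhaustive check over every input combination
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```

In silicon the story is the same: tie the inverter's idea together with a couple of extra switches and you have NAND; from there, any combinational function.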
MOSFETs as Amplifiers
We have seen the need for a man-made device called the amplifier to obtain a crucial signal manipulation called signal amplification.
An amplifier, in the most general way, could be called a source of energy which can be controlled by some input. There may be many more ways to look at an amplifier, for example the earlier description as a transfer-function block; more specifically it fits what we call a dependent source. Before we understand what an amplifier is, let us understand what an amplifier is not. The first element to be excluded is the potential transformer: though we can have voltage amplification (step-up), the current is transformed in inverse proportion so that power remains constant. Similarly, a current transformer, a resistor divider, a boost configuration, etc., in which we have no power gain, cannot be called amplifiers. On the other hand, an appropriately biased MOSFET or BJT, op-amps, differential amps and instrumentation amps are all collectively called amplifiers, because we have a power gain at the output port with respect to the input port.
With one port as output, one as input, and the third of course as the power port, theoretically speaking we can have at most four combinations: we can have a current or voltage source at the output, and we can have voltage or current control at the input.
Any device invented for the purpose of amplification, in the past, present or future, will fall into one of these four categories.
Two-port theory becomes of immense utility here, letting us describe different amplifiers in different matrix forms: Z-parameters, Y-parameters, h-parameters and g-parameters. We are constrained not to describe the theory in full detail; however, we will build the insight and motivation to study it.
We will use the same trademark configuration to do the amplification too. Isn't this ground-breaking? We have already built the fundamental block for digital systems, and now we will again use the same circuit for amplification, which is of course an analog block.
So here it is:
Remember, we didn't talk about the region between 2 and 3 when we studied this circuit acting as an inverter; we strictly worked in the 1-2 or 3-4 regions only.
The transfer function in the 2-3 region, as previously computed, is: Vout = Vdd − (K·Rd/2)(Vin − Vt)^2.
The output voltage depends on the input voltage, but nowhere close to linearly. Remember what we have and compare it with what we wanted:
And here is the greatest revelation as the legends in this field had described for decades.
“The input signal is constrained such that the circuit approximately gives a linear response.”
And the revolutionary constraints are:
Giving a DC level shift to drive the MOSFET into the saturation region, popularly called the biasing voltage, and
if the input signal is small enough, the transfer curve is very close to a negatively sloped straight line, which is in fact linear amplification.
If we zoom in enough, here is how the amplification would look. Notice the inversion, but a good linear amplification is also achieved.
We can also show, using the transfer function, that small changes in input voltage indeed cause a proportional change in output voltage: differentiating gives dVout/dVin = −K·Rd(Vin − Vt), which is nearly constant for a small swing around a fixed bias point.
So we now comprehend the design problem of the amplifier as the selection of, and operation at, a biasing point to get the best possible linear amplification for a given gain requirement.
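The linearization is easy to verify numerically. In this sketch (K, Rd, Vdd and the bias value are our own illustrative assumptions, with the bias chosen so the device sits inside the 2-3 region), a tiny input wiggle around the bias point yields a gain that matches the derivative −K·Rd·(Vbias − Vt):

```python
K, VT, VDD, RD = 1e-3, 1.0, 5.0, 10e3    # assumed example values
VBIAS = 1.5                               # bias point; Vout = 3.75 V > Vov = 0.5 V,
                                          # so the device stays saturated

def vout_sat(vin):
    """Transfer function in the saturation (2-3) region: parabolic."""
    return VDD - RD * (K / 2) * (vin - VT) ** 2

# small-signal gain predicted by differentiating the parabola at the bias
gain_pred = -K * RD * (VBIAS - VT)        # = -5 for these numbers

dv = 1e-3                                 # a "small" 1 mV input wiggle
gain_meas = (vout_sat(VBIAS + dv) - vout_sat(VBIAS - dv)) / (2 * dv)
# gain_meas agrees with gain_pred: inverted, and linear for small swings
```

Push `dv` up toward a volt and the parabola's curvature shows up as distortion, which is exactly why the input signal must be kept small around the bias.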
And that's a wrap. From here on we go on to learn the cascading of amplifiers, as one unit is not always enough to give the desired gain; that leads us to study the effects of stray and coupling capacitances, which become especially troublesome when dealing with high-frequency signals; that in turn leads us to differential amplifiers, operational amplifiers, and, as already described, we eventually take off from here.
All of this would not be of much use unless we also consider the energy consumption. Why this becomes so important can be understood by walking through some numbers.
Consider an inverter gate built exactly as we have described.
For SMD MOSFETs of today's technology, typical values are:
K = 1 mA/V^2, Vt = 1 V; we take Vdd = 5 V (TTL logic), and let low logic at the output be defined between 0 and 0.2 V.
When the input is at the high level and the output at the low level (i.e. the NMOS is on, in triode, with Vds = 0.2 V):
Ids = K[(Vdd − Vt)·Vds − Vds^2/2] = 1 mA/V^2 × [(5 − 1)(0.2) − (0.2)^2/2] = 0.78 mA
The power consumed by the circuit is: P = Vdd × Ids = 5 V × 0.78 mA ≈ 3.9 mW
For on the order of 10 million of them: P_total ≈ 10^7 × 3.9 mW ≈ 39 kW
This very rough approximation of power consumption is not at all pleasant to see for 10 million inverters, in days when processors are reaching the range of 4-5 billion of them.
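The arithmetic is worth checking by machine. A minimal back-of-the-envelope sketch using the stated example numbers (worst case: we assume every gate simultaneously sits in the low-output, current-conducting state):

```python
# Static power of one resistor-loaded NMOS inverter holding its output low.
# Example numbers from the text: K = 1 mA/V^2, Vt = 1 V, Vdd = 5 V, Vol = 0.2 V.
K, VT, VDD, VOL = 1e-3, 1.0, 5.0, 0.2

# Input high (Vgs = Vdd), output low (Vds = VOL): NMOS in triode.
i_on = K * ((VDD - VT) * VOL - VOL ** 2 / 2)   # drain current ≈ 0.78 mA
p_gate = VDD * i_on                            # supply power  ≈ 3.9 mW/gate
p_total = 10_000_000 * p_gate                  # ≈ 39 kW for 10 million gates
```

Thirty-nine kilowatts of pure heat for a chip that does nothing but hold its outputs steady: this is the number CMOS was invented to kill.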
We would require a dedicated diesel-generator set for one 200-gram machine. Of course, we did something about it; that's why our laptops can be powered by a 60 Wh lithium battery. The solution is quite a creative one: they call it CMOS (Complementary MOS).
In order to have an incredibly high resistance when the pull-down is off and a very small resistance when it is on, a PMOS is used to replace the resistor. The PMOS transistor has exactly the same operation as the NMOS, except that it is open-circuited for a high level at the input and short-circuited for a low level at the input. In either logic state one of the two transistors is therefore off, leaving no static current path from Vdd to ground. Also, Vdd has since been reduced to 3.3 V to cut power consumption further.
We didn't learn all of this by sitting down and just glaring at MOSFETs. The entire credit for the vivid imagination and the connecting of dots goes to numerous books, lecture series, a few research papers, our beloved Wikipedia, and all the awesome discussions we had with our friends.
We are thankful to the lecture series Fundamentals of Digital and Analog Electronics, 6.002 on MIT OCW by Prof. Anant Agarwal; the two 40-lecture NPTEL series on Analog Electronics by Prof. Radhakrishnan; an introductory lecture series on semiconductor physics and devices by Prof. D. Das of IISc Bangalore; and the basic electronics course by Prof. Behzad Razavi of UCLA. This article is the result of rigorous brainstorming of the ideas, concepts and insights gained from all the above-mentioned sources, and then making our own speculations.
Our final aim in one sentence is “to make safe electrical power available to all 24*7 round the year, round the decade and so on”.
And that phrase says almost everything we require to do.
As electrical engineers, that's all we want to do in our lives, and everything for it. From now on, anything we think or do professionally is going to manifest this final aim. Have you ever come across anything holier than this?
We have very carefully phrased the paragraph to capture whole of electrical engineering in its entirety.
So it goes….
“Safe electrical power”: indicates the first necessity i.e. the safety of electrical power, which is all about operating the power system in a strict pre-defined range of parameters including active and reactive power levels, voltage, current, power factor, and distortions.
"Available to all": indicates affordability, treating electricity not as a mere commercial commodity but rather as a basic service for all. The economics of the power system is essentially the science of figuring out how much to turn the knob of which power plant.
"24*7 round the year": sets for us the reliability requirement of the power system. This includes very smartly designed protection systems which largely sit idle, just waiting for the moment they are called in.
"Decade and so on": indicates the security of the power system, the wish to keep on powering the world as long as humans exist, which requires continually looking for new sources of energy. Notice we may be interested in anything that can jiggle the electrons in the wire at 50 Hz, so solar, wind, geothermal, tidal, and even nuclear fission and fusion are all cards we keep stocking up in our free time and on weekends.
Any subject you will ever study has its application in at least one of the above-mentioned categories. Just fast-forward to how the subject will help in achieving this final aim and you will be hugely motivated and interested to take it up.
Another facet we miss is enabling ourselves with the tools of engineering, and one of them is simulation software. Simulation software has an immense capability to add fuel to the fire and give wings to our imagination. No doubt anything you could ever do with simulation software could also be done by hand on a white sheet, but the sheer advantage of vivid visualization, accuracy and validation of results, and the ease with which things can be done is truly great. On your desktop you can build anything you want:
a large power system, to visualize load flow and the system's natural frequencies (as we did in the Harmonic Resonance Study and fault analysis), using MATLAB;
a microcontroller-based system to do crazy things (as we did in the Harmonic Analyzer), using Proteus;
an analog circuit comprising the wonderful op-amps to perform any mathematical function (as we did in the power module), using MATLAB;
plots, with extreme accuracy, detail and ease, of the response of any transfer function via Bode plots, pole-zero plots and Nyquist plots (as we did in designing the buck converter), using Scilab;
and you can tweak and play with the drive system of any machine, like a PMSM, BLDC, induction motor, DC motor, etc.
One of the crucial practices in engineering is a sound appreciation of comparison between all ranges of systems and equipment.
Various system types (machines, circuit configurations, etc.) are available at our disposal; what enables us to make a good engineering decision, to go with one particular type and not another for a given application, is our ability to distinguish between all the available options.
Will you use a DC machine, a Squirrel Cage Induction Machine, or a Synchronous Motor?
Will you use a Cylindrical Rotor or a Salient Pole Rotor Synchronous generator?
Will you use a Ward Leonard Drive or a Static Ward Leonard Drive?
Will you use an HVDC line or an EHV-AC line?
Will you use a Voltage Source Inverter Drive or a Cycloconverter Drive for V/F control?
Will you use a Synchronous Condenser or a Static VAR Compensator?
Will you use a MOSFET or an IGBT?
Will you use an Overcurrent Scheme or a Differential Scheme for transformer protection?
It would take another 5000 words to carefully analyze which choice to make under which conditions, so we will leave it to you to figure out why!
Sooner or later we will be confronted by all these sorts of real-life MCQs in our careers; to make good, economic and futuristic decisions one has to be very critical-minded while studying and comparing all varieties of systems.
Another thing we want to bring to your attention is the mindset of paying attention to all the electrical engineering happening around you: noticing the voltage and power levels of various equipment and systems (traction operating at single-phase 25 kV, wattage ratings of household items), noticing design and structural details (the reason behind the shape of a three-pin plug), visualizing and analyzing the waveforms and the distribution of fields in the 3D space around street power lines, even noting which brand of EV uses which type of machine, and so on. This helps in answering a wide range of short questions asked throughout, and more importantly helps you understand and connect better while actually studying those things.
Having a technical discussion with a loving friend can immensely help in getting oneself at ease with terms and concepts which otherwise sound so technical. It is a very effective way of sharpening one's engineering accent of talking and thinking. So we are not engineers only at our work tables, in our classes and in our labs; to unleash our full potential we need to be literally obsessed with this stuff in all spheres of our lives, from personal to public!
Since we have stressed so much the enabling nature of the tools learnt in the four-year course, we must now lay down their disabling feature.
And let us illustrate this with a small, regular classroom incident:
In a second-year lecture, Prof. AKP Sir asked us to differentiate between the underground system and the OHT system. Each of us recited for him every technical detail, like less corona loss, lightning protection, fault location, etc., very technically. But all of us missed the most critical point for which some great engineer had devised the underground system: we failed to see that the OHT simply occupies more physical space than the underground system. That was the evidence that our natural intellect had been hijacked by professional knowledge.
We had acquired the technical knowledge with the wrong motive. We think it is the crucial tool that enables us to see different and otherwise difficult things, whereas the truth is that it is just an aid to our natural thinking, to understand and describe things easily. We are so trained to think in a loop that we literally miss very crucial points which, had we not been trained, we could have thought of.
So it is very important to always stay grounded in our thinking and not take too many facts for granted.
In the end, we have:
Image Courtesy: Goalcast
Engineering in the 21st century has become quite well defined; we now have a sophisticated understanding of things, unlike in the past when people considered magnetism and electricity different. Problems have become precise in their own terms; there are far fewer compelling questions of "why" than of "how". For example: how to accommodate renewables on the grid, how to solve the battery problem, how to spin motors greener and smarter, etc. Throughout the course we are presented with all the necessary tools and hacks, which are very logical and easy to understand with a little mind-force.
On the other hand, in our everyday lives, for some reason we take up the wrong fight. We are busy somehow dodging the assignments and the quizzes and so on, completely missing the true fight we are actually in, and that makes the difference between enjoyment and getting oneself literally tortured.
NOTE: All the statements made in this blog are the authors' own mere speculations and may be wrong, so active reading is greatly expected. Don't accept the statements until you yourself are sure of their validity.
The military world has a striking work culture; in fact there are many, and today we are here to reflect on one particular culture of our interest. When a group of soldiers comes back from any dangerously tiring mission, they don't drop their weapons and just fall into their beds, as we folks do after our classes and labs. They wash their wounds and immediately sit down to catalogue, with utmost honesty, an account of what happened on the battleground. They critically examine what went well and what went wrong. The leader then reaches the high command to give a debrief of the operation.
Well, this has a very precise purpose: it aims to carefully extract lessons and pass them on to the following generations of young soldiers, lessons which otherwise could unleash catastrophic fates.
They keep on updating the never-ending list of how to not get killed in a fierce encounter with the most inhuman truth of humans.
If we could bring even a minute fraction of how things are done in the military, we could have profound changes in our everyday conditions deep inside our national boundaries. Along the same line, we are here to note down, with similar honesty, a journey of four years which we popularly call engineering.
With the same vision to give an account of what all went well and what went utterly wrong.
Electrical engineering is a 200-year-old science with Michael Faraday and J.C. Maxwell as forefathers, followed by the genius of Nikola Tesla, T.A. Edison, Steinmetz, C.L. Fortescue, Harold Black, M. Atalla and a long legacy of great exploring minds. The course condenses the most important and relevant works into just four years, which is in fact small compared to two centuries, but still not a cup of tea.
Four years is quite a large time to hang-on, thus many times people lose the bearing of what they are into, unable to situate themselves with ongoings and hence lose their sight.
By the end of this piece, you would be presented with a panoramic view of the scene you hopefully be confronted-with after your own four years, so that you can always reflect and find yourself.
In the beginning, you may be very interested in learning how the whole energy system works, and the hard truth is that you will not get to know it in the first year itself; in fact, even the slightest gist is rare. You have to go through many building theories, sometimes grudging math, a few boring-looking experiments, etc., to finally be able to appreciate the whole picture. You will come across Fourier series, solutions of differential equations, complex algebra, symmetrical components, the Laplace and Park transformations, Taylor expansion, some dead-looking theorems like superposition and Thevenin, the behavior of electric and magnetic forces and electromagnetic phenomena, and a mesh of transistors and MOSFETs called the operational amplifier, which at first sight will hardly seem of great application in power systems. But when you develop your arsenal consisting of all these simple but powerful theories, tools and gadgets, you later get literally amazed by their capacities.
You have the Eureka moment in final year!
The significance of Fourier analysis in understanding and analyzing the behavior of a wide range of non-linear systems (like inverters, rectifiers, etc.) and its applications in the study of power harmonics; the solution of differential equations to figure out the transient behavior of almost all electrical subsystems, from machines to faults on transmission lines to the opening of a circuit breaker; the use of complex algebra to facilitate AC calculations; the utility of symmetrical components in studying unbalanced conditions in polyphase systems; the use of the Laplace transform to turn differential equations into simple algebra; the Taylor expansion to approximate trigonometric values using analog circuits; the use of Thevenin and superposition to enormously simplify network calculations; the operational viability of electric and magnetic forces and electromagnetic phenomena to execute the whole range of machines, measurement instruments and relays; the op-amps to implement amazingly any desired mathematical operation; and so on.
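To make the Fourier point concrete, here is a small sketch (our own illustrative example in Python, not from any course) showing how a square wave, the kind of waveform rectifiers and inverters throw around, decomposes into odd harmonics whose amplitudes fall off as 1/n:

```python
import numpy as np

# A square wave is the classic non-sinusoidal waveform. Fourier theory says
# it contains only odd harmonics, with amplitude 4/(n*pi) at harmonic n.
# Sample counts and names below are our own choices.
N = 4096                          # samples over one fundamental period
t = np.arange(N) / N              # normalized time, one full cycle
square = np.sign(np.sin(2 * np.pi * t))

# One-sided amplitude spectrum via FFT
spectrum = np.abs(np.fft.rfft(square)) * 2 / N

# Compare the first few harmonics against the analytical prediction
for n in range(1, 8):
    predicted = 4 / (n * np.pi) if n % 2 == 1 else 0.0
    print(f"harmonic {n}: FFT = {spectrum[n]:.4f}, theory = {predicted:.4f}")
```

Run it and the even bins come out essentially zero while the odd bins track 4/(nπ), which is exactly why a harmonic analyzer is, at heart, an FFT.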
We are going to list the important theorems and prevailing concepts and their applications in the larger scheme of things; we want to put in front of you a panoramic view of how it all looks after you get through this amazing four-year journey. We wish to put all the pieces together to help you get a grand view of the symphony of 21st-century power systems.
The Maxwell’s Laws
“The scope of these equations is remarkable, including as it does the fundamental operating principles of all large-scale electromagnetic devices such as motors, cyclotrons, electronic computers, television, and microwave radar.”
-Halliday and Resnick
The major part of electrical engineering is a manifestation of Maxwell's laws. KVL, KCL, machine theory: almost all of it can be understood by starting from the four Maxwell equations; conversely, start reasoning about any of this stuff and it will eventually boil down to the Maxwell equations.
Let us illustrate that the most basic laws, KVL and KCL, are mere special cases of the third and fourth equations.
Consider a simple resistive circuit excited by a DC voltage source.
So here the current would be I = V/R.
Because: V = IR
Because: V - IR = 0 😂😂😂
Too obvious, isn't it? Just stay in the game; it will show how facts we take too much for granted come back to diss us someday.
Let us ask one more, “why?”.
So, the answer is: because the algebraic sum of voltages in a loop is zero.
Now you see our EE theories falling apart. Many of us would not be able to answer this "why", because we take KVL for granted.
Let us put one example where KVL will just tear apart completely.
Assume this coil is now placed in a magnetic field and rotated externally at constant RPM, somehow maintaining its contacts with the battery.
Apply KVL now: V = IR should still hold, but we get horrified by what the ammeter shows; it shakes.
So the catch is that KVL is just a special case of some other law. That other law is the third equation of Maxwell. It says that the line integral of the electric field around a loop is equal to the negative of the rate of change of the surface integral of the magnetic field through the loop.
It is popularly quoted as "the EMF induced in a coil is the rate of change of flux through the coil".
If the right-hand term is forced to zero, we recover KVL.
So, whenever we apply KVL to any loop of a circuit, we unknowingly set the rate of change of flux linking the circuit to zero; if that is not the case, as above, we get wrong answers.
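In symbols (our own rendering of the standard form), Maxwell's third equation and the special case that gives back KVL:

```latex
\oint_{\partial S}\vec{E}\cdot d\vec{l} \;=\; -\,\frac{d}{dt}\int_{S}\vec{B}\cdot d\vec{A}
\qquad\Longrightarrow\qquad
\sum_{\text{loop}} V_k = 0 \quad\text{whenever}\quad \frac{d\Phi}{dt}=0
```

The left equation is Faraday's law; force the flux term to zero and the sum of drops around the loop has nowhere to go but zero, which is KVL.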
We know for sure that KCL is also a special case of the Maxwell equations, but as of now we are not quite able to manipulate the equations to show it; this will be updated shortly.
You can also write to us.
The Grand Theory of Machines
Very broadly speaking, there exist two types of machines: those running on DC supply and those on AC supply. Under the DC category we have shunt, series and compounded motors; under the AC category we have induction motors, cylindrical synchronous and salient synchronous machines. Other machines like the BLDC, stepper and SRM are extensions of these basic machines. It takes over two semesters to get all of them into our heads, and that too just vaguely.
Nearly all the important calculations on DC machines can be made with three simple equations.
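The text doesn't list them, but the three equations usually meant here are, in standard notation, the back-EMF equation, the torque equation and the armature-loop KVL:

```latex
E = K\phi\,\omega, \qquad T = K\phi\,I_a, \qquad V = E + I_a R_a
```

From these three, speed, torque, current and efficiency questions on a DC machine all follow by simple algebra.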
All the engineering-relevant parameters of an induction motor can be deduced by drawing its equivalent circuit.
This simple diagram yields all the details: the rotor speed (if synchronous speed is known), the current, the power factor, the core loss, the air-gap power, the rotor copper loss, the mechanical power and the torque.
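Since the diagram can't be reproduced here, here is a rough sketch (in Python, with all parameter values being our own illustrative assumptions) of how the standard per-phase equivalent circuit yields those numbers:

```python
import cmath, math

# Per-phase induction-motor equivalent circuit (rotor quantities referred
# to the stator). Every number below is an illustrative assumption.
V = 230.0          # per-phase supply voltage [V]
R1, X1 = 0.5, 1.2  # stator resistance and leakage reactance [ohm]
R2, X2 = 0.4, 1.2  # referred rotor resistance and leakage reactance [ohm]
Xm = 30.0          # magnetizing reactance [ohm]
s = 0.04           # slip
Ns = 1500.0        # synchronous speed [rpm]

Zr = complex(R2 / s, X2)        # rotor branch at slip s
Zm = complex(0, Xm)             # magnetizing branch (core loss neglected)
Z = complex(R1, X1) + (Zr * Zm) / (Zr + Zm)

I1 = V / Z                      # stator current (phasor)
pf = math.cos(cmath.phase(I1))  # power factor (lagging)

E = V - I1 * complex(R1, X1)    # air-gap voltage
I2 = E / Zr                     # rotor current
Pag = 3 * abs(I2) ** 2 * R2 / s # air-gap power, 3-phase [W]
Pmech = (1 - s) * Pag           # converted mechanical power [W]
ws = 2 * math.pi * Ns / 60      # synchronous speed [rad/s]
T = Pag / ws                    # torque [N*m]
Nr = (1 - s) * Ns               # rotor speed [rpm]

print(f"I1={abs(I1):.1f} A, pf={pf:.3f}, Pag={Pag:.0f} W, "
      f"T={T:.1f} N*m, Nr={Nr:.0f} rpm")
```

One circuit, and out fall the current, power factor, air-gap power, rotor copper loss (s·Pag), mechanical power, torque and speed, exactly as claimed above.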
For synchronous machines we usually draw phasors to get all the important numbers, like torque, power, current and power factor.
Dropping all the details of how the machines work, almost all numericals can be solved if one remembers the equations, equivalent circuit and phasors mentioned above.
But it becomes tricky when one tries to explain them, as for different machines we have to follow different approaches to explain the generation of forces and so on.
General approaches are:
The DC motor rotates as a current-carrying wire experiences a force in a magnetic field; the induction motor runs by virtue of the principle of electromagnetic induction; and the synchronous motor runs as the rotor field gets locked with the stator field.
We know you would have furrowed your eyebrows as you walked through this paragraph.
There may be several lines along which they may be unified, but here we present our own speculation to understand and explain the operation of all kinds of motors using one single theory.
Basically, the torque generated in all motors (DC or AC) is analogous to the torque experienced by a magnetic dipole in a magnetic field, trying to align itself along it. If the two fields are stationary in space, or both moving but stationary with respect to each other, then the torque will be constant.
Notice that for given magnitudes of the vectors, the torque depends on the angle between them, maximizing at 90 degrees.
In DC motors, the field produced by the stator is fixed in one direction. The rotor, though rotating, maintains (with the help of brushes) a fixed current distribution at every angular position in space: irrespective of which conductor occupies a given position, it always carries the same current in the same direction. This gives rise to another magnetic field (or, we shall say, dipole) which also remains fixed in space, at 90 degrees to the first. It is these two dipoles that generate torque and make the rotor rotate.
Induction machines don't seem to fit this analogy in any way.
Well, let us check.
So, the stator produces a rotating magnetic field at a fixed synchronous speed; meanwhile, the rotor rotates at a speed somewhat less than synchronous speed, depending upon the load on the shaft, or on the slip "s" as we say. The speed of the rotor's rotating magnetic field is sNs with respect to the rotor structure; this structure itself rotates at a speed Nr, so the speed of the rotor field as seen by the stator field is sNs + Nr - Ns, which is luckily zero. Hence, again, we can explain the torque on the rotor by the interaction of two moving magnetic vectors that are stationary relative to each other.
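The bookkeeping in symbols: with the rotor speed written as N_r = (1-s)N_s, the relative speed of the rotor field with respect to the stator field is

```latex
sN_s + N_r - N_s \;=\; sN_s + (1-s)N_s - N_s \;=\; 0
```

so the two fields are frozen relative to each other at every slip, which is what lets the dipole picture carry over.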
In synchronous machines, it is easily seen that the field produced by the rotor is fixed with respect to the rotor structure but rotates at synchronous speed with respect to the stator, as the rotor itself rotates mechanically at synchronous speed. The other field, produced by the balanced stator, obviously rotates at synchronous speed too, again allowing us to imagine the torque on the rotor as due to the interaction of two relatively stationary dipoles.
That gives us the freedom to explain the working of all those machines in one shot, one go.
We are also trying to use this theory to get our heads around vector control of induction motors.
We haven't figured it out yet, but we wonder if a similar analogy could be applied to the BLDC, SRM and stepper too!
Until now we were uniting all the machines to understand the rotating machines as a whole, now let’s divide them.
And when we try to divide them, we are basically entering a domain called Electrical Drive Systems, in which clear and very sharp boundaries are drawn to distinctly identify the machines for purposeful use in a given operation.
Almost all the major subjects of electrical engineering come under the umbrella of electrical drives: obviously the machines themselves, power electronics for proper power conditioning, control systems for the power electronics, analog and digital electronics for the control systems, and lastly the microprocessors that bring those electronics alive.
DC machines, though providing easy speed control, suffer from the problems of sparking and heavy maintenance, which makes them unfavourable.
The induction motor, being a very simple, rugged, cheap device with sparkless operation, is suitable for almost 75% of industrial applications today. Its bottleneck is its inherent characteristic of drawing reactive power from the mains. At higher power levels the power factor becomes a crucial parameter, as it greatly affects efficiency, motor heating, overall power system loading, drops in supply lines, etc.
At higher power ratings, a synchronous motor with god-like control over the power factor is an obvious choice. Not just UPF: it can be made to operate at leading power factor, thus balancing the reactive power requirement of the industrial setup as a whole.
The Power Flow Equation
Consider a very generalized two-port network. Using KVL we can figure out the current, and hence the active and reactive power flow, at both ends.
The apparent power at receiving end:
As power system courses always tend to neglect resistances, the active power can be approximated to:
And the reactive power can be simplified by assuming δ to be small:
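The equations referred to here appear to have been lost in formatting; reconstructed in their standard textbook form (with V_s and V_r the sending- and receiving-end voltage magnitudes, X the series reactance, and δ the angle between the two voltages):

```latex
S_r = V_r I^{*}, \qquad
P \;\approx\; \frac{V_s V_r}{X}\,\sin\delta \quad (1), \qquad
Q \;\approx\; \frac{V_r\,(V_s - V_r)}{X} \quad (2)
```

Equation (1) follows from neglecting line resistance; equation (2) additionally assumes δ is small so that cos δ ≈ 1.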
These two equations apply to transmission lines, and to synchronous motors and generators as well, and are very prominent equations in EE.
Assuming the bus to which the synchronous generator is connected is an infinite bus, this real power equation becomes useful in the swing curve and is used to study the steady-state and transient stability of generators.
We will now see how it is useful in transmission lines.
Equation (1) indicates that the direction of power flow is determined by the angle delta (commonly called the power angle), and it is worth noting that power flow can occur from a low voltage level to a high voltage level too, if delta allows.
Equation (2) points to a very crucial phenomenon in transmission lines: if the reactive power is more, then there will be more drop in voltage; conversely, more voltage sag indicates more reactive power being extracted. Transmission lines being very high voltage systems, a voltage regulation of 5-8% is required, so strict control of Q is desired.
Moreover, the reactive component of current causes unnecessary ohmic loss in long lines as well as underutilization of every component. Rather than being supplied from the generating station, it is better to provide reactive power locally; hence distribution stations began switching in their capacitor banks when such voltage sags occur.
The Indispensable Control Theory
Control theory is about dealing with disturbances, which are the absolute nature of nature anyway.
If we knew for sure the response of a system to a given input, then achieving any desired output would not be much difficult.
For example, if you know for sure that a person will slap you back if you slap him first, then it won't be a difficult task to get yourself slapped. The catch is that the surety is not there: he might forgive you if he is in a good mood, or, at worst, give you a headshot if annoyed.
Control theory largely accounts for the disturbance, how to still maintain the desired output even under any uncertain disturbances.
All the parameters on which we judge system performance, like fast settling time and low steady-state error, can easily be achieved with a suitable controller in open loop. Closed loop, on the other hand, creates stability issues in most otherwise stable plants and brings in the problem of sensor noise, but its greatest advantage is that it takes into account the disturbances (changes in the plant model, external disturbances, etc.), which is of extreme interest in a natural environment.
The power system is a dynamic system; by that we mean it keeps on changing all the time. The tremendous amount of energy being generated should always equal the energy consumed at any instant, because there is no storage in between. Thousands of generators are spun and excited to just exactly meet the load demand of millions of consumers spread over a vast geographical area.
This is a huge challenge if we think more deeply.
If we knew exactly the load demand (say 10 W), the loss in the lines (say 2 W) and the generator losses (say 0.01 W), then we would have calculated the exact rate at which to fire the coal, and we would be done and gone out to play soccer.
Problem is that 10 W never settles. Every time we turn-on even a light bulb the power system adjusts itself to a new equilibrium state.
The pressing of a switch, the falling of a tree on the lines, or the falling of an electric pole itself are all different types of disturbances; fixed safe levels of parameters like voltage and frequency are the desired outputs of the system, with the inputs being the coal firing rate, the diesel burning rate, the water-gate opening, etc. Without feedback, we can never do that. It is not as if there is just one feedback path going back to the power stations from the load centres: control systems exist at all levels, and together they make the overall system work as if it were one closed loop. Hence studying control systems and theory becomes of extreme utility to us.
How we do that unravels the need to study analog electronics, digital electronics, and microprocessor and microcontroller systems carefully.
The Leverages of Power Electronics
One of the leading reasons why Edison lost to Nikola Tesla in the war of currents was the inability to manipulate DC, unlike AC power, whose voltage levels could be pushed to extremely high levels with ease by the use of transformers, thereby improving the efficiency and performance of the whole power system and leading to the concept of centralization and the economy of scale.
DC systems, like DC motors and DC transmission lines, were hence largely suppressed as the growth of AC accelerated, but they do have their own advantages. And now, with the hacks of power electronics, DC systems are gaining ground as complements to AC systems.
Let us illustrate with a few examples where AC systems have bottlenecks and power electronics comes to their rescue.
Case 1: Power flow in AC lines
The limits on maximum power transfer through a line are the thermal limit and the dielectric limit; if the system is already at ultra-high voltage levels, then the thermal limit becomes the ultimate limit.
So, what can we do to achieve the maximum transferable power?
Voltage levels are already raised to the verge of dielectric failure, and delta we cannot increase beyond about 30 degrees. So the only controllable parameter in our hands is the line impedance.
If there is no control over the line impedance, the power lines will be greatly underutilized; decreasing the effective line impedance is the only option at our disposal to approach that maximum limit.
Power electronics allows us a clean, simple, stepless control of the effective line impedance, called series compensation.
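A back-of-the-envelope sketch (our own illustrative numbers, not any real line's data) of how series compensation raises the transferable power, using P = VsVr·sin(δ)/X:

```python
import math

# Power transfer over a lossless line: P = Vs*Vr*sin(delta)/X.
# Series compensation inserts capacitive reactance Xc in the line,
# reducing the effective X and raising the transferable power.
Vs = Vr = 400e3        # sending/receiving end voltages [V] (illustrative)
X_line = 100.0         # series line reactance [ohm] (illustrative)
delta = math.radians(30)

def p_transfer(Xc=0.0):
    """Power transfer with series capacitive compensation Xc [ohm]."""
    return Vs * Vr * math.sin(delta) / (X_line - Xc)

P0 = p_transfer()          # uncompensated
P40 = p_transfer(40.0)     # 40% series compensation
print(f"uncompensated: {P0/1e6:.0f} MW, 40% compensated: {P40/1e6:.0f} MW")
```

The same line, the same delta, the same voltages: compensating 40% of the reactance lifts the limit by the factor 100/60, which is the whole business case of series capacitors.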
Not only that: PE has now matured enough for DC manoeuvring at extraordinarily high voltage and current levels, which enables concepts like HVDC lines, with their extremely desirable property of easy power control.
The direction of power flow in AC lines depends on the angle delta, and there is not much freedom in manipulating this angle, nor is it easy in light of stability problems.
HVDC, however, doesn't suffer from this issue.
Follow this ABB/Hitachi Power Grids commercial on how only HVDC was capable of doing what they did.
Case 2: Speed Control Problem in Induction Motors
DC motors were predicted to become obsolete by the end of the 1960s, but one can still see them alive, in fact quite prosperous.
They have very desirable operating characteristics.
For shunt motors, if the torque demand increases, the armature current increases proportionally; if the terminal voltage is kept fixed, the change in current does not affect E much, since the small armature resistance diminishes the effect. Hence the speed remains almost constant.
If we want a higher rotor speed, we just simply decrease the field flux.
If we want to operate in a lower speed range, we decrease the supply voltage.
Notice that by appropriately changing the parameters, many desired characteristics can be obtained.
However, for induction machines, one doesn’t have such a degree of freedom.
Once a machine is designed, its maximum slip gets fixed. Obviously, the maximum slip is kept low for better efficiency, so the rotor RPM is limited to a very narrow range. Beyond this limit the machine would be unstable, as we know.
So we can get a great range of torque at almost constant speed, but variable-speed operation is out of reach.
Power electronics has helped overcome the speed control problems of induction and synchronous machines with the advent of VFDs and other advanced schemes like vector control, which almost transforms an induction machine into a DC shunt motor.
The torque-speed characteristic can be squeezed or expanded by varying the frequency, while taking care of the supply voltage so that saturation or insulation problems don't kick in.
Only at the mercy of power electronics could those drives be built.
And this list is getting larger and larger every day where power electronics somehow imparts the most favourable characteristics of DC systems to AC systems.
Well, the devices used in power electronics, called power diodes, power transistors, power MOSFETs and IGBTs, don't differ from their "electronics" counterparts in terms of what they do. However, a simple diode has two layers, p and n, while a power diode has three; so, along with having "power" in front of their names, power electronic devices vary greatly in construction.
To be continued………….
On the occasion of the auspicious festival of Diwali, team CEV wonders what could be more relevant and important to talk about than harmonic resonance!!
Haha, but no kidding!
Since Diwali is a festival of "lights", and these days harmonics are unanimously voted the most popular villain in the electrical world for turning the "lights" out!
Well, if you are a new reader at CEV, we would like to bring to your notice that CEV's Aantarak division has been literally obsessed with power harmonics for a long time now. We carried out an in-depth preliminary literature recon, followed by a collaborative effort to develop our own harmonic analyzer from scratch. Both can be accessed via the following links respectively:
Continuing along the same lines, we walked another mile to get our heads around the harmonic resonance phenomenon, which has otherwise been tagged as seriously spurious.
We really hope to wind up our intuition for harmonics and related phenomena in this last blog of the series, so we wish to describe it in its full glory. You might encounter some repeating themes; apologies for that.
For any domain, a glance at its history really helps in getting a larger picture of things. Being aware of the historical background greatly aids in understanding things with continuity and helps in extrapolating the ongoings to get some future insight.
EE folks haven't just begun struggling with harmonics in recent times; in fact, one can trace the struggle back to the early 20th century, when power systems were in their earliest phase. Charles Proteus Steinmetz (yes, the same engineer who taught the world how to draw the equivalent circuit of the induction motor and gave us the handy notation "j" to simplify our AC calculations) wrote an excellent introductory paper on harmonics. At that time, transformers and motors saturated due to inferior core materials, giving rise to these problems. The problems harmonics pose remain unchanged today, but the sources and impacts have been magnified manifold.
The 21st-century power system seems to be literally littered with inky-dinky semiconductor devices that draw currents severely off-beat from the sinusoidal; moreover, the advent of high-power electronics has made the situation more vulnerable. Technically, these devices/loads are called non-linear devices/loads, and the problems they pose are quite spurious in nature. We know that the high-frequency components of these currents, called harmonics, interact with the power system in ways leading to overheating of components, flickering, false circuit-breaker trippings, or even catastrophic events like wide-area power outages (aka blackouts), as reported by many utilities across the world in recent times.
These harmonics can bring the capacitor banks used for power factor improvement and voltage stability into resonance with other power system components, leading to the banks blowing up and causing further contingencies, like voltage collapse.
In this blog, a detailed analysis of power factor banks, non-linear loads, resonance phenomenon in RLC, and lastly the resonance in power system due to harmonics is carried out.
One more note: you might be aware of the MATLAB company tagline, which reads "accelerating the pace of engineering and science", and CEV is really gonna help MATLAB do that here. We will use appropriate MATLAB simulation models to verify the theory and bring home to the readers a sophisticated understanding of the phenomenon.
The Skyrocketing hopes!!!
Power Factor Capacitor Banks
Thevenin’s Equivalent of Power System
Electrical Resonance in RLCs
Harmonic Resonance with PF Capacitor Banks
Harmonic resonance is among the most dreaded phenomena that power system harmonics are observed to unroll.
ABB, the mega-giant of the power system industry, tries to bring to the table the significance of eliminating power harmonics through its product commercial.
Though regarded as the most suspected reason for unexplained failures of electric utilities, harmonic resonance is a phenomenon that can be explained in a paragraph of no more than 100 words; by the end, you will be able to explain it to the small kids around you.
The story begins from the power factor capacitors banks……
You might have appreciated the fact that the use of shunt capacitor banks across electrical motors (lagging loads) can greatly improve the power factor.
The underlying idea is to provide the reactive power locally instead of drawing it from the system thereby reducing the supply current and preventing the elements of the whole power system (from T-lines down to the generators) from overloading.
This concept can be intuitively understood with the following graphs.
Consider a sinusoidal voltage applied across an inductive load; the result is a lagging current.
So, the convention is to simply connect a capacitor bank of the required capacitance. Since the capacitor is in parallel, the voltage across it is the load terminal voltage itself, and the current through it obviously leads that voltage by 90 degrees.
The phasor diagram:
The waveforms are like:
Adding the parallel currents to get the supply current:
So, it can easily be seen that the peak of the resultant current is reduced; at the same time the power factor angle is also reduced, hence the power factor is improved!!
This same result can be concluded by simply adding the current vectors mathematically.
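Here is that phasor addition done numerically (a sketch with our own illustrative numbers), sizing the capacitor current to exactly cancel the load's reactive component:

```python
import cmath, math

# Adding the capacitor current to the lagging load current as phasors.
# The supply voltage is the reference phasor; numbers are illustrative.
V = 230.0                                       # supply voltage [V]
I_load = cmath.rect(10.0, math.radians(-45))    # 10 A lagging by 45 degrees

# A shunt capacitor draws current leading the voltage by 90 degrees;
# pick its magnitude to cancel the load's reactive (imaginary) component.
I_cap = complex(0, -I_load.imag)

I_supply = I_load + I_cap
pf_before = math.cos(cmath.phase(I_load))
pf_after = math.cos(cmath.phase(I_supply))
print(f"|I| {abs(I_load):.2f} -> {abs(I_supply):.2f} A, "
      f"pf {pf_before:.3f} -> {pf_after:.3f}")
```

The supply current magnitude drops from 10 A to about 7.07 A and the power factor climbs from 0.707 to unity: the graphical result above, in three lines of arithmetic.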
So, what exactly is happening here?
The picture becomes crystal clear if we simulate an RL load with a shunt capacitor and visualize the instantaneous power consumed by each element.
By putting in appropriate parameter values, it can be seen that when the inductor is absorbing power, the capacitor is releasing its stored power; and when the inductor is releasing the energy stored in its magnetic field, the capacitor is absorbing it into its electric field. This inductive and capacitive power is collectively called reactive power: it just flows in the system but never manifests itself as real power, it merely oscillates. If this power exchange becomes equal, then no net reactive power is drawn from the source.
So the final result is a significantly reduced net reactive power drawn from the source, and so a reduced supply current.
Question: Do you think that the capacitor in ceiling fans of households serves the purpose of PF improvement?
Now, to analyze the effect of a shunt capacitor with non-linear loads, i.e. loads that produce harmonics, we have to follow a different approach, a completely different line of attack.
The theory we just saw remains equally true; but as far as harmonics are concerned, we are more interested in first understanding the frequency response rather than the power calculation.
Some of the most basic and prevalent techniques, used everywhere and all the time in power system analysis, must first be grasped before we try to understand what happens with non-linear loads:
The concept of Thevenin's equivalent
The concept of current injection
The concept of superposition
Thevenin’s Equivalent of Power System
Consider this point of view: the two terminals supply a single-phase non-linear load, and conventionally a power capacitor is connected in parallel to supply reactive power locally. The black box here is an abstraction of all the distribution and transmission transformers, transmission lines, generators and whatnot, all working in synchronism.
So, the Thevenin theorem says that the black box can be represented by an equivalent emf source and an equivalent impedance in series, called Thevenin’s voltage and Thevenin’s impedance respectively.
The Thevenin voltage is simply the open circuit terminal voltage.
And the Thevenin impedance is the impedance seen by the load given all the voltage and current sources are deactivated.
Once Vth and Zth are known, to assess the impact of connecting a load impedance to an already loaded grid, we don't go on solving the whole vast electric mesh again. The revolutionary French electrical engineer L.C. Thevenin came up, in the 1880s, with a method to enormously simplify large electrical circuits.
Find Vth and Zth. Now turn off all the sources, connect the load wherever required, excite that point with the negative of Vth, find the drops, and add them algebraically to the already existing system. This is applicable only to linear systems, by virtue of the superposition theorem. This line of attack is chosen when the load impedance is the centre point (i.e. the load impedance is known). It is quite a popular technique, applied to calculate the impact of loading on different buses of a system, fault analysis for a known value of fault impedance, etc.
Now, if the impact of a given load current is the point of attention (rather than the load impedance), we use a slightly different approach. We turn off the source and inject an equal load current at the point of connection of the load, find the drops at the different nodes, and again add them algebraically to the existing system.
Now, in this harmonic resonance study, notice that we are utterly concerned with the load current. Our prime motive is to see the impact of a given non-sinusoidal load current on the system.
Here it is important to reflect on one important fact. Our power system is built of thousands of different kinds of elements: synchronous generators and asynchronous IMs, transformers, T-lines, cables, a huge variety of loads; yet all of them can be modelled as combinations of just three fundamental elements: resistance, inductance and capacitance.
Q. How would you modify the Thevenin equivalent if the power systems have power electronic components?
So it is all those little-tiny things learnt in the early engineering classes of circuit theory that come back to manifest in harmonic resonance and other complicated higher phenomena. Here we realise that solving the RLC circuit is only dull until we know how far-reaching the meanings of those Rs, Ls and Cs are in practical applications.
But all of these theories are strictly applicable to a linear system.
Think for a second how to adapt these tools for non-linear currents.
So, let's revisit our aim: our aim is to find the impact of non-sinusoids, which means we are trying to see the response of the system when subjected to different frequencies. Now this leads us to a completely different space. Do you remember a phenomenon related to checking the response of a system to inputs of different frequencies?
You guessed it right, the series and parallel RESONANCE!!!!!
Moreover, we are finding the frequency response, and by the time we have completed the course in control engineering, the frequency response characteristic of any system has become almost synonymous with the Bode plot.
It becomes as good as people screaming at you to "draw the frequency characteristics" and you literally hearing "draw the Bode plot"!
And why not? After all, a Bode plot is a plot of the logarithm of the magnitude of the steady-state output-to-input ratio for different frequencies of sinusoidal input excitation.
Electrical Resonance in RLCs
Resonance in a series circuit can be identified as a phenomenon in which, for a given magnitude of a sinusoidal voltage source, the current through the branch reaches a maximum at some angular frequency of the voltage source.
Here is the Bode plot for the system, considering the voltage signal as input and the current in the branch as output:
The plot indicates that at a certain frequency of voltage excitation, the current through the circuit reaches its maximum value.
Similarly, parallel resonance can be identified as a phenomenon in which for a given magnitude of sinusoidal current source, the voltage across the branch reaches a maximum at some angular frequency of the current source.
Reflecting on these two base statements, all the remaining conditions of resonance can be deduced.
So here is the Bode plot of a parallel RLC circuit, taking the voltage across the elements as output and the total current as input.
In this case, the voltage reaches a peak at the resonant frequency.
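The frequency behind both plots is the same: f0 = 1/(2π√(LC)). A quick numerical check (with illustrative component values of our own):

```python
import math

# Resonant frequency of an RLC circuit: f0 = 1/(2*pi*sqrt(L*C)).
# At f0 a series branch shows minimum impedance (current peaks) and a
# parallel combination shows maximum impedance (voltage peaks).
L = 10e-3      # inductance [H]  (illustrative)
C = 100e-6     # capacitance [F] (illustrative)

f0 = 1 / (2 * math.pi * math.sqrt(L * C))

def z_series(f, R=1.0):
    """Series RLC impedance at frequency f; reactances cancel at f0."""
    w = 2 * math.pi * f
    return complex(R, w * L - 1 / (w * C))

print(f"f0 = {f0:.1f} Hz, |Z| at f0 = {abs(z_series(f0)):.3f} ohm")
```

At f0 the inductive and capacitive reactances cancel and only R is left; move away from f0 in either direction and the impedance climbs, which is exactly the valley (series) or peak (parallel) seen in the Bode plots.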
Harmonic Resonance with PF Capacitor Banks
We have built all the necessary parts, and now it's time to put them together to see the larger picture and really wind up our intuition around harmonic resonance. We started with this not-so-technical diagram:
Reflect back and finally, we have:
It is now quite evident that parallel resonance is seen where parallel elements are excited by currents over a range of angular frequencies. These parallel elements in a power system are formed by the PF capacitors and the Thevenin equivalent at the node. The non-linear load acts as a source of currents at different angular frequencies. So, if the non-linear load has a harmonic component whose frequency matches the natural frequency of this RLC, then parallel resonance is an unavoidable fate.
And this, in short, is the essence of harmonic resonance in power systems.
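A back-of-the-envelope check often used in practice estimates the resonant harmonic order from the short-circuit level of the bus and the capacitor bank rating, h ≈ √(Ssc/Qc). The numbers below are made up purely for illustration:

```python
import math

def resonant_harmonic_order(S_sc_mva, Q_cap_mvar):
    """Approximate harmonic order at which a PF capacitor resonates with the
    system (Thevenin) inductance: h = sqrt(S_sc / Q_c)."""
    return math.sqrt(S_sc_mva / Q_cap_mvar)

# e.g. a bus with 25 MVA short-circuit level and a 1 MVAR capacitor bank
h = resonant_harmonic_order(25.0, 1.0)
print(round(h, 2))  # 5.0 -> dangerously close to the 5th harmonic of a 6-pulse rectifier
```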
Wouldn’t it be delightful to let a kid know about this?
A Practical Approach
How to obtain the harmonic spectrum of a non-linear load?
MATLAB gives you an elegant way forward: use a spectrum analyzer (in the correct configuration).
A sample case of a popular non-linear load, a three-phase rectifier:
A severely distorted source current:
Here is what its harmonic spectrum looks like:
NOTE: 6-pulse rectifiers have a current THD of about 26%, and the significant harmonics are the 5th (250 Hz), 7th (350 Hz) and 11th (550 Hz).
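The 5th, 7th, 11th… pattern is no accident: a p-pulse rectifier draws characteristic harmonics of order kp ± 1. A tiny sketch:

```python
def characteristic_harmonics(pulses, k_max=3):
    """Characteristic harmonic orders of a p-pulse rectifier: h = k*p +/- 1."""
    orders = []
    for k in range(1, k_max + 1):
        orders += [k * pulses - 1, k * pulses + 1]
    return sorted(orders)

print(characteristic_harmonics(6))   # [5, 7, 11, 13, 17, 19]
print(characteristic_harmonics(12))  # a 12-pulse rectifier skips the 5th and 7th
```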
How to obtain the Thevenin equivalent of a power system?
The answer remains the same: MATLAB provides an elegant way to do it.
Using an impedance measurement block:
What you get is:
If you are observant enough, these plots contain all the data we need to predict a harmonic resonance in a capacitor bank across the non-linear load.
Well, we will leave it to you to build and run the models for yourself, because we don't want to steal your pride of figuring things out on your own. Good luck!
However, in the end, we will be kind enough to at least draw a conclusion:
The conclusion: when the non-linear load has a current component of frequency close or equal to the natural frequency, the system goes into parallel resonance, i.e. the system impedance is at its highest. For a given current, the highest impedance clearly results in the highest voltage across the capacitor, and hence the maximum current through it (notice that capacitive reactance decreases at higher frequencies).
The capacitor is soon blown; as a result, reactive power is drawn from the supply, increasing the line current and blowing the main fuse as well. And the last sad note: if the capacitor happens to be a utility capacitor and the non-linear load is quite heavy, then a blackout in the area is an unavoidable destiny.
What is even more surprising: current harmonics produce the parallel resonance we just saw, but if harmonics are present in the voltage waveform, series resonance can also occur in a dramatic way, causing the collapse of a perfectly healthy bus due to a non-linear load at another bus. One can work out those details on one's own!
We hope we have inspired you enough to get comfortable with two extremely useful tools in electrical engineering, the massive MATLAB and the sweet Scilab, and we hope that this CEV team effort boosts you a step towards your holy dream vision for the world!!
A controlled buck converter finds application on innumerable platforms. It elegantly executes the mobile fast-charging algorithm, the MPPT algorithm in some solar modules, robotics, etc., with optimal desired performance. It is an elementary power converter, used as a power source for other electronic equipment like microprocessors, relays, etc.
One can jokingly call it the 1:1 auto-transformer of the DC electricity world.
Buck converters, also known as step-down choppers, are ubiquitous, so it is very handy to have a design scheme, tested procedures and simulation models to quickly and accurately build a ready-to-deploy DC buck converter. We will not describe the working in great depth, as the principle of operation can be found in any standard power converter textbook; in this blog we wish to present a step-by-step guide to designing a buck, taking into account all important practical considerations.
The circuit operation can easily be understood by sketching the waveforms in two states, i.e. when the semiconductor switch is triggered and when it is not triggered.
ON-STATE: The inductor current rises linearly with time, as the voltage source gets applied directly across the inductor and load.
OFF-STATE: Inductor current decreases linearly as the circuit gets short-circuited by the forward-biased diode, which allows for current free-wheeling.
The average voltage applied is a function of the time for which the semiconductor is turned on and off, i.e. of the duty cycle of the pulse generator.
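The on/off averaging above reduces to the classic ideal-buck relation Vout = D·Vin, which can be sketched as:

```python
def buck_output_voltage(v_in, t_on, t_off):
    """Ideal (lossless, continuous-conduction) buck output: Vout = D * Vin,
    where D = t_on / (t_on + t_off) is the duty cycle."""
    duty = t_on / (t_on + t_off)
    return duty * v_in

# 150 V input, switch on for 8 us of every 10 us period -> D = 0.8
print(round(buck_output_voltage(150.0, 8e-6, 2e-6), 6))  # 120.0
```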
The first thing we require is the set of desired ratings and performance figures for the buck converter. These specifications ultimately determine the device parameters that will give the desired operation. Consider a sample case in which we are operating a constant-power load from a variable DC voltage source, for example a solar module.
Input: 150 V- 400 V
Output: 120 V
Switching Frequency: 100 kHz (typical for choppers)
Load current: 50 A
Ripple (P-P) in load current: 10%
Ripple (P-P) in load voltage: 5%
Max Load Power support: 25%
Max Voltage drop during support: 10%
Backup duration: 10 ms
Keeping in mind these desired performance parameters the ratings of the various elements will be decided.
Circuit Element Rating Calculations
The value of the inductor determines the ripple in the load current. Large ripples in the load cause poor performance of DC loads: lights will flicker, DC fans will produce pulsating torque and noise, etc.
Since varying the duty cycle results in different turn-on and turn-off times, and thus varying ripple, all we have to do is a trial-and-error procedure to find the value of L that keeps the ripple below permissible limits in all possible cases:
Test case 1: Vin = 150 V; Vout = 120 V
For peak to peak ripple current of 10%:
Now inductor equation during on-time is:
*Assuming load voltage remains almost constant during the entire cycle
Now comes a very crucial part. The theoretical value of the inductor has been calculated, but the important thing is that in the real environment we always need to over-rate our circuit elements to accommodate the uncertainty of the real world. If we are designing a commercial product, there is a very tight margin for these over-ratings. That's why all gadgets are rated to operate in specified environments of temperature, moisture, etc.
It is good practice to keep a safety factor of 25% for operating temperature changes and 20% for derating of inductor coil over time:
Extreme Test case 2: Vin = 400 V; Vout = 120 V **Worst case calc
For peak to peak ripple current of 10%:
Now inductor equation during on time is:
*Assuming load voltage remains almost constant during the entire cycle
Again, keeping a safety factor of 25 % for operating temperature changes and 20% for derating over time:
Now, since the previous value does not meet the worst-case requirement, the inductor value should be updated to at least 252 uH.
We must also verify that the ripple current requirement is met for input voltages between 150 V and 400 V:
Random Test case 3: Vin = 250 V; Vout = 120 V
Now max current through inductor:
So finally, inductor ratings are:
Ripple current is less than 10% for all cases.
Peak Reverse voltage occurs under off-time:
Considering safety factor of 30%:
The peak current would be the same as the inductor current; taking safety factors of 25% and 30% for spikes due to stray inductance and for temperature rise respectively:
So, the semiconductor switch ratings are:
*RdsON should be as low as possible.
*Now, since the reverse peak voltage is less than 600 V, a MOSFET can be employed; however, if gating loss must also be considered, then IGBTs would be preferable.
The diode will be subjected to the same voltage and current ratings as the MOSFET.
*In addition, care must be taken to select a diode with high-frequency operating capability, on the order of 100 kHz.
The high-frequency ripple present in the inductor current will be bypassed by the capacitor, as its impedance varies inversely with frequency. However, in a real capacitor there is always some series resistance, which leads to ripple in the voltage across the capacitor terminals, and in turn the load terminals.
1. Effective Series Resistance (ESR) rating:
A ripple of less than 2% is desired in output voltage, so:
Since the ripple in the load voltage is largely caused by the series resistance,
a load ripple voltage of less than 2% is obtained in all cases, since 5 A is the maximum ripple current.
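The ESR bound follows directly from Ohm's law on the ripple current; a minimal sketch, using the 2% figure and 5 A ripple from the text:

```python
def max_esr(v_out, ripple_fraction, i_ripple):
    """Largest ESR that keeps the ESR-caused output ripple under the limit:
    ESR <= (ripple_fraction * Vout) / dI."""
    return (ripple_fraction * v_out) / i_ripple

# 2% of 120 V = 2.4 V allowed across the ESR at 5 A ripple
print(round(max_esr(120.0, 0.02, 5.0), 2))  # 0.48 ohm
```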
Moreover, this charged capacitor discharges to meet the load current for a small duration when the supply is lost or the load increases slightly. The same principle is applied in many electronic gadgets like PCs and laptops to bridge the power loss while switching from mains supply to back-up power.
2. Capacitance value:
For a load change of 25% a corresponding load voltage dip of 10% and a backup time of 10 ms is desired.
10% Dip in voltage:
25% change in load is:
This power should be supplied by the capacitor and thus will discharge it:
Making critical approximations, which we engineers are all so good at:
Capacitor voltage with 30% safety factor:
The load voltage drop of less than 10% is obtained for 10 msec for a load increase of 25%.
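The exact working appears in the images above, but the energy-balance reasoning can be reproduced in a short sketch (figures taken from the specs: 25% load step on a 120 V / 50 A load, 10% dip, 10 ms backup):

```python
def backup_capacitance(v_nom, dip_fraction, p_extra, t_backup):
    """Capacitance whose stored-energy drop covers p_extra for t_backup while
    the voltage sags from v_nom to (1 - dip) * v_nom:
    0.5 * C * (V1^2 - V2^2) = p_extra * t_backup."""
    v_low = (1.0 - dip_fraction) * v_nom
    energy = p_extra * t_backup
    return 2.0 * energy / (v_nom ** 2 - v_low ** 2)

p_extra = 0.25 * 120.0 * 50.0            # 25% load step -> 1500 W extra
C = backup_capacitance(120.0, 0.10, p_extra, 10e-3)
print(round(C * 1e3, 2))                 # about 10.96 mF
```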
The displays show the result for 400 V input; notice the 120 V output and 50 A load current.
Now comes the most elegant part of designing a buck converter, modelling a buck to understand and predict the performance in a closed-loop operation.
Like any linear control system, we first need to identify the input and the output. Here we have reduced the buck converter to a simple RLC circuit to check the response of the system for various input of duty cycle:
The transfer function model obtained for this open-loop system is as follows:
Now as per one’s convenience we can either go with root-locus analysis or with the frequency domain analysis.
We know from control theory that by obtaining the Bode plot of an open-loop system we can say a lot about the closed-loop operation of the system. We can comment on stability and relative stability, and with a little speculation we can also comment on the transient response!!
We might have dived into the depths of control theory, but we restrict ourselves to the buck only. Probably we will find some other fine day to do that.
Obtaining the bode-plot for above open-loop transfer function by running the following code in SCILAB:
From the Bode plot it can be directly concluded that the closed-loop system will be unstable, as the phase crossover frequency is less than the gain crossover frequency.
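For readers without Scilab at hand, the same Bode data can be computed from the standard small-signal buck transfer function G(s) = Vin / (LCs² + (L/R)s + 1). The component values below are assumptions for illustration, not read off the blog's plots:

```python
import cmath
import math

def buck_duty_to_output(s, v_in, L, C, R):
    """Small-signal duty-to-output transfer function of an ideal buck:
    G(s) = Vin / (L*C*s^2 + (L/R)*s + 1)."""
    return v_in / (L * C * s ** 2 + (L / R) * s + 1.0)

# Assumed values: Vin = 400 V, L = 252 uH, C = 2.2 mF, R = 120 V / 50 A = 2.4 ohm
for f in (10.0, 100.0, 1000.0, 10000.0):
    G = buck_duty_to_output(2j * math.pi * f, 400.0, 252e-6, 2.2e-3, 2.4)
    mag_db = 20.0 * math.log10(abs(G))
    phase_deg = math.degrees(cmath.phase(G))
    print(f"{f:8.0f} Hz: {mag_db:7.2f} dB, {phase_deg:8.2f} deg")
```

The DC gain is simply Vin (here 52 dB); the two crossover frequencies read off such a sweep are what the lag/lead compensation below acts on.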
Following the conventional steps, we first use a lag compensator to make the gain crossover frequency less than the phase crossover frequency.
The lag compensation, added around the gain crossover frequency of about 2000 Hz, is given by:
Bode-plot for the lag compensated system:
It is evident that the closed-loop version of this open-loop system will now be stable, but the margin of stability is small.
So we use a lead compensator to provide the required phase margin at the gain crossover frequency, i.e. around 2000 Hz.
TF for required lead compensation should be:
A well-compensated and stable system:
*If desired more lead compensation can be provided according to the design specs.
The final open-loop gain becomes (assuming unity feedback system):
Now op-amps can be used to realize these lag and lead compensators, and the duty-ratio generation could also be done with analog electronics. CEV asks for apologies for not doing that today.
The Last Words
Team CEV's purpose in posting technical blogs is to help out folks who have been completely or partially saddened by conventional ways of teaching and have been extremely demotivated to keep up their interest in this kind of stuff, which is otherwise so rich and interesting.
We are aware that the system has failed to boost and strengthen our interest in the subjects. One-seventh of humanity shall not be deprived of the fun and joy of falling in love with the subjects through no fault of their own.
This is simply not acceptable to CEV.
We are not here just to casually criticize things; rather, we understand the severity of the situation and quite boldly take ownership of undoing the damage, even by a fraction of a percent.
We believe that people, in light of their own personal insights, can put things out in a much more appealing and fascinating way, unlike the usual exam-focused, dull and dead description of things. We intend to rekindle the fire of curiosity and interest, and help keep the learning spirits of our generation of the student community real high.
The manufacturing sector has been on the ventilator for a long time…
Despite a market of 1.36 billion people, we import quite a large portion of the products employing moderate to high-level technology, from electronic toys to smartphones to the high-power induction motors of Indian Railways engines. We don't have any airliner manufacturing except HAL; we don't have chip manufacturing even though we are the land of the powerful Shakti microprocessors. How much sadness this fact brings home to us!
Consider Solar Cell & Module Manufacturing industry.
We have a small number of solar module manufacturers, who import solar cells (largely from China and Taiwan), paste them onto a tough polymer sheet, add some power electronics and meet a large part of India's solar needs.
We have even smaller solar cell capability: manufacturers who import wafers and own a few mega turnkey cell-manufacturing lines, mostly set up by European companies. You see, we have to be very precise in claiming what is ours and what is not.
We import 80% of our solar cells and solar modules, and have a domestic manufacturing capacity of only 3 GW for solar cells. (Source: livemint.com)
In this blog let us at least critically understand what goes into the making of a 21st-century solar cell, and try to figure out whether it is really so hard that we need to import end-tailored, billion-euro turnkey lines to get the solar industry flying.
For good assimilation of the content, one needs to be familiar with the solar cell. One might answer the following questions to get a temporary check-pass.
How does charge generation, charge separation and charge collection phenomenon occur in a solar cell?
What is meant by the short circuit current and open-circuit voltage of cell?
What is the difference between a solar cell and a solar module?
On what factors does the fill factor depend?
Notice the nature of these questions: they are descriptive and have straightforward answers.
We don't have full freedom here to ask any wild question. For example, one cannot ask what voltage a non-ideal voltmeter would measure across a photocell under no illumination, or whether current would flow through an external resistance connected to an unilluminated solar cell or a regular diode.
The reason is that from the engineering point of view we always study an abstract model of a solar cell or p-n junction. Physicists have very smartly built a layer over all the intricate things going on inside the cell; we don't care much about the exact phenomena inside the device, yet with the help of modified equations we can deduce engineering-relevant parameters like FF, Rsh, Rs, Isc, Voc, etc., and do clever things like MPPT.
Similarly, using our conventional theory one cannot explain the presence of intrinsic carriers at room temperature.
A pure silicon crystal has a bandgap of 1.12 eV; electrons, on the other hand, according to classical theory have a thermal energy of kT (i.e. 0.026 eV, or 26 meV). So intuitive physics would lead us to conclude that at room temperature there should be no electrons in the conduction band. Still, at 25 °C about 10^10 electrons per cubic cm are available in the conduction band of a pure silicon crystal, called the intrinsic carrier density.
Think for a second how would you explain this paradox?
All these questions, wild or sober, can surely be answered satisfactorily (by multiplying and integrating the density of states with the Fermi-Dirac probability distribution), but the point to highlight is that they really unfold the need for another kind of theory, and let us reveal that this is what the world knows as the quantum theory of matter.
Notice the power of our wild questioning: one correct question has simply enabled us to knock on the door of mighty quantum physics. What a pleasure to discover for ourselves the need for a new theory, the theory the world has been developing for the past 130 years.
On the other hand, if we think we are done with the p-n junction simply by being able to describe the formation of the depletion region and calculate the built-in voltage with a sweet formula, without a taste of the weirdness of quantum physics, then we should really reconsider our beliefs.
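The resolution of the paradox can even be checked numerically: only the exponential tail of the Fermi-Dirac distribution reaches across the gap, but with ~10^19 states per cm³ available it still yields ~10^10 carriers per cm³. A sketch using standard textbook values for silicon at 300 K (the effective densities of states below are assumptions, not from this blog):

```python
import math

def intrinsic_carrier_density(Nc, Nv, Eg_eV, T=300.0):
    """n_i = sqrt(Nc * Nv) * exp(-Eg / (2kT)): the Boltzmann tail across the
    gap is tiny, but multiplied by ~1e19 available states it is far from zero."""
    kT = 8.617e-5 * T  # Boltzmann constant in eV/K
    return math.sqrt(Nc * Nv) * math.exp(-Eg_eV / (2.0 * kT))

# Textbook effective densities of states for Si at 300 K (per cm^3)
ni = intrinsic_carrier_density(2.8e19, 1.04e19, 1.12)
print(f"{ni:.2e}")  # on the order of 1e10 per cm^3, despite kT << Eg
```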
This blog won't really be spitting out raw information throughout, as the flow might suggest; rather, it aims to induce self-questioning in readers and thus provoke them to discover for themselves the tight constraints that solar cell manufacturing poses at every stage.
Now, the first input is the silicon wafer. Wafer production takes a whole manufacturing industry in itself, with its own difficulties and its own reasons why India doesn't have it, so we will not dive deep; rather, we will just walk through it until the solar domain actually begins. You can even jump directly to Saw damage etch.
Silicon crystal falls broadly in two categories the monocrystalline silicon and polycrystalline silicon. Monocrystalline crystals contain continuous single crystal orientation.
A polycrystalline crystal, however, has much less regularity and many grain boundaries. The solar industry is always on its toes to minimize the cost per unit of energy produced, as its competitor is the outlet in our homes, so it can't afford a high-priced manufacturing technology at any stage.
Polycrystalline silicon is formed using the Siemens process, a faster and cheaper growth method compared to the Czochralski and float-zone processes used for monocrystalline silicon.
The next obvious step is sawing out the wafers; it is evident from the ingot structure that monocrystalline wafers will be circular and polycrystalline ones square. Slurry-based and diamond-based sawing are the two popular technologies, of which diamond-based sawing has become much more popular because it is faster and produces more yield, as less silicon dust is produced.
No matter which technique is used, the roughness of the surface is far more than acceptable for solar use (or the IC industry).
Pseudosquare shape to optimize the material requirement
Saw damage Etch
Enough of the peripheral walks, now we are entering the woods, from here we are entering the solar manufacturing.
To smooth out the scratches and remove the surface contaminants caused by sawing, the p-doped wafers are treated with a strong hot alkaline bath, like NaOH or KOH. We could also leverage the non-uniform surface to increase the probability of light entering the silicon, but this is avoided, as any deep crack has a chance of developing into a larger hairline fracture (silicon is brittle at room temperature), eventually breaking the cell.
The alkaline solution dissolves a 5-10 um thick layer from both sides, resulting in very fine surfaces and a p-type wafer about 170 um thick. Precise control of temperature, concentration and time is required in the bath for the desired outcome.
If the surface were perfectly smooth, reflected light would get no chance to strike the surface again. The greater the number of times light re-strikes the surface, the more chances it has to enter the bulk of the silicon. For an adequately rough surface, light reflected from the edges has more chances to enter the silicon.
Image courtesy: pv-manufacturing.org
The processes of saw-damage etching and texturing differ only in the concentration and temperature of the alkaline bath. A much lower alkaline concentration is observed to yield a pyramid-like structure on the silicon surface, which helps the cell greatly reduce the reflectivity of the surface.
Image courtesy: pv-manufacturing.org
A great amount of attention is given to tiny, tricky light-management techniques. Using the principles of optics, the solar cell is optimized to somehow get the maximum number of photons inside (or increase their path length inside the silicon). These include texturing, anti-reflection coating, back internal reflection, etc.; in fact, you would be surprised to know that some companies have even attempted to texture the surfaces of the fingers and busbars to divert the light falling on them towards the silicon.
The presence of an electric field is inevitable for charge separation once photons knock electrons out of Si atoms. Thus, next in line is the formation of the n-type region to develop a depletion region (p-n junction) inside the cell, which assists in charge separation.
The process is quite straightforward: heated POCl3 gas is held inside a chamber, and the correct temperature and vapour density are maintained to allow the phosphorus atoms to diffuse into the silicon base.
The trick is in deciding the doping density of the emitter layer and its thickness.
A high doping density is desirable for good contact (less metal contact resistance) and low lateral series resistance as charges move along the emitter. However, a higher doping density decreases the bandgap of silicon (at extreme doping the crystal begins to become highly irregular, shrinking the band-gap), so blue light (high-frequency radiation) is not absorbed well; recombination in the emitter (a type called Auger recombination) also increases, dragging down the open-circuit voltage of the cell and hence the performance.
Now think about the thickness of the emitter. Ideally, the emitter should be narrow, so that the time the wafer spends inside the gas chamber is short and the process is faster and cheaper.
But if it is narrow, there is a great chance that the metal will leach through it into the p-type base, directly shunting the two and leading to extremely poor-quality cells.
Notice that every piece of solar cell development is a tight problem of optimization.
We require two contrasting qualities of the emitter, narrow and lightly doped for good light response and low recombination, and deep and heavily doped for good contact and low series resistance.
Selective Emitter is quite a smart way to accommodate both of them.
A shallow, lightly doped emitter is formed first; then, by proper masking, deep, heavily doped contact regions are obtained.
This is one more way to increase the probability of light being absorbed in the solar cell. Using a silicon nitride coating, light is reflected back into the cell.
The process generally used is called PECVD (Plasma-Enhanced Chemical Vapor Deposition).
Image courtesy: pv-manufacturing.org
Silane (SiH4) and ammonia (NH3) are filled into a chamber and excited by high-frequency waves. Obeying the rules of chemistry, and with fine-tuning of the process, an extremely thin 70 nm layer of silicon nitride is formed above the emitter junction.
An added benefit is that the hydrogen released in the process bonds with dangling Si bonds, which otherwise would have led to increased recombination; this process of filling the holes is called passivation.
The way in which this anti-reflecting coating works is truly an elegant piece of physics.
They work on the principle of interference. We know that rays of monochromatic light can interfere depending on the (optical) distance travelled, as it causes a change in phase. The famous Michelson experiment produced constructive interference for path differences of λ, 2λ, 3λ, and destructive interference for λ/2, 3λ/2, 5λ/2, etc.
Magnified ARC layer
On similar lines, these 70 nm manage to produce destructive interference of the reflected waves, thus suppressing the reflection from the surface and constraining the entire intensity to be transmitted.
For normal incidence the light travels twice the thickness of the ARC, so for destructive interference the optical path length difference between the two reflected waves must be λ/2. Due to the decreased speed of light inside a higher-refractive-index material, the optical path length increases by a factor of n. Hence 2nd = λ/2, i.e. d = λ/(4n),
where n is the refractive index of the ARC and d its thickness.
Now, solar radiation is not monochromatic, hence we can never obtain destructive interference for all wavelengths with one thickness of ARC. Thus, the thickness is optimized for the wavelength at which the peak of solar radiation occurs, i.e. about 2.25 eV (550 nm). Given that silicon nitride has a refractive index of about 2, plugging in the numbers we get:
It is from here that we get the golden number of 70 nm, which is so popular in the solar cell industry.
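The quarter-wave arithmetic above is short enough to check directly:

```python
def arc_thickness(wavelength_nm, n):
    """Quarter-wave anti-reflection coating thickness: d = lambda / (4 * n)."""
    return wavelength_nm / (4.0 * n)

# Peak of the solar spectrum ~550 nm, silicon nitride refractive index ~2
print(round(arc_thickness(550.0, 2.0), 2))  # 68.75 nm -> the famous "70 nm"
```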
Front Contact printing
This is also one of the typical optimization problems in solar cell design.
For good ohmic contact (low contact resistance) the fingers must be wide, but to maximize the amount of light entering the cell the fingers must be as narrow as possible.
Finger spacing is an equally critical design parameter. Small finger spacing is desirable to keep the series resistance low, but it leads to a larger portion of the cell area being shadowed by the front contact; again, an engineering decision has to be made to optimize the net performance.
In fact, the optimization constraint occurs in one more dimension here: the height of the fingers. One would like increased height to increase the cross-section for the current, but this again is limited, as when the sun falls slantly, tall fingers would cast large shadows.
Same problem for the busbars too.
However, once the design is optimized, the printing is as easy as T-shirt printing: making a mask, applying the paste and then drying.
Generally, a silver-based paste is used for the purpose.
Back Contact Printing
The back contact seems simple at first sight, but like all solar cell matters it too poses optimization problems of its own. The solar cell is expected to operate over quite a large temperature range.
Silicon has a lower thermal expansion coefficient than metallic aluminium. If appropriate care is not taken with the thickness of the aluminium back, the difference in thermal coefficients might lead to intolerable bending of the cell, and even separation of the contacts in the extreme case.
A layer of aluminium is deposited on the back surface, with a thickness typically in the range of 30 um.
However, this Al layer has the added benefit of what is called the back-surface field (BSF). Some of the Al diffuses into the p-type base, making it p++ type. The direction of the field thus developed repels minority-carrier electrons away from the back surface, which also reduces recombination at the back.
Technically called post-deposition high-temperature annealing.
Notice that the front metal does not yet make electrical contact with the emitter. So, the cells are finally sent into a furnace of accurately controlled temperature. The heated silver etches through the tough 70 nm ARC and makes just the right contact with the emitter.
This process has to be very finely tuned: if the temperature is too low or the cell is kept in the furnace for too short a time, the contacts will not be firm, resulting in high series resistance. If the temperature is too high or the time too long, the molten silver will breach through the emitter to the base, directly shunting the device, giving rise to an extremely small shunt resistance and hence, again, a poor-performance device.
The General Conclusion:
One can conclude for oneself that the manufacturing of a solar cell is not as advanced as engineering quantum systems, like manipulating qubits, fusing atoms or replicating the human brain; it is an arena of extreme fine-tuning and very precise control of temperature, concentration and motion.
The Technical Conclusion:
The solar cell is the best example of a well-optimized system; in a real commercial scenario it takes into account 30+ parameters.
It is also a standing example that the little things in life matter, sometimes even more. Just as a team is only as fast as its slowest member, any engineering system is only as efficient as its least efficient component; nothing is to be considered trivial, irrelevant or less worthy of attention, and this applies equally to living and non-living systems.
Some cool websites to learn and understand the solar cell in greater depth:
“Do you have the courage to make every single possible mistake, before you get it all-right?”
**Featured image courtesy: Internet
THE PROJECT IN SHORT: What is this about?
The importance of analyzing harmonics has been stressed upon enough in the previous blog, Pollution in Power Systems.
So, we set out to design a system for real-time monitoring of the voltage and current waveforms associated with a typical non-linear load. Our aim was "to obtain the shape of the waveforms, plus apply some mathematical rigour to get the harmonic spectrum of the waveforms".
THE IDEA: How does it work?
Clearly, the real-time capability of any system comes from deploying intelligent microcontrollers to perform the tasks; and since this system also demanded an effective visualization setup, we linked the microcontroller with a desktop (interfacing aided by MATLAB). Together with MATLAB, we established a GUI platform to interact with the user and deliver the required results:
The shape of waveforms and defined parameters readings,
Harmonic spectrum in the frequency domain.
The voltage and current signals are first appropriately scaled down by different resistor configurations; these samples are then conditioned by the analog industry's workhorses, the op-amps, and fed into the ADC of the microcontroller (Arduino UNO) for digital discretization. These digital values are accessed by MATLAB, which applies mathematical techniques according to the commands entered by the user at the GUI to finally produce the required outcome on the PC screen.
ARDUINO and MATLAB INTERFACING: Boosting the Computation
The Arduino UNO is a microcontroller board with 32K of flash memory and 2K of SRAM, which limits the functionality of a larger system to some extent. Interfacing the microcontroller with a PC not only allows increased computational capability but, more importantly, provides an effective visual tool, the screen, to display the waveforms graphically, import data, save it for future reference, and so on.
TWO WAYS TO WORK: Simulink and the .m
The interfacing can be done in two modes: one is directly building simulation models in Simulink using blocks from the Arduino library, and the second is writing scripts (code in a .m file) in MATLAB, including a specific set of libraries for the given Arduino device (UNO, NANO, etc.).
Only the global variable "arduino" needs to be declared in the program, and the rest of the code is as usual. We used the second method, as it was more suitable for the type of mathematical operations we wanted to perform.
The first method could also be utilised by executing the required mathematical operations using the blocks available in the library.
Both of these methods of interfacing require the addition of two different libraries.
THE GUI: User friendly
Using an Arduino interfaced with a PC also gives the advantage of a user-interactive analyzer. Sometimes the visual graphic of the waveform distortion is important, and sometimes the information in the frequency domain is of utmost concern. Using the GUI platform provided by MATLAB to let the user select adds greatly to the flexibility of the analyzer.
The GUI platform appears like this upon running the program.
MATLAB gives you a very user-friendly environment to build such a useful GUI. Type guide in the command window, select the blank GUI, and you are ready to go.
Moreover, you can follow this short 8-minute tutorial for an introduction, by the official MATLAB YouTube channel:
Once the GUI is designed and saved, a corresponding m-file is automatically generated by MATLAB. This m-file contains well-structured code as well as illustrative comments showing how to program further. The GUI is now ready to be impregnated with the pumping heart of the project, the real code.
The very first task is to start collecting data-points flushing-in from the ADC of the microcontroller and save it in an array for future reproduction in the program. This should be executed upon the user pressing the START button at the GUI.
Since we have shifted our whole signal waveform by 2.5 V so we have to continuously check for 127 level which is actually the zero-crossing point, and then only start collecting data.
% --- Executes on button press in start.
function start_Callback(hObject, eventdata, handles)
% hObject handle to start (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
V = zeros(1,201);              % sampled voltage values
time = zeros(1,201);           % corresponding time stamps
vstart = 0;
while vstart == 0
    value = readVoltage(ard, 'A1');     % readVoltage returns volts (0-5 V)
    if value > 2.45 && value < 2.55     % 2.5 V offset level (ADC count ~127) = zero crossing
        vstart = 1;
        for n = 1:201
            value = readVoltage(ard, 'A1');
            value = value - 2.5;        % remove the 2.5 V offset
            V(n) = value;
            time(n) = (n-1)*0.0001;     % 0.1 ms per sample
        end
    end
end
The data points saved in the array now need to be presented in a way that makes sense to the user, i.e., plotted graphically.
Algorithm: ISSUES STILL UNRESOLVED!!!
As mentioned previously, we aimed to obtain the frequency-domain analysis of the waveform of concern. The previous blog presented the mathematical formulation required to do so.
Algorithm: Refer to blog Pollution in power systems
% --- Executes on button press in frequencydomain.
function frequencydomain_Callback(hObject, eventdata, handles)
% hObject handle to frequencydomain (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
%Ns = no of samples per cycle
%a  = coefficients of the cosine terms
%b  = coefficients of the sine terms
%A  = amplitudes of the harmonic terms
%ph = phase angles of the harmonic terms wrt the fundamental
n  = 9;      % no of harmonics required
Ns = 200;    % samples per cycle (201 points, both end-points included)
a = zeros(1,n);  b = zeros(1,n);
for i = 1:n
    for j = 1:Ns+1
        w = 1;
        if j == 1 || j == Ns+1   % trapezoidal rule: half-weight at the end-points
            w = 0.5;
        end
        a(i) = a(i) + w*V(j)*cos(2*pi*(j-1)*i/Ns);
        b(i) = b(i) + w*V(j)*sin(2*pi*(j-1)*i/Ns);
    end
    a(i) = 2*a(i)/Ns;  b(i) = 2*b(i)/Ns;
end
A  = sqrt(a.^2 + b.^2);   % harmonic amplitudes
ph = atan2(b, a);         % phase wrt fundamental
x  = 1:1:n;               % harmonic orders, for plotting
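The same trapezoidal sums are easy to sanity-check on a known waveform in any scripting language. Here is a minimal Python sketch of the identical arithmetic, run on one cycle of a pure unit-amplitude sine (the variable names are ours):

```python
import math

Ns = 200  # samples per cycle, matching the m-file (201 points, end-points included)
V = [math.sin(2 * math.pi * j / Ns) for j in range(Ns + 1)]  # one cycle of a unit sine

def fourier_coeffs(V, Ns, n):
    """Trapezoidal-rule Fourier coefficients, the same sums as the m-file callback."""
    a, b = [0.0] * n, [0.0] * n
    for i in range(1, n + 1):
        for j in range(Ns + 1):
            w = 0.5 if j in (0, Ns) else 1.0  # half-weight the end-points
            a[i - 1] += w * V[j] * math.cos(2 * math.pi * j * i / Ns)
            b[i - 1] += w * V[j] * math.sin(2 * math.pi * j * i / Ns)
        a[i - 1] *= 2 / Ns
        b[i - 1] *= 2 / Ns
    return a, b

a, b = fourier_coeffs(V, Ns, 9)
# For a pure sine, only the fundamental sine coefficient survives: b[0] is ~1.
```

If the coefficients of a known input do not come out as expected, the bug is in the sums, not in the hardware.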
CIRCUIT DESIGNING: The Analog Part
This section appears quite late in the documentation, but ironically it is the first stage of the system. As we saw in the power module, the signal input to the ADC of the microcontroller has two constraints:
The peak-to-peak signal magnitude should be within 5 V.
The voltage signal must always be positive with respect to the reference.
To meet the first constraint, we used a step-down transformer and a voltage-divider resistance branch of suitable values to get a 5 V peak-to-peak sinusoidal voltage waveform.
Now, in AC systems, current and voltage waveforms obviously go negative with respect to the reference.
Think for a second, how to shift this whole cycle above the x-axis.
To meet the second constraint, we used an op-amp in a clamping configuration to obtain a voltage-clamping circuit. We chose op-amps for their excellent operational qualities, such as accuracy and simplicity.
Voltage clamping using op-amps:
The circuit overall layout:
IMP NOTE: While taking signals from a voltage divider, always ensure that no significant current is drawn from the sampling point, as it will disturb the effective resistance of the branch and the required voltage division won't be obtained. Always use an op-amp in voltage-follower configuration to take samples from a voltage divider.
Now, it is always preferable to first model and simulate your circuit and confirm the results, to check for any potentially fatal loopholes. It saves time in correcting errors and saves components from blowing up during testing.
Modelling and simulation become even more important for larger and relatively complicated systems, like alternators, transmission lines and other power systems, where you simply cannot afford hit-and-trial methods to rectify issues. Hence, having an upper hand in the skill of modelling and simulation is of great importance in engineering.
For an analog system like this, MATLAB is perfect. (We found that Proteus did not show correct results here; however, it is best suited for simulating microcontroller-based circuits.)
The simulation results confirm a 5 V peak-to-peak signal clamped at 2.5 V.
The real circuit under test:
Case of Emergency:
Sometimes we find ourselves in desperate need of some IC that we don't have. At such times, our ability to observe might help us procure one. Our surroundings are littered with ICs of all types, and the op-amp is one of the most common: sensors of all kinds use an op-amp to amplify signals to the required levels. These ICs can be extracted from a board by de-soldering with a soldering iron. If that doesn't seem possible, use whatever gets you the results. In the power-module project we managed to reach the three terminals of one op-amp on an IR-sensor chip; here we required two op-amps.
First, trace the circuit diagram of the chip by referring to the terminals in the datasheet; you can cross-check all connections using a multimeter in continuity-check mode. Then use all sorts of techniques to somehow obtain the desired connections.
Many times, circuits require different reference-voltage levels, like 3.3 V, 4.5 V, etc.; here we require 2.5 V.
One can build a reference voltage using:
a resistive voltage divider (with an op-amp in voltage-follower configuration),
an op-amp configured to give the required gain to any available source voltage,
a variable voltage supply, such as the one we built in the rectifier project using the LM317.
For program testing, we required different typical waveforms, like square and triangle waves. These waveforms can be obtained in two different ways: the analog way and the digital way.
The Analog Way
Op-amps again come to our rescue. Op-amps, when accompanied by resistors, capacitors and inductors, seemingly provide all sorts of functionality in the analog domain: summing, subtracting, integrating, differentiating, voltage sources, current sources, level shifting, etc.
Using Texas Instruments' handbook on op-amps, we obtained the circuit for triangle-wave generation shown below:
The Digital Way
Another interesting way to obtain all sorts of desired waveforms is by harnessing a microcontroller. One can vary the voltage levels, frequency and other waveform parameters directly in the code.
Here we used two Arduinos: a stand-alone Arduino 1 programmed to generate a square wave, and an Arduino 2 interfaced with MATLAB to check the results.
We have already stated the importance of simulation.
So, here, for simulating the Arduino we used Proteus 8.
The code is written in the Arduino IDE, compiled, and the HEX file is burnt into the model in Proteus.
The results displayed by MATLAB:
To generate waveforms other than the square type, one thing that has to be considered is the PWM mode of operation of the digital pins. Of the 14 digital pins on the UNO, six (3, 5, 6, 9, 10 and 11) can generate PWM.
At 100% duty cycle 5 V is generated at the output terminal.
digitalWrite(PIN, HIGH): this line effectively gives a PWM of 100% duty cycle, whose DC value is 5 V.
So, by changing the duty cycle of the PWM we can obtain any average level between 0 and 5 V.
analogWrite(PIN, value): this line generates a PWM of any duty ratio (the value argument runs from 0 to 255, i.e., 0-100%), hence any desired average voltage level on a PWM pin.
analogWrite(3, 127): gives an average output of about 2.5 V at PWM pin 3.
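Note that `analogWrite` on the UNO takes a value from 0 to 255 rather than a percentage. A small Python sketch of the voltage-to-count arithmetic (the helper name is our own invention):

```python
def analog_write_value(volts, vcc=5.0):
    """Map a desired average output voltage to the 0-255 argument of analogWrite."""
    if not 0 <= volts <= vcc:
        raise ValueError("voltage out of range")
    return round(volts / vcc * 255)

# 2.5 V maps to count 128; the often-quoted 127 actually gives ~2.49 V average.
```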
Moreover, timer functionalities can be utilized for a triangle wave generation.
It is very saddening for us not to be able to finally check our results and to have to terminate the project at 75% completion, due to unavoidable circumstances created by this COVID thing.
THE RESOURCES: How you can do it too?
List of the important resources referred in this project:
Op-amp cookbook: Handbook of Operational Amplifier Applications, Texas Instruments
THE CONCLUSIONS: Very Important take-away
If we (you and us) desire to take on a venture into the unknown, something never done before, and plan to do it all alone, then trust our words: failure is sure. It gets tough when we get stuck somewhere, and from there it only gets tougher.
We all have to find people who share our vision, share some interests, and with whom we love to work. We all must be part of a team; otherwise life will be neither easy nor pleasant. There is a great possibility of coming out a winner if we get into it as a team, and even if the team fails, at least we don't come out frustrated.
Each member brings their own special talent to contribute to the common aim: the ability to write code, to do the math, to simulate, to interpret results, to work on theory or on intuition, etc. Good teamwork is the recipe for building great things that work.
So, we conclude from the project that teamwork was the most crucial reason for the 75% completion of this venture, and we look forward to making it 100% as soon as possible.
Harmonics Generation: Typical Sources of harmonics
If we lived in an ideal world, we would have all honest people, no global issues of Corona and the climate crisis, gas particles with negligible volume (the ideal-gas equation), etc., and, in particular, power systems would carry only sinusoidal voltage and current waveforms. 😅😅
But in this real, beautiful world we have a bunch of dear dishonest people; thousands die of epidemics; the globe keeps getting hotter; gas particles do have volume; and, similarly, a pure sinusoidal waveform is a luxury, an inconceivable feat in any large power system.
We have tried to start from the very beginning, so a strong will to understand is enough; still, we suggest you go through the power-quality blog once, as it will help develop some important insights.
Now, why are we talking about the shape of waveforms? You will figure it out on your own by the end; for now, let us just tell you that the non-sinusoidal nature of a waveform is considered pollution in an electrical power system, with effects ranging from overheating to the whole system ending up in large catastrophes.
Non-sinusoidal waveforms of currents or voltages are polluted waveforms.
But how can it be possible that the voltage applied across a load is sinusoidal while the current drawn is non-sinusoidal?
Hint: V= IZ
Yes, it is only possible if the impedance plays some tricks. So, the very first conclusion about systems that create electrical pollution is that their impedance is not constant over one time period of the voltage cycle applied across them; hence they draw non-sinusoidal currents from the source. These systems are called non-linear loads or elements. Like this most popular guy:
Note that inductive and capacitive impedances are frequency-dependent but remain fixed over a voltage cycle at a fixed frequency; that is why resistors, inductors and capacitors are linear loads. In this modern 21st century the power system is cursed to be literally littered with these non-linear loads, and it is estimated that in the next 10-15 years 60% of the total load will be of the non-linear type (the aftermath of COVID-19 not considered).
The list of non-linear loads includes almost all the loads you see around you: gadgets such as computers, TVs, music systems and LEDs, battery-charging systems, ACs, refrigerators, fluorescent tubes, arc furnaces, etc. Look at the following waveforms of current drawn by some common devices:
Typical inverter Air-Conditioner current waveform (235.14 V, 1.871 A)
Source: Research Gate
Typical Fluorescent lamp
Typical 10W LED bulb
Source: Research Gate
Typical battery charging system
Source: Research Gate
Source: Research Gate
Typical Arc furnace current waveform
Name any modern device (microwave oven, washing machine, BLDC fan, etc.) and its current waveform is severely offbeat from the desired sine type; given the number of such devices, electrical pollution becomes a grave issue for any power system. Pollution in electrical power systems is not a phenomenon of the 21st century: electrical engineers struggled to check non-sinusoidal waveforms throughout the 20th century, and one can find descriptions of the phenomenon as early as 1916 in Steinmetz's ground-breaking research paper "Study of Harmonics in Three-Phase Power Systems". However, the sources and causes of power pollution have been ever-changing since then. In the early days transformers were the major polluting devices; now 21st-century gadgets have taken up that role, but the consequences have remained disastrous.
WAIT, WAIT, WAIT…. What’s that “Harmonics”?
Before we even introduce harmonics, let us apply some mathematical rigor to analyzing the typical non-sinusoidal waveforms we encounter in the power system.
In the blog on Fourier series, we were confronted with one of the most fundamental laws of nature:
Any continuous, well-defined periodic function f(x) whose period is (a, a+2c) can be expressed as a sum of sine, cosine and constant components. We call this great universal truth the Fourier expansion; mathematically:
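Written out, for a function of period 2c on (a, a+2c), the standard statement is:

```latex
f(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n \cos\frac{n\pi x}{c} + b_n \sin\frac{n\pi x}{c}\right],
```
```latex
a_0 = \frac{1}{2c}\int_{a}^{a+2c} f(x)\,dx,\qquad
a_n = \frac{1}{c}\int_{a}^{a+2c} f(x)\cos\frac{n\pi x}{c}\,dx,\qquad
b_n = \frac{1}{c}\int_{a}^{a+2c} f(x)\sin\frac{n\pi x}{c}\,dx.
```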
Square-wave, the output of the inverter circuits:
For all even n:
For all odd n:
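For reference, the textbook coefficients for a square wave of amplitude A (+A on the first half-cycle, -A on the second) are:

```latex
a_0 = 0,\qquad a_n = 0,\qquad
b_n = \begin{cases} 0, & n \text{ even}\\[4pt] \dfrac{4A}{n\pi}, & n \text{ odd}\end{cases}
\quad\Longrightarrow\quad
f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{\sin 3\omega t}{3} + \frac{\sin 5\omega t}{5} + \cdots\right).
```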
Hold the outline of this result in mind for a few minutes:
We will draw some very striking conclusions.
Now consider a triangular wave:
The function can be described as:
Calculating Fourier coefficients:
Which again simplifies to zero.
So, we have-
Applying the integration for each interval and putting the limits:
For even n,
For odd n,
For even n:
Are these equations kidding us???
For odd n:
So finally, summary of result for the triangle waveform case is as follows:
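For reference, the textbook summary for a triangle wave of peak A with odd symmetry is:

```latex
a_0 = 0,\qquad a_n = 0,\qquad
b_n = \begin{cases} 0, & n \text{ even}\\[4pt] \dfrac{8A}{n^2\pi^2}(-1)^{(n-1)/2}, & n \text{ odd}\end{cases}
\quad\Longrightarrow\quad
f(t) = \frac{8A}{\pi^2}\left(\sin\omega t - \frac{\sin 3\omega t}{9} + \frac{\sin 5\omega t}{25} - \cdots\right).
```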
Did you notice that if these two waveforms were traced on the negative side of the time axis, they would satisfy f(-t) = -f(t)?
This property of a waveform is called odd symmetry. Since the sine wave has this same fundamental property, only sine components are found in the expansion.
Now consider this waveform:
Unlike the previous two cases, if the negative side of this waveform is traced, it satisfies f(-t) = f(t).
This is identified as the even symmetry of a waveform. So which components do you expect, sine or cosine???
The function can be described as:
For the cos components:
This equation reduces to:
For the sine components:
This equation reduces to zero for all even and odd n.
Well we have guessed it already🤠🤠.
Summary of coefficients for a triangle waveform, which follows even symmetry is as follows:
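For reference, the textbook coefficients for the even-symmetric triangle wave of peak A (zero mean) are:

```latex
a_0 = 0,\qquad b_n = 0,\qquad
a_n = \begin{cases} 0, & n \text{ even}\\[4pt] \dfrac{8A}{n^2\pi^2}, & n \text{ odd}\end{cases}
\quad\Longrightarrow\quad
f(t) = \frac{8A}{\pi^2}\left(\cos\omega t + \frac{\cos 3\omega t}{9} + \frac{\cos 5\omega t}{25} + \cdots\right).
```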
Very useful conclusions:
a0 = 0: for all waveforms that enclose equal areas with the x-axis under the negative and positive half-cycles. This happens because the constant component is simply the algebraic sum of these two areas.
an = 0: for all waveforms with odd symmetry. Cosine is an even-symmetric function; it simply cannot be a component of a function which is odd-symmetric.
bn = 0: for all waveforms with even symmetry. By the same logic, the sine function, which is itself odd-symmetric, cannot be a component of an even-symmetric waveform.
The fourth, very critical conclusion can be drawn for waveforms which satisfy f(t + T/2) = -f(t),
where T is the time period of the waveform.
For such waveforms the even-ordered harmonics are absent; only odd orders appear. This property is identified as half-wave symmetry, and it is present in most power-system signals.
Now, these conclusions are applicable to numerous current waveforms in the power system. Most of the devices we began with seem to follow the above properties: they are all half-wave symmetric and either odd- or even-symmetric. These conclusions greatly simplify the formulation of the Fourier series for power-system waveforms.
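The half-wave-symmetry conclusion is easy to verify numerically: construct a waveform satisfying f(t + T/2) = -f(t) and check that its even-order sine coefficients vanish. A minimal Python sketch with a made-up test waveform (a fundamental plus a 30% third harmonic):

```python
import math

Ns = 400  # samples over one full period T
# f(t) = sin(wt) + 0.3*sin(3wt) satisfies f(t + T/2) = -f(t): half-wave symmetric
f = [math.sin(2 * math.pi * j / Ns) + 0.3 * math.sin(6 * math.pi * j / Ns)
     for j in range(Ns)]

def bn(f, Ns, n):
    """Sine Fourier coefficient of order n (rectangle rule over one period)."""
    return 2 / Ns * sum(f[j] * math.sin(2 * math.pi * n * j / Ns) for j in range(Ns))

coeffs = [bn(f, Ns, n) for n in range(1, 7)]  # orders 1..6
# Even orders (2, 4, 6) vanish; odd orders 1 and 3 recover the amplitudes 1 and 0.3.
```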
So, consider a typical firing-angle current:
Now apply the conclusions drawn above to this case: here the waveform has no half-wave symmetry but is odd-symmetric.
We hope you enjoyed using this greatest of mathematical tools and were amazed to break intricate waveforms into fundamental sines and cosines.
“Like matter is made up of fundamental units called atoms, any periodic waveform consists of fundamental sine and cosine components.”
It is these components of any waveform, which we call in electrical engineering language the Harmonics.
The Mathematics gives you cheat codes to understand and analyze the harmonics. It just simply opens up the whole picture to very minute details.
So, what are we going to do now, after calculating the components, the harmonics?
First of all, we need to quantify how much harmonic content is present in the waveform. The quantity coined for this purpose is the total harmonic distortion:
THD, total harmonic distortion:
It is a self-explanatory ratio: the RMS of all the harmonics taken together, divided by the RMS value of the fundamental.
Now, since the harmonics are just sine or cosine waves, the RMS of the nth harmonic of amplitude A_n is simply A_n/√2;
by the same definition, the RMS of the fundamental is A_1/√2.
So, THD is:
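In symbols, with A_1 the fundamental amplitude and A_n the nth-harmonic amplitude (the 1/√2 RMS factors cancel):

```latex
\mathrm{THD} = \frac{\sqrt{\sum_{n=2}^{\infty}\left(A_n/\sqrt{2}\right)^2}}{A_1/\sqrt{2}}
             = \frac{\sqrt{A_2^2 + A_3^2 + A_4^2 + \cdots}}{A_1}.
```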
The next thing we are concerned about is power. So, we need to find the impact of harmonics on power transferred.
Power and the Power Factor
Power and power factor are so intimately related that it becomes impossible to talk about power without talking about power factor.
So, the conventional power factor for any load (linear or non-linear) is defined as the ratio of active power to apparent power. It is basically an indicator of how well the load utilizes the current it draws; this is consistent with the statement that a high-pf load draws less current for the same real power developed.
1. Active power is the average of instantaneous power over a cycle.
Assuming the current and voltage are sinusoidal with a phase difference of θ, the integration simplifies to P = V·I·cos θ (RMS values).
2. Apparent power is, as its name suggests, simply the V·I product; since the quantities are AC, RMS values are used.
The pf becomes cos(theta), only when waveforms are sinusoidal.
NOTE: The assumption must be kept in mind.
So, what happens when the waveforms are contaminated by harmonics:
There are many theories for defining power when harmonics are considered. The advanced ones are very accurate; the older ones are approximate but equally insightful.
Let the RMS values of the fundamental (first), second, ..., nth components of the voltage and current waveforms be V1, V2, ..., Vn and I1, I2, ..., In respectively.
The most accepted theory starts from the instantaneous power p(t) = v(t)·i(t).
Expanding and integrating over a cycle cancels all the cross-product sine and cosine terms of unlike order, and the expression reduces to:
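The standard reduced form, with V_n, I_n the RMS values and φ_n the angle between the nth voltage and current harmonics, is:

```latex
P = \frac{1}{T}\int_0^T v(t)\,i(t)\,dt = \sum_{n=1}^{\infty} V_n I_n \cos\phi_n.
```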
Apparent power remains, mathematically, the product of the two RMS values.
Including the definitions of THD for voltage and current, the equation modifies to:
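Substituting the RMS expressions and the THD definitions gives the standard form:

```latex
S = V_{\mathrm{rms}} I_{\mathrm{rms}}
  = \sqrt{\textstyle\sum_n V_n^2}\,\sqrt{\textstyle\sum_n I_n^2}
  = V_1 I_1 \sqrt{1+\mathrm{THD}_V^2}\,\sqrt{1+\mathrm{THD}_I^2}.
```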
Now, this theory uses some important assumptions to simplify the results, and they are quite reasonable in particular cases.
1. Harmonics contribute negligibly to the active power, so neglecting the higher-order terms, P ≈ V1·I1·cos φ1.
2. For most devices the terminal voltage does not suffer very high distortion even though the current may be severely distorted (more on this in the next section), so THD_V ≈ 0 and V_rms ≈ V1.
WHAT’S THE CONCLUSION?
The power factor of a non-linear load depends on two factors: one is cos φ (the displacement factor) and the other is the current distortion factor.
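In symbols, under the two assumptions above, the two factors multiply (a standard result):

```latex
\mathrm{pf} = \frac{P}{S}
 \approx \frac{V_1 I_1 \cos\phi_1}{V_1 I_1 \sqrt{1+\mathrm{THD}_I^2}}
 = \cos\phi_1 \cdot \frac{1}{\sqrt{1+\mathrm{THD}_I^2}}.
```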
If we wish to draw less current, we need a high overall power factor. Once the cos φ component is maximized to one, the distorted current sets the upper limit on the true power factor. The following data, accessed via sciencedirect.com, will help you visualize how significant current distortion is.
Notice the awful THD of these devices; clearly, it severely reduces the overall pf.
However, these dinky-pinky household electronic devices have low power ratings, so the current they draw is not very significant; had they been high-powered, it would have been a disaster for us.
NOTE: For most of the devices listed above, the assumptions are solidly valid.
Are you thinking of adding a shunt capacitor across your laptop or other electronic gadgets to improve the power factor and get lower electricity bills? For god's sake, don't ever try it: your capacitor will be blown to bits. Later we will understand why!!!
Through a phenomenon called harmonic resonance between the system and the capacitor banks, these harmonics amplify horribly. Numerous industrial catastrophes have occurred, and continue to happen, because people ignore harmonic resonance.
Our Prof. Rakesh Maurya was involved in solving one such capacitor-bank burn-out issue with an Adjustable Speed Drive (ASD) at L&T.
Harmonics Generation: Typical Sources of harmonics
Most of the time in electrical engineering, transformers and motors are not visualized as:
Instead, it is preferred to see transformers and electrical motors like this, respectively:
These diagrams are called equivalent circuits; they are abstractions developed to let us calculate power flow without considering many unnecessary minute details.
The souls of these models rest on assumptions which let us ignore those minute details, simplify our lives, and still give results with acceptable error.
Try to recall those assumptions we learned in our classrooms.
The reasons for harmonics generation by these beasts lie in those minute details.
It is only under the assumption of "no saturation" that a sinusoidal voltage applied across the primary gives us a sinusoidal voltage at the secondary.
Sinusoidal Pri. Voltage >>> Sinusoidal Current >>> Sinusoidal Flux >>> Sinusoidal Induced Sec. EMF
With advancements in material science, special core materials are now available that rarely saturate; but the older, conventional cores saturated often and were observed to generate mainly 3rd harmonics.
Details right now are beyond our team’s mental capacity to comprehend.
From the standpoint of the cute equivalent circuit, the electrical motor seems so innocent: a simple RL load, certainly not capable of introducing any harmonics. But, as stated, this abstraction is a mere approximation meant to obtain performance characteristics as quickly and reliably as possible.
Remember, while deriving the air-gap flux density it was assumed that the spatial distribution of MMF due to the balanced winding is sinusoidal; more accurately it is trapezoidal, and only the fundamental was considered. Due to this and many other imperfections, the motor is observed to produce largely 5th harmonics.
NOTE: Third harmonics and its multiples are completely absent in three-phase IMs. Refer notes.
Disgusting, they don’t need any explanation. 😏😏😏
The most common, though least impactful, effect of power harmonics is increased power loss, leading to heating and decreased efficiency of the non-linear devices that cause them; later we will learn that it affects linear devices connected to the synchronous grid too.
The Skin Effect:
Lenz's law states that a conducting loop/coil always opposes the change in the magnetic flux linked with it, by inducing an EMF which drives a current.
Consider a rectangular two-wire system representing a transmission line, with wires of circular cross-section carrying a DC current I.
Now one loop is quite obviously visible: the big rectangular one. The opposition to the change in magnetic field linked by this loop gives us the transmission-line inductance.
NOTE: THE INDUCTANCE AND LOOPS OF CURRENT ARE FACET OF SAME COIN, ONE LEADS TO ANOTHER. Think about it!!!!
At frequencies relatively higher than the 50 Hz power frequency, another kind of current loop begins to magnify. As we said, this causes another type of inductance.
Look closely: the magnetic field inside the conducting wire is also changing. As a result, loops of current called eddy currents are set up inside the conductor itself, which leads to some dramatic effects.
EDDY CURRENTS ARE SIMPLY A MANIFESTATION OF LENZ'S LAW: THE RESPONSE OF A CONDUCTING MATERIAL TO A CHANGING MAGNETIC FIELD.
Consider two current elements dx, one at distance r and one at distance R from the center. Which element faces greater opposition from the eddy currents, given their changing nature?
Yes, true: the element lying closer to the center, as more loop area is available there for the eddy currents. This difference in opposition for different elements causes the current distribution inside the conductor to shift towards the surface, where the eddy-current opposition is least.
A technical account of this skin effect goes as follows:
the flux linked by the current flowing in the central region is more than that linked by current elements in the outer region of the cross-section;
larger flux linkage leads to a higher reactance in the central area than at the periphery;
hence the current chooses the path of least impedance, the surface region.
The eddy-current phenomenon is quite prevalent in AC systems. Since AC systems are bound to have changing magnetic fields, eddy currents are induced everywhere: in conductors, in transformer cores, in motor stators, and so on.
Now, when higher-frequency harmonic components are present in the current, the skin effect becomes greatly magnified: most of the current takes the surface path, as if the central region were unavailable. This is equivalent to a reduced cross-section, i.e., an increased resistance, and hence magnified Joule heating (I²R). Thus heating increases considerably due to these layer-upon-layer effects (one leads to another).
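To put rough numbers on this, the standard skin-depth formula δ = √(2ρ/(ωμ)) shows how the effective conducting layer shrinks as the harmonic order rises; a minimal Python sketch using textbook values for copper:

```python
import math

RHO_CU = 1.68e-8          # resistivity of copper at 20 C, ohm-metre (textbook value)
MU = 4 * math.pi * 1e-7   # magnetic permeability (copper is close to free space), H/m

def skin_depth(f_hz, rho=RHO_CU, mu=MU):
    """Skin depth delta = sqrt(2*rho / (omega*mu)), in metres."""
    return math.sqrt(2 * rho / (2 * math.pi * f_hz * mu))

d50 = skin_depth(50)      # ~9.2 mm at the 50 Hz fundamental
d450 = skin_depth(450)    # 9th harmonic: depth shrinks by sqrt(9) = 3
```

So a 9th-harmonic current effectively sees a conducting layer three times thinner than the fundamental does, which is exactly the increased-resistance, increased-heating mechanism described above.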
Other grave effects include false tripping and unexplained failures due to the mysterious harmonic resonance.
All of this motivated us to build our own harmonic analyzer; follow along in the next blog.