LogEagle

Reading Time: 3 minutes

This blog is about creating an app called Log Eagle that monitors different kinds of web services and catches errors in production.

There are already several monitoring services, but the goal of this app is to create a highly scalable and flexible service that is easy to deploy. The backend is written in Go, which is a powerful tool for these types of applications, and the frontend is written in React.


To get started, there needs to be at least one service. A service can be anything: an Express server, a mobile app, or your frontend. All such services belong to an organization. Admins of that organization can invite additional users to the organization and remove them. There are also adapters that can be installed on a service to automatically catch and report errors. You also have the flexibility to write your own adapter and error-reporting logic in whatever language you prefer.
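As a rough illustration of what a custom adapter does (a sketch in Python; the endpoint URL, field names and ticket value are all hypothetical, the real ones come from the Log Eagle repositories), an adapter essentially builds a JSON event tagged with the service's ticket and POSTs it to the backend:

```python
import json

# Hypothetical ingest endpoint -- the real URL comes from your deployment.
INGEST_URL = "https://logeagle.example.com/api/events"

def build_event(ticket, message, stack, extra=None):
    """Build the JSON payload a custom adapter would POST to the service.

    `ticket` identifies the service the event belongs to; `extra` carries
    custom information, such as the name of the cluster.
    """
    event = {
        "ticket": ticket,
        "message": message,
        "stack": stack,
        "adapter": {"name": "my-custom-adapter", "version": "0.1.0"},
    }
    if extra:
        event["extra"] = extra
    return json.dumps(event)

# An adapter would send this payload with an HTTP POST (e.g. urllib.request).
payload = build_event("abc123", "TypeError: x is undefined",
                      "at handler (app.js:42)", extra={"cluster": "eu-west"})
```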


When you select a service, it shows all the errors that were reported by that particular service, along with a couple of details. Each service has a so-called ticket, which is used to assign a reported event to the service.


You can click on an error for further insights. It is also possible to add custom information to the event, which can be handy if you, for example, run your service in different clusters and want to add the name of the cluster to the event.


It will also give you detailed information about when and how often the error was reported.


Besides the raw stack trace, it will show clearly where the error occurred. Furthermore, you will be able to see previous console logs and information about the adapter.

If you want to check it out, you can create your own organization on a demo instance deployed here and give it a try. Currently, there is a Node.js adapter available. In the repositories, you will find information on how to create an adapter in any other language or how to host the service yourself.

NANOMATERIALS

Reading Time: 10 minutes

INTRODUCTION

Nano means one billionth, i.e. 10^-9 in scientific notation. Have you ever thought about how small that is? Average human height is around 1.5-2m, ants are about 2mm long, the diameter of a human hair is around 100µm, and our DNA is around 2nm wide, which is about a billion times smaller than the average human height. To imagine how small one billionth is, let's go the other way and ask how big an object would be if it were a billion times larger than a human. The diameter of the Sun is about a billion times the height of a human. That's pretty big. So our DNA is as small compared to us as we are compared to the Sun.
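The scale comparisons above can be sanity-checked with a few lines of arithmetic (approximate values, in Python for illustration):

```python
# Quick sanity check of the scale comparisons (approximate values).
human_height = 1.7        # m
dna_diameter = 2e-9       # m  (~2 nm)
sun_diameter = 1.39e9     # m

dna_ratio = human_height / dna_diameter   # ~8.5e8, on the order of a billion
sun_ratio = sun_diameter / human_height   # ~8.2e8, also on the order of a billion
```

Both ratios land within a factor of two of one billion, which is the point of the comparison.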

What are nanomaterials? Why are they important? Where are they used? Let's dive into the world of smallness!!!

Nanomaterials include a broad class of materials which have at least one dimension less than 100nm. Depending on their shape, they can be 0-D, 1-D, 2-D or 3-D. You may be wondering what such a small piece of material can do. Nanomaterials have an extensive range of applications. The importance of these materials was realized when it was found that size can influence the physicochemical properties of a substance. Nanoparticles have biomedical, environmental, agricultural and industrial applications.

Nanoparticles are composed of three layers:

  • The Surface Layer- It may be functionalized with a variety of small molecules, metal ions, surfactants and polymers.

  • The Shell Layer- It is a chemically different material from the core in all aspects.

  • The Core- It is the central portion of the nanoparticle and is usually referred to as the nanoparticle itself.

These materials got immense interest from researchers in multidisciplinary fields due to their exceptional characteristics.

CLASSIFICATION OF NANOPARTICLES

Based on their physical and chemical characteristics, some of the well-known classes of NPs are:

  1. CARBON-BASED NPs

  • FULLERENES- These are nanomaterials made of globular, hollow cages, allotropic forms of carbon. They have notable properties such as electrical conductivity, high strength, structure, electron affinity and versatility. They possess pentagonal and hexagonal carbon units, with each carbon sp2-hybridized. The structure of C-60 is called Buckminsterfullerene.

  • CARBON NANOTUBES (CNTs)- They have an elongated, tubular structure, 1-2nm in diameter. They structurally resemble graphite sheets rolled upon themselves, which can have single, double or many walls, and are therefore named single-walled (SWNTs), double-walled (DWNTs) and multi-walled carbon nanotubes (MWNTs) respectively. They are widely synthesized by deposition of carbon, especially atomic carbon, vaporized from graphite by a laser or an electric arc onto metal particles. The Chemical Vapour Deposition (CVD) technique is also used to synthesize CNTs. They can be used as fillers, efficient gas adsorbents and as a support medium for different inorganic and organic catalysts.


  2. METAL NPs

They are purely made up of metal precursors. Due to Localized Surface Plasmon Resonance (LSPR) characteristic, they possess unique optoelectrical properties. Due to excellent optical properties, they find their application in various research areas. For example, gold nanoparticles are used to coat the sample before analyzing in SEM.

  3. CERAMIC NPs

They are inorganic, nonmetallic solids synthesized via heating and successive cooling. They are made up of oxides, carbides, carbonates and phosphates. They can be found in amorphous, polycrystalline, dense, porous or hollow forms. They find applications in catalysis, photocatalysis, photodegradation of dyes and imaging.

  4. SEMICONDUCTOR NPs

They possess wide band gaps and therefore show significant alteration in their properties with bandgap tuning. They are used in photocatalysis, photo optics and electronic devices. Some of the examples of semiconductor NPs are GaN, GaP, InP, InAs.
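The link between band gap and optical behaviour can be made concrete with the standard photon-energy relation λ = hc/E (a quick sketch in Python; the GaN figure is the commonly cited bulk band gap, used here purely for illustration):

```python
# Wavelength corresponding to a band gap E (in eV): lambda = h*c / E.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def bandgap_wavelength_nm(e_gap_ev):
    """Absorption-edge wavelength (nm) for a band gap given in eV."""
    return H_C_EV_NM / e_gap_ev

# Bulk GaN has a band gap of ~3.4 eV, i.e. an absorption edge near 365 nm
# (in the UV); quantum confinement in NPs widens the gap, shifting the
# edge further toward the blue -- this is the "bandgap tuning" above.
lam = bandgap_wavelength_nm(3.4)
```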

  5. POLYMERIC NPs

They are organic-based NPs, mostly nanospheres and nanocapsules in shape. They are readily functionalized and therefore have a wide range of applications.

  6. LIPID NPs

They contain lipid moieties and are effectively used in many biomedical applications. They are generally spheres with diameters ranging from 10 to 1000nm. They have a solid core made of lipid and a matrix containing soluble lipophilic molecules.

SYNTHESIS OF NPs

There are various methods used for the synthesis of NPs, which are broadly classified into two main classes-

  1. TOP-DOWN APPROACH

Top-down routes fall within the typical solid-state processing of materials. This approach starts from bulk material and makes it smaller, using physical processes like crushing, milling and grinding to break large particles. It is a destructive approach and is not suitable for preparing uniformly shaped materials. The biggest drawback of this approach is the imperfection of the surface structure, which has a significant impact on the physical properties and surface chemistry of the nanoparticles. Examples of this approach include grinding/milling, laser ablation and other attrition-based techniques.


  2. BOTTOM-UP APPROACH

As the name suggests, it refers to the build-up of material from the bottom: atom by atom, molecule by molecule or cluster by cluster. This approach is more often used for preparing nanoscale materials, as it can generate uniform size, shape and distribution. It effectively covers chemical synthesis, with precise control of the reaction to inhibit further particle growth. Examples are sedimentation and reduction techniques, including sol-gel, green synthesis, spinning and biochemical synthesis.

CHARACTERIZATION OF NPs

Analysis of different physicochemical properties of NPs is done using various characterization techniques. It includes techniques such as X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Infrared (IR), SEM, TEM and particle size analysis.

  1. MORPHOLOGICAL CHARACTERIZATION

Morphology influences most of the properties of NPs. Microscopic techniques such as the polarized optical microscope, SEM and TEM are used for morphological studies.

SEM technique is based on electron scanning principle. It uses a focused beam of high energy electrons to generate a variety of signals at the surface of solid specimens. It is not only used to study the morphology of nanomaterials, but also the dispersion of NPs in the bulk or matrix.

TEM is based on the electron transmission principle, so it can provide information on bulk material from very low to higher magnification. In TEM, a high-energy beam of electrons is shone through a very thin sample. This technique is used to study different morphologies of gold NPs. It also provides essential information about two-layer or multi-layer materials.


  2. STRUCTURAL CHARACTERIZATION

Structural characteristics are of primary importance to study the composition and nature of bonding materials. It provides diverse information about the bulk properties of the subject material. XRD, Energy dispersive X-ray (EDX), XPS, IR, Raman and BET are the techniques used to study the structural properties of NPs.

XRD is one of the most used characterization techniques to disclose the structural properties of NPs. The crystallinity and phases of nanoparticles can be determined using this technique, and particle size can also be estimated from it. It works well for identifying both single-phase and multiphase NPs.
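Particle-size estimation from an XRD peak is commonly done with the Scherrer equation, D = Kλ/(β cos θ). A small sketch in Python; the radiation and peak values are purely illustrative:

```python
import math

def scherrer_size_nm(wavelength_nm, fwhm_deg, theta_deg, k=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; K ~ 0.9 is a typical
    dimensionless shape factor.
    """
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(math.radians(theta_deg)))

# Illustrative numbers: Cu K-alpha radiation (0.15406 nm), a diffraction
# peak at 2-theta = 38.2 degrees with 0.5 degrees FWHM gives a crystallite
# size of roughly 17 nm.
size = scherrer_size_nm(0.15406, 0.5, 38.2 / 2)
```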

EDX, usually fitted to a field-emission SEM or TEM device, is widely used to determine the elemental composition along with a rough idea of percent weight. Nanoparticles comprise constituent elements, and each of them emits characteristic X-rays under electron-beam irradiation.

XPS is one of the most sensitive techniques used to determine the exact elemental ratio and the exact bonding nature of the elements in nanoparticle materials. It is a surface-sensitive technique, also used in depth-profiling studies to know the overall composition and the compositional variation with depth.

  3. PARTICLE SIZE AND SURFACE AREA CHARACTERIZATION

The size of a particle can be estimated by using SEM, TEM, XRD and dynamic light scattering (DLS). A zeta potential size analyzer/DLS can be used to find the size of NPs even at very low concentrations.

Nanoparticle Tracking Analysis (NTA) is another newer technique, which allows us to find the size distribution profile of NPs with diameters ranging from 10 to 1000nm in a liquid medium. Using this technique, we can visualize and analyze NPs in a liquid medium by relating the rate of their Brownian motion to particle size. It is helpful for biological systems such as proteins and DNA.
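Both DLS and NTA convert a measured diffusion rate into a size via the Stokes-Einstein relation, d = kT/(3πηD). A sketch in Python, assuming water at room temperature; the measured diffusion coefficient is illustrative:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter_nm(diff_coeff_m2s, temp_k=298.15, viscosity=8.9e-4):
    """Stokes-Einstein: d = k_B*T / (3*pi*eta*D).

    Defaults assume water at ~25 C (viscosity in Pa*s); returns nm.
    """
    d_m = K_B * temp_k / (3 * math.pi * viscosity * diff_coeff_m2s)
    return d_m * 1e9

# A measured diffusion coefficient of 4.9e-12 m^2/s corresponds to a
# particle of roughly 100 nm in water at room temperature.
dia = hydrodynamic_diameter_nm(4.9e-12)
```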

NPs have large surface areas, which offers excellent room for various applications. BET is the most used technique to determine the surface area of nanoparticle materials. The principle of this technique is gas adsorption and desorption, described by the Brunauer-Emmett-Teller (BET) theory.
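Why small particles give large surface areas is easy to quantify: for monodisperse spheres the specific surface area is 6/(ρd). A sketch in Python, with illustrative silica values:

```python
def specific_surface_area_m2_per_g(diameter_nm, density_g_cm3):
    """Specific surface area of monodisperse spheres: SSA = 6 / (rho * d)."""
    d_m = diameter_nm * 1e-9
    rho_kg_m3 = density_g_cm3 * 1000
    ssa_m2_per_kg = 6 / (rho_kg_m3 * d_m)
    return ssa_m2_per_kg / 1000  # convert to m^2 per gram

# 10 nm silica spheres (density ~2.2 g/cm^3) -> ~270 m^2/g, versus well
# under 1 m^2/g for 10-micron particles of the same material.
ssa = specific_surface_area_m2_per_g(10, 2.2)
```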

  4. OPTICAL CHARACTERIZATION

Optical properties are of great concern in photocatalytic applications. These characterizations are based on the Beer-Lambert law and basic light principles. The techniques give information about the absorption, luminescence and phosphorescence properties of NPs. The optical properties of NP materials can be studied with well-known equipment such as the UV-visible spectrophotometer, the photoluminescence spectrometer and the ellipsometer.
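The Beer-Lambert law mentioned above is simple enough to compute directly (a Python sketch with illustrative numbers; the molar absorptivity ε depends on the actual NP sample):

```python
def absorbance(epsilon_l_per_mol_cm, path_cm, conc_mol_l):
    """Beer-Lambert law: A = epsilon * l * c."""
    return epsilon_l_per_mol_cm * path_cm * conc_mol_l

def transmittance(a):
    """Fraction of light transmitted: T = 10^(-A)."""
    return 10 ** (-a)

# Illustrative values: epsilon = 1e4 L/(mol*cm), a 1 cm cuvette and a
# 50 micromolar sample give an absorbance A = 0.5.
a = absorbance(1e4, 1.0, 50e-6)
```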

PHYSICOCHEMICAL PROPERTIES OF NPs

So it’s all about the size, isn’t it? Yes and no. When a material becomes a nanomaterial is not so simple. A nanomaterial may have different properties compared to the same substance in bulk form. That means a material can change when it goes from bulk to nanoform, but the size at which that happens varies depending on the substance. Nanoparticles are used in various applications due to their unique properties, such as large surface area, strength, optical activity and chemical reactivity.

  1. ELECTRONIC AND OPTICAL PROPERTIES

The optical and electronic properties of nanoparticles are dependent on each other. For example, colloidal gold nanoparticles are the reason for the ruby colours seen in stained glass windows, while Ag NPs are typically yellow. The free electrons on the surface of nanomaterials are free to move across the material. The mean free path of electrons in Ag and gold is ~50nm, which is greater than the size of the NPs of these materials. Therefore, no scattering is expected from the bulk when light interacts; instead, the electrons are set into a standing resonance condition, which is responsible for the LSPR of the NPs.

  2. MAGNETIC PROPERTIES

There is a class of nanoparticles known as magnetic nanoparticles that can be manipulated using magnetic fields. Such particles consist of two components: a magnetic material and a chemical component that provides functionality. These materials have a wide range of applications, including heterogeneous and homogeneous catalysis, biomedicine, magnetic fluids, MRI and water decontamination. The magnetic properties of NPs dominate when their size is less than a critical value, i.e. 10-20nm. The reason for these magnetic properties is the uneven electronic distribution in NPs.

  3. MECHANICAL PROPERTIES

To know the exact mechanical nature of NPs, different mechanical parameters such as elastic modulus, hardness, stress and strain, adhesion and friction are surveyed. Due to their distinct mechanical properties, NPs find applications in fields like tribology, surface engineering, nanofabrication and nanomanufacturing. NPs show different mechanical properties compared to microparticles and their bulk materials.

  4. THERMAL PROPERTIES

It is well known that metals have better thermal conductivities than fluids, and the same holds for NPs. The thermal conductivity of copper is much higher than that of water or engine oil. The thermal conductivity of fluids can be increased by dispersing solid particles in them. In the same way, nanofluids are produced, which have nanometre-scale solid particles dispersed in a liquid such as water, ethylene glycol or oil. They are expected to exhibit properties superior to those of conventional heat-transfer fluids and of fluids containing microscopic solid particles. As heat transfer takes place at the surface of the particles, it is better to use particles with a large surface area, which also increases the stability of the suspension.
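A common first estimate of how much dispersed particles raise a fluid's conductivity is the classical Maxwell effective-medium model (an addition for illustration; this model is not named in the text, and real nanofluids often deviate from it):

```python
def maxwell_keff(k_fluid, k_particle, phi):
    """Maxwell effective-medium estimate of nanofluid thermal conductivity.

    phi is the particle volume fraction; valid for dilute suspensions.
    """
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# Copper (~400 W/m.K) nanoparticles in water (~0.6 W/m.K) at 1 vol%
# raise the effective conductivity by roughly 3%.
k_eff = maxwell_keff(0.6, 400, 0.01)
```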

APPLICATIONS

As discussed above the nanoparticles have various unique properties. Due to their properties, they find their applications in multiple fields, including drugs, medication, manufacturing, electronics, multiple industries and also in the environment.


Nano-sized inorganic particles have unique physical and chemical properties. They are an essential material in the development of various nanodevices, which can be used in multiple physical, biological, biomedical and pharmaceutical applications. Particles of iron oxide such as magnetite (Fe3O4), or its oxidized form maghemite (γ-Fe2O3), are used in biomedical applications. Polyethylene oxide (PEO) and polylactic acid (PLA) NPs have been revealed as up-and-coming systems for the intravenous administration of drugs. Biomedical applications require NPs with a high magnetization value, a size smaller than 100nm and a narrow particle size distribution. Most semiconductor and metal NPs have immense potential in cancer diagnosis and therapy.

Image shows the bamboo-like structure of nitrogen-doped carbon nanotubes for the treatment of cancer.


In specific applications within the medical, commercial and ecological sectors, manufactured NPs are used which show physicochemical characteristics that induce unique electrical, mechanical, optical and imaging properties. Nanotechnology is used in various industries, including food processing and packaging. The unique plasmon absorbance features of noble-metal NPs have been used for a wide variety of applications, including chemical sensors and biosensors.

Nanomaterials are also used in some environmental applications like green chemistry, pollution prevention, remediation of contaminated materials and sensors for monitoring ecological conditions.

NPs such as metallic NPs, organic electronic molecules, CNTs and ceramic NPs are expected to enter mass production for new types of electronic equipment.

NPs can also offer applications in mechanical industries, especially in coatings, lubricants and adhesives. Their mechanical strength can be used to produce mechanically more reliable nanodevices.

CONCLUSION

Nanomaterials are no doubt the future of technology; being among the smallest of materials, they have a wide range of applications due to their unique physical and chemical properties. Due to their small size, NPs have a large surface area, which also makes them suitable candidates for many applications. Even at that size, optical properties dominate, which further increases their importance in photocatalytic applications. Though NPs are used for various applications, they still raise some health-hazard concerns due to their uncontrolled use and discharge into the natural environment, which should be addressed to make the use of NPs more convenient and environmentally friendly.

WONDER, THINK, CREATE!!!

Keep Learning!, Keep Growing!

Team CEV

The Harmonic Analyzer: Catching the Spurious

Reading Time: 10 minutes

“Do you have the courage to make every single possible mistake, before you get it all-right?”

-Albert Einstein

**Featured image courtesy: Internet

THE PROJECT IN SHORT: What is this about?

The importance of analyzing harmonics has been enough stressed upon in the previous blog, Pollution in Power Systems. 

So, we set out to design a system for real-time monitoring of voltage and current waveforms associated with a typical non-linear load. Our aim was “to obtain the shape of waveforms plus apply some mathematical rigour to get the harmonic spectrum of the waveforms”.   

THE IDEA: How does it work?

Clearly, the real-time capabilities of any system rest on deploying intelligent microcontrollers to perform the tasks, and since this system also demanded an effective visualization setup, we linked the microcontroller with the desktop (interfacing aided by MATLAB). Together with MATLAB, we established a GUI platform that interacts with the user to produce the required results:

  1. The shape of waveforms and defined parameters readings,
  2. Harmonic spectrum in the frequency domain.  

The voltage and current signals are first appropriately sampled by different resistor configurations; these samples are then conditioned by the analog industry's workhorses, the op-amps, and fed into the ADC of the microcontroller (Arduino UNO) for digital discretization. These digital values are accessed by MATLAB, which applies mathematical techniques according to the commands entered by the user at the GUI to finally produce the required outcome on the PC screen.


ARDUINO and MATLAB INTERFACING: Boosting the Computation

The Arduino UNO is a microcontroller board with 32K of flash memory and 2K of SRAM, which limits the functionality of a larger system to some extent. Interfacing the microcontroller with a PC not only allows increased computational capability, but more importantly provides an effective visual tool, the screen, to display the waveforms of the quantities graphically, import data, save it for future reference, and so on.

TWO WAYS TO WORK: Simulink and the .m

The interfacing can be done in two ways: one is directly building simulation models in Simulink using blocks from the Arduino library, and the second is writing scripts (code in a .m file) in MATLAB after including a specific set of libraries for the given Arduino device (UNO, NANO, etc.).

Only the global variable "arduino" needs to be declared in the program; the rest of the code is as usual. We used the second method, as it was more suitable for the type of mathematical operation we wanted to perform.

NOTE:

  1. The first method could also be utilised by executing the required mathematical operations using the available blocks in the library.
  2. Both of these methods of interfacing require the addition of two different libraries.

THE GUI: User friendly

Using an Arduino interfaced with a PC gives another advantage: a user-interactive analyzer. Sometimes the visual graphic of the waveform distortion is important, and sometimes the information in the frequency domain is of utmost concern. Using the GUI platform provided by MATLAB to let the user select his choice adds greatly to the flexibility of the analyzer.

The GUI platform appears like this upon running the program.


MATLAB gives you a very user-friendly environment to build such a useful GUI. Type guide in the command window, select the blank GUI, and you are ready to go.

Moreover, you can follow this short 8-minute tutorial for an introduction, from the official MATLAB YouTube channel:

https://youtu.be/Ta1uhGEJFBE

REAL-TIME PROGRAM: The Core of the System

Once the GUI is designed and saved, a corresponding m-file is automatically generated by MATLAB. This m-file contains well-structured code as well as illustrative comments showing how to program further. The GUI is now ready to receive the pumping heart of the project: the real code.

COLLECTING DATA

The very first task is to start collecting the data points flushing in from the ADC of the microcontroller and save them in an array for later reproduction in the program. This should be executed when the user presses the START button on the GUI.

Algorithm:

Since we have shifted the whole signal waveform up by 2.5 V, we have to continuously check for the mid-level of 2.5 V (ADC count 127), which is actually the zero-crossing point, and only then start collecting data.

Code:

% --- Executes on button press in start.
function start_Callback(hObject, eventdata, handles)
% hObject    handle to start (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
V = zeros(1,201);
time = zeros(1,201);
% readVoltage returns volts (0-5 V), so the zero-crossing of the shifted
% signal sits at 2.5 V (ADC count 127); wait for it before sampling.
vstart = 0;
while(vstart == 0)
    value = readVoltage(ard, 'A1');   % ard: the arduino object
    if(value > 2.45 && value < 2.55)
        vstart = 1;
    end
end
 
for n = 1:1:201
    value = readVoltage(ard, 'A1');
    value = value - 2.5;              % remove the 2.5 V offset
    V(n) = value;
    time(n) = (n-1)*0.0001;           % 0.1 ms per sample
end

DISPLAYING WAVEFORM

The data points saved in the array now need to be reproduced, and in a way which makes sense to the user, i.e. plotted graphically.

Algorithm: ISSUES STILL UNRESOLVED!!!

HARMONIC ANALYSIS

As mentioned previously, we aimed to obtain a frequency-domain analysis of the waveform of concern. The previous blog presented the insights into the mathematical formulation required to do so.

Algorithm: Refer to blog Pollution in power systems

Code:

% --- Executes on button press in frequencydomain.
function frequencydomain_Callback(hObject, eventdata, handles)
% hObject    handle to frequencydomain (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
 
%Ns=no of samples
%a= coeffecient of cosine terms
%b =coefficient of sine terms
%A = coefficient of harmonic terms
%ph=phase angle of harmonic terms wrt fundamental
 
%a0
sum=0;
n=9;  %no of harmonics required
[r,Ns]=size(V);
for i=1:1:Ns
   sum=sum+V(i);
end
Adc=sum/Ns;
sum=0;   % reset the accumulator before the coefficient loops
 
 
for i=1:1:n
    for j=1:1:Ns
       M(i,j)=V(j)*cos(2*pi*(j-1)*i/Ns);%matrix M has order of n*(Ns)
    end
end
 
 
for i=1:1:n
    for j=1:1:Ns
        if j==1 || j==Ns
            sum= sum+M(i,j);
        elseif mod((j-1),3)==0
            sum=sum+ 2*M(i,j);
        else
            sum=sum+3*M(i,j);
        end
    end
   a(i)= 3/4*sum/Ns;
   sum=0;
end
 
for i=1:1:n
    for j=1:1:Ns
       N(i,j)=V(j)*sin(2*pi*(j-1)*i/Ns);%matrix N has order of n*(Ns)
    end
end
    
for i=1:1:n
    for j=1:1:Ns
        if j==1 || j==Ns
            sum= sum+N(i,j);
        elseif mod((j-1),3)==0
            sum=sum+ 2*N(i,j);
         else 
            sum=sum+3*N(i,j);
        end
    end
    b(i)= 3/4*sum/Ns;
    sum=0;
end 
 
for i=1:1:n
    A(i)=sqrt(a(i)^2+b(i)^2);
end
 
for i=1:1:n
    ph(i)=-atan(b(i)/a(i));
end
 
figure;
 x = 1:1:n;
 hold on;
 datacursormode on;
 grid on;
 stem(x,A,'filled');
 xlabel('nth harmonic');
 ylabel('amplitude');
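The coefficient extraction above can be cross-checked against a waveform with a known spectrum. The sketch below (in Python for illustration, using plain rectangular sums rather than the weighted quadrature rule in the MATLAB script) recovers the familiar 4/(πk) odd-harmonic amplitudes of a square wave:

```python
import math

def harmonic_amplitudes(samples, n_harmonics):
    """Fourier-series amplitudes A_k = sqrt(a_k^2 + b_k^2) over one period,
    computed with simple rectangular sums over the sampled points."""
    ns = len(samples)
    amps = []
    for k in range(1, n_harmonics + 1):
        a = 2 / ns * sum(v * math.cos(2 * math.pi * k * j / ns)
                         for j, v in enumerate(samples))
        b = 2 / ns * sum(v * math.sin(2 * math.pi * k * j / ns)
                         for j, v in enumerate(samples))
        amps.append(math.hypot(a, b))
    return amps

# One period of a unit square wave: odd harmonics at 4/(pi*k), even ~0.
square = [1.0] * 100 + [-1.0] * 100
amps = harmonic_amplitudes(square, 5)
```

Feeding the same synthetic waveform through the MATLAB routine should produce the same stem plot, which makes this a useful test before connecting real hardware.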

CIRCUIT DESIGNING: The Analog Part

This section appears quite late in this documentation, but ironically it is the first stage of the system. As we saw in the power module, the constraints on the signal input to the ADC of the microcontroller are:

  1. The peak-to-peak signal magnitude should be within 5V.
  2. The voltage signal must always be positive with respect to the reference.

To meet the first constraint, we used a step-down transformer and a voltage-divider resistance branch of the required values to get a sinusoidal voltage waveform of 5V peak to peak.
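The divider arithmetic can be sketched quickly (in Python, with purely illustrative transformer and resistor values; the actual component values depend on the hardware used):

```python
import math

def divider_vout(vin, r1, r2):
    """Unloaded voltage divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

# Illustrative values: a 230V->12V (rms) transformer gives a ~17 V peak
# sine; a 10k/1.8k divider scales that to ~2.6 V peak, i.e. roughly a
# 5 V peak-to-peak signal suitable for the ADC after level shifting.
v_peak = 12 * math.sqrt(2)
v_out_peak = divider_vout(v_peak, 10e3, 1.8e3)
```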

Now, the current and voltage waveforms would obviously go negative with respect to the reference in AC systems.

Think for a second, how to shift this whole cycle above the x-axis.  

To achieve the second part, we used an op-amp in clamping configuration to obtain a voltage-clamping circuit. We selected op-amps for their several great operational qualities, like accuracy and simplicity.

Voltage clamping using op-amps:


The circuit overall layout:

IMP NOTE: While taking signals from a voltage divider, always keep in mind that no current should be drawn from the sampling point, as it will disturb the effective resistance of the branch and the required voltage division won't be obtained. Always use an op-amp in voltage-follower configuration to take samples from the voltage divider.

Current waveform (same as the power module setup)

A Power Module

MODELLING AND SIMULATIONS:

Now, it is always preferable to first model and simulate your circuit and confirm the results, to check for any potentially fatal loopholes. It helps save the time spent correcting errors and saves components from blowing up during testing.

Modelling and simulation become of great importance for larger and relatively complicated systems, like alternators, transmission lines and other power systems, where you simply cannot afford hit-and-trial methods to rectify issues. Hence, having an upper hand in the skill of modelling and simulating is of great importance in engineering.

For an analog system like this, MATLAB is perfect. (We found Proteus not showing correct results; however, it is best suited for simulating microcontroller-based circuits.)


Simulation results confirm a 5V peak to peak signal clamped at 2.5 V.


The real circuit under test:


Case of Emergency:

Sometimes we find ourselves in desperate need of some IC and cannot get it. At such times, our ability to observe might help us get one. Our surroundings are littered with ICs of all types, and the op-amp is one of the most common. Sensors of all types use an op-amp to amplify signals to the required values. These ICs fixed on a chip can be extracted by de-soldering with a soldering iron. If that doesn't seem possible, use whatever gets you the results. For example, in the power module project we managed to get the three terminals of one op-amp from an IR sensor chip; here we required two op-amps.

First, trace the circuit diagram of the chip by referring to the terminals in the datasheet; you can cross-check all connections using a multimeter in continuity-check mode. Then use all sorts of techniques to somehow obtain the desired connections.


Reference Voltages

Many times in circuits, different levels of reference voltage are required, like 3.3V, 4.5V, etc.; here we require 2.5 V.

One can build a reference voltage using:

  1. a resistive voltage divider (with an op-amp in voltage-follower configuration),
  2. an op-amp directly, giving the required gain to any source voltage level,
  3. a variable voltage supply for a variable reference voltage, like the one we built in the rectifier project using the LM317.

WAVEFORM GENERATION

For program testing, we required different typical waveforms like square and triangle waves. These waveforms can be obtained in two different ways: the analog way and the digital way.

The Analog Way

Op-amps again come to our rescue. Op-amps, when accompanied by resistors, capacitors and inductors, provide seemingly all sorts of functionality in the analog domain: summing, subtracting, integrating, differentiating, voltage sources, current sources, level shifting, etc.

Using a Texas Instruments handbook on op-amps, we obtained the circuit for triangle wave generation.

The Digital Way

Another interesting way to obtain all sorts of desired waveforms is by harnessing a microcontroller. One can vary the voltage levels, frequency and other waveform parameters directly in the code.

Here we utilised two Arduinos: a stand-alone Arduino 1, programmed to generate a square wave, and an Arduino 2, interfaced with MATLAB to check the results.


We have already stated the importance of simulation, so for simulating the Arduino we used Proteus 8. The code is written in the Arduino IDE and compiled, and the HEX file is burnt into the model in Proteus.


The real-circuit:


The results displayed by the Matlab:


NOTE:

To generate waveforms other than the square type, one thing that has to be considered is the PWM mode of operation of the digital pins. Six of the digital pins on the Arduino UNO (3, 5, 6, 9, 10 and 11) can generate PWM.

At 100% duty cycle 5 V is generated at the output terminal.

digitalWrite(PIN, HIGH): this code line holds the pin at 5 V, equivalent to a PWM of 100% duty cycle.

So, by changing the duty cycle of PWM we can obtain any level between 0-5 V.

analogWrite(PIN, Duty_Value): this code line generates a PWM of any duty ratio (the value ranges from 0 to 255, i.e. 0-100%), hence any desired average voltage level on a PWM pin.

For example:

analogWrite(3, 127): gives an average output of about 2.5 V at PWM pin 3.
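The mapping from analogWrite's duty value to average voltage is just a proportion (a quick sketch in Python for illustration):

```python
def pwm_average_voltage(duty_count, vcc=5.0):
    """Average output of analogWrite(pin, duty_count), duty_count in 0..255."""
    return vcc * duty_count / 255

# A duty value of 127 gives ~2.49 V, the "2.5 V" level used above.
v = pwm_average_voltage(127)
```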

Moreover, timer functionalities can be utilized for a triangle wave generation.

THE RESULTS

It is very saddening for us not to be able to finally check our results and to have to terminate the project at 75% completion due to unavoidable circumstances created by this COVID thing.

THE RESOURCES: How can you do it too?

List of the important resources referred to in this project:

  1. MatLab 2020 download: https://www.youtube.com/watch?v=TLAzzxKU5ns
  2. MatLab official YouTube channel provides great lessons to master MatLab

https://www.youtube.com/channel/UCgdHSFcXvkN6O3NXvif0-pA

  3. Matlab and Simulink introduction, free self-paced courses by MatLab:
    1. https://www.mathworks.com/learn/tutorials/matlab-onramp.html
    2. https://www.mathworks.com/learn/tutorials/simulink-onramp.html
  4. Simulink simulations demystified for analog circuits: https://www.youtube.com/watch?v=Zw0oiduA70k
  5. Proteus introduction: https://youtu.be/fK5oI1XfDYI
  6. MatLab with Arduino: https://youtu.be/7tcEs0QOiBk
  7. Op-amp cookbook: Handbook of Op-amp Applications, Texas Instruments

THE CONCLUSIONS: A very important take-away

“TEAMWORK Works”

If we (you and us) desire to take on a venture into the unknown, something never done before, and plan to do it all alone, then trust our words: failure is sure. It gets tough when we get stuck somewhere, and then it only gets tougher.

We all have to find people who share our vision and interests, people we love working alongside. We all need to be part of a team; otherwise life will be neither easy nor pleasing. There is a great chance of coming out a winner if we go in as a team, and even if the team fails, at least we don't come out frustrated.

Each member brings their own individual talent to contribute to the common aim: the ability to write code, to do the math, to simulate, to interpret results, to work from theory and from intuition, and so on. Good teamwork is the recipe for building great things that work.

So we conclude from the project that teamwork was the most crucial reason this venture reached 75% completion, and we look forward to making it 100% as soon as possible.

Team-members: Vartik Srivastava, Anshuman Jhala, Rahul

Thank you ❤ Ujjwal, Hrishabh, Aman Mishra and Prakash for helping us resolve software-related issues.

WONDER, THINK, CREATE!!

Team CEV    

Pollution in Power Systems

Reading Time: 14 minutes

Introduction

The Non-Sinusoids

What’s the conclusion?

Harmonics

THD and Power Factor

Harmonics Generation: Typical Sources of harmonics

Effects

Featured image courtesy: Internet

Introduction

If we lived in an ideal world, we would have all honest people, no global issues like Corona or the climate crisis, gas particles with negligible volume (the ideal-gas equation), etc.; and, in particular, power systems would carry only sinusoidal voltage and current waveforms. 😅😅

But in this real, beautiful world we have our share of dear dishonest people; thousands die of epidemics, the globe keeps getting hotter, and gas particles do have volume. Likewise, a pure sinusoidal waveform is a luxury, an inconceivable feat in any large power system.

Prerequisite

We have tried to start from the very beginning, so a strong will to understand is enough. Still, we suggest you first go through the power quality blog; it will help develop some important insights.

Electrical Power Quality

Let’s go yoooo!!🤘🤘🤘

Now, why are we talking about the shape of waveforms? You will figure it out yourself by the end; for now, let us just say that a non-sinusoidal waveform is considered pollution in an electrical power system, with effects ranging from overheating to whole systems ending up in large catastrophes.

Non-sinusoidal waveforms of currents or voltages are polluted waveforms.

But how can it be possible that the voltage applied across a load is sinusoidal, yet the current drawn is non-sinusoidal?

Hint: V= IZ

Yes, it is only possible if the impedance plays some tricks. So the very first conclusion about systems that create electrical pollution is that their impedance is not constant over one period of the voltage cycle applied across them; hence they draw non-sinusoidal currents from the source. These systems are called non-linear loads or elements. Like this most popular guy:


The diode

Note that inductive and capacitive impedances are frequency-dependent but remain fixed over a voltage cycle at a fixed frequency; that is why resistors, inductors and capacitors are linear loads. In this modern era, the power system is cursed to be literally littered with non-linear loads, and it is estimated that in the next 10-15 years 60% of the total load will be of the non-linear type (the aftermath of COVID-19 not considered).

The list of non-linear loads includes almost all the loads you see around you: gadgets (computers, TVs, music systems, LEDs), battery-charging systems, ACs, refrigerators, fluorescent tubes, arc furnaces, etc. Look at the following waveforms of current drawn by some common devices:


Typical inverter Air-Conditioner current waveform (235.14 V, 1.871 A)

Source: Research Gate  


Typical Fluorescent lamp

Source: Internet


Typical 10W LED bulb

Source: Research Gate  


Typical battery charging system

Source: Research Gate


Typical Refrigerator

Source: Research Gate


Typical Arc furnace current waveform

Source: Internet   

Name any modern device (microwave oven, washing machine, BLDC fan, etc.) and its current waveform is severely offbeat from the desired sine shape; given the number of such devices, electrical pollution becomes a grave issue for any power system. Pollution in electrical power systems is not a phenomenon of the 21st century: electrical engineers struggled to check non-sinusoidal waveforms throughout the 20th century, and one can find a description of the phenomenon as early as 1916 in Steinmetz's ground-breaking research paper "Study of Harmonics in three-phase Power System". The sources and causes of power pollution, however, have kept changing since then. In the early days transformers were the major polluting devices; now 21st-century gadgets have taken up that role, but the consequences have remained disastrous.

WAIT, WAIT, WAIT…. What’s that “Harmonics”?

Before we even introduce harmonics, let us apply some mathematical rigor to analyzing the typical non-sinusoidal waveforms we encounter in the power system.

THE NON-SINUSOIDS

From the blog on Fourier series, we were confronted with one of the most fundamental laws of nature:

FOURIER SERIES: Expressing the alphabets of Mathematics

Any continuous, well-defined periodic function f(x) with period (a, a+2c) can be expressed as a sum of sine, cosine and constant components. We call this great universal truth the Fourier Expansion; mathematically:

f(x) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos\frac{n\pi x}{c} + b_n\sin\frac{n\pi x}{c}\right]

where

a_0 = \frac{1}{2c}\int_a^{a+2c} f(x)\,dx,\qquad a_n = \frac{1}{c}\int_a^{a+2c} f(x)\cos\frac{n\pi x}{c}\,dx,\qquad b_n = \frac{1}{c}\int_a^{a+2c} f(x)\sin\frac{n\pi x}{c}\,dx

Square wave, the output of inverter circuits: f(t) = A for 0 < t < T/2 and f(t) = -A for T/2 < t < T. The constant term a_0 and all the cosine coefficients a_n vanish.

For all even n:

b_n = 0

For all odd n:

b_n = \frac{4A}{n\pi}

Just for some minutes, hold in mind the result's outline:

f(t) = \frac{4A}{\pi}\left(\sin\omega t + \frac{\sin 3\omega t}{3} + \frac{\sin 5\omega t}{5} + \cdots\right)

 

 

We will draw some very striking conclusions.
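The vanishing coefficients are easy to check numerically. Here is a small Python sketch (our own illustration, not part of the original derivation) that estimates b_n for a unit-amplitude square wave by direct integration:

```python
import math

def fourier_bn(f, n, T=2 * math.pi, samples=20000):
    """Left-Riemann estimate of b_n = (2/T) * integral_0^T f(t) sin(n*w*t) dt."""
    w = 2 * math.pi / T
    dt = T / samples
    return (2 / T) * sum(f(i * dt) * math.sin(n * w * i * dt) * dt
                         for i in range(samples))

# unit-amplitude square wave: +1 on the first half cycle, -1 on the second
square = lambda t: 1.0 if (t % (2 * math.pi)) < math.pi else -1.0

b1 = fourier_bn(square, 1)   # theory: 4/pi
b2 = fourier_bn(square, 2)   # theory: 0 (even order vanishes)
b3 = fourier_bn(square, 3)   # theory: 4/(3*pi)
```

The estimates land on 4A/(n·pi) for odd n and on zero for even n, exactly as the analysis predicts.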

Now consider a triangular wave of amplitude A, period T and odd symmetry.

Calculating the Fourier coefficients: the constant term

a_0 = \frac{1}{T}\int_0^T f(t)\,dt

again simplifies to zero, since the positive and negative half-cycles enclose equal areas.

For the cosine coefficients, applying the integration over each interval and putting in the limits gives:

For even n, a_n = 0.

For odd n, a_n = 0 as well.

😂😂😂

Are these equations kidding us???

Now, for the sine coefficients:

For even n, b_n = 0.

😂😂😂

For odd n:

b_n = \frac{8A}{n^2\pi^2}\,(-1)^{(n-1)/2}

So finally, the summary of the result for the triangle waveform case is as follows:

f(t) = \frac{8A}{\pi^2}\left(\sin\omega t - \frac{\sin 3\omega t}{9} + \frac{\sin 5\omega t}{25} - \cdots\right)

Did you notice that if these two waveforms were traced on the negative side of the time axis, they could be produced by:

f(-t) = -f(t)

This property of a waveform is called odd symmetry. Since the sine wave has this same fundamental property, only sine components are found in the expansion.

Now consider this waveform:

[Figure: a triangle wave with even symmetry about t = 0]

Unlike the previous two cases, if the negative side of this waveform is to be obtained, it must satisfy:

f(-t) = f(t)

This is identified as even symmetry of the waveform. So which components do you expect, sine or cosine???

The function can be described piecewise over one period. Here again, a_0 = 0.

For the cosine components:

a_n = \frac{2}{T}\int_0^T f(t)\cos(n\omega t)\,dt

This equation reduces to:

a_n = 0 \text{ for even } n, \qquad a_n = \frac{8A}{n^2\pi^2} \text{ for odd } n

For the sine components:

b_n = \frac{2}{T}\int_0^T f(t)\sin(n\omega t)\,dt

This equation reduces to zero for all even and odd n.

Well, we had guessed it already 🤠🤠.

Summary of coefficients for the triangle waveform with even symmetry:

f(t) = \frac{8A}{\pi^2}\left(\cos\omega t + \frac{\cos 3\omega t}{9} + \frac{\cos 5\omega t}{25} + \cdots\right)

Very useful conclusions:

  1. a0 = 0 for every waveform that encloses equal areas with the x-axis under its negative and positive half-cycles, because the constant component is simply the algebraic sum of those two areas.
  2. an = 0 for every waveform with odd symmetry: cosine is an even-symmetric function, so it simply cannot be a component of an odd-symmetric function.
  3. bn = 0 for every waveform with even symmetry: by the same logic, the sine function, itself odd-symmetric, cannot be a component of an even-symmetric waveform.
  4. The fourth, very critical conclusion applies to waveforms which satisfy:

f(t + T/2) = -f(t)

where T is the period of the waveform. For these, the even-order harmonics are absent and only the odd orders are present. This property is identified as half-wave symmetry, and it is present in most power-system signals.

Now, these conclusions apply to the numerous current waveforms in the power system. Most of the devices we began with appear to follow the above properties: they are all half-wave symmetric and either odd or even. These conclusions greatly simplify formulating the Fourier series of power-system waveforms.
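These symmetry properties can be verified numerically as well. Below is a minimal Python sketch (function names and tolerances are our own) that checks odd symmetry and half-wave symmetry on sample points, for a unit square wave and for a cosine:

```python
import math

T = 2 * math.pi
ts = [k * T / 401 for k in range(1, 401)]   # sample points, avoiding t = 0

def is_odd_symmetric(f):
    """f(-t) == -f(t) at every sample point -> only sine terms survive."""
    return all(abs(f(-t) + f(t)) < 1e-9 for t in ts)

def is_half_wave_symmetric(f):
    """f(t + T/2) == -f(t) -> only odd-order harmonics survive."""
    return all(abs(f(t + T / 2) + f(t)) < 1e-9 for t in ts)

square = lambda t: 1.0 if (t % T) < math.pi else -1.0

# square wave: odd AND half-wave symmetric -> only odd-order sine terms
sq_odd, sq_half = is_odd_symmetric(square), is_half_wave_symmetric(square)
# cosine: even (so not odd-symmetric), yet still half-wave symmetric
cos_odd, cos_half = is_odd_symmetric(math.cos), is_half_wave_symmetric(math.cos)
```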

So, consider a typical firing-angle current:

[Figure: a typical firing-angle (phase-controlled) current waveform]

Now apply the conclusions drawn to this case: the waveform has no half-wave symmetry but is odd symmetric, so only sine terms survive in its expansion.

The Harmonics

We hope you enjoyed utilizing this greatest of mathematical tools and were amazed to break intricate waveforms into fundamental sines and cosines.

“Like matter is made up of fundamental units called atoms, any periodic waveform consists of fundamental sine and cosine components.”

It is these components of a waveform that, in electrical engineering language, we call Harmonics.


Mathematics gives you cheat codes to understand and analyze the harmonics; it simply opens up the whole picture to the very minutest details.

So, what are we going to do now, after calculating the components, the harmonics?

First of all, we need to quantify how much harmonic content is present in the waveform. The quantity coined for this purpose is the total harmonic distortion:

THD, total harmonic distortion:

It is a self-explanatory ratio: the RMS value of all the harmonics taken together, divided by the RMS value of the fundamental.

Since the harmonics are sine or cosine waves, the RMS of the nth harmonic with peak amplitude I_{n,peak} is simply:

I_n = \frac{I_{n,peak}}{\sqrt{2}}

By the same definition, the RMS of the fundamental is:

I_1 = \frac{I_{1,peak}}{\sqrt{2}}

So, THD is:

THD = \frac{\sqrt{\sum_{n=2}^{\infty} I_n^2}}{I_1}
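As a quick sanity check of the definition, here is a tiny Python helper (the function name is our own, for illustration) that computes THD from a list of harmonic RMS values:

```python
import math

def thd(harmonic_rms):
    """THD from harmonic RMS values [I1, I2, I3, ...], fundamental first:
    sqrt(sum of squares of all higher harmonics) / I1."""
    fundamental, *rest = harmonic_rms
    return math.sqrt(sum(h * h for h in rest)) / fundamental

# fundamental 10 A, 3rd harmonic 3 A, 5th harmonic 4 A:
# sqrt(9 + 16)/10 = 0.5, i.e. 50% THD
distortion = thd([10, 3, 4])
```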

The next thing we are concerned about is power. So, we need to find the impact of harmonics on power transferred.

Power and the Power Factor

Power and power factor are intimately related; it is impossible to talk about power and not about power factor.

The conventional power factor of any load (linear or non-linear) is defined as the ratio of active power to apparent power. It is basically an indicator of how well the load utilizes the current it draws; this is consistent with the statement that a high-pf load draws less current for the same real power developed.

pf = \frac{P}{S}

Where:

  1. Active power is the average of the instantaneous power over a cycle:

P = \frac{1}{T}\int_0^T v(t)\,i(t)\,dt

Assuming sinusoidal waveforms with a phase difference \theta between current and voltage, the integration simplifies to:

P = V_{rms}\,I_{rms}\cos\theta

2. Apparent power is, as the name suggests, simply the V-I product; since the quantities are AC, RMS values are used:

S = V_{rms}\,I_{rms}

The pf becomes \cos\theta only when the waveforms are sinusoidal.

NOTE: The assumption must be kept in mind.

So, what happens when the waveforms are contaminated by harmonics:

There are many theories for defining power when harmonics are considered. The advanced ones are very accurate; the older ones are approximate but equally insightful.

Let the RMS values of the fundamental, second, …, nth harmonic components of the voltage and current waveforms be V_1, V_2, \ldots, V_n and I_1, I_2, \ldots, I_n, and let \theta_n be the phase difference between the nth voltage and current components.

The most accepted theory defines instantaneous power as:

p(t) = v(t)\,i(t)

with each waveform expressed as the sum of its harmonic components. Expanding and integrating over a cycle cancels all the cross-terms between unlike frequencies and reduces to:

P = \sum_{n} V_n I_n \cos\theta_n

Apparent power remains the same mathematically:

S = V_{rms}\,I_{rms} = \sqrt{\textstyle\sum_n V_n^2}\;\sqrt{\textstyle\sum_n I_n^2}

Including the definitions of THD for voltage and current, the equation modifies to:

S = V_1 I_1 \sqrt{1 + THD_V^2}\,\sqrt{1 + THD_I^2}

Now this theory uses some important assumptions to simplify the results, which are quite reasonable for particular cases.

  1. Harmonics contribute negligibly to active power, so neglecting the higher terms:

P \approx V_1 I_1 \cos\theta_1

2. For most devices the terminal voltage does not suffer much distortion, even though the current may be severely distorted (more on this in the next section), so for now:

THD_V \approx 0

So,

pf = \frac{P}{S} \approx \frac{\cos\theta_1}{\sqrt{1 + THD_I^2}}

WHAT’S THE CONCLUSION?

The power factor of a non-linear load depends on two factors: one is cos φ (the displacement factor) and the other is the current distortion factor.

If we wish to draw less current, we need a high overall power factor. Once the cos φ component is maximized to one, the distorted current sets the upper limit on the true power factor. The following data, accessed via sciencedirect.com, will help you visualize how significant current distortion is.
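To see how hard current distortion punishes the true power factor, here is a short Python sketch of the approximate relation derived above (the function name is our own):

```python
import math

def true_power_factor(cos_phi, thd_i):
    """Approximate true pf = displacement factor / sqrt(1 + THD_I^2),
    valid when voltage distortion is negligible (THD_V ~ 0)."""
    return cos_phi / math.sqrt(1 + thd_i ** 2)

# even with a perfect displacement factor, 100% current THD
# drags the true power factor down to about 0.707
pf = true_power_factor(1.0, 1.0)
```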

[Table: measured current THD of common household electronic devices]

Notice the awful THD of these devices; clearly, it severely reduces the overall pf.

However, these dinky-pinky household electronic devices have low power ratings, so the current drawn is not so significant; had they been high-powered, it would have been a disaster for us.

NOTE: For most of the devices listed above, the assumptions are solidly valid.

Are you thinking of adding a shunt capacitor across your laptop or electronic gadgets to improve the power factor and lower your electricity bill? For god's sake, don't ever try it; your capacitor will be blown to bits. Later we will understand why!!!

Through a phenomenon called "harmonic resonance" between the system and the capacitor banks, these harmonics amplify horribly. Numerous industrial catastrophes have occurred, and still continue to happen, because people ignore harmonic resonance.

Our Prof. Rakesh Maurya was involved in solving one such capacitor-bank burn-out issue with an Adjustable Speed Drive (ASD) at L&T.

Harmonics Generation: Typical Sources of harmonics

Most of the time in electrical engineering, transformers and motors are not visualized as the physical machines themselves:

[Photographs of a transformer and an electrical motor]

Instead, it is preferred to see transformers and electrical motors like this, respectively:

[Equivalent-circuit diagrams of a transformer and a motor]

These diagrams are called equivalent circuits; the models are simply abstractions developed to let us calculate power flow without considering many unnecessary minute details.

The soul of these models rests on assumptions that let us ignore those minute details, simplify our lives and give results with acceptable error.

Try to recall those assumptions we learned in our classrooms.

The reasons these beasts generate harmonics lie in those minute details.

Transformers

It is only under the assumption of "no saturation" that a sinusoidal voltage applied across the primary gives a sinusoidal voltage at the secondary.

Sinusoidal Pri. Voltage >>> Sinusoidal Current >>> Sinusoidal Flux >>> Sinusoidal Induced Sec. EMF 

With advancements in material science, special core materials are now available that rarely saturate; but the older, conventional cores saturated often and are observed to generate mainly 3rd harmonics.

Details right now are beyond our team’s mental capacity to comprehend.

Electrical Motors

From the standpoint of the cute equivalent circuit, electrical motors seem so innocent: a simple RL load, certainly not capable of introducing any harmonics. But as stated, this abstraction is a mere approximation used to obtain performance characteristics as quickly and reliably as possible.

Remember, while deriving the air-gap flux density it was assumed that the spatial MMF distribution of a balanced winding is sinusoidal; more accurately it is trapezoidal, and only the fundamental was considered. Due to this and many other imperfections, a motor is observed to produce largely 5th harmonics.

NOTE: Third harmonics and its multiples are completely absent in three-phase IMs. Refer notes.

Semiconductors

Disgusting, they don’t need any explanation. 😏😏😏

Effects

Power Loss

The most common, though least impactful, effect of power harmonics is increased power loss, leading to heating and decreased efficiency of the non-linear devices that cause them; later we will learn that it affects linear devices connected to the synchronous grid too.

The Skin Effect:

Lenz's law states that a conducting loop/coil always opposes any change in the magnetic flux it links, by inducing an EMF which drives a current.

Consider a rectangular two-wire system representing a transmission line, with circular cross-section wires carrying a DC current I.

One loop is quite obviously visible: the big rectangular one. The opposition to the change in magnetic field linked by this loop gives us the transmission-line inductance.

NOTE: INDUCTANCE AND LOOPS OF CURRENT ARE FACETS OF THE SAME COIN; ONE LEADS TO THE OTHER. Think about it!!!!

At frequencies somewhat higher than the 50 Hz power frequency, another kind of current loop begins to magnify; and, as we said, this causes another kind of inductance.

Look closely: the magnetic field inside the conducting wire is also changing. As a result, loops of current called eddy currents are set up inside the conductor itself, which leads to a dramatic impact.

EDDY CURRENTS ARE SIMPLY A MANIFESTATION OF LENZ'S LAW: THE RESPONSE OF A CONDUCTING MATERIAL TO A CHANGING MAGNETIC FIELD.

Consider two current elements dx, at distances r and R from the center. Which element will face greater opposition from the eddy currents, owing to their changing nature??

[Figure: eddy-current loops inside the conductor cross-section]

Yes, true: the element lying closer to the center, since more loop area is available for the eddy currents there. This difference in eddy-current opposition to different elements causes the current distribution inside the conductor to shift toward the surface, where the eddy-current opposition is least.

A technical account of this skin effect goes as follows:

  1. The flux linked by the current flowing at the center region is more than the elements of current at outer region of cross-section;
  2. Larger flux linkage leads to increased reactance of central area than the periphery;
  3. Hence the current chooses the path of least impedance, that is, the surface region.

The eddy-current phenomenon is quite prevalent in AC systems. Since AC systems are bound to have changing magnetic fields, eddy currents are induced everywhere: in conductors, in transformer cores, in motor stators, etc.

Now, when higher-frequency harmonic components are present in the current, the skin effect becomes greatly magnified: most of the current takes the surface path, as if the central region were unavailable. This is equivalent to a reduced cross-section, i.e. increased resistance, and hence magnified Joule heating (I²R). Thus heating increases considerably through these layer-upon-layer reasons (one leads to another).
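To put numbers on this, here is a short Python sketch computing the standard skin-depth formula delta = sqrt(rho / (pi · f · mu)) for copper at the fundamental and at the 5th harmonic (the constants are textbook values; the function name is our own):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m (~ copper's mu)
RHO_CU = 1.68e-8           # copper resistivity at room temp, ohm-m

def skin_depth(freq_hz, rho=RHO_CU, mu=MU0):
    """Depth at which current density falls to 1/e of its surface value."""
    return math.sqrt(rho / (math.pi * freq_hz * mu))

d_50 = skin_depth(50)     # ~9.2 mm at the 50 Hz fundamental
d_250 = skin_depth(250)   # ~4.1 mm at the 5th harmonic
```

Since delta scales as 1/sqrt(f), the 5th harmonic is squeezed into a layer sqrt(5) times thinner than the fundamental, which is exactly the "reduced cross-section, increased resistance" effect described above.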

Other grave effects include false tripping and unexplained failures due to the mysterious harmonic resonance.

All of this motivated us to build our own harmonic analyzer; follow up in the next blog.

Wonder, Think, Create!!!

Team CEV

  

Blockchain and Security Blog Series

Reading Time: < 1 minute

This blog series is meant for everyone willing to take a look under the hood of blockchain and security systems. It doesn't matter whether you are a freshman or a senior; do a diligent reading and try to comprehend what is written.

It is especially meant to show a "Road not Taken" by the undergrads of the college. Hope this insight helps!!!

In case of any query, shoot an email to aman0902pandey(@)gmail.com


Authors

Aman Pandey (BTech III)
Kaushik Chandra (MSc Physics I)
Gaurav Kumar (BTech I – ECE)

Let’s Torrent

Reading Time: 4 minutes

We have all used this technology to download a favorite movie that wasn't available elsewhere. It is one of the most impeccable techs in the world of data sharing ever conceived and brought to reality.

Definition               

"BitTorrent is a communication protocol for peer-to-peer (P2P) file sharing, used to distribute data and electronic files over the Internet in a decentralized manner."

The protocol came into existence in 2001 (thanks to Bram Cohen) as an alternative to the older single-source, multiple-mirror technique for distributing data.

A Few terms

  • BitTorrent or Torrent: BitTorrent is the protocol, as per its definition, whereas a torrent is the initiating file that holds the metadata of the shared file.
  • BitTorrent clients: computer programs that implement the BitTorrent protocol. Popular clients include μTorrent, Xunlei Thunder, Transmission, qBittorrent, Vuze, Deluge, BitComet, and Tixati.
  • Seed: a peer that has a complete copy of the file and uploads it to others.
  • Seeding: uploading the file to other peers after one's own download is finished.
  • Peer: can refer either to any client in the swarm or specifically to a downloader, a client that has only parts of the file.
  • Leecher: similar to a peer, but these guys have a poor share ratio, i.e. they don't contribute much to uploading and only download files.
  • Swarm: the group of peers.
  • Endgame: an algorithm applied for downloading the last pieces of a file. (Not Taylor Swift's "End Game".)
  • Distributed Hash Table (DHT): a decentralized distributed lookup system. In layman's terms, it maps keys to peers, letting clients find each other without a central tracker.

Working

Let’s have the gist of what happens while torrenting.

The following GIF explains this smoothly.

Let’s Torrent

First, the server sends pieces (colored dots) of the file to a few users (peers). After successfully downloading a piece, a user is ready to act as a seeder, uploading that piece to other users who need it.

As each peer receives a new piece of the file, it becomes a source (of that piece) for other peers, i.e. the user becomes a seeder, giving a sigh of relief to the original seed, which no longer has to send that piece to every computer wanting a copy.

In this way, the server load is massively reduced and the whole network is boosted as well.

Once a peer has finished downloading the complete file, it can in turn function as a seed, i.e. start acting as a source of the file for other peers.

Speed comparison:
Regular download vs BitTorrent Download

Download speed for BitTorrent increases as more peers join the swarm. It may take time to establish connections, and for a node to receive sufficient data to become an effective uploader. This approach is particularly useful in the transfer of larger files.

Regular download starts promptly and is preferred for smaller files. Max speed is achieved promptly too.

Benefits over regular download

  • Torrent networking doesn't depend on a central server; the load is distributed among the peers. Data is downloaded from peers, which eventually become seeds.
  • Torrent files are open source and ad-free. An engrossing fact: TamilRockers use torrents to play Robin Hood with pirated movies and songs, which is, of course, an offence.
  • Torrent judiciously uses upload bandwidth to speed up the network: after downloading, a peer's upload bandwidth is used to send the file to other peers. This reduces the load on the main server.
  • A file is broken into pieces, which helps in resuming a download without any data loss; this in turn makes BitTorrent especially useful for transferring larger files.

Torrenting or infringing?

Using BitTorrent is legal, though downloading copyrighted material isn't. So torrenting, by itself, isn't infringing.

Most BitTorrent clients DO NOT support anonymity; the IP addresses of all peers are visible in a firewall program. No need to worry though: the Indian govt. has clarified that merely streaming a pirated movie is not illegal.

Talking about the security concerns, each piece is protected by a cryptographic hash contained in the torrent descriptor. This ensures that modification of any piece can be reliably detected, and thus prevents both accidental and malicious modifications of any of the pieces received at other nodes. If a node starts with an authentic copy of the torrent descriptor, it can verify the authenticity of the entire file it receives.
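The per-piece hashing described above can be sketched in a few lines of Python. This is a toy illustration (tiny piece size, our own function names), not a real client; BitTorrent v1 stores a SHA-1 hash of every fixed-size piece in the torrent descriptor, and each received piece is checked against it:

```python
import hashlib

PIECE_SIZE = 4  # toy piece size in bytes; real torrents use 16 KiB to several MiB

def piece_hashes(data, piece_size=PIECE_SIZE):
    """SHA-1 digest of every piece, as stored in a torrent descriptor."""
    return [hashlib.sha1(data[i:i + piece_size]).digest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece, index, hashes):
    """Check a received piece against the trusted descriptor hashes."""
    return hashlib.sha1(piece).digest() == hashes[index]

original = b"hello world!"
trusted = piece_hashes(original)        # 3 pieces: b"hell", b"o wo", b"rld!"
good = verify_piece(b"hell", 0, trusted)      # authentic piece -> accepted
bad = verify_piece(b"h4ck", 0, trusted)       # tampered piece -> rejected
```

This is why a node that starts with an authentic descriptor can trust the whole file, no matter which peers the pieces came from.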

Further Reading:                    

IPFS is not entirely new but is still not widely used.
Read it here on medium.

Written by Avdesh Kumar

Keep Thinking!

Keep Learning!

TEAM CEV

IoTOGRAPHY

Reading Time: 5 minutes

IoT Overview

We are living in a world where technology is developing exponentially. You might have heard the term IoT, the Internet of Things. You might have heard about driverless cars, smart homes and wearables.

The Internet of things is a system of interrelated computing devices, mechanical and digital machines provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction

IoT is also used in many places such as farms, hospitals and industries. You might have heard about smart-city projects too (in India). We use lots of sensors, embedded systems, microcontrollers and other devices, connecting them to the internet to use their data and improve our current technology.

Our sensors capture lots of data, which is used further depending on the user or owner. But what if I told you this technology can be harmful too? It may or may not be safe to use. How?

The data travelling from source to destination in an IoT system can be intercepted in between, and can be altered too. This is harmful when the data is important: for example, a patient's reports generated using IoT could be intercepted and altered so that the doctor cannot give the correct treatment. Likewise, some IoT devices may be used by the Army to transfer very secret data; if that leaked, it could create trouble for the whole country.

The Information-technology Promotion Agency of Japan (IPA) has ranked “Exteriorization of the vulnerability of IoT devices” as 8th in its report entitled “The 10 Major Security Threats”.

So, can we just stop using IoT? No, we can't. We have to secure our data, i.e. encrypt it, so that an eavesdropper can never know what we are transferring.

Cryptography Overview :

Cryptography is a method of protecting information and communications through the use of codes, so that only those for whom the information is intended can read and process it.

There are mainly two types of encryption methods.

  1. Symmetric key
  2. Asymmetric key 

The symmetric-key method uses the same secret key to encrypt and decrypt data, while the asymmetric-key method has one public key and one private key. The public key is used to encrypt data and is not secret; anyone can have it and use it to encrypt, but only the private key (of the person whose public key was used) can decrypt the resulting ciphertext.

In cryptography, we usually have a plaintext, and we use functions, tables and keys to generate the ciphertext, depending on our encryption method. To make a data exchange fully secure, we need a good block cipher, a secure key-exchange algorithm, a hash algorithm and a message authentication code.


Block cipher – It is a computable algorithm to encrypt a plaintext block-wise using a symmetric key. 

Key Exchange Algorithm – It is a method to share a secret key between two parties in order to allow the use of a cryptography algorithm. 

Hash Algorithm – It is a function that converts a data string into a numeric string output of fixed length. The hash data is much much smaller than the original data. This can be used to produce message authentication schemes.

Message Authentication Code (MAC) – It is a piece of information used to authenticate the message. Or in simple words, to check that the message came from the expected sender and the message has not been changed by any eavesdropper.   
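A message authentication code is easy to demonstrate with Python's standard library. A minimal HMAC sketch (the key and message below are made up purely for illustration):

```python
import hmac
import hashlib

secret = b"shared-secret-key"          # assumed pre-shared symmetric key
message = b"door opened 42 times"      # hypothetical sensor record

# Sender computes the MAC and transmits it alongside the message.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC and compares in constant time.
ok = hmac.compare_digest(
    tag, hmac.new(secret, message, hashlib.sha256).hexdigest())

# A tampered message produces a different MAC and is rejected.
tampered = hmac.compare_digest(
    tag, hmac.new(secret, b"door opened 9000 times",
                  hashlib.sha256).hexdigest())
```

Without the secret key, an eavesdropper cannot forge a valid tag for an altered message.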

NOTE: You might wonder why we don't just send the data itself using key-exchange algorithms, when they are reliable enough to share secret keys. You can look it up, but in short: it is neither reliable nor secure to share bulk data using key-exchange algorithms.

LightWeight Cryptography:

Encryption is already applied at the data-link layer of communication systems such as cellphones. Even so, encryption at the application layer is effective for providing end-to-end data protection from device to server, and for ensuring security independently of the communication system. But then the encryption runs on the processor handling the application, using whatever resources are left over, and hence should be as lightweight as possible.

There are several constraints required to achieve encryption in IoT.

  1. Power Consumption
  2. Size of RAM / ROM
  3. Size of the device
  4. Throughput, Delay

Embedded systems are available in the market with 8-bit, 16-bit or 32-bit processors, each with its own uses. Suppose we implement a system of automated doors at a bank that open and close automatically and also count how many people enter or leave. We want to keep this record secret and store it in the cloud. Using 1 GB of RAM and a 32/64-bit processor with generous ROM just to ensure the privacy of this data makes no sense: we would need more space for the setup and would spend far more money than necessary, when the same thing can be achieved with a cheaper RAM, ROM and processor.

Keeping the above points in mind, the conventional cryptography used in mobile phones, tablets, laptops/PCs and servers is not feasible in IoT. A separate field, "lightweight cryptography", has had to be developed for sensor networks, embedded systems, etc.

Applying encryption to sensor devices means the implementation of data protection for confidentiality and integrity, which can be an effective countermeasure against the threats. Lightweight cryptography has the function of enabling the application of secure encryption, even for devices with limited resources.


Talking about AES: it typically takes a 128-bit key with a 128-bit block size and uses 10 rounds of steps such as SubBytes, ShiftRows, MixColumns and AddRoundKey. Implementing this requires a good amount of space, processing speed and power. We could implement it in IoT with a reduced key length or block size, but such a weakened AES would then take less than 30 minutes to break.


There are many lightweight cryptographic algorithms, such as TWINE, PRESENT and HIGHT. Discussing all of them would require a series of blogs, but I am adding a table comparing some lightweight ciphers. You can observe that changing the block size from 64 to 96 bits creates a huge difference in power consumption and area requirement.

Lightweight cryptography has received increasing attention from both academia and industry over the past two decades. There is as yet no standard lightweight cryptosystem the way AES is the standard in conventional cryptography; research is still going on. You can follow the progress at https://csrc.nist.gov/Projects/lightweight-cryptography.

The whole idea behind this blog is to give an overview of lightweight cryptography. 🙂

Author: Aman Gondaliya

Keep reading, keep learning!

TEAM CEV

FALL OF USSR

Reading Time: 5 minutes

1. WHAT WAS USSR?

The Union of Soviet Socialist Republics, or USSR, grew out of the 1917 Russian Revolution against the absolute Tsar and was formally established in 1922. It comprised 15 subnational Soviet republics, including Russia, Armenia, Georgia, Kazakhstan, Turkmenistan, Uzbekistan, Kyrgyzstan, Tajikistan, Belarus, Ukraine, and the Baltic countries of Estonia, Latvia, and Lithuania.

It was established as a communist country. As its name suggests, it was a union of soviets based on the concepts of socialism. "Soviet" is a Russian term for a council that makes laws and works as a legislature. Every nation in the union had its own soviet as a law-making body, like a parliament. These soviets were united under a centralized economy and planning in the capital, Moscow.

The USSR was the first to introduce the five-year-plan concept, which worked very well in the beginning. There was one-party rule, and every representative had to come from the Communist Party. A hierarchy of soviets was established, starting from workers' and agricultural soviets, up through industrial soviets and regional soviets, to the CENTRAL SOVIET IN MOSCOW.

As it was a communist union, private businesses and industries were strictly prohibited, and all property belonged to the state.

2. GEOGRAPHICAL LOCATION


The Soviet Union was the largest country in the world, alone accounting for one-seventh of the total landmass on the planet. This huge size posed great challenges to the USSR: controlling the whole territory, with people of many different ethnicities, was a difficult task.

3. CAUSES OF DISINTEGRATION

  • SPACE RACE: The USSR spent billions of roubles to compete with the USA in space exploration. It achieved the first satellite (SPUTNIK), the first animal in space (LAIKA), and the first cosmonaut in space (YURI GAGARIN). 
  • ARMS RACE: After WW2, the Soviets were terrified of their biggest rival, the USA, so they spent billions to maintain a huge army, build new technologies, and assemble the biggest nuclear arsenal, driven by the fear of MUTUALLY ASSURED DESTRUCTION. Because of this, the economy of the USSR shrank. 
  • COLD WAR (1945-1991): As a communist nation seeking to spread communism worldwide, the USSR supported Cuba, North Korea, Vietnam, Laos, China, and many other communist nations and political parties, providing army, technology, and financial support. 
  • AFGHAN WAR (1979-1989): The USSR went to war with the MUJAHIDEEN in Afghanistan because it wanted sea access for efficient trade and exports, but the Mujahideen opposed the puppet communist party of Afghanistan. In this war, 15,000 Soviet soldiers were killed and over 50,000 Red Army soldiers were severely wounded. This drained vital resources and diminished the prestige of the USSR and of MIKHAIL GORBACHEV. 
  • DECLINE IN ECONOMY: In the USSR, the worst drawback of communism appeared: a continuous decline in the economy from 1928 to 1987. Tight state control over the economy and the absence of a free market were draining the USSR's economic strength. 
  • OTHER REASONS: The USSR had become a totalitarian state under STALIN, a ruthless dictator. Under Stalin, ordinary citizens of non-Russian ethnicity, as well as officers and generals of the Red Army, were murdered. He was known to be worse than Adolf Hitler.

During the formation of the USSR, Lenin believed that people of different ethnicities (Russians, Kyrgyz, Uzbeks, Tajiks, etc.) would slowly assimilate with one another. But these non-Russian peoples did not assimilate as Lenin believed, and Stalin killed those who differed in ethnicity. This created a demand for independence in the sub-Soviet states.

4. EVENTS

  • TWO POLICIES: To make socialism more efficient and bring the economy out of stagnation, Mikhail Gorbachev launched two policies.

GLASNOST: It means "openness" in Russian. It granted freedom of speech to the media and intellectuals, easing media censorship. Due to glasnost, the media started criticizing the Gorbachev government itself and spreading the demand for independence in the sub-national soviets. 

PERESTROIKA: It means "reformation". It changed the Soviet political and economic structure, allowing elections, foreign investment, and privately owned businesses, although initially with restrictions. The Communist Party itself did not change, but now there could be more than one candidate in elections.

These two policies were implemented to improve the condition of the USSR but ended up contributing to its dissolution.

  • EASTERN EUROPE SCENARIO: The relaxed policies led to demands for sovereignty and independence in eastern and central Europe and also in the sub-national soviets. 

In Poland, Bulgaria, and Romania, the communist governments were puppets of the USSR and took orders from Moscow. But due to these policies, there was a demand to end the USSR's influence in these nations.

  • 1989 – YEAR OF REVOLUTION: 

    (THE DOMINO EFFECT) 

From the summer of 1989 until 1991, there were revolutions in Poland, Hungary, East Germany, Czechoslovakia, Bulgaria, Romania, and Yugoslavia, one after another. These were nationalist movements seeking independence from USSR influence and from the communist puppet governments.

 

Gorbachev did not send the Red Army to suppress the protests or movements because he thought they would not harm the fate of the USSR. In December 1989, Bush Sr. announced the end of the Cold War, as the communist governments in eastern European nations were slowly collapsing. The Baltic countries declared their independence from the USSR, believing they had never truly been part of it. 

Throughout 1990-91, one by one, all 15 subnational soviets became independent, and in the re-elections the Communist Party was defeated everywhere.

  • AUGUST COUP (THE LAST ATTEMPT): In August 1991, a group of senior party leaders attempted a COUP D'ETAT, placing Gorbachev under house arrest and demanding the restoration of the USSR. They called in the Red Army to control the public protesting against the coup, but the army refused because this was against the people's will, and so the coup failed.


BORIS YELTSIN (leader of the Moscow unit of the Communist Party) advocated for an independent Russia and opposed the coup. He later became the first elected president of Russia. After the failed coup, the breakup process accelerated.

8 DECEMBER 1991: The presidents of Russia, Ukraine, and Belarus signed the BELAVEZHA ACCORDS, which dissolved the USSR and established the COMMONWEALTH OF INDEPENDENT STATES (CIS) in its place. Gorbachev RESIGNED as president of the USSR on 25 December.

The Supreme Soviet dissolved the Union on 26 December 1991.

Written by: Anirudh Rajpurohit

KEEP LEARNING

TEAM CEV

KOREA- LAND OF MORNING CALM

Reading Time: 6 minutes

The 36-year-old supreme leader of North Korea is again in the news due to rumors of his death. Kim Jong-un is the dictator of the Democratic People's Republic of Korea, a country under dictatorship since 1948. And since 1948, the world has seen the contradictory growth of two nations with the same culture and language. The Korean peninsula was once one country; then a conflict carved it down the middle and created two nations divided by their rulers' opposing ideologies. So what circumstances made one nation divide into two?

GEOGRAPHY OF KOREA

Korea is an eastern country of Asia known as the "Land of Morning Calm". North Korea shares borders with South Korea and China, and a short border with Russia. The peninsula extends some 1,000 km south of the China-Russia border, dividing the Sea of Japan on the east from the Yellow Sea on the west.


ANNEXATION BY JAPAN

The annexation of Korea took place in August 1910, based on the "Japan-Korea Annexation Treaty", which stipulated that the Japanese Empire take over the Korean Empire, making it a colony of Japan. The 1905 Korea-Japan Convention had already made Korea a protectorate of Japan. The reason was that in 1905 the world had seen a major upset by Japan over Russia: the Russo-Japanese War. The foundation of Japan's hegemony over Korea is tied to this war. So how did a nation like Japan, with far less military power than mighty Imperial Russia, win the war?

Russo-Japanese War: 

Between the 19th and 20th centuries, Japan's transformation from an isolationist feudal state into a vigorous modern power drew the attention of the world. Meanwhile, Russia had its own political strategies for east Asia. Japan had already eliminated Chinese power in Korea and gained the Liaodong Peninsula in Manchuria in the Sino-Japanese War of 1895. Russia controlled Siberia, and in 1897 it embarked on railway building on Chinese territory to open it up to commercial exploitation. This famous Moscow-Manchuria rail link, with its potential for economic control, colonization, and military use, caused alarm among Japanese leaders.

As negotiations stalled, on the night of 8 February 1904, Japanese destroyers launched a surprise attack on Russian warships at Port Arthur in Manchuria and Chemulpo (Inchon) in Korea. On 10 February 1904, after the initial assaults had taken place, Japan declared war. Russia was shocked, and overconfident about this war. Russia had formidable weapons and military power, but the problem was transportation: the bulk of the Russian military was around Moscow, and it was not possible to move a large army to Port Arthur without rail. The navy still tried to reach the theatre via the Atlantic and Indian oceans, but on the way it mistook British ships for Japanese ones and attacked them. Britain did not start a war, but it blocked the fleet's routes. Casualties were large on both sides, but in the end Japan won clear victories in Manchuria and at Port Arthur.

Japanese forces were exhausted and low on ammunition, and the country's economy was strained, while Russia could draw on more substantial reinforcements. In 1905, peace negotiations began in the US. The Treaty of Portsmouth, signed in September 1905, recognized Japanese rights in Korea and ceded Port Arthur, Dalny, and the adjacent territory to Japan, along with control over the South Manchurian Railway. It marked the rise of Japan's hegemony over Korea.

The Korean dynasty ruled over a unified Korea for 1,500 years. From 1910 to 1945, the Japanese ruled Korea. There is a view that Japan's 35 years of colonial rule improved Korea's infrastructure, education, agriculture, other industries, and economic institutions, and thus helped Korea modernize. But one should never forget the discrimination and suffering that the Korean people experienced under colonial rule.

DIVISION OF KOREA

Korea chafed under Japanese colonial rule for 35 years, until the end of World War 2. When Japan lost the war, it became clear to the Allied Powers that they would have to take over the administration of Japan's occupied territories, including Korea, until elections could be organized and local governments set up. Korea was not a US priority, but Russia was keen to acquire control of lands to which the Tsar's government had relinquished its claim after the Russo-Japanese War (1904-05).

On Aug. 6, 1945, the United States dropped an atomic bomb on Hiroshima, Japan. Two days later, the Soviet Union declared war on Japan and invaded Manchuria; Soviet amphibious troops also landed at three points along the coast of northern Korea. On Aug. 15, after the atomic bombing of Nagasaki, Emperor Hirohito announced Japan's surrender, ending World War 2. It was now settled that southern Korea would be occupied by the US and northern Korea by Russia. The US wanted to establish democracy in South Korea with a capitalist ideology; Russia wanted communism in North Korea. Without consulting any Koreans, they arbitrarily decided to cut Korea roughly in half along the 38th parallel of latitude, ensuring that the capital city of Seoul, the largest city in the peninsula, would be in the American section. Koreans had hoped for an independent, unified country; the division, made without their input, let alone their consent, dashed those hopes.

IMPACT OF DIVISION:

The 38th parallel was drawn in a bad place, crippling the economy on both sides: most heavy industrial and electrical resources were concentrated north of the line, and most light industrial and agricultural resources to the south. Both North and South had to recover, but they would do so under different political structures. The US appointed the anti-communist leader Syngman Rhee to rule South Korea, while the USSR appointed Kim Il-sung, who had served during the war as a major in the Soviet Red Army.

MODERN NORTH KOREA & SOUTH KOREA:      


It has been more than 70 years since unified Korea split into two. North Korea is now a Stalinist state and is accused of holding hundreds of thousands of people, including children, in political prison camps and other detention facilities across the country. It also receives the lowest ratings for press freedom and government accountability. Life in South Korea, on the other hand, is fueled by an unashamedly loud and proud style of capitalism, and the country is officially a constitutional democracy. After all these years, North Korea is far behind South Korea in every aspect: economy, industry, human resources, technology. After the demise of the USSR, North Korea was vulnerable and insecure; it languished by spending ever more money on the military, imposing isolationist state rules, trading with only selected countries, and neglecting the plight of its citizens.

FUTURE OF PENINSULA

The North Korean regime is concerned foremost with regime survival, so it regards economic reform as potentially destabilizing. Admittedly, the evidence on this point is mixed: certain recent changes in rhetoric, diplomatic openings, and economic policy signal a will to change. South Korean GDP per capita has more than doubled since the end of the 20th century. But in 2020, South Korea stood to lose around 2.9 trillion South Korean won in tourism revenue if the novel coronavirus spread rapidly in the country. A hypercompetitive lifestyle is behind the rising suicide rate, and the country's hard-pressed schoolchildren consistently rank among the world's least happy. South Korea may need to work on its standard of living in the future. The contradictory growth of the two nations leads to the conclusion that the foundation and ideology of a nation and its people are essential for a prosperous future.

REFERENCES

https://www.thoughtco.com/why-north-korea-and-south-korea-195632

https://www.theguardian.com/world/datablog/2013/apr/08/south-korea-v-north-korea-compared

https://www.japantimes.co.jp/opinion/2010/08/29/editorials/the-annexation-of-korea/#.XruaxGgzZNw

http://fightforjustice.info/?page_id=3169&lang=en

https://www.dw.com/en/north-and-south-korea-how-different-are-they/a-43548731

https://encyclopedia.1914-1918-online.net/article/russian-japanese-war

Beginners Machine Learning Explained Simply

Reading Time: 9 minutes

It is predicted that 80% of emerging technologies will have an AI foundation by 2021.

AI has applications everywhere: auto-driving, face recognition, language translation, market prediction, chatbots, text-to-speech, ad and movie suggestions, applications in medical science, game bots, and many more.

Do you want to know how these applications work?

Do you want to make these yourself?

Then this article can help you!

Artificial intelligence is a continuously growing field.

Till now, AI can already:

  • Read: It can read and summarize a long text for you, as it does in the Google search engine.
  • Write: It can make jokes and write poems. An AI-generated novel has even been short-listed for an award.
  • See: Auto-driving cars, facial recognition.
  • Hear and understand: Virtual assistants; some applications can even alert you if they hear a gunshot.
  • Play games: DeepMind's AlphaZero has already beaten the strongest chess engines. And do you know how much time it took to learn chess completely from scratch? Just 4 hours.
  • It can also speak, smell, touch, move, create, debate, and more.

Despite all this, we can say that we are at the beginning of the development of AI, as shown in the picture below.

Image

In this article, my purpose is to spark your interest in artificial intelligence and machine learning, so that, if you like, you can get your hands on this amazing, growing field!

By the end of this article, you will:

  • know what AI and ML are
  • know the similarity between AI and the human brain
  • make a simple model to predict house prices, using a simple but widely used technique
  • and, at last, find some courses you can take to learn machine learning from beginner to advanced.

Note: The model is implemented in Python, but if you don't know Python, you need not worry. You can learn the model first and implement it later.

So, are you excited?

Let's get started!

What is Artificial Intelligence?

In one sentence, it is "the ability of a computer to mimic a human". It's that simple.

The ability of a computer to mimic a human

And machine learning is an approach used in AI that uses models to help computers learn from data: the computer takes data, recognizes patterns in it, and learns from them.

This can be well understood by the example below:

Human Brain and AI

Suppose you are a kid and your parents show you something you don't recognize. They say, "It is X." Some days later, they show it to you again and say, "Hey dear, it's X."

Soon, after your parents have shown you the thing with the label X a certain number of times, you can classify it as X, even if the particular thing is of a slightly different kind.

In the above example, your brain got some data, recognized patterns in it, attached the label X to it, and now it knows what it is. That is how the human brain works. But it is not so for computers: a computer can't know what something is if it has never seen it before.

But with a machine learning model, if you show it similar objects with a label many times, it can recognize a similar new object the next time it sees one. For example, if you show it your face, telling it that it is "you", then the next time it sees you it will recognize you, even if you are wearing glasses or look slightly different.

The reason for the high rise of Machine Learning?

Machine Learning has been used since the 1950s or before, but the 2 main reasons why it has gained so much momentum recently are the following:

  1. Internet and Data: The rate at which we produce data every day is the highest it has ever been.
  2. Computational Power: Computational power increases every year, which allows us to train on bigger data faster.

So that was some info regarding AI and ML. Now you will learn to make a simple but effective ML model.

Your First ML model

We will make a model that predicts house prices. The data used here has each house's sq. foot area and its original price. After training, the model will be able to predict prices for new houses.

Sq. Foot Area    Price
1500             158900
1700             169850
1750             178950
1900             ?? (to predict)
1850             ?? (to predict)

The data is plotted below. If we can draw an approximate line that fits the data, we can predict the price of any house.
Image

Now we will discuss the model to get that approximate line that fits the data.

Model

So, how can you draw this line?

The line that best fits our data will have the least possible sum of vertical distances from all the points to the line.

Thus, the approximate line can be found by taking the sum of the vertical distances from all the points to a candidate line and making that sum as small as possible.

In machine learning terms, we call this sum the cost.
Image
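To make the idea concrete, here is a small sketch of how the cost of one candidate line could be computed on the three training houses from the table above; the slope and intercept guesses below are arbitrary illustrations, not fitted values.

```python
# Cost (mean absolute vertical distance) of a candidate line Y = a*X + b,
# measured on the three training houses from the table above.
areas  = [1500, 1700, 1750]
prices = [158900, 169850, 178950]

def cost(a, b):
    """Average |prediction - actual| over all training points."""
    m = len(areas)
    return sum(abs((a * x + b) - y) for x, y in zip(areas, prices)) / m

# A rough guess at the line versus a clearly bad (all-zero) line:
print(cost(100, 5000))  # small average error in dollars
print(cost(0, 0))       # much larger cost: average of the raw prices
```

A better-fitting line is simply one with a lower value of this cost; the model below searches for it automatically.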

To get the line, we first need parameters for the line.

Let's say they are represented by a Theta array:

theta = [ [a], [b] ]

And thus the line will be Y = aX + b, with X as the sq. ft. area and Y as the price.

and our data is:

data = [ [X1, Y1], [X2, Y2], […, …] ]

We will separate the data into inputs X and targets Y, and then append a column vector of 1's to X:

X = [ [X1, 1], [X2, 1], […, …] ]

Y = [ [Y1], [Y2], […] ]

so that the matrix multiplication of X and Theta gives us Y.

image

Now,

Step 1 : Cost

Here, the cost is the mean absolute error: Cost = (1/m) × sum( |Z − Y| ), where Z = X·Theta is the vector of predictions and m is the number of data points.

Image

Step 2 : Minimize Cost

The value of the cost depends on the parameter vector Theta.

We will initialize Theta as [[0], [0]], which gives a large initial cost value.

How do you think we can minimize the cost?

How about finding a local minimum?

To find the local minimum, we will differentiate the cost w.r.t. Theta. This provides us with the slope.

Image

How did the above equation come about? Find the derivative of the cost w.r.t. Theta. Hint: first differentiate the cost with respect to (Z − Y), then multiply by the derivative of (Z − Y) with respect to Theta. The transpose of X is taken to make the matrix multiplication work.

Update Theta as: Theta = Theta − alpha × d_Theta

Here, alpha is a small constant, called the learning rate.

By performing the above 2 steps many times, we will approach the local minimum of the cost.

How?
  1. If the slope, i.e. d_Theta, is positive, then the Theta value will decrease, and thus the value of the cost will decrease.
  2. If the slope is negative, then the Theta value will increase and the cost value will still decrease, leading the cost closer to the local minimum.

Image

If we perform the above 2 steps many times, let's say 'iterations' number of times, we will reach the local minimum.

Image

The implementation is like this:

loop from 1 to iterations times:
    # This will give us the predictions
    Z = matrix_multiplication(X, Theta)

    Cost = (1/m)*sum( abs(Z - Y) )

    # This update will take us closer to the local minimum
    d_Theta = (1/m)*matrix_multiplication(X.T, Z - Y)
    Theta = Theta - alpha*d_Theta

If you find the above two steps a bit difficult to understand, don't worry: it will become clearer as you implement them yourself. I recommend thinking them through thoroughly while implementing.

So, you have seen how our model looks like and how to build it.

Let us now code our model.

You will see how short it is, yet it does some great things.

Full Implementation in Python :

I have used an IPython Jupyter notebook to write the code, and I recommend the same for every machine learning model.

The dataset I used to train the model is available here: Dataset

import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('data.txt', delimiter = ',', dtype = int)

X = data[:, :1]
Y = data[:, 1].reshape(X.shape[0], 1)

X = np.hstack((X, np.ones((X.shape[0],1)) ))
# Visualization of Dataset 
plt.scatter(X[:, 0], Y)
plt.show()

Image

def model(X, Y, alpha, iterations):
    
    cost_list = []
    m = Y.shape[0]
    theta = np.zeros((X.shape[1],1))
    
    for i in range(iterations+1):

        A = np.dot(X, theta)
    
        cost = (1/m)*np.sum(np.abs(A - Y))
        
        d_theta = (1/m)*np.dot(X.T, A-Y)
        
        theta = theta - alpha*d_theta
        
        if(i % (iterations/10) == 0):
            print("cost after", i, "iterations is :", cost)
            
        cost_list.append(cost)
        
    return theta, np.array(cost_list)
    
theta, cost_list = model(X, Y, alpha = 0.00000005, iterations = 50)

Image

Do you see how the value of the cost decreases and then almost remains constant? This shows that it has reached the local minimum.

Our model is trained! Let us see how it predicts!

new_houses = np.array([[1547, 1], [1896, 1], [1934, 1], [2800, 1], [3400, 1], [5000, 1]])
for house in new_houses :
    print("Our model predicts the price of house with", house[0], "sq. ft. area as : $", round(np.dot(house, theta)[0], 2))

Image

Congratulations, you have implemented your first Machine Learning model!

This was a good implementation for a beginner. Very nice work!

Here we used only 1 feature of the data: the house sq. ft. area. Real-world applications use many more features, such as garage area, bathroom area, total number of rooms, locality, furniture quality, etc.

If the number of features increases, the only thing you need to change is the parameter Theta. If the total number of features is N, make Theta an (N+1, 1) vector, and everything else remains the same. In our case, N was 1, i.e. the house sq. ft. area.
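As a quick sketch of that extension (the second feature and its values here are invented for illustration), with N = 2 features only Theta's shape changes:

```python
import numpy as np

# Hypothetical data: sq. ft. area plus an invented second feature (rooms).
features = np.array([[1500, 3],
                     [1700, 3],
                     [1750, 4]], dtype=float)

# Append the column of 1's exactly as before; only Theta's size changes.
X = np.hstack((features, np.ones((features.shape[0], 1))))
theta = np.zeros((X.shape[1], 1))   # (N+1, 1) = (3, 1) for N = 2 features

predictions = np.dot(X, theta)      # the same X·Theta step as before
print(theta.shape, predictions.shape)
```

The cost and gradient-descent steps are unchanged; the matrix shapes simply grow with the number of features.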

The model you just implemented is a very famous one that machine learning engineers use all the time: linear regression, trained here with gradient descent. When you take an actual machine learning course, you will become more familiar with these terms.

There are many other models in Machine Learning, and which one to use depends on the application we are trying to build and on the dataset.

The most famous of them are Neural Networks.

Courses you can take to become a Machine Learning expert:

  1. Deep Learning Specialization on Coursera – by Andrew Ng. This specialization teaches you neural networks. Its projects are very rewarding: image recognition, language translation (Spanish to English), emojifying text. I am sure you will love doing them yourself. This is more suitable if you are interested in making AI applications yourself.

    You can audit the course and learn everything for free.

    You can check the course out here: Deep Learning Specialization

  2. Machine Learning A-Z, a Udemy course. This course teaches you different machine learning models and data pre-processing. The implementation uses Python's scikit-learn library, which has built-in models; it is as if you can access a whole model in one line of code. This is more suitable if you want to step into competitive projects.

    You can check this out here: Machine Learning A-Z

I hope you enjoyed the implementation of the model and learned something valuable from this blog post.

Cheers !!

CEV - Handout