
Farm Bills 2020

Reading Time: 6 minutes

Introduction

Let us talk about one of the most heated topics of current Indian politics in the most apolitical way possible: the Farm Bills 2020. As you go through this blog, you will learn about the pros, cons and other aspects of the bills that the government claims will revolutionize the Indian agricultural sector. On diving deeper, one realizes that the bills are about much more than just pros and cons. Sometimes things look pretty on paper but not in the real world, and vice versa. Let us first understand the current system of Indian agriculture: the Mandi System.


The APMC and MSP

In the present system, there are two ways farmers can sell their crops in the market. The first is through the Agricultural Produce Market Committee (APMC), managed by licensed traders and regulated by state governments. The sole purpose behind the establishment of APMCs was to protect farmers from exploitation by lenders, landlords and retailers. Consisting of licensed traders who are supposed to buy the crops after an auction, the APMC makes sure that farmers receive the best price for their produce. These traders later sell it to retailers at a significant margin. The APMC acts like a middleman between the farmers and the retailers and earns good profits. As all these trades are taxed, it is one of the major revenue sources for state governments. But eventually, corrupt middlemen started forming cartels and buying the crops after mutually deciding a price far lower than before, keeping major portions of the margin in their own pockets. This led to the exploitation of farmers, defying the very purpose the APMCs were supposed to serve.

But there still exists a ray of hope offered by the government. It provides farmers a second option: the MSP, or Minimum Support Price, the price at which the government assures it will buy farmers' produce if the APMC traders are not reasonable.


The Revolutionized Market

Enough of the old, outdated systems. Let us bring some modernization here. Imagine if, in place of the old, messed-up APMC mandis, there were huge storage tanks and warehouses owned by private companies, storing high-quality commodities produced with the most high-tech, modernized systems of cultivation and sold in supermarkets instead of untidy sabzi mandis. The farmers are finally earning the profits they deserve, thanks to the elimination of APMC mediators. We are getting great deals on food everywhere in the country, thanks to high-tech storage facilities and no wastage. It seems great, right? Everyone is happy. This is the moment when we stop imagining and start thinking, the part that the government seems to forget every time. But before that, let us first have a very brief look at the ordinances passed:

The Farmers’ Produce Trade and Commerce (Promotion and Facilitation) Bill 2020: allows intra-state and inter-state trade of farmers’ produce beyond the physical premises of APMC markets.

Farmers (Empowerment and Protection) Agreement of Price Assurance and Farm Services Bill, 2020: creates a framework for contract farming through an agreement between a farmer and a buyer prior to the production or rearing of any farm produce.

The Essential Commodities (Amendment) Bill, 2020: allows the central government to regulate the supply of certain food items only under extraordinary circumstances (such as war and famine). Stock limits may be imposed on agricultural produce only if there is a steep price rise.

Do you see any flaws in this? It seems like even if you and I went out there and started farming, it might not be much of an issue. But here we are talking about the majority of Indian farmers, who are uneducated and poor, 85% of whom own less than 2 hectares of land, and who would be far from having an equal say in contracts designed by the profit-making strategists sitting in the air-conditioned cabins of huge corporates that will hardly think of the farmers' well-being. Here come exploitation and slavery again. But the farmers would still have the option of selling their crops to the government at MSP, right? The government says they would, but at the same time it didn't mention this in the ordinances. This is where the controversies begin.

We are all well aware of the most common strategy that private companies will use to lure the farmers: initially they will offer great prices and attractive contracts to stay alive in the competition. This will result in farmers choosing private contracts every time, dismantling the APMC and MSP system. The government's warehouses will become liabilities and will stop receiving investments. Once the MSP scheme becomes totally inefficient, farmers will be left with no option but to go for contract farming every time. This is when the private giants resize their pockets to fit in huge chunks of money, needless to say, by taking a part of the farmers' share. These established monopolies will jeopardize everything: farmers being exploited, and fluctuating prices of food essentials, thanks to no prohibition on hoarding, leading to the middle class' suffering. This time the government cannot be relied upon, as it will already have huge tax revenues rolling in.

Till now, we have seen two scenarios, both of them imaginary. The government keeps promoting the first one and the opposition the second. So where does the reality lie? It appears to lie somewhere in between, depending greatly on how the bills are implemented, how actively the government participates in these procedures, and whether it considers itself responsible for the consequences at the ground level.


Arising Questions

With middlemen removed, only GST will be imposed on sales, so what about state government revenues? The bills will bring the latest technologies and professional farming, encouraging cash crops, but since farmers cannot own this equipment themselves, will this lead to slavery? Farmers will get better prices for their yield, but will they continue to get the same in the future?

Contract farming will assure good prices independent of market conditions, but will farmers have enough power to take legal action if required? The government guarantees MSP, so why not write it into the ordinance? One Nation One Market, but can farmers afford the transportation cost? Allowing stocking will help reduce wastage and even out distribution, but what about black marketing? If the system failed badly in the USA and Europe, how would it succeed in India?

These are the worries that led the farmers to hop on their tractors and rally till India Gate, lodge protests, and block roads and railway tracks. But there are political reasons behind that as well, and we can’t deny that. Most of us have already lost trust both in the opposition and the ruling party as everything they do appears to be for some kind of propaganda or the party’s interest only.         

Way Out/ Conclusion:

1) The government should assure MSP and the purchase of produce for welfare schemes through the APMC, not verbally, but through a proper ordinance.

2) The government should play an active role as a regulator and facilitator for both corporates and farmers. It would be beautiful if the private sector and the government filled each other's gaps and worked hand-in-hand.

3) Arrangement of free and powerful legal assistance for farmers whenever they need it.

4) Reform the APMCs and keep a check on their illegal activities instead of removing them completely.

5) Educating farmers and making them aware of both systems and how to use them. Preventing them from being exploited or misled.

The system does have the potential to revolutionize Indian agriculture. As the farmers' situation in the current system is already deteriorating, a change certainly needs to be made. But each step needs to be taken very carefully, as the USA and Europe have already fallen prey to this.

Battery: An Insight into Electrochemical Energy Storage Technology

Reading Time: 5 minutes

Electronic devices and vehicles are now more than just a basic need for humanity. Nowadays, and especially during pandemic breakouts which reportedly occur once in a span of 100 years, people have become completely dependent on electronic devices of one sort or the other. The wonders that these machines can do are known to most users. Many devices run on a direct connection to a designated power supply, which unfortunately limits the scope for portability. To overcome this limitation, most portable devices run on energy stored in batteries. Owing to this essentiality of the battery, scientific groups around the world have been in the quest for developing the “perfect” battery for the past few decades. Success could be in terms of increased efficiency and energy density, reduced charging time, leak-resistance, and more such capabilities that could overturn the energy sciences and engineering.

Batteries have also found their use in the automotive industry. The heart of an electric vehicle is its battery, which replaces energy from conventional fossil fuels and eliminates tailpipe CO2 emissions; consequently, EVs have the potential to address the issue of climate change. CO2 emissions from conventional fuel-based transportation have certainly been one of the major factors behind such issues, and this fact is backed by statistics: in 1959, global CO2 levels were at 313 parts per million (ppm). Now, just six decades later, they are nearly 100 ppm higher, surpassing 412 ppm in September of 2019. This unprecedented change in atmospheric CO2 levels will continue to create a major impact on Earth in the decades to come. Climate researchers have advised reducing emissions for many years, but humanity continues to release billions of tons of CO2 into the atmosphere every year, despite the uncertainty about a safe upper limit. This can be seen as an economically driven phenomenon owing to the affordability and reliability of fossil fuels as an energy source. Fortunately, renewable energy technologies like wind and solar are already undercutting the cost of conventional fuels in a few places, and with the help of research and innovation, by 2030 it will be cheaper to generate electricity from renewable sources almost everywhere. Presently, we need such a rapid transition to prevent the worst effects of climate change.

Image Featuring Batteries at a production unit.

However, there are times when weather and environmental changes do not allow steady energy generation (for example, not enough sunlight, or a drop in wind speeds), and this is when energy storage techniques come into play. With the rise in the production of energy from renewable sources, storage plays a critical role in providing output during times of high demand and low production. Pumped hydropower has provided the cheapest form of energy storage in the past, and with more such projects this method proves efficient to an extent, as in the case of long-term and bulk energy storage. However, pumped hydropower has geographical limitations, as it demands a difference in elevation that is not found at all locations. Electrochemical storage, on the other hand, is seen as much more suitable for uninterrupted power supply, with the ability to provide load shifting. Electrochemical storage has gained popularity in recent years largely due to the fall in the prices of Li-ion batteries, which dove 85% in cost from 2010 to 2019.

The energy storage market, in a broad sense, can be split into two sectors: stationary and mobile. The stationary storage sector concentrates on cost per unit of energy (the $/kWh figure of merit) while also addressing safety issues. The more dangerous an option is, the more rigorous a safety system it requires, as in the case of nuclear power, which in principle is cheap but in practice requires multiple safety constructions that increase the cost. Mobile storage, by contrast, is more concerned with cost per unit of energy density ($/kWh/kg); it favours inherently safe systems, since any additional safety structure adds to the overall weight of the system.

Materials and morphology are the two main strategies adopted by battery engineers to achieve low-cost cells. Materials that are abundant, cheap and able to be economically engineered into the appropriate form are usually used in low-cost cells. However, some materials with a high production cost but made of relatively abundant elements, like carbon in the case of carbon nanotubes (CNTs), may become economically viable once large-scale production becomes common. Furthermore, materials like CNTs have been receiving huge attention from researchers across multiple fields.


Periodic Table: The materials that are available for the production of a low-cost cell.

Several factors are considered while choosing a metal for use in a battery, the most prominent being the reduction potential. Consider the case of lithium metal, which comprises roughly 0.002% of the Earth's crust. The obvious reason why Li-ion batteries have occupied most of the energy storage market is hidden in lithium's Standard Reduction Potential (SRP). Lithium has the most negative SRP among metals (-3.04 V), followed by the other group I metals and the alkaline earth metals. SRP is an important figure for achieving high voltages in cells; the more negative the standard reduction potential of the metal, the higher the voltage that the cell can theoretically have. Lithium can be seen dominating the market for the next decade or so, but with soaring energy demands and dwindling lithium sources, we are in immediate need of alternative materials. Aluminium-ion batteries seem to be a promising solution, with aluminium being the most abundant metal in the Earth's crust and its theoretical energy density being relatively high. This is where the diagonal relationship can be seen with respect to the chemical properties of such materials.
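
To make the link between reduction potentials and cell voltage concrete, here is a minimal Python sketch. The half-cell values are standard textbook numbers, and the cathode/anode pairings are purely illustrative rather than real battery chemistries:

```python
# Theoretical open-circuit voltage of a cell: E_cell = E_cathode - E_anode,
# where both values are standard reduction potentials (volts vs. the
# standard hydrogen electrode). Illustrative half-cells only.
STANDARD_REDUCTION_POTENTIALS = {
    "Li+/Li": -3.04,   # most negative of the common metals -> highest voltages
    "Al3+/Al": -1.66,
    "Zn2+/Zn": -0.76,
    "Cu2+/Cu": +0.34,
}

def cell_voltage(cathode: str, anode: str) -> float:
    """Return the theoretical cell voltage for a cathode/anode half-cell pair."""
    E = STANDARD_REDUCTION_POTENTIALS
    return E[cathode] - E[anode]

# The more negative the anode's reduction potential, the higher the cell voltage.
print(cell_voltage("Cu2+/Cu", "Li+/Li"))   # ~3.38 V with a lithium anode
print(cell_voltage("Cu2+/Cu", "Al3+/Al"))  # ~2.00 V with an aluminium anode
```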

image depicting battery being used for renewable energy storage

The automotive industry would benefit the most from revolutions in battery manufacturing. Electric vehicles have now begun to occupy a small but significant place in the market. India has a lot to gain from the widespread adoption of e-mobility. But with e-vehicles plying the roads, we would need new infrastructure, like charging stations at multiple checkpoints. As of now, there have been some positive developments regarding e-vehicles in India. NTPC has already commissioned its first EV charging station, with the capacity to charge three vehicles simultaneously. Admittedly, we need more such initiatives to bring down the burden on the economy due to oil imports.

The scientific community, around the world, has been working on multiple promising alternatives, like fuel cells and other water-splitting technologies. Some of the projects undertaken by scientists might seem ambitious and far from being available on a commercial scale, but one needs to understand the different stages of research, development and finally the commercialization.

All these promising technologies have a long way to go before they can be made available in the market. Advancements in material sciences, battery and other energy storage technologies have lots of unexplored corners, and surely the future seems to be positive for humanity and climate.

ANALYSING DEMONETISATION

Reading Time: 4 minutes

It was my 16th birthday. I took a tepid bath, wore my new black shirt with cream-white pants, put on my party shoes and was ready to host a party at a nearby restaurant. I was having fun with my friends at the dining table and was about to have a slice of garlic bread when my eyes caught the television running on the far side near the counter. It was unusual, as the television was not tuned to a music or sports channel, as happens most of the time in restaurants. Rather, the screen flashed breaking news into my eyes. The news quoted some of the words spoken by the Prime Minister of a country with a population of 1.3 billion: the 500 and 1000 rupee notes would no longer be legal tender.


I didn't get those words at that point in time and rather chose to concentrate on my garlic bread. But soon, with the passage of time, we all realized their true meaning and purpose and the extent to which they have impacted our lives. In this blog, I am going to brief you about the biggest demonetisation in the history of the world.

Process of Demonetisation:

The plan to demonetise the 500 and 1000 rupee notes was initiated 6 to 10 months before the announcement, and a report by SBI analysed possible strategies and effects of demonetisation. The RBI started preparing new banknotes in May 2016, and their printing started in October.

A Union Cabinet meeting took place on 8 November 2016 where everyone was informed about the plan by Narendra Modi. Soon after the meeting, Modi announced demonetisation in an unscheduled live national televised address.

ANALYSING DEMONETISATION

It was announced that the demonetised banknotes could be deposited in a bank over a period of the next 50 days, until 30 December 2016. The RBI set a limit for exchanging old demonetised notes for new legal tender notes. The limit was kept at Rs. 4,000 per person till 13 November, increased to Rs. 4,500 till 17 November, and then reduced to Rs. 2,000 till 25 November.

Until 2 December 2016, fuel pumps, government hospitals, railways and airline booking counters, state government recognised dairies and ration stores, and crematoriums were allowed to accept demonetised banknotes. Cash withdrawals from bank accounts were also restricted and a daily limit on withdrawals from ATMs was imposed.

Purpose of Demonetisation

The government claimed that the move would curtail the shadow economy and reduce the use of illicit and counterfeit cash to fund illegal activity and terrorism. The move aimed at converting India from a non-tax-compliant society to a tax-compliant one. It was also aimed at reducing the number of high-denomination notes and boosting the digital payment sector in India. There was also a need to reduce the flow of counterfeit banknotes in the economy.

Why is eliminating black money important?

Curbing black money is necessary, as it causes a regressive distribution of income in society, which widens the gap between the rich and the poor. Black money also leads to the underestimation of the true size of the economy and therefore a distortion of the production pattern.

Black money forms a parallel economy that operates side by side with the real economy. This dual economic pattern pushes inflation up by a certain number of basis points that the government cannot account for, as it has no legal record of that spending.

Outcome of Demonetisation

The government estimated that it would be able to permanently remove 20% of the demonetised banknotes from circulation, but this didn't happen. According to an RBI report in 2018, 99.3% of the demonetised banknotes returned to the banking system.

Initially, after demonetisation, there was a decrease in counterfeit 500 and 1000 rupee banknotes, but in 2017-2018 the number of counterfeit 500 and 2000 rupee banknotes increased compared to the previous year. Therefore, the number of counterfeit banknotes detected experienced no significant change due to demonetisation.

After demonetisation, India witnessed growth in tax collection of 14.6% in FY17 and 17.1% in FY18 as compared to 8.9% and 6.9% in FY15 and FY16 respectively. But seen from a historical perspective, a 14% or even 17% annual increase in direct taxes isn’t extraordinary for the Indian economy.


Conclusion:

Overall, we all know that demonetisation has been a big failure for the Indian economy. Some economists severely criticised the move, while others said the method of implementation was inappropriate. Whatever it was, people had to suffer to a great extent.

From time to time, the government has implemented flawed policies which have caused strong economic disturbances over the past 4 years. Their policies have lacked proper planning and implementation. The rollout of GST was another example of mismanagement on the part of the government. The economy is now bleeding, with a 23.9% YoY decline in GDP for Q1 of FY21. Is it the result of one more misguided policy in the form of the Atmanirbhar Bharat Abhiyaan?

Fire

Reading Time: 3 minutes

Fire is believed to have been discovered by early humans when they accidentally rubbed two stones against each other. Fire wholly changed the living and eating habits of HOMO SAPIENS. We can't even imagine eating our food raw; cooking has made it easier for us to consume, and it's safer as well. Cooking helped in reducing the time and energy spent in digestion, and the digestive fire in our digestive system makes digestion easier for us. According to British primatologist Richard Wrangham, cooking may have played a role in the expansion of our brains. Fire helped us keep warm and survive harsh winters; not to forget, early humans successfully passed through the brutal Ice Age, which had driven many animals extinct. Torches helped us discover the unknown in the dark and also helped keep animals away at night. Early humans were hunter-gatherers and mainly used weapons made of stone, and hence they were called Stone Age humans.


With the beginning of the Iron Age, humans started extensively using iron for weapons and other tools. Fire played a vital role in helping humans mould iron and other metals. It brought revolution and led to the development of many civilizations across the world. As humans discovered new metals, the importance of metallurgy increased drastically, and so did the role of fire in our everyday lives. Various civilizations worshipped fire under different names: it was known as Hephaestus in Greek mythology, Vulcan in Roman mythology, Jacawitz in Mayan mythology, Agnidev in Indian mythology and Zhurong in Chinese mythology. Fire is also associated with the dragon, a legendary serpentine creature appearing in the folklore of many cultures. Fire is an emotion of extremes: extreme anger, extreme confidence, intense fear and, last but not least, extreme jealousy. Fire is deeply rooted in human emotions.


Fire also resembles the Sun: the ultimate power source. It is only because of the Sun that all the environmental cycles on Earth are possible, and hence life itself. The creation of this universe started with fire, and now our daily life revolves around it. Fire can be as tiny as a photon and as massive as the Big Bang, or it can resemble human emotions and desires. Its smart and careful handling decides whether it will lead to construction or destruction.

Thank You

Team CEV

MADE: Masked Autoencoder for Distribution Estimation

Reading Time: 8 minutes

These days, everyone is talking about GANs like BigGAN and StyleGAN, and their remarkable and diverse results on massive image datasets. Yeah, the results are pretty cool! However, this has led to a decline in the research of other generative models like Autoregressive models and Variational Autoencoders. So today, we are going to understand one of these unnoticed generative models: MADE.

Generative models are a big part of deep unsupervised learning. They are of two types—Explicit models, in which we can explicitly define the form of the data distribution, and Implicit models, in which we cannot explicitly define the density of data. MADE is an example of a tractable density estimation model in explicit models. Its aim is to estimate a distribution from a set of examples.

The model masks the autoencoder’s parameters to impose autoregressive constraints: each input can only be reconstructed from previous inputs in a given ordering. Autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability.

Autoregression is a time series model that uses observations from previous time steps as input to a regression equation to predict the value at the next time-step. 
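
As a toy illustration (not taken from the MADE paper), an order-2 autoregressive model predicts the next value from the two previous ones; the coefficients below are invented purely for the example:

```python
import numpy as np

# Toy AR(2) process: x_t = c + a1 * x_{t-1} + a2 * x_{t-2} + noise.
c, a1, a2 = 0.5, 0.6, -0.2          # illustrative coefficients, not fitted to real data
rng = np.random.default_rng(0)

x = [0.0, 0.0]
for t in range(2, 100):
    x.append(c + a1 * x[t - 1] + a2 * x[t - 2] + 0.1 * rng.standard_normal())

# One-step-ahead prediction uses only the previously observed values.
next_value = c + a1 * x[-1] + a2 * x[-2]
print(round(next_value, 3))
```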

Autoencoders

An autoencoder is an unsupervised neural network that learns how to compress and encode data efficiently, and then learns how to reconstruct the data from the reduced encoded representation into a representation that is as close to the original input as possible.

Autoencoder for MNIST

The process of transforming the input x into the latent representation z is called the encoder, and the mapping from the latent variable z to the reconstructed version of the input $\hat{x}$ is referred to as the decoder.

A lower-dimensional latent representation has less noise than the input and contains the essential information of the input image. This information can therefore be used to generate an image that is different from the input image but still within the input data distribution. By computing a dimensionally reduced latent representation z, we ensure that the model is not simply reconstructing the same input image.


Let's suppose we are given a training set of examples $\{x^{(t)}\}_{t=1}^{T}$. Here $x^{(t)} \in \{0,1\}^D$ and each $x_d \in \{0,1\}$, because we are concentrating on binary inputs. Our motivation is to learn a latent representation by which we can obtain the distribution of these training examples using deep neural networks.

Suppose the model contains one hidden layer and tries to learn h(x) from its input x such that, from it, we can generate a reconstruction $\hat{x}$ which is as close as possible to x. That is,

$$h(x) = g(b + Wx)$$
$$\hat{x} = \operatorname{sigm}(c + V\,h(x))$$

where $W$ and $V$ are matrices, $b$ and $c$ are vectors, $g$ is a non-linear activation function and $\operatorname{sigm}$ is the sigmoid function.

The cross-entropy loss of the above autoencoder is

$$\ell(x) = \sum_{d=1}^{D} \left[ -x_d \log \hat{x}_d - (1 - x_d)\log(1 - \hat{x}_d) \right]$$

We can treat $\hat{x}_d$ as the model's probability that $x_d$ is 1, so $\ell(x)$ can be understood as a negative log-likelihood function. Now the autoencoder can be trained using a gradient descent optimization algorithm to obtain optimal parameters (W, V, b, c) and to estimate the data distribution. But this loss isn't actually a proper negative log-likelihood: the implied data distribution $q(x) = \prod_d \hat{x}_d^{x_d}(1-\hat{x}_d)^{1-x_d}$ isn't normalized ($\sum_x q(x) \neq 1$), because each $\hat{x}_d$ depends on the full input $x$. So the outputs of the autoencoder cannot be used to estimate density.
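
As a rough NumPy sketch of the equations above (randomly initialised parameters, no training loop, and ReLU chosen arbitrarily as the non-linearity g):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 784, 500          # input and hidden sizes (MNIST-like, purely illustrative)

# Parameters: W, V are matrices; b, c are bias vectors.
W = rng.normal(0, 0.01, size=(H, D))
V = rng.normal(0, 0.01, size=(D, H))
b = np.zeros(H)
c = np.zeros(D)

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x):
    """h(x) = g(b + W x),  x_hat = sigm(c + V h(x)); g is a ReLU here."""
    h = np.maximum(0.0, b + W @ x)
    return sigm(c + V @ h)

def cross_entropy(x, x_hat, eps=1e-9):
    """l(x) = sum_d [ -x_d log x_hat_d - (1 - x_d) log(1 - x_hat_d) ]"""
    return float(np.sum(-x * np.log(x_hat + eps) - (1 - x) * np.log(1 - x_hat + eps)))

x = (rng.random(D) > 0.5).astype(float)   # a fake binary input
print(cross_entropy(x, forward(x)))
```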

Distribution Estimation as Autoregression

Now we want to impose some property on autoencoders, such that its output can be used to obtain valid probabilities. By using autoregressive property, we can transform the traditional autoencoder approach into a fully probabilistic model.

We can write the joint probability as a product of conditional probabilities using the chain rule:

$$p(x) = \prod_{d=1}^{D} p(x_d \mid x_{<d})$$

Define $p(x_d = 1 \mid x_{<d}) = \hat{x}_d$ and $p(x_d = 0 \mid x_{<d}) = 1 - \hat{x}_d$. So now the loss function from the previous part becomes a valid negative log-likelihood function:

$$-\log p(x) = \sum_{d=1}^{D} \left[ -x_d \log p(x_d = 1 \mid x_{<d}) - (1 - x_d)\log p(x_d = 0 \mid x_{<d}) \right] = \ell(x)$$

Here each output $\hat{x}_d$ must be a function that takes only $x_{<d}$ as input and outputs the probability of observing value $x_d$ at the $d^{\text{th}}$ dimension. Computing the above NLL is equivalent to sequentially predicting each dimension of the input $x$, so we refer to this property as the autoregressive property.

Masked Autoencoders

Since the output $\hat{x}_d$ must depend only on the preceding inputs $x_{<d}$, there must be no computational path between the output unit $\hat{x}_d$ and any of the input units $x_d, \dots, x_D$.

So we want to discard the connection between these units by element-wise multiplying each weight matrix by a binary mask matrix, whose entries that are set to ‘0’ correspond to the connections we wish to remove.

$$h(x) = g\big(b + (W \odot M^{W})\,x\big)$$
$$\hat{x} = \operatorname{sigm}\big(c + (V \odot M^{V})\,h(x)\big)$$

where $M^W$ and $M^V$ are mask matrices of the same dimensions as $W$ and $V$ respectively. Now we want to design these masks in such a way that they satisfy the autoregressive property.

To impose the autoregressive property, we first assign each unit in the hidden layer an integer $m$ between 1 and $D-1$ inclusive. The $k^{\text{th}}$ hidden unit's number $m(k)$ represents the maximum number of input units to which it can be connected. The values 0 and $D$ are excluded because $m(k)=0$ would make it a constant hidden unit connected to no input, while $m(k)=D$ would let it see all $D$ inputs, so that no output could use it without breaking the autoregressive property; either choice makes the unit useless.

Figure: the MADE masking scheme, where the number written in each node is its value of $m(k)$.

There are a few things to notice in the above figure:

  • Input 3 is not connected to any hidden unit, because no output node should depend on it.
  • Output 1 is not connected to any hidden unit; it is estimated from the bias node only.
  • If you trace back from the output to the input units, you can clearly see that the autoregressive property is maintained.

Let's consider a multi-layer perceptron with L hidden layers.

For every layer $l \in \{1, \dots, L\}$, $m^l(k)$ stands for the maximum number of connected inputs of the $k^{\text{th}}$ unit in the $l^{\text{th}}$ layer. In the above figure, the value written in each node of the MADE architecture represents $m^l(k)$.

The constraints on the maximum number of inputs to each hidden unit are encoded in the matrix masking the connections between the input and hidden units:

$$M^{W^l}_{k',k} = 1_{m^l(k') \,\geq\, m^{l-1}(k)} = \begin{cases} 1 & \text{if } m^l(k') \geq m^{l-1}(k) \\ 0 & \text{otherwise} \end{cases}$$

And these constraints are encoded on output mask matrix:

$$M^{V}_{d,k} = 1_{d \,>\, m^L(k)} = \begin{cases} 1 & \text{if } d > m^L(k) \\ 0 & \text{otherwise} \end{cases}$$

where $l$ ranges over the hidden layers $1, \dots, L$, $k$ and $k'$ index hidden units, and $d$ indexes the input/output dimensions, with $m^0(d) = d$ for the input layer. Note that $\geq$ becomes $>$ in the output mask matrix. This is vital, as we need to shift the connections by one: the first output in the ordering ($x_2$ in the figure) must not be connected to any node, as it is not conditioned on any inputs.

We set $m^l(k)$ for every layer $l$ by sampling from a discrete uniform distribution defined on the integers from $\min_{k'} m^{l-1}(k')$ to $D-1$, whereas $m^0$ is obtained by randomly permuting the ordered vector $[1, 2, \dots, D]$.

$M^{V,W} = M^{V} M^{W^L} \cdots M^{W^2} M^{W^1}$ represents the connectivity between the inputs and outputs. Thus, to demonstrate the autoregressive property, we need to show that $M^{V,W}$ is strictly lower triangular, i.e. $M^{V,W}_{d',d}$ is 0 if $d' \leq d$.

 

Let’s look at an algorithm to implement MADE:

Pseudocode of MADE
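
The pseudocode above is from the paper; the snippet below is only a rough NumPy sketch of the same idea (random mask degrees, masked forward pass, a check of the connectivity property, and the cross-entropy loss), not the authors' reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6                     # input dimensionality
hidden_sizes = [8, 8]     # two hidden layers; sizes are arbitrary for the sketch

# 1) Degrees m^l(k): m^0 is a random permutation of 1..D; deeper layers sample
#    uniformly between the minimum of the previous layer's degrees and D-1.
m = [rng.permutation(np.arange(1, D + 1))]
for H in hidden_sizes:
    low = m[-1].min()
    m.append(rng.integers(low, D, size=H))   # integers in [low, D-1]

# 2) Masks: hidden masks use >=, while the output mask uses the strict >.
W_masks = [(m[l][:, None] >= m[l - 1][None, :]).astype(float)
           for l in range(1, len(m))]
V_mask = (m[0][:, None] > m[-1][None, :]).astype(float)   # shape (D, last hidden)

# 3) Random parameters; each weight matrix is multiplied elementwise by its mask.
Ws = [rng.normal(0, 0.1, size=Mk.shape) for Mk in W_masks]
bs = [np.zeros(Mk.shape[0]) for Mk in W_masks]
V  = rng.normal(0, 0.1, size=V_mask.shape)
c  = np.zeros(D)

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def made_forward(x):
    """x_hat_d = p(x_d = 1 | x_<d) for every dimension d, in a single pass."""
    h = x
    for Wl, bl, Ml in zip(Ws, bs, W_masks):
        h = np.maximum(0.0, bl + (Wl * Ml) @ h)      # g is a ReLU here
    return sigm(c + (V * V_mask) @ h)

# 4) Check the autoregressive property: the input-to-output connectivity
#    M^{V,W} = M^V M^{W^L} ... M^{W^1} must contain no path from x_d to x_hat_d'
#    whenever d' comes at or before d in the sampled ordering m^0.
conn = V_mask.copy()
for Mk in reversed(W_masks):
    conn = conn @ Mk
assert all(conn[d_out, d_in] == 0
           for d_out in range(D) for d_in in range(D)
           if m[0][d_out] <= m[0][d_in])

x = (rng.random(D) > 0.5).astype(float)
x_hat = made_forward(x)
nll = float(np.sum(-x * np.log(x_hat + 1e-9) - (1 - x) * np.log(1 - x_hat + 1e-9)))
print(x_hat, nll)
```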

Deep NADE models require D feed-forward passes through the network to evaluate the probability p(x) of a D-dimensional test vector, but MADE only requires one pass through the autoencoder.

Inference

Essentially, the paper was written to estimate the distribution of the input data. The inference wasn’t explicitly mentioned in the paper. It turns out it’s quite easy, but a bit slow. The main idea (for binary data) is as follows:

  1. Randomly generate a vector $x$, set $i = 1$.
  2. Feed $x$ into the autoencoder and generate the outputs $\hat{x}$ of the network; set $p = \hat{x}_i$.
  3. Sample from a Bernoulli distribution with parameter $p$, i.e. set the input $x_i = \text{Bernoulli}(p)$.
  4. Increment $i$ and repeat steps 2-4 until $i > D$.

Inference in MADE is very slow. This isn't an issue during training, because we already know all of $x_{<d}$ when predicting the probability at the $d^{\text{th}}$ dimension; but at inference time we have to predict the dimensions one by one, without any parallelization.
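
A rough sketch of this sampling loop, assuming a trained model exposed as a `made_forward(x)` function like the one in the previous snippet; here it is replaced by a dummy stand-in so the example runs on its own:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 6

def made_forward(x):
    """Stand-in for a trained MADE: returns p(x_d = 1 | x_<d) for every d.
    A real model would run the masked forward pass shown earlier."""
    return np.full(D, 0.5)   # dummy probabilities, purely for illustration

# Ancestral sampling: D sequential passes, one per dimension of the ordering.
x = (rng.random(D) > 0.5).astype(float)   # step 1: start from a random binary vector
for i in range(D):                        # assumes the identity ordering 1..D
    x_hat = made_forward(x)               # step 2: one full forward pass
    p = x_hat[i]
    x[i] = float(rng.random() < p)        # step 3: sample x_i ~ Bernoulli(p)
print(x)
```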


Left: Samples from a 2 hidden layer MADE. Right: Nearest neighbour in binarized MNIST.

Though MADE can generate recognizable 28×28×1 images on the MNIST dataset, it is computationally expensive to generate high-dimensional images from a large dataset.

Conclusion

MADE is a straightforward yet efficient approach for estimating a probability distribution from a single pass through an autoencoder. It is not capable of generating images as good as state-of-the-art techniques (GANs), but it has built a very strong base for tractable density estimation models such as PixelRNN/PixelCNN and WaveNet. Nowadays, autoregressive models are rarely the first choice for image generation, and they remain one of the less explored areas in generative modelling. Nevertheless, their simplicity makes room for the advancement of research in this field.

Ball

Reading Time: 6 minutes

Taking forward the tradition of CEV of having GDs on all sorts of topics, it was an "abstract discussion" we agreed to ponder on…

 “BALL”

No one could have guessed it would be the topic of discussion. Everyone was clueless, rolling their eyeballs, thinking about the absurdity of the case.

Ohh, wait a minute. Eye-BALL. There it is. 

But we were yet to get there. The first thing to pop up in our mind was the game of Cricket. Because this is India, and it’s given that the majority would relate to Cricket when they hear the word ‘Ball.’ 

But our perspective on it was soon going to be altered. At this point, we were unsure of what we would learn, only to end up reconceiving how we observe the world around us.

Have you ever wondered that in our universe (explicitly mentioning the known universe) most of the matter, be it a tiny atom (as we imagine it to be) or the planets, the moons, or the stars, is almost spherical? Or we can say, all of it is in the shape of a ball. Isn't it interesting?

In the case of Planets and other Celestial bodies, they are round because of Gravity. A planet’s gravity pulls from the center to the edges like the spokes of a bicycle wheel. That makes the overall shape of a planet a sphere. But, in general, we can also say that the Sphere is the most stable, most efficient, and widely found state/shape. A sphere has the lowest possible surface area required to bound any given volume. Therefore, it’s the most energy-efficient configuration. The average strength of a sphere is the strongest of any three-dimensional shape, i.e., it doesn’t matter where you apply pressure on its body; the stress will be the same.

Now let’s come back to our very own home, the ‘Pale-blue Dot’ somewhere in the middle of the Cosmos. And also, let’s go back in time.

Human history. It is a perfect example of how the least exciting things can significantly impact our daily lives. Balls have such an effect. The first known use of the word ‘ball’ in English in the sense of a spherical body that we play with, was in 1205 in Laȝamon’s Brut, or Chronicle of Britain in the phrase, Summe heo driuen balles wide ȝeond Þa feldes.

None of us can imagine our childhood without balls. They used to be among our favorite toys, whether cricket balls, smiley balls, footballs, or any other ball. They have been an integral part of our life, fulfilling our needs for enjoyment and entertainment in every possible way. 


Growing up, we became familiar with sports like football, cricket, and basketball, among many others. They helped us grow physically and mentally while making our bodies and minds healthier. Being 90s kids, we can't ever forget the computer mice with small rolling balls inside, and the endless efforts it took to take them out, just to look at them.

Other than these, where do you think you can find them? They're all around you, and also inside you; they're a part of you.

You don’t believe it? 

 

Consider the Ball and Socket joint in our bodies, a joint in which the rounded surface of a bone moves within a depression on another bone, allowing greater freedom of movement than any other joint. Also, the body part you see from is in the shape of a ball, i.e., an Eyeball. 

Balls make our lives easier, as they are also vital components of many machines that we use. Ball bearings, locks, the ball-screw mechanism, robotic joints, pens, etc., all have balls inside them. You might also have noticed it in the food you eat. Yes. Can we not say that there are so many eatables/dishes that take the shape of balls, or even contain the word 'ball' in their name? Here's the list: Laddoo, Gulaab-Jamun, Rasogulla, Cheese Balls, Gol-Gappe, Meat Balls, Hush Puppies, Pakode, Manchurian, and many more.


Let’s have a look at how the balls have inspired and entertained us through Stories, Movies, and Art. 

Balls are also related to dance. Here is how: a ballroom or ballhall is a large room inside a building, the primary purpose of which is holding large formal parties called balls. Traditionally, most balls were held in private residences; many mansions contain one or more ballrooms. Ballroom dance's social origin lies in the European court dances of the 17th and 18th centuries. They are popular mainly in the West and performed by people of all age groups. The 'Collegiate Ballrooms' are designed for students organizing various competitions like the MIT Open Ballroom Dance Competition, Big Apple Dancesport Challenge, Cardinal Classic, Berkeley Classic, Harvard Invitational, etc. This helps spread interest among students and familiarize them with the dance steps. If you are a 'Potterhead', you might remember the 'Yule Ball', a ballroom dancing event organized along with the 'Triwizard Tournament'.


And how can we forget the game that the protagonist and his father played in the same series: "Quidditch". The Quidditch match between Gryffindor and Slytherin was among the most memorable moments from the book. There are three different balls: the Quaffle, the two Bludgers, and the Golden Snitch. A Quidditch game ends only when someone from one of the two teams catches the "Golden Snitch", which somehow relates to our very lives. Our lives are all about 'catch' and 'drop' moments; when you can catch the right opportunity and overcome all the obstacles in your path, you win, or you go for another try. That is why every one of us can relate to these lines:

“This… is the Golden Snitch, and it’s the most important ball of the lot. It’s tough to catch because it’s so fast and difficult to see. It’s the Seeker’s job to catch it. You’ve got to weave in and out of the Chasers, Beaters, Bludgers, and Quaffle to get it before the other team’s Seeker, because whichever Seeker catches the Snitch wins his team an extra hundred and fifty points, so they nearly always win.”

The ball teaches us life lessons; isn't it amazing? Here is one similar line: “The long game was ended, the Snitch had been caught, it was time to leave the air.”

So, what did we gain here? We got to know how small things (in this case, a ball) create a significant impact on our lives. They are present around us everywhere and all the time. They sometimes teach us valuable lessons. We just need to change our perspective on how we observe the world around us. The key is to stay curious and live in the moment.

We’re leaving with a final quote from the movie – ‘MS Dhoni: The Untold Story’. In a scene, Dhoni is sitting alone and depressed at the Kharagpur Station, and then his superior AK Ganguly said these famous lines: “Life ek Cricket match ki tarah hai aur tum woh match khel rahe Batsman. Life tumhe baari baari ball phenkegi, aur sab ball ek samaan thode na milega. Merit pe khelna hai aur tike rehna hai, scoreboard will keep moving!”

Life is like a cricket match and you're the batsman. It will keep throwing balls at you, and every ball will not be easy to play. What you need to do is just hold your position, and the scoreboard will keep moving. What this means is that there will be numerous difficulties along the way, but you have to withstand them. The better you endure the situations, the farther you will move ahead.

Thank you!!

Team CEV

3 Horizons of Innovation

Reading Time: 6 minutes

- by Aman Pandey

Being in a technical club, we often discuss innovation 💡 . Anyway, it is not just about being in a tech club 🔧 , it is all about being a visionary: you frequently ponder the thought of how an idea comes into existence.

Ever thought about actually building a solution and creating its "actual" value 💸 ? (don't care, it's just an emoji). Value is not always about money; it is about how much of an effect, and how great an effect, it is making on the lives of this magnificent earth 🌏 . Money is just a side effect of creating value.

" A very simple definition of innovation 💡 can be thought of as A match between the SOLUTION 🔑 & the NEED 🔒 that creates some form of VALUE. "

It is all about the game of a GOOD Product strategy, that turns the solution into a value.

Whenever a new solution is launched into society, it cuts across different sets of people 👥 👤 . In fact, there's a chart which will explain things better than anything else.

https://miro.medium.com/max/1540/1*2kIL4HV7-y2MbzfMRmHQAQ.jpeg

You see the first portion of the curve? The Innovators? These are mostly tech enthusiasts 📟 who are crazy about any new technology and just want to innovate. Then come the Early Adopters ☎️ , who actually see some value in the solution. These are the Visionaries 📣 . They are willing to see through to the business and value of a solution. Then comes the Early Majority, known as the Pragmatists 😷 ; they are the first adopters of a technology in the mass market. They always seek to improve their companies' work by obtaining new technology. The rest are the Late Majority, popularly known as skeptics, who usually look out for recommendations, and then the Laggards, idk what they are called.

So there are certain strategies involved in the phases of transitioning an innovation into a startup and then into a company. This process is known as Customer Development.

Oh wait ⚠️, looks like we forgot something.

You see a little gap 🌉 between the early adopters and the early majority: The Chasm. This is prolly the hardest and most important bridge that a solution needs to cross in order to create its value 💸 .

There are many startups which might make it to the other side of the chasm, and many which might not. In the most common terms, the early adopters are the first set of customers/buyers of your tech who agree to give your innovation a try.

But, let us keep it for some other time.


Now, the approach might depend upon certain criteria.

  1. There already exists some market and you want to capture that market.
  2. There are several markets, and you want to Re-Segment them according to your innovation.
  3. You don’t have any market, i.e. you create your own for your product.

But this is a talk for some other time. Let's pretend we are not going deep into this. We know that we can have a market which already has customers, a market which exists but isn't being used, and a market which is still out of existence. You understand the difficulty in all these cases, right? 📈


Baghai, Coley, and White came up with something in 2000 called the Three Horizons of Innovation, more formally known as McKinsey's Three Horizons of Innovation.

Let us now understand this with a little example from the sleep medicine industry. 💊

According to a study, in America around 5-10% of the population is affected by insomnia, and 2-4% by sleep apnea. So, there is already a good market.

Now, the disruption in the sleep medicine industry led to several lines of research 🔎.

One research direction was super disruptive: the innovation of the Transcranial System.

After a lot of research on its subjects, collecting data through fitness bands and devices like Beddit which were kept under the subject's mattress, the researchers gathered a lot of data about sleeping patterns. The researchers 🔎 then came up with the solution of Transcranial Systems: a device in which changing magnetic fields stimulate the brain signals and let you sleep.

https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Neuro-ms.png/285px-Neuro-ms.png
Source: Wikipedia

And most of all, this is a non-invasive device, i.e. it does not need to be implanted inside your brain. How do you think the researchers were able to do this?

Well this is all because of Artificial Intelligence.

  • Wrist bands ⌚️ are used to monitor sleep activity. The Fitbit bands have accumulated around 7 billion nights of sleep 😪.
  • The Beddit devices, kept under the mattress, record your pulse (though they could not record your oxygen levels).
  • Apple 🍎 watches are so sharp in their tracking systems that sometimes they are used as medical diagnosis devices.

So, what transcranial systems do is track the abnormal patterns in the sleep signals and send electrical signals to let the person sleep comfortably.


Now there's a bigger picture to understand here. If such a solution exists, then why ❓ is it not being used?

To understand this, let us now see the 3 Horizons of Innovation:

https://innovationpapers.files.wordpress.com/2018/03/innovation-horizons.png?w=458&h=424

The horizontal axis is about how new the innovation is, and the vertical axis is about the novelty of the market, i.e. whether the users already exist.

-> The Transcranial System lies somewhere in the bottom right, where we know the existing market, which in this case is the apnea patients, but the tech is still too new to be used.

This makes it a bit difficult to convert this innovation to a company. 🎬

It still needs a lot of research, and finally the makers have to break into the already existing market and bring in their device.

Let us take one more example. Suppose you plan to make a device that tracks breathing patterns or pulse rate, and you get the data on your mobile phone. Now this data, after going through a series of AI models, lets the doctor diagnose the severity of the disease and correctly cure you. ⭕️

In this case, you know the solution and exactly what might solve your problem. Plus, you know the target customers. So it is possible that this product can be shipped, like, in the next month.

This App lies somewhere in the lower left.

Now, let me clarify something for you.

  • Horizon 1 is considered to be the Not Much Risk zone ⚪️ ; these solutions just need improvements and cost reductions over the item the customer used before (because you are targeting already existing customers).
  • Horizon 2 is the More Risk zone 🔵 , and thus should be approached with care.
  • Horizon 3 is the Highest Risk zone 🔴 , and you never know whether the innovation will be able to make it to the other side or not. It might even take the next 5 years to come into proper existence.

So, looking at the picture from a farther point, we get a sense of the patience and effort required to give an innovation its value.

Just like Apple beat BlackBerry by making a device which served more as a personal device, unlike BlackBerry, which focused only on business users. So, in a short span of time after launching the iPhone in 2007, Apple overtook BlackBerry as a leading smartphone seller in the world.

You have to be a visionary to understand it.

Thank You.

Courtesy: CHRISTIAN TERWIESCH

Impact of COVID-19 on Education

Reading Time: 5 minutes

Due to the coronavirus pandemic, many educational institutions were closed as a measure to prevent the spread of the coronavirus; they have now been closed for months and there is no certainty about reopening. This is a crucial time for various institutions as well as for entrance exams, as they were scheduled to be conducted during this period.

No matter what, COVID-19 has affected us in many ways. Like a coin with two sides, the situation has a positive and a negative side, so we have listed down some of its positive and negative impacts on our education system:

Positive Impact:-


Learning out of interest in the field: – In this internet age, everyone has access to information, and it allows everyone to learn the things they are interested in. Students can explore whichever fields they feel excited about. Learning from online resources has made students more self-dependent and curious in their fields of interest.


Availability of resources at any time – Students who are studying through online classes are able to record lectures for future reference, which might not be possible in our traditional offline classes.


Changes in education policies – Our Indian education system got revised during this pandemic, as the barriers to offline learning became apparent; to deal with those barriers, the New Education Policy was introduced in India. This policy will give multiple benefits to students: college students will be awarded degrees or certificates based on the number of years of study, and students will learn coding from as early as the 6th standard. Multiple entries into and exits from the chosen course will be possible for students. The focus will be on practical and application-based knowledge. So the NEP will help students in many ways to brighten their future.

Negative Impact:-


Distractions: – In the present day, social media has evolved in leaps and bounds, with a huge share of the world's population using it. The devices provided to students for studying are instead actively used for social media and streaming platforms such as Netflix, Amazon Prime and Hulu, which have the ability to steal hours of someone's time, because students would much rather watch their favourite shows and movies than invest that time in other work or their homework. It also provides a way to procrastinate on the workload. It is difficult to focus on homework or notes in class if you are reading captions on Instagram or posting on Twitter about last night's episode of your favourite TV show. Just thinking about your number of followers or what you want to post later can also be distracting when trying to get work done.


Health issues: – Sitting in front of a screen for 4-5 hours affects students psychologically and causes mental health issues. With an increasing number of online study hours, the number of health issues is also growing at a rapid pace. If you work in front of a computer for a few hours once in a while, you may not be at a health risk; but if you spend about 4 hours or more every day, then you should probably keep a check on these health issues. Vision problems, headaches, stress disorders, etc. might also develop, so it is necessary to monitor our usage to avoid these health problems.


Issues regarding teaching methods: – Faculties initially used to teach in classrooms, but due to the impact of COVID, teachers had to shift online to teach students, and some faculty members face issues there, as there might be a lack of resources (i.e. lack of internet, unavailability of devices, etc.), or some have issues regarding how they should use devices to teach students. The unavailability of online platforms is also an issue, due to which proper online teaching methods cannot be applied.


Recruitment issues: – College students who have just graduated this year are facing issues regarding their placements. Due to the impact of COVID, most companies are struggling to survive the pandemic; their profits are going down and they are facing losses, due to which they are not able to hire new graduates. Some companies have even fired employees to recover their losses and survive in the market. The ratio of engineers from core fields may fall, as many can be seen opting for computer- or IT-related fields or courses, and the government may or may not invest in students of mechanical or civil engineering after the outbreak.


Institutions charging full fees: – During this pandemic, when people are losing jobs and some are getting lower salaries, national institutions are charging full fees, even charges for extra services, while faculties are getting paid less for their teaching. So where does this extra money go? Just give it a thought….!!!

Conclusion :-


There are always pros and cons to a situation, so it is better to focus on the positive points to make a good future instead of complaining about the current situation. We should move ahead and wait for some time until COVID-19 is brought under control.

Everything happens for a reason; for instance, during this period we got the New Education Policy, and we Indians are pretty sure that it will help our next generations in their future. So, hope for the best and work for the betterment of our nation's future.

 

Stay safe and keep enthusiasm!

Team CEV.

Brexit and The Pounds

Reading Time: 6 minutes

The History                              

Brexit was a word barely heard till 2012, but it rose to prominence and became a politically defining term in 2016. The term is a blend of two words, "Britain" and "exit", and represents Britain's exit from the European Union. Visionary leaders came together to create economic and political stability and to ensure long-term peace in Europe.

In 1957, France, West Germany, Belgium, Italy, Luxembourg, and the Netherlands signed the Treaty of Rome, which established the European Economic Community (EEC), the predecessor of today's European Union. From then on, many others followed in their footsteps, striving to build on this vision through successive treaties, and the EU eventually came to have a total of 28 European member states, including the UK.

The Treaty of Maastricht, signed on February 7, 1992, established the European Union (EU) based on three pillars: the European Communities, the Common Foreign and Security Policy (CFSP), and Justice and Home Affairs (JHA). It introduced the concept of European citizenship, enhanced the powers of the European Parliament and launched the economic and monetary union (EMU). The Treaty of Nice, signed in 2001, streamlined the institutional system in a bid to maintain efficiency. The UK finally made it into the club in 1973, but just two years later it was on the verge of backing out again.

In 1975, the nation held a referendum on the question: “Do you think the UK should stay in the European Community (Common Market)?” The 67 percent “Yes” vote included most of the UK’s 68 administrative counties, regions and Northern Ireland. In contrast, only Shetland and the Western Isles voted “No.” The center-left Labour Party split over the issue, with the pro-Europe wing splitting from the rest of the party to form the Social Democratic Party (SDP).

Tensions between the EEC and the UK exploded in 1984, when Conservative Prime Minister Margaret Thatcher talked tough in order to reduce British payments to the EEC budget. In the referendum of June 23, 2016, the UK voted to leave the EU by 52% to 48%. Leave won the majority of votes in England and Wales, while every council in Scotland saw Remain majorities. In October 2016, Prime Minister Theresa May announced her intention to invoke Article 50 of the Treaty on European Union, formally giving notice of Britain's intent to leave the EU. On March 29, 2017, the order, signed by May, was delivered to the Council of the European Union, officially starting the two-year countdown to Britain's EU departure, set for March 30, 2019.

 

GBP’s (Great Britain Pound) roller coaster ride on Brexit


The economic relationship of Britain with the world was sure to take a different turn as it decided to leave the European Union. The boat of Brexit has posed many unforeseen challenges to the pound. Enhanced periodic volatility has inevitably surrounded vital diplomatic and political events, not just in the UK but across almost the entire European continent. Let us witness the journey of the pound through the Brexit years:

  1. JUNE 2016: BREXIT VOTE

The vote was undoubtedly an iffy prospect for political leaders and parties, economists and financial professionals. In the aftermath of the June 2016 vote, the pound's position in the market turned tumultuous: the GBP suffered heavy losses and fell to a 31-year low. It continued to fall against its main benchmark, the US dollar, in the months that followed, down about 6% by October 2016 and roughly 12% by June 2017. In euro terms, a pound that had been worth 1.32 euros before the vote was worth a lowly 1.11 euros by the October after it.

  2. MARCH 2017: ARTICLE 50 IN PLAY

With the triggering of Article 50 in March 2017, the GBP came under considerable pressure once a Parliamentary vote cleared the way for May's declaration. In the hours after Parliament rendered its decision, the GBP fell 0.7% against the USD.

  3. DECEMBER 2017: EU/U.K. DIVORCE DEALS

On December 8, 2017, leaders from the EU and UK reached an agreement for the coming “divorce” or separation.

The deal outlined provisions for the Northern Ireland border, the rights of EU and UK citizens, and a financial settlement of £39 billion to be paid by the UK to the EU. On the public announcement of the deal, the GBP rallied 0.9% against the USD and more than 1% against the euro.

While some remained apprehensive about the pound's future, other people and organisations viewed the deal in a positive light. Analysts at Goldman Sachs said the pound was still a profitable investment, and currency traders were also optimistic about the divorce move. The uncertain atmosphere persisted, but the GBP regained some trust and its standing in the global market rose again.

  4. JANUARY 2019: REJECTION OF THERESA MAY’S DIVORCE DEAL

On January 15, 2019, the House of Commons rejected May's divorce deal by an overwhelming margin. The vote, however, came as no surprise to forex traders: in the January 15, 2019 session, the GBP lost a modest 0.01% against the USD while climbing 0.46% against the euro.

The rejection opened up the possibility of a new Brexit referendum, a snap election, or a delay to the scheduled Brexit Day of March 29, 2019.

  5. THE IMPLEMENTATION PERIOD

The “implementation period” was the stretch of 21 months between March 29, 2019, and December 31, 2020. Many saw it as merely an extension of UK membership of the EU; however, the UK's ability to negotiate its own treaties opened the door to new economic partnerships. The GBP echoed this sentiment shortly after the Brexit transition deal was announced, rallying significantly against the euro (+0.51%) and the USD (+0.61%) once the agreement became public in March 2018.

However, the GBP struggled to hold its ground through the tumult of 2018: over the year, it lost 1.8% against the USD and 1.1% against the euro. Nonetheless, the pound sterling rebounded against the majors in 2019, gaining more than 4% versus the USD and more than 6% versus the euro.

THE PRESENT AND THE FUTURE

Opinion is sharply divided on what the future holds for the UK economy. Many people in the United Kingdom feel that leaving the EU has saved the nation from perceived flaws of the organisation, such as:

  • Corruption within the EU
  • A regional separatist mentality in a number of member states
  • The anti-democratic nature of the EU

Experts say that leaving the EU might prove the right decision in the long run. In the short term, however, it has led to strained relations with Ireland, losses for importers on both sides and, of course, a fall in the pound.


Being part of the EU gave Britain a myriad of benefits, and its exit has sown doubt in the minds of investors and businesses across the world. The UK was among the most politically influential of the bloc's 28 member states, and with its withdrawal there is speculation that Germany might rise to dominate the organisation.

The bigger question now is how Britain withdraws from the European Union: whether it goes for a Hard Brexit (a sharp break that cuts ties, with no trade deals or joint projects continued) or a Soft Brexit (an agreement to keep cooperating on specific policies).

The coronavirus pandemic has already hit finances hard all over the world, and the UK has taken one of the biggest blows, with its GDP plummeting by 20.4%. It is undoubtedly a critical time for policy-makers and citizens across the UK, as the calculated risks and visions of today will either save or sink what was once the valiant British Empire.

If the UK fails to strike a deal with the EU by the current end of the transition period, i.e., December 31, 2020, and the period is not extended, the country would leave with no deal and revert to WTO rules on trade and security, which would have a direct impact on the pound.

Keep Learning

Keep Hustling

Team CEV

5 Key Challenges In Today’s Era of Big Data

Reading Time: 4 minutes

Digital transformation will create trillions of dollars of value. While estimates vary, the World Economic Forum in 2016 projected an increase of $100 trillion in global business and social value by 2030. For AI alone, PwC has estimated an addition of $15.7 trillion and McKinsey an addition of $13 trillion to annual global GDP by 2030. We are currently in the middle of an AI renaissance, driven by big data and breakthroughs in machine learning and deep learning. These breakthroughs offer opportunities and challenges to companies depending on the speed at which they adapt to these changes.

Modern enterprises face 5 key challenges in today’s era of big data:

1. Handling a multiplicity of enterprise source systems

The average Fortune 500 enterprise has a few hundred enterprise IT systems, each with its own data formats, mismatched references across data sources, and duplicated records.

2. Incorporating and contextualising high frequency data

The challenge gets significantly harder with the growth in sensing, which results in inflows of real-time data. For example, readings of the gas exhaust temperature for an offshore low-pressure compressor are of limited value in and of themselves. But combined with ambient temperature, wind speed, compressor pump speed, the history of previous maintenance actions, and maintenance logs, this real-time data can form a valuable alarm system for offshore rig operators.
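
To make the idea concrete, here is a minimal sketch of how such contextualisation might look in code. All column names, cadences and the alarm threshold are assumptions for illustration, not details from any real rig system.

```python
# Hypothetical sketch: enrich a high-frequency sensor stream with slower
# contextual readings and derive a simple alarm flag.
import pandas as pd

# High-frequency exhaust-temperature readings (assumed 1-minute cadence)
exhaust = pd.DataFrame({
    "ts": pd.date_range("2020-01-01 00:00", periods=6, freq="1min"),
    "exhaust_temp_c": [410, 415, 430, 455, 470, 490],
})

# Slower ambient readings (assumed 10-minute cadence)
ambient = pd.DataFrame({
    "ts": pd.date_range("2020-01-01 00:00", periods=2, freq="10min"),
    "ambient_temp_c": [18.0, 19.0],
    "wind_speed_ms": [7.0, 7.5],
})

# Attach the most recent ambient reading to every exhaust reading
combined = pd.merge_asof(exhaust.sort_values("ts"),
                         ambient.sort_values("ts"),
                         on="ts", direction="backward")

# Naive alarm rule (illustrative threshold only)
combined["alarm"] = (combined["exhaust_temp_c"] - combined["ambient_temp_c"]) > 460

print(combined[["ts", "exhaust_temp_c", "ambient_temp_c", "wind_speed_ms", "alarm"]])
```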

3. Working with data lakes

Today, storing large amounts of disparate data by putting it all in one infrastructure location does not reduce data complexity any more than letting data sit in siloed enterprise systems. 

4. Ensuring data consistency, referential integrity, and continuous downstream use

A fourth big data challenge is representing all existing data as a unified image, keeping that image updated in real time, and updating all the downstream analytics that use it. Data arrival rates vary by system, data formats from source systems change, and data arrive out of order due to network delays.
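
As a rough illustration of keeping a unified image current when records arrive late, the sketch below keeps only the newest version of each record by event time; the schema is assumed.

```python
# Assumed schema: record updates arriving late and out of order.
import pandas as pd

arrivals = pd.DataFrame({
    "record_id": ["A", "B", "A", "C", "B"],
    "event_time": pd.to_datetime([
        "2020-01-01 10:05", "2020-01-01 10:07",
        "2020-01-01 10:02",   # an older version of A, arriving late
        "2020-01-01 10:10", "2020-01-01 10:12",
    ]),
    "value": [1, 2, 0, 3, 4],
})

# Keep only the newest version of each record by event time,
# regardless of the order in which the rows actually arrived.
unified = (arrivals.sort_values("event_time")
                   .groupby("record_id", as_index=False)
                   .last())
print(unified)
```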

5. Enabling new tools and skills for new needs

Enterprise IT and analytics teams need to provide tools that enable employees with different levels of data science proficiency to work with large data sets and perform predictive analytics using a unified data image.

Let’s look at what’s involved in developing and deploying AI applications at scale

Data assembly and preparation

The first step is to identify the required and relevant data sets and assemble them. There are often issues with duplicated data, gaps in the data, unavailable data and data arriving out of sequence.
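
A minimal sketch of this clean-up, on made-up sensor rows, might look like the following; the one-minute cadence and column names are assumptions.

```python
# Made-up readings with an exact duplicate, an out-of-order row and a gap.
import pandas as pd

raw = pd.DataFrame({
    "ts": pd.to_datetime(["2020-01-01 00:02", "2020-01-01 00:00",
                          "2020-01-01 00:00", "2020-01-01 00:04"]),
    "sensor_id": ["p1", "p1", "p1", "p1"],
    "reading": [10.1, 9.8, 9.8, 10.4],
})

clean = (raw.drop_duplicates()                    # remove exact duplicates
            .sort_values(["sensor_id", "ts"]))    # restore time order

# Surface gaps: in this assumed feed, readings should arrive every minute
clean["gap_minutes"] = (clean.groupby("sensor_id")["ts"].diff()
                             .dt.total_seconds() / 60)
print(clean)
```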

Feature engineering

This involves going through the data and crafting the individual signals that data scientists and domain experts think will be relevant to the problem being solved. In the case of AI-based predictive maintenance, signals could include the count of specific fault alarms over the trailing 7, 14 and 21 days; the sum of those alarms over the same trailing periods; and the maximum value of certain sensor signals over those trailing periods.
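
A hedged sketch of those trailing-window signals, using synthetic daily data for a single asset, could look like this (column names and distributions are invented for illustration):

```python
# Synthetic daily history for one asset; distributions are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
daily = pd.DataFrame({
    "date": pd.date_range("2020-01-01", periods=30, freq="D"),
    "fault_alarm_count": rng.poisson(1.5, 30),      # alarms raised that day
    "alarm_severity_sum": rng.gamma(2.0, 1.0, 30),  # summed severity that day
    "sensor_reading": rng.normal(100, 5, 30),
}).set_index("date")

features = pd.DataFrame(index=daily.index)
for window in (7, 14, 21):
    # Trailing-window count, sum and max, as described above
    features[f"alarm_count_{window}d"] = daily["fault_alarm_count"].rolling(f"{window}D").sum()
    features[f"severity_sum_{window}d"] = daily["alarm_severity_sum"].rolling(f"{window}D").sum()
    features[f"sensor_max_{window}d"] = daily["sensor_reading"].rolling(f"{window}D").max()

print(features.tail())
```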

Labelling the outcomes

This step involves labeling the outcomes the model tries to predict. For example, in AI-based predictive maintenance applications, source data sets rarely identify actual failure labels, and practitioners have to infer failure points based on a combination of factors such as fault codes and technician work orders.
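
For instance, a simple, assumed labelling rule might combine a list of critical fault codes with keywords in work orders; the codes and keywords below are purely illustrative.

```python
# Illustrative fault events; the "critical" codes and keyword are assumptions.
import pandas as pd

events = pd.DataFrame({
    "asset_id": ["pump_1", "pump_1", "pump_2"],
    "ts": pd.to_datetime(["2020-02-01", "2020-03-15", "2020-03-20"]),
    "fault_code": ["E42", "E07", "E42"],
    "work_order": ["routine inspection", "emergency bearing replacement",
                   "emergency seal replacement"],
})

CRITICAL_CODES = {"E42"}  # hypothetical list agreed with domain experts

# Infer a failure label when a critical code coincides with emergency work
events["failure"] = (events["fault_code"].isin(CRITICAL_CODES)
                     & events["work_order"].str.contains("emergency"))
print(events)
```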

Setting up the training data

For classification tasks, data scientists need to ensure that labels are appropriately balanced between positive and negative examples so the classifier algorithm gets enough of each. Data scientists also need to ensure the classifier is not biased by artificial patterns in the data.
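
One common way to keep the positive/negative ratio consistent between training and test data is a stratified split, sketched below on synthetic data.

```python
# Synthetic, imbalanced labels: roughly 10% positives (failures are rare).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)

# stratify=y preserves the positive/negative ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

print(y_train.mean(), y_test.mean())  # both close to 0.10
```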

Choosing and training the algorithm

Numerous algorithm libraries are available to data scientists today, created by companies, universities, research organizations, government agencies and individual contributors.
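
As a toy example of this step, the snippet below fits one widely used open-source algorithm (a random forest from scikit-learn) to synthetic, imbalanced data; it is a sketch, not a recommendation of any particular library or model.

```python
# Synthetic data with rare positives, then a stratified split and a fit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# class_weight="balanced" compensates for the rarity of positive labels
model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=0)
model.fit(X_train, y_train)
print("test ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```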

Deploying the algorithm into production

Machine learning algorithms, once deployed, need to receive new data, generate outputs, and have some actions or decisions be made based on those outputs. This may mean embedding the algorithm within an enterprise application used by humans to make decisions – for example, a predictive maintenance application that identifies and prioritizes equipment requiring maintenance to provide guidance for maintenance crews. This is where the real value is created – by reducing equipment downtime and servicing costs through more accurate failure prediction that enables proactive maintenance before the equipment actually fails. In order for the machine learning algorithms to operate in production, the underlying compute infrastructure needs to be set up and managed. 
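
A minimal sketch of the scoring side of such an application is shown below: a function that ranks assets by predicted failure risk so crews can prioritise. The function name, feature columns and threshold are assumptions, and the toy model exists only to make the example runnable.

```python
# Hypothetical scoring helper: rank assets by predicted failure risk.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def prioritise_maintenance(model, latest_features, feature_cols, threshold=0.5):
    """Score the latest feature row per asset and return a ranked work list."""
    scored = latest_features.copy()
    scored["failure_risk"] = model.predict_proba(
        scored[feature_cols].to_numpy())[:, 1]
    return (scored.loc[scored["failure_risk"] >= threshold,
                       ["asset_id", "failure_risk"]]
                  .sort_values("failure_risk", ascending=False))

# Toy model and feature table, only to make the sketch runnable
rng = np.random.default_rng(0)
model = LogisticRegression().fit(rng.normal(size=(200, 3)),
                                 rng.integers(0, 2, 200))
latest = pd.DataFrame(rng.normal(size=(5, 3)), columns=["f1", "f2", "f3"])
latest["asset_id"] = [f"pump_{i}" for i in range(5)]

print(prioritise_maintenance(model, latest, ["f1", "f2", "f3"], threshold=0.4))
```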

Close-loop continuous improvement

Algorithms typically require frequent retraining by data science teams: market conditions change, business objectives and processes evolve, and new data sources are identified. Organizations need to rapidly develop, retrain, and deploy new models as circumstances change.

The problems that have to be addressed to run AI at scale are therefore nontrivial. Massively parallel, elastic compute and storage capacity are prerequisites. Beyond the cloud, a multiplicity of data services is needed to develop, provision, and operate applications of this nature. The price of missing a transformational strategic shift, however, is steep: the corporate graveyard is littered with once-great companies that failed to change.

This article originally appeared on Makeen Technologies.
