This series covers some of the popular methods of consensus being used in current public blockchain networks.
Before diving deep, let’s make things clearer by understanding which type of consensus protocol can be used in public blockchain networks.
In permissioned blockchains, actual message passing (gossiping) takes place between the participating nodes, so the nodes are known to each other. In contrast, in public blockchain networks any node can join or leave the network at any time, so nodes are anonymous to each other, and a leader election always takes place as part of reaching consensus.
The leader decides what the next block will be, and the other nodes can easily validate the newly proposed block.
If, by the end of the article, you feel confident enough to explain these terms to others, then don’t forget to leave some claps behind.
Proof of Work (PoW)
This consensus algorithm is quite popular and used in many blockchain networks like Bitcoin and Ethereum.
To understand PoW in a better way, let’s discuss a few more terms:
Mining: the process of adding a new block of transactions to the blockchain. In public blockchains, a leader is elected to decide what the next block will be. The elected leader is called the “miner”, and the process of adding the block is called “mining”.
Hash: A hash function is any function that can be used to map data of arbitrary size to fixed-size values — Wikipedia
Nonce: in cryptography, a nonce is a number used only once. In the context of blockchain, miners hash the block data together with a nonce, varying it in a brute-force manner until the result meets a target.
Let’s look at the transaction life-cycle in the Bitcoin network:
Each transaction is added to the transaction-pool first.
From the transaction pool, different participating nodes pick up transactions and compete with each other to mine the next block.
The node that solves the problem first wins and decides what the next block will be.
So the first question you might be asking is: how does the leader get elected? And who decides what the next block will be, given that the competing miners will have different transaction lists due to network delays? Don’t worry, let’s work it out together after covering a few more things.
During the PoW consensus process, the mathematical problem chosen is one that is very hard to solve but very easy to verify. For example, in the Bitcoin network the problem is that the generated block hash must begin with a fixed number of zeroes, e.g. the target block hash should start with 12 zeroes.
So, in the transaction life-cycle, different miners collect transactions, generate a new block, and try to solve this problem. The miner who solves it first gets to broadcast his block to the rest of the network, where it can be verified easily.
We keep talking about a ‘complex’ mathematical problem, so what exactly is it? Before answering that, we need to understand how the block hash is actually generated. Let’s take Bitcoin as the example:
As already mentioned, different miners collect transactions from the pool. They then generate the Merkle root hash from the list of transactions, by repeatedly hashing the transactions in a pairwise manner.
This ‘Merkle root hash’, along with the ‘previous block hash’ and the ‘nonce’, constitutes the ‘block header’ of the block. The block header is then hashed with a few more fields present in the block to generate a unique identifier of the block, called the ‘block hash’.
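The pairwise hashing can be sketched in Python. This is a simplified illustration: real Bitcoin uses double SHA-256 and a specific serialization, and the transaction strings below are made up.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Repeatedly hash adjacent pairs until a single root hash remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:          # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical transactions, hashed to form the leaves of the tree
txs = [sha256(t.encode()) for t in ["tx1", "tx2", "tx3"]]
print(merkle_root(txs).hex())
```

Note how changing any single transaction changes its leaf hash, which propagates up and changes the root; that is why the Merkle root in the block header commits to every transaction in the block.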
Say the target problem is to generate a block hash with four leading zeroes. To solve it, each miner varies the nonce until the generated block hash meets the target.
The problem is hard because it can only be solved by brute force, varying the nonce. A huge amount of computational power is spent finding a solution that meets the target.
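The brute-force search over the nonce can be sketched in Python. As a simplification, the target here is a number of leading zeroes in the hex digest rather than Bitcoin’s full numeric difficulty target, and the block data is a made-up string:

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Vary the nonce until the SHA-256 hash starts with `difficulty` hex zeroes."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if h.startswith(prefix):
            return nonce, h
        nonce += 1

nonce, block_hash = mine("prev_hash|merkle_root|", difficulty=4)
print(nonce, block_hash)   # block_hash begins with "0000"

# Verification is a single hash computation: easy for every other node
assert hashlib.sha256(f"prev_hash|merkle_root|{nonce}".encode()).hexdigest() == block_hash
```

Even at this toy difficulty the miner tries tens of thousands of nonces, while verification is one hash; that asymmetry (hard to solve, easy to verify) is the essence of PoW.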
Another interesting point: the difficulty of the problem (how many leading zeroes) is adjusted according to network demand and the transaction throughput of the Bitcoin network.
So now you can imagine how much computation goes into adding a single valid block. This is what underpins the tamper-proof characteristic of blockchain. If someone tampers with any transaction data in a block, that block’s hash becomes invalid, and since each block also holds the hash of the previous block, the chain hypothetically ‘breaks’. To cover the change, the attacker would have to recompute the block hashes of every subsequent block, from the block he changed up to the most recent one; in other words, he has to do all the work again. This requires a huge amount of computational resources, likely costing far more than the tampered data is worth.
The main motivation for miners to participate in Bitcoin mining is the reward: a miner receives newly generated (mined) bitcoins for proposing a valid block.
That’s it, this is my short explanation of PoW. Keep at it until you are satisfied that you understand it.
This short article focuses on one of the most famous consensus problems in distributed computing, the “Byzantine Generals Problem”, and on when a distributed system is said to be Byzantine Fault Tolerant. Various Byzantine Fault Tolerant algorithms are used in permissioned blockchain networks; e.g. Hyperledger Sawtooth uses Practical Byzantine Fault Tolerance (PBFT) to achieve consensus. So if you want to understand how consensus is actually achieved in such systems, this article is surely for you.
So let’s begin this journey.
First of all, let’s focus on the term ‘Byzantine Fault Tolerant’: when is a distributed system said to be Byzantine Fault Tolerant? The answer lies in the different types of failures that can occur in a distributed system:
1. Crash failure: the component stops working without any warning, so you need to restart or replace the node. We can call it a ‘fail-stop’ failure.
2. Omission failure: the component transmits a message, but the message is never received by the other nodes; we say it is omitted.
3. Byzantine failure: a ‘no-stop’ failure. It occurs when a malicious or traitorous node in the network sends conflicting messages, or blocks messages by not forwarding them to other nodes, which may lead to faulty results.
Now it should be self-explanatory: a distributed system is said to be Byzantine Fault Tolerant if it can cope with Byzantine failures.
Applications of BFT can be found in various domains, from blockchain to the Boeing 777 and 787 flight-control systems.
Let’s move on to a specific problem which forms a base for understanding BFT-
The Byzantine Generals Problem:
Situation: several generals are surrounding an enemy camp C, positioned such that they cannot communicate with each other directly. The only way they can communicate is through a carrier, who must pass the enemy camp to deliver every message. So they need a proper protocol to reach a final decision: “attack” C the next morning, or “retreat”. If they all agree to attack, and they do attack, they will surely win; if they all agree to retreat, they can fight another day. But if some generals attack while the others retreat, they will surely lose. The difficulties are:
Malicious generals may send different decisions to different generals.
Message carriers may never arrive.
A single decision must be reached despite a few faulty generals.
Keeping in mind the situation, let’s discuss this problem with three generals.
Three Generals Problem:
Suppose one commander and two lieutenants are surrounding the army camp C, and they have to collectively reach a decision to ‘attack’ or ‘retreat’.
If none of the generals is faulty, everything works fine and they will surely reach a decision.
Let’s see the case if one of the generals starts behaving maliciously:
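As a toy illustration (not a real BFT protocol; the message values are made up), here is a sketch of how a traitorous commander defeats three generals: he sends “attack” to one lieutenant and “retreat” to the other, and when the lieutenants cross-check the orders they received, each sees one vote for each option and cannot tell whether the commander or the other lieutenant is lying:

```python
def commander_orders(traitor: bool) -> dict:
    """A loyal commander sends the same order to both lieutenants;
    a traitor sends conflicting orders."""
    if traitor:
        return {"L1": "attack", "L2": "retreat"}
    return {"L1": "attack", "L2": "attack"}

def lieutenant_decision(own_order: str, relayed_order: str) -> str:
    """Each lieutenant compares the order it received from the commander
    with the one relayed by the other lieutenant."""
    if own_order == relayed_order:
        return own_order
    return "undecided"   # one vote each way: cannot tell who the traitor is

for traitor in (False, True):
    orders = commander_orders(traitor)
    # Each lieutenant relays the order it received to the other one
    d1 = lieutenant_decision(orders["L1"], orders["L2"])
    d2 = lieutenant_decision(orders["L2"], orders["L1"])
    print(f"traitor commander={traitor}: L1={d1}, L2={d2}")
    # prints: traitor commander=False: L1=attack, L2=attack
    #         traitor commander=True:  L1=undecided, L2=undecided
```

This is why protocols such as PBFT require at least 3f + 1 nodes to tolerate f Byzantine nodes: with only three generals and one traitor, no protocol can guarantee agreement.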
If you have reached here, you are definitely intrigued to explore ways to improve your humble typing and add that much-needed touch-typing skill to your accolades.
It’s a subtle confession: when we see a developer typing code into a terminal or editor at blazing-fast speed, we all feel awe-inspired and just gawk at the marvel. If you are one of us who has always wanted to get better at typing but never found a way to get started, then get prepped up, because the content ahead will help you do exactly that.
But before anything else, one important point I want to discuss is the plan we will follow while learning how to type blazing fast!
Some months ago, I came across a YouTube video: a short 20-minute TED Talk by Josh Kaufman, in which he shares his research on the secret of learning anything in just 20 hours. I would highly recommend pausing to watch the video before moving on with the story. It is a true treasure!
Well, hope you did watch the video 😛
Josh tells us one important thing that we often tend to forget:
We are often not preparing to be the best in our field of learning. Most of the time we are just preparing to get good at that one skill.
It is quite important to understand this concept. I have had many personal experiences where I procrastinated learning a skill just because it seemed really overwhelming at first. I would picture the learning curve in my mind and believe it to be so steep that it would drain my soul, as if I were climbing Mt. Everest. I guess we can all share that feeling; such thoughts often come to mind when we think of learning a new, blooming tech that is completely new to us.
Well, nowadays it gets much easier, because I know that in order to get just good at something I need to invest only 20 hours into it. Nothing more is expected of me, and I can make myself reliable enough in just 20 good learning hours; that makes the process much more relaxed and, in turn, really rewarding.
Moving on, I guess with all that background score already set up we can comfortably look into the learning at hand here.
Learning typing should be taken as a very basic skill; I feel it is much like learning to write your first alphabet letters by hand. You have to remember the key positions, get them into your finger memory, and then practice, practice, practice until one day you find you can glide across your keyboard with finesse.
Allow me to break the learning into two parts. The first is memorising the key positions and placing your fingers on the keyboard in the most efficient way. Next comes the never-ending practice!
Memorising the key positions:-
There are many websites available online that teach you this. My personal favourite is typing.com, maybe because it was the first site I found when I thought of learning typing, or maybe because of the great user interface that eases the learning into a very intuitive process.
After making an account as a student, you will be taken to a page with lessons that will get you started just like that. Start by learning the key positions, how to place your fingers on the home row, and how to use your thumb for the space bar.
As you practise more and complete the lessons, you will progress from beginner to intermediate and later to advanced. This can feel a bit overwhelming to read, but let me show you some calculations that will clarify the learning plan.
We know that we are preparing for 20 hours, i.e. 1200 minutes. If we invest even 10 minutes a day into learning typing, we reach our goal in just 120 days, which is more or less 4 months.
So you see, in just 4 months of daily 10-minute practice you will earn a feat that gives you all that nerdy feel, with a typing speed that can easily reach around 70-75 words per minute. Let’s move on to the next phase of our process.
Practice and polish your skill:-
Once you have attained that finger memory and know how to locate keys on the keyboard, you need to practice, and practice hard, so that you build up speed.
keybr.com is a great place built exactly for that. Quoting the developers of keybr: “It employs statistics and smart algorithms to automatically generate typing lessons matching your skills.”
It finds your weaker keys and shows them on screen in red so that you can focus on them while you type. It also auto-generates its lessons so that you get more opportunities to improve on your weaknesses.
Keybr is a great place to practice and polish your finesse. With all the data they collect from your typing sessions, they generate graphs and visuals that you can view and download from the profile section. You can also compete with other typists in real time in the multiplayer section. And the best part: they also have a dark mode. Isn’t that amazing :D
Learning how to touch type is a great skill. It helps you write your code fast, write your documents fast, amaze your friends with your subtle finger-dance on the keyboard and, very importantly, stand out in the crowd.
Also, just as proof of my learning, here is a screenshot showing my present typing speed on keybr. Mind that I started learning around 3 months ago and have reached here with just 10 minutes every day.
What would you do in a foreign place whose native language you don’t know? How would you read the signs? Would you feel worried?
Well, you don’t have to worry. With Google Translate’s new AR function, you can easily scan text using your phone’s camera and have it translated into any language. Cool, right? But hold on, what is AR?
How does it work?
What are its applications?
Just relax and read on to find out everything you need to know about this cool technology. So let’s get started with the definitions…
WHAT IS AUGMENTED REALITY (AR)?
According to a dictionary, to augment something means to make it more effective by adding something to it.
Moving onto a technical definition, augmented reality is the technology that enhances our physical world by superimposing computer-generated perceptible information on the environment of a user in real-time.
This integrated information may be perceived by one or more senses and enhances one’s current perception of reality with dazzling visuals, interactive graphics, amazing sounds and much more. (Exciting!)
You must have played the popular AR game Pokemon GO, which revolutionized the gaming industry and remains a huge success, making 2 million dollars per day even now. Pokemon GO uses a smartphone’s GPS to determine the user’s location. The phone’s camera scans the surroundings and digitally superimposes the game’s fictional characters onto the real environment.
Some other popular examples of AR apps include Quiver, Google Translate, Google Sky Map, Layar, Field Trip, Ingress, etc. And who doesn’t know about the cool Snapchat filters!
I KNOW ABOUT VIRTUAL REALITY…HOW IS IT DIFFERENT?
Augmented reality is often confused with virtual reality. Although both these technologies offer enhanced or enriched experiences and change the way we perceive our environment, they are different from each other.
The most important distinction is that virtual reality creates a simulation of a new reality, completely different from the physical world, whereas augmented reality adds virtual elements like sounds and computer graphics to the physical world in real-time.
A virtual reality headset uses one or two screens that are held close to one’s face and viewed through lenses. It then uses various sensors in order to track the user’s head and potentially their body as they move through space. Using this information, it renders the appropriate images to create an illusion that the user is navigating a completely different environment.
Augmented reality on the other hand, usually uses either glasses or a pass-through camera so that the user can see the physical environment around them in real-time. Digital information is then projected onto the glass or shown on the screen on top of the camera feed.
WHERE DID IT ALL START?
In 1968, Ivan Sutherland, a Harvard professor, created “The Sword of Damocles” with his student Bob Sproull. The Sword of Damocles was a head-mounted display that hung from the ceiling, in which the user would experience computer graphics that made them feel as if they were in an alternate reality.
In 1990, the term “Augmented Reality” was coined for the first time by a Boeing researcher named Tom Caudell.
In 1992, Louis Rosenberg from the USAF Armstrong Research Lab created the first real operational augmented reality system, Virtual Fixtures: a robotic system that places information on the worker’s environment to increase efficiency, similar to what AR systems do today.
The technology has progressed significantly since then. (Now keeping aside the further details in history so that you don’t get bored!)
For details of history and development of augmented reality, check out the link given below.
TYPES OF AR
1. Marker-based AR
It produces a 3D image of the detected object when the camera is scanned over a visual marker such as a QR code. This enables the user to view the object from various angles.
2. Markerless AR
This technology uses the location-tracking features in smartphones. It works by reading data from the phone’s GPS, digital compass and accelerometer to provide information based on the user’s location, and is quite useful for travellers.
3. Projection-based AR
If you are thinking that this technology has something to do with projection, then kudos, you are absolutely correct! It projects artificial light onto surfaces, and users can then interact with the projected light. The application recognizes and senses human touch by the altered projection (the shadow).
4. Superimposition based AR
As the name suggests, this AR provides a full or partial replacement of the object in focus by replacing it with an augmented view of the same object. Object recognition plays a vital role in this type of AR.
HOW DOES AR WORK?
Now that you know something about AR, your technical minds must be wondering how the technology works. Here is a brief technical explanation of the supercool technology.
AR is achieved by overlaying the synthetic light over natural light, which is done by projecting the image over a pair of see-through glasses, which allow the images and interactive virtual objects to form a layer on top of the user’s view of reality. Computer vision enhances the reality for users in real-time.
Augmented reality can be displayed on several devices, including screens, monitors, handheld devices, smartphones and glasses. It involves technologies like S.L.A.M. (simultaneous localization and mapping), which enables it to recognize 3D objects and track physical location to overlay augmented content, and depth tracking (briefly, sensor data calculating the real-time distance to the target object). AR has the following components:
1. Cameras and sensors:
They are usually on the outside of the augmented reality device. A sensor collects information about a user’s real-world interactions and a camera visually scans the user’s surroundings to gather data about it and communicates it for processing. The device takes this information, which determines where surrounding physical objects are located, and then formulates the desired 3D model. For example, Microsoft Hololens uses specific cameras to perform specific duties, such as depth sensing. Megapixel cameras in common smartphones can also capture the information required for processing.
2. Processing:
Augmented reality devices basically act like mini-supercomputers: they require significant processing power and utilize many of the same components that our smartphones do. These include a CPU, a GPU, flash memory, RAM, Bluetooth/WiFi and a GPS microchip. Advanced augmented reality devices, such as the Microsoft Hololens, utilize an accelerometer to measure speed, a gyroscope to measure tilt and orientation, and a magnetometer to function as a compass, providing a truly immersive experience.
3. Projection:
This refers to a miniature projector found on wearable augmented reality headsets. The projector can turn any real surface into an interactive environment. As mentioned earlier, the data taken in by the camera is used to examine the surrounding world and is processed further; the digital information is then projected onto a surface in front of the user, which may be a wrist, a wall, or even another person. The use of projection in AR is still in the developing stage. With further developments, playing a board game on a table might be possible without the use of a smartphone.
4. Reflection:
Augmented reality devices have mirrors to help your eyes view the virtual image. Some AR devices have “an array of many small curved mirrors”; others have a simple double-sided mirror to reflect light to the camera and the user’s eye. In the case of the Microsoft Hololens, the “mirrors” are holographic lenses that use an optical projection system to beam holograms into your eyes. A so-called light engine emits light towards two separate lenses, each consisting of three layers of glass in three different primary colours. The light hits these layers and enters the eye at specific angles, intensities and colours, producing the final image on the retina.
AR: CURRENT APPLICATIONS
AR is still in the developing stage, yet it has found applications in several fields, from simple gaming to critical domains like medicine and the military. Here are some current applications of AR (the list is not exhaustive).
GAMING:
The gaming industry is evolving at an unprecedented rate. Developers all over the world are thinking of new ideas, strategies and methods to design and develop games that attract gamers across the globe. A wide variety of AR games is available, ranging from simple indoor board games to advanced games that have players jumping from tables to sofas to roads. AR games such as Pokemon GO have set a benchmark in the gaming industry. Such games expand the field of gaming, attracting players who easily develop an interest in games that interact with their real-time environment.
ADVERTISING:
AR has seen huge growth in the advertising sector over the past few years and is becoming popular among advertisers trying to win customers with engaging AR ads. Buyers tend to retain information conveyed through virtual ads, and AR ads provide an enjoyable 3D experience that gives users a better feel for the product. For example, the IKEA Place app lets customers see exactly how furniture items would look and fit in their homes. AR ads establish a connection between consumer and brand through real-time interaction, which makes consumers more likely to buy. Many researchers believe AR is similar to other digital technologies; however, its interactive features set it apart.
EDUCATION:
Classroom teaching is rapidly undergoing changes. With the introduction of AR into traditional classrooms, boring lectures can become extremely interesting! Students can easily understand complex concepts and remember information better, as audio and visual stimulation makes retention easier than traditional textbooks do. Today’s teens increasingly own smartphones and other gadgets that they use for games and social media, so why not use them for education! AR provides an interactive and engaging platform that makes learning enjoyable. With its development, not just classroom teaching but also distance learning can become more efficient, giving students greater insight into the subjects they study. Google Translate now offers an augmented reality function with which students can use the camera to capture text and have it translated in real-time.
MEDICINE AND HEALTHCARE:
Augmented reality can help doctors diagnose symptoms accurately and treat diseases effectively. It is helpful to surgeons performing invasive surgeries with complex procedures: they can detect and understand problems in a patient’s bones, muscles and internal organs and decide which medication or injection would best suit the patient. For example, AccuVein is a very useful augmented reality application used to locate veins. In emergency operations, surgeons can save time with smart glasses that give instant access to the patient’s medical information, so they need not shift their attention elsewhere in the operating theatre. Medical students can gain practical knowledge of all parts of the human body without having to dissect one.
WHAT’S IN THERE FOR THE FUTURE?
AR has captured our imagination like no other technology. From something seen in science-fiction films to an integral part of our lives, it has come a long way and found success in many fields.
Ever since the introduction of AR-enabled smartphones, the number of smartphone users has increased. The fastest-growing technologies, AI and ML, can be combined with AR to enhance the experience of mobile users.
Augmented reality saw record growth in 2018. AR enjoys strong commercial support, with big tech names like Microsoft, Amazon, Apple, Facebook and Google making heavy investments. It is expected that by 2023, the installed user base for AR-supporting products like mobile devices and smart glasses will surpass 2.5 billion people, and industry revenue should hit $75 billion. Industry players in the augmented reality world expect 2019 to be marked by a rapid increase in the pace of industrial growth.
The future of AR is bright and it is expected that its growth will increase further with more investments from big tech companies that are realizing the potential of AR.
That’s all for this blog!
Thanks for reading and I hope this blog gave you some new information and insights about augmented reality. Please give your valuable feedback.
Hey, are you interested in starting competitive programming?
Here is a guide, and maybe some motivation, for it.
Why Competitive Programming?
Competitive programming is a mind sport for the software/IT industry. It improves problem-solving in an interactive way and helps you develop algorithms for particular topics. Many tech companies use a competitive-programming round as the first round to filter candidates, and many data-structure and algorithm questions are asked in interviews. It simply helps you become a better problem solver. Some companies even give direct interviews to candidates who excel in certain competitive programming competitions.
Here is a guide if you want to start.
You can use any programming language (no, not HTML; it is not a programming language). Just check whether it is an official ICPC language.
The most preferred are C, C++ and Java. Sometimes I see people running Python code and not getting the green tick (meaning all answers right) because of the time limit (Python has a longer run time than C++ or Java).
But in good competitions, you might not face any such problem.
Google Hash Code finalist and machine-learning expert Andrei Margeloiu says about Java in his article: “It’s slow. But it has the BigInteger class, even if there are very few problems that require using it. If the time limit is tight, you will get Time limit exceeded. Java is not accepted in all competitions.”
Now the choice is yours.
Firstly, write your code like an artist: it should be easy to debug for another person reading it.
Strengthen your basics: do at least 30 problems per topic (50 for arrays), plus implementation problems; in any competition you can find one or two problems based purely on implementation.
For any language basic topics are
Looping (for looping practice, pattern programming is best),
Arrays (for C, C++) (highly recommended),
Structures (for C), classes (for Java, C++, or any object-oriented language),
Bit manipulation (highly recommended)
Some basic tricks:
You can always put
Some basic algorithms
Sorting: bubble sort, quick sort, merge sort; implement them yourself.
Searching: linear search, binary search, ternary search; again, implement them yourself.
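As a sketch, here is binary search written from scratch in Python; the same logic carries over directly to C, C++ or Java:

```python
def binary_search(arr, target):
    """Return the index of `target` in the sorted list `arr`, or -1 if absent.
    Each step halves the search range, so it runs in O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1      # target is in the right half
        else:
            hi = mid - 1      # target is in the left half
    return -1

nums = [2, 3, 5, 7, 11, 13]
print(binary_search(nums, 7))   # 3
print(binary_search(nums, 4))   # -1
```

Implementing it yourself once, including the off-by-one details of `lo`, `hi` and `mid`, pays off far more than calling a library routine blindly.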
Now you can start competing in any competition; your rank might be low, but you can still get a feel for the environment.
You can then start learning library implementations.
For C++, there is the Standard Template Library, with containers like
Vector (mostly used),
List(haven’t used it since I learned it).
In every language such libraries exist.
Library functions like sort, find, reverse, gcd, etc. are handy, but it is better if you practice implementing them yourself first.
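As an illustration (in Python rather than C++), the built-in equivalents of those helpers look like this; knowing the one-liners is handy, but implement each yourself at least once:

```python
import math

nums = [5, 2, 9, 1]

print(sorted(nums))          # [1, 2, 5, 9]   (like C++ std::sort)
print(nums.index(9))         # 2              (like std::find)
print(list(reversed(nums)))  # [1, 9, 2, 5]   (like std::reverse)
print(math.gcd(12, 18))      # 6              (like std::gcd)
```

In a contest these save typing time, but interviewers and harder problems often expect you to know what is happening underneath.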
For competitions, you can use (my preference): HackerEarth, CodeChef, Codeforces.
HackerEarth: good for basics and for practicing a particular topic.
CodeChef: for ICPC-style competitions as well as long-format competitions, three per month.
Codeforces: better for learning good (efficient) approaches, with at least six competitions per month.
Now there are several more things to learn, like efficient algorithms on trees and graphs (depth-first search, breadth-first search), Kadane’s algorithm, shortest-path algorithms for graphs, and segment trees.
Clearly, all of this is sufficient for getting a job, but there are more topics.
In CP, typing speed can make a difference, especially when a question is easy and everyone knows the approach; keep it decent, around 55 to 60 WPM.
Do not focus too much on it, but it can be a tiebreaker.
GOOGLE HASH CODE, ACM ICPC, GOOGLE CODE JAM, CODECHEF SNACKDOWN, and several competitions on HackerEarth, CodeChef and Codeforces.
GOOGLE HASH CODE: Hash Code is a team programming competition, organized by Google, for students and professionals around the world. You pick your team and programming language and we pick an engineering problem for you to solve. This year’s contest kicks off with an Online Qualification Round, where your team can compete from wherever you’d like, including from one of our Hash Code hubs. Top teams will then be invited to a Google office for the Final Round. More details
ACM ICPC (Association for Computing Machinery – International Collegiate Programming Contest): the ACM ICPC is considered the “Olympics of Programming Competitions”. It is, quite simply, the oldest, largest, and most prestigious programming contest in the world.
Google CODEJAM: Code Jam is Google’s longest running global coding competition, where programmers of all levels put their skills to the test. Competitors work their way through a series of online algorithmic puzzles to earn a spot at the World Finals, all for a chance to win the championship title and $15,000. More details
CodeChef SNACKDOWN: SnackDown is a global programming event that invites teams from all over the world to take part in India’s most prestigious multi-round programming competition. Hosted by CodeChef, SnackDown is open to anyone with a knack for programming and began in the year 2009. More details
These competitions are held every month on a specific date/week/time. These competitions help you boost your profile on the respective website by ranking you based on your performance.
CodeChef Long Challenge is a 10-day monthly coding contest where you can show off your computer programming skills. The significance being – it gives you enough time to think about a problem, try different ways of attacking the problem, read the concepts etc. If you’re usually slow at solving problems and have ample time at hand, this is ideal for you. CodeChef
CodeChef Cook-Off is a two-and-a-half-hour coding contest where you can show off your computer programming skills. — CodeChef
A 3-hour challenge conducted in the first week of every month, comprising 6 algorithmic programming problems, held from 21:30 IST to 00:30 IST. — HackerEarth
Circuits take place during the third and fourth weeks of every month. The objective of Monthly Circuits is to challenge talented and creative minds in competitive programming with interesting algorithmic problems. Participants are challenged by multiple problem setters with 8 problems of varying difficulty levels over a duration of 9 days. — HackerEarth
The Last Lesson:
The last lesson: don’t get demotivated by CP (competitive programming). There may be stretches where you don’t show any progress for 3 months. Try as many approaches as you can for a particular problem. And understand that CP is not the only thing in computer science; if you are not interested in it, you can skip it.
That’s all from my side.
It is practice, practice, and practice. But don’t neglect any topic for too long.
I started on 4th December 2017. After eleven months, my team secured fifth rank in our college and first in our year, with an ICPC rank of 534 in the India region. I also earned respectable ranks in several CP competitions held at our college, NIT Surat. And it is fun to see your rank above your friends’ in a competition, and to draw motivation when their rank is better than yours.
Hello peeps!! If you haven’t read Part-1 of the series, then take a look over it for better understanding.
In Part-1, most of the jargon related to Git and GitHub, along with the basic commands, has already been discussed. Still, there is much more to learn, like how to revert changes, what branching is, what merging is, etc.
So hold your coffee and let’s begin. We’ll try to understand them one by one in the simplest form, and don’t worry, we are not going to deal with any PPT, lol!!
We have used the term master branch in the previous article several times. So Let’s discuss it first..
We will try to relate this to a real-life scenario first, and then move on to the technical explanation. Imagine you are working on a team project. In such a project, there are often bugs to be fixed, and sometimes a new feature has to be added. In a small project, it is easy to work directly on the main version, technically the ‘master branch’, but in big projects, doing so creates a high probability that you and your teammates will make conflicting changes. The solution to this is ‘branching’ in git.
For proper understanding, you can think of the main git chain as a tree trunk, technically called the ‘master branch’. Whenever you want to work on a new feature, you can make a separate branch from the trunk, start committing changes on your new branch, and once you think the feature is ready, merge that branch back into the master branch.
Let’s understand this in a more robust way and also discuss the basic commands related to branching.
A branch in git is a special pointer to one of the commits. Every time you make a commit, the branch pointer moves forward to the latest commit.
Another important point: git has a special pointer called HEAD to keep track of the branch you are currently working on.
Let’s create a new branch with the name ‘new-feature’.
git branch new-feature
This will create a new branch(a pointer) on the same commit you were working on.
Note that HEAD is still on the master branch. You need to checkout to switch to the new branch.
git checkout new-feature
NOTE-You can create a new branch and immediately checkout to the new branch by:
git checkout -b new-feature
Let’s start working on the new feature and make a new commit.
Now if we check out to the main branch again and make a new commit, the new-feature branch will not be affected at all. Let’s do this:
git checkout master
After making some changes and committing them:
So, you can see how you can work on a new-feature without disturbing the master branch and once you complete your task on the new-feature you can “merge” that branch into the main branch. Isn’t it amazing that you and your team can work on different features by creating multiple branches and later merging them into master? Hell Yeah!!!
Now Let’s discuss a little bit about merging and basic commands related to it.
Whenever you make a separate branch to work on a feature, you can commit your changes on that branch. But once your task on that feature is complete, you need to merge the branch back into the main codebase (the master branch), and this process is called ‘merging’.
Suppose your task on a new-feature branch is now complete and you want to merge that branch into the master branch. Then firstly checkout to the master branch.
git checkout master
And use the following command:
git merge branchname
*Here, in our case, the branch name is new-feature.
This command merges the changes you made on the new-feature branch into the master branch by creating a new merge commit that records both parent commits. See the picture…
And here comes the bad part….
Merge Conflicts :
When you merge a branch into the master branch, there is a chance you will run into ‘merge conflicts’. Basically, merge conflicts arise when you have changed a line of code that someone else has also changed.
In such a situation, you have to decide manually which version of the code you want to keep; that is, you need to resolve the merge conflicts.
That’s All!!! Thanks for reading the article. The next blog in the series will focus more on using GitHub. Stay Tuned.
Git is just software that tracks the changes in the files of your project. It keeps different versions of your files, so it belongs to a category of software called Version Control Systems (VCS): different versions of your software stay in your control. So if you are a developer, it can help you handle situations like:
Reverting to an older version of your code
Collaborating with the team effectively while working on the same project
The sole purpose of git is to track the changes in your project’s collection of files and keep different versions of it. So, the question is: where does git store these changes made to your project files? Here comes the concept of a repository. It is just a sub-directory in the root directory of the project you are working on, which stores all the information related to changes in your files, plus other useful information like who made the changes and when.
Now suppose you are working on a team project with several members, each making changes on their own machine. Such a situation can’t be handled easily: a great deal of hard work would be required to merge everyone’s changes into a single final project, and there can be conflicts due to differences in files stored on individual members’ machines. So, working this way, we can’t really collaborate on a team project.
Git solves this problem through a ‘remote’. A git remote is a common repository residing on another, central machine, which can be used by the whole team to collaborate on a project.
By now you know what git, a repository, and a remote are. The next thing we are going to discuss is “GitHub”. First of all, there is always confusion between Git and GitHub: are they the same thing or different? For clarification:
— Git is a version control system, a tool to manage versions of your code — GitHub is a hosting service for git repositories.
Another question that may come to your mind is: “How is Git going to track and stage all the changes?” The answer lies in the distinct Git states; let’s tackle them before proceeding.
The basic workflow of git includes the following three stages :
Modified: it is the state when you have made changes in a file, but git is not yet tracking them.
Staged: when you have modified a file or files, you have to tell git to look over and track the file (if it is untracked so far) by taking a snapshot; this snapshot will go into the next commit. So we can say staging is just the marking of a file to go into the next commit.
Committed: it is the state when all the staged changes are stored in the local git repository, or we can say the database.
With that covered, we can continue by creating a git repository on your local machine and then pointing it to GitHub. All you need is git installed on your system and a GitHub account. We will be using Ubuntu for the tutorial, but most of the commands are the same on Windows too. Let’s go!!
Step 1: Configuring Git
*To check the version of git and to make sure git is installed:
git --version
The workshop was intended to excite and inspire minds, and to set off the brainstorming that might help the audience come up with innovations using the 21st-century internet marvel, BLOCKCHAIN, to solve real-world problems.
The prerequisites were null and void; all that was needed was an audience with a little patience and curiosity, which the audience certainly had!
The workshop was segmented into two halves. It started with non-technical points to get launched into the topic and later shifted to tech-based. Key points of the talk were:
Intuitive questions, like why blockchain, what daily-life problems need to be addressed through blockchains, and how blockchains can solve the crises of the current internet such as data tampering and data breaches, were answered to create a vacuum in the audience’s minds for the talk.
The talk then moved to introduce the audience with basic key terminologies like cryptography, hashing, mining, genesis block, ledger, nodes, consensus, etc.
Very carefully, the backbones of blockchain were introduced. The network design concepts that give this technology its key power are: a) distributed systems and b) decentralized systems. How blockchain provides architectural and political decentralization alongside logical centralization was discussed in depth.
All of these give blockchain its major systemic advantages: attack and collusion resistance, fault tolerance, good scalability, etc.
Finally, real-world applications of blockchain were explained and vividly demonstrated. The successful digital currency Bitcoin, how it solved the two major unsolved problems of its time (malicious-activity prevention and double spending), how it works, and some of its basic underlying stories were discussed.
The workshop ended with the display of a few of the projects that instructors made to participate in various hackathons.
Here is the ppt used in the actual workshop. All the references and important links would be uploaded soon, stay tuned!!!!
Are You a New Developer to the Ethereum Ecosystem?
Below is a mix of the main infrastructure tools and knowledge centres that will teach you how to build software on Ethereum. We recommend taking a look through the portal and reading about all the developer tools and options before getting started.
The most used chrome extension wallet and Web 3 provider that allows users to interact with decentralized applications.
Smart Contract Languages
A pythonic programming language for implementing smart contracts. Vyper is also currently beta software.
IDE stands for Integrated Development Environment. IDEs and Editors are what you need to write and test software. They are software suites that consolidate basic tools that are required to start writing on Ethereum. Below are the most popular IDEs and Editors.
Visual Studio Code extension that adds support for Solidity.
Public testnets on Ethereum offer a way for developers to test what they build without putting their creations on the main Ethereum network. Developers can obtain as much ETH as they want on testnets because testnet ETH doesn’t carry any monetary value. Below are the most used testnets to start testing on, and the links for where you can request testnet ETH.
A proof-of-authority blockchain started by the Geth team. Test ether must be requested.
Similar to Public Testnets, Local Testnets are a place for you to test your software without pushing it public. Unlike Public Testnets, the Local Testnet software will only run on your computer/node and other users won’t be able to see it or interact with it.
Fast Ethereum RPC client for testing and development. The command line version of Ganache, your personal blockchain for Ethereum development.
If you want to start developing dapps, you’ll need front-end development skills. Below are the most popular front-end interfaces that will help you turn your dapp from an idea to a live Ethereum mainnet application.
A collection of front-end libraries that make writing decentralized application frontends easier and more predictable. Drizzle provides a Redux library to connect a frontend to a blockchain.
If you want to graduate from just building dapps, you’ll need to start learning and using the backend interfaces listed below. If you’re interested in doing backend/protocol work on Ethereum, you should have significant experience with Go, Rust, Java, .NET, Ruby, or Python. Explore some of the most frequently used backend interfaces below.
A lightweight Java and Android library for integration with Ethereum clients.
Smart Contract Library
You’ve probably used programming libraries before, and these are no different. A smart contract library is a reusable piece of code for a smart contract, deployed once and shared many times. Below are the most used smart contract libraries.
A collection of building blocks for building smart contract systems written in Solidity.
Smart Contract Testing and Deployment
If you are creating a tool, product, or application on Ethereum, you’ll want to make sure your smart contract is in working order before deploying to the mainnet. These tools will help you build, test, and ship your code.
A framework that allows you to easily develop and deploy decentralized applications. Currently integrates with EVM blockchains (Ethereum), IPFS, Swarm, Whisper, and Orbit.
An Ethereum client refers to any node that is able to parse and verify the blockchain, its smart contracts, and everything in between. An Ethereum client also provides interfaces to create transactions and mine blocks which is the key for any Ethereum transaction. Below are the most popular Ethereum clients.
A command line interface for running a full Ethereum node implemented in Go.
Ethereum allows you to save variables or data in permanent storage. The storage platforms below are where all of the smart contract data lives. IPFS is the most commonly used storage system on Ethereum. Explore the platforms below to learn more about how storage on Ethereum works.
A decentralized peer to peer database on top of IPFS.
Ok, so you’ve finally built your dapp or smart contract. But how do you know it was set up correctly and is safe from hackers? The security tools below will help ensure that your code is safe and follows all Ethereum development best practices.
Here are some definitions of API from various resources:
“In computer programming, an application programming interface (API) is a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication between various components.” -[Wikipedia]
“An application program interface (API) is code that allows two software programs to communicate with each other.” -[TechTarget]
Not only these; if you search any website about APIs, they will explain it brilliantly.
But it is only fully understandable by those who have worked with APIs; if you haven’t, it can be difficult to grasp. The explanations are perfect, just not in easy words.
The goal of this blog is simply to make you understand the meaning of API in more easy words.
So let’s begin with an easy example-
Suppose you want to book a train ticket. It is possible to book tickets for the same train through various apps, say the official IRCTC app, Paytm, etc.
Now the main thing you need to understand is how it is possible to book the same seat through two different apps.
Yes, the answer lies in API.
What an API does is simply let you use someone else’s code in your application.
Exactly: Paytm is using the API provided by IRCTC.
Take a look at this perfect video provided by Mulesoft:
I think you are now getting some idea of what an API actually is.
Let’s look at another example. Google is a huge website, and it has written a tall pile of code. This code powers various services like Search, YouTube, Gmail, etc. What if we want to use them?
You must have seen, many websites provide logging in through Google’s Login credentials in their apps. So in a second, you can log in on that third party app using your google account.
So what is actually happening behind this: the third-party app is using Google’s API to provide login. In easy words, they are using Google’s code for the login system, fitting it into their app, and using its features without worrying about what Google has written underneath.
Types of API:
Since this topic is very much wide…
Among all the types mentioned above, we will mainly focus on Web APIs, aka web services.
Web API as the name suggests, is an API over the web which can be accessed using the HTTP protocol. It is a concept and not a technology. We can build Web API using different technologies such as Java, .NET etc. For example, Twitter’s REST APIs provide programmatic access to read and write data using which we can integrate twitter’s capabilities into our own application.
Types Of WEB-APIs:
SOAP was designed back in 1998 by Dave Winer, Don Box, Bob Atkinson, and Mohsen Al-Ghosein for Microsoft Corporation. It was designed to offer a new protocol and messaging framework for the communication of applications over the Web. While SOAP can be used across different protocols, it requires a SOAP client to build and receive the different requests, and it relies heavily on the Web Services Description Language (WSDL) and XML:
Early on, SOAP did not have the strongest support in all languages, and it often became a tedious task for developers to integrate SOAP using WSDL. However, SOAP calls can retain state, something that REST is not designed to do.
Before going to the next type, let’s understand a new term:
RPC — “Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program located on another computer on a network, without having to understand the network’s details.”
Apart from this definition, we can take it as a protocol working on the client–server model, without going much into detail.
On the other hand, Remote Procedure Calls, or RPC APIs, are much quicker and easier to implement than SOAP. XML-RPC was the basis for SOAP, although many continued to use it in its most generic form, making simple calls over HTTP with the data formatted as XML.
However, like SOAP, RPC calls are tightly coupled and require the user to not only know the procedure name, but often the order of parameters as well. This means that developers would have to spend extensive amounts of time going through documentation to utilize an XML-RPC API, and keeping documentation in sync with the API was of utmost importance, as otherwise, a developer’s attempts at integrating it would be futile.
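To make the XML-RPC shape concrete, Python’s standard library can serialize a call for us. The method name `add` and its two arguments below are purely hypothetical; the point is to see how much markup wraps two integers, and how the parameter order is baked into the message:

```python
import xmlrpc.client

# Serialize a hypothetical call add(2, 3) into an XML-RPC request body.
request = xmlrpc.client.dumps((2, 3), methodname="add")
print(request)
```

Running this prints a `<methodCall>` envelope with a `<methodName>` element and one typed `<value>` per positional parameter, which is exactly the verbosity and positional coupling described above.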
JSON was then developed to provide a simple, concise format that could also capture state and data types. Yahoo started taking advantage of JSON in 2005, quickly followed by Google in 2006. Since then, JSON has enjoyed rapid adoption and wide language support, becoming the format of choice for most developers. You can see the simplicity that JSON brought to data formatting, compared to the SOAP/XML format above:
So, JSON presented a marked improvement over XML.
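The same hypothetical add(2, 3) call expressed as JSON shows the difference in weight (the `method` and `params` field names here are made up for illustration):

```python
import json

# The same hypothetical call as a JSON payload: no envelope and no
# per-value type tags; numbers stay numbers when parsed back.
payload = json.dumps({"method": "add", "params": [2, 3]})
print(payload)  # {"method": "add", "params": [2, 3]}
```

One line of data instead of a dozen lines of markup, and `json.loads` recovers the original types directly.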
Now the most popular choice for API development, REST or RESTful APIs were designed to take advantage of existing protocols. While REST can be used over nearly any protocol, it typically takes advantage of HTTP when used for Web APIs. This means that developers do not need to install libraries or additional software in order to take advantage of a REST API. REST also provides an incredible layer of flexibility. Since data is not tied to methods and resources, REST has the ability to handle multiple types of calls, return different data formats.
Unlike SOAP, REST is not constrained to XML, but instead can return XML, JSON, or any other format depending on what the client requests. And unlike RPC, users aren’t required to know procedure names or specific parameters in a specific order.
I think that by now, the question “what the heck is an API?” is somewhat answered. But this description is not enough; there are many things you still need to explore on your own. So keep exploring.
Thanks for reading..
Happy Learning! ..
*Questions and doubts are most welcome in the comments section.
Postman: the best app for checking and analysing Web APIs.
In my previous tutorial, Constructing a Simple Blockchain using PYTHON, I promised to write further about applications of blockchain and their implementation. I will post on that soon. I want to make the next blockchain tutorial as simple as possible, so I will need some time to design it. So, be patient :).
Now, alongside learning blockchain, I was also working on machine learning and deep learning, as they are my core subjects.
On my last blog on blockchain, I received a few comments that the terms were not easy to understand, making the blog difficult to read for readers completely new to programming. That is of course very true, because these technologies have their own glossary.
So I came up with an idea: give you guys a taste of machine learning models in the easiest way possible, to make my blog better. Or you could say that I just trained myself ;P.
“Machine learning is a field of computer science that uses statistical techniques to give computer systems the ability to “learn” with data, without being explicitly programmed.”
Does that help?
I guess not!
My belief is that the way to learn things properly is by actually doing them.
I would love to dip my hands into something worthy rather than sitting and listening to some boring lectures (though they are not that boring, it’s my way of understanding things 😀 ;p).
So, here I present you the best way, that I think is well enough to get you guys a boost start in making and understanding machine learning models.
Before actually getting started, let’s get back to the definition and try to understand it,
“Machine learning is a field of computer science that uses statistical techniques to give computer systems the ability to “learn” with data, without being explicitly programmed.”
Few words to underline:
-ability to “learn”
-without being explicitly programmed
Now, in this tutorial, I will not name any technical term except the ones you need to know, or better to say, the ones that are absolutely required. I think that for those having their first experience with machine learning, it becomes extremely confusing when such “out of their glossary” terms start bombarding them.
So, how can we start understanding the above-underlined terms? How do we actually implement them? How will a machine with zero IQ learn? How will it answer problems that are new to it? And most importantly, how will we train the machine?
I will try to explain it as briefly as I can.
->Statistical means you have previously recorded data: thousands, millions, or even billions of records. E.g. the data of
Occurrences of words in emails marked as SPAM
Various houses and their degree of damage, along with structural information about the houses, etc.
These datasets are used to make a Mathematical Model, which will then be used to predict the answers for the test datasets.
->Ability to “learn” here does not mean the computer gains some human power and starts learning on its own. Naah. This is the thing we just called the mathematical model.
We actually create a mathematical model using the previous datasets and train it on them. In other words, we plot the data using various techniques (in fancy words, machine learning algorithms) based on features (another fancy term), which stand for the properties or information related to the object whose results we are going to predict.
Decision tree Classifier
Neural Networks etc. etc. etc.
😀 Haha.. none of these gives us a clue what they mean, right?
Now before moving forward I would love to illustrate you with some example, you’ll love the way it all works:
Suppose you want to distinguish between an “apple” and an “orange”.
Now, what information do you have about them?
Umm, maybe weight, or color, or maybe different levels of ripeness, as an apple or an orange may have a different weight and color at different ripeness levels.
“Now we have two features: color and weight.”
A mathematical model is created by plotting these properties on a 2D graph, as shown. But that is only possible if we have some numerical representation of each feature.
In this way, we plot them(intuitively), and ready to classify them.
So for new data, we will plot the inputs on this graph: the examples whose points fall above the line will be oranges, and the ones falling below the line will be apples.
This is an example of a simple linear classifier, in which we fit a line to separate the two targets.
And this is how a computer performs without being explicitly programmed.
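The apple-versus-orange idea can be sketched in a few lines of Python. The weights and color values below are made up purely for illustration, with color encoded as a number (near 0.0 for red, near 1.0 for orange) so both features can sit on the 2D graph; the “line” here is just a hand-picked horizontal threshold at color = 0.5:

```python
# Each fruit is (weight_in_grams, color); the numbers are hypothetical.
training_data = [
    ((150, 0.1), "apple"),
    ((170, 0.2), "apple"),
    ((140, 0.9), "orange"),
    ((130, 0.8), "orange"),
]

def classify(weight, color):
    """Points above the line color = 0.5 fall on the 'orange' side,
    points below it on the 'apple' side."""
    return "orange" if color > 0.5 else "apple"

# The hand-picked line separates every training example correctly.
for (weight, color), label in training_data:
    assert classify(weight, color) == label

print(classify(145, 0.85))  # a new, unseen fruit -> orange
```

A real model would learn the position of the separating line from the training data instead of having it hand-picked, but the prediction step, dropping a new point onto the graph and reading off which side it lands on, is exactly this.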
It may happen that you want to do machine learning but don’t want to take a full course on Python. I know of many sources where you can learn enough Python to get going with machine learning.
Starting with Building a Machine Learning model.
Steps : –
Installing required libraries: pandas, scikit-learn, numpy… that’s it for now
Creating python file, importing required libraries and all
Loading the dataset. We could do this with any library, but for now we’ll just use the Iris flower dataset, which is considered the “Hello World” dataset of machine learning; you’ll find it in many places
Exploring our dataset
Making our first model
Printing the accuracy of our model
Testing various models
**Note: Before starting anything, I need you to clone the following repo from the GitHub link to your local PC:
In the GitHub repo given above, you’ll find a file named required.txt; this file lists all the requirements for the project. Just run the following command in your terminal, from inside the repo directory, to install the required packages:
pip install -r required.txt
This will install all the required libraries for our model.
2. Creating a Python file, importing libraries and all
Create a python file of your desired name with .py extension in the repo directory, and open it into your favourite text editor and import required libraries as follows:
import pandas as pd
from sklearn import model_selection
from sklearn.metrics import accuracy_score
# these are various machine learning models already stored in the sklearn library
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
3. Loading the Dataset
Here we shall use the read_csv() function of the pandas library to read our dataset, as follows.
file.head(5) will display the first 5 rows of the dataset.
And do notice, in read_csv we pass header=None; this is used because our dataset does not contain a header row naming the columns. It will look something like this:
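As a sketch of the loading step (a few inline rows stand in here for the real iris CSV file, so the snippet runs without the download):

```python
import io
import pandas as pd

# A handful of rows in the same shape as the iris CSV: four numeric
# feature columns and a class label, with no header row.
csv_text = """5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
6.3,3.3,6.0,2.5,Iris-virginica
5.8,2.7,5.1,1.9,Iris-virginica"""

# header=None because the file has no column headings;
# pandas numbers the columns 0..4 instead.
file = pd.read_csv(io.StringIO(csv_text), header=None)
print(file.head(5))
```

With the full dataset, the only change is passing the CSV file’s path (or URL) to read_csv instead of the StringIO buffer.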
4. Exploring our dataset
Few things before building our model.
Run the following lines to print various information about the dataset we are going to use.
2. Describing data with analytics
3. Printing distribution of class(grouping according to column no 4, as we have seen in point 3.)
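The exploration steps above might look like the following (again with a small stand-in frame built inline; the real notebook runs them on the full iris data):

```python
import pandas as pd

# Stand-in for the loaded iris frame: four feature columns plus the
# class label in column 4, mirroring the header=None layout.
file = pd.DataFrame([
    [5.1, 3.5, 1.4, 0.2, "Iris-setosa"],
    [4.9, 3.0, 1.4, 0.2, "Iris-setosa"],
    [6.3, 3.3, 6.0, 2.5, "Iris-virginica"],
])

print(file.shape)              # rows and columns of the dataset
print(file.describe())         # summary statistics of the numeric columns
print(file.groupby(4).size())  # class distribution (column 4 holds the label)
```

On the full iris data, the groupby line is what shows 50 rows per flower class.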
5. Making our First model
Before making any model and testing data on it, we have a very important step: creating the training and testing datasets separately, one to train the model on and one to test it on.
For this purpose, we have already imported model_selection from sklearn.
-> Splitting dataset into Training and Testing
The following code first converts the dataset into a 2D array, then separates the target column into Y, defines a seed, and finally divides our dataset into training and validation sets.
array = file.values # dataset to a 2d array
X = array[:,0:4] # feature dataset
Y = array[:,4] # target dataset
# validation size is used to take out 0.3 i.e 30% of our dataset into test dataset.
validation_size = 0.30
seed = 5 # a fixed random seed makes the split reproducible
# finally slicing our dataset into training and testing
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
# to test if its sliced properly
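To check the slicing, printing the shapes is enough; with a 30% validation size the row counts should split roughly 70/30. Sketched here on a small synthetic array rather than the real iris arrays:

```python
import numpy as np
from sklearn import model_selection

# Synthetic stand-in: 10 rows with 4 feature columns, and one label per row.
X = np.arange(40).reshape(10, 4)
Y = np.array([0, 1] * 5)

# Same call as in the tutorial: 30% of the rows go to validation.
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(
    X, Y, test_size=0.30, random_state=5)

print(X_train.shape, X_validation.shape)  # (7, 4) (3, 4)
```

The fixed random_state means rerunning the script always produces the same split, so results stay comparable between runs.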
-> Defining and using our model
We will be using a simple Logistic Regression classifier as our model, train it on our dataset, and predict the outcomes.
A few steps: define the model, then fit the model, then predict the output.
model = LogisticRegression()
# fitting our model
model.fit(X_train, Y_train)
# predicting outcomes
predictions = model.predict(X_validation)
print(predictions[:10]) will print the predictions on the validation dataset after the model has been trained on the training dataset.
6. Printing the accuracy of our model
Now, to rate our model, we need to find its accuracy. For this, we need to compare our validation targets to our predicted values. Since we are using a library, we don’t need to calculate it manually; the following command does the job, as we have already imported accuracy_score from sklearn.metrics.
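The comparison is a single call; sketched here with tiny stand-in label lists in place of the real Y_validation and predictions arrays:

```python
from sklearn.metrics import accuracy_score

# Stand-ins for the real validation targets and model predictions.
y_true = ["setosa", "versicolor", "virginica", "setosa", "virginica"]
y_pred = ["setosa", "versicolor", "versicolor", "setosa", "virginica"]

# Fraction of positions where the prediction matches the true label: 4/5.
print(accuracy_score(y_true, y_pred))  # 0.8
```

In the tutorial itself, the same call is accuracy_score(Y_validation, predictions).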
I had the following output when I ran this in my ipython notebook, which I have included in my Github repo.
It is 93.33% accurate.
And now, you are done with your first machine learning model.
Here are various accuracies of different models, we will be learning about in upcoming blogs.
**Please have a look at the ipython nb in the repository. Also, you can comment in the REPOSITORY itself.
So, that’s it with this tutorial blog.
My next blog on machine learning will be quite boring, as I will be explaining some “boring” terms of machine learning. But after reading this blog, you’ll have an easy understanding of those terms, and an intuitive idea of each, which you’ll want if you are after good-quality machine learning.
***Note: if you want, I’ll provide some references about it in the blog.
# Suggestions are welcome, and if there’s any doubt regarding what you have learned from this blog or any other, just get in touch with me by email at firstname.lastname@example.org.