Machine learning is a computer’s ability to mimic human learning, improving at a task from data without being explicitly programmed to do so. There’s no shortage of buzzwords associated with it, among them “neural networks” and “deep learning”. Google has built its services so that its systems learn from the data they’re fed, which is probably why every single one of us has contributed to improving its software in one way or another.
A computer can solve counting and logical problems far faster than a human can, limited only by how quickly signals travel through its circuitry. Ask it to add one enormous number to another and it will answer in a fraction of a second, but ask it to perform an ‘intelligent’ task, such as identifying the flaws in a thesis paper, and it will probably fall short of your worst expectations.
It is this glaring weakness of computers that has allowed CAPTCHAs, computer-generated tests, to reliably tell humans apart from machines, preventing automated attacks on networks, stopping bots from creating fake accounts, and blocking large-scale spam attacks on websites.
Massive amounts of data
Thanks to the advent of the transistor, computers that once occupied entire rooms now sit in the palms of our hands as tablets and smartphones, or on our desks as PCs; today we draw the line at anything larger than a desktop computer. The same tech boom has brought the ability to store massive amounts of data. There was once a time when a single megabyte of storage was a luxury only the super rich could afford, or the exclusive domain of scientists and researchers.
Today, the average hard disk on the market has a capacity of 1 TB, with some offerings well beyond 10 TB. The result is that the internet holds a massive wealth of data, estimated at around 1.2 million terabytes in 2013, and it has more than likely passed the giant milestone of 1 zettabyte since. Cisco, a networking company that powers the internet infrastructure of a large portion of the world, claims this figure will likely double by 2019, with much of that growth owed to smartphones, which have become ubiquitous thanks to their falling costs.
A human cannot possibly hope to process even a minuscule portion of this data; that responsibility falls instead to the data-harnessing capabilities of the computer. Several designs have been formulated to let a computer handle all of this data efficiently and make sense of it, some of them borrowing heavily from statistics so that computers can make reliable predictions.
But what has made this process much more streamlined is mimicking the human ability to learn, using the same principles as the human brain; in other words, borrowing from neuroscience has multiplied the usefulness of the computer.
Artificial neural networks and Google Translate case study
Artificial neural networks involve a large number of simple processing units working in tandem, arranged in layers loosely modeled on the human brain. The first layer receives the raw information, each successive layer receives the output of the layer before it, and the last layer produces the system’s final output.
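As a loose illustration of that layered arrangement, the toy network below passes raw input through one hidden layer to a final output. It is a minimal sketch in Python with NumPy; the layer sizes, weights and names are illustrative assumptions, not any production system’s architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative layer sizes: 3 raw inputs -> 4 hidden units -> 1 output.
W_hidden = rng.normal(size=(3, 4))  # first layer: receives the raw information
W_output = rng.normal(size=(4, 1))  # last layer: produces the final output

def sigmoid(z):
    # Squash each value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    hidden = sigmoid(x @ W_hidden)    # each layer feeds the next
    return sigmoid(hidden @ W_output)

x = np.array([0.5, -1.0, 2.0])  # one raw input example
print(forward(x))                # a single value between 0 and 1
```

Real networks “learn” by repeatedly adjusting those weight matrices to reduce their error on training data; here the weights are simply random, which is enough to show how information flows from layer to layer.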
Neural networks are extremely adaptive, changing their behavior based on what they learn and getting ‘smarter’ as more data is fed into them. A prime example of such learning is Google Translate: the more people use Google’s machine translation service, the more accurate it gets. Each round of trial and error sharpens it further, much like a toddler who first learns to crawl, then stand, then walk, and finally run.
It wasn’t really the programmers with fancy computer science degrees who powered Google Translate’s ability to translate languages; it was ordinary users, whose incessant poking fun at its failures to accurately carry one language into another finally gave it a more uncanny, human-like accuracy.
How can we benefit from machine learning?
As machine learning systems slowly become more intelligent, they find use in more and more applications. The rule of thumb is that the more data you supply to a machine learning algorithm, the more effective it gets. The following are just a few of the applications that benefit heavily from machine learning:
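That rule of thumb can be sketched with a deliberately simple learner. The NumPy snippet below (all data synthetic, all numbers illustrative) “trains” a nearest-centroid classifier on two Gaussian clusters and measures its accuracy on a held-out test set as the training set grows; with only a handful of examples the estimated centroids are noisy, while with hundreds they settle near the true cluster centers.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two synthetic Gaussian clusters, one per class (illustrative).
    a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(n, 2))
    b = rng.normal(loc=(3.0, 3.0), scale=1.0, size=(n, 2))
    return np.vstack([a, b]), np.array([0] * n + [1] * n)

def train(X, y):
    # "Training" is just estimating one centroid per class from the data.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    # Predict the class of the nearest centroid, then score the predictions.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

X_test, y_test = make_data(1000)
for n in (2, 10, 500):  # increasing amounts of training data per class
    X_train, y_train = make_data(n)
    print(n, accuracy(train(X_train, y_train), X_test, y_test))
```

The same pattern holds, far less tidily, for the real systems discussed below: more labeled faces, more driving miles, more patient records generally mean better predictions.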
Face detection and recognition
Thanks to advances in machine learning algorithms, your smartphone or digital camera can capture your face more accurately when you smile. Facial recognition software has allowed computers to identify a person from a photo; Facebook is notorious for deploying it, automatically tagging friends and family members as soon as you upload their photos to your wall.
Self-driving vehicles
The next big thing in technological disruption is self-driving vehicles, with Ford and Uber at the forefront. Once autonomous vehicles become widespread, they’ll make our lives a lot easier by making trips faster and more affordable, saving lives and cutting costs. Remember, computers don’t tire; humans do.
Robotic surgery
A prominent example is the Smart Tissue Autonomous Robot (STAR). Using machine learning and 3D sensing, this robot has been able to stitch together pig intestines more effectively than even the most experienced surgeons.
Predicting epidemics
By collecting data from around the world through satellites, historical records, social media, blogs and other sources, neural networks can be used to predict epidemic outbreaks. This is particularly useful in poorer countries, which often lack access to medicine and vital infrastructure.
Solving deadly diseases
IBM’s supercomputer Watson detected a rare form of leukemia in just 10 minutes by swiftly combing through a patient’s genetic data, making a diagnosis that would otherwise have taken doctors at least two weeks. The supercomputer detected a thousand genetic mutations in the patient’s DNA and could promptly tell which of them were responsible for the disease.
As this article shows, machine learning is only going to benefit mankind, not spawn a line of dangerous robots hell-bent on conquering the world. As with anything new, there will always be naysayers, but one should learn to look past the tin-foil hats and see the bigger picture.
by Bobby J Davidson
As the President of Percento Technologies, I provide day-to-day leadership to the company’s senior management and I am personally involved in the strategy, business development and sales activities of the company.
The company was founded in 1999 to provide Business Technology | Anytime | Anywhere for organizations in need of Managed IT Services, Professional IT Consulting, and Infrastructure Projects. We offer a fantastic suite of boutique managed Cloud Servers to choose from, along with Network Cabling and high-end professional website services.
We personalize the IT experience with a team approach, working with clients from diverse industries, including banking, legal, healthcare, energy and corporate business.