It is a common notion nowadays that, with the increasing power of AI, machines may at some point rule over humans. Can that really happen?
Let us look at how machines actually learn and act. Machine learning is commonly divided into a few basic categories:
* Supervised Learning: In supervised learning, machines learn from a given data set with pre-defined correct outputs. For example: deciding whether the weather is good or bad based on parameters such as temperature, humidity, and air flow.
* Unsupervised Learning: In this case, machines process input data without any pre-defined instruction about the correct output. For example: taking a sample of 1000 people and grouping them into segments based on their residential location.
* Reinforcement Learning: In this scenario no labelled training data exists; machines learn continuously from experience, receiving rewards or penalties and updating their actions accordingly. For example: a program learning to play a game through trial and error, improving its strategy with each round.
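To make the three paradigms concrete, here is a minimal Python sketch. All data, thresholds, and reward values are invented for illustration; each function is a deliberately tiny stand-in for a real algorithm (1-nearest-neighbour for supervised learning, 1-D k-means for unsupervised grouping, and a simple value-update rule for reinforcement learning).

```python
# Toy illustrations of the three learning paradigms.
# All numbers below are fabricated for illustration only.

# --- Supervised learning: labelled weather examples ---
# Each example: (temperature in C, humidity in %) -> "good"/"bad".
train = [((25, 40), "good"), ((30, 45), "good"),
         ((10, 90), "bad"),  ((12, 85), "bad")]

def classify(sample):
    """1-nearest-neighbour: copy the label of the closest training example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda ex: dist(ex[0], sample))[1]

# --- Unsupervised learning: group people by location, no labels given ---
def group_by_location(positions, k=2, rounds=10):
    """Tiny 1-D k-means: split values into k clusters around moving centres."""
    centres = positions[:k]
    for _ in range(rounds):
        clusters = [[] for _ in range(k)]
        for p in positions:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

# --- Reinforcement learning: learn action values from reward signals ---
def learn_from_rewards(episodes, lr=0.5):
    """Keep a value estimate per action, nudged toward each observed reward."""
    values = {}
    for action, reward in episodes:
        old = values.get(action, 0.0)
        values[action] = old + lr * (reward - old)
    return values

if __name__ == "__main__":
    print(classify((28, 42)))                        # labelled data decides
    print(group_by_location([1, 2, 3, 40, 41, 42]))  # structure found alone
    print(learn_from_rewards([("left", 0), ("right", 1), ("right", 1)]))
```

Note the difference in what each function is given: the supervised classifier sees correct answers, the clustering function sees only raw values, and the reward learner sees nothing but feedback on its own actions.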
If we look for analogous patterns in human beings, supervised learning resembles the formal education we receive in institutions, with defined content and instructors. Unsupervised learning corresponds to more self-directed ways of learning, where we form judgements on our own. And reinforcement learning means continuously learning from our varied experiences in life.
If we look at ourselves, we find that a major portion of our learning is reinforcement learning, which continues throughout our entire life journey. This learning has no fixed structure or goals, and every human being has his or her own way of observing, adapting, and learning. That creates unique differences among people, and an even greater difference between machines and humans.
Let us consider a particular case. Medical image processing is a common application of AI, where the goal is automated diagnosis through the processing of images such as X-rays, ultrasound, CT scans, and MRI. Machines need a certain set of predefined inputs and outputs to perform such processing; if anything appears beyond that domain, they may fail to identify it. Suppose a pair of scissors is accidentally left inside a patient's body after surgery. In the X-ray image the scissors are clearly visible, so any person can spot them at a glance. A machine learning model, however, may fail to identify them: it was never trained on what scissors look like inside a human body, and may simply treat them as part of the anatomy.
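The failure mode above is the closed-world problem: a classifier can only answer with the classes it was trained on. The sketch below illustrates this with a fabricated single "brightness" feature and invented class centroids; it is not a real imaging pipeline, just the shape of the failure.

```python
# Closed-world classification sketch: the model has no "unknown" answer.
# The feature and centroid values are invented for illustration.

# Pretend each tissue type is summarised by one learned brightness value.
known_classes = {"soft tissue": 0.3, "bone": 0.7}

def diagnose(brightness):
    """Nearest-centroid classifier: always returns some known class."""
    return min(known_classes, key=lambda c: abs(known_classes[c] - brightness))

# A metal scissor is far brighter than anything seen in training (say 0.95),
# but the model is forced to choose among its known classes, so it
# confidently reports the nearest one instead of flagging an anomaly.
print(diagnose(0.95))
```

A safer design adds a reject option: if the input lies too far from every known class, the system flags the image for human review instead of guessing.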
Another recent example was Amazon's recruitment dilemma. Since 2014, Amazon had been using an automated algorithm to review job applicants' résumés. The company's experimental hiring tool used artificial intelligence to give candidates scores ranging from one to five stars. But by 2015, the company realized its new system was not rating candidates for technical posts in a gender-neutral way. The reason: Amazon's computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most of those résumés came from men, a reflection of male dominance across the tech industry. In effect, Amazon's system taught itself that male candidates were preferable. It penalized résumés that included the word "women's", as in "women's chess club captain", and it downgraded graduates of two all-women's colleges. Amazon attempted to edit the programs to make them neutral to these particular terms, but that was no guarantee the machines would not devise other ways of sorting candidates that could prove discriminatory. The project was ultimately disbanded.
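The mechanism behind such bias is simple to demonstrate. The sketch below is entirely fabricated (the résumés, tokens, and scoring rule are not Amazon's): it scores each résumé word by how often it co-occurred with a positive historical outcome, and a skewed history is enough to make the model penalise an innocuous word.

```python
# Sketch of how skewed historical data produces a biased model.
# All resumes and outcomes below are fabricated for illustration.

from collections import Counter

# Tokenised historical resumes with past hiring outcomes (1 = hired).
# The history is mostly male, mirroring the skew described in the article.
history = [
    (["chess", "club", "python"], 1),
    (["python", "football"], 1),
    (["womens", "chess", "club", "python"], 0),
    (["java", "cricket"], 1),
]

def train_token_scores(data):
    """Score each token by the fraction of resumes containing it
    that led to a 'hired' outcome."""
    hired, seen = Counter(), Counter()
    for tokens, outcome in data:
        for t in set(tokens):
            seen[t] += 1
            hired[t] += outcome
    return {t: hired[t] / seen[t] for t in seen}

scores = train_token_scores(history)
# The model has "learned" to penalise the word "womens" -- not because
# the word matters, but because the historical outcomes were skewed.
print(scores["womens"], scores["python"])
```

Removing the offending word does not fix the model, because other correlated tokens can act as proxies for the same signal; this is exactly why neutralising specific terms offered no guarantee.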
So, as the underlying theory and these practical examples suggest, it may still take considerable time for machines to approach the intelligence, analytical skill, and critical thinking of human beings. Can that really happen, and if it does, what will the impact be? We shall discuss that in the next part of this article series.