
The paradox of Artificial Intelligence (Part 4)

One of the key debates currently going on around AI implementation is: should there be a “right to object” to any decision taken by an AI system?

Let us elaborate further. If you receive a service or output from an AI system, should you have the option to object to that decision and ask for human intervention? For example, in the previous part of this article series we saw the problem Amazon faced when it deployed automated screening for its job applicants. Its AI engine, biased by historical data containing very few female candidates, treated that imbalance as the standard pattern and started discarding female applicants. Now suppose an eligible applicant gets rejected through this process. Can she file a complaint against this and ask for a human evaluation?

Ideally we would say that she must get this option. However, a few factors need to be considered here. Firstly, before making such a complaint the applicant needs to know whether her application was evaluated by an AI system. So it needs to be stated somewhere (particularly at the point where the application is submitted) that the initial evaluation will be AI based.

Secondly, should everyone be allowed to raise such an objection? In an ideal scenario we may say: yes, of course, why not? However, we need to consider the practical aspects. There can be thousands of applicants, and obviously not all rejections happen because of bias in the AI engine; there can be plenty of valid rejections as well. If a blanket option to complain is kept open for everyone who receives a rejection, the system may eventually be bombarded with complaints, and that becomes a serious manageability challenge.

Similar scenarios can arise in other contexts, such as applying to a bank for a home or car loan, or applying for a government service. Rejection is never a pleasant experience, so a rejected applicant naturally deserves the right to know about, or object against, such an outcome.

So it is essential to strike a balance here: there should be an option to raise an objection, while ensuring that the provision does not get misused. A few solutions are possible. For example, before submitting the application, the applicant can be offered two options. Option one is automated evaluation by AI, a quick process without any cost. Option two is manual evaluation by a human, a comparatively slower process that may involve some cost. This clarifies the trade-off to applicants beforehand and guides them to choose the right option for their situation.
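The two-track intake described above can be sketched as a tiny routing function. This is purely illustrative: the class and function names, the processing times, and the fee amount are all assumptions, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class EvaluationRequest:
    applicant_id: str
    mode: str  # "ai" (fast, free) or "human" (slower, may carry a fee)

def route_application(request: EvaluationRequest) -> dict:
    """Route an application to the evaluation track the applicant chose,
    making the trade-offs (speed vs. cost) explicit up front."""
    if request.mode == "ai":
        # Option one: automated screening, quick and free of cost
        return {"queue": "ai_screening", "expected_days": 1, "fee": 0}
    if request.mode == "human":
        # Option two: manual review, slower and possibly paid
        # (the 14-day estimate and 25-unit fee are made-up placeholders)
        return {"queue": "manual_review", "expected_days": 14, "fee": 25}
    raise ValueError(f"Unknown evaluation mode: {request.mode}")
```

The point of such an explicit choice is that the applicant has consented to AI evaluation up front, which narrows later objections to genuine grievances rather than blanket complaints.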

Also, anyone who wants to object to a decision made by an AI engine should follow a defined process within a designated period of time. One question here is: to whom should the objection be raised? Will it be a common platform run by the government or judiciary (much like the existing courts, drug administrations or consumer protection bodies) handling all such complaints across domains? Or will only the corresponding organization remain responsible for dealing with such complaints? There is no straightforward answer, as the scenario may vary from context to context. It can be either option, or a combination of both.
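The time-bound objection process above can likewise be sketched in a few lines. The 30-day window and the handler names are hypothetical assumptions chosen only to illustrate the "designated period plus defined recipient" idea.

```python
from datetime import date, timedelta

# Assumed objection window; real regulations would set this value.
OBJECTION_WINDOW_DAYS = 30

def file_objection(decision_date: date, today: date,
                   use_common_platform: bool) -> str:
    """Accept an objection only within the designated window, then route
    it either to a common government platform or to the organization
    that made the decision."""
    if today - decision_date > timedelta(days=OBJECTION_WINDOW_DAYS):
        return "rejected: objection window expired"
    handler = "government_platform" if use_common_platform else "organization"
    return f"accepted: routed to {handler}"
```

A hybrid arrangement, as the article suggests, could route first to the organization and escalate unresolved cases to the common platform; the sketch keeps only the simplest either/or form.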

No system is perfect. In fact we humans are never free from bias and make plenty of wrong judgements throughout our lives. So it is obviously not justified to expect an AI platform to be 100% perfect right from day one. It will gradually improve by learning from experience, and especially from its mistakes. So it is essential that, rather than getting too worried, we treat the mistakes of AI as continuous opportunities for improvement.
