Algorithmic accountability

Algorithmic accountability refers to the question of how responsibility should be apportioned for the consequences of real-world actions taken on the basis of decisions reached by algorithms.[1]

In principle, an algorithm should be designed so that no bias enters the decisions made during its execution. That is, the algorithm should evaluate only the essential characteristics of its inputs, without drawing distinctions based on attributes that normally should not be used in a social setting, such as the ethnicity of an individual being judged in a court of law. In practice, however, this principle is not always respected, and individuals are on occasion harmed by the resulting decisions. At that point a debate arises over who should be held responsible for the losses caused by a machine-made decision: the system itself, or the individuals who designed it with those parameters, since a decision that harms others through a lack of impartiality or through incorrect data analysis does so because the algorithm was designed to behave that way.[2]
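
For illustration, the principle can be made concrete in code. The following minimal sketch in Python is hypothetical (the approve_credit rule and all field names are invented for this example); it shows a decision rule designed so that a protected attribute can never enter the decision logic:

    # Minimal sketch (hypothetical): a decision rule that by construction
    # cannot read a protected attribute such as ethnicity.
    ESSENTIAL = ("income", "debt")   # characteristics the rule may read
    PROTECTED = {"ethnicity"}        # characteristics it must never read

    def approve_credit(applicant: dict) -> bool:
        # Restrict the rule's view of the input to the essential fields only.
        seen = {k: applicant[k] for k in ESSENTIAL}
        assert not (PROTECTED & seen.keys())  # a protected field never enters the rule
        return seen["income"] > 2 * seen["debt"]

    # The outcome is the same regardless of the protected field's value:
    print(approve_credit({"income": 4000, "debt": 1500, "ethnicity": "any"}))  # True

If the rule instead read the protected field, any resulting harm would trace back to a design choice, which is precisely where the accountability question arises.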

Algorithm usage

Algorithms are now deployed across the most diverse sectors of society whose control systems involve computational techniques, at many scales and in many applications, including but not limited to medical, transportation, and payment services.[3] In these sectors, the algorithms embedded in applications perform tasks such as:[4]

  • Approving or denying credit card applications;
  • Counting votes in elections;
  • Approving or denying immigrant visa applications;
  • Deciding which taxpayers will be audited on their income taxes;
  • Managing the systems that control self-driving cars on a highway;
  • Scoring individuals as potential future criminals for use at trial.

How these algorithms are implemented, however, can be opaque. In practice, algorithms often behave as black boxes: in most cases the process that an input undergoes during the execution of a particular routine is unknown, and only the output tied to the original input can be observed.[5] In general, nothing is known about the parameters that make up an algorithm or about how biased toward certain aspects they may be, which can raise suspicion about how the algorithm treats a set of inputs. Whether such suspicion arises depends on the outputs generated and on whether some individual feels harmed by the result, especially when another individual in similar circumstances receives a different answer. According to Nicholas Diakopoulos:

But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner “thoughts” hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes
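
One of the few things an outside observer can do with such a black box is probe it: submit inputs that differ in a single attribute and compare the outputs. The sketch below is hypothetical; black_box_score stands in for any opaque third-party scoring service, with a dummy formula so the example runs end to end:

    def black_box_score(record: dict) -> float:
        """Stand-in for an opaque scoring service: only its output is observable.
        (A dummy formula is used here so the sketch is executable.)"""
        return 1.0 if record.get("income", 0) > 50_000 else 0.4

    def paired_probe(record: dict, attribute: str, value_a, value_b) -> float:
        """Score gap between two inputs that differ only in one attribute."""
        a = {**record, attribute: value_a}
        b = {**record, attribute: value_b}
        return black_box_score(a) - black_box_score(b)

    base = {"income": 60_000, "neighborhood": "A"}
    gap = paired_probe(base, "neighborhood", "A", "B")
    print(f"score gap: {gap:+.2f}")

A persistent nonzero gap across many such pairs would suggest that the hidden model treats the attribute as a decision factor, even if that is never disclosed.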

Wisconsin Supreme Court case

As mentioned above, algorithms are widespread across the most diverse fields and make decisions that affect the lives of the population at large, yet their structure and parameters are often unknown to those affected by them. A ruling of the Wisconsin Supreme Court on so-called "risk assessment" scores in criminal justice illustrates this well.[3] The court held that such a score, computed by an algorithm from various parameters about an individual, cannot be used as a determinative factor in deciding whether a defendant is incarcerated. It further held, more importantly, that all reports submitted to judges in such cases must include information about the accuracy of the algorithm used to calculate the scores.
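
A report satisfying such a disclosure requirement might be structured as in the following sketch (hypothetical; the field names and figures are illustrative and are not drawn from the case itself), in which the score never travels without its accuracy and caveats:

    # Minimal sketch (hypothetical): a risk-score report that carries the
    # accuracy disclosure the ruling requires, instead of a bare number.
    from dataclasses import dataclass

    @dataclass
    class RiskReport:
        score: float                # the algorithm's output for one defendant
        model_version: str          # which model produced the score
        validation_accuracy: float  # measured on held-out historical cases
        caveats: tuple              # known limitations, disclosed up front

    report = RiskReport(
        score=6.2,                  # illustrative values, not from the case
        model_version="2016-08",
        validation_accuracy=0.68,
        caveats=("not validated on the local population",
                 "must not be the determinative factor in the decision"),
    )
    print(report)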

This ruling has been regarded as a major victory for how a data-driven society should deal with decision-making software and how to make such systems reliable, since the use of these algorithms in highly sensitive settings such as courts requires a very high degree of impartiality in the treatment of the input data. Advocates of big data, however, argue that much remains to be done about the accuracy of algorithmic results, since there is still nothing concrete on how to understand what happens during data processing, leaving room for doubt about the suitability of the algorithms or of those who design them.[citation needed]

Controversies

Another case of possibly biased algorithmic behavior was the subject of an article in The Washington Post[6] on the ride-hailing service Uber. Analysis of the collected data showed that estimated waiting times for users of the service varied with the neighborhood in which they lived, the main factors associated with longer waits being the neighborhood's majority ethnicity and its average income.

Neighborhoods with majority-white populations and higher purchasing power tended to have shorter waiting times, while neighborhoods with other majority ethnicities and lower average incomes tended to have longer ones. It should be made clear, however, that this conclusion rests on the collected data and does not necessarily represent a cause-and-effect relationship, only a possible correlation; no value judgment is made here about the behavior of the Uber app in these situations.

In an article published in the column "Direito Digit@l" on the Migalhas website,[7] Coriolano Almeida Camargo and Marcelo Crespo discuss the use of algorithms in decision-making contexts previously occupied by human beings, and the difficulty of verifying whether machine-made decisions are fair:

The issue goes beyond, and will continue to go beyond, the concern with how data is collected from consumers, to the question of how this data is used by algorithms. Despite the existence of some consumer protection regulations, there is no effective mechanism available to consumers that tells them, for example, whether they have been automatically discriminated against by being denied a loan or a job.

The great technological evolution we are experiencing has brought a wide range of innovations to society, among them autonomous vehicles. These vehicles are controlled by embedded algorithms that manage the entire process of navigating streets and roads, facing situations in which they must collect data, evaluate the surrounding environment and context, and decide which actions to take at each moment, simulating the actions of a human driver behind the wheel.

In the same article, Camargo and Crespo discuss the problems that may arise from the use of embedded algorithms in autonomous cars, especially with regard to decisions made at critical moments while the vehicles are in use:

The technological landscape is rapidly changing with the advent of very powerful computers and algorithms that are moving toward the impressive development of artificial intelligence. We have no doubt that artificial intelligence will revolutionize the provision of services and industry as well. The problem is that ethical issues urgently need to be thought through and discussed. Or are we simply going to allow machines to judge us in court cases? Or to decide who should live or die in accident situations in which some technological device, such as an autonomous car, could intervene?

On the TechCrunch website, Hemant Taneja wrote:[8]

Concern about “black box” algorithms that govern our lives has been spreading. New York University’s Information Law Institute hosted a conference on algorithmic accountability, noting: “Scholars, stakeholders, and policymakers question the adequacy of existing mechanisms governing algorithmic decision-making and grapple with new challenges presented by the rise of algorithmic power in terms of transparency, fairness, and equal treatment.” Yale Law School’s Information Society Project is studying this, too. “Algorithmic modeling may be biased or limited, and the uses of algorithms are still opaque in many critical sectors,” the group concluded.

Possible solutions

Experts have already held discussions seeking viable ways to understand what goes on inside the black boxes that "guard" these algorithms. A primary proposal is that the companies that develop the code and run the data analysis algorithms should themselves be responsible for ensuring the reliability of their systems, for example by disclosing what goes on "behind the scenes" in their algorithms.

In the same article, Taneja wrote:[8]

...these new utilities (the Googles, Amazons and Ubers of the world) must proactively build algorithmic accountability into their systems, faithfully and transparently act as their own watchdogs or risk eventual onerous regulation.

As the excerpt above suggests, one possible path is regulation of the computing sectors that run these algorithms, so that the activities occurring during their execution are effectively supervised. Such regulation, however, could burden software companies and developers, and it would likely be more advantageous for them to open up willingly and disclose what is being executed and which parameters are used for decision-making; this could even benefit the companies themselves by building confidence in how their solutions work.
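
What such voluntary disclosure might look like in practice is sketched below (hypothetical; the loan-scoring rule and the audit-record format are invented for this example). Every automated decision is emitted together with the inputs, weights, and threshold that produced it, so that it can be inspected after the fact:

    # Minimal sketch (hypothetical): emitting an audit record with every
    # automated decision, so outsiders can later verify what was executed.
    import json
    import time

    def decide_and_disclose(applicant: dict, threshold: float = 0.5) -> bool:
        # The decision rule itself: two disclosed inputs, two disclosed weights.
        score = 0.7 * applicant["repayment_rate"] + 0.3 * applicant["savings_rate"]
        decision = score >= threshold
        audit_record = {
            "inputs_used": ["repayment_rate", "savings_rate"],  # and nothing else
            "weights": [0.7, 0.3],
            "threshold": threshold,
            "score": round(score, 3),
            "decision": decision,
            "timestamp": time.time(),
        }
        print(json.dumps(audit_record))  # in practice: append to a tamper-evident log
        return decision

    decide_and_disclose({"repayment_rate": 0.9, "savings_rate": 0.2})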

Another possibility under discussion is self-regulation by the developer companies themselves, implemented through software.[8]

Taneja further wrote:[8]

There’s another benefit — perhaps a huge one — to software-defined regulation. It will also show us a path to a more efficient government. The world’s legal logic and regulations can be coded into software and smart sensors can offer real-time monitoring of everything from air and water quality to traffic flows and queues at the DMV. Regulators define the rules, technologists create the software to implement them and then AI and ML help refine iterations of policies going forward. This should lead to much more efficient, effective governments at the local, national and global levels.
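
The "legal logic coded into software" that Taneja describes can be pictured as in the following sketch (hypothetical; the pollutant limit, rule, and sensor feed are all illustrative), in which a regulatory limit is an executable rule applied to real-time sensor readings:

    # Minimal sketch (hypothetical): a regulation expressed as executable code,
    # checked continuously against sensor readings.
    from typing import Iterable, Iterator

    PM25_LIMIT = 35.0   # illustrative air-quality limit, in µg/m³

    def check_air_quality(readings: Iterable[float],
                          limit: float = PM25_LIMIT) -> Iterator[dict]:
        """Yield a violation event for every sensor reading over the coded limit."""
        for hour, value in enumerate(readings):
            if value > limit:
                yield {"hour": hour, "value": value, "limit": limit}

    sensor_feed = [12.0, 18.5, 41.2, 36.9, 22.0]   # stand-in for a live feed
    for violation in check_air_quality(sensor_feed):
        print("violation:", violation)             # regulator is notified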

References

  1. ^ Shah, H. (2018). "Algorithmic accountability". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2128): 20170362. Bibcode:2018RSPTA.37670362S. doi:10.1098/rsta.2017.0362. PMID 30082307. S2CID 51926550.
  2. ^ Kobie, Nicole. "Who do you blame when an algorithm gets you fired?". Wired. Retrieved March 2, 2023.
  3. ^ a b Angwin, Julia (August 2016). "Make Algorithms Accountable". The New York Times. Retrieved March 2, 2023.
  4. ^ Kroll; Huey; Barocas; Felten; Reidenberg; Robinson; Yu (2016). Accountable Algorithms. University of Pennsylvania. SSRN 2765268.
  5. ^ "Algorithmic Accountability & Transparency". Nick Diakopoulos. Archived from the original on January 21, 2016. Retrieved March 3, 2023.
  6. ^ Stark, Jennifer; Diakopoulos, Nicholas (March 10, 2016). "Uber seems to offer better service in areas with more white people. That raises some tough questions". The Washington Post. Retrieved March 2, 2023.
  7. ^ Santos, Coriolano Aurélio de Almeida Camargo; Chevtchuk, Leila (October 28, 2016). "Por quê precisamos de uma agenda para discutir algoritmos?". Migalhas (in Portuguese). Retrieved March 4, 2023.
  8. ^ a b c d Taneja, Hemant (8 September 2016). "The need for algorithmic accountability". TechCrunch. Retrieved March 4, 2023.

Bibliography

  • Kroll, Joshua A.; Huey, Joanna; Barocas, Solon; Felten, Edward W.; Reidenberg, Joel R.; Robinson, David G.; Yu, Harlan (2016). Accountable Algorithms. University of Pennsylvania Law Review, Vol. 165. Fordham Law Legal Studies Research Paper No. 2765268.