Regulation Law: Artificial intelligence has biases

Source: IBRAEDP | Date: April 24, 2017

A striking study published in April 2017 in the journal Science shows that artificial intelligence can also suffer from biases and lean toward the formation of stereotypes. Building on the Implicit Association Test (IAT) and, more importantly, on a test called the "word-embedding association test (WEAT)", the authors reveal how programs that rely on algorithms reproduce problematic associations inherited from human culture. By all indications, algorithms do not free themselves from the formation of "semantic biases" and thus from gender and racial stereotypes, for example. The study is titled "Semantics derived automatically from language corpora contain human-like biases." The authors are Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. The text was published in Science on April 14, 2017, pp. 183-186.
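For readers curious about how the WEAT actually measures these "semantic biases," the test compares two sets of target words (for example, flowers vs. insects, or two groups of first names) against two sets of attribute words (pleasant vs. unpleasant terms) using cosine similarity between word embeddings. The sketch below is a minimal illustration of that statistic, assuming NumPy and an arbitrary dictionary of pretrained word vectors; it is not the authors' published code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vectors):
    """s(w, A, B): mean similarity of word w to attribute set A minus attribute set B."""
    return (np.mean([cosine(vectors[w], vectors[a]) for a in A])
            - np.mean([cosine(vectors[w], vectors[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vectors):
    """WEAT effect size: standardized difference in association between target sets X and Y."""
    x_assoc = [association(x, A, B, vectors) for x in X]
    y_assoc = [association(y, A, B, vectors) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std
```

Given embeddings for the target and attribute words, a positive effect size indicates that the first target set is more strongly associated with the first attribute set than the second target set is, mirroring the bias patterns the IAT measures in humans.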


Here is the abstract: "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology."
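As a concrete illustration of the abstract's point about "the status quo distribution of gender with respect to careers," one can inspect cosine similarities between occupation words and gendered pronouns in publicly available embeddings. The snippet below is a hypothetical example assuming the gensim library and its downloadable GloVe model "glove-wiki-gigaword-100"; the specific word lists and the resulting numbers are illustrative only.

```python
import numpy as np
import gensim.downloader as api

# Downloads pretrained GloVe vectors on first use (hypothetical model choice).
vectors = api.load("glove-wiki-gigaword-100")

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Compare how strongly each career word associates with male vs. female pronouns.
for career in ["programmer", "engineer", "nurse", "librarian"]:
    male_sim = cosine(vectors[career], vectors["he"])
    female_sim = cosine(vectors[career], vectors["she"])
    print(f"{career:>10}: he={male_sim:.3f}  she={female_sim:.3f}")
```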

In the article, the authors show "that standard machine learning can acquire stereotyped biases from textual data that reflect everyday human culture. (...)". They conclude: "Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations (...) We recommend addressing this through the explicit characterization of acceptable behavior. One such approach is seen in the nascent field of fairness in machine learning, which specifies and enforces mathematical formulations of nondiscrimination in decision-making (19, 20). Another approach can be found in modular AI architectures, such as cognitive systems, in which implicit learning of statistical regularities can be compartmentalized and augmented with explicit instruction of rules of appropriate conduct (21, 22). Certainly, caution must be used in incorporating modules constructed via unsupervised machine learning into decision-making systems".
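The "mathematical formulations of nondiscrimination" mentioned in the passage refer to fairness criteria such as demographic parity or equalized odds studied in the fairness-in-machine-learning literature. A minimal, hypothetical sketch of one such check, demographic parity for a binary classifier's decisions, assuming only NumPy arrays of decisions and a protected-group indicator, could look like this:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred : array of 0/1 model decisions
    group  : array of 0/1 protected-attribute indicators
    A gap near 0 means both groups receive positive decisions at similar rates.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: decisions for 8 applicants, 4 in each group.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```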