AI and Technology-Facilitated Violence and Abuse
Jane BAILEY, Jacquelyn BURKELL, Suzie DUNN, Chandal GOSSE, et al., “AI and Technology-Facilitated Violence and Abuse”, in Florian MARTIN-BARITEAU and Teresa SCASSA (eds.), Artificial Intelligence and the Law in Canada, Toronto: LexisNexis Canada, 2021.
In English only.
Artificial intelligence (AI) is being used—and is in some cases specifically designed—to cause harms against members of equality-seeking communities. These harms, which we term “equality harms,” have individual and collective effects, and emanate from both “direct” and “structural” violence. Discussions about the role of AI in technology-facilitated violence and abuse (TFVA) sometimes do not address equality harms specifically. When they do, they frequently focus on individual equality harms caused by “direct” violence (e.g., the use of deepfakes to create non-consensual pornography to harass or degrade individual women). Little attention is paid to the collective equality harms that flow from structural violence, including those arising from corporate actions motivated by the drive to profit from data flows (e.g., algorithmic profiling). Addressing TFVA comprehensively means considering equality harms arising from both individual and corporate behaviours. Doing so will require going beyond criminal law reforms that punish “bad” individual actors, since responses focused on individual wrongdoers fail to address the social impact of the structural violence that flows from some commercial uses of AI. Although the harms occasioned by these (ab)uses of AI are, in many cases, the very sort of harms that law has been used to address, existing Canadian law is not well placed to meaningfully address equality harms.
This content was last updated on November 24, 2020 at 11:53 a.m.