{"id":1335,"date":"2024-07-12T11:16:13","date_gmt":"2024-07-12T11:16:13","guid":{"rendered":"https:\/\/resources.illc.uva.nl\/illc-blog\/?p=1335"},"modified":"2024-11-07T16:47:45","modified_gmt":"2024-11-07T16:47:45","slug":"going-beyond-a-mathematical-investigation-of-bias","status":"publish","type":"post","link":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/","title":{"rendered":"Going beyond a mathematical investigation of bias"},"content":{"rendered":"\n<p>12 juli 2024, Oskar van der Wal<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"768\" src=\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992-1024x768.jpg\" alt=\"\" class=\"wp-image-1337\" srcset=\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992-1024x768.jpg 1024w, https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992-300x225.jpg 300w, https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992-768x576.jpg 768w, https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg 1183w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>A version of this blog post first appeared on&nbsp;<a href=\"https:\/\/odvanderwal.nl\/2023\/positioning-bias\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/odvanderwal.nl\/2023\/positioning-bias<\/a>.<\/em><\/p>\n\n\n\n<p>When researchers study how&nbsp;<a href=\"https:\/\/certain-ai.nl\/research-themes\/bias\/\">biased<\/a>&nbsp;<a href=\"https:\/\/certain-ai.nl\/research-themes\/language-model\/\">language models<\/a>&nbsp;are, they generally approach this in a mathematical or statistical way. 
For example, they could say that ChatGPT is gender biased if,&nbsp;when asked to write a story about a CEO or a nurse,&nbsp;it writes about a male CEO or a female nurse more than 50% of the time. Another way in which the study of bias can be very mathematical is when researchers look inside the language model using&nbsp;<a href=\"https:\/\/certain-ai.nl\/research-themes\/explainable\/\">interpretability<\/a>&nbsp;methods to see how it represents this biased information internally.<\/p>\n\n\n\n<p>While this is a useful way to better understand why language models, like ChatGPT, may be gender biased, it is also useful to take a step back and to consider bias in NLP from a broader perspective. The analysis of bias is incomplete if we ignore the ethical questions and the sociotechnical context. Both the technical details of the model and the social aspects (the designers, users, stakeholders, historical and cultural context, company goals, etc.) are important to consider! In this blog post, we\u2019ll discuss three such considerations that go beyond a mathematical approach:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>algorithmic bias is a sociotechnical problem,<\/li>\n\n\n\n<li>society is constantly changing and so is our conceptualization of bias,<\/li>\n\n\n\n<li>algorithmic bias is not&nbsp;<em>simply<\/em>&nbsp;a reflection of the data\/society.<\/li>\n<\/ol>\n\n\n\n<h3 class=\"wp-block-heading\">1. Bias is a sociotechnical problem<\/h3>\n\n\n\n<p>When should we consider the gender bias of an AI system as harmful? The implicit assumption in the AI debate is generally that we should aim for gender-neutral behavior, which is based on the idea that not differentiating between the genders defines or constitutes fair behavior. 
However, whether this is the case might strongly depend on the particular task we want the&nbsp;system to perform and its sociotechnical context (i.e., both technical and social aspects are important to consider).<\/p>\n\n\n\n<p>For example, in translation, we might want an AI system to consider the (grammatical) gender of the subject when&nbsp;translating a text, but not when&nbsp;<a href=\"https:\/\/www.siliconrepublic.com\/careers\/amazon-ai-hiring-tool-women-discrimination\">assessing the competency of job candidates while automatically filtering resumes<\/a>! (In fact, whether we should want to use AI for automating these tasks is another question entirely.)<\/p>\n\n\n\n<p>Our perspective may also change if we do not see the bias of an AI system in isolation, but as situated in the broader practices it is part of: We may find that individual examples of bias do not paint a full picture of the structural bias&nbsp;of the institutions, businesses, or organizations making use of it. Why does an AI system assign higher competency scores to the resumes of people more similar to the ones already working in the company? Is it because they are truly more competent, or is the training dataset skewed for historical reasons, and would more diversity actually benefit the company? In this light, we might even have to consider adding a counteracting bias to create equal opportunities for different subgroups in a population, compensating for the disadvantages these groups face.<\/p>\n\n\n\n<p>Not all bias is unwanted, and there might be contexts in which we need it to reach certain goals. To formulate the (moral) standards for an AI system, we need to look at the broader context in which it functions, understand the way the AI system interacts with this environment, and consider how the entire system might contribute to unfairness or cause harm to particular groups or individuals. 
This also means that the current paradigm for analyzing bias in NLP is perhaps inadequate:&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2111.15366\" target=\"_blank\" rel=\"noreferrer noopener\">Raji et al. (2021)<\/a>&nbsp;make a compelling argument that benchmarks for evaluating AI systems are fundamentally limited, as these consist of decontextualized examples.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2. Society is constantly changing and so is \u201cbias\u201d<\/h3>\n\n\n\n<p>Ideally, the discussion about the norms and standards of a particular AI application is resolved before the build starts. But what counts as unfair or harmful behavior is not a stable societal factor that we can align our AI systems with. It is constantly changing as the debate in society progresses, and a definitive solution to the bias problem is therefore simply impossible. Even worse, new biases can emerge if our AI systems do not adjust to such changes (<a href=\"https:\/\/doi.org\/10.1145\/230538.230561\" target=\"_blank\" rel=\"noreferrer noopener\">Friedman and Nissenbaum, 1996<\/a>;&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noreferrer noopener\">Bender et al., 2021<\/a>). This concern is especially apparent for very large language models, which are expensive to train and therefore reused for many downstream tasks (<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noreferrer noopener\">Bender et al., 2021<\/a>).<\/p>\n\n\n\n<p>Moreover, given the various applications that could make use of language technology, there is no way to have standards that fit them all. 
However, we can be transparent and detailed about the way a particular model is trained, including the dataset, so that this information is available in case of model transfer (<a href=\"https:\/\/doi.org\/10.1145\/3287560.3287596\" target=\"_blank\" rel=\"noreferrer noopener\">Mitchell et al., 2019<\/a>;&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.463\" target=\"_blank\" rel=\"noreferrer noopener\">Bender and Koller, 2020<\/a>;&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/fullHtml\/10.1145\/3458723?casa_token=TU9HFtTyB88AAAAA:J97L_b3qNCzO-r8MLt2yVnE9D6P4LwXsbuJDlhbUfYM1aTVT9oduXJq-reWE_zWaFw5DtrQyl-gH\" target=\"_blank\" rel=\"noreferrer noopener\">Gebru et al., 2021<\/a>;&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" target=\"_blank\" rel=\"noreferrer noopener\">Bender et al., 2021<\/a>). Furthermore, we need to develop technologies that allow us to counteract biases in the system whenever they do matter for the downstream task. But for this, we also need a clear understanding of how this bias comes about in the first place.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3. Algorithmic bias does not only reflect pre-existing bias<\/h3>\n\n\n\n<p>A popular argument in the AI community is that the bias of a deep neural model simply reflects pre-existing biases that are present in the training data. However, we should not neglect the responsibility we have in designing and implementing these AI systems: many forms of bias can emerge at the different stages of creating and deploying language technology (see&nbsp;<a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/abs\/10.1111\/lnc3.12432\" target=\"_blank\" rel=\"noreferrer noopener\">Hovy and Prabhumoye, 2021<\/a>). Others have even pointed out that biased algorithms can change the world in profound ways. For example,&nbsp;<a href=\"https:\/\/proceedings.mlr.press\/v81\/ensign18a.html\" target=\"_blank\" rel=\"noreferrer noopener\">Ensign et al. 
(2018)<\/a>&nbsp;show how biased policing algorithms could result in more policing of certain neighborhoods, which in turn feeds back into new data reinforcing the earlier bias, leading to a \u2018runaway feedback loop\u2019.<\/p>\n\n\n\n<p>Language technology does not merely reflect society; its implementations can be a part of it and even change it in unexpected ways. A well-known theme in the&nbsp;<a href=\"https:\/\/www.futurelearn.com\/info\/courses\/philosophy-of-technology\/0\/steps\/26311\" target=\"_blank\" rel=\"noreferrer noopener\">philosophy of technology<\/a> is that technologies \u2018mediate\u2019 our experiences and shape our world-view of \u201chow to live\u201d (<a href=\"https:\/\/www.psupress.org\/books\/titles\/0-271-02539-5.html\" target=\"_blank\" rel=\"noreferrer noopener\">Verbeek, 2005<\/a>). Machine translation systems may dictate a world-view primarily of men, with women restricted to stereotypical occupations (<a href=\"https:\/\/www.humanamente.eu\/index.php\/HM\/article\/view\/307\" target=\"_blank\" rel=\"noreferrer noopener\">Wellner, 2020<\/a>), and search engines that only show men for the keyword \u201cCEO\u201d similarly shape our image of the archetypal business leader. Or consider the following example from&nbsp;<a href=\"https:\/\/unesdoc.unesco.org\/ark:\/48223\/pf0000367823\" target=\"_blank\" rel=\"noreferrer noopener\">this UNESCO\/COMEST report from 2019<\/a>:<\/p>\n\n\n\n<p>\u201cThe \u2018gendering\u2019 of digital assistants, for example, may reinforce understandings of women as subservient and compliant. Indeed, female voices are routinely chosen as personal assistance bots, mainly fulfilling customer service duties, whilst the majority of bots in professional services such as the law and finance sectors, for example, are coded as male voices. 
This has educational implications with regards to how we understand \u2018male\u2019 vs \u2018female\u2019 competences, and how we define authoritative versus subservient positions.\u201d<\/p>\n\n\n\n<p>How we define and measure bias may also influence how we view bias itself. In the context of fairness metrics, Jacobs and Wallach (<a href=\"https:\/\/doi.org\/10.1145\/3442188.3445901\" target=\"_blank\" rel=\"noreferrer noopener\">2021<\/a>) refer to \u2018consequential validity\u2019, the fact that \u201cthe measurements shape the ways that we understand the construct itself\u201d\u2014which is often overlooked when designing a bias metric.<\/p>\n\n\n\n<p>How we define and measure racial and gender categorizations, for example, also shapes how we view and act on these constructs in society; viewing gender as a binary construct may be hurtful to non-binary communities (<a href=\"https:\/\/papers.ssrn.com\/abstract=3189696\" target=\"_blank\" rel=\"noreferrer noopener\">Costanza-Chock, 2018<\/a>). (And for a discussion of different perspectives on&nbsp;<em>race<\/em>, see&nbsp;<a href=\"https:\/\/books.google.nl\/books?hl=nl&amp;lr=&amp;id=BguXDwAAQBAJ&amp;oi=fnd&amp;pg=PP1&amp;dq=What+Is+Race%3F+glasgow&amp;ots=sdLkgr5WNm&amp;sig=le0lOZEXn-WqjaEAyq1E2EHCoqI#v=onepage&amp;q=What%20Is%20Race%3F%20glasgow&amp;f=false\" target=\"_blank\" rel=\"noreferrer noopener\">Glasgow, 2019<\/a>.)<\/p>\n\n\n\n<p>Algorithmic bias is an inherently complex phenomenon due to its sociotechnical and context-sensitive nature, which makes a precise definition difficult\u2014yet&nbsp;a discussion of how it is defined is crucial when researching it (e.g.,&nbsp;<a href=\"https:\/\/www.aclweb.org\/anthology\/2020.acl-main.485\" target=\"_blank\" rel=\"noreferrer noopener\">Blodgett et al., 2020<\/a>,&nbsp;<a href=\"https:\/\/jair.org\/index.php\/jair\/article\/view\/15195\/26998https:\/\/arxiv.org\/abs\/2211.13709\" target=\"_blank\" rel=\"noreferrer noopener\">van der Wal et al. 
2024<\/a>). Researchers cannot resort to a \u2018catch-all\u2019 bias metric for understanding bias, and mitigating the harms might require more than simply removing the biased information (<a href=\"https:\/\/aclanthology.org\/2022.bigscience-1.3\" target=\"_blank\" rel=\"noreferrer noopener\">Talat et al., 2022<\/a>). It is even unclear whether it is possible to completely debias an AI system (for a discussion of&nbsp;<em>debiasing<\/em>, see, for example,&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2101.11974\" target=\"_blank\" rel=\"noreferrer noopener\">Talat et al., 2021<\/a>).<\/p>\n\n\n\n<p>Going even further, maybe the starting point should not be to ask ourselves how we can debias AI models, as phrased in&nbsp;<a href=\"https:\/\/kvab.be\/sites\/default\/rest\/blobs\/2806\/tw_waardevoldigitaliseren.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">this Dutch report on digitalization<\/a>, but rather to focus on the larger questions that we have to answer as a society: How do we want to shape the world with language technology as a part of life? How can we design these AI systems such that they help create a more just society, instead of solidifying existing (or even leading to new forms of) systemic bias? Naturally, such a broad discussion about what constitutes fair behavior in AI systems needs to involve not only AI researchers, but also various other experts from outside the technical domain.<\/p>\n\n\n\n<p>Thanks to&nbsp;<a href=\"https:\/\/staff.fnwi.uva.nl\/w.c.moltmaker\/\" target=\"_blank\" rel=\"noreferrer noopener\">Wout Moltmaker<\/a>&nbsp;for his helpful comments on this blog post.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>While mathematical investigation is a useful way to better understand why language models, like ChatGPT, may be gender biased, it is also useful to take a step back and to consider bias in NLP from a broader perspective. 
<\/p>\n","protected":false},"author":1,"featured_media":1337,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1335","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.9 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Going beyond a mathematical investigation of bias - ILLC Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Going beyond a mathematical investigation of bias - ILLC Blog\" \/>\n<meta property=\"og:description\" content=\"While mathematical investigation is a useful way to better understand why language models, like ChatGPT, may be gender biased, it is also useful to take a step back and to consider bias in NLP from a broader perspective.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\" \/>\n<meta property=\"og:site_name\" content=\"ILLC Blog\" \/>\n<meta property=\"article:published_time\" content=\"2024-07-12T11:16:13+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-11-07T16:47:45+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1183\" \/>\n\t<meta property=\"og:image:height\" content=\"887\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" 
content=\"root\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"root\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\"},\"author\":{\"name\":\"root\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/d61202239120ac4fd11d09679b7f1f81\"},\"headline\":\"Going beyond a mathematical investigation of bias\",\"datePublished\":\"2024-07-12T11:16:13+00:00\",\"dateModified\":\"2024-11-07T16:47:45+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\"},\"wordCount\":1511,\"publisher\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg\",\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\",\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\",\"name\":\"Going beyond a mathematical investigation of bias - ILLC 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg\",\"datePublished\":\"2024-07-12T11:16:13+00:00\",\"dateModified\":\"2024-11-07T16:47:45+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage\",\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg\",\"contentUrl\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg\",\"width\":1183,\"height\":887,\"caption\":\"Scales and paper balls as a symbol of positive and negative thoughts.\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Going beyond a mathematical investigation of bias\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#website\",\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/\",\"name\":\"ILLC 
Blog\",\"description\":\"Institute for Logic, Language and Computation\",\"publisher\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization\",\"name\":\"ILLC Blog\",\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2020\/04\/logo-uva.png\",\"contentUrl\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2020\/04\/logo-uva.png\",\"width\":301,\"height\":30,\"caption\":\"ILLC Blog\"},\"image\":{\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/d61202239120ac4fd11d09679b7f1f81\",\"name\":\"root\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/189caf137ea99ec6c9f9ded1953b4c9acc3e0062dc55746ebe5156089d95d5b8?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/189caf137ea99ec6c9f9ded1953b4c9acc3e0062dc55746ebe5156089d95d5b8?s=96&d=mm&r=g\",\"caption\":\"root\"},\"sameAs\":[\"https:\/\/resources.illc.uva.nl\/illc-blog\"],\"url\":\"https:\/\/resources.illc.uva.nl\/illc-blog\/author\/root\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Going beyond a mathematical investigation of bias - ILLC Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/","og_locale":"en_US","og_type":"article","og_title":"Going beyond a mathematical investigation of bias - ILLC Blog","og_description":"While mathematical investigation is a useful way to better understand why language models, like ChatGPT, may be gender biased, it is also useful to take a step back and to consider bias in NLP from a broader perspective.","og_url":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/","og_site_name":"ILLC Blog","article_published_time":"2024-07-12T11:16:13+00:00","article_modified_time":"2024-11-07T16:47:45+00:00","og_image":[{"width":1183,"height":887,"url":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg","type":"image\/jpeg"}],"author":"root","twitter_card":"summary_large_image","twitter_misc":{"Written by":"root","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#article","isPartOf":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/"},"author":{"name":"root","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/d61202239120ac4fd11d09679b7f1f81"},"headline":"Going beyond a mathematical investigation of bias","datePublished":"2024-07-12T11:16:13+00:00","dateModified":"2024-11-07T16:47:45+00:00","mainEntityOfPage":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/"},"wordCount":1511,"publisher":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization"},"image":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage"},"thumbnailUrl":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg","inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/","url":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/","name":"Going beyond a mathematical investigation of bias - ILLC 
Blog","isPartOf":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage"},"image":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage"},"thumbnailUrl":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg","datePublished":"2024-07-12T11:16:13+00:00","dateModified":"2024-11-07T16:47:45+00:00","breadcrumb":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#primaryimage","url":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg","contentUrl":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2024\/07\/iStock-1677540992.jpg","width":1183,"height":887,"caption":"Scales and paper balls as a symbol of positive and negative thoughts."},{"@type":"BreadcrumbList","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/going-beyond-a-mathematical-investigation-of-bias\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/resources.illc.uva.nl\/illc-blog\/"},{"@type":"ListItem","position":2,"name":"Going beyond a mathematical investigation of bias"}]},{"@type":"WebSite","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#website","url":"https:\/\/resources.illc.uva.nl\/illc-blog\/","name":"ILLC Blog","description":"Institute for Logic, Language and 
Computation","publisher":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/resources.illc.uva.nl\/illc-blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#organization","name":"ILLC Blog","url":"https:\/\/resources.illc.uva.nl\/illc-blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/logo\/image\/","url":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2020\/04\/logo-uva.png","contentUrl":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-content\/uploads\/2020\/04\/logo-uva.png","width":301,"height":30,"caption":"ILLC Blog"},"image":{"@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/d61202239120ac4fd11d09679b7f1f81","name":"root","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/resources.illc.uva.nl\/illc-blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/189caf137ea99ec6c9f9ded1953b4c9acc3e0062dc55746ebe5156089d95d5b8?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/189caf137ea99ec6c9f9ded1953b4c9acc3e0062dc55746ebe5156089d95d5b8?s=96&d=mm&r=g","caption":"root"},"sameAs":["https:\/\/resources.illc.uva.nl\/illc-blog"],"url":"https:\/\/resources.illc.uva.nl\/illc-blog\/author\/root\/"}]}},"_links":{"self":[{"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/posts\/1335","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/types\/post"}],
"author":[{"embeddable":true,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/comments?post=1335"}],"version-history":[{"count":7,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/posts\/1335\/revisions"}],"predecessor-version":[{"id":1344,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/posts\/1335\/revisions\/1344"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/media\/1337"}],"wp:attachment":[{"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/media?parent=1335"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/categories?post=1335"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/resources.illc.uva.nl\/illc-blog\/wp-json\/wp\/v2\/tags?post=1335"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}