Logic List Mailing Archive

1st ACL Workshop on Gender Bias for NLP

1-2 Aug 2019
Florence, Italy

1st ACL Workshop on Gender Bias for Natural Language Processing

http://genderbiasnlp.talp.cat

1-2 August 2019, Florence, Italy

Gender and other demographic biases in machine-learned models are of
increasing interest to the scientific community and industry. Models of
natural language are highly affected by such biases, which, when present in
widely used products, can lead to poor user experiences. There is a growing
body of research into fair representations of gender in NLP models. Key
example approaches are to build and use fairer training and evaluation
datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018; Madaan et al.,
2018) and to change the learning algorithms themselves (e.g. Bolukbasi et
al., 2016; Chiappa et al., 2018). While these approaches show promising
results, more work is needed to address both identified and future bias
issues. In order to make progress as a field, we need standard tasks which
quantify bias.

This workshop will be the first dedicated to the issue of gender bias in
NLP techniques, and it includes a shared task on coreference resolution. In
order to make progress as a field, this workshop will especially focus on
discussing and proposing standard tasks which quantify bias.

Shared Task

We invite work on gender-fair modeling via our shared task, coreference
resolution on GAP (Webster et al., 2018). GAP is a coreference dataset
designed to highlight current challenges for the resolution of ambiguous
pronouns in context. GAP is gender-balanced and its evaluation is
gender-disaggregated. Previous work has shown that state-of-the-art
resolvers are biased, yielding better performance on masculine pronouns due
to differences in the public discourse between genders. Participation will
be via Kaggle, with submissions open over a three-month period in the
lead-up to the workshop.
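
For illustration only, below is a minimal sketch (in Python) of
gender-disaggregated scoring in the spirit of the GAP evaluation: it
computes F1 separately for feminine and masculine examples and reports
their ratio as a bias measure, as in Webster et al. (2018). The data format
and field names ("gender", "gold", "pred") are assumptions made for this
sketch; it is not the official GAP or Kaggle scorer.

# Minimal sketch of gender-disaggregated evaluation in the spirit of GAP.
# Assumption: each example is a dict with a "gender" field ("feminine" or
# "masculine") and boolean gold/predicted labels for a candidate antecedent.

def f1(examples):
    tp = sum(1 for e in examples if e["gold"] and e["pred"])
    fp = sum(1 for e in examples if not e["gold"] and e["pred"])
    fn = sum(1 for e in examples if e["gold"] and not e["pred"])
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def disaggregated_report(examples):
    feminine = [e for e in examples if e["gender"] == "feminine"]
    masculine = [e for e in examples if e["gender"] == "masculine"]
    f_f, f_m = f1(feminine), f1(masculine)
    # Bias measure: ratio of feminine to masculine F1 (1.0 = no gap).
    return {"F1_feminine": f_f,
            "F1_masculine": f_m,
            "bias_ratio_F_over_M": f_f / f_m if f_m else float("nan")}

# Toy usage:
# examples = [{"gender": "feminine", "gold": True, "pred": True},
#             {"gender": "masculine", "gold": True, "pred": False}]
# print(disaggregated_report(examples))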

Topics of interest

We invite submissions of technical work exploring the detection,
measurement, and mitigation of gender bias in NLP models and applications.
Other important topics include the creation of datasets exploring
demographics, metrics to identify and assess relevant biases, and
approaches focusing on fairness in NLP systems. Finally, the workshop is
also open to non-technical work, welcoming sociological perspectives.

Paper Submission Information

Submissions will be accepted as short papers (4-6 pages) and as long papers
(8-10 pages), plus additional pages for references, following the ACL 2019
guidelines. Supplementary material can be added. Blind submission is
required.

Shared task participants will be invited to submit short papers (4-6 pages,
plus references). Shared task submissions do not need to be anonymized.

Important dates

Shared Task

Jan 21. Baseline system released

April 15-21. Test phase

April 26. Results announced

May 3. Submission of system description papers

May 17. Description paper reviews completed

May 30. Camera-ready description papers due

Technical Papers

April 26. Submission deadline

May 15. Notification of acceptance

May 22. Camera-ready submission


Keynote Speaker

Pascale Fung, Hong Kong University of Science and Technology

Programme Committee

Cristina España-Bonet, DFKI, Germany

Silvia Chiappa, DeepMind, UK

Rachel Rudinger, Johns Hopkins University, US

Saif Mohammad, National Research Council Canada

Svetlana Kiritchenko, National Research Council Canada

Corina Koolen, University of Amsterdam

Kai-Wei Chang, University of California, Los Angeles, US

Kaiji Lu, Carnegie Mellon University, US

Sameep Mehta, IBM Research India

Sharid Loáiciga, University of Gothenburg

Zhengxian Gong, Soochow University

Marta Recasens, Google, US

Jason Baldridge, Google AI Language, US

Bonnie Webber, University of Edinburgh

Ben Hachey, The University of Sydney, Australia

Organizers

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Christian Hardmeier, Uppsala University

Kellie Webster, Google AI Language, New York

Will Radford, Canva, Sydney

Contact persons

General Workshop: Marta R. Costa-jussà: marta (dot) ruiz (at) upc (dot) edu

Shared Task: Kellie Webster: websterk (at) google (dot) com