5-9 Aug 2013
Duesseldorf, Germany
Call for Papers
Workshop on Bayesian Natural Language Semantics and Pragmatics
organised as part of the European Summer School on Logic, Language and
Information (ESSLLI 2013, http://esslli2013.de/), August 5th-9th, Heinrich Heine
University, Düsseldorf, Germany
http://www.bnlsp.ws
Aims and Scope
Bayesian interpretation is a standard technique in signal interpretation: the
most probable message M conveyed by a signal S is found using two models,
namely the prior probability of the message M and the production probability of
the signal S, that is, the probability of the signal given the message. Since,
by Bayes' theorem, argmax_M p(M|S) = argmax_M p(M) p(S|M), the two models
suffice for detecting the most probable message given the signal.
Bayesian NL interpretation works in just the same way: the signal is an
utterance (of a word, sentence, turn or text), and the messages, that is, the
interpretation hypotheses, range over the possible intentions of the speaker,
which, according to Grice, the hearer must recognise for communication to
succeed. Bayesian methods include Bayesian nets, Bayesian belief revision and
information states represented as probability distributions, among other
methods.
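For concreteness, the decoding step can be sketched as follows; this is a toy
Python fragment with invented intentions and probabilities, purely for
illustration:

    # Toy Bayesian decoder: the most probable message M given a signal S is
    # found from a prior p(M) and a production model p(S|M), since
    # argmax_M p(M|S) = argmax_M p(M) p(S|M). All numbers are invented.

    prior = {                   # p(M): prior probability of each speaker intention
        "request_help": 0.2,
        "state_fact":   0.7,
        "make_joke":    0.1,
    }
    production = {              # p(S|M): probability of the observed utterance
        "request_help": 0.6,    # given each intention
        "state_fact":   0.1,
        "make_joke":    0.3,
    }

    def interpret(prior, production):
        """Return the message maximising p(M) * p(S|M)."""
        return max(prior, key=lambda m: prior[m] * production[m])

    print(interpret(prior, production))   # -> request_help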
The workshop aims to collect emerging work on Bayesian interpretation as well
as work using Bayesian methods in natural language (NL) interpretation, and to
bring the various approaches together so as to contribute to a more integrated
research programme in this new area.
We are looking for analyses in semantics and pragmatics that use the
possibilities of Bayesian interpretation, and for papers exploring the
consequences of Bayesian NL interpretation. An example of the first kind is the
analysis of counterfactuals pioneered by Judea Pearl and elaborated in a more
linguistic setting by Stefan Kaufmann and Katrin Schulz, an approach to
causality that Lassiter and Zeevat show also to apply to presupposition
projection. The second kind is exemplified by Jayez and Winterstein's analysis
of argumentation and by interpretation as abduction (Hobbs et al.).
Bayesian interpretation involves three departures from standard assumptions.
First, it can be seen as a defence of linguistic semantics as a production
system that maps meanings into forms, as was assumed in generative semantics
but also in systemic grammar, functional grammar and optimality-theoretic
syntax. This brings with it a more relaxed view of the relation between
syntactic and semantic structure: the mapping from meanings to forms should be
efficient (linear) and the prior strong enough to recover the inversion from
the cues in the utterance.
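The inversion can be pictured with a toy fragment (invented lexicon and prior
values) in which several meanings produce the same form and the prior decides
between them:

    # Toy inversion of a production mapping: several meanings map to the same
    # form, and the prior ranks the candidates. Lexicon and numbers invented.

    generate = {                       # meaning -> surface form (production direction)
        "BANK_financial": "bank",
        "BANK_river":     "bank",
        "MOUND":          "mound",
    }
    prior = {"BANK_financial": 0.6, "BANK_river": 0.3, "MOUND": 0.1}

    def invert(form):
        """Meanings that produce the observed form, ranked by prior probability."""
        candidates = [m for m, f in generate.items() if f == form]
        return sorted(candidates, key=lambda m: prior[m], reverse=True)

    print(invert("bank"))   # -> ['BANK_financial', 'BANK_river']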
The second departure is that the prior is also the source for what is not said
in the utterance but is part of its pragmatic enrichment: what is part of the
speaker's intention but not of the literal meaning. There is no principled
difference between inferring, in perception, that a man running towards the bus
stop as the bus approaches is trying to catch it, and inferring, in
conversation, that a man who states that he is out of petrol is asking for help
with his problem.
The third departure is that interpretation is viewed as a stochastic and
holistic process, leading from stochastic data to a symbolic representation, or
to a probability distribution over such representations, which can be equated
with the conversational contribution of the utterance.
Models relevant to the prior, that is, to the probability of the message M,
include Bayesian networks for causality, association between concepts, and
(common ground) expectations. It is tempting to see a division in logic:
classical logic for expressing the message, the logic of uncertainty for
finding out what those messages are. Radical Bayesian interpretation can be
described as the view that not only the identification of the message requires
Bayesian methods: the message itself and the contextual update also have to be
interpreted with reference to Bayesian belief revision, Bayesian networks or
conceptual association. It amounts to the hypothesis of a Bayesian mind/brain
(cf. Oaksford and Chater).
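How a causal Bayesian network can supply such prior expectations may be
sketched with the familiar rain/sprinkler/wet-grass example (all probabilities
below are invented):

    # Toy causal Bayesian network (rain, sprinkler -> wet grass) used as a
    # prior model: conditioning on the evidence "grass is wet" yields the
    # updated expectation that it rained.

    from itertools import product

    P_RAIN, P_SPRINKLER = 0.2, 0.3

    def p_wet(rain, sprinkler):        # P(wet | rain, sprinkler)
        if rain and sprinkler:
            return 0.99
        if rain:
            return 0.9
        if sprinkler:
            return 0.8
        return 0.05

    def p_rain_given_wet():
        """P(rain | wet), computed by enumerating the joint distribution."""
        joint = {}
        for rain, sprinkler in product([True, False], repeat=2):
            p = ((P_RAIN if rain else 1 - P_RAIN)
                 * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER)
                 * p_wet(rain, sprinkler))
            joint[(rain, sprinkler)] = p
        wet = sum(joint.values())
        return sum(p for (rain, _), p in joint.items() if rain) / wet

    print(round(p_rain_given_wet(), 2))   # ~0.46 with these numbers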
Bayesian NL interpretation can be defined as stochastic interpretation based on
a model of NL production and on prior probabilities about what the speaker will
do in the context and about the other events in the context. It is analogous to
speech perception, where the model of speech production is a Hidden Markov
Model and the prior probabilities are given by a language model, and to
computer vision, where the production model is the mental camera that maps
hypotheses about what is seen to visual signals and the required prior is given
in much the same way as in NL. (NL interpretation and vision share crucial
features, such as being fast, subconscious and able to eliminate vast amounts
of ambiguity, that are not covered by standard pipeline models of NLI.)
Bayesian interpretation merely brings NL interpretation in line with these
popular views of how these other kinds of perception might work. It is,
however, clear that a range of classical techniques provide important
constraints on the prior probability (e.g. classical logical entailment and
conversational and other planning) and on the production model (e.g. rule-based
grammar).
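Schematically, such classical resources can be treated as hard constraints that
zero out parts of the prior or of the production model; the following fragment,
with hypothetical placeholder predicates, only illustrates the shape of the
idea:

    # Classical resources as hard constraints in a Bayesian model: a rule-based
    # grammar zeroes out the production probability of forms it cannot generate,
    # and logical consistency zeroes out the prior of impossible messages.

    def constrained_likelihood(signal, message, grammar_generates, soft_likelihood):
        """p(S|M): zero unless the grammar can derive the signal from the message."""
        return soft_likelihood(signal, message) if grammar_generates(message, signal) else 0.0

    def constrained_prior(message, is_consistent, soft_prior):
        """p(M): zero for messages inconsistent with the context."""
        return soft_prior(message) if is_consistent(message) else 0.0

    # Example use with trivial stand-ins:
    print(constrained_likelihood("he left", "LEAVE(he)",
                                 grammar_generates=lambda m, s: True,
                                 soft_likelihood=lambda s, m: 0.4))   # -> 0.4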
The topic of the workshop can also be approached from cognitive science, as the
application to language of the increasingly popular hypothesis that the mind is
a Bayesian inference machine (Oaksford and Chater). It follows from that view
that natural language interpretation, the information states that it relies on
and constructs, and the semantics of expressions of inference like conditionals
and modals must be Bayesian.
Topics of interest include (but are not limited to) the following:
Bayesian intention recognition
evaluating syntax by simulated production
motor theories of recognition and interpretation
natural language interpretation and vision
the Bayesian mind/brain
information states as probability distributions
causal reasoning
interpretation by (weighted) abduction
Bayesian models of relevance
Bayesian accounts of semantic and pragmatic phenomena such as metonymy,
pronoun resolution, discourse structure detection, temporal interpretation,
noun-noun compounds, particle meaning, vagueness and others
References
Dehghani, M., Iliev, R., and Kaufmann, S. 2012. Causal explanation and fact
mutability in counterfactual reasoning. Mind & Language 27(1):55-85.
Hobbs, J., Stickel, M., Appelt, D., and Martin, P. 1990. Interpretation
as abduction. Technical Report 499, SRI International, Menlo Park, California.
Jayez, J., and Winterstein, G. 2012. Additivity and probability. Lingua.
http://dx.doi.org/10.1016/j.lingua.2012.11.004
Lassiter, D. 2012. Presuppositions, provisos, and probability. Semantics
and Pragmatics 5(2):1-37.
Oaksford, M., Chater, N. 2010. Cognition and Conditionals: Probability and
Logic in Human Thinking. OUP.
Pearl, J. 2009. Causality. 2nd edition. CUP.
Schulz, K. 2011. "If you'd wiggled A, then B would've changed": Causality
and counterfactual conditionals. Synthese 179(2).
Zeevat, H. 2013. Accommodation in Communication. Ms.
Further introductory texts on Bayesian models of cognition:
Tenenbaum, J., Kemp, C., Griffiths, T., Goodman, N. 2011. How to grow a
mind: Structure, statistics, and abstraction. Science.
http://www.stanford.edu/~ngoodman/papers/tkgg-science11-reprint.pdf
Griffiths, T., Kemp, C., Tenenbaum, J. 2008. Bayesian models of cognition.
In The Cambridge Handbook of Computational Cognitive Modeling.
http://cocosci.berkeley.edu/tom/papers/bayeschapter.pdf
Trends in Cognitive Sciences, special issue 2006 on probabilistic models of
cognition, including an article on probabilistic models of language acquisition
and processing by Nick Chater and Christopher Manning:
http://www.cell.com/trends/cognitive-sciences/issue?pii=S1364-6613(06)X0119-5.
Submission Details
Authors are invited to submit an anonymous extended abstract. Submissions
should not exceed 2 pages, including references, and should be in PDF
format. Please submit your abstract via the EasyChair system:
https://www.easychair.org/conferences/?conf=bnlsp13. For questions regarding
the submission procedure, contact Hans-Christian Schmitz (see below).
Submissions will be reviewed by the workshop's programme committee.
Contributors will be invited for a discussion session on the Future of Bayesian
NL Interpretation scheduled for the Saturday after the workshop.
Workshop Format
The workshop is part of ESSLLI and is open to all ESSLLI participants. It will
consist of five 90-minute sessions held over five consecutive days in the first
week of ESSLLI. There will be 2-3 slots for paper presentation and discussion
per session. On the first day the workshop organisers will give an introduction
to the topic. Proceedings: workshop proceedings will be published; a separate
call for full papers will follow.
Invited Speakers
Jacques Jayez, ENS Lyon & CNRS L2C2
Stefan Kaufmann, University of Connecticut & Northwestern University
Daniel Lassiter, Stanford University
Important Dates
Submission Deadline: April 15, 2013
Notification: April 30, 2013
Preliminary programme: May 7, 2013
Workshop dates: August 5-9, 2013
Programme Committee
Anton Benz, ZAS Berlin
Graeme Forbes, University of Colorado, Boulder
Fritz Hamm, Universität Tübingen
Jerry Hobbs, University of Southern California
Noah Goodman, Stanford University
Jacques Jayez, ENS Lyon, CNRS L2C2
Stefan Kaufmann, Northwestern University & University of Connecticut
Uwe Kirschenmann, Fraunhofer FIT
Ewan Klein, University of Edinburgh
Daniel Lassiter, Stanford University
Jacob Rosenthal, Universität Bonn
Remko Scha, ILLC Amsterdam
David Schlangen, Universität Bielefeld
Hans-Christian Schmitz, IDS Mannheim
Markus Schrenk, Universität Köln & Universität Düsseldorf
Bernhard Schröder, Universität Duisburg-Essen
Grégoire Winterstein, CNRS LLF
Henk Zeevat, ILLC Amsterdam
Thomas Ede Zimmermann, Universität Frankfurt am Main
Organisers
Hans-Christian Schmitz, IDS Mannheim, schmitz@ids-mannheim.de
Henk Zeevat, ILLC Amsterdam, H.W.Zeevat@uva.nl
The workshop receives funding from the German Society for Computational
Linguistics & Language Technology (GSCL).
--
Dr. Hans-Christian Schmitz
Institut für deutsche Sprache (IDS)
R 5, 6-13
68161 Mannheim, Germany
+49 (0)621 1581 217