<div dir="ltr"><div class="gmail_default" style="font-family:tahoma,sans-serif;font-size:large"><br clear="all"></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>George Duncan</div><div>Emeritus Professor of Statistics, Carnegie Mellon University<br><a href="http://georgeduncanart.com/" target="_blank">georgeduncanart.com</a></div><div>See posts on Facebook, Twitter, and Instagram</div>
<div>Land: (505) 983-6895  <br></div><div>Mobile: (505) 469-4671</div>
<div> <br>My art theme: Dynamic exposition of the tension between matrix order and luminous chaos.<br></div><div><br></div><div><h1 style="letter-spacing:-0.02em;margin:0px"><font size="2" face="arial, helvetica, sans-serif" style="font-weight:normal">"Attempt what is not certain. Certainty may or may not come later. It may then be a valuable delusion."</font></h1><div><span style="font-size:small;letter-spacing:-0.02em;line-height:1.125em"><font face="arial, helvetica, sans-serif">From "Notes to myself on beginning a painting" by Richard Diebenkorn. </font></span></div><table width="85%" style="color:rgb(93,86,81);font-family:Helvetica;font-size:18px;margin:auto;border-collapse:collapse!important"><tbody><tr><td style="text-align:center"><p style="margin-top:4px;margin-bottom:12px"><font size="2">"It's that knife-edge of uncertainty where we come alive to our truest power." Joanna Macy.</font></p></td></tr><tr><td valign="top" style="font-size:13px;text-transform:uppercase"><p style="margin-top:0px;margin-bottom:27px;color:rgb(146,146,146);text-align:center"><br></p></td></tr></tbody></table></div></div></div></div></div></div></div></div></div></div></div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">---------- Forwarded message ---------<br>From: <strong class="gmail_sendername" dir="auto">John Friday</strong> <span dir="auto"><<a href="mailto:jfriday@cs.cmu.edu">jfriday@cs.cmu.edu</a>></span><br>Date: Mon, Sep 21, 2020 at 12:44 PM<br>Subject: LTI Colloquium, September 25th<br>To:  <<a href="mailto:lti-seminar@cs.cmu.edu">lti-seminar@cs.cmu.edu</a>><br></div><br><br><div dir="ltr"><font face="tahoma, sans-serif">Hi Everyone,</font><div><font face="tahoma, sans-serif"><br></font></div><div><font face="tahoma, sans-serif">This week we have a double feature at the LTI Colloquium. Both Shruti Rijhwani and Zirui Wang will be presenting talks on Friday, September 25th from 1:30 to 2:50 PM EST. 
The talks will be presented on <a href="https://cmu.zoom.us/j/96867532227?pwd=QWtBbTB2ZDFUMit3b1dHK1BnZEhnZz09" target="_blank">Zoom</a>, passcode 883155.</font></div><div><font face="tahoma, sans-serif"><br></font></div><div><font face="tahoma, sans-serif">Here's the information on this week's speakers and their topics.</font></div><div><span style="line-height:107%"><font face="tahoma, sans-serif"><br></font></span></div><div><font face="tahoma, sans-serif"><span style="line-height:107%"><b>Shruti Rijhwani</b> is a PhD student at the Language Technologies Institute
at Carnegie Mellon University. Her primary research interest is multilingual
natural language processing, with a focus on low-resource and endangered
languages. Her research is supported by a Bloomberg Data Science Ph.D.
Fellowship. Much of her published work focuses on improving named entity
recognition and entity linking for low-resource languages and domains.</span> </font></div><div><font face="tahoma, sans-serif"><br></font></div><div><font face="tahoma, sans-serif">Title: Zero-shot Neural Transfer for Cross-lingual Entity Linking</font></div><div><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif">Abstract:  Cross-lingual entity linking maps a named entity in a
source language to its corresponding entry in a structured knowledge base that
is in a different (target) language. While previous work relies heavily on
bilingual lexical resources to bridge the gap between the source and the target
languages, these resources are scarce or unavailable for many low-resource
languages. To address this problem, we investigate zero-shot cross-lingual
entity linking, in which we assume no bilingual lexical resources are available
in the source low-resource language. Specifically, we propose pivot-based
entity linking, which leverages information from a high-resource
"pivot" language to train character-level neural entity linking
models that are transferred to the source low-resource language in a zero-shot
manner. With experiments on nine low-resource languages and transfer through a
total of 54 languages, we show that our proposed pivot-based framework improves
entity linking accuracy by 17% (absolute) on average over the baseline systems in
the zero-shot scenario. Further, we investigate the use of
language-universal phonological representations, which improve average accuracy
by 36% (absolute) when transferring between languages that use different
scripts.</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><span style="line-height:107%;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></span></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><span style="line-height:107%;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><b>Zirui Wang</b> is currently a PhD student at the
Language Technologies Institute (LTI). He works on transfer learning, meta
learning, and multilingual models. He is advised by Jaime Carbonell, Yulia
Tsvetkov, and Emma Strubell.</span>    <br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif">Title: Cross-lingual Alignment vs Joint Training: A
Comparative Study and A Simple Unified Framework</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><span style="line-height:107%;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial">Abstract:  Learning multilingual representations of text
has proven to be a successful method for many cross-lingual transfer learning tasks.
There are two main paradigms for learning such representations: (1) alignment,
which maps different independently trained monolingual representations into a
shared space, and (2) joint training, which directly learns unified
multilingual representations using monolingual and cross-lingual objectives
jointly. In this work, we first conduct direct comparisons of representations
learned using both of these methods across diverse cross-lingual tasks. Our
empirical results reveal a set of pros and cons for both methods, and show that
the relative performance of alignment versus joint training is task-dependent.
Stemming from this analysis, we propose a simple and novel framework that
combines these two previously mutually exclusive approaches. We show that our
proposed framework alleviates limitations of both approaches and can generalize
to contextualized representations such as Multilingual BERT.</span>    </font><span style="font-size:11pt;font-family:Arial,sans-serif"><br></span></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif">Please reach out to me if you have any questions.</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif"><br></font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif">Best wishes,</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial"><font face="tahoma, sans-serif">John Friday</font></p><p class="MsoNormal" style="margin:0in 0in 0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-family:Arial,sans-serif;font-size:11pt"><br></span></p><p class="MsoNormal" style="margin:0in 0in 
0.0001pt;line-height:normal;background-image:initial;background-position:initial;background-size:initial;background-repeat:initial;background-origin:initial;background-clip:initial;font-size:11pt;font-family:Calibri,sans-serif"><span style="font-family:Arial,sans-serif;font-size:11pt"><br></span></p></div></div>
</div></div>