The key idea of SSR is to train a Seq2Seq model to rewrite machine-generated text spans, which may contain various kinds of noise such as paraphrases, grammatical errors, and factual errors, into the ground truth that is correct and appropriate in context. As illustrated in Figure 1(b), SSR involves three steps: (1) masking out parts of the text; (2) generating imperfect text to fill in the masked spans; (3) training the Seq2Seq model to rewrite the imperfect spans into the ground truth. We introduce the technical details of SSR in Section 3.1 and an advanced training strategy for SSR in Section 3.2.

Text Span Masking. To generate training data for sequence span rewriting in a self-supervised fashion, we first randomly sample a number of text spans and mask them. Specifically, the spans are masked with special mask tokens in order (e.g., <s1>, <s2> and <s3> in Figure 1(b)), as in T5, with span lengths drawn from a Poisson distribution (λ = 3). The number of spans is controlled so that approximately 30% of all tokens are masked. In particular, a 0-length span corresponds to the insertion of a mask token. For example, as shown in Figure 1, given the sentence "In 2002, Elon Musk founded SpaceX, an aerospace manufacturer company.", we randomly sample three text spans (two of them of length 1). The masked sentence becomes "In <s1>, Elon Musk <s2> SpaceX, <s3> company."

Imperfect Span Generation. Given the masked spans, we generate imperfect text to fill them in. Specifically, we feed the masked input into the imperfect span generator, which generates predictions in an auto-regressive fashion. To improve the diversity of generation, we use nucleus sampling (Holtzman et al., 2020), which truncates the unreliable tail of the probability distribution and samples from the dynamic nucleus of tokens containing the vast majority of the probability mass. For instance, given the masked sentence above, a T5-large model generates "2001", "joined", and "a manufacturer" as imperfect spans.

Span Rewriting. After obtaining imperfect spans within the text, we pre-train the Seq2Seq model to rewrite the imperfect text spans into the ground truth. Specifically, we use special tokens <si> and </si> to denote the start and end of the i-th text span to be rewritten in the source sequence, which gives "In <s1> 2001 </s1>, Elon Musk <s2> joined </s2> SpaceX, <s3> a manufacturer </s3> company." as the input for SSR pre-training. Similarly, we use <si> to separate different text spans in the target sequence, which gives "<s1> 2002 <s2> founded <s3> an aerospace manufacturer" as the target sequence. We train the model to generate the target text spans from left to right auto-regressively by maximum likelihood estimation.

The SSR objective involves using a pre-trained model to generate imperfect spans, which leads to increased computational cost. In practice, we suggest starting SSR pre-training from checkpoints of existing Seq2Seq pre-trained models. In this way, we only need to generate a small number of imperfect spans and continually pre-train the models for a few steps. From this perspective, SSR can be viewed as a general approach that can be used to improve various Seq2Seq pre-trained models before fine-tuning them on downstream text generation tasks.

When fine-tuning a model pre-trained with SSR, we simply mark the entire input sequence with the same span identifiers (e.g., <s1>) used during SSR pre-training.
Therefore, the model learns to rewrite the entire input sequence, alleviating the gap introduced by the <mask> tokens used during text-infilling pre-training. For example, for grammatical error correction, the input is formatted as "<s1> I go to school yesterday. </s1>" and the output is "<s1> I went to school yesterday.", which exactly corresponds to the pre-training format of SSR. In addition, for some constrained text generation (Lin et al., 2020) and controlled text generation (Hu et al., 2017) tasks, we can specify which parts of the input text should be rewritten using span identifiers. This enables more flexible text generation with Seq2Seq pre-trained models. Taking text attribute transfer as an example, an input example would look like "Great food <s1> but very rude </s1> waiters." and the corresponding target sequence is "<s1> and very friendly". The inductive bias of span rewriting learned through SSR pre-training naturally benefits these kinds of NLG applications.

As mentioned above, we apply SSR as a continual training objective for pre-trained Seq2Seq models that were originally trained with the text infilling objective. However, continually training a pre-trained Seq2Seq model with a different objective may result in drastic adaptation of its parameters. To make this transition smoother and reduce the difficulty of optimization, we propose to schedule the SSR training examples with curriculum learning (Bengio et al., 2009) according to their difficulty. Specifically, we measure the difficulty of rewriting a given imperfect text span by both the length of the imperfect span and the uncertainty (i.e., perplexity) of the imperfect span generator when generating the span. Intuitively, a short imperfect span generally involves a simple word substitution (e.g., big → large) or grammatical error (e.g., is → was), while a longer imperfect span may require more complicated paraphrasing (e.g., what is happening → what's up). Also, an imperfect span with higher perplexity suggests that the span may be of lower quality or more uncommon, and thus more difficult to rewrite into the ground truth. Therefore, we consider longer imperfect spans, and spans with higher perplexity under the imperfect span generator, to be more difficult. We split the SSR training examples into k groups (k = 5 in our experiments) according to the sum of the per-token loss of the imperfect span generator when it generates an SSR training example. We then start pre-training the model with the easiest group of SSR training examples and gradually switch to more difficult groups during pre-training. Intuitively, this makes the transition from the original text infilling objective to the sequence span rewriting objective smoother.
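To make the data-construction procedure concrete, the following is a minimal sketch (not the authors' released code) of how SSR source/target pairs and curriculum buckets could be built. The sentinel spelling, the stubbed `generate_imperfect_span` callback (which in practice would wrap a nucleus-sampling T5 model), and the bucketing helper are illustrative assumptions.

```python
import numpy as np

def mask_spans(tokens, mask_ratio=0.3, lam=3, seed=0, max_tries=1000):
    """Sample non-overlapping spans (Poisson lengths, lambda=3) covering ~30% of tokens."""
    rng = np.random.default_rng(seed)
    n = len(tokens)
    budget = max(1, int(mask_ratio * n))
    spans, covered, used = [], set(), 0
    for _ in range(max_tries):
        if used >= budget:
            break
        length = min(int(rng.poisson(lam)), budget - used, n)  # 0-length = pure insertion point
        start = int(rng.integers(0, n - length + 1))
        if any(i in covered for i in range(start, start + max(length, 1))):
            continue
        covered.update(range(start, start + length))
        spans.append((start, start + length))
        used += max(length, 1)
    return sorted(spans)

def build_ssr_example(tokens, spans, generate_imperfect_span):
    """Format the SSR source/target pair with <si> ... </si> sentinel tokens."""
    source, target, prev = [], [], 0
    for i, (s, e) in enumerate(spans, start=1):
        source += tokens[prev:s]
        imperfect = generate_imperfect_span(tokens, s, e)   # nucleus-sampled infill in practice
        source += [f"<s{i}>"] + imperfect + [f"</s{i}>"]
        target += [f"<s{i}>"] + tokens[s:e]                 # ground-truth span
        prev = e
    source += tokens[prev:]
    return " ".join(source), " ".join(target)

def curriculum_groups(generator_loss_sums, k=5):
    """Split SSR examples into k difficulty groups by the generator's summed per-token loss."""
    order = np.argsort(generator_loss_sums)
    groups = np.empty(len(generator_loss_sums), dtype=int)
    for rank, idx in enumerate(order):
        groups[idx] = rank * k // len(generator_loss_sums)
    return groups  # group 0 (easiest) is used first during continual pre-training

# toy usage with a dummy generator that simply lowercases the original span
toks = "In 2002 , Elon Musk founded SpaceX , an aerospace manufacturer company .".split()
spans = mask_spans(toks)
src, tgt = build_ssr_example(toks, spans, lambda t, s, e: [w.lower() for w in t[s:e]])
```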
Deep neural networks, with or without word embeddings, have recently shown significant improvements over traditional machine learning-based approaches when applied to various sentence- and relation-level classification tasks. Kim (2014) has shown that CNNs outperform traditional machine learning-based approaches on several tasks, such as sentiment classification and question type classification, using simple static word embeddings and tuning of hyper-parameters. Zhou et al. (2016) proposed attention-based bi-directional LSTM networks for the relation classification task. More recently, Shwartz et al. (2016) proposed an LSTM-based integrated approach that combines path-based and distributional methods for hypernymy detection and showed significant accuracy improvements.

Following Kim (2014), we present a variant of the CNN architecture with four layer types: an input layer, a convolution layer, a max-pooling layer, and a fully connected softmax layer for term-pair relation classification, as shown in Figure 1. Each term pair (sentence) in the input layer is represented as a sequence of distributional word embeddings. Let $v_i \in \mathbb{R}^k$ be the $k$-dimensional word vector corresponding to the $i$-th word in the term pair. A term pair $S$ of length $n$ is then represented as the concatenation of its word vectors:

EQUATION

In the convolution layer, a convolutional word filter $P$ is defined for a given word sequence within a term pair. The filter $P$ is then applied to each word in the sentence to produce a new set of features. We use max-over-time pooling (Collobert et al., 2011; Kim, 2014) in the pooling layer to deal with variable sentence sizes. After a series of convolutions with different filters of different heights, the most important features are generated. This feature representation, $Z$, is then passed to a fully connected penultimate layer, which outputs a distribution over the relation labels:

EQUATION

where $y$ denotes a distribution over the relation labels, $W$ is the weight vector learned from the input word embeddings of the training corpus, and $b$ is the bias term.

We model relation classification as a sentence classification task. We use the CogALex-V 2016 shared task dataset in our experiments, which is described in the next section. This dataset, consisting of term pairs, is tokenized using a whitespace tokenizer. We performed both binary and multi-class classification on the given dataset, which contains two classes for binary classification (subtask-1) and five classes for multi-class classification (subtask-2). We used Kim's (2014) Theano implementation of CNN for training the CNN model. We use word embeddings from word2vec, which are learned with the skip-gram model of Mikolov et al. (2013a,b) by predicting the linear context words surrounding the target words. These word vectors were trained on about 100 billion words from the Google News corpus. As word embeddings alone have shown good performance in various classification tasks, we also use them in isolation, with varying dimensions, in our experiments. We performed 10-fold cross-validation (CV) on the entire training set for both subtasks, in both the random and the word2vec embedding settings. We initialized random embeddings in the range [−0.25, 0.25]. We did not use any external corpus for training our model, but used the precompiled word2vec embeddings trained on the Google News corpus.
We used a stochastic gradient descent-based optimization method to minimize the cross-entropy loss during training, with the Rectified Linear Unit (ReLU) non-linear activation function.

Tuning Hyper-Parameters. The hyper-parameters we varied are the dropout rate, batch size, embedding dimension and hidden node size; we tuned them in the cross-validation setting to find the optimal model on the training set. We performed grid search over the following value ranges: dropout {0.1, 0.2, 0.3, 0.4, 0.5, 0.6}, batch size {12, 24, 32, 48, 60}, embedding dimension {50, 100, 150, 200, 250, 300} and hidden node size {100, 200, 300, 400, 500}. Optimal results were obtained with dropout 0.5, batch size 32, embedding size 300 and hidden node size 300 for subtask-1, and dropout 0.5, batch size 24, embedding size 300 and hidden node size 400 for subtask-2, in the cross-validation setting, as shown in Tables 3 and 5. We used fixed context-window sizes of [1, 2], as the maximum length of a term pair in the given corpus is 2 for both tasks. We also used a fixed number of 25 iterations with the default learning rate (0.95) for training our models.
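As a concrete illustration, here is a hedged PyTorch sketch of the CNN variant described above; it is not the authors' Theano code. The window sizes [1, 2], 300-dimensional embeddings, 300 feature maps and dropout 0.5 follow the reported subtask-1 setting, while the optimizer line assumes that the reported "(0.95)" corresponds to Adadelta's decay as in Kim's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TermPairCNN(nn.Module):
    """Kim (2014)-style CNN for term-pair relation classification (illustrative sketch)."""
    def __init__(self, vocab_size, num_classes, emb_dim=300,
                 window_sizes=(1, 2), num_filters=300, dropout=0.5, pretrained=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:                    # e.g. word2vec Google News vectors
            self.embedding.weight.data.copy_(pretrained)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, num_filters, kernel_size=w) for w in window_sizes)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(window_sizes), num_classes)

    def forward(self, token_ids):                     # (batch, pair_len), pair_len >= max window
        x = self.embedding(token_ids).transpose(1, 2)                         # (batch, emb, len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]   # max-over-time
        z = self.dropout(torch.cat(pooled, dim=1))                            # representation Z
        return self.fc(z)                                                     # relation logits

model = TermPairCNN(vocab_size=10000, num_classes=5)            # 5 relations for subtask-2
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.95)  # assumed: "(0.95)" = Adadelta decay
loss_fn = nn.CrossEntropyLoss()                                 # cross-entropy, as in the paper
pairs = torch.randint(0, 10000, (32, 2))                        # batch of term pairs (length 2)
loss = loss_fn(model(pairs), torch.randint(0, 5, (32,)))
loss.backward(); optimizer.step()
```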
We describe a text simplification system that uses a synchronous grammar defined over typed dependencies. We demonstrate that this has specific advantages over previous work on text simplification: (1) it allows for better linguistic modelling of simplification operations that require morphological changes, (2) the higher level of abstraction makes it easy to write and read grammar rules, so common syntactic operations (such as conversion of passive to active voice) can be handled in this framework through accurate hand-written rules, and (3) it is easier and more elegant to automatically acquire a synchronous grammar from data, compared to synchronous grammars based on constituency parses. In this section we describe our framework and text simplification system in more detail; then, in Section 4, we report an evaluation that compares our system against a human simplification and the Woodsend and Lapata (2011) system.

Ding and Palmer (2005) introduce the notion of a Synchronous Dependency Insertion Grammar (SDIG) as a tree substitution grammar defined on dependency trees. They define elementary trees (ETs) to be sub-sentential dependency structures containing one or more lexical items. The SDIG formalism assumes that the isomorphism of the two syntactic structures is at the ET level, thus allowing for non-isomorphic tree-to-tree mapping at the sentence level. We base our approach to text simplification on SDIGs, but the formalism is adapted for the monolingual task, and the rules are written in a formalism that is suited to writing rules by hand as well as to automatically acquiring rules from aligned sentences. Our system follows the architecture proposed in Ding and Palmer (2005), reproduced in Fig. 1. In this paper, we present the ET Transfer component as a set of transformation rules. The rest of Section 3 focuses on the linguistic knowledge we need to encode in these rules, the method for automatic acquisition of rules from a corpus of aligned sentences, and the generation process.

To acquire a synchronous grammar from dependency parses of aligned English and simple English sentences, we just need to identify the differences. For example, consider two aligned sentences from the aligned corpus described in Woodsend and Lapata (2011):

1. (a) Also, lichen fungi can reproduce sexually, producing spores.
   (b) Also, lichen fungi can reproduce sexually by producing spores.

An automatic comparison of the dependency parses for the two sentences (using the Stanford Parser, and ignoring punctuation for ease of presentation) reveals that there are two typed dependencies that occur only in the parse of the first sentence, and two that occur only in the parse of the second sentence (in italics). Thus, to convert the first sentence into the second, we need to delete two dependencies and introduce two others. The resulting rule contains variables (?Xn), which can be forced to match certain words in square brackets. By collecting such rules, we can produce a meta-grammar that can translate dependency parses in one language (English) into the other (simplified English). The rule above (Rule PRODUCING2BYPRODUCING) will translate "reproduce, producing spores" to "reproduce by producing spores". This rule is alternatively shown as a transduction of elementary trees in Fig. 2. Such deletion and insertion operations are central to text simplification, but a few other operations are also needed to avoid broken dependency links in the Target ETs (cf. Fig. 1).
Consider lexical simplification: for example, the word "extensive" is replaced by "big", resulting in one amod relation being deleted and a new one inserted. A third list is automatically created when a variable (?X1) is present in the DELETE list but not in the INSERT list. This is a command to move any other relations (edges) involving the node ?X1 to the newly created node ?X2, and it ensures correct rule application in new contexts where there might be additional relations involving the deleted word. We also apply a process of generalisation, so that a single rule can be created from multiple instances in the training data. For example, if the modifier "extensive" has been simplified to "big" in the context of a variety of words in the ?X0 position, this can be represented succinctly as "?X0[networks, avalanches, blizzard, controversy]". Note that this list provides the valid lexical contexts for application of the rule. If the word is seen in sufficiently many contexts, we make it universal by removing the list. An example of a generalised rule (Rule *2BIG) states that any of the words in "[extensive, large, massive, sizable, major, powerful, unprecedented, developed, giant]" can be replaced by "big" in any lexical context ?X0; i.e., these words are not ambiguous.

We acquire rules such as the above automatically, filtering out rules that involve syntactic constructs for which we require manually written rules (relative clauses, apposition, coordination and subordination). We have extracted 3180 rules from SEW revision histories and aligned SEW-EW sentence pairs. From the same data, Woodsend and Lapata (2011) extract 1431 rules, but these include rules for deletion, as well as for purely syntactic sentence splitting. The 3180 rules we derive are only lexical simplifications or simple paraphrases. We do not perform deletion operations, and we use manually written rules for sentence splitting. Our approach allows for the encoding of local lexico-syntactic context for lexical simplification. Only if a simplification is seen in many contexts do we generalise the rule by relaxing the lexical context. We consider this a better solution than the one implemented in Woodsend and Lapata (2011), who have to discard lexical rules that are seen only once, because they do not model lexical context.

In addition to the automatically acquired grammar described above, our system uses a small hand-crafted grammar for common syntactic simplifications. As discussed earlier, these rules are difficult to learn from corpora, as difficult morphology and tense manipulations would have to be learnt from specific instances seen in a corpus. In practice, it is easy enough to code these rules correctly. We have 26 hand-crafted rules for apposition, relative clauses, and combinations of the two. A further 85 rules handle subordination and coordination. These are greater in number because they are lexicalised on the conjunction. A further 11 rules cover voice conversion from passive to active. Finally, we include 14 rules to standardise quotations; i.e., to reduce various constructs for attribution to the form "X said: Y." Performing this step allows us to simplify constructs embedded within quotations, another case that is not handled well by existing systems. One of the rules for converting passive to active voice (Rule PASSIVE2ACTIVE) specifies that the node ?X0 should inherit the tense of ?X2 and agree in number with ?X3.
This rule correctly captures the morphological changes required for the verb, something not achieved by the other systems discussed in Section 2. The dependency representation makes such linguistic constraints easy to write by hand. However, we are not yet in a position to learn such constraints automatically. Our argument is that a small number of grammar rules need to be coded carefully by hand to allow us to express the difficult syntactic constructions, while we can harvest large grammars for local paraphrase operations, including lexical substitution.

In this work we apply the simplification rules exhaustively to the dependency parse; i.e., every rule for which the DELETE list is matched is applied iteratively. As an illustration, consider: "The cat was chased by a dog that was barking." We first apply the rule for relative clauses, which removes the embedding "rcmod" relation when there is a subject available for the verb in the relative clause. Then we apply the rule to convert passive to active voice, as described in Section 3.3. Following these two rule applications, we are left with the following list of dependencies: det(cat-2, The-1), dobj(chased-4, cat-2), det(dog-7, a-6), nsubj(chased-4, dog-7), aux(barking-10, was-9), nsubj(barking-10, dog-7). This list now represents two trees, with chased and barking as the root nodes.

Generating from constituency-based parse trees is trivial, in that leaf nodes need to be output in the order processed by a depth-first left-to-right search. The higher level of abstraction of dependency representations makes generation more complicated, as the dependencies abstract away from constituent ordering and word morphology. One option is to use an off-the-shelf generator; however, this does not work well in practice; e.g., Siddharthan (2011) found that misanalyses by the parser can result in unacceptable word and constituent orders in the generated texts. In the system described here, we follow the generation-light approach adopted by Siddharthan (2011). We reuse the word order from the input sentence as a default, and the synchronous grammar encodes any changes in ordering. For example, Rule PASSIVE2ACTIVE above includes a further specification stating that for node ?X0, the traversal order should be subtree ?X3, followed by the current node ?X0, followed by subtree ?X1. Using this specification allows us to traverse the tree using the original word order for nodes with no order specification, and the specified order where a specification exists. In the above instance, this would lead us to simplify "The cat is chased by the dogs" to "the dogs chase the cat". Details of the generation process can be found elsewhere (Siddharthan, 2011, for example), but to summarise, the generation-light approach implemented here uses four lists. At present the automatically harvested rules do not encode morphological changes. They do, however, encode reordering information, which is automatically detected from the relative word positions in the original and simplified training sentences.
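The following is a small, self-contained sketch of how a DELETE/INSERT rule with ?Xn variables and bracketed lexical constraints could be applied to a list of typed dependencies. The triple representation, the rule encoding and the helper names are assumptions for illustration only; they are not the system's actual rule format, which also handles node moves, ordering specifications and morphology.

```python
import re
from itertools import product

# A typed dependency is a (relation, head, dependent) triple, e.g. ("amod", "network-2", "extensive-3").
# A rule pattern may contain variables like "?X0", optionally with lexical constraints: "?X0[networks,avalanches]".

def _match(pattern, value, bindings):
    m = re.fullmatch(r"\?(X\d+)(?:\[(.*)\])?", pattern)
    if not m:                                         # literal: compare against the lemma part
        return pattern == value.rsplit("-", 1)[0]
    var, allowed = m.group(1), m.group(2)
    if allowed and value.rsplit("-", 1)[0] not in allowed.split(","):
        return False
    if var in bindings:
        return bindings[var] == value
    bindings[var] = value
    return True

def _match_dep(pattern_dep, dep, bindings):
    return all(_match(p, v, bindings) for p, v in zip(pattern_dep, dep))

def apply_rule(deps, delete, insert):
    """Bind the DELETE patterns against the dependency list; on success, remove the
    matched dependencies and add the instantiated INSERT dependencies."""
    for candidates in product(deps, repeat=len(delete)):
        bindings = {}
        if all(_match_dep(pd, d, bindings) for pd, d in zip(delete, candidates)):
            def instantiate(tok):
                m = re.fullmatch(r"\?(X\d+)(?:\[.*\])?", tok)
                return bindings.get(m.group(1), tok) if m else tok
            remaining = [dep for dep in deps if dep not in candidates]
            return remaining + [tuple(instantiate(t) for t in ins) for ins in insert]
    return list(deps)  # rule not applicable

# The *2BIG lexical rule: replace "extensive" by "big" in any lexical context ?X0.
deps = [("det", "network-2", "The-1"), ("amod", "network-2", "extensive-3")]
new_deps = apply_rule(deps,
                      delete=[("amod", "?X0", "extensive")],
                      insert=[("amod", "?X0", "big")])
print(new_deps)   # [('det', 'network-2', 'The-1'), ('amod', 'network-2', 'big')]
```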
Our approach to unsupervised style transfer is to modify source texts to match the style of the target domain. To achieve this, we can typically keep most of the source tokens and only modify a fraction of them. To determine which tokens to edit and how to edit them, we propose the following three-step approach:

(1) Train padded MLMs on source domain data ($\Theta_{source}$) and on target domain data ($\Theta_{target}$). (§2.1)
(2) Find the text spans where the models disagree the most to determine the tokens to delete. (§2.2)
(3) Use $\Theta_{target}$ to replace the deleted spans with text that fits the target domain.

The original MLM objective in BERT (Devlin et al., 2019) does not model the length of infilled token spans, since each [MASK] token corresponds to one wordpiece token that needs to be predicted at a given position. To model the length, it is possible to use an autoregressive decoder or a separate model (Mansimov et al., 2019). Instead, we use the efficient non-autoregressive padded MLM approach of Mallinson et al. (2020), which enables BERT to predict [PAD] symbols when infilling fixed-length spans of $n_p$ [MASK] tokens. When creating training data for this model, spans of zero to $n_p$ tokens, corresponding to whole word(s), are masked out, after which the mask sequences are padded so that they always have $n_p$ [MASK] tokens. For example, if $n_p = 4$ and we have randomly decided to mask out tokens from $i$ to $j = i + 2$ (inclusive) from text $W$, the corresponding input sequence is:

$$W_{\backslash i:j} = (w_1, \ldots, w_{i-1}, \text{[MASK]}, \text{[MASK]}, \text{[MASK]}, \text{[MASK]}, w_{i+3}, \ldots, w_{|W|}).$$

The targets for the first three [MASK] tokens are the original masked-out tokens, i.e. $w_i$, $w_{i+1}$, $w_{i+2}$, while for the remaining token the model is trained to output a special [PAD] token. Similar to (Salazar et al., 2020), we can compute the pseudo-likelihood ($\mathcal{L}$) of the original tokens $W_{i:j}$ according to:

$$\mathcal{L}\left(W_{i:j} \mid W_{\backslash i:j}; \Theta\right) = \prod_{t=i}^{j} P_{MLM}\left(w_t \mid W_{\backslash i:j}; \Theta\right) \times \prod_{t=j+1}^{i+n_p-1} P_{MLM}\left(\text{[PAD]}_t \mid W_{\backslash i:j}; \Theta\right),$$

where $P_{MLM}\left(*_t \mid W_{\backslash i:j}; \Theta\right)$ denotes the probability of the random variable corresponding to the $t$-th token in $W_{\backslash i:j}$ taking the value $w_t$ or [PAD]. Furthermore, we can compute the maximum pseudo-likelihood infilled tokens $\hat{W}_{i:j} = \arg\max_{W_{i:j}} \mathcal{L}\left(W_{i:j} \mid W_{\backslash i:j}; \Theta\right)$ by taking the most likely insertion for each [MASK] independently, as done by regular BERT. These maximum-likelihood estimates are used both when deciding which spans to edit (as described in §2.2) and when replacing the edited spans.

In practice, instead of training two separate models for the source and target domain, we train a single conditional model. Conditioning on a domain is achieved by prepending a special token ([SOURCE] or [TARGET]) to each token sequence fed to the model.[1] At inference time, the padded MLM can decide to insert zero tokens (by predicting [PAD] for each mask) or up to $n_p$ tokens, based on the bidirectional context it observes. In our experiments, we set $n_p = 4$.[2]

Our approach to using MLMs to determine where to delete and insert tokens is to find text spans where the source and target model disagree the most. Here we introduce a scoring function to quantify the level of disagreement. First, we note that any span of source tokens that has a low likelihood in the target domain is a candidate span to be replaced or deleted. That is, source tokens from index $i$ to $j$ should be more likely to be deleted the lower the likelihood $\mathcal{L}\left(W_{i:j} \mid W_{\backslash i:j}; \Theta_{target}\right)$ is.
Moreover, if two spans have equally low likelihoods under the target model, but one of them has a higher maximum-likelihood replacement $\hat{W}^{target}_{i:j}$, then it is safer to replace the latter. For example, if a sentiment transfer model encounters a polarized word of the wrong sentiment and an arbitrary phone number, it might evaluate both of them as unlikely. However, the model will be more confident about how to replace the polarized word, so it should try to replace that rather than the phone number. Thus the first component of our scoring function is:

$$\text{TargetScore}(i, j) = \mathcal{L}\left(\hat{W}^{target}_{i:j} \mid W_{\backslash i:j}; \Theta_{target}\right) - \mathcal{L}\left(W_{i:j} \mid W_{\backslash i:j}; \Theta_{target}\right).$$

This function can be used on its own without having access to a source domain corpus, but in some cases this leads to undesired replacements. The target model can be very confident that, e.g., a rarely mentioned entity should be replaced with a more common entity, although this type of edit does not help with transferring the style of the source text toward the target domain. To address this issue, we introduce a second scoring component leveraging the source domain MLM:

$$\text{SourceScore}(i, j) = -\max\left(0,\ \mathcal{L}\left(\hat{W}^{target}_{i:j} \mid W_{\backslash i:j}; \Theta_{source}\right) - \mathcal{L}\left(W_{i:j} \mid W_{\backslash i:j}; \Theta_{source}\right)\right)$$

By adding this component to TargetScore(i, j), we can counter edits that only increase the likelihood of a span under $\Theta_{target}$ but do not push the style closer to the target domain.[3] Our overall scoring function is given by:

$$\text{Score}(i, j) = \text{TargetScore}(i, j) + \text{SourceScore}(i, j).$$

To determine the span to edit, we compute $\arg\max_{i,j} \text{Score}(i, j)$, where $1 \le i \le |W| + 1$ and $i - 1 \le j \le i + n_p - 1$. The case $j = i - 1$ denotes an empty source span, meaning that the model does not delete any source tokens but only adds text before the $i$-th source token. The process for selecting the span to edit is illustrated in Figure 1, where the source text corresponds to two sentences to be fused. The source MLM has been trained on unfused sentences and the target MLM on fused sentences from the DiscoFuse corpus (Geva et al., 2019). In this example, the target model is confident that either the boundary between the two sentences or the grammatical mistake "in the France" should be edited. However, the source model is also confident that the grammatical mistake should be edited, so the model correctly ends up editing the words ". She" at the sentence boundary. The resulting fused sentence is: "Marie Curie was born in Poland and died in the France."

Efficiency. The above method is computationally expensive, since producing a single edit requires $O(|W| \times n_p)$ BERT inference steps, although these can be run in parallel.
[Figure 1: heatmaps of TargetScore and SourceScore over the source words for 0-4 deleted words in the sentence-fusion example.]

The model can be distilled into a much more efficient supervised student model without losing, and even gaining, accuracy, as shown in our experiments. This is done by applying MASKER to the unaligned source and target examples to generate aligned silver data for training the student model.

[1] The motivation for using a joint model instead of two separate models is to share model weights to give more consistent likelihood estimates. An alternative way of conditioning the model would be to add a domain embedding to each token embedding, as proposed by Wu et al. (2019).
[2] In early experiments, we also tested $n_p = 8$, but this resulted in fewer grammatical predictions since each token is predicted independently. To improve the predictions, we could use SpanBERT (Joshi et al., 2020), which is designed to infill spans, or an autoregressive model like T5 (Raffel et al., 2019).
[3] SourceScore(i, j) is capped at zero to prevent it from dominating the overall score. Otherwise, we might obtain low-quality edits in cases where the likelihood of the source span $W_{i:j}$ is high under the source model and low under the target model but no good replacements exist according to the target model. Given the lack of good replacements, $\hat{W}^{target}_{i:j}$ may end up being ungrammatical, pushing SourceScore(i, j) close to 1 and thus making it a likely edit, although TargetScore(i, j) remains low.
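A minimal sketch of the span-selection loop follows. It assumes a padded-MLM wrapper `mlm(masked_tokens)` that returns, for each of the $n_p$ mask slots, a dictionary of token probabilities (including "[PAD]"); the helper names and the half-open span indexing are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pseudo_likelihood(mlm, tokens, i, j, span, n_pad=4):
    """Pseudo-likelihood of `span` filling positions [i, j) under a padded MLM: product of
    the span-token probabilities times [PAD] probabilities for the unused mask slots."""
    masked = tokens[:i] + ["[MASK]"] * n_pad + tokens[j:]
    slot_probs = mlm(masked)                               # list of n_pad dicts: token -> prob
    targets = list(span) + ["[PAD]"] * (n_pad - len(span))
    return float(np.prod([p.get(t, 1e-12) for p, t in zip(slot_probs, targets)]))

def best_infill(mlm, tokens, i, j, n_pad=4):
    """Most likely replacement: per-slot argmax, dropping [PAD] predictions."""
    masked = tokens[:i] + ["[MASK]"] * n_pad + tokens[j:]
    picks = [max(p, key=p.get) for p in mlm(masked)]
    return [t for t in picks if t != "[PAD]"]

def select_edit(src_mlm, tgt_mlm, tokens, n_pad=4):
    """Return (i, j, replacement) maximizing TargetScore + SourceScore over candidate spans;
    spans are half-open [i, j), and j == i corresponds to a pure insertion."""
    best, best_score = None, -np.inf
    for i in range(len(tokens) + 1):
        for j in range(i, min(i + n_pad, len(tokens)) + 1):
            span = tokens[i:j]
            repl = best_infill(tgt_mlm, tokens, i, j, n_pad)
            target_score = (pseudo_likelihood(tgt_mlm, tokens, i, j, repl, n_pad)
                            - pseudo_likelihood(tgt_mlm, tokens, i, j, span, n_pad))
            source_score = -max(0.0, pseudo_likelihood(src_mlm, tokens, i, j, repl, n_pad)
                                     - pseudo_likelihood(src_mlm, tokens, i, j, span, n_pad))
            score = target_score + source_score
            if score > best_score:
                best, best_score = (i, j, repl), score
    return best
```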
We propose methods that consider lexical semantic relations between the defined word and the defining words, in the definition encoder for definition embeddings (Section 3.1) and in the definition decoder for definition generation (Section 3.2). To utilize the implicit semantic relations in definitions, we use word-pair embeddings that represent the semantic relations of word pairs. We describe how the word-pair embeddings are obtained in Section 4.1.

To consider lexical semantic relations when acquiring embeddings from definitions, we feed the word-pair embeddings to the definition encoder. Assuming that the pair embedding $v_{(w_{trg}, w_t)}$ represents the relation between the defined word $w_{trg}$ and the $t$-th defining word $w_t$, we calculate $h_t$ as follows:

EQUATION

where ; denotes vector concatenation. To exclude meaningless relations between the defined word and functional words, we replace $v_{(w_{trg}, w_t)}$ with the zero vector if $w_t$ is a stopword. With word-pair embeddings as inputs, the definition encoder can recognize the role of the information provided by the current word $w_t$, for example, a type of $w_{trg}$ (Is-a), a goal of $w_{trg}$ (Used-for), or a component of $w_{trg}$ (Has-a).

To provide the definition decoder with information regarding lexical semantic relations, we use an additional loss function with word-pair embeddings as follows:

$$L_{rel} = \frac{1}{|K|} \sum_{w_t \in K} \left\| v_{(w_{trg}, w_t)} - (W_r h_t + b_r) \right\|^2 \qquad (7)$$

$$K = \{ w_t \mid w_t \in D \wedge w_t \notin S \} \qquad (8)$$

where $S$ is a set of stopwords. As in Section 3.1, we ignore the loss when $w_t$ is a stopword. This additional loss allows the definition decoder to learn the pattern of what semantic relations occur in definitions and how they occur. For example, if $w_{trg}$ denotes a type of tool, a defining word that has the Is-a relation to $w_{trg}$ tends to be followed by a Used-for word.
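As an illustration only (the paper's exact encoder and decoder equations are not reproduced above), the sketch below shows one way to concatenate word-pair embeddings with word embeddings in an LSTM encoder and to compute the auxiliary relation loss of Eqs. 7-8, zeroing out stopword positions; all module and variable names are assumptions.

```python
import torch
import torch.nn as nn

class RelationAwareDefinitionEncoder(nn.Module):
    """Sketch: LSTM encoder over a definition whose per-token inputs are word embeddings
    concatenated with the word-pair (relation) embedding v_(w_trg, w_t)."""
    def __init__(self, emb_dim, pair_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim + pair_dim, hidden_dim, batch_first=True)

    def forward(self, word_embs, pair_embs, stopword_mask):
        # zero out pair embeddings for stopwords (no meaningful relation)
        pair_embs = pair_embs * (~stopword_mask).unsqueeze(-1).float()
        h, _ = self.lstm(torch.cat([word_embs, pair_embs], dim=-1))
        return h                                           # h_t for each defining word

def relation_loss(h, pair_embs, stopword_mask, proj):
    """L_rel: predict the word-pair embedding from the state h_t via W_r h_t + b_r,
    averaged over non-stopword defining words (cf. Eqs. 7-8)."""
    pred = proj(h)                                         # W_r h_t + b_r
    sq_err = ((pair_embs - pred) ** 2).sum(-1)
    keep = (~stopword_mask).float()
    return (sq_err * keep).sum() / keep.sum().clamp(min=1)

# toy shapes: batch of 2 definitions, 5 defining words each
B, T, E, P, H = 2, 5, 100, 50, 128
enc = RelationAwareDefinitionEncoder(E, P, H)
proj = nn.Linear(H, P)
words, pairs = torch.randn(B, T, E), torch.randn(B, T, P)
stop = torch.zeros(B, T, dtype=torch.bool)
loss = relation_loss(enc(words, pairs, stop), pairs, stop, proj)
```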
Our approach curates a frame lexicon of a Proposition Bank generated with annotation projection. The approach has two curation steps: filtering (Section 3.1) and merging (Section 3.2). We then make a final pass to add human-readable explanations to the curated frame lexicon (Section 3.3). [Figure 3: information presented to curators for the German predicate verlieren and the frame LOSE.03, with sample sentences including a correct example ("This way, a man loses his head.") and an incorrect example, "Wir verlieren den Krieg gegen das Dominion." ("We are losing the war against the Dominion.").]

The first task is to identify all incorrect frames for TL verbs. For each entry in the lexicon, curators must make a binary decision on whether the entry is correct or not. In order to make this decision, curators are presented with the following information: 1) the TL verb; 2) a description of the English frame and its roles; 3) a sample of TL sentences annotated with this frame. Refer to Figure 3 for an illustration. Given this information, curators must answer two questions (detailed below). If the answers to both questions are yes, the entry is considered valid. If one of the questions is answered with no, the entry must be removed from the lexicon.

Q1: Is the English frame a valid sense for the TL verb? The first question concerns the semantic validity of the English frame for the TL verb. To answer this question, curators only consider the English frame description. If the description refers to semantics that the TL verb clearly cannot evoke, the answer to this question is no. We encountered such a case in Section 1 with the verb drehen, which cannot evoke the frame TURN.02. In all other cases, the answer is yes. Notably, we do not ask whether an English frame is a perfect semantic fit. At this point in the process, we are only interested in filtering out clear errors.

Q2: Does the TL verb accurately reflect the English frame description in the sample sentences? Even if an entry is valid in principle, it may still be subject to errors in practice. We find that some entries are correct judging from their description, but are never correctly detected in the corpus due to errors made by the English SRL. This problem disproportionately affects frames for which only limited English training data is available. For this reason, we require the curator to inspect a sample of 5 TL sentences per entry and determine whether they are correctly labeled. Refer to the Figure 3 examples for both cases: Sentences 1 and 3 correctly invoke LOSE.03 (lose a battle), whereas Sentence 2 evokes LOSE.02 (lose an item). If a majority of the example sentences are incorrectly labeled, this question must be answered with no.

The second task addresses the issue of redundancy caused by multiple entries for TL verbs that evoke the same semantics. For each pair of entries for the same TL verb, a curator must decide whether they are synonymous and need to be merged into a single entry. This task therefore effectively decides the semantic granularity of the lexicon entries for each TL verb. We base merging decisions on the annotation guidelines of the English Proposition Bank, which specify that new frames need to be created to reflect different syntactic usages of a verb. In addition, new frames are created for broadly different meanings of a verb even if the syntactic subcategorization is the same. For each merging decision, we present curators with the following information: 1) the TL verb; 2) the two frames and their descriptions; 3) a set of TL sample sentences for each frame.
The latter is the most important, since the sample sentences illustrate how the TL verb is used in related contexts when labeled with a specific frame. Refer to Figure 4 for an example [Figure 4: information presented to the curator for merge decisions: two frames, their descriptions and example sentences]. Given this information, curators must answer two questions (explained below). If the answer to either of these questions is no, the two entries should not be merged.

Q1: Are the two entries usage-synonyms? We define usage-synonyms as synonyms in terms of target-language usage. To illustrate the difference from regular synonyms, consider the example in Figure 4, in which curators must decide whether TASTE.01 and TRY.01 should be merged. While the two English frames are clearly not synonymous, their target-language usages are. As lexicon entries for the German verb kosten, they are both used solely in the context of tasting food and are therefore usage-synonyms, as illustrated by the sample sentences for each frame in Figure 4. If two entries are clearly not usage-synonyms, the answer to this question is no. In all other cases, the answer is yes.

Q2: Do the two entries represent syntactically different usages? We found a number of cases in which curators disagreed on whether two entries are usage-synonyms or not. An example of this was entries with partially overlapping semantics, such as the frame pair WRAP.01 (enclose) and PACK.01 (fill, load). To address this, we created a guideline for comparing the syntactic usage of TL verbs. We ask curators to build the dictionary expansion for both entries, which we define as the default syntactic expansion that one might find in a dictionary. An English example for the verb turn is to turn something for TURN.01 and to turn into something for TURN.02. However, we ask curators to create this form for TL verbs. If the TL dictionary expansions are different, the answer to this question is no.

After curators complete both tasks, we rerun annotation projection using the created lexicon to filter out incorrect entries and merge redundant entries. This produces a Proposition Bank with manually curated TL frames. To complete the curation process, we ask curators to inspect each entry in the dictionary and add comments or explanations, as well as dictionary expansions. This information is intended for human consumption. The purpose of this annotation is to make apparent the distinctions between multiple entries for the same TL verb and to explain our aliasing decisions. The entire curation process thus produces an annotated Proposition Bank with salient, manually curated frames for each TL verb.
We address the unsupervised domain adaptation (UDA) task with only a trained source model and without access to source data. We consider K-way classification. Formally, in this novel setting, we are given a trained source model $f_s: \mathcal{X} \to \mathcal{Y}$ and a target domain $\mathcal{D}_t = \{x_i\}_{i=1}^{m} \subset \mathcal{X}$ with $m$ unlabeled samples. The goal of Cross-domain Knowledge Distillation (CdKD) is to learn a target model $f_t: \mathcal{X} \to \mathcal{Y}$ and infer $\{y_i\}_{i=1}^{m}$, with only $\mathcal{D}_t$ and $f_s$ available. The target model $f_t$ is allowed to have a different network architecture from $f_s$.

CdKD is a special case of KD, consisting of a trained teacher model $f_s$, a student model $f_t$ and unlabeled data $\mathcal{D}_t$. It differs from KD in that the empirical distribution of $\mathcal{D}_t$ does not match the distribution associated with the trained model $f_s$. Therefore, it is necessary to introduce distribution adaptation to eliminate the bias between the source and target domains while distilling the knowledge. Specifically, as shown in Figure 1(a), we first introduce KD to distill knowledge into the target domain in terms of the class probabilities produced by the source model $f_s$. Then, we introduce a novel criterion, JKSD, to match the joint distributions across domains by evaluating the shift between a known distribution and a set of data. This is the first work to explore the distribution discrepancy between a model and a set of data in the UDA task.

Given a target sample $x \in \mathcal{D}_t$, the target model $f_t: \mathcal{X} \to \mathcal{Y}$ produces class probabilities using a "softmax" output layer that converts the logits $p = (p_1, \cdots, p_K)$ into a probability vector $f_t(x) = (q_1, \cdots, q_K)$,

$$q_i = \frac{\exp(p_i / T)}{\sum_j \exp(p_j / T)}$$

where $T$ is a temperature used for generating "softer" class probabilities. We optimize the target model $f_t$ by minimizing the following objective for knowledge distillation,

EQUATION

In our paper, the setting of the temperature follows (Hinton et al., 2015): a high temperature $T$ is adopted to compute $f_t(x)$ during training, but after training a temperature of 1 is used.

In the traditional UDA setting, Joint Maximum Mean Discrepancy (JMMD) (Long et al., 2017) has been applied to measure the discrepancy between the joint distributions of different domains, and it can be estimated empirically using finite samples from the source and target domains. Specifically, suppose $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ and $l: \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ are positive definite kernels with feature maps $\phi(\cdot): \mathcal{X} \to \mathcal{F}$ and $\psi(\cdot): \mathcal{Y} \to \mathcal{G}$ for the domains $\mathcal{X}$ and $\mathcal{Y}$, respectively, corresponding to the reproducing kernel Hilbert spaces (RKHS) $\mathcal{F}$ and $\mathcal{G}$. Let $C^P_{XY}: \mathcal{G} \to \mathcal{F}$ be the uncentered cross-covariance operator, defined as $C^P_{XY} = \mathbb{E}_{(x,y) \sim P}[\phi(x) \otimes \psi(y)]$. JMMD measures the shift between the joint distributions $P(\mathbf{X}, \mathbf{Y})$ and $Q(\mathbf{X}, \mathbf{Y})$ by

$$J(P, Q) = \sup_{f \otimes g \in \mathcal{H}} \mathbb{E}_Q[f(x)g(y)] - \mathbb{E}_P[f(x)g(y)] = \left\| C^Q_{XY} - C^P_{XY} \right\|_{\mathcal{F} \otimes \mathcal{G}}$$

where $\mathcal{H}$ is the unit ball in $\mathcal{F} \otimes \mathcal{G}$. In our setting, unfortunately, the empirical estimation of JMMD is unavailable since we cannot access the source data $\mathcal{D}_s$ directly (the empirical estimation of JMMD is given in Appendix A.1). Kernelized Stein discrepancy (KSD), a statistical test for goodness-of-fit, can test whether a set of samples is generated from a given marginal probability distribution (Chwialkowski et al., 2016; Liu et al., 2016). Inspired by KSD, we introduce Joint KSD (JKSD) to evaluate the discrepancy between a known distribution $P(\mathbf{X}, \mathbf{Y})$ and a set of data $\hat{Q} = \{x_i, y_i\}_{i=1}^{m}$ obtained from a distribution $Q(\mathbf{X}, \mathbf{Y})$. Assume the dimension of $\mathcal{X}$ is $d$ ($\mathcal{X} = \mathbb{R}^d$), i.e., $x = (x^1, \cdots, x^d)$ for all $x \in \mathcal{X}$.
We denote by $\mathcal{F}^d = \mathcal{F} \times \cdots \times \mathcal{F}$ the Hilbert space of $d \times 1$ vector-valued functions $f = \{f^1, \cdots, f^d\}$ with $f^i \in \mathcal{F}$, with inner product $\langle f, \tilde{f} \rangle_{\mathcal{F}^d} = \sum_{i=1}^{d} \langle f^i, \tilde{f}^i \rangle_{\mathcal{F}}$ for $f, \tilde{f} \in \mathcal{F}^d$. We begin by defining a Stein operator $\mathcal{A}_P: \mathcal{F}^d \otimes \mathcal{G} \to \mathcal{F}^d \otimes \mathcal{G}$ acting on functions $f \in \mathcal{F}^d$ and $g \in \mathcal{G}$:

$$(\mathcal{A}_P f \otimes g)(x, y) = g(y)\left(\nabla_x f(x) + f(x)\,\nabla_x \log P(x, y)\right)^{\top} \mathbf{1}_d \qquad (2)$$

where $\nabla_x \log P(x, y) = \frac{\nabla_x P(x, y)}{P(x, y)} \in \mathbb{R}^{d \times 1}$, $\nabla_x f(x) = \left(\frac{\partial f^1(x)}{\partial x^1}, \cdots, \frac{\partial f^d(x)}{\partial x^d}\right) \in \mathbb{R}^{d \times 1}$ for $x = (x^1, \cdots, x^d)$, and $\mathbf{1}_d$ is a $d \times 1$ vector with all elements equal to 1. The expectation of the Stein operator $\mathcal{A}_P$ over the distribution $P$ is equal to 0:

$$\mathbb{E}_P (\mathcal{A}_P f \otimes g)(x, y) = 0 \qquad (3)$$

which can be proved easily by (Chwialkowski et al., 2016, Lemma 5.1). The Stein operator $\mathcal{A}_P$ can be expressed by defining a function $\xi_{xy}$ over the space $\mathcal{F}^d \otimes \mathcal{G}$ that depends on gradients of the log-distribution and the kernel,

EQUATION

Thus, $(\mathcal{A}_P f \otimes g)(x, y)$ can be written as an inner product, i.e., $\langle f \otimes g, \xi_{xy} \rangle_{\mathcal{F}^d \otimes \mathcal{G}}$. Now, we can define JKSD and express it in the RKHS by replacing the term $f(x)g(y)$ in $J(P, Q)$ with our Stein operator:

$$S(P, Q) := \sup_{f \otimes g \in \mathcal{H}} \mathbb{E}_Q (\mathcal{A}_P f \otimes g)(x, y) - \mathbb{E}_P (\mathcal{A}_P f \otimes g)(x, y) = \sup_{f \otimes g \in \mathcal{H}} \mathbb{E}_Q (\mathcal{A}_P f \otimes g)(x, y) = \sup_{f \otimes g \in \mathcal{H}} \langle f \otimes g, \mathbb{E}_Q \xi_{xy} \rangle_{\mathcal{F}^d \otimes \mathcal{G}} = \left\| \mathbb{E}_Q \xi_{xy} \right\|_{\mathcal{F}^d \otimes \mathcal{G}}$$

where $\mathcal{H}$ is the unit ball in $\mathcal{F}^d \otimes \mathcal{G}$. This makes it clear why Eq. 3 is a desirable property: we can compute $S(P, Q)$ by computing the Hilbert-Schmidt norm $\|\mathbb{E}_Q \xi_{xy}\|$, without needing to access data drawn from $P$. We can empirically estimate $S^2(P, Q)$ based on the known probability $P$ and finite samples $\hat{Q} = \{(x_i, y_i)\}_{i=1}^{m} \sim Q(\mathbf{X}, \mathbf{Y})$ in terms of kernel tricks as follows:

$$\hat{S}^2(P, \hat{Q}) = \frac{1}{m^2} \mathrm{tr}(\nabla^2 \mathbf{K}\, \mathbf{L} + 2\boldsymbol{\Upsilon} \mathbf{L} + \boldsymbol{\Omega} \mathbf{L}) \qquad (5)$$

$$(\nabla^2 \mathbf{K})_{i,j} = \langle \nabla_{x_i} \phi(x_i), \nabla_{x_j} \phi(x_j) \rangle_{\mathcal{F}^d}, \quad \boldsymbol{\Upsilon}_{i,j} = (\nabla_{x_i} k(x_i, x_j))^{\top} \nabla_{x_j} \log P(x_j, y_j), \quad \boldsymbol{\Omega}_{i,j} = k(x_i, x_j)\, \nabla_{x_i} \log P(x_i, y_i)^{\top} \nabla_{x_j} \log P(x_j, y_j)$$

where $\mathbf{L} = \{l(y_i, y_j)\}$ is the kernel Gram matrix, $\langle \nabla_x \phi(x), \nabla_{x'} \phi(x') \rangle_{\mathcal{F}^d} = \sum_{i=1}^{d} \frac{\partial k(x, x')}{\partial x^i \partial x'^i}$, all the matrices $\nabla^2 \mathbf{K}$, $\boldsymbol{\Upsilon}$, $\boldsymbol{\Omega}$ and $\mathbf{L}$ are in $\mathbb{R}^{m \times m}$, and $\mathrm{tr}(\mathbf{M})$ is the trace of the matrix $\mathbf{M}$ (refer to Appendix A.2 for details). In our experiments, we adopt the Gaussian kernel $k(x_1, x_2) = \exp\left(-\frac{1}{\sigma^2} \|x_1 - x_2\|^2\right)$, whose derivative $\nabla_{x_1} k(x_1, x_2) \in \mathbb{R}^d$ and $(\nabla^2 \mathbf{K})_{i,j} \in \mathbb{R}$ can be computed numerically:

$$\nabla_{x_1} k(x_1, x_2) = -\frac{2}{\sigma^2}\, k(x_1, x_2)\, (x_1 - x_2), \qquad (\nabla^2 \mathbf{K})_{i,j} = k(x_1, x_2)\left(\frac{2d}{\sigma^2} - \frac{4\|x_1 - x_2\|^2}{\sigma^4}\right)$$

Remark. By virtue of goodness-of-fit test theory, we have $S(P, Q) = 0$ if and only if $P = Q$ (Chwialkowski et al., 2016). Instead of applying uniform weights as MMD does, JKSD applies non-uniform weights $\beta_{i,j}$:

$$\hat{S}^2(P, \hat{Q}) = \sum_{i,j} \beta_{i,j}\, l(y_i, y_j)$$

where $\beta_{i,j} = (\nabla^2 \mathbf{K} + 2\boldsymbol{\Upsilon} + \boldsymbol{\Omega})_{i,j}$ is, in turn, determined by the activation-based and gradient-based features of the known probability $P$. JKSD computes a dynamic weight $\beta_{i,j}$ to decide whether sample $i$ shares the same label with another sample $j$ in the target domain. Different from cluster-based methods, JKSD assigns each sample a label according to all the data in the target domain instead of the centroid of each category. The computation of centroids suffers severely from noise due to domain shift. In contrast, our solution is more suitable for UDA because we avoid using untrusted intermediate results (i.e., the centroid of each category) to infer the labels. The pipeline of our CdKD framework is shown in Figure 1(b).
The source model, parameterized by a DNN, consists of two modules: a feature extractor $T_s: \mathcal{X} \to \mathcal{Z}_s$ and a classifier $G_s: \mathcal{Z}_s \to \mathcal{Y}$, i.e., $f_s(x) = G_s(T_s(x))$. The target model $f_t = G_t \circ T_t$ also has two modules, for which we use the parallel notations $T_t(\cdot\,; \theta_T): \mathcal{X} \to \mathcal{Z}_t$ and $G_t(\cdot\,; \theta_G): \mathcal{Z}_t \to \mathcal{Y}$. Note that in our experiments the dimension of the latent representations of the source model is set equal to that of the target model, i.e., $\mathcal{Z}_s = \mathcal{Z}_t = \mathbb{R}^d$. The extractors $T_s$ and $T_t$ are allowed to adopt different network architectures. The input space $\mathcal{X}$ is usually highly sparse, so the kernel function cannot capture sufficient features to measure similarity there. Therefore, we evaluate JKSD based on the latent representations of the target samples, i.e., $\hat{Q} = \{(z, y) \mid z = T_t(x),\ y = G_t(z),\ x \in \mathcal{D}_t\} \sim Q(\mathbf{Z}, \mathbf{Y})$.

In Eq. 5, it is required to evaluate the joint probability $P(\mathbf{Y} = y, \mathbf{Z} = z) = p(y|z)\,p(z)$ for a sample $(z, y)$ obtained from $\hat{Q}$. The probability $p(y|z)$ that the sample follows the conditional distribution $P(\mathbf{Y}|\mathbf{Z})$ of the source domain can be evaluated as $p(y|z) = y^{\top} G_s(z)$. Similarly, the term $p(z)$ represents the probability that the target representation $z$ follows the marginal distribution $P(\mathbf{Z})$ of the source domain. Since we cannot access the source marginal distribution directly, we approximate it by evaluating the cosine similarity of the representations output by the source model and the target model, i.e., $p(z) = \frac{1}{2}\cos(z, T_s(x)) + \frac{1}{2}$, where $x = T_t^{-1}(z)$ is the sample corresponding to $z$ for any $z \in \hat{Q}$. Formally, the term $\nabla_z \log P(z, y)$ in Eq. 5 can be computed as

$$\nabla_z \log P(z, y) = \frac{1}{p(y|z)}\, y^{\top} \nabla_z G_s(z) + \frac{\nabla_z p(z)}{p(z)}$$

where $\nabla_z G_s(z) \in \mathbb{R}^{K \times d}$ is the Jacobian matrix of the source classifier $G_s$ with respect to the target latent representation. We propose to train the target model $f_t$ by jointly distilling the knowledge from the source domain and reducing the shift in the joint distributions via JKSD:

$$\min_{\theta_T, \theta_G} L_{KD} + \mu\, \hat{S}^2(P, \hat{Q})$$

where $\mu > 0$ is a trade-off parameter for JKSD. In order to maximize the test power of JKSD, we require the class of functions $h \in \mathcal{F}^d \otimes \mathcal{G}$ to be rich enough. Meanwhile, kernel-based metrics usually suffer from vanishing gradients for low-bandwidth kernels. We are inspired by (Long et al., 2017), which introduces adversarial training to circumvent these issues. Specifically, we attach fully connected layers $U$ and $V$, parameterized by $\theta_U$ and $\theta_V$, to JKSD, i.e., $k(x_i, x_j)$ and $l(y_i, y_j)$ are replaced by $k(U(x_i), U(x_j))$ and $l(V(y_i), V(y_j))$ in Eq. 5. We maximize JKSD with respect to the new parameters $\theta_U$ and $\theta_V$ to maximize its test power, such that the samples in the target domain are made more discriminative by fully exploiting the activation and gradient features of the source domain. As shown in Figure 1(c), the target model $f_t$ can be optimized by the following adversarial objective,

EQUATION
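To make Eq. 5 concrete, here is a hedged PyTorch sketch of the empirical JKSD estimator using the Gaussian-kernel derivatives given above. The linear kernel over soft labels and the externally supplied score function $\nabla_z \log P(z, y)$ are simplifying assumptions, and the adversarial $U$/$V$ layers are omitted.

```python
import torch

def gaussian_kernel(z, sigma):
    d2 = torch.cdist(z, z) ** 2                              # pairwise squared distances, (m, m)
    return torch.exp(-d2 / sigma ** 2), d2

def jksd_sq(z, y_probs, score, l_kernel, sigma=1.0):
    """Empirical JKSD^2 (Eq. 5 sketch): z (m, d) latent features, y_probs (m, K) soft labels,
    score (m, d) = grad_z log P(z, y) evaluated on the target samples,
    l_kernel: kernel over the label space (here a linear kernel on the soft labels)."""
    m, d = z.shape
    K, d2 = gaussian_kernel(z, sigma)
    L = l_kernel(y_probs)                                    # (m, m) label Gram matrix
    diff = z.unsqueeze(1) - z.unsqueeze(0)                   # (m, m, d)
    grad_k = -2.0 / sigma ** 2 * K.unsqueeze(-1) * diff      # grad_{z_i} k(z_i, z_j)
    Upsilon = torch.einsum("ijd,jd->ij", grad_k, score)      # (m, m)
    Omega = K * (score @ score.T)                            # (m, m)
    nabla2K = K * (2 * d / sigma ** 2 - 4 * d2 / sigma ** 4) # (m, m)
    return torch.trace((nabla2K + 2 * Upsilon + Omega) @ L) / m ** 2

# toy usage with a linear kernel on class probabilities
m, d, Kc = 16, 8, 4
z = torch.randn(m, d, requires_grad=True)                    # e.g. T_t(x) for a target batch
y = torch.softmax(torch.randn(m, Kc), dim=-1)                # e.g. G_t(z)
score = torch.randn(m, d)                                    # stand-in for grad_z log P(z, y)
val = jksd_sq(z, y, score, l_kernel=lambda p: p @ p.T)
```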
Language models compute $p(w_i \mid w_{<i})$, the probability distribution of the next token $w_i$ given the preceding context $w_{<i}$. The conventional training objective of an LM is to minimize the surprisal of tokens in a training set. The surprisal of a single token can be expressed as the negative log probability of that token given the preceding context (prefix):

$$\ell_i = -\log p(w_i \mid w_{<i})$$

While many models have been proposed to compute $p(w_i \mid w_{<i})$, we focus on the Transformer architecture (Vaswani et al., 2017), which consists of a stack of alternating self-attention and feed-forward blocks and has become the mainstream architecture for large-scale LM pretraining. Unlike prior work, which has focused on fixed Transformer language model checkpoints, we are curious to see how intervening in the training process would impact the resulting models. Specifically, we ask: are there any training objectives or model design choices that would improve the models' acquisition of linguistic knowledge? To understand how varying training configurations affect the linguistic capacities of the final models, we narrow our focus to the LM training objective and the self-attention mechanism. We train a set of Transformer LMs, each differing from the others only in the changes described below.

Focal loss (FL). As shown by Zhang et al. (2021), language models learn different linguistic phenomena at different speeds and require different amounts of data. For instance, the learning curve for subject-verb agreement phenomena plateaus after training on more than 10M tokens, whereas filler-gap dependencies display steadily increasing performance even up to 30B tokens of training data. This suggests that each phenomenon has an inherent "difficulty", with some requiring more data for an LM to master. In such a scenario, can we improve the acquisition of linguistic knowledge by forcing the model to pay more attention to the "difficult" tokens? To achieve this, one potential alternative to the standard log-loss training objective is focal loss (Lin et al., 2018), which can be intuitively explained as reducing the penalty on "easy", well-predicted tokens and increasing the relative penalty on the "hard" tokens. Formally, the surprisal of each target token is scaled by a factor that decreases with the predicted probability:

$$\ell_i^{FL} = -(1 - p(w_i \mid w_{<i}))^{\gamma} \log p(w_i \mid w_{<i})$$

Here, $\gamma$ is a hyper-parameter controlling the relative importance of poorly-predicted and well-predicted tokens. Larger values of $\gamma$ allocate more weight to tokens with high surprisal.

Masked loss (ML). In the focal loss setting, well-predicted tokens still receive a certain amount of penalty. As an extreme version of the focal loss setting, we simply zero out the loss (masked loss) for tokens whose predicted probability exceeds a given threshold. Formally, given a threshold $t$, the masked loss is:

$$\ell_i^{ML} = -\big(1 - \mathbb{I}(p(w_i \mid w_{<i}) \ge t)\big)\, \log p(w_i \mid w_{<i})$$

Auxiliary loss (AL). Multitask training is commonly adopted to provide extra supervision signals to the language model (Winata et al., 2018; Zhou et al., 2019). To explicitly endow an LM with a better understanding of syntactic knowledge, we add an auxiliary task where the model is trained to predict labels derived from an external constituency parser using the final layer's token-level representations.
The loss of this prediction task is added to the original loss, weighted by a hyper-parameter $\alpha$:

$$\ell_i^{AL} = -\alpha \log p(w_i \mid w_{<i}) - (1 - \alpha) \log p(c_i \mid w_{<i})$$

Here $c_i$ denotes the linguistic label for each token, which we obtain by associating a token with both the smallest non-terminal constituent type containing that token and the depth of that constituent in the parse tree. For example, a noun phrase "red apple" at depth 3 in the parse tree will have "NP3 NP3" as the labels for the auxiliary task.

Local attention (LA). Besides the training objective, modifying the architecture is another way to change the inductive biases of the model. As there is a huge number of potential architectural modifications, we constrain our changes to the attention mechanism only, as this does not change the total number of parameters and thus makes a fair comparison easier. Instead of using standard self-attention, we adopt local attention, where the attention window is limited to only the $k$ tokens immediately preceding the target token (Roy et al., 2021; Sun and Iyyer, 2021). We hope that these local attention variants can more easily pick up a recency bias previously shown to exist in RNN language models (Kuncoro et al., 2018). However, note that although the model only attends to the previous $k$ tokens in each layer, the effective receptive field can still be large, as information is propagated through the stacked Transformer layers.

To measure the amount of linguistic knowledge captured by each language model variant, we use BLiMP (Warstadt et al., 2020a), a benchmark of English linguistic minimal pairs. It contains pairs of grammatical and ungrammatical sentences, the latter of which is minimally edited from the grammatical one. The sentence pairs fall into 67 paradigms spanning 12 common English grammar phenomena. A language model makes the correct prediction on this task when it assigns the grammatical sentence a higher probability than the ungrammatical one. Each paradigm contains 1K examples, and the accuracy on each paradigm can be treated as a proxy for the amount of the specific linguistic knowledge encoded by the LM.
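The training-objective variants and the minimal-pair evaluation above can be sketched as follows (illustrative only; the $\gamma$, $t$ and $\alpha$ values here are placeholders rather than the paper's settings, and the auxiliary labels are assumed to be precomputed per token).

```python
import torch
import torch.nn.functional as F

def lm_losses(logits, targets, variant="standard", gamma=2.0, threshold=0.9,
              aux_logits=None, aux_targets=None, alpha=0.5):
    """Per-token LM loss variants: standard NLL, focal, masked, and auxiliary."""
    log_probs = F.log_softmax(logits, dim=-1)                        # (batch, seq, vocab)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)   # surprisal l_i
    p = nll.neg().exp()                                              # p(w_i | w_<i)

    if variant == "focal":                 # down-weight well-predicted tokens
        loss = ((1 - p) ** gamma) * nll
    elif variant == "masked":              # zero out tokens predicted above the threshold
        loss = (p < threshold).float() * nll
    elif variant == "auxiliary":           # add constituency-label prediction loss
        aux_nll = F.cross_entropy(aux_logits.transpose(1, 2), aux_targets, reduction="none")
        loss = alpha * nll + (1 - alpha) * aux_nll
    else:
        loss = nll
    return loss.mean()

def sentence_logprob(logits, targets):
    """Total log-probability of a token sequence; for BLiMP-style minimal pairs the model
    is counted correct if the grammatical sentence scores higher than the ungrammatical one."""
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1).sum(-1)

# toy usage
logits = torch.randn(2, 6, 100)
targets = torch.randint(0, 100, (2, 6))
print(lm_losses(logits, targets, variant="focal"), sentence_logprob(logits, targets))
```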
Figure 3 gives a picture of the architecture that we have currently implemented for this task. We primarily focus on data preprocessing to improve the quality of the utterances and to enhance the feature representations. We started our approach with simple term-frequency features on top of a CNN+BiLSTM; these methods are discussed next. We improved our model's accuracy at each step by stacking and modifying the features. Our whole approach is based on the architecture shown in Figure 3. Below are the steps followed before building the model; performance improved after data cleansing.
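Since the architecture in Figure 3 is only described at a high level, the following is a minimal, assumed sketch of a CNN+BiLSTM utterance classifier of that general shape; the layer sizes and the classification head are placeholders, not the tuned configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNBiLSTM(nn.Module):
    """Illustrative CNN+BiLSTM utterance classifier matching the described pipeline."""
    def __init__(self, vocab_size, num_classes, emb_dim=100,
                 num_filters=64, kernel_size=3, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=1)
        self.lstm = nn.LSTM(num_filters, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                         # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)           # (batch, emb, seq)
        x = F.relu(self.conv(x)).transpose(1, 2)          # (batch, seq, filters)
        _, (h, _) = self.lstm(x)                          # h: (2, batch, hidden)
        return self.fc(torch.cat([h[0], h[1]], dim=-1))   # class logits

model = CNNBiLSTM(vocab_size=5000, num_classes=4)
print(model(torch.randint(1, 5000, (8, 20))).shape)       # torch.Size([8, 4])
```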
In order to evaluate spontaneous multimodal input production, data from the training session of a simulation experiment were analyzed.

Procedure. The multimodal system called SIM, Sistema Interattivo per la Modulistica, was simulated to assist students with form-filling tasks. Conversing with SIM, users had to gather information on how to fill out form fields (user questions) and to provide personal data for automatic insertion (user answers). Hard-copy instructions described the system's capabilities and required participants to complete the task as quickly and accurately as possible. No examples of dialogue were given directly, to avoid biasing communication behavior. Participants worked individually in a user room and were monitored by a closed-circuit camera. Dialogues and pointing were logged and interactions videotaped during all experimental sessions. At the end, all students filled in a user satisfaction questionnaire (USQ) and were debriefed.

Simulation. The system was simulated with the Wizard of Oz technique, in which a human (the wizard) plays the role of the computer behind the human-computer interface (Fraser and Gilbert, 1991). A semi-automatic procedure supported the simulation, which was carried out on two connected SUN SPARC workstations. Interface constraints and several pre-loaded utterances (including a couple of prefixed answers for every task-relevant action, error messages, help and welcoming phrases) supported the two trained wizards. These strategies have been found to increase simulation reliability by reducing response delays and lessening the attentional demand on the wizard (Oviatt et al., 1992).

The user interface was composed of a dialogue window in the upper part and a form window in the lower part of the screen (Figure 1).[3] In the dialogue window users typed their input and read system output. Pointing was supported by the mouse, and the pointer was constrained to the form window. SIM was simulated with good dialogue capabilities, although still far from human abilities. It accepted every type of multimodal reference, i.e., with or without linguistic anchorage for the gesture, and with either close or far pointing. It could understand ellipses and complete linguistic references. The form layout had nine fields grouped in three rows. Rows and fields had meaningful labels (one or two words) suggesting the required content. Users could refer both to single fields and to rows as a whole. When a row was selected, SIM gave instructions on the content of the whole row. After users had received the row information, subsequent field selections were answered with more synthetic instructions. SIM required users to click on the referred field and gave visual feedback for this.[4] It also supported multiple references. System answers were always multimodal, with demonstrative pronouns and synchronized (visual) pointing.

Participants and Design. Twenty-five students from Trieste University participated in the simulations as paid volunteers. Participants' ages ranged from 20 to 31, and all were native Italian speakers. Participants were grouped into two different sets according to their computer experience. Selection was achieved by a self-administered questionnaire on computer attitude and experience. Half of the sample consisted of experienced users: skilled typists with a positive attitude towards computers and some programming experience.
The other half was composed of students who had never used a computer before.

[3] This is the opposite of WYSIWYG (What You See Is What You Get) interfaces, where what can be done is clearly visible.
[4] In a previous study we demonstrated that visual feedback allows more efficient multimodal input, increases the integration of pointing within writing, and is preferred by users (De Angeli et al., 1996; De Angeli, 1997).
2
The tasks of SeeDev-binary and BB-event can both be treated as binary relation extraction, which specifies whether there is an interaction between two entities. In relation extraction, the semantic and syntactic information of a sentence plays a significant role. Traditional methods usually need to design and extract complex features from the sentence based on domain-specific knowledge, such as tree kernels and graph kernels, to model the sentences. As a result, this leads to much lower generalization ability because the features are corpus dependent. Consequently, instead of complicated hand-designed feature engineering, we employ a convolutional neural network (CNN) to model the sentences, using convolution and max-pooling operations over the raw input with word embeddings, together with a fully connected neural network to learn higher-level features automatically. Furthermore, we employ POS embeddings to enrich the semantic information of words, distance embeddings to capture the relative distance between the entities, and entity-type embeddings as supplementary features of the sentence. All the feature embeddings are combined to build the final distributive semantic representation, which is fed to the convolutional neural network. As described in Fig. 1, the proposed model mainly contains two modules: distributive semantic representation building (word embedding, POS embedding, distance embedding and entity-type embedding) and CNN model training. In the next parts, we introduce more details. The traditional one-hot representation, which is employed by most machine learning methods, can vectorize text and plays an important role. However, it results in the problems of the semantic gap and the curse of dimensionality, which restrict its application. Consequently, in our proposed method, we employ the distributive semantic representation, first proposed by Hinton (1986), as the feature representation of the model. We then exploit the advantage of convolutional neural networks at modeling sentences to learn a sentence-level representation from the raw input. The distributive semantic representation is built as follows. For simplicity, we define S = {e1, w1, w2, ..., wn, e2} as the word sequence between the two entities in one sentence, where e1 and e2 stand for the entities and w1 ... wn stand for the words between the two entities. Instead of the traditional one-hot representation, we utilize the distributive semantic representation of words to address the problems of dimensionality and the semantic gap. Firstly, we employ the word2vec tool, which can effectively learn distributive representations of words from massive unlabeled data, to train word embeddings from the large collection of available PubMed abstracts. The embeddings, with low dimension and real values, contain rich semantic information and can be treated as the feature representation of words instead of one-hot vectors. Inspired by language models, we employ the contexts of the two entities to predict the relation type. In our experiments, the context is expressed by the words between the two entities in one sentence. Then, the word sequence is transformed into a word embedding matrix by looking up the word embedding table. The word embedding matrix can be treated as a local feature of the sentence and fed to the CNN model to learn global features which contribute to relation identification.
The word embedding matrix is represented as follows: φ_word(S) = [⟨W⟩_e1, ⟨W⟩_w1, ⟨W⟩_w2, ..., ⟨W⟩_wn, ⟨W⟩_e2], where W ∈ ℝ^{|V|×d} (|V| is the size of the dictionary and d is the dimension of the word embedding) is the word embedding table trained by word2vec on PubMed abstracts and fine-tuned during training. Through analyzing the dataset, we observe that entities of different types have different probabilities of interacting with each other when the entity types satisfy the relation constraints. Consequently, the entity types of the two entities are an important factor for predicting the relation type. In our model, entity types are treated as extra features of the relation and as a supplement to the word sequence: the embeddings ⟨T⟩_type(e1) and ⟨T⟩_type(e2) are appended as extra features of the relation, φ_type(S) = [⟨W⟩_e1, ⟨W⟩_w1, ..., ⟨W⟩_wn, ⟨W⟩_e2, ⟨T⟩_type(e1), ⟨T⟩_type(e2)], where T ∈ ℝ^{|V_type|×d} is the type embedding table, randomly initialized by sampling from the uniform distribution [-0.25, 0.25]; type(·) returns the entity type and V_type is the dictionary of entity types. Word semantics usually has several aspects, including similarity, POS (part of speech) and so on. To enrich the semantic representation of each word, a POS embedding is introduced as a supplement to the word embedding: φ_pos(S) = [⟨P⟩_p(e1), ⟨P⟩_p(w1), ..., ⟨P⟩_p(wn), ⟨P⟩_p(e2), 0, 0]. We denote by P ∈ ℝ^{|V_pos|×d_p} the POS embedding table, which is randomly initialized in the same way as the type embedding, where |V_pos| is the size of the POS dictionary and d_p, a hyper-parameter, is the dimension of the POS embedding. We set d_p = 5 after trying different configurations. Zero vectors (0) are used to pad the sequence. In relation classification tasks, distance information usually plays an important role: it captures the relative position of each word with respect to the two entities. As shown in the following formulas, d(w, e1) stands for the relative distance between a word and the first entity, and d(w, e2) for the relative distance between a word and the second entity: φ_dist1(S) = [⟨D⟩_d(e1,e1), ..., ⟨D⟩_d(wn,e1), ⟨D⟩_d(e2,e1), 0, 0] and φ_dist2(S) = [⟨D⟩_d(e1,e2), ..., ⟨D⟩_d(wn,e2), ⟨D⟩_d(e2,e2), 0, 0], where D ∈ ℝ^{|V_dist|×d_d} is the distance embedding table and |V_dist| is the number of distinct distances. The embedding is randomly initialized and fine-tuned during training. We set d_d = 5 after trying different configurations, and zero vectors (0) are again used for padding. As shown in the following formula, the final distributive semantic representation is obtained by joining the word, type, POS and distance embeddings: φ(S) = [φ_type(S); φ_pos(S); φ_dist1(S); φ_dist2(S)]. After building the distributive semantic representation of the relation, we apply convolution and max-pooling to learn a global feature representation from the raw input. The computation is as follows: ⟨c⟩ = f(W_conv · φ(S) + b) and ⟨h⟩ = max⟨c⟩, where W_conv is the convolution filter that extracts local features from a given window of the word sequence. ⟨h⟩ can be treated as the global feature representation learned from the raw distributive representation φ(S) and is fed to the fully connected layer to learn hidden, higher-level features. As is well known, a convolutional neural network is a model with a large computational cost. Consequently, we implement the CNN model with Theano (Bergstra et al., 2010; Bastien et al., 2012) and run it on GPU kernels to accelerate training; as a result, it takes about half an hour to train a CNN model. Meanwhile, we make some modifications to our model to achieve better experimental results.
In the convolutional layer, we make use of multiple convolution kernels with different window sizes to capture sentence features from different views. In the fully connected layer, we apply dropout (Srivastava et al., 2014), a simple and efficient method to prevent overfitting: the dropout network prevents co-adaptation between nodes by randomly dropping some nodes, i.e., temporarily disabling them. The learning rate is one of the most important hyper-parameters in deep learning, so we employ Adadelta (Zeiler, 2012), an adaptive learning-rate method, to adapt the learning rate automatically instead of configuring it manually. Finally, we empirically search for a reasonable combination of all the hyper-parameters and tune them on the development dataset. The optimal parameters of the CNN model are described in Table 1.
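For concreteness, the following is a minimal PyTorch sketch of the model described above (multi-channel embeddings, convolutions with several window sizes, max-pooling, dropout and a fully connected classifier). It is illustrative only, not the authors' Theano implementation: dimensions, vocabulary sizes and the placement of the entity-type embeddings (concatenated at the classifier rather than appended to the token sequence) are simplifying assumptions.

```python
import torch
import torch.nn as nn

class CNNRelationClassifier(nn.Module):
    """Sketch of the CNN relation model: word/POS/distance/type embeddings,
    convolution + max-pooling, fully connected layer with dropout."""
    def __init__(self, vocab_size, pos_size, dist_size, type_size,
                 n_classes, word_dim=200, pos_dim=5, dist_dim=5,
                 n_filters=100, window_sizes=(3, 4, 5)):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)   # initialised from word2vec in practice
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        self.dist_emb = nn.Embedding(dist_size, dist_dim)
        self.type_emb = nn.Embedding(type_size, word_dim)
        feat_dim = word_dim + pos_dim + 2 * dist_dim
        # multiple kernels with different window sizes
        self.convs = nn.ModuleList(
            nn.Conv1d(feat_dim, n_filters, k, padding=k // 2) for k in window_sizes)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(len(window_sizes) * n_filters + 2 * word_dim, n_classes)

    def forward(self, words, pos, dist1, dist2, e1_type, e2_type):
        # words/pos/dist*: (batch, seq_len); e*_type: (batch,)
        x = torch.cat([self.word_emb(words), self.pos_emb(pos),
                       self.dist_emb(dist1), self.dist_emb(dist2)], dim=-1)
        x = x.transpose(1, 2)                          # (batch, feat_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        types = torch.cat([self.type_emb(e1_type), self.type_emb(e2_type)], dim=-1)
        h = self.dropout(torch.cat(pooled + [types], dim=-1))
        return self.fc(h)                              # relation scores
```

In use, the returned scores would be passed to a cross-entropy loss and optimised with Adadelta, mirroring the training setup described above.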
2
Our method is fairly simple. We take a state-of-the-art parsing model, the Berkeley parser (Petrov et al., 2006), train it on data with explicit empty elements, and test it on word lattices that can nondeterministically insert empty elements anywhere. The idea is that the state-splitting of the parsing model will enable it to learn where to expect empty elements to be inserted into the test sentences. Tree transformations Prior to training, we alter the annotation of empty elements so that the terminal label is a consistent symbol (ϵ), the preterminal label is the type of the empty element, and -NONE- is deleted (see Figure 2b). This simplifies the lattices because there is only one empty symbol, and helps the parsing model to learn dependencies between nonterminal labels and empty-category types because there is no intervening -NONE-. Then, following Schmid (2006), if a constituent contains an empty element that is linked to another node with label X, we append /X to its label. If there is more than one empty element, we process them bottom-up (see Figure 2b). This helps the parser learn where to expect empty elements. In our experiments, we did this only for elements of type *T*. Finally, we train the Berkeley parser on the preprocessed training data. Lattice parsing Unlike the training data, the test data does not mark any empty elements. We allow the parser to produce empty elements by means of lattice parsing (Chappelier et al., 1999), a generalization of CKY parsing that parses a word lattice instead of a predetermined list of terminals. Lattice parsing adds a layer of flexibility to existing parsing technology and allows parsing in situations where the yield of the tree is not known in advance. Lattice parsing originated in the speech processing community (Hall, 2005; Chappelier et al., 1999), and was recently applied to the task of joint clitic segmentation and syntactic parsing in Hebrew (Goldberg and Tsarfaty, 2008; Goldberg and Elhadad, 2011) and Arabic (Green and Manning, 2010). Here, we use lattice parsing for empty-element recovery. We use a modified version of the Berkeley parser which allows handling lattices as input. 2 The modification is fairly straightforward: each lattice arc corresponds to a lexical item. Lexical items are now indexed by their start and end states rather than by their sentence position, and the initialization procedure of the CKY chart is changed to allow lexical items with spans greater than 1. We then make the necessary adjustments to the parsing algorithm to support this change: trying rules involving preterminals even when the span is greater than 1, and not relying on span size for identifying lexical items. At test time, we first construct a lattice for each test sentence that allows 0, 1, or 2 empty symbols (ϵ) between each pair of words or at the start/end of the sentence. Then we feed these lattices through our lattice parser to produce trees with empty elements. Finally, we reverse the transformations that had been applied to the training data.
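The following is a minimal sketch of how such test-time lattices could be built, assuming a simple representation in which states are integers and each arc is a (start state, end state, symbol) triple; it is not the input format of the modified Berkeley parser.

```python
def build_lattice(tokens, max_empties=2, empty_symbol="*EMPTY*"):
    """Return (arcs, final_states) for a lattice that allows 0, 1, or 2 empty
    symbols between any pair of words and at the start/end of the sentence.

    Anchor state i sits before token i; in every gap (including sentence start
    and end) a short chain of empty-symbol arcs allows 0..max_empties empties.
    """
    arcs = []
    n = len(tokens)
    next_aux = n + 1            # anchor states are 0..n, auxiliary states follow
    gap_states = []             # gap_states[i] = states reachable inside gap i
    for i in range(n + 1):
        states = [i]
        prev = i
        for _ in range(max_empties):
            arcs.append((prev, next_aux, empty_symbol))   # consume one empty symbol
            states.append(next_aux)
            prev = next_aux
            next_aux += 1
        gap_states.append(states)
    for i, tok in enumerate(tokens):
        # the token can start after 0, 1, or 2 empties in gap i
        for s in gap_states[i]:
            arcs.append((s, i + 1, tok))
    final_states = set(gap_states[n])   # the sentence may also end with empties
    return arcs, final_states

# Example: arcs, finals = build_lattice(["John", "wants", "to", "leave"])
```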
2
To find sentence types typical of the clinical domain, a comparison to standard text was conducted. The clinical corpus consisted of free-text entries from assessment sections, thus mostly containing diagnostic reasoning, randomly selected from the Stockholm EPR corpus 1 (Dalianis et al., 2009); the standard corpus was Läkartidningen (Kokkinakis, 2012), a journal from the Swedish Medical Association. The comparison was carried out on part-of-speech sequences at the sentence level. The part-of-speech tagger Granska (Carlberger and Kann, 1999), which has an accuracy of 92% on clinical text (Hassel et al., 2011), was applied to both corpora, and the proportion of each sentence tag sequence was calculated. 'Sentence tag sequence' here refers to the parts of speech corresponding to each token in the sentence, combined into one unit, e.g. 'dt nn vb nn mad' for the sentence 'The patient has headache.'. Pronouns, nouns and proper names were collapsed into one class, as they often play the same role in the sentence, and as terms specific to the clinical domain are tagged inconsistently as either nouns or proper names (Hassel et al., 2011). As sentences from Läkartidningen not ending with a full stop or a question mark are less likely to be full sentences, they were excluded, in order to obtain a more contrasting corpus. A 95% confidence interval for the proportion of each sentence type was computed using the Wilson score interval, and the difference between the minimum frequency in the clinical corpus and the maximum frequency in the standard language corpus was calculated. Thereby, statistics for the minimum difference between the two domains were obtained. A total of 458,436 sentence types were found in the clinical corpus. Of these, 1,736 types were significantly more frequent in the clinical corpus than in the standard corpus, i.e., their proportions had non-overlapping confidence intervals. Thirty-three sentence types, to which 10% of the sentences in the corpus belonged, had more than 0.1 percentage points difference between the minimum frequency in the clinical corpus and the maximum frequency in the standard language corpus. For each of these 33 sentence types, 30 sentences were randomly extracted and the dependency parser MaltParser (Nivre et al., 2009), pre-trained on Talbanken (Nivre et al., 2006) using the stacklazy algorithm (Nivre et al., 2009), was applied to these part-of-speech tagged sentences. Error categories were manually identified, using MaltEval (Nilsson and Nivre, 2008) for visualisation. Given the identified error categories, two pre-processing rules were constructed. These were then evaluated by applying the same pre-trained parser model to pre-processed sentences as well as to the original sentences. A manual analysis was performed on a subset of the sentences that were parsed differently after pre-processing.
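A minimal sketch of the interval computation described above, assuming raw counts per sentence type in each corpus; the Wilson score interval formula is standard, while the function and variable names are illustrative.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    if n == 0:
        return 0.0, 0.0
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def min_difference(clin_count, clin_total, std_count, std_total):
    """Difference between the minimum frequency of a sentence type in the
    clinical corpus and its maximum frequency in the standard corpus."""
    clin_low, _ = wilson_interval(clin_count, clin_total)
    _, std_high = wilson_interval(std_count, std_total)
    return clin_low - std_high   # > 0 means significantly more frequent clinically
```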
2
To explore the topic of usability and translation modality further, a within-subject experiment was designed to compare MS Word translated from English into Japanese using raw MT (MT) with a released version of that same product (HT). Since the number of available participants was limited by location and time, a within-subject design was the best option to have enough participants for a statistical analysis. This research poses the following questions: RQ1: Will users perform the same number of successful tasks regardless of the scenario used (English original version, MT, or HT)? RQ2: Will there be differences in time when participants perform the tasks in the different scenarios (English, MT or HT)? RQ3: Will the participants be equally satisfied when using the English, MT or HT scenario? RQ4: Will participants expend different amounts of cognitive effort when performing the tasks in different scenarios? Following specific studies on usability mentioned in this paper (Castilho et al., 2014; Castilho, 2016; O'Brien, 2012, 2014), usability was defined as per the ISO/TR 16982 guidelines: "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use" (ISO 2002). 2 Effectiveness was measured through task completion. Users were presented with tasks to complete through interaction with different components of the user interface. The more tasks the user completed following specific instructions, the higher the effectiveness score (from 0 to 100). The following formula was used to calculate the effectiveness score: Effectiveness = (number of tasks completed successfully / total number of tasks undertaken) × 100. Efficiency was measured by considering the tasks that were completed in relation to the time it took to complete them: if less time was invested to complete a task, the efficiency score was higher, and vice versa. The following formula was used to calculate the efficiency rate: Efficiency = (Σ tasks completed successfully / total time spent on the tasks) × 100. Efficiency was also measured in terms of cognitive effort using an eye-tracking device. Fixation duration (total length of fixations in an area of interest, or AOI) and fixation count (total number of fixations within an AOI) were measured. Eye-tracking has been established as an adequate tool to measure cognitive effort in MT studies (Doherty and O'Brien, 2009; Doherty et al., 2010). Satisfaction was measured through an IBM After-Scenario Questionnaire (ASQ) (Lewis, 1995) containing a series of statements that users rated. This questionnaire was chosen instead of other frequently used questionnaires such as the System Usability Scale (SUS) or the Post-Study System Usability Questionnaire (PSSUQ) because, in this project, two sets of tasks (1, 2, 3 and 4, 5, 6) were assessed, while the other questionnaires are better suited to rating an entire system. The ASQ has three questions rated on a 7-point Likert-type scale. The test was modified to address the language factor, with two questions differentiating between the quality of the instructions and that of Word, as follows: 1. Overall, I am satisfied with the ease of completing the tasks in this scenario. 2. Overall, I am satisfied with the time it took to complete the tasks in this scenario. 3. Overall, I am satisfied with the instructions given for completing the tasks. 4. Overall, I am satisfied with the language used in the Word menus, dialog boxes and buttons. The participants could rate from 1 (Strongly agree) to 7 (Strongly disagree).
Question 3 was added, even though it does not refer to MS Word specifically, because participants always worked with the Instructions window visible. In collaboration with Microsoft Ireland, the business partner for this research project, the different applications that form part of the Office suite were analyzed, and Word was chosen as the optimal application for the experiment. This was firstly because the study sought to reach as many participants as possible and Word is the most popular application in the suite, and secondly because it was important to measure the impact of translation modality as opposed to the users' skills or knowledge when using an application, and Word is a relatively easy application to use. The languages analyzed here were English and Japanese. English was used as the control group and Japanese was chosen because it is a language traditionally considered difficult for MT. (The ISO definition cited above is available online at http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31176, last accessed April 2nd 2019.) The software version used was Microsoft Word 2016 MSO (16.0.9126.2315) 32-bit in English and in Japanese. The provider's translation cycle involves MT and full PE. The final quality of the translation delivered by the service provider is equal to publishable quality as defined in the localization instructions and the quality evaluation channels the localization assets go through. It is relevant to note that the localization process might involve translating with no previous reference but, in general, it includes MT and translation memories, among other reference material, as well as a review cycle. A specially devised version of Word was used for the Japanese MT scenario, translated from English using the business partner's highly customized Microsoft Translator SMT. 3 At the time of implementing this experimental setup, customized Microsoft NMT was not available. A warm-up task and 6 subsequent tasks were selected. The criteria for selection were that they contained enough text to measure the translation modality, that they were coded for telemetry purposes (for a second phase of this experimental project), that they could be performed in all the languages tested (German, Spanish, Japanese and English), and that they were relatively new or non-standard so as to minimize the effect of previous experience. The warm-up task involved selecting a paragraph and changing the font. The six tasks were: 1) selecting a digital pen and drawing a circle using a defined thickness and color, 2) changing the indentation and spacing for the paragraph (presented to the users), 3) automatically reviewing the document, 4) selecting an option from the Word Options dialog box in the corresponding menu, 5) inserting a section break; and 6) finding the Learning Tools in the corresponding menu and changing the page appearance. The tasks were evaluated by an English native speaker to test the instructions and the environment. Since it was not possible to analyze the original and translated text with standard readability metrics, a Japanese native speaker evaluated the tasks in the Japanese released version and in the raw MT environment. This evaluator commented on the high quality of the MT, although she flagged the sentences and words that were not idiomatic, were wrong, or differed from the released version.
The errors spotted in the MT scenario in the selected tasks were comparable to those for the other languages that were going to be included in the project. (Footnote 3: https://hub.microsofttranslator.com/) The instructions for the experiment were translated using Microsoft's localization services. They translated the texts following specific instructions to respect the fluency and accuracy of the text and the experimental design. Three scenarios (i.e. conditions) were defined for the experiment: MT, HT and English. The Japanese participants in Group 1 completed three tasks as A) HT and three tasks as B) MT, while participants in Group 2 were presented with the same tasks but in reverse order, that is, B) MT, A) HT. This served to counterbalance the within-subject effect. Between scenarios, there was a brief pause that allowed the researcher to change the Word configuration and recalibrate the eye-tracker. The English-speaking group was presented with a warm-up task and 6 tasks. As with the Japanese group, they had a brief pause between the tasks, replicating the same environment. The participants were asked to fill in a questionnaire before the experiment. The questionnaire assessed the users' experience with word-processing applications and Word, their native language and level of English, gender, age, education level, as well as their experience in doing the tasks that were part of the experiment. The questionnaire was provided by email using Google Forms. The criteria for the inclusion of volunteer participants were that they were native speakers, that they were willing to participate in the research and sign a consent form, and that they were frequent users of word-processing applications. The participants were recruited through advertisements in social media and email lists within Dublin City University, although the participants were not limited to students or people associated with the university. The participants were given a €20 voucher for their contribution. All participants received a Plain Language Statement and signed an Informed Consent form before the experiment (DCUREC/2017/200). 42 participants took part in the experiment: 20 English speakers and 22 Japanese speakers. 12 Japanese participants were assigned to Group 1 and 10 participants to Group 2. The reason for the difference in the number of Japanese participants is that some eye-tracking data were discarded due to poor recording quality (see Section 3.7). Also, after examination, the data from two EN participants were discarded because of changes in the original set-up (Word version). 75% of participants identified as women and 25% as men. The age distribution is important as it might be an indicator of experience with the application; for example, although all of them reported experience using Microsoft Word, the EN group reported a higher level of experience. Also, when participants were asked about their experience with the 6 experimental tasks, the Japanese group (JP) reported an average experience of 2.1 tasks out of 6 (35.61%) while the EN group reported an average of 3.8 tasks out of 6 (62.96%). When they were asked to rate their level of proficiency (i.e. "How would you describe your level of proficiency when working with word-processing applications?"), the average value for the EN group was 3.83 on a 5-point Likert scale (1 being Novice and 5 being Very proficient) while the JP group selected 2.14.
A Mann-Whitney test for self-reported experience suggests that there is a significant difference in the level of perceived experience between the two groups (U=24, p<0.05): JP participants reported significantly lower experience than EN participants. The data recording equipment consisted of a Tobii X60 XL, a widescreen eye-tracker with a 24-inch monitor and a 60Hz sampling rate, and a laptop computer (Intel Core 1.7 vPro, 2.00 GHz, 2 cores, 4 logical processors, 8 GB RAM). The laptop was used for stimulus presentation and eye-movement recording. The stimuli were presented at a 1600 x 900 resolution. The software used to record and analyze the data was Tobii Studio 3.4.5 1309, Professional Edition. The fixation filter selected was an I-VT filter provided by the manufacturer. The filter has a velocity threshold of 30 degrees, a maximum time between fixations of 75 ms and a maximum angle of 0.5 degrees. Fixations under 60 ms were discarded. The participants were calibrated using a nine-point calibration screen (automatic). The participants were recalibrated if the Tobii system reported a poor calibration or if the calibration points were not clearly defined within the calibration grid. The optimal distance to the eye-tracker was set at 67 cm; however, this varied as the participants were not tested using a chin rest, in order to preserve ecological validity during the experiment. To estimate cognitive effort using the eye-tracker, two Areas of Interest (AOIs) were defined: one comprised the Instructions window (25.7%, 369516 px) and the other the Word application window (74%, 1065165 px). Two participants in the JP group moved the screens slightly, so the AOIs for these 2 participants were slightly different for the Instructions (22.81%, 328500 px) and the Word application (76.9%, 1107000 px) windows. To test the quality of the sample, the gaze sample data in the Tobii system and the velocity charts were checked. Moreover, the segments of interest were exported (each segment represented a task timeline, so six segments were exported per participant) to calculate the eye validity codes within these segments. A minimum 80% gaze sample was required for a recording to be considered valid and to be included in the statistical analysis. This meant that each participant had at least one eye or both eyes on the segments 80 per cent of the time. Once the participants had completed the tasks, their gaze data was replayed, and they were asked to comment on what they were doing, thinking or feeling during the experiment. The participants were recorded using Flashback Express 5. The interviews took approximately 15 minutes. The researcher asked certain questions to elicit responses from the participants, such as: How did you find this task? What were you thinking at this point? How was the language in this menu? Had you done this task before? Did you notice any difference in Word when you came back from the pause? To analyze the results graphically and statistically, SAS v9.4 and IBM SPSS Statistics v24 were used. Statistical decisions were made with a significance value of 0.05. To determine the effect of the scenario (HT, MT and EN) on each response variable (effectiveness, efficiency and satisfaction), a general linear mixed model (hereafter, a mixed model) was fitted with scenario, task group (1, 2, 3 vs. 4, 5, 6) and their interaction as factors (Type III test).
The tasks and scenarios are considered fixed factors, and the repeated measures of each participant are included in the model (random effects). Table 1 shows that HT yields higher effectiveness scores on average than the MT scenario in both groups of tasks, with the EN group obtaining the highest scores; Figure 1 illustrates these results. A mixed model for effectiveness shows that there are statistically significant differences between scenarios (F(2, 37)=4.26; p=0.0216) and tasks (F(1, 37)=64.73; p<.0001). The estimated mean effectiveness is 78.47 in the EN, 64.65 in the HT and 57.22 in the MT scenario.
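As an illustration, a mixed model of this kind could be fitted in Python with statsmodels as sketched below; the column names and input file are hypothetical, and the Type III tests reported here were obtained with SAS/SPSS rather than with this library.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant x task group, with columns
# participant, scenario (EN/HT/MT), task_group ("1-3" or "4-6"), effectiveness.
df = pd.read_csv("usability_scores.csv")   # hypothetical file

# Fixed effects: scenario, task group and their interaction;
# random effect: participant (repeated measures).
model = smf.mixedlm("effectiveness ~ C(scenario) * C(task_group)",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```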
2
To deal with unknown (or out-of-vocabulary) words, we use a pipeline approach which predicts part-of-speech tags and morpho-syntactic features before lemmatization. In the first stage of the pipeline, we use MADA (Roth et al., 2008), an SVM-based tool that relies on the word context to assign POS tags and morpho-syntactic features. MADA internally uses the SAMA morphological analyser (Maamouri et al., 2010), an updated version of the Buckwalter morphology (Buckwalter, 2004). Second, we develop a finite-state morphological guesser that can provide all the possible interpretations of a given word. The morphological guesser first takes an Arabic surface form as a whole and then strips all possible affixes and clitics off one by one until all possible analyses are exhausted. The morphological guesser is highly non-deterministic, as it outputs a large number of solutions. To counteract this non-determinism, all the solutions are matched against the POS and morpho-syntactic tags output by MADA for the full surface token, and the analysis with the closest resemblance (i.e. the analysis with the largest number of matching morphological features) is selected. Besides the complexity of lemmatization described in Section 1.1, the problem is further compounded when dealing with unknown words that cannot be matched by existing lexicons. This requires the development of a finite-state guesser to list all the possible interpretations of an unknown string of letters (explained in detail in Section 3). To identify, extract and lemmatize unknown Arabic words we use the following sequence of processing steps (Figure 1): (1) A corpus of 1,089,111,204 tokens (7,348,173 types) is analysed with MADA. The number of types for which MADA could not find an analysis in the Buckwalter morphological analyser is 2,116,180 (about 29% of the types). These unknown types were spell-checked with the Microsoft Arabic spell checker in MS Office 2010; of the 2,116,180 unknown types, 208,188 were accepted as correct. The advantage of using spell checking at this stage is that it provides significant filtration of the forms (almost 90% reduction) and retains a more compact, more manageable, and better quality list of entries for further processing. The disadvantage is that there is no guarantee that all word forms not accepted by the MS speller are actually spelling mistakes (or that all the ones accepted are correct). (2) We select the types with a frequency of 10 or more among those accepted by the MS spell checker, resulting in a total of 40,277 types. (3) We use the full POS tags and morpho-syntactic features produced by MADA. (4) We use the finite-state morphological guesser to produce all possible morphological interpretations and the relevant lemmatizations. (5) We compare the POS tags and morpho-syntactic features in the MADA output with the output of the morphological guesser and choose the analysis with the highest matching score. For testing and evaluation we gold-annotate 1,310 words randomly selected from the 40,277 types, providing the gold lemma, the gold POS and a lexicographic preference for inclusion in a dictionary. It should be noted that working with the 2,116,180 types before filtering out possible spelling errors would require annotating a much larger gold standard.
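A minimal sketch of the matching step described above, i.e. choosing the guesser analysis with the largest number of morphological features agreeing with the MADA output; the feature dictionaries and example values are illustrative assumptions.

```python
def best_analysis(mada_features, guesser_analyses):
    """Pick the guesser analysis whose morphological features best match the
    MADA output for the same surface token.

    mada_features: dict such as {"pos": "noun", "gen": "f", "num": "s"}
    guesser_analyses: list of (lemma, features_dict) from the finite-state guesser.
    """
    def score(features):
        # number of MADA features reproduced exactly by this analysis
        return sum(1 for k, v in mada_features.items() if features.get(k) == v)
    return max(guesser_analyses, key=lambda analysis: score(analysis[1]))

# Illustrative values only:
# lemma, feats = best_analysis({"pos": "noun", "num": "s"},
#                              [("kitAb", {"pos": "noun", "num": "s"}),
#                               ("katab", {"pos": "verb", "num": "s"})])
```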
2
The idea is to extend the AMR with logical structure, obtaining a scoped representation AMR+ with two dimensions: one level comprising predicate-argument structure (the original AMR, minus polarity attributes), and one level consisting of the logical structure (information about logical operators such as negation and the scope they take). This is achieved by viewing an AMR as a recursive structure, rather than interpreting it as a graph, and performing two operations on it: 1. assign an index to each (sub-)AMR; 2. add structural constraints to the AMR via the indices. AMRs can be seen as recursive structures by viewing every slash within an AMR as a sub-AMR (Bos, 2016). If a (sub-)AMR contains relations, those relations will introduce nested AMRs. (A constant is also an AMR, following this view.) An AMR (and all its sub-AMRs) is labeled by decorating the slashes with indices (indices are indicated by numbers enclosed in square brackets). Every AMR is augmented by a set of scoping constraints on the labels. This way, a sub-AMR can be viewed as describing a "context". The constraints state how the contexts relate to each other. They can be declared to be the same context (=), a negated context (¬), a conditional context (⇒), or a presuppositional context (<). Colons are used to denote inclusion, i.e., l : C states that context l contains condition C. Note that these labels are similar in spirit to those used in underspecification formalisms proposed in the early 1990s (Reyle, 1993; Copestake et al., 1995; Bos, 1996). The treatment of presuppositions is inspired by semantic formalisms extending Discourse Representation Theory (Van der Sandt, 1992; Geurts, 1999; Venhuizen et al., 2013; Venhuizen et al., 2018).
2
We employ an encoder-decoder architecture with attention mechanisms (Sutskever et al., 2014; Luong et al., 2015) as illustrated in Figure 1. The framework consists of (1) an utterance-table encoder to explicitly encode the user utterance and table schema at each turn, (2) a turn attention incorporating the recent history for decoding, and (3) a table-aware decoder taking into account the context of the utterance, the table schema, and the previously generated query to make editing decisions. An effective encoder captures the meaning of user utterances, the structure of the table schema, and the relationship between the two. To this end, we build an utterance-table encoder with co-attention between the two, as illustrated in Figure 2. Figure 2b shows the utterance encoder. For the user utterance at each turn, we first use a bi-LSTM to encode the utterance tokens. The bi-LSTM hidden state is fed into a dot-product attention layer (Luong et al., 2015) over the column header embeddings. For each utterance token embedding, we take an attention-weighted average of the column header embeddings to obtain the most relevant columns (Dong and Lapata, 2018). We then concatenate the bi-LSTM hidden state and the column attention vector, and use a second-layer bi-LSTM to generate the utterance token embedding h^E. Figure 2c shows the table encoder. For each column header, we concatenate its table name and its column name separated by a special dot token (i.e., table name . column name). Each column header is processed by a bi-LSTM layer. To better capture the internal structure of the table schemas (e.g., foreign keys), we then employ self-attention (Vaswani et al., 2017) among all column headers. We then use an attention layer to capture the relationship between the utterance and the table schema. We concatenate the self-attention vector and the utterance attention vector, and use a second-layer bi-LSTM to generate the column header embedding h^C. Note that the two embeddings depend on each other due to the co-attention, and thus the column header representation changes across different utterances in a single interaction. Utterance-Table BERT Embedding. We consider two options as the input to the first-layer bi-LSTM. The first choice is a pretrained word embedding. Second, we also consider a contextualized word embedding based on BERT (Devlin et al., 2019). To be specific, we follow Hwang et al. (2019) and concatenate the user utterance and all the column headers in a single sequence separated by the [SEP] token: [CLS], X_i, [SEP], c_1, [SEP], ..., c_m, [SEP]. This sequence is fed into the pretrained BERT model, whose hidden states at the last layer are used as the input embeddings. To capture the information across different utterances, we use an interaction-level encoder on top of the utterance-level encoder. At each turn, we use the hidden state at the last time step of the utterance-level encoder as the utterance encoding. This is the input to a unidirectional LSTM interaction encoder: h^U_i = h^E_{i,|X_i|}, h^I_{i+1} = LSTM^I(h^U_i, h^I_i). The hidden state of this interaction encoder, h^I, encodes the history as the interaction proceeds. Turn Attention When issuing the current utterance, the user may omit or explicitly refer to previously mentioned information. To this end, we adopt a turn attention mechanism to capture the correlation between the current utterance and the utterance(s) at specific previous turn(s).
At the current turn t, we compute the turn attention as a dot-product attention between the current utterance and the previous utterances in the history, and then add the weighted average of the previous utterance embeddings to the current utterance embedding: s_i = h^U_t · h^U_i, α^{turn} = softmax(s), c^{turn}_t = h^U_t + Σ_i α^{turn}_i × h^U_i (1). The vector c^{turn}_t summarizes the context information and the current user query, and is used as the initial decoder state as described in the following. We use an LSTM decoder with attention to generate SQL queries by incorporating the interaction history, the current user utterance, and the table schema. Denoting the decoding step by k, we provide the decoder input as a concatenation of the embedding of the SQL query token q_k and a context vector c_k: h^D_{k+1} = LSTM^D([q_k; c_k], h^D_k), where h^D is the hidden state of the decoder LSTM^D, and the hidden state h^D_0 is initialized by c^{turn}_t. When the query token is a SQL keyword, q_k is a learned embedding; when it is a column header, we use the column header embedding given by the table-utterance encoder as q_k. The context vector c_k is described below. Context Vector with the Table and User Utterance. The context vector consists of attentions to both the table and the user utterance. First, at each step k, the decoder computes the attention between the decoder hidden state and the column header embeddings: s_l = h^D_k W^{column-att} h^C_l, α^{column} = softmax(s), c^{column}_k = Σ_l α^{column}_l × h^C_l (2), where l is the index of column headers and h^C_l is its embedding. Second, it also computes the attention between the decoder hidden state and the utterance token embeddings: s_{i,j} = h^D_k W^{token-att} h^E_{i,j}, α^{token} = softmax(s), c^{token}_k = Σ_{i,j} α^{token}_{i,j} × h^E_{i,j} (3), where i is the turn index, j is the token index, and h^E_{i,j} is the token embedding for the j-th token of the i-th utterance. The context vector c_k is a concatenation of the two: c_k = [c^{column}_k; c^{token}_k]. Output Distribution. In the output layer, our decoder chooses to generate either a SQL keyword (e.g., SELECT, WHERE, GROUP BY, ORDER BY) or a column header. This is critical for the cross-domain setting, where the table schema changes across different examples. To achieve this, we use separate layers to score SQL keywords and column headers, and finally use the softmax operation to generate the output probability distribution: o_k = tanh([h^D_k; c_k] W^o), m^{SQL} = o_k W^{SQL} + b^{SQL}, m^{column} = o_k W^{column} h^C, P(y_k) = softmax([m^{SQL}; m^{column}]) (4). In an interaction with the system, the user often asks a sequence of closely related questions to complete the final query goal. Therefore, the query generated for the current turn often overlaps significantly with the previous ones. To empirically verify the usefulness of leveraging the previous query, we consider the process of generating the current query by applying copy and insert operations to the previous query. Figure 3 shows the SQL query length and the number of copy and insert operations at different turns. As the interaction proceeds, the user question becomes more complicated, as it requires a longer SQL query to answer. However, more query tokens overlap with the previous query, and thus the number of new tokens remains small at the third turn and beyond. Based on this observation, we extend our table-aware decoder with a query editing mechanism. We first encode the previous query using another bi-LSTM, and its hidden states are the query token embeddings h^Q_{i,j} (i.e., the j-th token of the i-th query).
We then extend the context vector with an attention to the previous query: c_k = [c^{column}_k; c^{token}_k; c^{query}_k], where c^{query}_k is produced by an attention over the query tokens h^Q_{i,j} in the same form as Equation 3. At each decoding step, we predict a switch p^{copy} to decide whether to copy from the previous query or to insert a new token: p^{copy} = σ(c_k W^{copy} + b^{copy}), p^{insert} = 1 − p^{copy} (5). Then, we use a separate layer to score the query tokens at turn t−1, and the output distribution is modified as follows to take the editing probability into account: P^{prevSQL} = softmax(o_k W^{prevSQL} h^Q_{t−1}), m^{SQL} = o_k W^{SQL} + b^{SQL}, m^{column} = o_k W^{column} h^C, P^{SQL∪column} = softmax([m^{SQL}; m^{column}]), P(y_k) = p^{copy} · P^{prevSQL}(y_k ∈ prevSQL) + p^{insert} · P^{SQL∪column}(y_k ∈ SQL ∪ column) (6). While the copy mechanism was introduced by Gu et al. (2016) and See et al. (2017), they focus on summarization or response generation applications by copying from the source sentences. By contrast, our focus is on editing the previously generated query while incorporating the context of user utterances and table schemas.
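A minimal PyTorch-style sketch of the editing output distribution in Equations 5-6 is given below; tensor names and shapes are assumptions, and for simplicity the copy and generation distributions are concatenated rather than summed over tokens that occur in both vocabularies.

```python
import torch
import torch.nn.functional as F

def edit_output_distribution(o_k, c_k, h_prev_query, h_columns,
                             W_prev, W_sql, b_sql, W_col, W_copy, b_copy):
    """Combine copying from the previous query with generating a SQL keyword
    or column header (a sketch of Equations 5-6)."""
    # switch between copy and insert
    p_copy = torch.sigmoid(c_k @ W_copy + b_copy)            # shape (1,)
    p_insert = 1.0 - p_copy

    # distribution over the tokens of the previous query
    prev_scores = (o_k @ W_prev) @ h_prev_query.T            # (n_prev,)
    p_prev = F.softmax(prev_scores, dim=-1)

    # distribution over SQL keywords and column headers
    m_sql = o_k @ W_sql + b_sql                              # (n_keywords,)
    m_col = (o_k @ W_col) @ h_columns.T                      # (n_columns,)
    p_gen = F.softmax(torch.cat([m_sql, m_col]), dim=-1)

    # final mixture: copy mass on previous-query tokens, insert mass on the rest
    # (in practice, probabilities for a token present in both sets would be summed)
    return torch.cat([p_copy * p_prev, p_insert * p_gen])
```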
2
The methodology used by the participants in the shared task on speech recognition for vulnerable individuals in Tamil is discussed in this section. The pre-trained transformer models used by the participants in this shared task are: Amrrs/wav2vec2-large-xlsr-53-tamil 1 (Dhanya et al., 2022); akashsivanandan/wav2vec2-large-xslr-300mtamil-colab-final 2 (Dhanya et al., 2022); nikhil6041/wav2vec2-large-xlsr-tamilcommonvoice 3 (Dhanya et al., 2022); and Rajaram1996/wav2vec-large-xlsr-53-tamil 4 (Suhasini and Bharathi, 2022). The above-mentioned models are fine-tuned from the facebook/wav2vec-large-xlsr-53 5 pre-trained model using the multilingual Common Voice dataset. To fine-tune the model, a classifier representing the downstream task's output vocabulary is added on top of it and trained with a Connectionist Temporal Classification (CTC) loss on the labelled data. The models used are based on the XLSR wav2vec model, which is capable of learning cross-lingual representations from raw speech data.
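A minimal sketch of such CTC fine-tuning with the HuggingFace Transformers library is shown below; the processor checkpoint, the dummy audio and transcript, and details such as freezing the feature encoder are assumptions for illustration, not the participants' exact training code.

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Processor (feature extractor + Tamil character tokenizer); reusing the one
# shipped with one of the fine-tuned checkpoints listed above is an assumption.
processor = Wav2Vec2Processor.from_pretrained("Amrrs/wav2vec2-large-xlsr-53-tamil")

# Start from the multilingual XLSR checkpoint and add a randomly initialised
# CTC head sized to the downstream output vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    vocab_size=len(processor.tokenizer),
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
)
model.freeze_feature_encoder()  # older library versions: freeze_feature_extractor()

# One illustrative training step on a single (waveform, transcript) pair.
waveform = np.random.randn(16000).astype("float32")          # 1 s of dummy 16 kHz audio
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
labels = processor.tokenizer("வணக்கம்", return_tensors="pt").input_ids  # dummy transcript

loss = model(inputs.input_values, labels=labels).loss         # CTC loss
loss.backward()
```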
2
First of all, we present the methodological framework. Then we introduce the crucial part of the image-image search engine, i.e., SIFT-based image similarity measurement. Finally, we list the preprocessing methods for collecting and processing raw data. Our method can be regarded as a kind of Cross-Media Information Retrieval (CMIR) technique. The main framework of CMIR is closely similar to that of CLIR. The only difference between them is the bridge used to link a text in the source language with its equivalents in the target language: for the former, the bridge is the image, while for the latter it is the language (e.g., keywords and translations). Figure 2 shows the framework of CMIR; we also provide that of CLIR for comparison. For our method, i.e., CMIR, we collect the texts which summarize the main contents of images, and map the texts to the images in a one-to-one way. On this basis, we search for comparable texts by pair-wise image similarity measurement. By contrast, CLIR generally employs a somewhat weak translator or a bilingual dictionary to generate rough or partial translations (see Section 2). Such translations are used as queries by a text search engine to acquire higher-level equivalents; for example, Talvensaari et al. (2007) and Bo et al. (2010) use the translations of keywords as clues to detect document-level equivalents. To some extent, CMIR is easier to use than CLIR: the crucial issue for CMIR is only to improve the quality of the search results, whereas CLIR additionally needs to consider the quality of the bilingual dictionaries or the performance of the weak translators. In order to conduct CMIR, however, we need to ensure that there is indeed a correspondence between each pair of image and text, meaning that the text sufficiently depicts the meaning of the image. To fulfil this requirement, we collect the images and their captions from structure-fixed webpages, and use them to build a reliable data bank for CMIR. In practice, we collect the pairs of images and captions from news websites in the source language and in the target language, building the source Data Bank (SDB) and the target Data Bank (TDB) respectively. Given a caption C_s in the SDB and the corresponding image I_s, we calculate the image similarity between I_s and all images I_t in the TDB. Then we rank all the I_t based on image similarity. Finally, we select the captions C_t of the most highly ranked I_t as the equivalents of C_s. The pairs of C_s and C_t are used as bilingual sentence pairs to construct the comparable corpora. The image-image search engine uses each of the images in the SDB as a query. For every query, the engine goes through all the images in the TDB and measures their visual similarity to the query. The similarity is used as the criterion to rank the search results. In this paper, we employ the Scale-Invariant Feature Transform (SIFT) method for representing the images, creating scale-invariant keypoint-centered feature vectors. On this basis, we calculate the image similarity using the Euclidean distance of the keypoints. SIFT is an image characterization method which has been proven to be more effective than other methods in detecting local details from different perspectives at different scales. This advantage enables precise image-to-image matching. Figure 3 shows the theory behind SIFT. Figure 3: SIFT process (assume that the biggest triangle is the original image).
(Figure 3 caption, continued: the keypoints are denoted by directed square marks, the direction being shown by the line radiating outward from the middle of each mark; the figure shows the scale-invariant keypoints extracted by SIFT for the original image and the keypoints of the homogeneous images at different scales.) First, SIFT zooms in and out on the original image so as to obtain homogeneous images at different scales (see the three triangles at the left side of Figure 3). Second, SIFT extracts keypoints in each of the homogeneous images and merges them to generate a set of scale-invariant keypoints (see the points in the triangle at the right side of Figure 3). The feature space instantiated by those scale-invariant keypoints is scale-independent, and therefore extremely conducive to detecting visually similar images at different scales (Lowe et al., 1999; Lowe et al., 2004). SIFT employs the most distinctive point in a small area as a key feature, i.e., the so-called keypoint. Due to the local processing in different areas, SIFT is not only able to obtain locally optimal features but also maintains all the similar key features occurring in different parts of the image. Following the state-of-the-art SIFT method (Lowe et al., 2004; Yan et al., 2004; Hakim et al., 2006), we define a small area as the set of a sampling point and its adjacent points (neighbours). It is noteworthy that the area includes not just the neighbours in the original image but also those in the homogeneous images at different scales. We use a Gaussian function to fit the size of all the points in the area. On this basis, we use the difference of Gaussians to determine the extreme point, and specify this point as the distinctive point in the area. Figure 4: Different versions of the top 5 image search results, respectively obtained when the threshold θ was finely tuned from 0.6 to 0.9. We model each keypoint by pixel-wise vectors in the keypoint-centered 16*16 window. The vector represents both the direction and the value of the image gradient; Lowe et al. (2004) detail the gradient measurement method. In total, we extract the keypoints for the image representation. Given two images, we calculate their similarity by the average Euclidean distance of the matching keypoints. For a keypoint x in the source image, we determine the matching point in the target image by the following steps. First, we acquire the two most similar keypoints y and z in the target image. Assume that the similarity of (x, y) is smaller than that of (x, z); second, we calculate the ratio r of the similarity s(x, y) to s(x, z). If r is bigger than a threshold θ, we determine that the keypoint z is the matching point of x; otherwise there is no matching point of x in the target image. We set the threshold θ to 0.8. A smaller value of θ will introduce many unqualified matching points into the image similarity calculation, which reduces the precision of the image search results: most of the retrieved images would be either dissimilar or unrelated. By contrast, a larger value yields few available matching points, which affects the diversity of the search results: most of the retrieved images would be the same as each other or even extracted from the same provenance. Obviously, we would like them to derive from different media in different languages. Figure 4 lists a series of images, which are the top 5 search results obtained using different levels of θ.
This group of search results is very representative of our experiments, reflecting that the setting of θ = 0.8 is a reasonable boundary between correct and incorrect results. In particular, it can be seen that such a threshold ensures the diversity of the correct results (note that the query in this example is the left image in Figure 1). We crawl the images and captions using crawler4j 1, an open-source toolkit specially developed for effectively crawling web data. On this basis, we use regular expressions to extract images and captions from the structured source files of the crawled web pages. An optional preprocessing step for an experimental system is to index the images, which enables high-speed retrieval. We apply the locality-sensitive hashing (LSH) technique 2 for content-based image indexing.
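A minimal sketch of SIFT keypoint matching with a ratio threshold θ, using OpenCV rather than the authors' implementation; note that OpenCV compares descriptor distances, so the standard ratio test accepts a match when the distance ratio is below the threshold.

```python
import cv2

def sift_similarity(img_path_a, img_path_b, theta=0.8):
    """Match SIFT keypoints between two images with a ratio test and return
    the number of accepted matches and their mean descriptor distance."""
    sift = cv2.SIFT_create()
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)   # two nearest neighbours per keypoint

    # keep a match when the closest neighbour is clearly better than the second one
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < theta * pair[1].distance]
    if not good:
        return 0, float("inf")
    mean_dist = sum(m.distance for m in good) / len(good)
    return len(good), mean_dist
```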
2
Let t denote a 3-tuple (triple) consisting of a subject (t_s), predicate (t_p) and object (t_o). Linked Data resources are typed, and a resource's type is called its class. We write type(t_s) = c, meaning that t_s is of class c. p denotes a relation and r_p is the set of triples whose t_p = p, i.e., r_p = {t | t_p = p}. Given a specific class c and its pairs of relations (p, p') such that r_p = {t | t_p = p, type(t_s) = c} and r_p' = {t | t_p = p', type(t_s) = c}, we measure the equivalency of p and p' and then cluster equivalent relations. The equivalency is calculated locally (within the same class c) rather than globally (across all classes) because two relations can have identical meaning in a specific class context but not necessarily so in general. For example, for the class Book, the relations dbpp:title and foaf:name are used with the same meaning, but for Actor, dbpp:title is used interchangeably with dbpp:awards (e.g., Oscar best actor). In practice, given a class c, our method starts by retrieving all t from a Linked Data set where type(t_s) = c, using the universal query language SPARQL with any SPARQL endpoint. These data are then used to measure the equivalency of each pair of relations (Section 3.1). The equivalence scores are then used to group relations into equivalent clusters (Section 3.2). The equivalence of each distinct pair of relations depends on three components. Triple overlap evaluates the degree of overlap 2 in terms of the usage of relations in triples. Let SO(p) be the collection of subject-object pairs from r_p, and let SO_int(p, p') be their intersection: SO_int(p, p') = SO(r_p) ∩ SO(r_p') [1]. The triple overlap TO(p, p') is then calculated as TO(p, p') = MAX{ |SO_int(p, p')| / |r_p| , |SO_int(p, p')| / |r_p'| } [2]. Intuitively, if two relations p and p' have a large overlap of subject-object pairs in their data instances, they are likely to have identical meaning. The MAX function allows addressing infrequently used but still equivalent relations (i.e., where the overlap covers most triples of an infrequently used relation but only a very small proportion of a much more frequently used one). Subject agreement While triple overlap looks at the data in general, subject agreement looks at the overlap of the subjects of two relations, and the degree to which these subjects have overlapping objects. Let S(p) return the set of subjects of relation p, and O(p|s) return the set of objects of relation p whose subject is s, i.e.: O(p|s) = O(r_p|s) = {t_o | t ∈ r_p, t_s = s} [3]. We define: S_int(p, p') = S(r_p) ∩ S(r_p') [4], α = ( Σ_{s ∈ S_int(p,p')} [ 1 if |O(p|s) ∩ O(p'|s)| ≠ 0, 0 otherwise ] ) / |S_int(p, p')| [5], β = |S_int(p, p')| / |S(p) ∪ S(p')| [6]; then the agreement AG(p, p') is AG(p, p') = α × β [7]. Equation [5] counts the (normalized) number of overlapping subjects whose objects have at least one overlap. The higher the value of α, the more the two relations "agree" in terms of their shared subjects. For each shared subject of p and p' we count 1 if they have at least one overlapping object and 0 otherwise; this is because both p and p' can be 1:many relations and a low object-overlap value could mean that one is densely populated while the other is not, which does not necessarily mean they do not "agree". Equation [6] evaluates the degree to which two relations share the same set of subjects. The agreement AG(p, p') balances the two factors by taking their product.
As a result, relations that have a high level of agreement will have more subjects in common, and a higher proportion of shared subjects with shared objects. Cardinality ratio is the ratio between the cardinalities of the two relations. The cardinality of a relation, CD(p), is calculated from the data as CD(p) = |r_p| / |S(r_p)| [8], and the cardinality ratio is calculated as CDR(p, p') = MIN{CD(p), CD(p')} / MAX{CD(p), CD(p')} [9]. The final equivalency measure integrates all three components to return a value in [0, 2]: E(p, p') = (TO(p, p') + AG(p, p')) × CDR(p, p') [10]. The measure favors two relations that have similar cardinality. We apply the measure to every pair of relations of a concept, and keep those with a non-zero equivalence score. The goal of clustering is to create groups of equivalent relations based on the pair-wise equivalence scores. We use a simple rule-based agglomerative clustering algorithm for this purpose. First, we rank all relation pairs by their equivalence score; then we keep a pair if (i) its score and (ii) the number of triples covered by each relation are above certain thresholds, T_minEqvl and T_minTP respectively. Each kept pair forms an initial cluster. To merge clusters, given an existing cluster c and a new pair (p, p') where either p ∈ c or p' ∈ c, the pair is added to c if E(p, p') is close (as a fractional number above the threshold T_minEqvlRel) to the average score of all connected pairs in c. This preserves strong connectivity within a cluster. The process is repeated until no merge action is taken. Adjusting these thresholds allows balancing precision and recall.
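A minimal Python sketch of the measure, following the reconstruction of Equations [1]-[10] above; relations are represented simply as collections of (subject, object) pairs.

```python
from collections import defaultdict

def equivalence(so_pairs_p, so_pairs_q):
    """E(p, p') = (TO + AG) * CDR, computed from the (subject, object) pairs
    of two relations used with the same class.  Assumes both relations have
    at least one triple."""
    so_p, so_q = set(so_pairs_p), set(so_pairs_q)

    # Triple overlap TO, Equations [1]-[2]
    so_int = so_p & so_q
    to = max(len(so_int) / len(so_p), len(so_int) / len(so_q))

    # Subject agreement AG, Equations [3]-[7]
    obj_p, obj_q = defaultdict(set), defaultdict(set)
    for s, o in so_p:
        obj_p[s].add(o)
    for s, o in so_q:
        obj_q[s].add(o)
    shared = set(obj_p) & set(obj_q)
    alpha = (sum(1 for s in shared if obj_p[s] & obj_q[s]) / len(shared)
             if shared else 0.0)
    beta = len(shared) / len(set(obj_p) | set(obj_q))
    ag = alpha * beta

    # Cardinality ratio CDR, Equations [8]-[9]
    cd_p = len(so_p) / len(obj_p)
    cd_q = len(so_q) / len(obj_q)
    cdr = min(cd_p, cd_q) / max(cd_p, cd_q)

    return (to + ag) * cdr      # Equation [10], in [0, 2]
```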
2
To specifically target discontinuity, we explore two mechanisms, both preceding a Bi-LSTM: 1) a GCN layer to act as a syntactic n-gram detector, and 2) an attention mechanism to learn long-range dependencies. Standard convolutional filters act as sequential n-gram detectors (Kim, 2014). Such filters might prove inadequate for modeling complex language units like discontinuous MWEs. One way to overcome this problem is to consider non-sequential relations by attending to syntactic information in parse trees through the application of GCNs. A GCN is defined over a directed multi-node graph G(V, E), where v_i ∈ V and (v_i, r, v_j) ∈ E are entities (words) and edges (relations) respectively. Defining a vector x_v as the feature representation of the word v, the convolution in the GCN can be written with a non-linear activation function f and a filter W with a bias term b as: c_v = f( Σ_{u ∈ r(v)} W x_u + b ), where r(v) denotes all words in relation with the given word v in a sentence, and c_v represents the output of the convolution. Following Kipf and Welling (2017) and Schlichtkrull et al. (2017), we represent graph relations using adjacency matrices as mask filters for the inputs. We derive associated words from the dependency parse tree of the target sentence. Since we are dealing with a sequence labelling task, there is an adjacency matrix representing relations among words (as nodes of the dependency graph) for each sentence. We define the sentence-level convolution operation with filter W_s and bias b_s as: C = f(A X W_s + b_s), where X, A, and C are the representation of the words, the adjacency matrix, and the convolution output, all at the level of the sentence. The above formalism considers only one relation type, while depending on the application, multiple relations can be defined. Kipf and Welling (2017) construct separate adjacency matrices corresponding to each relation type and direction. Given the variety of dependency relations in a parse tree (e.g. obj, nsubj, advcl, conj, etc.) and the per-sentence adjacency matrices, we would end up with an over-parametrised model in a sequence labeling task. In this work, we simply treat all relations equally, but consider only three types of relations: 1) the head to the dependents, 2) the dependents to the head, and 3) each word to itself (self-loops). The final output is obtained by aggregating the outputs of the three relations. Attention (Bahdanau et al., 2014) helps a model address the most relevant parts of a sequence through weighting. As attention is designed to capture dependencies in a sequence regardless of distance, it is complementary to RNN or CNN models, where longer distances pose a challenge. In this work we employ multi-head self-attention with a weighting function based on the scaled dot product, which makes it fast and computationally efficient. Following the formulation of the Transformer by Vaswani et al. (2017), in the encoding module an input vector x is mapped to three equally sized matrices K, Q, and V (representing key, query and value), and the output weight matrix is then computed as: Attention(Q, K, V) = softmax(Q K^T / √d_k) V. The timing signal required for the self-attention to work is already contained in the preceding CNN layers, alleviating the need for position encoding. The overall scheme of the proposed model, composed of two parallel branches, is depicted in Figure 1. We employ multi-channel CNNs as the step preceding self-attention. One channel is comprised of two stacked 1D CNNs and the other is a single 1D CNN.
After concatenation and batch normalisation, a multi-head self-attention mechanism is applied (Section 2.2). Parallel to the self-attention branch, GCN learns a separate representation (Section 2.1). Since the GCN layer retains important structural information and is sensitive to positional data from the syntax tree, we consider it a position-based approach. On the other hand, the self-attention layer is intended to capture long-range dependencies in a sentence. It relates elements of the same input through a similarity measure irrespective of their distance. We therefore regard it as a content-based approach. As these layers represent different methodologies, we seek to introduce a model that combines their complementary traits in our particular task. Gating Mechanism. Due to the considerable overlap between the GCN and self-attention layers, a naive concatenation introduces redundancy which significantly lowers the learning power of the model. To effectively integrate the information, we design a simple gating mechanism using feed-forward highway layers (Srivastava et al., 2015) which learn to regulate information flow in consecutive training epochs. Each highway layer consists of a Carry (Cr) and a Transform (Tr) gate which decide how much information should pass or be modified. For simplicity, Cr is defined as 1 − Tr. We apply a block of J stacked highway layers (the section inside the blue dotted square in Figure 1). Each layer regulates its input x using the two gates and a feed-forward layer H as follows: EQUATION, where ⊙ denotes the Hadamard product and Tr is defined as σ(W_Tr x + b_Tr). We set b_Tr to a negative number to reinforce carry behavior, which helps the model learn temporal dependencies early in the training. Our architecture bears some resemblance to Marcheggiani and Titov (2017) and Zhang et al. (2018) in its complementary view of GCN and BiLSTM. However, there are some important differences. In these works, BiLSTM is applied prior to GCN in order to encode contextualised information and to enhance the teleportation capability of GCN. Marcheggiani and Titov (2017) stack a few BiLSTM layers with the idea that the resulting representation would enable GCN to consider nodes that are multiple hops away in the input graph. Zhang et al. (2018) use a similar encoder; however, the model employs single BiLSTM and GCN layers, and the graph of relations is undirected. In our work, we use pre-trained contextualised embeddings that already contain all the informative content about word order and disambiguation. We put BiLSTM on top of GCN, in line with how CNNs are traditionally applied as feature-generating front-ends to RNNs. Furthermore, Marcheggiani and Titov (2017) use an edge-wise gating mechanism in order to down-weight uninformative syntactic dependencies. This method can mitigate noise when parsing information is deemed noisy; however, in Zhang et al. (2018) it caused performance to drop. Given our low-resource setting, in this work we preferred not to potentially down-weight the contribution of individual edges, therefore treating them equally. We rely on gating as the last step, when we combine GCN and self-attention.
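A minimal sketch of the highway gating block, assuming H is an affine layer with ReLU (the exact non-linearity is not specified above); the hidden size, the value of J, and the bias initialisation are placeholders.

import torch
import torch.nn as nn

class Highway(nn.Module):
    # One feed-forward highway layer: y = Tr(x) * H(x) + (1 - Tr(x)) * x,
    # with the transform-gate bias initialised negative to favour carrying.
    def __init__(self, dim, bias_init=-2.0):
        super().__init__()
        self.H = nn.Linear(dim, dim)
        self.Tr = nn.Linear(dim, dim)
        nn.init.constant_(self.Tr.bias, bias_init)

    def forward(self, x):
        tr = torch.sigmoid(self.Tr(x))                        # transform gate
        return tr * torch.relu(self.H(x)) + (1.0 - tr) * x    # carry gate = 1 - transform

# A block of J stacked highway layers over the concatenated GCN/self-attention features:
block = nn.Sequential(*[Highway(256) for _ in range(3)])      # J = 3, dim = 256 assumed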
2
In this section, we introduce the AttentionRank model in detail. The overall architecture of our model is shown in Figure 1. AttentionRank integrates the accumulated self-attention component with the cross-attention component to calculate the final score of a candidate. The proposed model has four main steps: (1) Generate a candidate set C from a document; (2) Calculate the accumulated self-attention value a_c for each candidate c, c ∈ C; (3) Calculate the cross-attention relevancy value (r_c) between a candidate c and the document d; (4) Calculate the final score s_c for each candidate through a linear combination of a_c and r_c. We use the candidate extraction module implemented in EmbedRank (Bennani-Smires et al., 2018). This module first uses Part-of-Speech (POS) tags to identify candidate noun phrases. We use the method introduced by Clark et al. (2019) to extract the self-attention weights of the words from the pre-trained BERT. We sum the attentions (a_w^w') that a word (w) receives from other words (w') within the same sentence (s) to obtain the attention value (a_w) of the word within a sentence, shown as Equation 1. This attention value (a_w) represents the importance of the word within the context of a sentence. EQUATION As shown in Figure 2, all highlighted spans are noun chunks. Intuitively, the darker the noun chunk is, the higher the self-attention it receives. They have a higher probability of being selected as keyphrases. To calculate the self-attention of a candidate (c) in sentence i, we add up the attention of the words in c, shown as Equation 2. EQUATION The document-level self-attention value of candidate c is computed as the sum of all self-attention values of c in each sentence of document d: a_c = Σ_{i∈d} a_{c_i} (3). The cross-attention model is inspired by the hierarchical attention retrieval (HAR) model (Zhu et al., 2019) and the bi-directional attention model (Seo et al., 2016). Based on their network architectures, we develop the cross-attention component to measure the correlation between a candidate and the document based on the context. A pre-trained BERT model can generate a representation of candidate c as E_c = {e^c_1, ..., e^c_m}, where e_i ∈ R^H is the embedding of w_i, and there are m words in the candidate. Similarly, a pre-trained BERT model can also generate a representation E_i = {e^i_1, ..., e^i_n} for a sentence i which contains n words. Cross-attention calculates a new document embedding to better measure the contextual correlations between candidates and sentences within a document. Given a sentence i represented as E_i ∈ R^{n×H}, and a candidate c represented as E_c ∈ R^{m×H}, a similarity matrix S ∈ R^{n×m} between i and c can be calculated as Equation 4. Then, the word-based sentence-to-candidate and candidate-to-sentence similarities can be measured as Equations 5 and 6. EQUATION EQUATION The word-based cross-attention weights from sentence to candidate and from candidate to sentence are calculated as Equations 7 and 8. The new sentence representation V_i is built upon these cross-attention weights and computed by averaging the sum of the four items, shown as Equation 9. E_i is the original context of the sentence; A_i2c, E_i ⊙ A_i2c and E_i ⊙ A_c2i measure the context correlation between a sentence and a candidate; ⊙ is element-wise multiplication. A_i2c = S_i2c • E_c (7); A_c2i = S_i2c • S_c2i • E_i (8); V_i = AVG(E_i, A_i2c, E_i ⊙ A_i2c, E_i ⊙ A_c2i) (9). The new sentence representation V_i is still a set of embeddings that comprises the word-based relations between the candidate and sentence.
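A small sketch of the accumulated self-attention score, assuming per-sentence BERT attention matrices have already been extracted (e.g., with the approach of Clark et al.); the variable names and data layout are illustrative, not the released implementation.

import numpy as np

def word_attention(attn, j):
    # a_w: attention received by word j from all words of the same sentence;
    # attn is an (n x n) array of attention probabilities for one sentence.
    return attn[:, j].sum()

def candidate_attention(attn, span):
    # a_{c_i}: attention of a candidate in one sentence = sum over its word positions.
    return sum(word_attention(attn, j) for j in span)

def document_attention(sentence_attns, candidate_spans):
    # a_c: document-level score = sum of the candidate's per-sentence attentions;
    # candidate_spans maps sentence index -> token positions of the candidate in it.
    return sum(candidate_attention(sentence_attns[i], span)
               for i, span in candidate_spans.items())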
To generate a standardized sentence representation based on V_i, a self-attention is performed on V_i to highlight the importance of the words after applying the cross-attention. Given a new sentence representation V_i = {v^i_1, ..., v^i_n} with n words, the self-attention of sentence i is calculated as Equation 10. Then, the column-wise average is calculated to obtain the final representation of the sentence, α_i ∈ R^H. EQUATION Once the sentence embeddings are generated, we perform a similar process on the sentence embeddings to generate the document embedding. Given a document d which includes a set of sentences E_d = {α_1, ..., α_i}, to calculate the document embedding, we first generate the self-attention of the document to emphasize sentences with higher correlation to the candidate (Equation 12), then take the column-wise average to get the final document embedding. EQUATION Since the candidate is also originally represented as a word embedding set E_c = {e^c_1, ..., e^c_m}, the self-attention calculation (Equation 14) is also applied, and the column-wise average is taken afterwards to get the final candidate embedding p_c ∈ R^H as Equation 15. EQUATION Finally, the relevance between a candidate c and a document d is determined by the cosine similarity of p_c and p_d, shown as Equation 16. EQUATION 2.4 Final Score Calculation and Post-processing EQUATION A corpus is often domain-specific. This means some words with a high document frequency might be generic words for this corpus. In this research, in order to prevent a generic word or phrase from becoming a keyphrase, we remove the candidates with a document frequency higher than a threshold df θ.
2
In order to allow for zero-shot transfer between monolingual language models pre-trained in isolation from each other, we need to determine a mapping between their hidden representations. We first introduce our methodology for doing so, then we integrate it into the creation of sparse contextualized word representations. The alignment of word representations between independently constructed semantic spaces can be conveniently and efficiently performed via linear transformations. This has been a standard approach for non-contextualized word embeddings (Mikolov et al., 2013; Xing et al., 2015; Smith et al., 2017), but it has been shown to be useful in the contextualized case as well (Conneau et al., 2020b). The standard approach is to obtain a collection of pairs of anchor points {x_i, y_i}_{i=1}^n, with x_i and y_i denoting the representations of semantically equivalent words in the target and source languages, respectively. The mapping W is then obtained as EQUATION. As we deal with contextualized models, we can obtain various representations for a word even in the same context, by considering the hidden representations from different layers of the neural language models employed. Additionally, as constraining the mapping matrix to be an isometric one has proven to be a useful requirement, we define our learning task to be of the form EQUATION, with I denoting the identity matrix, and x_i^(l_t) and y_i^(l_s) denoting the hidden representations obtained from the l_t-th and l_s-th layers of the target and source language neural language models, respectively. Finding the optimal isometric W can be viewed as an instance of the orthogonal Procrustes problem (Schönemann, 1966), which can be solved by W_⊥ = UV^⊺, with U and V originating from the singular value decomposition of the matrix product Y^⊺X, where X and Y include the stacked target and source language contextual representations of pairs of semantically equivalent words. As words of the input sequences to the neural language models can be split into multiple subtokens, we followed the common practice of obtaining word-level neural representations by performing mean pooling of the subword representations. Throughout our experiments, we also relied on the RCSLS criterion (Joulin et al., 2018), which offers a retrieval-based alternative for obtaining a mapping from the target to the source language representations. Our approach extends the information-theoretic algorithm introduced in (Berend, 2020a) for its application in the cross-lingual zero-shot WSD setting. In order to obtain sparse contextualized representations for the source language, we first populate Y ∈ R^{d×N} with d-dimensional contextualized representations of words determined for texts in the source language, and minimize the objective EQUATION, where C denotes the convex set of d × k matrices with column norm at most 1, λ is a regularization coefficient and the sparse coefficients in α are required to be non-negative. We used the SPAMS library (Mairal et al., 2009) for calculating D and α. Having obtained D for the source language, we determine a sparse contextualized word representation for a target language word with dense contextualized representation x_i as min_{α_i ∈ R^k_{≥0}} ½‖W x_i − Dα_i‖²_2 + λ‖α_i‖_1, (4) where W is the alignment transformation as described earlier in Section 3.1. Eq.
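The orthogonal Procrustes solution described above can be sketched in a few lines of numpy; the row-stacked orientation of the anchor matrices is an assumption made for illustration.

import numpy as np

def procrustes_align(X, Y):
    # X, Y: (n x d) row-stacked target- and source-language vectors of the n anchor
    # word pairs. Returns an orthogonal W such that W @ x maps a target-language
    # vector into the source space (W_perp = U V^T, with U, V from the SVD of Y^T X).
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

# usage: w = procrustes_align(X_anchor, Y_anchor); mapped = w @ x_target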
(4) reveals that the cross-lingual applicability of the sparse codes is ensured by the mapping transformation W and the fact that the sparse target language representations also use the same D that was determined for the source language, which also ensures the efficient calculation of sparse representations during inference time. Apart from these crucial extensions we made for enabling the use of contextualized sparse representations in the cross-lingual setting, the way we utilized them for the determination of sense representations and inference is identical to (Berend, 2020a). That is, for all sense-annotated words in the training corpus, we calculated weighted co-occurrence statistics between a word pertaining to a specific semantic category and its having non-zero coordinates along a specific dimension in its sparse contextualized word representation. These statistics are then transformed into pointwise mutual information (PMI) scores, resulting in a sense representation for all the senses in the training sense inventory. Sense representations obtained this way measure the strength of the relation of the senses to the different (sparse) coordinates. Inference for a word with sparse representation α is simply taken as arg max_s Φα^⊺, where Φ is the previously defined matrix of PMI values and s corresponds to the sense at whose position the above matrix-vector product takes its largest value.
2
This section discusses the details required for reproducing the results. It mentions the preprocessing steps, the architecture of the classifiers used, and the hyperparameter values. The preprocessing performed includes the removal of the USER and URL tokens from the response and utterances in the dialogue. The text was also converted to lower-case. In our study, we used BiLSTM, BERT, and SVM classifiers. The BiLSTM classifier (Hochreiter and Schmidhuber, 1997) we used had a single BiLSTM layer of 100 units. The output from the BiLSTM layer is fed to a fully connected layer of 100 units through a max pooling layer. After applying dropout to the output from the fully connected layer, it was fed to an output layer having a single unit. For the BiLSTM classifier, the text was represented using the pre-trained fastText embeddings. The 300-dimensional fastText embeddings 1 trained on Wikipedia 2017, the UMBC webbase corpus and the statmt.org news dataset were used in our study. BERT (Devlin et al., 2019) is a transformer-based architecture (Vaswani et al., 2017). It is a bidirectional model. As opposed to the static embeddings produced by fastText, BERT produces contextualized word embeddings where the vector for a word is computed based on the context in which it appears. In our study, we used the uncased large version of BERT 2. This version has 24 layers and 16 attention heads. This model generates a 1024-dimensional vector for each word. We used the 1024-dimensional vector of the Extract layer as the representation of the text. Our classification layer consisted of a single Dense layer. This layer used the sigmoid activation function. The classifier was trained using the Adam optimizer with a learning rate of 2e-5. The binary cross-entropy loss function was used. 1 https://fasttext.cc/docs/en/english-vectors.html 2 https://github.com/google-research/bert The Support Vector Machine (SVM) classifier we used in our study was trained using TF-IDF features of character n-grams (1 to 6). The linear kernel was used for the classifier and the hyperparameter C was set to 1.0.
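The SVM configuration above maps directly onto a short scikit-learn pipeline; whether the original used word-boundary-aware character analysis or plain character n-grams is not stated, so the analyzer choice below is an assumption.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# TF-IDF over character 1-6 grams, linear kernel, C = 1.0, as described above.
svm_clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 6)),
    SVC(kernel="linear", C=1.0),
)
# svm_clf.fit(train_texts, train_labels); svm_clf.predict(test_texts)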
2
In this section, we first describe representing a query, sentence and document using local and distributed representation schemes. We further describe enhanced query-document (query-title and query-content) and query-sentence interactions to compute query-aware document or sentence representations for Task-1 and Task-2, respectively. Finally, we discuss the application of supervised neural topic modeling in ranking documents for Task-1 and introduce unsupervised and supervised sentence rankers for Task-2. In this paper, we deal with texts of different lengths in the form of query, sentence and document. In this section, we describe the way we represent the different texts. Bag-of-words (BoW) and Term frequency-inverse document frequency (TF-IDF): We use the two local representation schemes BoW and TF-IDF (Manning et al., 2008) to compute sentence/document vectors. (Figure 1 caption: (1) Building a classification-based document ranker; (2) ranking documents within each cluster based on the prediction probability of the corresponding cluster ID. Col-1 indicates the "predicted label" by the DocRanker and Col-2 indicates the "prediction probability" (p(q|v)); "Features" inside the DocRanker indicates FastText and word2vec pretrained embeddings.) Embedding Sum Representation (ESR): Word embeddings (Mikolov et al., 2013; Pennington et al., 2014) have been successfully used in computing distributed representations of text snippets (short or long). In the ESR scheme, we employ the pre-trained word embeddings from FastText (Bojanowski et al., 2017) and word2vec (Mikolov et al., 2013). To represent a text (query, sentence or document), we compute the sum of the (pre-trained) word vectors of each word in the text. E.g., the ESR for a document d with D words can be computed as: ESR(d) = d = Σ_{i=1}^{D} e(d_i), where e ∈ R^E is the pre-trained embedding vector of dimension E for the word d_i. Query-aware Attention-based Representation (QAR) for Documents and Sentences: Unlike ESR, we reward the maximum matches between a query and a document by computing the density of matches between them, similar to McDonald et al. (2018). In doing so, we introduce a weighted sum of word vectors from pre-trained embeddings and therefore incorporate the importance/attention of certain words in the document (or sentence) that appear in the query text. For an enhanced query-aware attention-based document (or sentence) representation, we first compute a histogram a_i(d) ∈ R^D of attention weights for each word k in the document d (or sentence s) relative to the i-th query word q_i, using cosine similarity: a_i(d) = [a_{i,k}]_{k=1}^{D}, where a_{i,k} = e(q_i)^T e(d_k) / (||e(q_i)|| ||e(d_k)||) for each k-th word in the document d. Here, e(w) refers to the embedding vector of the word w. We then compute a query-aware attention-based representation Φ_i(d) of document d from the viewpoint of the i-th query word by summing the word vectors of the document, weighted by their attention scores a_i(d): Φ_i(d) = Σ_{k=1}^{D} a_{i,k}(d) e(d_k) = a_i(d) ⊙ [e(d_k)]_{k=1}^{D}, where ⊙ is an element-wise multiplication operator. Next, we compute the density of matches between several words in the query and the document by summing each of the attention histograms a_i for all the query terms i.
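A compact numpy sketch of the query-aware attention representation described above (cosine histograms per query word, attention-weighted sums, then summation over query words); array shapes and names are illustrative assumptions.

import numpy as np

def qar(query_vecs, doc_vecs):
    # query_vecs: (|q| x E) pre-trained vectors of query words;
    # doc_vecs:   (D x E) pre-trained vectors of document (or sentence) words.
    Q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    Dn = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    A = Q @ Dn.T                 # a_{i,k}: cosine-similarity histograms, shape (|q| x D)
    Phi = A @ doc_vecs           # Phi_i(d): one attention-weighted sum per query word
    return Phi.sum(axis=0)       # sum over query words gives the density of matches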
Therefore, the query-aware document representation for a document (or sentence) relative to all query words in q is given by: EQUATION. Similarly, a query-aware sentence representation Φ_q(s) and a query-aware title representation Φ_q(t) can be computed for the sentence s and document title t, respectively. For the query representation, we use the ESR scheme, i.e., q = Σ_{i=1}^{|q|} e(w_i). Figure 2 illustrates the computation of the query-aware attention-based sentence representation. Topic models (TMs) (Blei et al., 2003) have been shown to capture thematic structures, i.e., topics appearing within a document collection. Beyond interpretability, topic models can extract latent document representations that can be used to perform document retrieval. Recently, Gupta et al. (2019a) and Gupta et al. (2019b) have shown that neural network-based topic models (NTMs) outperform LDA-based topic models (Blei et al., 2003; Srivastava and Sutton, 2017) in terms of generalization, interpretability and document retrieval. In order to perform document classification and retrieval, we have employed a supervised version of a neural topic model with extra features and further introduced word-level attention in the neural topic model, i.e., in DocNADE (Larochelle and Lauly, 2012; Gupta et al., 2019a). Supervised NTM (SupDocNADE): The Document Neural Autoregressive Distribution Estimator (DocNADE) is a neural network-based topic model that works on the bag-of-words (BoW) representation to model a document collection in a language modeling fashion. Consider a document d, represented as v = [v_1, ..., v_i, ..., v_D] of size D, where v_i ∈ {1, ..., Z} is the index of the i-th word in the vocabulary and Z is the vocabulary size. DocNADE models the joint distribution p(v) of document v by decomposing p(v) into autoregressive conditionals of each word v_i in the document, i.e., p(v) = Π_{i=1}^{D} p(v_i | v_{<i}), where v_{<i} ∈ {v_1, ..., v_{i−1}}. As shown in Figure 1 (left), DocNADE computes each autoregressive conditional p(v_i | v_{<i}) using a feed-forward neural network for i ∈ {1, ..., D} as: p(v_i = w | v_{<i}) = exp(b_w + U_{w,:} h_i(v_{<i})) / Σ_{w'} exp(b_{w'} + U_{w',:} h_i(v_{<i})), with h_i(v_{<i}) = f(c + Σ_{j<i} W_{:,v_j}), where f(•) is a non-linear activation function, W ∈ R^{H×Z} and U ∈ R^{Z×H} are the encoding and decoding matrices, c ∈ R^H and b ∈ R^Z are the encoding and decoding biases, and H is the number of units in the latent representation h_i(v_{<i}). Here, h_i(v_{<i}) contains information about the words preceding the word v_i. For a document v, the log-likelihood L(v) and latent representation h(v) are given as: EQUATION. Here, L(v) is used to optimize the topic model in an unsupervised fashion and h(v) encodes the topic proportion. See Gupta et al. (2019a) for further details on training unsupervised DocNADE. Here, we extend the unsupervised version to DocNADE with a hybrid cost L_hybrid(v), consisting of a (supervised) discriminative training cost p(y = q|v) along with an unsupervised generative cost p(v) for a given query q and associated document v: EQUATION, where λ ∈ [0, 1]. The supervised cost is given by: L_sup(v) = p(y = q|v) = softmax(d + S h(v)). Here, S ∈ R^{L×H} and d ∈ R^L are the output matrix and bias, and L is the total number of unique RDoC constructs (i.e., unique query labels). Supervised Attention-based NTM (a-SupDocNADE): Observe in Equation 3 that DocNADE computes the document representation h(v) via aggregation of word embedding vectors without considering attention over certain words. However, certain content words carry high importance, especially in a classification task.
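To illustrate the unsupervised DocNADE conditional described above, here is a minimal numpy sketch of one autoregressive step; the tanh activation and the parameter shapes are assumptions (the text only specifies a generic non-linearity f).

import numpy as np

def docnade_step(word_indices, W, U, b, c, i):
    # W: (H x Z) encoder, U: (Z x H) decoder, c: (H,), b: (Z,).
    # h_i(v_<i) = f(c + sum_{j<i} W[:, v_j]); returns p(v_i | v_<i) and h_i.
    h = np.tanh(c + W[:, word_indices[:i]].sum(axis=1))
    logits = b + U @ h
    probs = np.exp(logits - logits.max())
    return probs / probs.sum(), h            # softmax over the vocabulary

# log-likelihood of a document: sum over i of log p(v_i = word_indices[i] | v_<i)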
Therefore, we have introduced attention-based embedding aggregation in supDocNADE (Figure 1, left): EQUATION. Here, α_i is an attention score for each word i in the document v, learned via supervised training. Additionally, we incorporate extra word features, such as pre-trained word embeddings from several sources: FastText (E_fast) (Bojanowski et al., 2017) and word2vec (E_word2vec) (Mikolov et al., 2013). We introduce these features by concatenating h_e(v) with h(v) in the supervised portion of the a-supDocNADE model. Therefore, the classification portion of a-supDocNADE with additional features is given by: p(q|v) = softmax(d + S • concat(h(v), h_e(v))), where S ∈ R^{L×H'} and H' = H + E_fast + E_word2vec. BM25: A ranking function proposed by Robertson and Zaragoza (2009), used to estimate the relevance of a document for a given query. BM25-Extra: The relevance score of BM25 is combined with four extra features: (1) the percentage of query words with an exact match in the document, (2) the percentage of query word bigrams matched in the document, (3) an IDF-weighted document vector for feature #1, and (4) an IDF-weighted document vector for feature #2. Therefore, BM25-Extra returns a vector of 5 scores. RDoC Task-1 aims at retrieving and ranking PubMed abstracts (title and content) that are relevant for 8 RDoC constructs. Participants are provided with 8 clusters, each with an RDoC construct label, and are required to rank abstracts within each cluster based on their relevance to the corresponding cluster label. Each cluster contains abstracts relevant to its RDoC construct, while some (or most) of the abstracts are noisy in the sense that they belong to a different RDoC construct. Ideally, the participants are required to rank abstracts in each of the clusters by determining their relevance to the RDoC construct of the cluster in which they appear. To address RDoC Task-1, we learn a mapping function between the latent representation h(v) of a document (i.e., abstract) v and its RDoC construct, i.e., query words q, in a supervised fashion. In doing so, we have employed supervised classifiers, especially the supervised neural topic model a-supDocNADE (Section 3.2), for document ranking. We treat q as the label and maximize p(q|v), leading to maximizing L_hybrid(v) in the a-supDocNADE model. As demonstrated in Figure 1 (right), we perform document ranking in two steps: (1) Document Relevance Ranking: We build a supervised classifier using all the training documents and their corresponding labels (RDoC constructs), provided with the training set. At test time, we compute the prediction probability score p(q = CID | v_test(CID)) of the label CID for each test document v_test(CID) in the cluster CID. This prediction probability (or confidence score) is treated as the relevance score of the document for the RDoC construct of the cluster. Figure 1 (right) shows that we perform document ranking using the probability scores (col-2) of the RDoC construct (e.g. loss) within the cluster C1. Observe that test documents with the least confidence for a cluster are ranked lower within the cluster, thus improving mean average precision (mAP).
Additionally, we also show in col-1 the RDoC construct predicted by the supervised classifier. (2) Document Relevance Re-ranking: Secondly, we re-ranked each document v (title+abstract) within each cluster (with label q) using unsupervised ranking, where the relevance scores are computed as: (a) reRank(BM25-Extra): sum each of the 5 relevance scores to get the final relevance, and (b) reRank(QAR): cosine-similarity(QAR(v), q). RDoC Task-2 aims at extracting the most relevant sentence from each of the PubMed abstracts for the corresponding RDoC construct. Each abstract consists of a title t and sentences s with an RDoC construct q. To address RDoC Task-2, we first compute multi-view representations: BoW, TF-IDF and QAR (i.e., Φ_q(s_j)) for each sentence s_j in an abstract d. On the other hand, we compute ESR representations for the RDoC construct (query q) and the title t of the abstract d to obtain q and t, respectively. Figure 2 and Section 3.1 describe the computation of these representations. We then use the representations (Φ_q(s_j), t and q) to compute relevance scores of a sentence s_j relative to q and/or t via unsupervised and supervised ranking schemes, discussed in the following section. As shown in Figure 2, we first extract the representations Φ_q(s_j), t and q for the sentence s_j, query q and title t. When ranking sentences within an abstract for the given RDoC construct q, we also consider the title t in computing the relevance score for each sentence relative to q and t. This is inspired by the fact that the title often contains relevant terms (or words) appearing in sentence(s) of the document (or abstract). In addition, we observe that q is a very short and non-descriptive text, leading to minimal text overlap with s. We compute two relevance scores, r_q and r_t, for a sentence s_j with respect to the query q and title t, respectively: r_q = sim(q, Φ_q(s_j)) and r_t = sim(t, Φ_q(s_j)). Now, we devise two ways to combine the relevance scores r_q and r_t in the unsupervised paradigm. Version 1: r_unsup1 = r_q • r_q + r_t • r_t. Observe that each relevance score is weighted by itself. However, Task-2 expects higher importance for the relevance score r_q over r_t. Therefore, we coin the following weighting scheme to give higher importance to r_q if it is higher than r_t; otherwise we compute a weight factor r'_t for r_t. Version 2: r_unsup2 = r_q • r_q + r'_t • r_t, where r'_t is computed as: r'_t = 1(r_t > r_q) |r_t − r_q|. The relevance score r_unsup2 is effective in ranking sentences when a query and a sentence do not overlap. In such a scenario, a sentence is scored by the title, penalized by a factor of |r_t − r_q|. In the end, we obtain a final relevance score r_unsup_f for a sentence s_j by summing the relevance scores of BM25-Extra and r_unsup1 or r_unsup2. Beyond unsupervised ranking, we further investigate sentence ranking in the supervised paradigm by introducing a distance metric between the query (or title) and sentence vectors. Figure 2 describes the computation of the relevance score for a sentence s_j using the supervised sentence ranker scheme. Like the unsupervised ranker (Section 3.5.1), the supervised ranker also employs the vector representations Φ_q(s_j), t and q. Using a projection matrix G, we then apply a projection to each of the representations to obtain Φ^p_q(s_j), t_p and q_p. Here, the operator ⊗ performs concatenation of the projected vector with its input via a residual connection.
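The two unsupervised combination schemes can be sketched as follows; cosine similarity is assumed for sim(·, ·), which is not fixed in the text above.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def unsupervised_score(phi_s, q_vec, t_vec, version=2):
    # Combine query relevance r_q and title relevance r_t for one sentence.
    r_q = cosine(q_vec, phi_s)
    r_t = cosine(t_vec, phi_s)
    if version == 1:
        return r_q * r_q + r_t * r_t
    # version 2: r_t contributes only when it exceeds r_q, down-weighted by |r_t - r_q|
    r_t_prime = abs(r_t - r_q) if r_t > r_q else 0.0
    return r_q * r_q + r_t_prime * r_t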
Next, we apply a Manhattan distance metric to compute the similarity (or relevance) score between q and s_j via a Siamese-LSTM. To perform sentence ranking within an abstract for a given RDoC construct q, the relevance score r_sup = exp(−||(Φ^p_q(s_j) − q_p) + β(Φ^p_q(s_j) − t_p)||_2), where β ∈ [0, 1]. The final score r_sup_f (or r_unsup_f) is computed for all the sentences, and the sentence with the highest score is extracted. (Table 2: Data statistics - # of PubMed abstracts belonging to each RDoC construct in different data partitions. L1: "Acute Threat Fear"; L2: "Arousal"; L3: "Circadian Rhythms"; L4: "Frustrative Nonreward"; L5: "Loss"; L6: "Potential Threat Anxiety"; L7: "Sleep Wakefulness"; L8: "Sustained Threat". Results table: the best mAP score for each model is marked in bold; reRank #1: "reRank(BM25-Extra)"; reRank #2: "reRank(QAR)"; reRank #3: "reRank(BM25-Extra) + reRank(QAR)".)
2
We first describe how an existing normalization model is modified for this specific use. Then we discuss how we integrate this normalization into the parsing model. We use an existing normalization model (van der Goot, 2016). This model generates candidates using the Aspell spell checker 1 and a word embeddings model trained on Twitter data (Godin et al., 2015). Features from this generation are complemented with n-gram probability features of canonical text (Brants and Franz, 2006) and the Twitter domain. A random forest classifier (Breiman, 2001) is exploited for the ranking of the generated candidates. Van der Goot (2016) focused on finding the correct normalization candidate for erroneous tokens; gold error detection was assumed. Therefore, the model was trained only on the words that were normalized in the training data. Since we do not know in advance which words should be normalized, we cannot use this model. Instead, we train the model on all words in the training data, including words that do not need normalization. Accordingly, we add the original token as a normalization candidate and add a binary feature to indicate this. These adaptations enable the model to learn which words should be normalized. We compare the traditional approach of only using the best normalization sequence with an integrated approach, in which the parsing model has access to multiple normalization candidates for each word. Within the integrated approach, we compare normalizing only the words unknown to the parser against normalizing all words. We refer to these approaches as 'UNK' and 'ALL', respectively. Figure 1 shows a possible output when using ALL. When using UNK, the word 'nice' would not have any normalization candidates. We adapt the state-of-the-art PCFG Berkeley Parser (Petrov and Klein, 2007) to fit our needs. The main strength of this PCFG-LA parser is that it automatically learns to split constituents into finer categories during training, and thus learns a more refined grammar than a raw treebank grammar. It maintains efficiency by using a coarse-to-fine parsing setup. Unknown words are clustered by prefixes, suffixes, the presence of special characters or capitals, and their position in the sentence. Parsing word lattices is not a new problem. The parsing-as-intersection algorithm (Bar-Hillel et al., 1961) laid the theoretical background for efficiently deriving the best parse tree of a word lattice given a context-free grammar. Previous work on parsing a word lattice in a PCFG-LA setup includes Constant et al. (2013), and Goldberg and Elhadad (2011) for the Berkeley Parser. However, these models do not support probabilities, which are naturally provided by the normalization in our setup. Another problem is the handling of word ambiguities, which is crucial in our model. Our adaptations to the Berkeley Parser resemble the adaptations done by Goldberg and Elhadad (2011). In addition, we allow multiple words at the same position. For every POS tag at every position we only keep the highest-scoring word.
This suffices, since there is no syntactic ambiguity possible with only unary rules from POS tags to words, and therefore it is impossible for the lower-scoring words to end up in the final parse tree. To incorporate the probability from the normalization model (P_norm) into the chart, we combine it with the probability from the POS tag assigned by the built-in tagger of the Berkeley parser (P_pos) using the weighted harmonic mean (Rijsbergen, 1979): P_chart = ((1 + β²) * P_norm * P_pos) / (β² * P_norm + P_pos) (1). Here, β is the relative weight we give to the normalization and P_chart is the probability used in the parsing chart. We use this formula because it allows us to have a weighted average, in which we reward the model if both probabilities are more balanced.
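Eq. 1 is a direct computation; a small helper makes the role of β explicit (the default value below is a placeholder, not the value used in the experiments).

def chart_probability(p_norm, p_pos, beta=1.0):
    # Weighted harmonic mean of the normalization and POS-tag probabilities (Eq. 1);
    # beta is the relative weight given to the normalization model.
    return ((1 + beta ** 2) * p_norm * p_pos) / (beta ** 2 * p_norm + p_pos)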
2
We performed a detailed evaluation of the different classification and extraction components. We investigate the trade-off between training data volume and performance, and how generalisable a model is. For training volume, we fix a test set and reduce training data in chunks of 20% of the total. We test generalisability by training models on country-specific data and evaluating on unseen data from other countries. We report Accuracy overall, as well as Precision, Recall, and F1 on the include label in this binary classification task. Finally, we investigate the effect of thresholding classifier confidence, and sending low-confidence documents for manual human review, on both the accuracy of the system and on human time cost. We evaluate the sentence classifier and sequence-labelling approach with our CNN models. We also consider the impact of using document representations constructed with embeddings trained entirely on the source data, versus general-purpose GloVe embeddings (Pennington et al., 2014) trained on web data, versus general-purpose GloVe embeddings fine-tuned on the source data 10. As the sentence-level classifier is multi-label and multi-class, we report AUC (Area Under Curve). 11 For the phrase-level sequence labelling approach, we report F1 score. Document Classification: Veterinarian experts manually labelled papers as include/exclude: 608 papers from searches for 50 diseases for the countries of Ethiopia, Nigeria and Tanzania. We experimented with labelling 100 test documents: half via a reference manager/document reader 12 and half via a simple spreadsheet interface where one column contained the paper title, one contained the paper abstract, and the expert filled in a third column for the include/exclude label. 13 The spreadsheet method was 3 times faster than using a reference manager, enabling experts to complete the 608 papers of training data in 5 hours. Half the data contains country information, so we use only that half for our generalisability experiments. Data Extraction: 52 documents were randomly sampled from the set of documents manually classified for inclusion. The sampled documents covered 13 diseases for studies in Ethiopia, Nigeria and Tanzania. To select a manageable volume of data for annotation, and to avoid including noisy data from the PDF extraction process, we applied some restrictions. For the sentence-based task, all sentences of at least 9 words within the abstract were included, along with a random sample of 150 sentences (between 9 and 25 words long) from the results and methods sections. The sentence-length restriction was based on the fact that very short/long sentences were generally noisy due to the PDF conversion process. For the phrase-based task, sections were split into chunks of three sentences to preserve some context. The entire abstract was used, plus a random sample of 25 chunks from each of the methods and results sections. Table 1 briefly describes each item and the breakdown of label frequency in our annotated data. There is a clear imbalance in label frequency: some are not commonly reported in general (e.g. mortality, herd prevalence) while others are reported very few times per paper (e.g. study date). Data Volume: We trained document classification models using proportions from 20% to 100% of all data. Generalisability: Three document classification models were each trained on two of the three countries, with the final country held out. We included data volume ablations in these experiments as well. We trained the CNN model on sentence-labelled vs.
phrase-labelled data to assess the feasibility of using each annotation approach. Architecture & Pretraining: We experiment with five different architectures for the phrase-based models. We use the Prodigy CNN with randomly-initialised embeddings, the Prodigy CNN with frozen pre-trained embeddings, the Prodigy CNN with pre-trained embeddings fine-tuned on our data, distilBERT (Sanh et al., 2019), and SciBERT (Beltagy et al., 2019). The CNN is easy to implement out of the box, as it is built into the annotation tool, can be trained without access to a GPU, and could potentially be less data-hungry than a transformer; all important considerations in our resource-constrained setting. Adding pre-trained embeddings allows us to isolate the effect of pre-training from the effect of architecture. Since the phrase-labelling task is well suited to the masked language modelling objective, we additionally experiment with fine-tuning distilBERT (which is reasonably sized for our small amount of data) and SciBERT, to test whether the domain match of pre-trained data matters.
2
The model architecture, shown in Figure 1, is a slight variant of the CNN architecture proposed by Kim (2014). We define x_i ∈ R^k as the k-dimensional word vector (i.e., word embedding) corresponding to the i-th word in a tweet. We pad the tweets to make them all of equal length, and represent a tweet of length n as EQUATION, where ⊕ is the concatenation operator. In general, we refer to x_{i:i+j} as the concatenation of words x_i, x_{i+1}, . . . , x_{i+j}. Then, we apply a convolution operation that uses a filter w ∈ R^{hk} over a window of h words to produce a new feature. For example, we generate feature c_i from a window of words x_{i:i+h−1} by EQUATION. We denote b ∈ R as the bias term and f as a ReLU activation function defined as f(x) = x⁺ = max(0, x), where x is the input to the neuron (Glorot et al., 2011). The convolution layer applies the filter to each possible window of words in the sentence {x_{1:h}, x_{2:h+1}, . . . , x_{n−h+1:n}} to produce a feature map EQUATION, with c ∈ R^{n−h+1}. Then, we apply a max-over-time pooling operation over the feature map and take the maximum value ĉ = max{c} as the feature corresponding to this particular filter. The goal is to capture the most important feature (the highest feature value) for each feature map. The pooling scheme allows us to deal with variable sentence lengths. (Figure 1: The CNN architecture used to identify offensive tweets with a binary output layer.) We have described the process by which we extract one feature from one filter. The model uses multiple filters to obtain multiple features. These features feed a fully connected layer with a ReLU activation function, and finally a sigmoid layer that outputs the probability distribution over labels. For regularization, we employ a dropout layer with rate p = 0.2, with a constraint on the l2-norms of the weight vectors. We apply the dropout after the embeddings and the penultimate layer. Dropout prevents co-adaptation of hidden units by randomly dropping out (i.e., setting to zero) a proportion p of the hidden units during forward-backpropagation. That is, given the penultimate layer z = [ĉ_1, . . . , ĉ_m] (note that here we have m filters), instead of using EQUATION for output unit y in forward propagation, dropout uses EQUATION, where • is the element-wise multiplication operator and r ∈ R^m is a masking vector of Bernoulli random variables with probability p of being 1. Gradients are backpropagated only through the unmasked units. At test time, the learned weight vectors are scaled by p such that ŵ = pw, and ŵ is used (without dropout) to score unseen sentences. We additionally constrain the l2-norms of the weight vectors by rescaling w to have ||w||_2 = s whenever ||w||_2 > s after a gradient descent step.
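A compact PyTorch sketch of this architecture follows. The filter window sizes and counts are assumptions (the text does not list them), pre-trained embedding loading is omitted, and the l2-norm rescaling constraint would be applied separately after each optimizer step.

import torch
import torch.nn as nn

class OffensiveTweetCNN(nn.Module):
    # Kim-style CNN: embeddings -> 1D convolutions -> max-over-time pooling
    # -> ReLU dense layer -> dropout -> sigmoid output.
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 window_sizes=(3, 4, 5), dropout=0.2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, h) for h in window_sizes])
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(n_filters * len(window_sizes), n_filters)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, x):                               # x: (batch, n) padded token ids
        e = self.drop(self.emb(x)).transpose(1, 2)      # (batch, emb_dim, n)
        # each conv produces (batch, n_filters, n-h+1); max-over-time pooling follows
        pooled = [torch.relu(conv(e)).max(dim=2).values for conv in self.convs]
        z = self.drop(torch.relu(self.fc(torch.cat(pooled, dim=1))))
        return torch.sigmoid(self.out(z))               # probability of the offensive label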
2
At the heart of our approach there is the simplified Lesk algorithm. Given a text w 1 w 2 ...w n of n words, we disambiguate one at a time taking into account the similarity between the gloss associated to each sense of the target word w i and the context. The meaning whose gloss has the highest similarity is selected. The context could be represented by a subset of surrounding words or the whole text where the word occurs. Moreover, taking into account the idea of the Banerjee's adaptation, we expand each gloss with those of related meanings. Our sense inventory is BabelNet, a very large multilingual semantic network built relying on both WordNet and Wikipedia. In BabelNet linguistic knowledge is enriched with encyclopedic concepts coming from Wikipedia. WordNet synsets and Wikipedia concepts (pages) are connected in an automatic way. We choose BabelNet for three reasons: 1) glosses are richer and contain text from Wikipedia, 2) it is multilingual, thus the proposed algorithm can be applied to several languages, and 3) it also contains information about named entities, thus an algorithm using BabelNet could be potentially used to disambiguate entities.Our algorithm consists of five steps:1. Look-up. For each word w i , the set of possible word meanings is retrieved from BabelNet. First, we look for senses coming from WordNet (or WordNet translated into languages different from English). If no sense is found, we retrieve senses from Wikipedia. We adopt this strategy because mixing up all senses from Wikipedia and WordNet results in worse performance. Conversely, if a word does not occur in WordNet it is probably a named entity, thus Wikipedia could provide useful information to disambiguate it.2. Building the context. The context C is represented by the l words to the left and to the right of w i . We also adopt a particular configuration in which the context is represented by all the words that occur in the text.3. Gloss expansion. We indicate with s ij the j-th sense associated to the target word w i . We expand the gloss g ij that describes the j-th sense using the function "getRelatedMap" provided by BabelNet API. This method returns all the meanings related to a particular sense. For each related meaning, we retrieve its gloss and concatenate it to the original gloss g ij of s ij . During this step we remove glosses belonging to synsets related by the "antonym" relationship. The result of this step is an extended gloss denoted by g * ij . In order to give more importance to terms occurring in the original gloss, the words in the expanded gloss are weighed taking into account both the distance between s ij and the related synsets and the word frequencies. More details about term scoring are reported in Subsection 3.2. 4. Building semantic vectors. Exploiting the DSM described in Section 2, we build the vector representation for each gloss g * ij associated with the senses of w i and the context C.5. Selecting the correct meaning. For each gloss g * ij , the algorithm computes the cosine similarity between its vector representation and context vector C. The similarity is linearly combined with the probability p(s ij |w i ) that takes into account the sense distribution of s ij given the word w i ; details are reported in Subsection 3.1. 
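Step 5 can be sketched as follows; the cosine scoring over DSM vectors and the linear interpolation weight are illustrative assumptions (the exact combination with p(s_ij|w_i) is the one described in Subsection 3.1).

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def disambiguate(context_vec, sense_gloss_vecs, sense_priors, alpha=0.5):
    # Pick the sense whose expanded-gloss vector is most similar to the context
    # vector, linearly combined with the sense-distribution prior p(s_ij | w_i);
    # alpha is a hypothetical mixing weight.
    best, best_score = None, -np.inf
    for sense, gloss_vec in sense_gloss_vecs.items():
        score = alpha * cosine(context_vec, gloss_vec) + (1 - alpha) * sense_priors[sense]
        if score > best_score:
            best, best_score = sense, score
    return best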
The sense with the highest similarity is chosen. In order to compare our approach to the simplified Lesk algorithm, we developed a variation of our method in which, rather than building the semantic vectors, we count the common words between each extended gloss g*_ij and the context C. In this case, we apply stemming to maximize the overlap. The selection of the correct meaning also takes into account the sense distribution of the word w_i. We retrieve information about sense occurrences from WordNet (Fellbaum, 1998), which reports for each word w_i its sense inventory S_i with the number of times that the word w_i was tagged with s_ij in SemCor. SemCor is a collection of 352 documents manually annotated with WordNet synsets. We introduce the sense distribution factor in order to consider the probability that a word w_i can be tagged with the sense s_ij. Moreover, since some synsets do not occur in SemCor and can cause zero probabilities, we adopt additive smoothing (also called Laplace smoothing). Finally, the probability is computed as follows: p(s_ij | w_i) = (t(w_i, s_ij) + 1) / (#w_i + |S_i|) (2), where t(w_i, s_ij) is the number of times the word w_i is tagged with s_ij and #w_i is the number of occurrences of w_i in SemCor. The extended gloss conflates words from the gloss directly associated with the synset s_ij with those of the glosses appearing in the related synsets. When we add words to the extended gloss, we weigh them by a factor inversely proportional to the distance in the graph (number of edges) between s_ij and the related synsets, so as to reflect their different origin. Let d be that distance; then the weight is computed as 1/(1+d). Finally, we re-weigh words using a strategy similar to the inverse document frequency (IDF) that we call inverse gloss frequency (IGF). The idea is that if a word occurs in all the extended glosses associated with a word, then it poorly characterizes the meaning description. Let gf*_k be the number of extended glosses that contain a word w_k; then IGF is computed as follows: EQUATION. This approach is similar to the idea proposed by Vasilescu et al. (2004), where the TF-IDF of terms is computed taking into account the glosses in the whole WordNet, while we compute IGF considering only the glosses associated with each word. Finally, the weight for the word w_k appearing h times in the extended gloss g*_ij is given by: weight(w_k, g*_ij) = h × IGF_k × 1/(1 + d) (4).
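The smoothed sense prior (Eq. 2) and the term weight (Eq. 4) are simple to compute; since the exact IGF equation is not shown above, a log-scaled, IDF-like form is assumed in the sketch below and should be treated as a placeholder.

import math

def sense_prior(count, total_word_count, n_senses):
    # Laplace-smoothed p(s_ij | w_i) from SemCor counts (Eq. 2).
    return (count + 1) / (total_word_count + n_senses)

def term_weight(h, n_glosses, gloss_freq, d):
    # Weight of a word occurring h times in an extended gloss (Eq. 4):
    # IGF down-weights words appearing in many of the word's extended glosses;
    # 1/(1+d) discounts words coming from synsets d edges away.
    igf = math.log(n_glosses / gloss_freq)   # assumed IDF-like form of IGF
    return h * igf * (1.0 / (1.0 + d))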
2
In this section, we introduce our E2GRE framework. First, we describe how to generate entity-guided inputs. Then we present how to jointly train RE with evidence prediction, and finally show how to combine this with our evidence-guided attentions. We use BERT as our pretrained LM when describing our framework. The goal of relation extraction is to predict the relation label between every head/tail (h/t) pair of given entities in a given document. Most standard models approach this problem by feeding in an entire document and then extracting all of the head/tail pairs to predict relations. (Figure 2: Diagram of our E2GRE framework. As shown in the diagram, we pass an input sequence consisting of an entity and a document into BERT. We extract heads and tails for relation extraction. We show the learned relation vectors in grey. We extract the sentence representations and BERT attention probabilities for evidence prediction.) Instead, we design entity-guided inputs to give BERT more guidance towards the entities during training. Each training input is organized by concatenating the tokens of the first mention of a head entity, denoted by H, together with the document tokens D, to form: "[CLS]" + H + "[SEP]" + D + "[SEP]", which is then fed into BERT. 2 We generate these input sequences for each entity in the given document. Therefore, for a document with N_e entities, N_e new entity-guided input sequences are generated and fed into BERT separately. Our framework predicts N_e − 1 different sets of relations for each training input, corresponding to N_e − 1 head/tail entity pairs. After passing a training input through BERT, we extract the head entity embedding and a set of tail entity embeddings from the BERT output. After obtaining the head entity embedding h ∈ R^d and all tail entity embeddings {t_k | t_k ∈ R^d} in an entity-guided sequence, where 1 ≤ k ≤ N_e − 1, we feed them into a bilinear layer with the sigmoid activation function to predict the probability of the i-th relation between the head entity h and the k-th tail entity t_k, denoted by ŷ_ik, as follows: ŷ_ik = δ(h^T W_i t_k + b_i) (1), where δ is the sigmoid function, W_i and b_i are the learnable parameters corresponding to the i-th relation, 1 ≤ i ≤ N_r, and N_r is the number of relations. Finally, we fine-tune BERT with a multi-label cross-entropy loss. During inference, we group the N_e − 1 predicted relations for each entity-guided input sequence from the same document to obtain the final set of predictions for a document. Evidence sentences are sentences which contain important facts for predicting the correct relationships between head and tail entities. Therefore, evidence prediction is a very important auxiliary task to relation extraction, and it also provides explainability for the model. We build our evidence prediction upon the baseline introduced by Yao et al. [2019], which we describe next. Let N_s be the number of sentences in the document. We first obtain the sentence embeddings s ∈ R^{N_s×d} by averaging all the embeddings of the words in each sentence (i.e., Sentence Extraction in Fig. 2). These word embeddings are derived from the BERT output embeddings. Let r_i ∈ R^d be the embedding of the i-th relation r_i (1 ≤ i ≤ N_r), which is learnable and initialized randomly in our model. We employ a bilinear layer with a sigmoid activation function to predict the probability of the j-th sentence s_j being an evidence sentence w.r.t.
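A minimal PyTorch sketch of the relation scorer in Eq. 1, using nn.Bilinear to stand in for the per-relation parameters W_i and b_i; the multi-label binary cross-entropy training loop is omitted.

import torch
import torch.nn as nn

class BilinearRelationScorer(nn.Module):
    # Eq. 1: y_hat_ik = sigmoid(h^T W_i t_k + b_i) for every relation i.
    def __init__(self, hidden_dim, n_relations):
        super().__init__()
        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, n_relations)

    def forward(self, head, tails):
        # head: (d,) head-entity embedding; tails: (N_e - 1, d) tail embeddings
        h = head.unsqueeze(0).expand(tails.size(0), -1).contiguous()
        return torch.sigmoid(self.bilinear(h, tails))   # (N_e - 1, N_r) probabilities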
the given i-th relation r_i as follows: EQUATION, where s_j represents the embedding of the j-th sentence, and W_ri/b_ri and W_ro/b_ro are the learnable parameters w.r.t. the i-th relation. We define the loss of evidence prediction under the given i-th relation as follows: EQUATION, where y^j_ik ∈ {0, 1}, and y^j_ik = 1 means that sentence j is an evidence sentence for the i-th relation. It should be noted that in the training stage, we use the embedding of the true relation in Eq. 2. In the testing/inference stage, we use the embedding of the relation predicted by the relation extraction model. In [Yao et al., 2019], the baseline relation extraction loss L_RE and the evidence prediction loss are combined as the final objective function for joint training: L_baseline = L_RE + λ * L_Evi (4), where λ > 0 is the weight factor to trade off the two losses, which is data dependent. In order to compare to our models, we utilize a BERT baseline to predict the relation extraction loss and the evidence prediction loss. Pretrained language models have been shown to be able to implicitly model semantic relations internally. By looking at internal attention probabilities, Clark et al. [2019] have shown that BERT learns coreference and other semantic information in later BERT layers. In order to take advantage of this inherent property, our framework attempts to give more guidance on where the correct semantics for RE are located. For each pair of head h and tail t_k, we introduce the idea of using internal attention probabilities extracted from the last l internal BERT layers for evidence prediction. Let Q ∈ R^{N_h×L×(d/N_h)} be the query and K ∈ R^{N_h×L×(d/N_h)} be the key of the multi-head self-attention layer, N_h be the number of attention heads as described in [Vaswani et al., 2017], L be the length of the input sequence, and d be the embedding dimension. We first extract the output of multi-headed self-attention (MHSA) A ∈ R^{N_h×L×L} from a given layer in BERT as follows. These extraction outputs are shown as the Attention Extractor in Fig. 2. EQUATION Att-head_i = Attention(Q W^Q_i, K W^K_i) (6); A = Concat(Att-head_1, • • •, Att-head_n) (7). For a given pair of head h and tail t_k, we extract the attention probabilities corresponding to the head and tail tokens to help relation extraction. Specifically, we concatenate the MHSAs for the last l BERT layers extracted by Eq. 7 to form an attention probability tensor Ã_k ∈ R^{l×N_h×L×L}. Then, we calculate the attention probability representation of each sentence under the given head-tail entity pair (h, t_k) as follows. 1. We first apply a maximum pooling layer along the attention-head dimension (i.e., the second dimension) over Ã_k. The max values are helpful to show where a specific attention head might be looking. Afterwards we apply mean pooling over the last l layers. We obtain Ã_s = (1/l) Σ_{i=1}^{l} maxpool(Ã_ki), Ã_s ∈ R^{L×L}, from these two steps. 2. We then extract the attention probabilities of the head and tail entity tokens according to their start and end positions in the document. We average the attention probabilities over all the tokens of the head and tail entities to obtain Ã_sk ∈ R^L. 3. Finally, we generate sentence representations from Ã_sk by averaging over the attentions of each token in a given sentence of the document to obtain a_sk ∈ R^{N_s}. Once we get the attention probabilities a_sk, we pass the sentence embeddings F^i_k from Eq. 2 through a transformer layer to encourage inter-sentence interactions and form the new representation Ẑ^i_k.
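The three pooling/averaging steps above can be sketched compactly; tensor shapes and the span bookkeeping are assumptions for illustration rather than the released code.

import torch

def sentence_attention_features(attn_stack, head_pos, tail_pos, sentence_spans):
    # attn_stack: (l, N_h, L, L) MHSA probabilities from the last l BERT layers;
    # head_pos/tail_pos: lists of token indices of the head/tail entity mentions;
    # sentence_spans: list of (start, end) token offsets per sentence.
    a = attn_stack.max(dim=1).values.mean(dim=0)                 # max over heads, mean over layers -> (L, L)
    entity_rows = torch.cat([a[head_pos], a[tail_pos]], dim=0)   # rows of head/tail tokens
    a_sk = entity_rows.mean(dim=0)                               # average over entity tokens -> (L,)
    return torch.stack([a_sk[s:e].mean() for (s, e) in sentence_spans])  # a_sk per sentence -> (N_s,)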
We combine a_sk with Ẑ^i_k and feed the result into a bilinear layer with a sigmoid (δ) for evidence sentence prediction as follows: EQUATION. Finally, we define the loss of evidence prediction under a given i-th relation based on the attention probability representation as follows: EQUATION, where ŷ^{ja}_ik is the j-th value of ŷ^a_ik computed by Eq. 8. Here we combine the relation extraction loss and the attention-guided evidence prediction loss as the final objective function for joint training: EQUATION, where λ_a > 0 is the weight factor to trade off the two losses, which is data dependent.
2
Because the boundaries of a mention of a negative symptom are somewhat open to debate, due to the wide variety of ways in which psychiatric professionals may describe a negative symptom, we defined the boundaries to be sentence boundaries, thus transforming the problem into a sentence classification task. However, for evaluation purposes, precision, recall and F1 are used here, since observed agreement is not appropriate for an entity extraction task, giving an inflated result due to the inevitably large number of correctly classified negative examples. Due to the requirements of the use case, our work was biased toward achieving a good precision. Future work making use of the data depends upon the results being of good quality, whereas a lower recall will only mean that a smaller proportion of the very large amount of data is available. For this reason, we aimed, where possible, to achieve precisions in the region of 0.9 or higher, even at the expense of recalls below 0.6. Our approach was to produce a rapid prototype with a machine learning approach, and then to combine this with rule-based approaches in an attempt to improve performance. Various methods of combining the two approaches were tried. Machine learning alone was performed using support vector machines (SVMs). Two rule phases were then added, each with a separate emphasis on improving either precision or recall. The rule-based approach was then tried in the absence of a machine learning component, and in addition both overriding the ML where it disagreed and being overridden by it. Rules were created using the JAPE language (Cunningham et al., 2000). Experiments were performed using GATE (Cunningham et al., 2011; Cunningham et al., 2013), and the SVM implementation provided with GATE (Li et al., 2009). Evaluation was performed using five-fold cross-validation, to give values for precision, recall and F1 using standard definitions. For some symptoms, active learning data were available (see Section 2.2.1), comprising a list of examples chosen for having a low confidence score on earlier versions of the system. For these symptoms, we first give a result for systems trained on the original dataset. Then, in order to evaluate the impact of this intervention, we give results for systems trained on data including the specially selected data. However, at test time, these data constitute a glut of misrepresentatively difficult examples that would have given a deflated result. We want to include these only at training time and not at test time. Therefore, the fold that contained these data in the test set was excluded from the calculation. For these symptoms, evaluation was based on the four out of five folds where the active learning data fell in the training set. The symptoms to which this applies are abstract thinking, affect, emotional withdrawal, poverty of speech and rapport. In the next section, results are presented for these experiments. The discussion section focuses on how results varied for different symptoms, both in the approach found optimal and the result achieved, and why this might have been the case. Table 1 shows results for each symptom obtained using an initial "rapid prototype" support vector machine learner. The confidence threshold in all cases is 0.4, except for negative symptoms, where the confidence threshold is 0.6 to improve precision.
Features used were word unigrams in the sentence in conjunction with part of speech (to distinguish for example "affect" as a noun from "affect" as a verb) as well as some key terms flagged as relevant to the domain. Longer n-grams were rejected as a feature due to the small corpus sizes and consequent risk of overfitting. A linear kernel was used. The soft margins parameter was set to 0.7, allowing some strategic misclassification in boundary selection. An uneven margins parameter was used (Li and Shawe-Taylor, 2003; Li et al., 2005) and set to 0.4, indicating that the boundary should be positioned closer to the negative data to compensate for uneven class sizes and guard against small classes being penalized for their rarity. Since the amount of data available was small, we were not able to reserve a validation set, so care was taken to select parameter values on the basis of theory rather than experimentation on the test set, although confidence thresholds were set pragmatically. Table 1 also gives the number of classes, including the negative class (recall that different symptoms have different numbers of classes), and number of training examples, which give some information about task difficulty.
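As a rough illustration of this setup, the sketch below trains a linear SVM sentence classifier with fivefold cross-validation and reports precision, recall and F1. GATE's uneven-margins SVM and the POS-conjoined unigram features are not reproduced here; class_weight="balanced" stands in for the uneven-margins parameter, carrying the 0.7 soft-margin value over to scikit-learn's C is only a loose analogy, and binary labels are assumed.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC


def evaluate_symptom_classifier(sentences, labels):
    """sentences: list of str, labels: binary list (1 = symptom mention)."""
    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 1)),      # word unigrams only, as above
        LinearSVC(C=0.7, class_weight="balanced"),  # rough stand-in for uneven margins
    )
    scores = cross_validate(
        clf, sentences, labels, cv=5,
        scoring=("precision", "recall", "f1"),
    )
    return {m: scores[f"test_{m}"].mean() for m in ("precision", "recall", "f1")}
```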
2
The algorithm can be divided into the following two steps:1. Find the word to word alignment for each entry in the terminology bank, 2. Assign a synset to the Chinese word sense by resolving the sense ambiguities of its aligned English word.The first step is to find all possible English translations for each Chinese word, which make it possible to link Chinese words to WordNet synsets. Since the English translation may be ambiguous, the purpose of second step is to employ a word sense disambiguation algorithm to select the appropriate synset for the Chinese word. For example, the term pair (water tank, 水 槽 ) will be aligned as (water/水 tank/槽 ) in the first step, so the Chinese word 槽 can be linked to WordNet synsets by its translation tank. But tank has five senses in WordNet as follows:tank_n_1: an enclosed armored military vehicle, tank_n_2: a large vessel for holding gases or liquids, tank_n_3: as much as a tank will hold, tank_n_4: a freight car that transports liquids or gases in bulk, tank_n_5: a cell for violent prisoners.The second step is applied to select the best sense translation. In the following subsections, we will describe the detail algorithm of word alignment in section 3.1 and word sense disambiguation in section 3.2.For a Chinese term and its English translation, it is natural to think that the Chinese term is translated from the English term word for word. So, the purpose of word alignment is to connect the words which have a translation relationship between the Chinese term and its English portion. In past years, several statistical-based word alignment methods have been proposed. [Brown et al. 1993] proposed a method of word alignment which consists of five translation models, also known as the IBM translation models. Each model focuses on some features of a sentence pair to estimate the translation probability. [Vogel et al. 1996] proposed the Hidden-Markov alignment model which makes the alignment probabilities dependent on the alignment position of the previous word rather than on the absolute positions. [Och and Ney 2000] proposed some methods to adjust the IBM models to improve alignment performance.The word alignment task in this paper only focuses on the term pairs of a bilingual terminology bank. Since the length of a term is usually far less than a sentence, some features, such as word position, are no longer important in the task. In this paper, we employ the IBM-1 model, which only focuses on lexical generating probability, to align the words of a bilingual terminology bank.For convenience, we follow the notion of [Brown et al. 1993] , which defines word alignment as follows:Suppose we have a English term e = e 1 ,e 2 ,…,e n where e i is an English word, and its corresponding Chinese term c = c 1 ,c 2 ,…,c m where c j is a Chinese word. An alignment from e to c can be represented by a series a=a 1 ,a 2 ,…,a m where each a j is an integer between 0 and n, such that if c j is partial (or total) translation of e i , then a j = i and if it is not translation of any English word, then a j =0.For example, the alignments shown in Figure 2 are two possible alignments from English to Chinese for the term pair (practice teaching, 教學 實習), (a) can be represented by a=1,2 while (b) can be represented by a=2,1. In the word alignment stage, given a pair of terms c and e, we want to find the most likely alignment a=a 1 ,a 2 ,…,a m , to maximize the alignment probability P(a|c,e) for the pair. 
The formula can be represented as follows:EQUATIONwhere â is the best alignment of the possible alignments. Suppose we already have lexical translation probabilities for each of the lexical pairs, then, the alignment probability P(a|c,e) can be estimated by means of the lexical translation probabilities as follows: The probability of c given e, P(c|e), is a constant for a given term pair (c,e), so formula 1 can be estimated as follows: . 2For example, the probability of the alignment shown in Figure 2 (a) can be estimated by:1 ( , | ) ( | , ) ( | )/ ( | ) ( | )P(c 1 |e 1 )P(c 2 |e 2 ) = P( 教學 | practice) P( 實習 | teaching) = 0.000480 x 1.14x10 -13 =5.48x10 -17 .While (b) can be estimated by:P(c 1 |e 2 )p(c 2 |e 1 ) = P( 教學 | teaching)P( 實習 | practice ) = 0.6953 x 0.0940 = 0.0654.In this example, the probability of alignment (b) is larger than (a) in Figure 2 . So the alignment (b), (教學/teaching 實習/practice), is a better choice than (a), (教學/practice 實習 /teaching), for the term pair (practice teaching, 教學 實習). The remaining problem of this stage is how to estimate the translation probability p(c|e) for all possible English-Chinese lexical pairs.The method of our translation probability estimation uses the IBM model 1 [Brown et al. 1993] , which is based on the EM algorithm [Dempster et al. 1977] , for maximizing the likelihood of generating the Chinese terms, which is the target language, given the English portion, which is the source language. Suppose we have an English term e and its Chinese translation c in the terminology bank T; e is a word in e, and c is a word in c. The probability of word c given word e, P(c|e), can be estimated by iteratively re-estimating the following EM formulae:Initialization:1 ( | ) | | P c e C = ;(3) E-step: EQUATIONChinese Words from Bilingual Terminology Bank M-step: In the EM training process, we initially assume that the translation probability for any Chinese word c given English word e, P(c|e), is uniformly distributed as in formula 3, where C denotes the set of all Chinese words in the terminology bank. In the E-step, we estimate the expected number of times that e connects to c in the term pair (c,e). As in formula 4, we sum up the expected counts of the connection from e to c over all possible alignments which contain the connection. Formula 5 is the detailed definition of the probability of an alignment a given (c,e). Usually, it is hard to evaluate the formulae in E-step. Fortunately, it has been proven [Brown et al. 1993 ] that the expectation formulae, 4 and 5, can be merged and simplified as follows: After merging and simplifying, as formula 7, the E-step becomes very simple and effective for computing.| | ( ) ( ) 1 | | ( ) ( ) 1 ( , ; , ) ( | ) ( , ; , ) T t t t T t t t vIn the M-step, we re-estimate the translation probability, P(c|e). As shown in formula 6, we sum up the expected number of connections from e to c over the whole bank divide by the expected number of c.The training process will count the expected number, E-step, and re-estimate the translation probability, M-step, iteratively until it has converged.For instance, as the example shown in Figure 2 , the English term e= practice teaching and Chinese term c=教學 實習 are given. Assume the total number of Chinese words in the terminology bank is 100,000. 
Initially, the probabilities of each translation are as follows:P( 教學 | practice) = 1 | | C = 0.00001, P( 教學 | teaching) = 1 | | C = 0.00001, P( 實習 | practice) = 1 | | C = 0.00001, P( 實習 | teaching) = 1 | | C = 0.00001.In E-step, we count the expected number for all possible connections in the term pair: In M-step, we first count the global expected number of each translation by summing up the expected number of each data entry over the whole term bank: After the global expected number of each translation has been counted, we can re-estimate the translation probabilities by means of the expected numbers: = 0.00632, Z( 教學 ,EQUATIONEQUATIONEQUATIONEQUATIONEQUATIONAs was mentioned in Section 3.1.1, the goal of word alignment is to find the best alignment candidate to maximize the translation probability of a term pair. However, in real situations there are some problems that have to be solved:1. Cross connections: assume there is a series of words, c j ,c j+1 ,c j+2 in a Chinese term, if c j and c j+2 connect to the same English word while c j+1 connects to any other word, we call this Chinese Words from Bilingual Terminology Bank alignment contains a cross connection. There is an example of cross connection shown in Figure 7 . The Chinese word 校 is more likely to connect to examination shown in Figure 8 . 2. Function words: in word alignment stage, function words are usually ignored except when they are part of compound words. For example, Figure 9 , of is a part of a compound which can not be skipped, while in Figure 10 , of can be skipped. In order to solve this problem, two constraints are imposed on the alignment algorithm. Formula 1 is altered by using a cost function instead of probability, defined as follows:EQUATIONwhere cost function is given by: EQUATIONThe cross connection function is used to detect the cross connection in an alignment candidate. If a cross connection is found, the alignment candidate will be assigned a large cost value. The function was given by: EQUATIONThere are two connection directions in word alignment: from Chinese to English, (where Chinese is the source language while English is the target language), and from English to Chinese. The alignment method of the IBM models has a restriction; a word of target language can only be connected to exactly one word of the source language. This restriction causes two words in the source language not to be able to connect to a word in the target language.For example, in Figure 11 , for alignment from Chinese to English, cedar should be connected to both 雪 and 松, but the model does not allow the connection in this direction. In order to solve this problem, the alignments of these two directions are merged using the following steps: 1. Align from Chinese to English. Each word of an English compound will be connected by the same Chinese word in this step which will be treated as an alignment unit in the next step. 2. Align from English to Chinese. Each word of a Chinese compound will be connected to the same English unit, a word or merged compound, in this step.For example, universal gravitation was merged in step 1 while 雪 and 松 were not merged in the same step, as shown in Figure 13 . In step2, 雪 and 松 were merged and universal gravitation will be treated as a unit in the same step, as shown in Figure 14 . After these two steps, all of the compounds in each language will be merged. 
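The sketch below re-implements, in simplified form, the two components walked through above: IBM Model 1 EM estimation of P(c|e) over term pairs, and cost-based alignment selection with the cross-connection penalty. It omits the NULL word, the function-word handling and the bidirectional merging of compounds; the penalty value and the toy term pairs are illustrative assumptions.

```python
from collections import defaultdict
from itertools import product
from math import log


def train_ibm1(term_pairs, n_iters=10):
    """term_pairs: list of (chinese_tokens, english_tokens) pairs."""
    chinese_vocab = {c for c_toks, _ in term_pairs for c in c_toks}
    t = defaultdict(lambda: 1.0 / len(chinese_vocab))      # uniform init (formula 3)

    for _ in range(n_iters):
        count = defaultdict(float)                          # expected (c, e) counts
        total = defaultdict(float)                          # expected counts of e
        for c_toks, e_toks in term_pairs:                   # E-step
            for c in c_toks:
                z = sum(t[(c, e)] for e in e_toks)
                for e in e_toks:
                    delta = t[(c, e)] / z
                    count[(c, e)] += delta
                    total[e] += delta
        for (c, e), n in count.items():                     # M-step
            t[(c, e)] = n / total[e]
    return t


def has_cross_connection(a):
    # c_j and c_{j+2} aligned to the same English word while c_{j+1} is not
    return any(a[j] == a[j + 2] != a[j + 1] for j in range(len(a) - 2))


def best_alignment(c_toks, e_toks, t, cross_penalty=1e6):
    """Enumerate alignments (feasible for short terms) and pick the lowest cost."""
    best, best_cost = None, float("inf")
    for a in product(range(len(e_toks)), repeat=len(c_toks)):
        cost = -sum(log(t[(c, e_toks[a[j]])] + 1e-12) for j, c in enumerate(c_toks))
        if has_cross_connection(a):
            cost += cross_penalty
        if cost < best_cost:
            best, best_cost = list(a), cost
    return best


pairs = [(["教學", "實習"], ["practice", "teaching"]),
         (["教學"], ["teaching"]), (["實習"], ["practice"])]
t = train_ibm1(pairs)
print(best_alignment(["教學", "實習"], ["practice", "teaching"], t))
# -> [1, 0]: 教學/teaching, 實習/practice, as in the example above
```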
Figure 15 shows some examples of word alignment in these experiments.Chinese When we tag Chinese words with WordNet senses, if the translation of a word has only one sense, a monosemous word, it can be tagged with that sense directly. If the translation has more than one sense, we should use a disambiguation method to get the appropriate sense. In past years, a lot of word sense disambiguation (WSD) methods have been proposed, including supervised, bootstrapping, and unsupervised. Supervised and bootstrapping methods usually resolve an ambiguity in the collocations of the target word, which implies that the target word should be in a complete sentence. These are not appropriate for this project's data. When some statistical based unsupervised methods are not accurate enough, they will add too much noise to the results. For the purpose of building a high quality dictionary, we tend to use a high precision WSD method which should also be appropriate for a bilingual term bank. We employ some heuristic rules, which are motivated by [Atserias et al. 1997] , described as follows:Heuristic 1.If e i is a morpheme of e then pick the sense of e i , say s j , which contains hyponym e. This heuristic rule works for head morphemes of compounds. For example, as shown in figure 16 , the term pair (water tank, 水 槽 ) is aligned as (water/水 tank/槽 ). There are five senses for tank. The above heuristic rule will select tank-2 as the sense of tank/槽 because there is only one sense of water tank and the sense is a hyponym of tank-2. In this case, the sense of water tank can be tagged as water tank-1 and tank can be tagged as tank-2. Figure 16. water tank-1 is a hyponym of tank-2 .Heuristic 2.Suppose the set {e 1 ,e 2 ,…,e k } contains all possible translations of Chinese word c,Case 1: If {e 1 ,e 2 ,…,e k } share a common sense s t , then pick s t as their sense.Case 2: If one element of the set {e 1 ,e 2 ,…,e k }, say e i , has a sense s t which is the hypernym of synsets corresponding to the rest of the words. We say that they nearly share the same sense and pick s t as the sense e i , pick the corresponding hyponyms as the sense of the rest of words.An example of case 1 is the translations of 腳踏車, {bicycle, bike, wheel}, which are a subset of a synset. This means that the synset is the common sense of these words and we can pick it as the words' sense. An example of case 2, as shown in figure 17, is the translations of 信號旗, {signal, signal flag, code flag}, although these words do not exactly share the same sense, one sense of signal is the hypernym of signal flag and code flag. This means that they nearly share the same sense; we pick the hypernym, signal-1, as the sense of signal and the corresponding hyponyms as the sense of signal flag and code flag. If some of the translations of c are tagged in the previous steps and the results show that the translations of c is always tagged with the same sense, we think c to have mono sense, so pick that sense as the sense of untagged translations.1 signal-1 code_flag-1 water tank-1 … tank-2 …In the previous steps, many Chinese-English pairs have been tagged with WordNet senses. In these tagged instances, we found that some Chinese words were always tagged with the same synset, although they may have many different English translations, and these English words may be ambiguous themselves. 
The untagged translations of the Chinese word can then be tagged with the same synset. For example, as shown in Figure 18, 防波堤 has many different translations and some of them are ambiguous in WordNet (groin has 3 senses in WordNet). In fact, the seemingly different senses tagged in the previous steps are all indexed by the same synset in WordNet, so we assume that 防波堤 has a single sense and tag all of its instances with that synset.
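Using NLTK's WordNet interface, the heuristics can be sketched as below. The function names are illustrative, the lookup assumes the compound appears as a lemma in WordNet's noun hierarchy, and Heuristic 2 is shown only for its first case (a synset shared by all translations).

```python
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")


def heuristic1(compound: str, head: str):
    """Pick the sense of `head` whose hyponym tree contains `compound` (Heuristic 1)."""
    target = compound.replace(" ", "_").lower()
    for sense in wn.synsets(head, pos=wn.NOUN):
        for hypo in sense.closure(lambda s: s.hyponyms()):
            if target in (l.lower() for l in hypo.lemma_names()):
                return sense
    return None


def heuristic2_case1(translations):
    """Return a synset shared by all translations of a Chinese word, if any."""
    common = set(wn.synsets(translations[0]))
    for w in translations[1:]:
        common &= set(wn.synsets(w))
    return next(iter(common), None)


print(heuristic1("water tank", "tank"))           # the vessel sense of "tank", if present
print(heuristic2_case1(["bicycle", "bike", "wheel"]))
```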
2
We attempt to improve the Alignment Error Rates (AER) achieved by University of Zurich (Rios et al., 2012) by duplicating the results (QU-DE and QU-ES) using the same corpora and resources from their project. Then, we modify the final growth-and-reordering algorithm that Moses provides from the Giza++ alignment. It is important to note that our focus will be on the alignment ideas performed by Rios et al. (2012) ; therefore, we use IBM Model 1 and its lexical matching as a first step rather than focus on other, more complicated, models. All of the corpora used in this project coincide with the corpora used in the tree-banking project at the University of Zurich (Llitjós, 2007) .After duplicating the AER published by Rios et al. (2012) , we create reference sentences in Finnish. This is done by translating the previous (Spanish) reference sentences to Finnish using a Moses system trained on Europarl. Then, we manually align Quechua words to Finnish words. Slight adaptations were made to the original target reference sentences. However, the difference can be considered negligible (less than 2 words on average per sentence).With the reference corpora created, we modify Giza++'s algorithm for alignment, the EM algorithm presented in the book by Koehn (2009) , by adding practical pronoun possessive rules. After rule insertion, we rerun a new Moses (QU-FI) execution and record alignment rates by comparing the new output to our reference corpora.The alignment technique we use attempts to naturally align Quechua with another language that has more readily available corpora -Finnish. Finnish has been chosen because it is quite agglutinative and, in many cases, suffix-based grammatical rules are used to modify words in the Finnish language similar to Quechua. In order to better exemplify agglutination, the example below is presented:• Infinitive Finnish verb "to correct": korja• Conjugate Finnish verb "to correct": korjaame (stem is korjaa)• Infinitive Quechua verb "to correct": allinchay• Conjugate Quechua verb "to correct": allinchaychik (stem is allinchay)There are two main figures from the word evaluation summary table published in the parallel tree-banking paper (Rios et al., 2012 ) that are of most concern: 1) Spanish to Quechua words and 2) Spanish to Quechua inflectional groups. Respectively, the Alignment Error Rate (AER) achieved by the Zurich group are: 1) 85.74 and 2) 74.05. The approach taken in the parallel tree-banking paper is to use inflectional groups that will group word parts, known as lexicons (Becker, 1975) , in order to translate unknown source (Spanish) words. Since Giza++ attempts reverse translations, it could be determined that a reverse translation from Quechua to Spanish would also produce around eighty percent AER. That is because the parameters used in Rios et al. (2012) 's work do not align null words and use the default methods for alignment in Giza++. The rules are not necessarily supervised because they use inflection groups(IG). An IG is a way of applying a tag to a word by annotating it according to a classification with a specific group as was done by Rios et al. (2012) .Quechua is based on a morphological structure that depends on suffixes to determine the meaning of root words that would otherwise be infinitive verbs. We modify the EM algorithm from Koehn (2009) to increase the likelihood of a word containing a desired morpheme match that has not been classified. That way matches are always done on words found in the past rather than a group of phrases. 
We modify the EM algorithm because other models, outside of IBM Model 1 and IBM Model2, are commonly based on fertility (Schwenk, 2007) and, thus, are not helpful when attempting to translate scarce-resource languages like Quechua. Furthermore, applying probabilities to words that cannot be aligned by a phrasal approach, where the "null" qualifier is allowed, could actually harm the output. For our purpose, which is to produce better alignment error rates than those presented in the University of Zurich parallel tree-banking project (Rios et al., 2012) , all models with exception of IBM Model 1, are excluded leaving a single sentence iteration for probability purposes. While a single iteration may not be the most optimum execution operation for likelihood expectation, it serves well as a determinant for the rule-based probability. One can imagine aspects of the higher order IBM models that don't involve fertility could be useful. e.g., aspects involving distance or relative distance between matching words.We also show that using Spanish as the pivot language for translations to Finnish makes suffixes, or morphemes, easier to align and makes inflectional grouping less necessary. Rules can be added that simply start at the end of the source word and compare them to the end of the target word. Each suffix has its own meaning and use that can be aligned using rule-based heuristics to determine the best word match. Our experiments described below show that the result of changing the target language increases the probability of lower alignment error rates.Finnish has been chosen here for detecting pronouns through suffix identification. Pronouns in Finnish are in many cases added to the end of the stem word, or lemma, in order to signify possession or direction much like is done in Quechua. While we were unable to identify all of the suffixes with their pronouns in Quechua, we show that by adding two pronoun and possession rules we achieve higher AER.Finnish is also ideal because rendering of Finnish sentences from Spanish sentences using a version of Moses trained on Europarl is easier than Quechua to Spanish. That makes choosing a pivot language, such as Spanish, the ideal candidate for translating the QU-ES texts to QU-FI texts and vice-versa. And, while the use of Finnish alone may be considered one of the most important factors in the alignment experiment, the focus of this paper is the adding of rules to the suffixes of both languages in order to better the AER found in previous QU-ES experiments.Here we are working with lexical alignment between two like languages, one with low resources available. That makes a pivot language necessary. The advantage of translating by using a pivot language without a bilingual corpus available has been shown in the past by Wu and Wang (2007) . By using the pivot language, we are able to translate Quechua to Finnish without having any Finnish translations directly available for Quechua. We use Finnish as the target language and Spanish as the pivot language for the alignment strategy of logical word pairing between Finnish and Quechua through their similar suffix incorporation.
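The following sketch only illustrates where such suffix rules could enter the pipeline: a lexical translation probability is boosted when a Quechua word and a Finnish word end in suffixes listed in a hand-written correspondence table. The two suffix pairs shown (Quechua -y / Finnish -ni, Quechua -yki / Finnish -si) are examples rather than a validated resource, and the boost factor is an arbitrary assumption.

```python
# Hypothetical possessive-suffix correspondences used only for illustration.
SUFFIX_RULES = {
    "y": ["ni"],      # 1st-person possessive
    "yki": ["si"],    # 2nd-person possessive
}


def rule_boost(quechua_word: str, finnish_word: str, base_prob: float,
               boost: float = 2.0) -> float:
    """Boost P(finnish|quechua) when both words carry corresponding suffixes."""
    for qu_suf, fi_sufs in SUFFIX_RULES.items():
        if quechua_word.endswith(qu_suf) and any(
                finnish_word.endswith(s) for s in fi_sufs):
            return min(1.0, base_prob * boost)
    return base_prob
```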
2
This section first provides an overview of our method, followed by subsections describing its components. We follow previous clustering-based approaches, where text segments are first clustered into semantically similar groups, exploiting redundancy as a salience signal. Then, each group is fused to generate a merged sentence, while avoiding redundancy. As we operate at the propositionlevel, we first extract all propositions from the input documents ( §3.1). Then, to facilitate the clustering step, we filter out non-salient propositions using a salience model ( §3.2). Next, salient propositions are clustered based on their semantic similarity ( §3.3). The largest clusters, whose information was most repeated, are selected to be included in the summary ( §3.4). Finally, each cluster is fused to form a sentence for a bullet-style abstractive summary ( §3.5). In addition, we provide an extractive version where a representative (source) proposition is selected from each cluster (3.6). Overall, clustering explicit propositions induces a multi-step process that requires dedicated training data for certain steps. To that end, we derive new training datasets for the salience detection and the fusion models from the original gold summaries. The full pipeline is illustrated in Figure 2 , where additional implementation details are in §B in the Appendix.Aiming to generate proposition-based summaries, we first extract all propositions from the source documents using Open Information Extraction (Ope-nIE) (Stanovsky et al., 2018) 2 , following Ernst et al. (2021) . To convert an OpenIE tuple containing a predicate and its arguments into a proposition string, we simply concatenate them by their original order, as illustrated in Figure 3 in the Appendix.To facilitate the clustering stage, we first aim to filter non-salient propositions by a supervised model. To that end, we derive gold labels for proposition salience from the existing reference summaries. Specifically, we select greedily propositions that maximize ROUGE-1 F-1 + ROUGE-2 F-1 against their reference summaries (Nallapati et al., 2017; Liu and Lapata, 2019) and marked them as salient.Cluster A• The agreement will make Hun Sen prime minister and Ranariddh president of the National Assembly.• ...to a coalition deal...will make Hun Sen sole prime minister and Ranariddh president of the National Assembly.• The deal, which will make Hun Sen prime minister and Ranariddh president of the National Assembly...ended more than three months of political deadlock Cluster F• Hun Sen ousted Ranariddh in a coup.• The men served as co-prime ministers until Hun Sen overthrew Ranariddh in a coup last year.• Hun Sen overthrew Ranariddh in a coup last year.A. The deal will make Hun Sen prime minister and Ranariddh president of the National Assembly Cambodia King Norodom Sihanouk praised formation of a coalition of the Countries top two political parties, leaving strongman Hun Sen as Prime Minister and opposition leader Prince Norodom Ranariddh president of the National Assembly. The announcement comes after months of bitter argument following the failure of any party to attain the required quota to form a government. Opposition leader Sam Rainey was seeking assurances that he and his party members would not be arrested if they return to Cambodia. Rainey had been accused by Hun Sen of being behind an assassination attempt against him during massive street demonstrations in September. 
Table 1 : The proposition clusters and system and reference summaries for DUC 2004, topic D30001. Each summary sentence (lower left box) was fused from its corresponding cluster (top boxes) that also provides supporting source evidence. An example of an unfaithful abstraction is marked in red.Using this derived training data, we fine-tuned the Cross-Document Language Model (CDLM) (Caciularu et al., 2021) as a binary classifier for predicting whether a proposition is salient or not. Propositions with a salience score below a certain threshold were filtered out. The threshold was optimized with the full pipeline against the final ROUGE score on the validation set. All propositions contained in the clusters in Table 1 are examples of predicted salient propositions. We chose to use CDLM as it was pretrained with sets of related documents, and was hence shown to operate well over several downstream tasks in the multidocument setting (e.g., cross-document coreference resolution and multi-document classification).Next, all salient propositions are clustered to semanticly similar groups. Clusters of paraphrastic propositions are advantageous for summarization as they can assist in avoiding redundant information in an output summary. Furthermore, paraphrastic clustering offers redundancy as an additional indicator for saliency, while the former salience model ( §3.2) does not utilize repetitions explicitly. To cluster propositions we utilize SuperPAL (Ernst et al., 2021) , a binary classifier that measures paraphrastic similarity between two propositions. All pairs of salient propositions are scored with Super-PAL, over which standard agglomerative clustering (Ward, 1963) is applied. Examples of generated clusters are presented in Table 1 .The resulting proposition clusters are next ranked according to cluster-based properties. We examined various features, listed in Table 2 , on our validation sets. The features examined include: aver-age of ROUGE scores between all propositions in a cluster ('Avg. ROUGE'), average of SuperPAL scores between all propositions in a cluster ('Avg. SuperPAL'), average of the salience model scores of cluster propositions ('Avg. salience'), minimal position (in a document) of cluster propositions ('Min. position'), and cluster size ('Cluster size').For each feature, (1) clusters were ranked according to the feature, (2) the proposition with the highest salience model score ( §3.2) was selected from each cluster as a cluster representative, (3) the representatives from the highest ranked clusters were concatenated to obtain a system summary. We also measured combinations of two features ('Cluster size + Min. position' for example), where the first feature is used for primary ranking, and the second feature is used for secondary ranking in case of a tie. In all options, if a tie is still remained, further ranking between clusters is resolved according to the maximal proposition salience score of each cluster. The resulting ROUGE scores of these summaries on validation sets are presented in Table 2 . 3 We found that 'Cluster size' yields the best ROUGE scores as a single feature, and 'Min. position' further improves results as a secondary tie breaking ranking feature. Intuitively, a large cluster represents redundancy of information across documents thus likely to indicate higher importance.Next, we would like to merge the paraphrastic propositions in each cluster, while consolidating complementary details, to generate a new coherent summary sentence. 
As mentioned, this approach helps avoiding redundancy, since redundant information is concentrated separately in each cluster.To train a cluster fusion model, we derived training data automatically from the reference summaries, by leveraging the SuperPAL model (Ernst et al., 2021 ) (which was also employed in §3.3). This time, the model is used for measuring the similarity between each of the cluster propositions (that were extracted from the documents) and each of the propositions extracted from the reference summaries. The reference summary proposition with the highest average similarity score to all cluster propositions was selected as the aligned summary proposition of the cluster. This summary proposi- tion was used as the target output for training the generation model. Although these target OpenIE propositions may be ungrammatical or non-fluent, a human examination has shown that BART tends to produce full coherent sentences (mostly containing only a single proposition), even though it was finetuned over OpenIE extractions as target. Examples of coherent generated sentences can be seen in Table 1 . Accordingly, we fine-tuned a BART generation model (Lewis et al., 2020) with this dedicated training data. As input, the model receives cluster propositions, ordered by their predicted salience score ( §3.2) and separated with special tokens. The final bullet-style summary is produced by appending generated sentences from the ranked clusters until the desired word-limit is reached.To support extractive summarization settings, for example when hallucination is forbidden, we created a corresponding extractive version of our method. In this version, we extracted a representative proposition for each cluster, which was chosen according to the highest word overlap with the sentence that was fused from this cluster by our abstractive version. (Haghighi and Vanderwende, 2009) 31.23 7.07 10.56 31.04 6.03 10.23 LexRank (Erkan and Radev, 2004) 33.10 7.50 11.13 34.44 7.11 11.19 HL-XLNetSegs 5 (Cho et al., 2020) 37.32 10.24 13.54 36.73 9.10 12.63 HL-TreeSegs 5 (Cho et al., 2020) 36.70 9.68 13.14 38.29 10.04 13.57 DPP-Caps-Comb 5 (Cho et al., 2019) 38.14 11.18 14.41 38.26 9.76 13.64 RL-MMR (Mao et al., 2020) 39 Automatic evaluation metric. Following common practice, we evaluate and compare our summarization system with ROUGE-1/2/SU4 F1 measures (Lin, 2004) . Stopwords are not removed, and the output summary is limited to 100 words. 6 7
2
The lexicon induction procedure is recursive on the arguments of the head of the main clause. It is called for every sentence and gives a list of the words with categories. This procedure is called in a loop to account for all sentential conjuncts in case of coordination (Figure 3) .Long-range dependencies, which are crucial for natural language understanding, are not modelled in the Turkish data. Hockenmaier handles them by making use of traces in the Penn Treebank (Hockenmaier, 2003) [sec 3.9]. Since Turkish data do not have traces, this information needs to be recovered from morphological and syntactic clues. There are no relative pronouns in Turkish. Subject and object extraction, control and many other phenomena are marked by morphological processes on the subordinate verb. However, the relative morphemes behave in a similar manner to relative pronouns in English (Ç akıcı, 2002) . This provides the basis for a heuristic method for recovering long range dependencies in extractions of this type, described in Section 3.5. recursiveFunction(index i, Sentence s) headcat = findheadscat(i) //base case if myrel is "MODIFIER" handleMod(headcat) elseif "COORDINATION" handleCoor(headcat) elseif "OBJECT" cat = NP elseif "SUBJECT" cat = NP[nom] elseif "SENTENCE" cat = S . . if hasObject(i) combCat(cat,"NP") if hasSubject(i) combCat(cat,"NP[nom]") //recursive case forall arguments in arglist recursiveFunction(argument,s); Figure 3 : The lexicon induction algorithmThe subject of a sentence and the genitive pronoun in possessive constructions can drop if there are morphological cues on the verb or the possessee. There is no pro-drop information in the treebank, which is consistent with the surface dependency Adjuncts can be given CCG categories like S/S when they modify sentence heads. However, adjuncts can modify other adjuncts, too. In this case we may end up with categories like (6), and even more complex ones. CCG's composition rule (3) means that as long as adjuncts are adjacent they can all have S/S categories, and they will compose to a single S/S at the end without compromising the semantics. This method eliminates many gigantic adjunct categories with sparse counts from the lexicon, following (Hockenmaier, 2003) .(6) dahaThe treebank annotation for a typical coordination example is shown in (7). The constituent which is directly dependent on the head of the sentence, "zıplayarak" in this case, takes its category according to the algorithm. Then, conjunctive operator is given the category (X ! X)/X where X is the category of "zıplayarak" (or whatever the category of the last conjunct is), and the first conjunct takes the same category as X. The information in the treebank is not enough to distinguish sentential coordination and VP coordination. There are about 800 sentences of this type. We decided to leave them out to be annotated appropriately in the future. Mod. Coor.He came running and jumping.2 This includes the passive sentences in the treebankObject heads are given NP categories. Subject heads are given NP [nom] . The category for a modifier of a subject NP is NP[nom]/NP [nom] and the modifier for an object NP is NP/NP since NPs are almost always head-final.The treebank does not have traces or null elements. There is no explicit evidence of extraction in the treebank; for example, the heads of the relative clauses are represented as modifiers. 
In order to have the same category type for all occurences of a verb to satisfy the Principle of Head Categorial Uniqueness, heuristics to detect subordination and extraction play an important role.(8) Kitabı okuyan adam uyudu. Book+ACC read+PRESPART man slept.These heuristics consist of morphological information like existence of a "PRESPART" morpheme in (8), and part-of-speech of the word. However, there is still a problem in cases like (9a) and (9b). Since case information is lost in Turkish extractions, surface dependencies are not enough to differentiate between an adjunct extraction (9a) and an object extraction (9b). A T.LOCATIVE.ADJUNCT dependency link is added from "araba" to "uyudugum" to emphasize that the predicate is intransitive and it may have a locative adjunct. Similarly, a T.OBJECT link is added from "kitap" to "okudugum". Similar labels were added to the treebank manually for approximately 800 sentences. 9a. Uyudugum araba yandı. Sleep+PASTPART car burn+PAST. The car I slept in burned. b. Okudugum kitap yandı.Read+PASTPART book burn+PAST. The book I read burned.The relativised verb in (9b) is given a transitive verb category with pro-drop, (S ! NP), instead of (NP/NP) ! NP, as the Principle of Head Categorial Uniqueness requires. However, to complete the process we need the relative pronoun equivalent in Turkish,-dHk+AGR. A lexical entry with category (NP/NP) ! (S ! NP) is created and added to the lexicon to give the categories in (10) following Bozşahin (2002
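A simplified, runnable Python rendering of the lexicon-induction pseudocode in Figure 3 is given below. The dependency representation (a Word node with a relation label and argument children), the flat treatment of modifiers as S/S, and the head-final argument slots are simplifying assumptions; coordination, pro-drop and the extraction heuristics discussed above are not modelled.

```python
from dataclasses import dataclass, field


@dataclass
class Word:
    form: str
    relation: str                 # "SENTENCE", "SUBJECT", "OBJECT", "MODIFIER", ...
    arguments: list = field(default_factory=list)


def induce(word: Word, lexicon: dict):
    # base category from the word's relation to its head
    if word.relation == "OBJECT":
        cat = "NP"
    elif word.relation == "SUBJECT":
        cat = "NP[nom]"
    elif word.relation == "SENTENCE":
        cat = "S"
    elif word.relation == "MODIFIER":
        cat = "S/S"               # flat simplification of handleMod(headcat)
    else:
        cat = "X"
    # a head takes one (backward) slot per core argument it governs (head-final)
    if any(a.relation == "OBJECT" for a in word.arguments):
        cat = f"({cat}\\NP)"
    if any(a.relation == "SUBJECT" for a in word.arguments):
        cat = f"({cat}\\NP[nom])"
    lexicon.setdefault(word.form, set()).add(cat)
    for arg in word.arguments:    # recursive case
        induce(arg, lexicon)
    return lexicon


# toy tree for "Adam uyudu." (the man slept)
root = Word("uyudu", "SENTENCE", [Word("adam", "SUBJECT")])
print(induce(root, {}))   # {'uyudu': {'(S\\NP[nom])'}, 'adam': {'NP[nom]'}}
```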
2
Our approach methodology can be summarized as follows: We begin by describing the dataset for this task. Then, the preprocessing step is described. Our final section describes how JUST-DEEP can identify the patronizing language in the text.The SemEval-task 1 competition provided three files (rial, train, and test dataset(Perez-Almendros et al., 2020)) The files contain several columns as follows:• par_id: the identification number for each paragraph.• art_id: the identification number for each article.• keyword: word related to patronizing. .• country_code: the code for each country.• text: the text that we want to classify.• label: denotes if the text contains patronizing or not (4 levels 0,1,2,3 where 0 and 1 means no PCL, as well as 2 and 3, means high PCL).In this section we describe the basic data preprocessing steps we applied for all of our experiments:1. In the beginning, we transform the Label column into a binary format. If the value is zero or one, we convert it to zero. And if it is two or three, we convert it to One. So with this transformation, the Label is a Binary column with two values: the Zero means that there are no PCL in the text, and the value One implies that there is PCL.We converted the column to binary because we are working on sub-task one, so we want to determine if it contains PCL or not; we don't care about the degree of PCL.2. Pre-trained models don't work with raw text, so we converted the text into numbers and added unique tokens to separate sentences at the beginning and end of each sentence. Then we pass the resulting sequence to the models to perform the classification process.3. Pre-trained models work with fixed-length sequences. So we used a simple strategy to choose the appropriate maximum length for all sequences. First, we found that most series have a size of 160, so we set the size of all series to 160.Our task aims to detect whether a text contains PCL language or not. As shown in Figure 1 , JUST-DEEP architecture uses multiple pre-trained language models (BERT and RoBERTa) from the transformers library. The first step is data pre-processing. The training dataset is fed to the augmentation processor, responsible for adding more data to the training dataset since it is originally imbalanced data where the number of Zero class instances is ten times the One class instances. The augmentation processor input is all One instances text. The processor augments the text of the input instances by adding the same instance to the primary dataset multiple times with slightly different text but with the same meaning. Next, the textual data is processed into their corresponding embeddings(tokens) to feed them to the classifiers. The second step is the classification step. Finally, the input tokens are fed in parallel to two ensembling models; the first is composed of stacking of 2 BERT models, and the second contains stacking of 2 RoBERTa models.Finally, the predictions of the two ensembling models are joined by a max voting classifier to produce the final prediction output. We conducted several experiments with different models and hyperparameters, but the JUST-DEEP architecture achieved the best results. Figure 2 show an example use case for architecture. If we fed the sentence 'Fast food employee who fed disabled man becomes internet sensation' which is an example from the train data labeled as 1 (contains PCL). 
In the augmentation step, the augmentation processor generates additional instances with the same meaning as this row, for example 'Fast food worker who fed weakened guy becomes internet rumor'. Next, the sentence is converted into numeric tokens and passed to both ensemble models. For instance, the first ensemble model, which stacks two RoBERTa models, produces a prediction of One for this row, while the other ensemble model, which stacks two BERT classifiers, predicts Zero as the class label. Finally, the max-voting step generates One as the final output label for the instance.
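The final voting step can be sketched as below. Because there are only two ensembles, the paper's "max voting" is interpreted here as agreement when the two predictions match and a confidence-based tie-break otherwise; the tie-break rule and the confidence inputs are assumptions.

```python
def max_vote(bert_pred, roberta_pred, bert_conf=0.5, roberta_conf=0.5):
    """Binary PCL decision from the BERT and RoBERTa ensemble outputs."""
    if bert_pred == roberta_pred:
        return bert_pred
    # assumed tie-break: trust the more confident ensemble
    return bert_pred if bert_conf >= roberta_conf else roberta_pred


print(max_vote(0, 1, bert_conf=0.4, roberta_conf=0.9))  # -> 1
```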
2
First, we briefly introduce the baseline model where we apply our method (Sec. 3.1). We then motivate and introduce the auxiliary sequences we create to improve the model's compositional generalization ability (Sec. 3.2). Finally, we explain how a seq2seq model can jointly predicts its target sequence and the auxiliary sequences in the training and inference (Sec. 3.3).The CGPS model (Li et al., 2019 ) has a RNN encoder that embeds and encodes the syntax and semantics of the input separately and a RNN decoder to achieve generalization over single-word substitutions (e.g., "walk/run− →jump"). We recreate this model on top of the Transformer (Vaswani et al., 2017) as the baseline (visualized in Fig. 1 ).In the SCAN dataset, we denote input sequence x, where each word is from an input vocabulary of size U . The output y is a sequence of T actions, where each action is from an output vocabulary of size V . The CGPS model has two separate embedding matrices for the input: the functional embedding E f and the primitive embedding E p :EQUATIONwhere f and p are the functional and primitive embeddings of the input sequence. The encoder builds the contextualized representation c of the input using functional embeddings f , while the decoder produces the output vector z t by attending to c and previous actions [y 1 , ..., y t−1 ]. Instead of directly projecting z t to the logits on output vocabulary, the decoder employs an extra multihead attention layer, with z t as the query, c as the key, and the primitive embeddings p as the value. Its output vector, and further the logits, come from an attention average over the un-contextualized p:EQUATIONwhere MHAttn is the multi-head cross-attention andŷ t is the final distribution on the output vocabulary. To enforce a strict separation of the information encoded in f and p, they regularize the L 2 norm of both embeddings and add noise to them during training.For every command-action pair in the SCAN dataset, we automatically create two auxiliary sequences of the same length as the action sequence. These sequences represent the lower level symbolic structures in the input and can better teach the model in achieving compositional generalization.(1) As discussed in Sec. 2.2, the model often mistakenly repeats the "jump around left" action thrice when it is only asked to "jump around leftModel ADDJUMP LENGH MCD1 MCD2 MCD3LSTM+Attn (Keysers et al., 2020) 0.0 ± 0.0 14.1 6.5 ± 3.0 4.2 ± 1.4 1.4 ± 0.2 Transformers (Keysers et al., 2020) 1.0 ± 0.6 0.0 0.4 ± 0.2 1.6 ± 0.3 0.8 ± 0.4 CGPS-RNN (Li et al., 2019) 98.8 ± 1.4 20.3 ± 1.1 1.2 ± 1.0 1.7 ± 2.0 0.6 ± 0.3 T5-11B 98.3 3.3 7.9 2.4 16.8 Semi-Sup † (Guo et al., 2021) 100.0 99.9 87.1 99.0 64.7 LANE (Liu et al., 2020) 100.0 100.0 100.0 100.0 100.0 CGPS-Transformer (baseline) 95.82 0.00 7.66 3.25 6.12 baseline + AuxSeqPredict Best98.52 100.0 100.0 100.0 100.0 baseline + AuxSeqPredict (Avg.± std.) 98.32 ± 0.3 100.0 ± 0.0 99.9 ± 0.2 90.1 ± 6.5 98.2 ± 3.2 Table 2 : Test accuracy from the SCAN dataset , under the ADDJUMP, LENGH, and MCD splits (Keysers et al., 2020) . The model with † uses all dev-set monolingual data during the training. The model with is pre-trained on large corpora of natural language data. We report the best and average (± std.) result out of 5 random seed runs. See appendix Sec. A for the complete results of all seeds.twice". 
To prevent this error, we create the first auxiliary sequence AuxSeq1 (the 2nd row of every outputs in Table 1 ) to track the progress of three "jump around left" and to ensure the correct repetitions of the action are executed. For the example "walk left thrice− →TURNL WALK TURNL WALK TURNL WALK", we create a sequence of ids [2, 2, 1, 1, 0, 0]. This sequence exposes the compositional structure of the action sequence "TURNL WALK TURNL WALK TURNL WALK" as three separate segments of "TURNL WALK": it ignores the content of every action and focuses on the symbolic functions embodied by "twice" and "thrice".(2) The model also sometimes "jump opposite left twice" when it is actually asked to "jump around left twice". In response to this error, we create the second auxiliary sequence AuxSeq2 (the 3rd row of every outputs in Table 1 ) to supervise the correct completion of every single "jump around left". For a shorter example "walk left thrice− →TURNL WALK TURNL WALK TURNL WALK", we create a sequence of ids [1, 0, 1, 0, 1, 0]. This sequence isolates the semantics of "walk left" as an action sequence of length 2. We argue that, if the model can correctly predict these two sequences and builds a connection between them and the actions, it will learn the compositional structures of the commands and generalize to novel combinations in the test set. Please refer to the appendix Sec. B for more details about the auxiliary sequences.Now with these two auxiliary sequences, the original seq2seq task defined in SCAN is augmented to a 'sequence-to-3sequences' problem. Therefore, we made some adaptations to our Transformer decoder to jointly predict three sequences. First, we introduce two extra embedding matrices for the two auxiliary sequences in the decoder in addition to the existing action embeddings. The input to the decoder is the sum of three embedding vectors. After the regular Transformer layers, we add another multi-head cross-attention (the red component in Fig. 1 ) using the output h t of the decoder's first self-attention layer as the query, the input's functional embedding f as the key, and the encoder's output representation c as the value. The attention outputs o aux are then projected to the space of the auxiliary sequence ids to produce the logits of the next id in the auxiliary sequence.EQUATIONLater experiments show that the choice of the query vector plays a crucial role in deciding whether the model can achieve the compositionality in understanding the command. During the training, the decoder takes the two auxiliary sequences, each prepended with a start-of-sentence token, as the input. We then maximize the log-likelihood of predicting the next id in the auxiliary sequence at each step. During the inference, the decoder uses the partial auxiliary and action sequences generated in the previous steps, instead of the ground-truth sequences, as the input.
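The construction of the two auxiliary sequences for a single repeated segment can be sketched as follows, reproducing the "walk left thrice" example above. The decomposition of a full SCAN command into (segment actions, repetition count) is assumed to be available from the command grammar.

```python
def build_targets(segment_actions, repetitions):
    """Return the flat action sequence plus AuxSeq1 and AuxSeq2 ids."""
    actions, aux1, aux2 = [], [], []
    for r in range(repetitions - 1, -1, -1):                   # countdown over repetitions
        actions.extend(segment_actions)
        aux1.extend([r] * len(segment_actions))                # AuxSeq1: repetitions left
        aux2.extend(range(len(segment_actions) - 1, -1, -1))   # AuxSeq2: progress in segment
    return actions, aux1, aux2


print(build_targets(["TURNL", "WALK"], 3))
# (['TURNL', 'WALK', 'TURNL', 'WALK', 'TURNL', 'WALK'], [2, 2, 1, 1, 0, 0], [1, 0, 1, 0, 1, 0])
```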
2
Preliminaries and Task Formulation. In BLI, we assume two vocabularies X ={w x 1 , . . . , w x |X | } and Y={w y 1 , . . . , w y |Y| } associated with two respective languages L x and L y . We also assume that each vocabulary word is assigned its (static) typelevel word embedding (WE); that is, the respective WE matrices for each vocabulary are X∈R |X |×d , Y ∈R |Y|×d . Each WE is a d-dim row vector, with typical values d=300 for static WEs (e.g., fast-Text) (Bojanowski et al., 2017) , and d=768 for mBERT. 3 We also assume a set of seed translation pairs (Mikolov et al., 2013; D 0 ={(w x m 1 , w y n 1 ), ..., (w x m |D 0 | , w y n |D 0 | )} for training, where 1 ≤ m i ≤ |X |, 1 ≤ n i ≤ |Y|.Typical values for the seed dictionary size |D 0 | are 5k pairs and 1k pairs , often referred to as supervised (5k) and semisupervised or weakly supervised settings (1k) (Artetxe et al., 2018) . Given another test lexi-con D T ={(w x t 1 , w y g 1 ), ..., (w x t |D T | , w y g |D T | )}, where D 0 ∩ D T = ∅, for each L x test word w x t i in D Tthe goal is to retrieve its correct translation from L y 's vocabulary Y, and evaluate it against the gold L y translation w y g i from the pair. Method in a Nutshell. We propose a novel two-stage contrastive learning (CL) method, with both stages C1 and C2 realised via contrastive learning objectives (see Figure 1 ). Stage C1 ( §2.1) operates solely on static WEs, and can be seen as a contrastive extension of mapping-based BLI approaches with static WEs. In practice, we blend contrastive learning with the standard SotA mapping-based framework with self-learning: VecMap (Artetxe et al., 2018) , with some modifications. Stage C1 operates solely on static WEs in exactly the same BLI setup as prior work, and thus it can be evaluated independently. In Stage C2 ( §2.2), we propose to leverage pretrained multilingual LMs for BLI: we contrastively fine-tune them for BLI and extract static 'decontextualised' WEs from the tuned LMs. These LM-based WEs can be combined with WEs obtained in Stage C1 ( §2.3).Stage C1 is based on the VecMap framework (Artetxe et al., 2018) which features 1) dual linear mapping, where two separate linear transformation matrices map respective source and target WEs to a shared cross-lingual space; and 2) a self-learning procedure that, in each iteration i refines the training dictionary and iteratively improves the mapping. We extend and refine VecMap's self-learning for supervised and semi-supervised settings via CL.Initial Advanced Mapping. After 2 -normalising word embeddings, 4 the two mapping matrices, denoted as W x for the source language L x and W y for L y , are computed via the Advanced Mapping (AM) procedure based on the training dictionary, as fully described in Appendix A.1; while VecMap leverages whitening, orthogonal mapping, re-weighting and de-whitening operations to derive mapped WEs, we compute W x and W y such that a one-off matrix multiplication produces the same result (see Appendix A.1 for the details).Contrastive Fine-Tuning. At each iteration i, after the initial AM step, the two mapping matrices W x and W y are then further contrastively finetuned via the InfoNCE loss (Oord et al., 2018) , a standard and robust choice of a loss function in CL research (Musgrave et al., 2020; Liu et al., 2021c,b) . 
The core idea is to 'attract' aligned WEs of positive examples (i.e., true translation pairs) coming from the dictionary D i−1 , and 'repel' hard negative samples, that is, words which are semantically similar Algorithm 1 Stage C1: Self-Learning 1: Require: X,Y ,D0,Dadd ← ∅ 2: for i ← 1 to Niter do 3:Wx, Wy ← Initial AM using Di−1; 4:DCL ← D0 (supervised) or Di−1 (semi-super); 5:for j ← 1 to NCL do 6:RetrieveD for the pairs from DCL; 7:Wx, Wy ← Optimise Contrastive Loss; 8:Compute new D add ; 9:Update Di ← D0 ∪ Dadd; 10: return Wx, Wy;but do not constitute a word translation pair.These hard negative samples are extracted as follows. Let us suppose that (w x m i , w y n i ) is a translation pair in the current dictionary D i−1 , with its constituent words associated with static WEs x m i , y n i ∈R 1×d . We then retrieve the nearest neighbours of y n i W y from XW x and derivew x m i ⊂ X (w x m i excluded) , a set of hard negative samples of size N neg . In a similar (symmetric) manner, we also derive the set of negativesw y n i ⊂ Y (w y n i excluded). We useD to denote a collection of all hard negative set pairs over all training pairs in the current iteration i. We then fine-tune W x and W y by optimising the following contrastive objective:EQUATIONEQUATIONτ denotes a standard temperature parameter. The objective, formulated here for a single positive example, spans all positive examples from the current dictionary, along with the respective sets of negative examples computed as described above.Self-Learning. The application of (a) initial mapping via AM and (b) contrastive fine-tuning can be repeated iteratively. Such self-learning loops typically yield more robust and better-performing BLI methods (Artetxe et al., 2018; . At each iteration i, a set of automatically extracted high-confidence translation pairs D add are added to the seed dictionary D 0 , and this dictionary D i = D 0 ∪ D add is then used in the next iteration i + 1. Our dictionary augmentation method slightly deviates from the one used by VecMap. We leverage the most frequent N freq source and target vocabulary words, and conduct forward and backward dictionary induction (Artetxe et al., 2018) . Unlike VecMap, we do not add stochasticity to the process, and simply select the top N aug high-confidence word pairs from forward (i.e., source-to-target) induction and another N aug pairs from the backward induction. In practice, we retrieve the 2×N aug pairs with the highest Cross-domain Similarity Local Scaling (CSLS) scores (Lample et al., 2018) , 5 remove duplicate pairs and those that contradict with ground truth in D 0 , and then add the rest into D add .For the initial AM step, we always use the augmented dictionary D 0 ∪ D add ; the same augmented dictionary is used for contrastive fine-tuning in weakly supervised setups. 6 We repeat the selflearning loop for N iter times: in each iteration, we optimise the contrastive loss N CL times; that is, we go N CL times over all the positive pairs from the training dictionary (at this iteration). N iter and N CL are tunable hyper-parameters. Self-learning in Stage C1 is summarised in Algorithm 1.Previous work tried to prompt off-the-shelf multilingual LMs for word translation knowledge via masked natural language templates (Gonen et al., 2020), averaging over their contextual encodings in a large corpus (Vulić et al., 2020b; , or extracting type-level WEs from the LMs directly without context (Vulić et al., 2020a . 
However, even sophisticated templates and WE extraction strategies still typically result in BLI performance inferior to fastText .(BLI-Oriented) Contrastive Fine-Tuning. Here, we propose to fine-tune off-the-shelf multilingual LMs relying on the supervised BLI signal: the aim is to expose type-level word translation knowledge directly from the LM, without any external corpora. In practice, we first prepare a dictionary of positive examples for contrastive fine-tuning: (a) D CL =D 0 when |D 0 | spans 5k pairs, or (b) when |D 0 |=1k, we add the N aug =4k automatically extracted highest-confidence pairs from Stage C1 (based on their CSLS scores, not present in D 0 ) to D 0 (i.e., D CL spans 1k + 4k word pairs). We then extract N neg hard negatives in the same way as in §2.1, relying on the shared cross-lingual space derived as the output of Stage C1. Our hypothesis is that a difficult task of discerning between true translation pairs and highly similar non-translations as hard negatives, formulated within a contrastive 5 Further details on the CSLS similarity and its relationship to cosine similarity are available in Appendix A.2.6 When starting with 5k pairs, we leverage only D0 for contrastive fine-tuning, as Dadd might deteriorate the quality of the 5k-pairs seed dictionary due to potentially noisy input.learning objective, will enable mBERT to expose its word translation knowledge, and complement the knowledge already available after Stage C1.Throughout this work, we assume the use of pretrained mBERT base model with 12 Transformer layers and 768-dim embeddings. Each raw word input w is tokenised, via mBERT's dedicated tokeniser, into the following sequence:[CLS][sw 1 ] . . . [sw M ][SEP ], M ≥ 1, where [sw 1 ] . . . [sw M ]refers to the sequence of M constituent subwords/WordPieces of w, and [CLS] and [SEP ] are special tokens (Vulić et al., 2020b) .The sequence is then passed through mBERT as the encoder, its encoding function denoted as f θ (•): it extracts the representation of the [CLS] token in the last Transformer layer as the representation of the input word w. The full set of mBERT's parameters θ then gets contrastively fine-tuned in Stage C2, again relying on the InfoNCE CL loss:EQUATIONEQUATIONEQUATIONType-level WE for each input word w is then obtained simply as f θ (w), where θ refers to the parameters of the 'BLI-tuned' mBERT model.In order to combine the output WEs from Stage C1 and the mBERT-based WEs from Stage C2, we also need to map them into a 'shared' space: in other words, for each word w, its C1 WE and its C2 WE can be seen as two different views of the same data point. We thus learn an additional linear orthogonal mapping from the C1-induced cross-lingual WE space into the C2induced cross-lingual WE space. It transforms 2normed 300-dim C1-induced cross-lingual WEs into 768-dim cross-lingual WEs. Learning of the linear map W ∈R d 1 ×d 2 , where in our case d 1 =300 and d 2 =768, is formulated as a Generalised Procrustes problem (Schönemann, 1966; Viklands, 2006) operating on all (i.e., both L x and L y ) words from the seed translation dictionary D 0 . 7Unless noted otherwise, a final representation of an input word w is then a linear combination of (a) its C1-based vector v w mapped to a 768dim representation via W , and (b) its 768-dim encoding f θ (w) from BLI-tuned mBERT:EQUATIONwhere λ is a tunable interpolation hyper-parameter.
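The sketch below illustrates, in PyTorch, the two ingredients that are easiest to pin down from the description above: the InfoNCE objective over a positive pair and its pre-mined hard negatives, and the final combination step that maps Stage-C1 vectors into the mBERT space and interpolates the two views. The plain orthogonal-Procrustes SVD is a simplification of the generalised Procrustes formulation cited in the paper, and the τ and λ values are placeholders.

```python
import torch
import torch.nn.functional as F


def info_nce(x_mapped, y_pos, y_negs, tau=0.1):
    """x_mapped, y_pos: (d,) L2-normalised vectors; y_negs: (N_neg, d) hard negatives."""
    pos = (x_mapped * y_pos).sum() / tau          # similarity to the true translation
    negs = (y_negs @ x_mapped) / tau              # similarities to hard negatives
    logits = torch.cat([pos.unsqueeze(0), negs])
    return -F.log_softmax(logits, dim=0)[0]       # positive sits at index 0


def procrustes_map(c1, c2):
    """c1: (n, 300) Stage-C1 vectors, c2: (n, 768) mBERT vectors, row-aligned over
    the seed dictionary; returns a (300, 768) map with orthonormal rows."""
    u, _, vh = torch.linalg.svd(c1.T @ c2, full_matrices=False)
    return u @ vh


def combine(c1_vec, c2_vec, W, lam=0.2):
    """Final word representation: interpolation of the mapped C1 view and the C2 view."""
    mapped = F.normalize(c1_vec @ W, dim=-1)
    return (1 - lam) * mapped + lam * F.normalize(c2_vec, dim=-1)
```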
2
We identify three main steps to delivering personalised conversation to encourage physical activity. Firstly, understanding the interesting topics of conversation, secondly, extracting information from the user to contextualise the conversation and finally, generating non-repetitive responses to encourage long term engagement. In this section we detail our methods applied to realise these steps for FitChat.We adapt co-creation methodology to identify the most effective conversational skills expected in a conversational AI for encouraging PA among older adults. We followed an iterative refinement process (Augusto et al., 2018) and conducted three workshops where the intended stakeholders from the community were invited to participate. Each workshop was held one-month apart allowing the stakeholders to learn the capabilities of the technology and explore and refine requirements iteratively. It was specifically significant given the novelty of the conversational technology within the intended age group. Workshop 1 introduced participants to the study and the concept of voice based conversational interventions. We sought their views on skills including goal setting and reporting that were proposed by the research group to "break the ice" and start the conversation. In workshop 2 a participatory method was followed (Leask et al., 2019). Role playing (Matthews et al., 2014) activities among workshop participants helped understand expectations where the aim was to observe the forms of natural dialogue that transpired between pairs. Participants designed conversation interactions that would allow a user to record their daily activities or that would allow a user to set goals for the coming week. Workshop 3 reviewed and refined the conversational interactions discussed in the previous workshop. Together with the research group, stakeholders prioritised the list of conversations skills identified during the workshops and short listed the top five skills that form the final prototype of FitChat. Workshop 3 also encouraged the participants to propose a name for the conversational AI where they selected the title "FitChat", inspired by the Doric term "Fit like?" (Hello, How are you?). Following are the shortlist of five skills or intents identified during the co-creation activities.Personalisation: The goal is to provide a personalised experience throughout the application.This skill is likely to be used only once during the initial on-boarding process.Weekly Goal Setting: The Goal Setting intent is aimed towards making a conscious commitment to specific physical activity goals that are considered to enforce positive behaviour change (Michie et al., 2013) . This skill is expected to be used at the beginning of every week.Daily Reporting: The Reporting intent is aimed at enabling conversation about daily activities. This conversation can be aligned with the goals set for the day and would be encouraging to the user to out-perform themselves next day. This skill is likely to be used at the end of every day.Weekly Summary: The Summary intent is aimed at providing the user a retrospective look at the of last week's goal achievement. This skill is likely to be used at the end of every week.Exercise Coach: The purpose of the Exercise Coach intent is to guide users to perform exercises by providing exercise steps through read-aloud instructions in a conversational format. 
This skill is likely to be invoked multiple times (minimum of twice) a week, according to WHO physical activity guidelines.Once the conversational skills are identified, we examine Natural Language Understanding (NLU) capabilities required in each skill to maintain a cohesive personalised conversation with the user. We identify that personalisation, weekly goal setting and daily reporting are the main three skills that are focused on extracting information from the user. The role-playing co-creation activity further identified the types of information each skill should extract in order to personalise or contextualise future conversation. Note that conversation here covers question and answer forms that can be are both open or close-ended. We summarise our findings in Table 1 . To create contextually relevant responses, firstly we look at what information is required to deliver the response through conversation; and secondly, how we can make that conversation personalised. To address the first concern, we identified three skills essential to deliver information to the user. These are listed with the information they deliver in Table 2 . We consider two aspects for personalisation; contextualisation with information extracted at the NLU phase, and generating non-repetitive responses with a corpus or message bank. Two skills stand to benefit from contextualisation as shown in Table 3 and three skills benefit from non-repetitive Table 4 .In order to ensure the non-repetitive behaviour, we adapt a similar methods to corpus-based methods used in literature (Morris et al., 2018) to develop a Motivational Message bank organised under three main categories. Firstly we create a bank of general motivational messages for when the user reports on completed activities; for instance a message such as "Well Done! Regular physical activity is really good for your well being" is uttered by the agent at the end of reporting. Secondly a set of messages to be used when a user does not perform a planned activity due to a specific barrier. Messages in the barriers category are grouped under six barriers that are commonly found in literature (these include Family, Support, Tiredness, Work, Time and Weather). The aim is to deliver a personalised and empathetic response when a user is unable to perform an activity. This message bank is integrated with Reporting, Goal Setting and Summary skills. We started with 20 messages and it is updated regularly. We include few examples from the message bank in Table 5 . Activity Weather Keep a positive attitude and embrace the weather -walking in the rain can be invigorating and you can go home for a warm shower afterwards! extracted using NLU will be used to contextualise a conversation skill. For instance, the step goal information extracted during the Weekly Goal Setting skill will be used to contextualise the Daily Activity Reporting skill. In addition to the 5 identified skills we include a chit-chat skill such that the user is able to carry on an informal conversation about generic topics such as weather or news if needed. This will increase the usability of the conversational bot application giving the user more freedom. Next we look at each skill in depth exploring how the conversations are designed and responses are generated.
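As a small illustration of the non-repetitive message bank, the following sketch shows one way a barrier-specific motivational message could be selected while avoiding recently used utterances. The category names and the two example messages come from the description above; the function names and the history mechanism are hypothetical.

```python
import random

MESSAGE_BANK = {
    "General": ["Well Done! Regular physical activity is really good "
                "for your well being."],
    "Weather": ["Keep a positive attitude and embrace the weather - walking "
                "in the rain can be invigorating and you can go home for a "
                "warm shower afterwards!"],
    # ... remaining barrier categories: Family, Support, Tiredness, Work, Time
}

def pick_message(category, history):
    # Prefer messages that have not been uttered recently, so that the
    # agent's responses stay non-repetitive across sessions.
    pool = MESSAGE_BANK.get(category, MESSAGE_BANK["General"])
    candidates = [m for m in pool if m not in history] or pool
    choice = random.choice(candidates)
    history.append(choice)
    return choice
```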
2
We now dissect a general framework for unsupervised CLWE learning, and show that the "bag of tricks of the trade" used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison.In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces X and Z, using a word translation dictionary D 0 to guide the alignment process. Let X D ⊂ X and Z D ⊂ Z be the row-aligned subsets of monolingual spaces containing vectors of aligned words from D 0 . Alignment matrices X D and Z D are then used to learn orthogonal transformations W x and W z that define the joint bilingual space Y = XW x ∪ ZW z . While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary D 0 , their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2).Unsupervised CLWEs. These methods first induce a seed dictionary D (1) leveraging only two unaligned monolingual spaces (C1). While theC3 (S1) X X' Z Z' C1 X' Z' D (1) 2. 2. Normalization & centering Induction of seed dictionary C2 (self-learning) C3 (S2) Whitening X D (k) , Z D (k) Z D (k) X D (k)Learning projections Y (k) Mutual NNDe-whitening Re-weighting Y (k) Y (k) D (k+1) 3.Figure 1: General unsupervised CLWE approach. algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative selflearning procedure (C2) takes place: 1) a dictionary D (k) is first used to learn the joint spaceY (k) = XW (k) x ∪ ZW (k)z ; 2) the nearest neighbours in Y (k) then form the new dictionary D (k+1) . We illustrate the general structure in Figure 1 .A recent empirical survey paper has compared a variety of latest unsupervised CLWE methods ; Alvarez-Melis and Jaakkola, 2018; Hoshen and Wolf, 2018; Artetxe et al., 2018b) in several downstream tasks (e.g., BLI, cross-lingual information retrieval, document classification). The results of their study indicate that the VECMAP model of Artetxe et al. (2018b) is by far the most robust and best performing unsupervised CLWE model. For the actual results and analyses, we refer the interested reader to the original paper of . Another recent evaluation paper (Doval et al., 2019) as well as our own preliminary BLI tests (not shown for brevity) have further verified their findings. We thus focus on VECMAP in our analyses, and base the following description of the components C1-C3 on that model.C1. Seed Lexicon Extraction. VECMAP induces the initial seed dictionary using the following heuristic: monolingual similarity distributions for words with similar meaning will be similar across languages. 1 The monolingual similarity distributions for the two languages are given as rows (or columns; the matrices are symmetric) of M x = XX T and M z = ZZ T . For the distributions of similarity scores to be comparable, the values in each row of M x and M z are first sorted. The initial dictionary D (1) is finally obtained by searching for mutual nearest neighbours between the rows of√ M x and of √ M z .C2. Self-Learning. 
Not counting the preprocessing and postprocessing steps (component C3), selflearning then iteratively repeats two steps: (k) be the binary matrix indicating the aligned words in the dictionary D (k) . 2 The orthogonal transformation matrices are then obtained as W1) Let D(k) x = U and W (k) z = V, where U ΣV T is the singular value decomposition of the matrix X T D (k) Z. The cross-lingual space of the k-th iteration is thenY (k) = XW (k) x ∪ ZW (k) z .2) The new dictionary D (k+1) is then built by identifying nearest neighbours in Y (k) . These can be easily extracted from the matrix P = XW(k) x (ZW (k) z ) T .All nearest neighbours can be used, or additional symmetry constraints can be imposed to extract only mutual nearest neighbours: all pairs of indices (i, j) for which P ij is the largest value both in row i and column j.The above procedure, however, often converges to poor local optima. To remedy for this, the second step (i.e., dictionary induction) is extended with techniques that make self-learning more robust. First, the vocabularies of X and Z are cut to the top k most frequent words. 3 Second, similarity scores in P are kept with probability p, and set to zero otherwise. This dropout allows for a wider exploration of possible word pairs in the dictionary and contributes to escaping poor local optima given the noisy seed lexicon in the first iterations.While iteratively learning orthogonal transformations W x and W z for X and Z is the central step of unsupervised projection-based CLWE methods, preprocessing and postprocessing techniques are additionally applied before and after the transformation. While such techniques are often overzwei is expected to be roughly as (dis)similar to drei and Katze as two is to three and cat.2 I.e., Dij = 1 ⇐⇒ the i-th word of one language and the j-th word of the other are a translation pair in D (k) . looked in model comparisons, they may have a great impact on the model's final performance, as we validate in §4. We briefly summarize two preprocessing (S1 and S2) and post-processing (S3 and S4) steps used in our evaluation, originating from the framework of Artetxe et al. (2018a) . S1) Normalization and mean centering. We first apply unit length normalization: all vectors in X and Z are normalized to have a unit Euclidean norm. Following that, X and Z are mean centered dimension-wise and then again length-normalized.Whitening. ZCA whitening (Bell and Sejnowski, 1997 ) is applied on (S1-processed) X and Z: it transforms the matrices such that each dimension has unit variance and that the dimensions are uncorrelated. Intuitively, the vector spaces are easier to align along directions of high variance.S3) Dewhitening. A transformation inverse to S2: for improved performance it is important to restore the variance information after the projection, if whitening was applied in S2 (Artetxe et al., 2018a) . S4) Symmetric re-weighting. This step attempts to further align the embeddings in the cross-lingual embedding space by measuring how well a dimension in the space correlates across languages for the current iteration dictionary D (k) . 4 The best results are obtained when re-weighting is neutral to the projection direction, that is, when it is applied symmetrically in both languages.In the actual implementation S1 is applied only once, before self-learning. S2, S3 and S4 are applied in each self-learning iteration.Model Configurations. 
Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., D (1) :=D 0 ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated 4 More formally, assume that we are working with matrices X and Z that already underwent all transformations described in S1-S3. Another matrix D represents the current bilingual dictionary D: Dij = 1 if the i th source word is translated by the j th target word and Dij = 0 otherwise. Then, given the singular value decomposition U SV T = X T DZ, the final re-weighted projection matrices are Wx = U S 1 2 (and Wz = V S 1 2 . We refer the reader to (Artetxe et al., 2018a) and (Artetxe et al., 2018b) for more details. later in §4 is summarized in Table 1. 3 Experimental Setup Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. 5 Since BLI is cast as a ranking task, following Glavaš et al. (2019) we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each "query" word, MAP is equal to mean reciprocal rank (MRR). 6 (Selection of) Language Pairs. Our selection of test languages is guided by the following goals: a) following recent initiatives in other NLP research (e.g., for language modeling) Gerz et al., 2018) , we aim to ensure the coverage of different genealogical and typological language properties, and b) we aim to analyze a large set of language pairs and offer new evaluation data which extends and surpasses other work in the CLWE literature. These two properties will facilitate analyses between (dis)similar language pairs and offer a comprehensive set of evaluation setups that test the robustness and portability of fully unsupervised CLWEs. The final list of 15 diverse test languages is provided in Table 2 , and includes samples from different languages types and families. We run BLI evaluations for all language pairs in both directions, for a total of 15×14=210 BLI setups.Monolingual Embeddings. We use the 300-dim vectors of Grave et al. (2018) for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText (Bojanowski et al., 2017) . 7 We trim all 5 While BLI is an intrinsic task, as discussed by it is a strong indicator of CLWE quality also for downstream tasks: relative performance in the BLI task correlates well with performance in cross-lingual information retrieval (Litschko et al., 2018) or natural language inference . 
More importantly, it also provides a means to analyze whether a CLWE method manages to learn anything meaningful at all, and can indicate "unsuccessful" CLWE induction (e.g., when BLI performance is similar to a random baseline): detecting such CLWEs is especially important when dealing with fully unsupervised models.6 MRR is more informative than the more common Precision@1 (P@1); our main findings are also valid when P@1 is used; we do not report the results for brevity.7 Experiments with other monolingual vectors such as the vocabularies to the 200K most frequent words.Training and Test Dictionaries. They are derived from PanLex (Baldwin et al., 2010; Kamholz et al., 2014) , which was used in prior work on cross-lingual word embeddings (Duong et al., 2016; . PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs (Adams et al., 2017) . For each source language (L 1 ), we automatically translate their vocabulary words (if they are present in PanLex) to all 14 target (L 2 ) languages.To ensure the reliability of the translation pairs, we retain only unigrams found in the vocabularies of the respective L 2 monolingual spaces which scored above a PanLex-predefined threshold. As in prior work , we then reserve the 5K pairs created from the more frequent L 1 words for training, while the next 2K pairs are used for test. Smaller training dictionaries (1K and 500 pairs) are created by again selecting pairs comprising the most frequent L 1 words.original fastText and skip-gram (Mikolov et al., 2013b ) trained on Wikipedia show the same trends in the final results.Training Setup. In all experiments, we set the hyper-parameters to values that were tuned in prior research. When extracting the UNSUPERVISED seed lexicon, the 4K most frequent words of each language are used; self-learning operates on the 20K most frequent words of each language; with dropout the keep probability p is 0.1; CSLS with k = 10 nearest neighbors (Artetxe et al., 2018b) .Again, Table 1 lists the main model configurations in our comparison. For the fully UNSUPER-VISED model we always report the best performing configuration after probing different self-learning strategies (i.e., +SL, +SL+NOD, and +SL+SYM are tested). The results for UNSUPERVISED are always reported as averages over 5 restarts: this means that with UNSUPERVISED we count BLI setups as unsuccessful only if the results are close to zero in all 5/5 runs. ORTHG-SUPER is the standard supervised model with orthogonal projections from prior work (Smith et al., 2017; .
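For concreteness, the following sketch pieces together the main computational steps described above: unsupervised seed-dictionary induction (C1), one self-learning iteration with dropout (C2), and symmetric re-weighting (S4). It is a simplified NumPy illustration under the assumption that the embeddings have already undergone the S1-S3 transformations where required; frequency cut-offs, CSLS retrieval, and VecMap's full stochastic schedule are omitted.

```python
import numpy as np

def induce_seed_dictionary(X, Z):
    # C1: sorted rows of sqrt(XX^T) and sqrt(ZZ^T) act as language-agnostic
    # similarity signatures; mutual nearest neighbours form the seed pairs.
    Sx = np.sort(np.sqrt(np.clip(X @ X.T, 0, None)), axis=1)
    Sz = np.sort(np.sqrt(np.clip(Z @ Z.T, 0, None)), axis=1)
    sims = Sx @ Sz.T
    fwd, bwd = sims.argmax(axis=1), sims.argmax(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def self_learning_step(X, Z, pairs, keep_prob=0.1):
    # C2: orthogonal mappings from the SVD of X_D^T Z_D, then mutual-NN
    # dictionary induction with dropout on the similarity matrix.
    XD = X[[i for i, _ in pairs]]
    ZD = Z[[j for _, j in pairs]]
    U, _, Vt = np.linalg.svd(XD.T @ ZD)
    Wx, Wz = U, Vt.T
    P = (X @ Wx) @ (Z @ Wz).T
    P = P * (np.random.rand(*P.shape) < keep_prob)      # dropout for exploration
    fwd, bwd = P.argmax(axis=1), P.argmax(axis=0)
    return Wx, Wz, [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def symmetric_reweighting(X, Z, D):
    # S4: given the binary dictionary matrix D, re-weight dimensions by how
    # well they correlate across languages (Wx = U S^1/2, Wz = V S^1/2).
    U, s, Vt = np.linalg.svd(X.T @ D @ Z)
    S_half = np.diag(np.sqrt(s))
    return X @ (U @ S_half), Z @ (Vt.T @ S_half)
```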
2
In this paper, we take GloVe (Pennington et al., 2014) as the base embedding model and gender as the protected attribute. It is worth noting that our approach is general and can be applied to other embedding models and attributes. Following GloVe (Pennington et al., 2014) , we construct a word-to-word co-occurrence matrix X, denoting the frequency of the j-th word appearing in the context of the i-th word as X i,j . w,w ∈ R d stand for the embeddings of a center and a context word, respectively, where d is the dimension.In our embedding model, a word vector w consists of two parts w = [w (a) ; w (g) ]. w (a) ∈ R d−k and w (g) ∈ R k stand for neutralized and gendered components respectively, where k is the number of dimensions reserved for gender information. 3 Our proposed gender neutralizing scheme is to reserve the gender feature, known as "protected attribute" into w (g) . Therefore, the information encoded in w (a) is independent of gender influence. We use v g ∈ R d−k to denote the direction of gender in the embedding space. We categorize all the vocabulary words into three subsets: male-definition Ω M , female-definition Ω F , and gender-neutral Ω N , based on their definition in WordNet (Miller and Fellbaum, 1998) .Gender Neutral Word Embedding Our minimization objective is designed in accordance with above insights. It contains three components:EQUATIONwhere λ d and λ e are hyper-parameters. The first component J G is originated from GloVe (Pennington et al., 2014) , which captures the word proximity:J G = V i,j=1 f (X i,j ) w T iw j + b i +b j − log X i,j 2 .Here f (X i,j ) is a weighting function to reduce the influence of extremely large co-occurrence frequencies. b andb are the respective linear biases for w andw. The other two terms are aimed to restrict gender information in w (g) , such that w (a) is neutral. Given male-and female-definition seed words Ω M and Ω F , we consider two distant metrics and form two types of objective functions.In J L1 D , we directly minimizing the negative distances between words in the two groups:J L1 D = − w∈Ω M w (g) − w∈Ω F w (g) 1 .In J L2 D , we restrict the values of word vectors in [β 1 , β 2 ] and push w (g) into one of the extremes:J L2 D = w∈Ω M β 1 e − w (g) 2 2 + w∈Ω F β 2 e − w (g) 2 2 ,where e ∈ R k is a vector of all ones. β 1 and β 2 can be arbitrary values, and we set them to be 1 and −1, respectively.Finally, for words in Ω N , the last term encourages their w (a) to be retained in the null space of the gender direction v g :J E = w∈Ω N v T g w (a) 2 ,where v g is estimating on the fly by averaging the differences between female words and their male counterparts in a predefined set,v g = 1 |Ω | (wm,w f )∈Ω (w (a) m − w (a) f ),where Ω is a set of predefined gender word pairs. We use stochastic gradient descent to optimize Eq. (1). To reduce the computational complexity in training the wording embedding, we assume v g is a fixed vector (i.e., we do not derive gradient w.r.t v g in updating w (a) , ∀w ∈ Ω ) and estimate v g only at the beginning of each epoch.
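The two gender-related terms added to the GloVe objective can be sketched as follows. This is an illustrative PyTorch fragment using the L1-distance variant of the distance term; the tensor names are ours, and the standard GloVe term J_G (weighted least squares over co-occurrence counts) is omitted.

```python
import torch

def j_d_l1(W_g_male, W_g_female):
    # J_D (L1 variant): push the summed gendered components of male- and
    # female-definition words apart (minimizing the *negative* distance).
    return -torch.norm(W_g_male.sum(dim=0) - W_g_female.sum(dim=0), p=1)

def j_e(W_a_neutral, v_g):
    # J_E: keep the neutralized part w^(a) of gender-neutral words in the
    # null space of the gender direction v_g.
    return ((W_a_neutral @ v_g) ** 2).sum()

def estimate_gender_direction(W_a_male, W_a_female):
    # v_g is re-estimated at the start of each epoch as the mean difference
    # between the neutralized vectors of predefined male/female word pairs.
    return (W_a_male - W_a_female).mean(dim=0)

# total objective: J = J_G + lambda_d * j_d_l1(...) + lambda_e * j_e(...)
```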
2
This paper has two parts. First, we briefly investigate factors that make a word substitution valid in context and present a machine learning approach to deciding the validity of word substitutions ( §3.1). Then, in the second part ( §4), we study whether readers prefer simpler but more ambiguous words. We use data from the 2007 SemEval lexical substitution task both parts.In order to investigate the tradeoff between ambiguity and commonness, we need an algorithm to:1. Discover possible lexical replacements, and 2. Rank the suitability of these replacements according to parameters such as ambiguity and commonness.Our interest is really in the second step, but we need to identify valid replacements before we begin to rank them. For this purpose, we restricted ourselves to WordNet 3.0 (Miller, 1995) as a source of substitutions. The first step involved extraction of the "synsets" (synonym sets) that contain the word being replaced and then listing all of the elements in those synsets to find synonyms. For verbs and nouns, we also include any synsets bearing a "hypernym" relation to one of the originals; and similarly for adjective synsets via a "similar to" relation.For this paper, we focus on the second step. This involved determining and weighting various properties of the words deemed as possible replacements. We identified the following properties:1. context: a distributional measure of the likelyhood of each word in the context of the sentence;2. recognisability: an estimation of how likely the word is to be recognised; i.e., whether the word is in the reader's vocabulary;3. suitability: an estimate of whether the word is a suitable replacement, given the sense of the original word; and 4. ambiguity: how polysemous the word is.In this way, words that are very common in the context should be more likely to be chosen, but might still be ranked lower than another less common word that is also less ambiguous. There should be a strong preference in the system output for any options that are both common and unambiguous.For the context, we produce a unit vector of the words surrounding the target item (maximum of 5 either side) weighted in proportion to their distance from it. To use an example from the task: "We cannot stand as helpless spectators while millions die for want in a world of plenty" would be encoded as:          cannot, 0.2083 as, 0.2083 we, 0.1666 helpless, 0.1666 spectators, 0.1250 while, 0.0833 millions, 0.0416          An entry in the corpus matching one of the substitutions (e.g., "remain") will have its surrounding vector similarly derived. The dot-product of the two is then calculated. The context score for that substitution option ("remain" here) is the sum of all such vector dot-products for entries in the corpus.The recognisability score is an estimation of how likely a word is to be in a reader's lexicon. We observed that the form of a graph plotting word frequency against word rank does not appear to be plausible as a model of an individual's likely vocabulary. The Zipfian distribution of language would make such a simplistic model predict that the second most common word would only have a 50% chance of being recognised. We predict that a large number of the most common words are almost guaranteed to be recognised, and then a long-tail of the less frequently used words with diminishing recognisability. We model this with the logistic regression function 11+e −z with z = 6 − rank 10000 . This model is so far unjustified though. 
It predicts a vocabulary of 60,000 words, as per Aitchinson (1994) , following a logistic regression curve plateauing with the most common 30,500 words returning recognisabilities greater than 0.95 and then describing a long-tail of words with reducing recognisabilities.As there is no word-sense disambiguity process involved, all we can be sure of is that one of the original word's senses was the intended sense. The suitability score is calculated as the portion of the original word's senses that the substitution shares. Thus the suitability of a substitution (subs) given the original (orig) is |senses(subs) ∩ senses(orig)| |senses(orig)|The ambiguity score is simply the inverse of the number of senses held by the substitution word: 1 |senses(subs)|The 2007 SemEval lexical substitution task corpus consists of 30 selected words appearing in ten sentences each, giving 300 sentences in total. For each of these 300 sentences, there is a manually compiled list of valid lexical substitutions for the selected word. The challenge is to computationally derive suitable alternatives for the selected word in each of the 300 sentences. Results were scored for precision and recall relative to the manually compiled gold standard.The SemEval 2007 task authors described a baseline for WordNet systems that achieved a precision of 0.30 and recall of 0.29. Our implementation (that multiplies the values of the four features defined above) scores a precision of 0.35 and a recall of 0.35. But, it should still be noted that solutions designed at the time used a much richer set of sources for replacements, including automatically constructed paraphrase corpora, and subsequently scored much better, with the best system achieving precision and recall of 0.72.The solution described above assumes (without justification) an equal weighting for each attribute. We also trained a machine learner to classify replacements as valid or invalid based on these four features. To create labelled data, we collated all of the possible replacements as found by our method described in §3.1. We then labelled the replacement word as "valid" if it was one of those found in the manually compiled gold standard for the task, and "invalid" otherwise.A number of modifications were made to the attributes in order to make them more suitable for the machine-learning process. It was found that the context score had an extremely long tail, and taking the logs of each context score gave a much more reasonable distribution. The polysemy scores were, by their inverse-integer nature, skewed towards 0.0 with large gaps between each fractional value (e.g. no score could possibly be in the range (0.5, 1.0)). For this reason, we instead just used the number of senses the word could be used in, directly, rather than taking the inverse. The overlap scores were modified to be the raw number of senses shared (or the cardinality of the intersection of the two words' sets of senses), demonstrating that in the vast majority of cases only a single sense was shared, suggesting it might not be a very useful metric.This data was then split into ten parts, each with the results and scores for three words. (Each section therefore did not have the same number of entries.) We tested each set on an IBk classifier (Aha et al., 1991 ) trained on the other nine. After extracting the predicted "valid" results we scored them as we described in §3.2 with precision and recall of 0.291. 
The poor performance of the machine-learning classifier is possibly due to the small number of words available for training.
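A minimal sketch of the unweighted four-feature ranking described in §3.1 follows; the function names are ours, and the context score is assumed to be precomputed from the weighted context-vector dot products over the corpus.

```python
import math

def recognisability(rank):
    # Logistic model of whether a word with the given frequency rank is in
    # a reader's vocabulary: 1 / (1 + e^-z) with z = 6 - rank / 10000.
    z = 6.0 - rank / 10000.0
    return 1.0 / (1.0 + math.exp(-z))

def score_substitution(context_score, rank, senses_sub, senses_orig):
    # Unweighted product of the four features (the baseline configuration
    # before any machine-learned weighting is applied).
    suitability = len(senses_sub & senses_orig) / len(senses_orig)
    ambiguity = 1.0 / len(senses_sub)
    return context_score * recognisability(rank) * suitability * ambiguity
```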
2
Given a document T = {x 1 , x 2 , ..., x n }, where x i is the i th token, the problem of keyphrase generation is to generate a set of keyphrases y = {y 1 , y 2 , ..., y m } that best capture the semantic meaning of T . In this paper, we approach this as a supervised problem solved specifically using GAN (Goodfellow et al., 2014 ). Our GAN model consists of a generator G trained to produce a sequence of keyphrases from a given document, and a discriminator D that learns to distinguish between machine-generated and human-curated keyphrases. Adversarial learning for text is a challenging task as it is not straight-forward to back-propagate the loss of the discriminator due to the discrete nature of text data (Rajeswar et al., 2017) . We use RL to address this issue, where the generator is treated as an RL agent, and its rewards are obtained from the 1 Codehttps://github.com/avinsit123/ keyphrase-gan discriminator's outputs. Fig 1, shows an overview of our framework.-We employ CatSeq (Yuan et al., 2018) as our generator. CatSeq model uses an encoder-decoder framework where the encoder is a bidirectional Gated Recurrent Unit (bi-GRU), and the decoder a forward GRU. For a given document, the generator produces a sequence of keyphrases:y = {y 1 , y 2 , ..., y m }, where each keyphrase y i is com- posed of tokens y 1 i , y 2 i , ..., y l i i .To incorporate outof-vocabulary tokens, we use a copying mechanism (Gu et al., 2016) . We also use an attention mechanism to help the generator identify the relevant components of the source text. 2.2 Discriminator -The main aim of the discriminator is to distinguish between human-curated and machine-generated keyphrase sequences. To achieve this, the discriminator would also require a representation of the original document T . We proposed a new conditional hierarchical discriminator model (Fig 2) that consumes the original document T , a sequence of keyphrases y, and outputs that probability of the sequence being human-curated.The first layer of this hierarchical model consists of m + 1 bi-GRUs. The first bi-GRU encodes the input document T as a sequence of vectors: h = {h 1 , h 2 , ..., h n }. The other m bi-GRUs, which share the same weight parameters, encode each keyphrase y j as a vector k j , resulting in a sequence of vectors: {k 1 , k 2 , ..., k m }. We then use an attention-based approach (Luong et al., 2015) to build context vectors c j for each keyphrase (eq. 1), where c j is a weighted average over h. By concatenating c j and k j , we get a contextualized representation e j = [c j :k j ] of y j . c j = n i=1 h i • e h i wsk j n i=1 e h i wsk j (1)The second layer of the discriminator is a GRU which consumes the average of the document representations h avg and all the contextualized keyphrase representations e 1 , e 2 , ....., e m as:EQUATIONThe final state of this layer is passed through one fully connected layer (w f ) and sigmoid transformation to get the probability that a given keyphrase sequence is human-curated P h = σ(w f s m+1 ).The goal of the framework is to optimize the generator to produce keyphrase sequences that resemble human-curated keyphrase sequences. This is achieved by training the generator and discriminator in an alternating fashion. Namely, we train the first version of the generator using maximum likelihood estimation (MLE). We then use this generator to produce machine-generated keyphrases (S f ) for all documents. 
We combine them with the corresponding human curated keyphrases (S r ), and train the first version of the discriminator to optimize for the following loss function:D loss = −Ey∈S r [log(D(y))]−Ey∈S f [log(1−D(y))] (3)To train the subsequent versions of the generator, we employ reinforcement learning, where the policy gradient is defined as:EQUATIONB is a baseline obtained by greedy decoding of keyphrase sequence using self-critical sequence training (Rennie et al., 2016) method. The rewards for the generator are calculated from the outputs of the discriminator trained in the previous iteration. The resulting generator is then used to create new training samples for the discriminator. This process is continued until the generator converges.When using RL for text generation (Li et al., 2017) , it is necessary to support rewards for intermediate steps or partially decoded sequences. One of the advantages of our proposed discriminator architecture it can assign individual rewards to each generated keyphrase or one reward to the entire sequence. To support individual rewards, each state s i of the final discriminator layer is passed through a feed-forward neural network with a sigmoid activationR(y i ) = D(y i ) = σ(W f s i+1 ).To obtain reward for the entire sequence, we just use the final predicted probability.3 Experimental Workwanted keyphrases and improves diversity. All baseline models in Ex 2 of Table 6 generate keyphrase speech 2 times. However, GAN M R removes all repeating occurences of keyphrases and generates a diverse keyphrase sequence as evidenced by α-nDCG@5 metrics of diversity in Table 4. 3. GAN M R model doesn't help the generator introduce new keyphrases A consistent feature noticeable across all examples is that while the GAN model does im-Source Abstract: Interestingness of frequent itemsets using bayesian networks as background knowledge. The paper presents a method for pruning frequent itemsets based on background knowledge represented by a bayesian network. The interestingness of an itemset is defined as the absolute difference between its support estimated from data and from the bayesian network. Efficient algorithms are presented for finding interestingness of a collection of frequent itemsets and for finding all attribute sets with a given minimum interestingness. Practical usefulness of the algorithms and their efficiency have been verified experimentally. Categories and subject descriptors h. CatSeq: interestingness; frequent itemsets; bayesian networks; data mining; CatSeqTG: interestingness; frequent itemsets; bayesian networks; data mining. CatSeqCorr: interestingness; frequent itemsets; bayesian networks; background knowledge; data mining CatSeqD: interestingness;frequent itemsets;bayesian networks;background knowledge; GANMR: frequent items; bayesian networks; Original Keyphrases: interestingness;frequent itemset;frequent itemsets;bayesian network;background knowledge;data mining;emerging pattern;association rule;association rules; Source Abstract: Twenty years of the literature on acquiring out of print materials . Out of print materials to assess recurring issues and identify changing practices. 
The out of print literature is uniform in its assertion that libraries need to acquire o.p.materials to replace worn or damaged copies, to replace missing copies, to duplicate copies of heavily used materials, to fill gaps in collections, to strengthen weak collections, to continue to develop strong collections, and to provide materials for new courses, new programs, and even entire new libraries. CatSeq: out of print; libraries; information retrieval; CatSeqTG: out of print; CatSeqCorr: out of print; libraries; information retrieval; data mining; CatSeqD: out of print; print; libraries; united kingdom; GANMR: out of print; retrieval; Original Keyphrases: out of print materials; recurring issues; changing practices; library materials; out of print books; acquisition; 4. GAN M R removes original keyphrases predicted by the pre-trained CatSeq model Sometimes when the keyphrase is present in both the real and fake keyphrase sequence, the discriminator assigns low rewards to these keyphrases even though they might be true. These keyphrases often get discarded due to these low rewards during the GAN training process. Consider Ex.1 of Table 7 , even though keyphrases interestingness and data mining, both, are predicted by the CatSeq generator and are present in the original keyphrase sequence. However, the GAN M R model removes both these keyphrases generating a less original keyphrase sequence, thereby decreasing F1 score.Thus GAN M R improves upon the CatSeqgenerated keyphrases and removes repeated keyphrases. However, it falls short in generating new keyphrases thus not increasing the F1 score much.
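To make the adversarial training loop concrete, the sketch below shows the discriminator loss of Eq. (3) and a self-critical policy-gradient update in which the discriminator's output probabilities serve as rewards. It is a schematic PyTorch fragment: batching, the CatSeq generator itself, and per-keyphrase reward assignment are abstracted away, and the rewards are assumed to be scalar tensors produced by the discriminator.

```python
import torch

def discriminator_loss(p_real, p_fake):
    # Eq. (3): maximize log D(y) on human-curated sequences and
    # log(1 - D(y)) on machine-generated ones.
    return -(torch.log(p_real).mean() + torch.log(1.0 - p_fake).mean())

def self_critical_pg_loss(log_probs_sampled, reward_sampled, reward_greedy):
    # Policy gradient with a self-critical baseline: the reward of the
    # greedily decoded keyphrase sequence is subtracted from the reward of
    # the sampled sequence before weighting its log-probability.
    advantage = (reward_sampled - reward_greedy).detach()
    return -(advantage * log_probs_sampled.sum())
```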
2
Our framework adopts the network structure of VL-BERT (Su et al., 2020) . VL-BERT is a singlestream cross-modal model that concatenates word features from the text and bounding box features from the image and feeds the concatenated sequence into a series of transformer blocks.Both vision-grounded masked language model (MLM) and text-grounded masked region classification (MRC) task on image-caption data are used in our model by default, as they have shown strong performance in VL-BERT (Su et al., 2020; ). Since we introduce auxiliary multilingual text corpus, we also use MLM on the texts in other languages by default. Motivated by Unicoder (Huang et al., 2019) showing that pretrained models can be further improved by involving more tasks, we introduce two additional cross-lingual pretraining tasks and one cross-modal task for improving the performance. Cross-model Text Recovery. This task (CMTR) is motivated by the multilingual pretraining model Unicoder (Huang et al., 2019) . As shown in Figure 2, CMTR is based on the image-caption pairs as input, but it does not use the original caption words. Instead, it computes an alignment between word features and bounding box features extracted by tools (e.g., Faster-RCNN (Anderson et al., 2018) ), and uses attended features to simultaneously recover all input words. In particular, let (B, E) be an image-caption input pair, where B = (b 1 , b 2 , • • • , b n )i = n j=1ã ij b j , whereã ij = softmax(A i,: )[j], b j ∈ R h , e i ∈ Rh , and h denotes the embedding dimension. A ∈ R m×n is the attention matrix calculated by bi-linear attention as A ij = e T i Wb j , where W is a trainable parameter. Finally we takeÊ = tanh((ê 1 ,ê 2 , • • • ,ê m )) as input and predict the original caption words. The objective function is:EQUATIONwhere ∆(., .) is the sum of token-level crossentropy loss and e(.) is the encoder component including the input layer, the attention layer and transformer layers. d(.) is the decoder applied on the output of transformers, which is a shared linear projection layer with other MLM tasks and CLTR task introduced below.Cross-lingual Text Recovery. This task (CLTR) is adopted from Unicoder (Huang et al., 2019) , which takes a pair of parallel sentences (X, Y ) and lets the pretrained model learn the underlying word alignments between two languages. Similar to CMTR, we also use the bi-linear attention mechanism to compute an attended representationX for input sentence X in the source language with its parallel sentence Y , and then try to recover X using the attended inputX. In CLTR task, we optimize the same objective function in Eq. (1). Note that CLTR and CMTR do not share attention parameters since there is still a large modal gap between text and image before applying cross-attention.Translation Language Model. This task (TLM) is adopted from XLM (Conneau and Lample, 2019) , which takes a pair of parallel sentences with randomly masked tokens in different languages as input. The model is trained to predict the masked tokens by attending to local contexts and distant contexts in another language. Interested readers please refer to Conneau and Lample (2019) for more details about its objective function.For fine-tuning, we minimize the triplet ranking loss to fine-tune the retrieval model. To boost the performance, we use the hard negative mining strategy in SCAN (Lee et al., 2018) . For each text query, there is only one positive image sample and the rest are negative. 
Denoting a mini-batch of training samples by $\{(q_i, I_i)\}_{i=1}^{K}$, where a query $q_i$ is relevant only to the image $I_i$, we penalize only the hardest negative image in the mini-batch:
$$L(q_i) = \max_{j \neq i}\,[R(q_i, I_j) - R(q_i, I_i) + m]_+,$$
where $m$ is the margin (set to 0.2 by default) and $[x]_+ = \max(0, x)$ is a clip function. $R(q, I)$ evaluates the similarity between query $q$ and image $I$, parameterized by $u$ and $b$:
$$R(q, I) = u^\top \mathrm{BERT}_{\mathrm{CLS}}(q, I) + b.$$
Symmetrically, for each image we penalize only the hardest negative query in the mini-batch:
$$L(I_i) = \max_{j \neq i}\,[R(q_j, I_i) - R(q_i, I_i) + m]_+.$$
Considering the whole mini-batch of images and texts, the final loss function is
$$L = \frac{1}{K}\sum_{i=1}^{K}\big[L(q_i) + L(I_i)\big].$$
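A vectorized sketch of this hardest-negative mining over a mini-batch is shown below; `R` is assumed to be the K×K matrix of similarity scores R(q_i, I_j) produced by the model, with the matching pairs on the diagonal.

```python
import torch

def hardest_negative_triplet_loss(R, margin=0.2):
    # R[i, j] = R(q_i, I_j); the diagonal holds the positive pairs.
    K = R.size(0)
    pos = R.diag().view(K, 1)
    mask = torch.eye(K, dtype=torch.bool, device=R.device)
    # L(q_i): hardest negative image for each text query (row-wise).
    cost_img = (R - pos + margin).clamp(min=0).masked_fill(mask, 0)
    # L(I_i): hardest negative query for each image (column-wise).
    cost_txt = (R - pos.t() + margin).clamp(min=0).masked_fill(mask, 0)
    return (cost_img.max(dim=1).values + cost_txt.max(dim=0).values).mean()
```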
2
Medical report generation is a task to generate reports consisting of a sequence of wordsY = {y 1 , y 2 , ...y N } from a set of images X = {x k } M k=1. Most cases Y include more than one sentence. We annotated a set of finding labels F = {f 1 , f 2 , ...f T } for each set of images. The finding labels include abnormalities (indicated as .Positive), normalities (indicated as .Negative) and uncertain findings (indicated as .Uncertain) . Each finding label can be disassembled into a sequence of words as f t = {w t1 , w t2 , ...w tK }. For example, an abnormality "Airspace Opacity.Positive" label is divided into a sequence of {airspace, opacity, positive}.We employ Two-Stage Medical Report Generator (TS-MRGen), a framework that consists of two separate stages: an image diagnosis module and a data-to-text generation module. The image diagnosis module can be regarded as an image classification task that recognizes input images X and classifies them into a set of findings F . Radiologists can modify the image diagnosis module result F if errors are found in F . Alternatively, they can intentionally omit or append findings labels. The data-to-text generation module generates a report Y from F . We consider the text generation module as a data-to-text task.We train an image classification model that takes as input a single-view chest X-ray and output a set of probabilities of four types of labels (positive, negative, uncertain, and no mention) for each possible finding label. We use EfficientNet-B4 (Tan and Le, 2019) as a network architecture that was initialized with the pretrained model on ImageNet (Deng et al., 2009) .In some cases, the reports are described based on two images: front view and lateral view. Following Irvin et al. (2019) , this module outputs the mean probability of the model between two images.! ! !"#$%&" '(($)*+, !+)*-*.# !,#$/+,*& 0#1&-*.# ! " 0+ 2,#$/+,*& *) )##, &,3445 " # !"#$%&'()#*%)$+" ,-'+-'."/01$/( 2'$*3&#-045$.0)-6$ 7+$3042&(-)-6$ !"#$ %&!"#$%&' (&#$%&' )&#$"*+',#+$' -./"/#0.1)&#$"*+',#+/$"12#$'&13-)24" $ # 641#,#%&-#34 )#,-#,7#489 )&/2"*,14 !!"-'8*)" ,-'+-'."/01$/( !"#$%&'( !"#$%&'( !"#$%&'( !"#$%&'( !"#$%&'( !"# !"" #$$%Figure 3: Overview of our reinforcement learning (RL) with a reconstructor. We leverage the clinical Reconstruction Score (CRS), which estimates the factual correctness of generated reports, as a reward for RL.We adopt a table-to-text encoder-decoder model (Liu et al., 2018) for the text generation module to use words in the findings class labels. The encoder of the text generation module has two layers: a word-level encoder and a label-level layer.h w tk = Enc word (w tk , h w tk−1 ) (1) h l t = MLP label ([h w t0 , h w tK ]) (2) Therein, [h w t0 , h w tK ]denotes the concatenation of vectors h w t0 and h w tK . MLP label represents a multilayer perceptron. We use a one-layer bi-directional gated recurrent unit (GRU) for the word level encoder.For the decoder, we use one-layer GRU with an attention mechanism (Bahdanau et al., 2015) :EQUATIONwhere h l represents the max-pooled vector from {h l 0 , ..., h l T }. The context vector c n is calculated over the label-level hidden vectors h l t and the decoder hidden state h d n .We use RL to train the text generation model to improve the clinical correctness of the generated reports. A benefit of RL is that the model can be trained to produce sentences that maximize the reward, even if the word sequence does not match the correct answer. 
Many studies of text generation with RL (Keneshloo et al., 2019) use rewards, such as the BLEU and ROUGE metrics, to improve the generated text. To improve the clinical correctness of the generated reports, Liu et al. (2019a) and Irvin et al. (2019) adopted clinically coherent rewards for RL with CheXpert Labeler (Irvin et al., 2019) , a rule-based finding mention annotator. However, in the medical domain, no such annotator is available in most cases other than English chest X-ray reports.We propose a new reward, Clinical Reconstruction Score (CRS), to quantify the factual correctness of reports with a reconstructor module. Figure 3 shows an overview of our method, RL with CRS. Contrary to the data-to-text generator, the reconstructor reversely predicts the appropriate finding labels from the generated reports. This reconstructor quantifies the clinical correctness of the reports. Therefore, we can estimate the correctness of reports without rule-based annotators.We utilize BERT (Devlin et al., 2019) as a reconstructor and reconstructed the finding labelsF as a multi-label text classification task:EQUATIONwhere FC and BERT represent the fully connected layer and the BERT layer, respectively.Ŷ denotes a generated report. In addition, CRS is defined as an F-score of the predicted finding labelsF against the input finding labels for the data-to-text module F . This BERT reconstructor is trained with a Class-Balanced Loss (Cui et al., 2019) to address imbalanced datasets. We design the overall reward as a combination of ROUGE-L score and CRS:EQUATIONwhere Y t represents a gold report regarding the predicted report Y and λ rouge is a hyperparameter. The goal of RL is to find parameters to minimize the negative expected reward R(Ŷ ) forŶ :EQUATIONwhere P θ denotes a policy network for the text generation model. We adopt SCST (Rennie et al., 2017) to approximate the gradient of this loss:∇ θ L rl θ ≈ −∇ θ log P θ (Ŷ s )(R(Ŷ s ) − R(Ŷ g )) (7)whereŶ s is a sampled sequence with a Monte Carlo sampling. We use the softmax function with temperature τ for sampling sequences. R(Ŷ g ) is a baseline reward calculated from a greedily decoded sequenceŶ g .To train the language model, RL with only CRS and ROUGE as a reward is insufficient. Therefore, we use the cross-entropy loss to generate fluent sentences. We design an overall loss function for training as a combination of the RL loss and crossentropy loss L xent :L all = λ rl L rl + (1 − λ rl )L xent (8)where L xent is the cross-entropy loss calculated between the gold reports and generated reports, and λ rl is a hyperparameter.We propose a novel method, RL with Data Augmentation method (RL-DA), to encourage the model to focus on infrequent findings. We focus on the asymmetricity between the augmentation cost of the input data and that of the target report sentences. The input data, which comprise a set of finding labels, can be augmented easily by adding or removing a finding label automatically. However, the augmentation cost is higher for the target reports than the input data because the target reports are written in natural language. Therefore, we introduce a semi-supervised reinforcement learning method to train the model solely by augmenting the input data. We conduct a data augmentation process of RL-DA as the following steps.Step 1: List and Filter all Candidate Finding Labels. 
Given a set of finding labels F = {f 1 , f 2 , ...f T }, the objective of the data augmentation is to obtain a new set of finding labelsF , for which an additional finding label f T +1 is added to F . We list all finding labels that can be appended to F . We filter the finding labels inappropriate for appending F according to the clinical relation between the labels. Some pairs of finding labels have clinically contradictory relations. We filter the labels based on the following two rules. a. Contradictory Relation. We exclude a pair of contradictory finding labels. For example, the abnormality "Pleural Effusion.Positive" and the normality "Pleural Effusion.Negative" must not be included in the same setF .b. Supplementary Relation. We exclude a pair of contradicting finding labels that supplement other finding labels in F . For example, "Pleural Effusion.Mild" is excluded if "Pleural Effusion.Positive" not in F .Step 2: Assign Sample Finding Labels. We sample an additional finding label f T +1 to append to F . The label is extracted from a set of candidates by random sampling. The data imbalance is mitigated because the data augmentation process appends a new finding label irrespective of the frequency of this finding labels in the training data. We use this augmented set of finding labelsF for RL. The overall loss function is as follows:L all = λ rl (L rl + λ aug L aug ) + (1 − λ rl )L xent (9)where λ rl and λ aug are hyperparameters. L aug denotes the RL loss calculated using the augmented setF . L aug is calculated in the same way as L rl with a reward R(Y ) under the condition of λ rouge = 0. This is because no reference report is available for the augmented setF . Hence, RL-DA method enables training of the model with more data at a low cost.
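The reward computation and the self-critical update can be sketched as follows. The finding labels are treated as plain Python sets here, and the function names and the way log-probabilities are aggregated are illustrative rather than the authors' implementation.

```python
def clinical_reconstruction_score(pred_labels, input_labels):
    # CRS: F-score of the finding labels reconstructed by the BERT
    # reconstructor from the generated report, against the labels fed to
    # the data-to-text generator.
    tp = len(pred_labels & input_labels)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred_labels), tp / len(input_labels)
    return 2 * precision * recall / (precision + recall)

def combined_reward(rouge_l, crs, lambda_rouge=0.5):
    # Eq. (5): mix ROUGE-L against the gold report with CRS. For augmented
    # label sets with no reference report, lambda_rouge is set to 0.
    return lambda_rouge * rouge_l + (1.0 - lambda_rouge) * crs

def scst_loss(log_probs_sampled, reward_sampled, reward_greedy):
    # Eq. (7): self-critical sequence training; the greedily decoded
    # sequence provides the baseline reward.
    advantage = reward_sampled - reward_greedy
    return -advantage * log_probs_sampled.sum()
```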
2
We adopt the method from Wang et al. (2020) to construct our correction model. Additional details are introduced in the following sections.We use a BERT-based model as our pre-trained model. BERT is mainly trained with a task called Masked Language Model. In the Masked Language Model task, some tokens in a sentence are replaced with masked tokens ([MASK] ), and the model needs to predict the replaced tokens.In this study, we use the Chinese-RoBERTawwm-ext model provided by Cui et al. (2019) . The main differences between Chinese-RoBERTawwm-ext and original BERT are as follows:Whole Word Masking (WWM) Devlin et al. (2019) proposed a new masking method called Whole Word Masking (WWM) after proposing their original BERT, which masks entire words instead of subwords. They demonstrated that the original prediction task that only masks subwords is easy and that the performance has been improved by masking entire words. Therefore, Cui et al. (2019) adopted this method to train their Chinese[Original Sentence] 然后 准 准 准备 备 备 别 的 材 材 材料 料 料 。 [Original BERT] 然 后 准 准 准 [MASK] 别 的 [MASK] 料 料 料 。 [Whole Word Masking] 然 后 [MASK] [MASK] 别 的 [MASK] [MASK] 。 [English Translation]Then prepare for other materials. pre-trained models. It should be noted that they used WordPiece (Wu et al., 2016) to preprocess Chinese sentences, and Chinese sentences are segmented into characters (not subwords) by Word-Piece. Therefore, in WWM, when a Chinese character is masked, other Chinese characters that belong to the same word should also be masked. Table 1 shows an example of WWM.Training Strategy Cui et al. (2019) followed the training strategy studied by . Although Cui et al. (2019) referred to the training strategy from , there are still some differences between them (e.g., they did not use dynamic masking).Training Data In addition to Chinese Wikipedia (0.4B tokens) that was originally used to train BERT, an extended corpus (5.0B tokens), which consists of Baidu Baike (a Chinese encyclopedia) and QA data, was also used. The extended corpus has not been released due to a license issue.In this study, we use Transformer as our correction model. Transformer has shown excellent performance in sequence-to-sequence tasks such as machine translation and has been widely adopted in recent English GEC studies (Kiyono et al., 2019; Junczys-Dowmunt et al., 2018) .However, a BERT-based pre-trained model only uses the encoder of Transformer; therefore, it can not be directly applied to sequence-to-sequence tasks that require both an encoder and a decoder, such as GEC. Hence, we initialize the encoder of Transformer with the parameters learned by Chinese-RoBERTa-wwm-ext, and the decoder is initialized randomly. Finally, we fine-tune this initialized model on Chinese GEC data and use it as our correction model.
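One way to realize this initialization with the Hugging Face Transformers library is sketched below. The paper's own Transformer implementation may differ (in particular, its decoder need not share BERT's architecture); here a BERT-style decoder with randomly initialized weights and cross-attention stands in for it, and `hfl/chinese-roberta-wwm-ext` is the public checkpoint released by Cui et al. (2019).

```python
from transformers import BertConfig, BertLMHeadModel, BertModel, EncoderDecoderModel

# Encoder: initialized with the Chinese-RoBERTa-wwm-ext weights.
encoder = BertModel.from_pretrained("hfl/chinese-roberta-wwm-ext")

# Decoder: same architecture but randomly initialized, configured as a
# decoder with cross-attention over the encoder states.
decoder_config = BertConfig.from_pretrained("hfl/chinese-roberta-wwm-ext")
decoder_config.is_decoder = True
decoder_config.add_cross_attention = True
decoder = BertLMHeadModel(decoder_config)

# Full sequence-to-sequence correction model, to be fine-tuned on Chinese
# GEC data (source: erroneous sentence, target: corrected sentence).
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```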
2
While most existing methods summarize a call transcript as a single paragraph, our system provides a collection of sentences that summarize the entire dialogue in a chronological order. Given a call transcript, the system utilizes word embeddings to break the transcript into semantically coherent segments (Alemi and Ginsparg, 2015) . Each segment is summarized independently capturing key information such as: customer's issue, agent's solution or the underlying topic of the discussion. Finally, the grammatical coherence of highlights is analyzed using a dedicated model before suggesting them to the user. Figure 3 provides a high-level overview of the system's flow.Next, we introduce the key components of our dialogue summarization system in details.Unlike general documents, conversation transcripts have unique structures associated with speakers and turns. In sales calls, participants can either be a customer or an agent and these roles impose a unique language style that can be leveraged by the model. Motivated by this observation, we propose an encoder-decoder model called Dialog-BART, which adapts the well-known BART (Lewis et al., 2020) model with additional embedding parameters to model both turns and speakers positions (Zhang et al., 2019c; Bao et al., 2020) . For speaker embeddings, we introduce designated vectors to represent each speaker which can be easily generalized to multi-participant dialogues. Additionally, we leverage another set of vectors to model turn position embeddings. During inference, the model determines the speaker and turn indices by leveraging a special token that separates the dialogue's turns. As shown in Figure 4 , DialogBART's input is calculated as the sum of the corresponding token, position, speaker and turn position embeddings. These parameters are randomly initialized, however, the remaining parameters are initialized with weights from a pretrained 1 BART-like encoderdecoder models (Lewis et al., 2020; Shleifer and Rush, 2020) . All these weights are further finetuned on dialogue summarization tasks.Despite the human-in-the-loop user experience, customers still expect high quality summaries which require minimal modifications by them. We propose a novel model that determines the quality of each summary highlight in terms of coherence, fluency and its acceptability in general.Grammatical acceptability, a property of natural language text, implies whether a text is accepted or not as part of the language by a native speaker.The notion was widely investigated through vast work done in automatic detection of grammatical errors (Atwell, 1987; Chodorow and Leacock, 2000; Bigert and Knutsson, 2002; Wagner et al., 2007) and on acceptability judgment of neural networks (Lau et al., 2017; Warstadt et al., 2019) . And yet, we are not aware of works that observe the acceptability of neural generated summaries for validation purposes. To determine a highlight's acceptability, we compute the perplexity of each highlight given by a Pretrained Language Model (PLM). This PLM is fine-tuned on summaries from Dialog-Sum dataset (Chen et al., 2021a ) and in-domain proprietary data in a traditional self-supervised manner. 
Recall that the perplexity of a sequence $W = w_0 w_1 \ldots w_n$ is defined as
$$\mathrm{PPL}(W) = \exp\Big(-\frac{1}{n+1}\sum_{i=0}^{n} \log p_\theta(w_i \mid w_{<i})\Big),$$
where $\theta$ are the language-model-specific parameters and $p_\theta$ is the probability function corresponding to the distribution over vocabulary tokens induced by the same model. Based on the perplexity score, the system determines whether a given highlight should be filtered out, presented to the user, or presented with an indication that its revision may be required. Figure 2 illustrates how the system helps users focus their efforts on modifying borderline acceptable highlights based on the perplexity score.
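In code, the acceptability check amounts to scoring each highlight with the fine-tuned language model and thresholding its perplexity. The sketch below assumes a causal LM from the Transformers library; the threshold values and function names are illustrative, not the production configuration.

```python
import math
import torch

def highlight_perplexity(model, tokenizer, highlight):
    # Perplexity under the fine-tuned PLM, i.e. exp of the mean token-level
    # negative log-likelihood of the highlight.
    ids = tokenizer(highlight, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def triage(ppl, accept_below=40.0, reject_above=200.0):
    # Map the perplexity to one of the three system decisions.
    if ppl < accept_below:
        return "present"
    if ppl > reject_above:
        return "filter out"
    return "present with a revision indication"
```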
2
Our goal is to develop an effective black box attack for RC based QA models. Our approach proceeds in two steps: first, we build an approximation of the victim model, and second, we attack the approximate model with a powerful white box method. The result of the attack is a collection of adversarial inputs that can be applied to the victim. In this section we describe these steps in detail.

The first step in our approach is to build an approximation of the victim model via model extraction (Krishna et al., 2020). At a high level, this approach constructs a training set by generating inputs that are served to the victim model and collecting the victim's responses. The responses act as the labels of the inputs. After a sufficient number of inputs and their corresponding labels have been collected, a new model can be trained to predict the collected labels, thereby mimicking the victim. The approximate model is known as the extracted model. The crux of model extraction is an effective method of generating inputs. Recall that in RC based QA, the input is composed of a query and a context. Like previous work, we employ two methods for generating contexts: WIKI and RANDOM (Krishna et al., 2020). In the WIKI scheme, contexts are randomly sampled paragraphs from the WikiText-103 dataset. In the RANDOM scheme, contexts are generated by sampling random tokens from the WikiText-103 dataset. For both schemes, a corresponding query is generated by sampling random words from the context. To make the queries resemble questions, tokens such as "where," "who," "what," and "why" are inserted at the beginning of each query, and a "?" symbol is appended to the end. Labels are collected by serving the sampled queries and contexts to the victim model. Together, the queries, contexts, and labels are used to train the extracted model. An example query-context pair appears in Table 5.

A successful adversarial attack on an RC based QA model is a modification to a context that preserves the correct answer but causes the model to return an incorrect span. We study non-targeted attacks, in which eliciting any incorrect response from the model is a success (unlike targeted attacks, which aim to elicit a specific incorrect response from the model). Figure 1 depicts a successful attack. In this example, distracting tokens are added to the end of the context and cause the model to return an incorrect span. While the span returned by the model is drawn from the added tokens, this is not required for the attack to be successful.

At a high level, the ADDANY attack, proposed by Jia and Liang (2017), generates adversarial examples for RC based QA models by appending a sequence of distracting tokens to the end of a context. The initial distracting tokens are iteratively exchanged for new tokens until model failure is induced, or a pre-specified number of exchanges has been exceeded. Since the sequence of tokens is often nonsensical (i.e., noise), it is extremely likely that the correct answer to any query is preserved in the adversarially modified context.

In detail, ADDANY proceeds iteratively. Let q and c be a query and context, respectively, and let f be an RC based QA model whose inputs are q and c and whose output, S = f(c, q), is a distribution over token spans of c (representing possible answers). Let s_i = arg max S_i, i.e., the highest probability span returned by the model for context c_i and query q, and let s′ be the correct (ground-truth) span.
The ADDANY attack begins by appending a sequence of d tokens (sampled uniformly at random) to c, producing c_1. For each appended token w_j, a set of words W_j is initialized from a collection of common tokens and from tokens that appear in q. During iteration i, compute S_i = f(c_i, q) and calculate the F1 score of s_i with respect to s′. If the F1 score is 0, i.e., no token that appears in s_i also appears in s′, then return the perturbed context c_i. Otherwise, for each appended token w_j in c_i, iteratively exchange w_j with each token in W_j (holding all w_k, k ≠ j, constant) and evaluate the expected F1 score with respect to the corresponding distribution over token spans returned by f. Then set c_{i+1} to be the perturbation of c_i with the smallest expected F1 score. Terminate after a pre-specified number of iterations. For further details, see Jia and Liang (2017).

During each iteration, the ADDANY attack uses the victim model's distribution over token spans, S_i, to guide construction of the adversarial sequence of tokens. Unfortunately, this distribution is not available when the victim is a black box model. To side-step this issue, we propose: i) building an approximation of the victim, i.e., the extracted model (Section 3.1), ii) for each c and q, running ADDANY on the extracted model to produce an adversarially perturbed context c_i, and iii) evaluating the victim on the perturbed context. The method succeeds if the perturbation causes a decrease in F1, i.e., F1(s_i, s′) < F1(s_0, s′), where s_0 is the highest probability span for the unperturbed context.

Since the extracted model is constructed to be similar to the victim, it is plausible for the two models to have similar failure modes. However, due to inevitable differences between the two models, even if a perturbed context c_i induces failure in the extracted model, failure of the victim is not guaranteed. Moreover, the ADDANY attack resembles a type of over-fitting: as soon as a perturbed context c_i causes the extracted model to return a span s_i for which F1(s_i, s′) = 0, c_i is returned. In cases where c_i is discovered by exploiting an artifact of the extracted model that is not present in the victim, the approach will fail.

To avoid this brittleness, we present ADDANY-KBEST, a variant of ADDANY that constructs perturbations that are more robust to differences between the extracted and victim models. Our method is parameterized by an integer k. Rather than terminating when the highest probability span returned by the extracted model, s_i, has an F1 score of 0, ADDANY-KBEST terminates when all of the k-best spans returned by the extracted model have an F1 score of 0, or after a pre-specified number of iterations. Precisely, let S_i^k be the k highest probability token spans returned by the extracted model; then terminate when:

max_{s ∈ S_i^k} F1(s, s′) = 0.

If the k-best spans returned by the extracted model all have an F1 score of 0, then none of the tokens in the correct (ground-truth) span appear in any of the k-best token spans. In other words, such a case indicates that the context perturbation has caused the extracted model to lose sufficient confidence in all spans that are at all close to the ground-truth span.

Table 1: A comparison of the original model (VICTIM) against the extracted models generated using the two schemes (RANDOM and WIKI). bert-base-uncased is used as the LM in all the models mentioned above. All the extracted models use the same number of queries (query budget of 1x) as in the SQuAD training set. We report the F1 and EM (Exact Match) scores on an evaluation set of 1,000 questions sampled from the dev set.
Intuitively, this method is more robust to differences between the extracted and victim models than ADDANY, and it explicitly avoids constructing perturbations that only lead to failure on the best span returned by the extracted model.

Note that an ADDANY-KBEST attack may not discover a perturbation capable of yielding an F1 of 0 for all of the k-best spans within the pre-specified number of iterations. In such situations, a perturbation is returned that minimizes the expected F1 score among the k-best spans. We also emphasize that, during the ADDANY-KBEST attack, a perturbation may be discovered that leads to an F1 score of 0 for the best token span, but unlike ADDANY, this does not necessarily terminate the attack.
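A minimal Python sketch of the ADDANY-KBEST stopping criterion described above; the token-level F1 follows the usual SQuAD-style definition, and the span strings in the example are made up.

from collections import Counter

def f1(pred_tokens, gold_tokens):
    """Token-level F1 between a predicted span and the ground-truth span."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def kbest_success(kbest_spans, gold_span):
    """ADDANY-KBEST stopping test: every one of the k-best spans must have F1 = 0."""
    return max(f1(s.split(), gold_span.split()) for s in kbest_spans) == 0.0

kbest = ["in the added noise", "random tokens here", "completely unrelated words"]
print(kbest_success(kbest, "the Eiffel Tower"))   # True -> perturbation accepted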
2
This section defines the key notation and briefly formulates the problem of this study. We suppose that a review x consists of k content words x^c = [w^c_1, w^c_2, ..., w^c_k], n aspect words x^t = [w^t_1, w^t_2, ..., w^t_n], and m sentiment words x^s = [w^s_1, w^s_2, ..., w^s_m]. To prevent conceptual confusion, we use superscripts "s", "t" and "c" to indicate the variables related to sentiment words, aspect words and content, respectively. Each review x in the corpus has a category label y and a corresponding reference summary Z = [z_1, z_2, ..., z_T], where T is the length of the reference summary.

Our model MARS consists of two tasks: the abstractive review summarization task and the text categorization task, both working on a shared document encoding layer. In this section, we elaborate the main components of MARS in detail.

This section introduces our Mutual Attention Network (MAN), which learns better sentiment, aspect and context representations via interactive learning. MAN utilizes the attention mechanism associated with the sentiment and aspect words to capture important information from the input review and learn a sentiment/aspect-aware review representation. Further, MAN makes use of the interactive information from the input review to supervise the modeling of the sentiment and aspect words, which is helpful for capturing important information in summary generation.

Each word w in the review is mapped to a low-dimensional embedding e ∈ R^d through a word embedding layer, where d denotes the embedding dimensionality. Then, we employ three independent LSTM networks to obtain the hidden states of the context words, the sentiment words, and the aspect words. Formally, given the input word embedding e_t at time step t, the hidden state h_t is updated from the previous hidden state h_{t-1} as

i_t = σ(W_i e_t + U_i h_{t-1} + b_i)
f_t = σ(W_f e_t + U_f h_{t-1} + b_f)
o_t = σ(W_o e_t + U_o h_{t-1} + b_o)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_c e_t + U_c h_{t-1} + b_c)
h_t = o_t ⊙ tanh(c_t)

where i_t, f_t, o_t, c_t are the input gate, forget gate, output gate and memory cell, respectively. W and U denote weight matrices to be learned, and b represents biases. σ is the sigmoid function and ⊙ stands for element-wise multiplication. Hence, we can use the LSTM networks to obtain the hidden states H^c = [h^c_1, h^c_2, ..., h^c_k] ∈ R^{k×u} for context words, H^s = [h^s_1, h^s_2, ..., h^s_m] ∈ R^{m×u} for sentiment words in the review, and H^t = [h^t_1, h^t_2, ..., h^t_n] ∈ R^{n×u} for aspect words in the review, where u is the size of the hidden state of each LSTM unit. Then, we feed these hidden states to a mean-pooling layer to obtain the initial representations of context words, sentiment words, and aspect words in the review, respectively:

v^c = (1/k) Σ_{i=1}^{k} h^c_i;   v^s = (1/m) Σ_{i=1}^{m} h^s_i;   v^t = (1/n) Σ_{i=1}^{n} h^t_i   (7)

3.1.2 Aspect/sentiment-aware Document Representation

The attention mechanism plays an important role in text modeling. Inspired by (Bahdanau et al., 2014; Ma et al., 2017), this section introduces the proposed mutual attention network (MAN) to learn a better sentiment-aware and aspect-specific document representation. In addition, the MAN model can also well represent the sentiment and aspect word representations.
Formally, given the context word rep-resentations [h c 1 , h c 2 , ..., h c k ], the initial representations of sentiment and aspect words (i.e., v s and v t , respectively), the mutual attention mechanism generates the attention weight C i of the context byEQUATIONwhere C i indicates the importance of the i-th word in the context, and ρ is the attention function that calculates the importance of h c i in the context:EQUATIONwhere U c and W c are projection parameters to be learned, and b c is the bias. Only using the attention vector C cannot capture the interactive information of the context words and the aspect words (sentiment words), and lacks the ability of discriminating the importance of the words in the context. To make use of the interactive information between the context words and the aspect words (sentiment words), we also use the context words as attention source to attend to the aspect words (sentiment words). Similar to Eq. (8), we can calculate the attention vectors T and S for the aspect words and sentiment words as:EQUATIONEQUATIONwhere ρ is the same as in Equation 9. After computing the mutual attention vectors for the context words, aspect words and sentiment words, we can get the final context, aspect, sentiment representations emb c , emb s and emb t based on the mutual attention vectors C, S and T by:EQUATIONFinally, we concatenate the context, aspect, sentiment representations to form the aspect/sentimentaware review representation emb x for review x:EQUATIONWe feed the final document representation emb x into a task-specific fully connected layer and a softmax classifier to predict the category distribution of the input document x:EQUATIONwhere V 1 and V 2 are projection parameters to be learned. We train this model by minimizing the crossentropy between the predicted distributionŷ and the ground truth distribution y for each review in the training data:J categ. M L (θ) = − 1 N N i=1 D j=1 I(y i = j)log(ŷ i ) (15)where θ is the set of parameters of our model, D is the number of categories, N is the number of reviews in the training set, I(•) is an indicator such that I(true) = 1 and I(false) = 0.The abstractive review summarization subtask shares the same review representation module (encoder) with the text categorization subtask. The generation of summary Z is performed by a LSTM decoder.To generate category-specific summaries, the review representation emb x are transformed to categoryspecific review embedding which is expected to capture category characteristics. Inspired by (Dong et al., 2014; Cao et al., 2017) , we develop a category-specific transformation process to make the transformed review embedding hold the category characteristics information. Formally, our model transforms the review embedding emb x to a category-specific review embedding cemb x byEQUATIONwhere W µ ∈ R d is the transformation matrix, d is the dimensionality of the category-specific document embedding. Note that we define the same dimensionality for both the document embedding and the category-specific document embedding.To make the transformed embedding capture category-specific information, we develop the categoryspecific transformation matrix W µ according to the predicted product category. We introduce |C| sub-matrices (W 1 µ , • • • , W |C| µ ), with each directly corresponding to one product category. Based on the predicted category derived from Eq. 
14, the category-specific transformation matrix W_µ is computed as the weighted sum of these sub-matrices: W_µ = Σ_{i=1}^{|C|} ŷ_i W^i_µ. In this way, W_µ is automatically biased towards the sub-matrix of the predicted category.

Inspired by (See et al., 2017), the pointer-generator network is adopted as the decoder to generate summaries. The pointer-generator network allows both copying words from the input text via pointing and generating words from a fixed vocabulary (P_vocab). Thus, the pointer-generator has the ability to produce out-of-vocabulary (OOV) words.

The category-specific review representation cemb_x is used to initialize the hidden state s_0 of the LSTM decoder. At each decoding step t, the decoder receives the word embedding of the previous word w_{t-1} (during training, this is the previous word of the reference summary; at test time it is the previous word emitted by the decoder) and updates its hidden state s_t: EQUATION

The attention mechanism is used to calculate the attention weights a_t and the context vector c_t. The attention mechanism is expected to take both context-sentiment and context-aspect correlations into consideration. The enhanced context vector c_t is aggregated from the representations of those informative words (see Eq. 21). In this paper, we explore three kinds of attention: semantic attention, sentiment attention and aspect attention. Details of these three kinds of attention are described as follows.

Semantic Attention. Semantic attention simply applies the context representation itself as the attention source. Following (Shimaoka et al., 2017), we apply a multi-layer perceptron (MLP) to compute semantic attention weights as follows: EQUATION where W^a_1 and U^a_1 are parameter matrices and b^a_1 is a bias parameter. The attention computed for context words is independent of the aspect/sentiment words. Hence, it is difficult for semantic attention to focus on those context words that are highly related to the aspects and sentiments.

Sentiment Attention. In order to capture the correlation between sentiment words and the context, we take the sentiment word representation emb^s_x as the attention source to compute sentiment attention weights: EQUATION where W^a_2 is a bi-linear parameter matrix, U^a_2 is a parameter matrix, and b^a_2 is a bias parameter.

Aspect Attention. Aspect attention applies the aspect word representation emb^t_x as the attention query, which is expected to capture the correlations between aspect words and context words: EQUATION where W^a_3 is a bi-linear parameter matrix, U^a_3 is a parameter matrix, and b^a_3 is a bias parameter.

We define the attention fusion of the semantic attention, sentiment attention and aspect attention at time step t as: EQUATION where λ_1, λ_2 and λ_3 are hyper-parameters that determine the weights of the three kinds of attention. We set λ_1 = 0.5, λ_2 = λ_3 = 0.25. Note that for documents that contain no sentiment words or aspect words, we use only the semantic attention to distinguish the important information.
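The fusion step can be sketched in a few lines of Python; the attention fusion defined above only states the weighted sum, so renormalising the result back to a distribution is our own assumption.

import torch

def fuse_attention(a_sem, a_sent=None, a_asp=None, lambdas=(0.5, 0.25, 0.25)):
    """Weighted fusion of semantic, sentiment and aspect attention distributions.
    Falls back to semantic attention when sentiment/aspect words are absent."""
    l1, l2, l3 = lambdas
    if a_sent is None or a_asp is None:
        return a_sem
    fused = l1 * a_sem + l2 * a_sent + l3 * a_asp
    return fused / fused.sum(dim=-1, keepdim=True)   # renormalise to a distribution

k = 7   # number of context words
a_sem, a_sent, a_asp = (torch.softmax(torch.randn(k), dim=-1) for _ in range(3))
print(fuse_attention(a_sem, a_sent, a_asp).sum())    # ~1.0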
The context vector c_t is then concatenated with the decoder state s_t and fed through a linear layer and a softmax layer to compute the output probability distribution over the vocabulary at the current step:

P_vocab = softmax(V^d_2 (V^d_1 [s_t ⊕ c_t] + b_1) + b_2)

where V^d_1 and V^d_2 are learnable parameters. We follow (See et al., 2017) to integrate the attention distribution into the final vocabulary distribution, which is defined as the interpolation between two probability distributions:

P(w) = p_gen P_vocab(w) + (1 − p_gen) Σ_{i: w_i = w} a_{t,i}

where p_gen ∈ [0, 1] is the switch variable controlling whether to generate a word from the vocabulary or to copy it directly from the original review. If w is an out-of-vocabulary (OOV) word, then P_vocab(w) is zero; if w does not appear in the source review, then Σ_{i: w_i = w} a_{t,i} is zero. p_gen is defined as:

p_gen = σ(U^d_1 c_t + U^d_2 s_t + U^d_3 e_{t−1} + b_gen)

where the vectors U^d_1, U^d_2, U^d_3 and the scalar b_gen are learnable parameters.

A common way of training a summary generation model is to estimate the parameters by minimizing the negative log-likelihood of the training data: EQUATION

Overall, MARS consists of two subtasks, each with its own training objective. To make the document embedding sensitive to the category knowledge, we train the two related tasks simultaneously. The joint multi-task objective is

J_ml(θ) = γ_1 L_1 + γ_2 L_2

where γ_1 and γ_2 are hyper-parameters that determine the weights of L_1 and L_2. Here, we set γ_1 = 0.2, γ_2 = 0.8.

However, the maximum likelihood estimation (MLE) method suffers from two main issues. First, the evaluation metric differs from the training loss. For example, in summarization systems the encoder-decoder models are trained using the cross-entropy loss but they are typically evaluated at test time using discrete and non-differentiable metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004). Second, the input of the decoder at each time step is often the previous ground-truth word during training. Nevertheless, when generating summaries in the testing phase, the input at the next time step is the previous word generated by the decoder. This exposure bias (Ranzato et al., 2015) leads to error accumulation at test time: once the model generates a "bad" word, the error propagates and accumulates with the length of the sequence.

To alleviate the aforementioned issues when generating summaries, we also optimize directly for ROUGE-1, since it achieves the best results among alternatives such as METEOR (Lavie and Agarwal, 2007) and BLEU (Papineni et al., 2002), by using a policy gradient algorithm and minimizing the negative expected reward: EQUATION where r(ẑ) is the reward of the greedily decoded sequence ẑ, and r(z^s) is the reward of the sequence z^s generated by sampling from the vocabulary at each step. After pre-training the proposed model by minimizing the joint ML objective (see Eq. 26), we switch the model to further minimize a mixed training objective, integrating the reinforcement learning objective J^sum_RL(θ) with the original multi-task loss J_ml(θ):

J_mixed(θ) = β J^sum_RL(θ) + (1 − β) J_ml(θ)

where β is a hyper-parameter, and we set β = 0.1.
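A hedged sketch of the mixed objective: the self-critical form of the RL term (greedy reward as baseline) is our reading of the description above, not a verbatim copy of the paper's equation, and the numbers in the usage example are arbitrary.

import torch

def mixed_loss(logp_sampled, reward_sampled, reward_greedy, ml_loss, beta=0.1):
    """Mixed objective: beta * RL (self-critical) + (1 - beta) * multi-task ML loss.
    logp_sampled: sum of token log-probs of the sampled summary z^s;
    reward_sampled / reward_greedy: ROUGE-1 of z^s and of the greedy summary z-hat."""
    # Policy-gradient term: rewards sampled sequences that beat the greedy baseline.
    rl_loss = -(reward_sampled - reward_greedy) * logp_sampled
    return beta * rl_loss + (1.0 - beta) * ml_loss

logp = torch.tensor(-35.2, requires_grad=True)
loss = mixed_loss(logp, reward_sampled=0.41, reward_greedy=0.38,
                  ml_loss=torch.tensor(2.7))
loss.backward()
print(float(loss))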
2
In this section, we brief the main methodology we followed to build our two models.We relied on Microsoft's computational network toolkit (CNTK) 1 in building up our two models. We preferred to tackle the task using an LSTM network to benefit from sequence modeling. The network consists of an input layer, some hidden layers, and an output one. The input layer mainly represents the input tweet in a vector of nodes. The hidden layers of our network consist of an embedding layer and some LSTM layers. We initially set the embedding layer to be a 100-nodes layer and an only one hidden LSTM layer of 200-nodes. For the non typed model, the output layer of the network consists of only two nodes, one to detect entities and the other for non entities, while the typed model's output layer is represented by eleven nodes, one for each entity class and the last one for non entities.For the experimentation procedures we conducted, first we pass the training data through an encoding component in order to convert the raw tweet into a vector which is the input representation of the network. Based on the experiment's feature representation, we produce a mapping function that maps the features into some identifiers. Using the same mapping function, we encode the development set. Then, we design the CNTK configuration file for the network by adjusting the input vector size, the output one, and the whole network parameters. We then, train the two models and test over both the training set and the development one. Finally, we score our results and decide whether to include this feature in the main flow or discard it. We adopted an incremental approach in which we fix all the parameters and change only one to assess the value of adding/removing it.
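Since CNTK configuration files are hard to reproduce today, the sketch below restates the described architecture (100-node embedding layer, one 200-node LSTM layer, 2 or 11 output nodes) in PyTorch; the vocabulary size, batch shapes and everything not stated in the text are placeholders, and the feature-encoding component is omitted.

import torch
import torch.nn as nn

class TweetNERTagger(nn.Module):
    """Embedding(100) -> single LSTM(200) -> per-token output layer.
    n_classes = 2 for the non-typed model, 11 for the typed one."""
    def __init__(self, vocab_size, n_classes=2, emb_dim=100, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):               # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))   # (batch, seq_len, 200)
        return self.out(h)                      # per-token entity scores

typed = TweetNERTagger(vocab_size=30000, n_classes=11)
logits = typed(torch.randint(0, 30000, (4, 25)))
print(logits.shape)   # torch.Size([4, 25, 11])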
2
The procedure we devised to track down oxymorons in corpora of written Italian stems from the observation that these constructions are closely connected with antonymous pairs, which have been the subject of several studies based on their co-occurrence in texts (cf. e.g. Charles & Miller 1989; Justeson & Katz 1991; Lobanova 2012; Kostić 2017 ).Our starting point was Jones' (2002) analysis of English antonyms, which makes use of a list of canonical antonymous pairs to be searched in a corpus of texts from The Independent (approx. 280M words). We therefore translated Jones' antonymous pairs into Italian and made a selection out of this set, driven mainly by the exclusion of predicative (e.g. confirm ~ deny) and adverbial (e.g. badly ~ well) couples.This resulted in a list of 17 noun ~ noun antonymous pairs, displayed in Table 1 . Then we designed an inventory of potential oxymorons. All the constructed couples were searched, as lemmas, in two large corpora of contemporary written Italian: Italian Web 2016 (itTenTen16, through the SketchEngine platform:Englishhttps://www.sketchengine.eu/) and CORIS (2017 version; Rossini Favretti et al. 2002; http://corpora.ficlit.unibo.it/coris _eng.html). The results were manually checked.The above-mentioned inventory of potential oxymorons was built in the following way.First, we matched each noun of each pair (e.g. odio 'hate') with its antonym (amore 'love') in either adjectival (amoroso / amorevole 'loving') or verbal (amare 'to love') form. With this first round of extractions, we obtained combinations such as odio amoroso (lit. hate lovely) and amorevole odio (lit. lovely hate) 'loving hate', as well as l'amore odia 'love hates', although sequences containing verbs were quite uncommon. However, the search for lemma verbs retrieved also a number of participial forms used as adjectives (as in amore odiato 'hated love', where odiato is the past participle of odiare 'to hate').Second, we added contrastive adjective ~ adverb pairs connected to the nouns in Table 1 (e.g. felicità 'happiness' > felice 'happy' > felicemente 'happily' + infelicità > infelice 'unhappy', gaining felicemente infelice 'happily unhappy').In addition, to enrich data retrieval, we selected lexemes semantically related to the members of the antonymous pairs in Table 1 (synonyms, hyponyms, etc.) from the Grande Dizionario Analogico della Lingua Italiana (Simone 2010). This step was inspired by Shen's (1987: 109) definition of indirect oxymoron, i.e. an oxymoron where "one of [the] two terms is not the direct antonym of the other, but rather the hyponym of its antonym" (like whistling silence, where whistling is a type of noise). Related lexemes were also searched in the two above-mentioned corpora. 
For the sake of exemplification, we illustrate some of these paradigmatic expansions for the following three antonymous pairs, for which we retrieved a considerable amount of data:

• caldo ~ freddo (hot ~ cold) → afa 'stuffiness', calura 'heat', fuoco 'fire', fiamma 'flame' ~ gelo 'frost', ghiaccio 'ice', grandine 'hail', neve 'snow', pioggia 'rain';
• felicità ~ infelicità (happiness ~ unhappiness) → allegria 'glee', contentezza 'cheer', gaiezza 'gaiety', gioia 'joy' ~ afflizione 'distress', depressione 'depression', disperazione 'despair', dolore 'pain', sconforto 'discouragement', scontento 'discontentment', tristezza 'sadness';
• silenzio ~ rumore (silence ~ noise) → [silenzio only] ~ boato 'rumble', fracasso 'racket', fragore 'clamour', grido 'shout', ululato 'howling', urlo 'scream'.

The final phase of the analysis involved the interrogation of the Sketch Engine Word Sketch tool, which describes the collocational behavior of words by showing the lexemes that most typically co-occur with them, within specific syntagmatic contexts, using statistical association measures. In this case, we searched for all nouns participating in the antonymous pairs in Table 1 and manually revised all top results provided by the Word Sketch function (thus focusing on the most statistically significant combinations). Besides oxymorons that we had already retrieved with the previous procedure (e.g. the very frequent silenzio assordante 'deafening silence'), this method allowed us to identify new configurations, for instance sentential patterns where the two opposite nouns are linked by the copula è 'is' (e.g. la luce è tenebra 'light is darkness') or prepositional phrases where the two opposite nouns are linked by a preposition (e.g. il fragore del silenzio 'the racket of silence').
2
In this section, we define target representations and evaluation metrics (2.1), and describe our transition-based parsing framework, consisting of an abstract transition system (2.2), a feature-based scoring function (2.3), and algorithms for decoding (2.4) and learning (2.5).

We take an unlabeled dependency tree for a sentence x = w_1, ..., w_n to be a directed tree T = (V_x, A), where V_x = {0, 1, ..., n}, A ⊆ V_x × V_x^+, and 0 is the root of the tree (Kübler et al., 2009). The set V_x of nodes is the set of positive integers up to and including n, each corresponding to the linear position of a word in the sentence, plus an extra artificial root node 0. We use V_x^+ to denote V_x − {0}. The set A of arcs is a set of pairs (i, j), where i is the head node and j is the dependent node.

To this basic representation of syntactic structure we add four labeling functions for part-of-speech tags, morphological features, lemmas, and dependency relations. The function π : V_x^+ → P maps each node in V_x^+ to a part-of-speech tag in the set P; the function µ : V_x^+ → M maps each node to a morphological description in the set M; the function λ : V_x^+ → Z* maps each node in V_x^+ to a lemma (a string over some character set Z); and the function δ : A → D maps each arc to a dependency label in the set D. The exact nature of P, M and D depends on the data sets used, but normally P and D only contain atomic labels, while the members of M are sets of atomic features encoding properties like number, case, tense, etc. For lemmas, we do not assume that there is a fixed lexicon but allow any character string as a legal value.

We define our target representation for a sentence x = w_1, ..., w_n as a quintuple Γ = (A, π, µ, λ, δ) such that (V_x, A) is an unlabeled dependency tree; π, µ and λ label the nodes with part-of-speech tags, morphological features and lemmas; and δ labels the arcs with dependency relations. For convenience, we refer to this type of structure as a morphosyntactic parse (or MS-parse, for short).

Figure 1: Transitions for joint morphological and syntactic analysis. The stack Σ is represented as a list with its head to the right (and tail σ) and the buffer B as a list with its head to the left (and tail β). The notation Γ[q_1, ..., q_m] denotes an MS-parse that is exactly like Γ except that q_1, ..., q_m hold true.

Transition / Condition:
LEFT-ARC_d:    ([σ|i, j], B, Γ) ⇒ ([σ|j], B, Γ[(j, i) ∈ A, δ(j, i) = d])   (i ≠ 0)
RIGHT-ARC_d:   ([σ|i, j], B, Γ) ⇒ ([σ|i], B, Γ[(i, j) ∈ A, δ(i, j) = d])
SHIFT_{p,m,l}: (σ, [i|β], Γ) ⇒ ([σ|i], β, Γ[π(i) = p, µ(i) = m, λ(i) = l])
SWAP:          ([σ|i, j], β, Γ) ⇒ ([σ|j], [i|β], Γ)   (0 < i < j)

The following evaluation metrics are used to score an MS-parse with respect to a gold standard:

1. POS: the percentage of nodes in V_x^+ that have the correct part-of-speech tag.
2. MORPH: the percentage of nodes in V_x^+ that have the correct morphological description; if the description is set-valued, all members of the set must match exactly.

A transition system for dependency parsing is a quadruple S = (C, T, c_s, C_t), where C is a set of configurations, T is a set of transitions, each of which is a (partial) function t : C → C, c_s is an initialization function mapping a sentence x to a configuration c ∈ C, and C_t ⊆ C is a set of terminal configurations. A transition sequence for a sentence x in S is a sequence of configuration-transition pairs C_{0,m} = [(c_0, t_0), (c_1, t_1), ..., (c_m, t_m)]
where c_0 = c_s(x), t_m(c_m) ∈ C_t, and t_i(c_i) = c_{i+1} for 0 ≤ i < m.

In our model for joint prediction of part-of-speech tags, morphological features and dependency trees, the set C of configurations consists of all triples c = (Σ, B, Γ) such that Σ (the stack) and B (the buffer) are disjoint sublists of the nodes V_x of some sentence x, and Γ = (A, π, µ, λ, δ) is an MS-parse for x. We take the initial configuration for a sentence x = w_1, ..., w_n to be c_s(x) = ([0], [1, ..., n], (∅, ⊥, ⊥, ⊥, ⊥)), where ⊥ is the function that is undefined for all arguments, and we take the set C_t of terminal configurations to be the set of all configurations of the form c = ([0], [ ], Γ) (for any Γ). The MS-parse defined for x by c = (Σ, B, (A, π, µ, λ, δ)) is Γ_c = (A, π, µ, λ, δ), and the MS-parse defined for x by a complete transition sequence C_{0,m} is Γ_{t_m(c_m)}.

The set T of transitions is shown in Figure 1. It is based on the system of Nivre (2009), where a dependency tree is built by repeated applications of the LEFT-ARC_d and RIGHT-ARC_d transitions, which add an arc (with some label d ∈ D) between the two topmost nodes on the stack (with the leftmost or rightmost node as the dependent, respectively). The SHIFT transition is used to move nodes from the buffer to the stack, and the SWAP transition is used to permute nodes in order to allow non-projective dependencies. Bohnet and Nivre (2012) modified this system by replacing the simple SHIFT transition by SHIFT_p, which not only moves a node from the buffer to the stack but also assigns it a part-of-speech tag p, turning it into a system for joint part-of-speech tagging and dependency parsing. Here we add two additional parameters m and l to the SHIFT transition, so that a node moved from the buffer to the stack is assigned not only a tag p but also a morphological description m and a lemma l. In this way, we get a joint model for the prediction of part-of-speech tags, morphological features, lemmas, and dependency trees.

In transition-based parsing, we score parses in an indirect fashion by scoring transition sequences. In general, we assume that the score function s factors by configuration-transition pairs: EQUATION Moreover, when using structured learning, as first proposed for transition-based parsing by Zhang and Clark (2008), we assume that the score is given by a linear model whose feature representations decompose in the same way:

s(x, C_{0,m}) = f(x, C_{0,m}) · w = Σ_{i=0}^{m} f(x, c_i, t_i) · w   (2)

Here, f(x, c, t) is a high-dimensional feature vector, where each component f_i(x, c, t) is a non-negative numerical feature (usually binary), and w is a weight vector of the same dimensionality, where each component w_i is the real-valued weight of the feature f_i(x, c, t). The choice of features to include in f(x, c, t) is discussed separately for each instantiation of the model in Sections 4-6.

For learning, we initialize all weights to 0.0, make N iterations over the training data and update the weight vector for every sentence x where the transition sequence C_{0,m} corresponding to the gold parse is different from the highest scoring transition sequence C*_{0,m}. More precisely, we use the passive-aggressive update of Crammer et al. (2006). We also use the early update strategy found beneficial for parsing in several previous studies (Collins and Roark, 2004; Zhang and Clark, 2008; Huang and Sagae, 2010).
This means that, at learning time, we terminate the beam search as soon as the hypothesis corresponding to the gold parse is pruned from the beam and then update with respect to the partial transition sequences constructed up to that point. Finally, we use the standard technique of averaging over all weight vectors seen in training, as originally proposed by Collins (2002).

PARSE(x, w)
    h_0.c ← c_s(x)
    h_0.s ← 0.0
    h_0.f ← {0.0}^dim(w)
    BEAM ← [h_0]
    while ∃h ∈ BEAM : h.c ∉ C_t
        TMP ← [ ]
        foreach h ∈ BEAM
            foreach t ∈ T : PERMISSIBLE(h.c, t)
                h.f ← h.f + f(x, h.c, t)
                h.s ← h.s + f(x, h.c, t) · w
                h.c ← t(h.c)
                TMP ← INSERT(h, TMP)
        BEAM ← PRUNE(TMP)
    h* ← TOP(BEAM)
    return Γ_{h*.c}
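To make the transition system of Figure 1 concrete, here is a small, self-contained Python sketch of the four transitions applied to a toy sentence; the Config fields, tags and lemmas are illustrative, and no scoring, beam search or permissibility checks are included.

from dataclasses import dataclass, field

@dataclass
class Config:
    stack: list                                   # node ids, top at the end
    buffer: list                                  # node ids, front at index 0
    arcs: set = field(default_factory=set)        # {(head, dep)}
    labels: dict = field(default_factory=dict)    # (head, dep) -> relation
    pos: dict = field(default_factory=dict)       # node -> tag
    morph: dict = field(default_factory=dict)     # node -> feature set
    lemma: dict = field(default_factory=dict)     # node -> lemma

def left_arc(c, d):        # [...|i, j] -> [...|j], add arc (j, i); i must not be the root
    i, j = c.stack[-2], c.stack[-1]
    assert i != 0
    c.arcs.add((j, i)); c.labels[(j, i)] = d
    del c.stack[-2]

def right_arc(c, d):       # [...|i, j] -> [...|i], add arc (i, j)
    i, j = c.stack[-2], c.stack[-1]
    c.arcs.add((i, j)); c.labels[(i, j)] = d
    c.stack.pop()

def shift(c, p, m, l):     # move the next buffer node to the stack and annotate it
    n = c.buffer.pop(0)
    c.stack.append(n)
    c.pos[n], c.morph[n], c.lemma[n] = p, m, l

def swap(c):               # [...|i, j] -> [...|j], push i back onto the buffer (0 < i < j)
    i, j = c.stack[-2], c.stack[-1]
    assert 0 < i < j
    del c.stack[-2]
    c.buffer.insert(0, i)

# "John sees Mary": the tags and lemmas stand in for the classifier's predictions.
c = Config(stack=[0], buffer=[1, 2, 3])
shift(c, "PROPN", {"Number=Sing"}, "John")
shift(c, "VERB", {"Tense=Pres"}, "see")
left_arc(c, "nsubj")
shift(c, "PROPN", {"Number=Sing"}, "Mary")
right_arc(c, "obj")
right_arc(c, "root")       # attaches the verb to the artificial root 0
print(sorted(c.arcs))      # [(0, 2), (2, 1), (2, 3)]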
2
The input for our task is the text-enriched network graph G. The goal is to compute a node embedding from G and then use the embedding to generate features for pairs of nodes, which can then be used for a prediction task. The process follows these steps.• Textual-Similarity (TS) Infused Social Graph: Construct graph weights W ij based on the text in G, according to (1) a Node or Edge view of the documents, and (2) using Topic Model or Word Embedding to represent the content.• Node Embedding: Construct an embedding function V → R k , mapping the (weighted) graph nodes into a R k dimensional space. We used the LINE method (Tang et al., 2015) . We omit the details due to space restrictions.• Feature Extraction: Construct a feature set for each node pair, using 9 similarity measures between the nodes' k-dimensional vector representations from the embedding. We experiment with additional features extracted directly.The TS-Infused social graph captures the interaction between node pairs by modifying the strength of the edge connecting them according to the similarity of the text generated by each one of the nodes. We identify several design decisions for the process.Node vs. Edge Each edge e ij ∈ G is associated with textual content d ij . We can characterize the textual content from the point of view of the node by aggregating the text over all its outgoing edges (i.e., D i ), or alternatively, we can characterize the textual content from the edge point of view, by only looking at the text contained in the relevant outgoing edges (i.e., D ij ).Representing Textual Content using Topic Models vs. Word Embedding Before we compute the similarity between the content of two parties, we need a vector space model to represent the textual information (the set of documents D i , or D ij ). One obvious method for this is topic modeling, in which the textual content is represented as a topic distribution. In this approach, we learn a topic model over the set of documents, and then represent each document via a set of topic weights (T i or T ij ). An alternative approach is using word embedding, which has been proved effective as a word representation. In this approach, we represent each document as the average of the embedding over the words in the document (WE i or WE ij ). Given the distributional representation of text associated with a node/edge, we assign a weight (w ij ) for each edge (e ij ) as the cosine similarity between vector representation of contents from neighboring nodes (e.g., d(T i , T j ) or d(T ij , T ji ), where d is cosine similarity).We utilize the LINE embedding technique (Tang et al., 2015) , aimed at preserving network structures when generating node embedding for social and information networks. LINE uses edge weights corresponding to the number of interactions between each pair of nodes. This only makes use of the network structure, without taking advantage of the text in the network. We modify the embedding procedure by using the edges weights W ij described above (i.e., based on the cosine similarity of the text between nodes i, j) and use the LINE algorithm to compute a k-dimensional embedding of the nodes in G.Distance-based Features Given a node pair represented by their k-dimensional node embedding, we generate features for the pair according to nine similarity measures. 
The nine measures used by us are Bray-Curtis distance, Canberra distance, Chebyshev distance, City Block (Manhattan) distance, Correlation distance, Cosine distance, Minkowski distance, Euclidean and squared Euclidean distance.Additional Features Besides the distance-based features, we can also add one or more other basic features related to nodes in the network. These include the following: (1) Network: The number of interactions between two nodes, e.g. number of emails sent and received. (2) Unigram: The unigram feature vector for text sent for each node. (3) Word embedding features: The word embedding vector for text sent for each node. Again we use the average of word embedding to represent documents.
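The nine pair features can be computed directly with scipy; the Minkowski order p is not specified in the text, so p = 3 below is an assumption.

import numpy as np
from scipy.spatial import distance

def pair_features(u, v):
    """The nine similarity/distance measures between two node embeddings."""
    return np.array([
        distance.braycurtis(u, v),
        distance.canberra(u, v),
        distance.chebyshev(u, v),
        distance.cityblock(u, v),       # Manhattan
        distance.correlation(u, v),
        distance.cosine(u, v),
        distance.minkowski(u, v, p=3),  # p is an assumption; the text does not state it
        distance.euclidean(u, v),
        distance.sqeuclidean(u, v),
    ])

k = 64
u, v = np.random.rand(k), np.random.rand(k)
print(pair_features(u, v).shape)   # (9,)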
2
Assuming a set of T emotions E = {e_1, e_2, ..., e_T} and a set of n instances X = {x_1, x_2, x_3, ..., x_n}, each instance x_i ∈ R^d is associated with a ranked list of its relevant emotions R_i ⊆ E and also a list of irrelevant emotions R̄_i = E − R_i. Relevant emotion ranking aims to learn a score function g(x_i) = [g_1(x_i), ..., g_T(x_i)] assigning a score g_t(x_i) to each emotion e_t, t ∈ {1, ..., T}.

As mentioned before, it is unnecessary to consider the rankings of irrelevant emotions since they might introduce errors into the model during the learning process. In order to differentiate relevant emotions from irrelevant ones, we need to define a threshold g_Θ(x), which could be simply set to 0 or learned from data (Fürnkranz et al., 2008). Emotions with scores lower than the threshold will be considered irrelevant and hence discarded. The identification of relevant emotions and their ranking can be obtained simultaneously according to the scores assigned by the ranking function g. Here, the predicted relevant emotions of instance x_i are denoted as R̂_i = {e_t ∈ E | g_t(x_i) > g_Θ(x_i)}.

The goal of relevant emotion ranking is to learn the parameters of the ranking function g. Without loss of generality, we assume that g consists of linear models, i.e., g_t(x_i) = w_t · x_i, t ∈ {1, 2, 3, ..., T} ∪ {Θ}, where Θ denotes the threshold. Relevant emotion ranking can be regarded as a special case of multi-label learning. Several evaluation criteria typically used in multi-label learning can also be used to measure the ranking function's ability to distinguish relevant emotions from irrelevant ones, such as hamming loss, one error, coverage, ranking loss, and average precision, as suggested in (Zhang and Zhou, 2014). However, these multi-label criteria cannot meet our requirement exactly, as none of them considers the ranking among emotions which are considered relevant. Therefore, by incorporating PRO loss (Xu et al., 2013), the loss function for the instance x_i is defined as follows:

L(x_i, R_i, ≺, g) = Σ_{e_t ∈ R_i ∪ {Θ}} Σ_{e_s ∈ ≺(e_t)} (1 / norm_{t,s}) l_{t,s}   (1)

where e_t refers to an emotion belonging to the relevant emotion set R_i or the threshold Θ of instance x_i, while e_s refers to an emotion that is less relevant than e_t, denoted as ≺. Thus, (e_t, e_s) represents four types of emotion pairs: (relevant, relevant), (relevant, irrelevant), (relevant, threshold), and (threshold, irrelevant). The normalization term norm_{t,s} is used to balance these four types of emotion pairs so that no type dominates through its set size. The set sizes of the four types of emotion pairs are |R_i| × (|R_i| − 1)/2, |R_i| × |R̄_i|, |R_i|, and |R̄_i|, respectively. Here, l_{t,s} refers to a modified 0-1 error:

l_{t,s} = 1 if g_t(x_i) < g_s(x_i);  1/2 if g_t(x_i) = g_s(x_i);  0 otherwise.

Note that l_{t,s} is non-convex and difficult to optimize. Thus, a large margin surrogate convex loss (Vapnik and Vapnik, 1998) implemented in hinge form is used instead:

L(x_i, R_i, ≺, g) = Σ_{e_t ∈ R_i ∪ {Θ}} Σ_{e_s ∈ ≺(e_t)} (1 / norm_{t,s}) (1 + g_s(x_i) − g_t(x_i))_+   (2)

where (u)_+ = max{0, u}. However, Eq. 2 ignores the relationships between different emotions. As mentioned in the Introduction section, some emotions often co-occur, such as "joy" and "love", while some rarely coexist, such as "joy" and "anger". Such relationship information among emotions can provide important clues for emotion ranking. Therefore, we incorporate this information into the emotion loss function as constraints.
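Before the constrained objective in Eq. 3 below, the basic hinge surrogate of Eq. 2 can be sketched as follows; the per-pair-type normalisers follow the set sizes given above, and the ω_{ts} correlation term is deliberately left out.

import numpy as np

def rer_hinge_loss(scores, relevant, ranking, theta_score):
    """Hinge surrogate of Eq. (2): sums (1 + g_s - g_t)_+ over the four pair types,
    each normalised by its set size. `ranking` orders the relevant emotions, most
    relevant first; `scores` maps every emotion to g_t(x); theta_score is g_Theta(x)."""
    irrelevant = [e for e in scores if e not in relevant]
    R, I = len(relevant), len(irrelevant)
    hinge = lambda u: max(0.0, u)
    loss = 0.0
    # (relevant, relevant): e_t ranked above e_s
    for a, et in enumerate(ranking):
        for es in ranking[a + 1:]:
            loss += hinge(1 + scores[es] - scores[et]) / max(R * (R - 1) / 2, 1)
    # (relevant, irrelevant) and (relevant, threshold)
    for et in relevant:
        for es in irrelevant:
            loss += hinge(1 + scores[es] - scores[et]) / max(R * I, 1)
        loss += hinge(1 + theta_score - scores[et]) / max(R, 1)
    # (threshold, irrelevant)
    for es in irrelevant:
        loss += hinge(1 + scores[es] - theta_score) / max(I, 1)
    return loss

scores = {"joy": 2.1, "love": 1.4, "anger": -0.8, "hate": -1.2}
print(rer_hinge_loss(scores, {"joy", "love"}, ["joy", "love"], theta_score=0.0))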
The objective function L (x i , R i , ≺, g) can be redefined as:L ω (x i , R i , ≺, g) = et∈R i ∪{Θ} es∈≺(et) 1 norm t,s × (1 + g s (x i ) − g t (x i ) + ω ts (w t − w s )) + (3)where the weight ω ts models the relationship between the t-th emotion and the s-th emotion in the emotion set and can be calculated in multiple ways. Since the Pearson correlation coefficient (Nicewander, 1988) is the most familiar measure of relationship between two variables, we use it to measure the relationship of two emotions using their original emotion scores across each corpus.From the above, it can be observed that the goal of relevant emotion ranking can be achieved through predicting an accurate relevant emotion set as well as the ranking of relevant emotions.After defining an appropriate loss function, we need to define a way to minimize the empirical error measured by the appropriate loss function and at the same time to control the complexity of the resulting model. It can be done by introducing a maximum margin strategy and regularization to deal with emotion ranking data, where a set of linear classifiers are optimized to minimize the emotion loss function mentioned before while having a large margin. We could potentially use an approach based on a label ranking method (Elisseeff and Weston, 2001) . It is worth mentioning that the margin of the (relevant, relevant) label pair needs to be dealt with carefully, which is not considered in (Elisseeff and Weston, 2001) .The learning procedure of relevant emotion ranking (RER) is illustrated in Figure 2 . The big rectangular dash line boxes denoted by x 1 to x n represent n instances in the training set. In each small box, e i , i ∈ {1, ...T } ∪ {Θ} represents an emotion of the instance where the shaded small boxes represent the relevant emotions while the dashed small boxes represent irrelevant ones and the last one e Θ is the threshold. Each emotion's corresponding weight vector is w i . We use m t,s to represents the margin between label e t and e s . There are four types of emotion pairs' margins in total, i.e., (relevant, relevant), (relevant, irrelevant), (relevant, threshold), and (threshold, irrelevant). Different types of emotion pairs' margins are denoted using different text/line colors. For each training instance x i , margin(x i ) represents the margin of instance x i which can be obtained by taking the minimum margin of all its possible label pairs m t,s . Similarly, the margin of the learning system margin(learningsystem) can be obtained by taking the minimum margin of all the training instances. By maximizing the margin of the learning system, the weight vector of each emotion can be derived from which the predicted emotion set and the ranking of relevant emotions can be obtained.The learning system is composed of T + 1 linear classifiers [w 1 ; ...; w T ; w Θ ] with one classifier for each emotion label and the threshold, where w t , t ∈ {1, ...T } ∪ {Θ} is the weight vector for the t-th classifier of emotion e t . For a training instance x i and its corresponding emotion label set E i , the learning system's margin on instance x i is defined as follows by considering its ranking ability on x i 's four types of emotion pairs, i.e., (relevant, relevant), (relevant, irrelevant), (relevant, threshold), and (threshold, irrelevant):EQUATIONHere, u, v returns the inner product u v. For each emotion pair (e t , e s ), its discrimination boundary corresponds to the hyperplane w t − w s , x i = 0. Therefore, Eq. 
4 returns the minimum value as the margin on instance x_i. The margin on the whole training set G can be calculated as follows: EQUATION (5)

If the learning algorithm is capable of properly ranking the four types of label pairs for each training instance, Eq. 5 will return a positive margin. In this ideal case, the final goal is to maximize the margin in Eq. 5: EQUATION (6)

Suppose we have sufficient training examples such that for each label pair (e_t, e_s) there exists x_i ∈ G satisfying e_t ∈ R_i ∪ {Θ} and e_s ∈ ≺(e_t). Then the objective in Eq. 6 becomes equivalent to max_{w_j} min_{1≤s<t≤T+1} 1/||w_t − w_s|| and can be rewritten as min_{w_j} max_{1≤s<t≤T+1} ||w_t − w_s||. Moreover, to overcome the complexity introduced by the max operator, the objective of the optimization problem can be re-written by approximating the max operator with the sum operator. Thus, the objective of Eq. 6 can be transformed as: EQUATION (7) subject to 1 ≤ j ≤ T + 1, e_t ∈ R_i ∪ {Θ}, e_s ∈ ≺(e_t).

To accommodate real-world scenarios where the constraints in Eq. 7 cannot be fully satisfied, slack variables can be incorporated into the objective function: EQUATION (8)

Since ξ_{its} can be easily determined by w_t and w_s, it does not need to be optimized explicitly. The final objective function can be reformulated as: EQUATION (9)

As can be seen, Eq. 9 consists of two parts balanced by the trade-off parameter λ. Specifically, the first part corresponds to the maximum margin of the learning system and can also represent the complexity of the learning system, while the second part corresponds to the emotion loss function of the learning system implemented in hinge form.

Let w = [w_1; ...; w_T; w_Θ]. Eq. 9 can be cast into a general SVM-type form:

min_{w,ξ} (1/2)||w||^2 + λ C^T ξ   s.t.   Aw ≥ 1_p − ξ,  ξ ≥ 0_p   (10)

where p is the total number of label pairs, calculated by Σ_{i=1}^{n} Σ_{e_t ∈ R_i ∪ {Θ}} Σ_{e_s ∈ ≺(e_t)} norm_{t,s}, and 1_p (0_p) is the p × 1 all-one (all-zero) vector. The entries in the vector C correspond to the weights of the hinge losses, i.e., the normalization terms that balance the four kinds of label pairs. The matrix A encodes the constraints for the instances, which reflect the emotion relationships and the margins of the label pairs. ξ does not need to be optimized since it can be easily determined by w. Hence the objective function can be reformulated into the following form without ξ: EQUATION (11)

Through minimizing the objective function F(w, G), we can finally obtain the parameter w and the ranking function g. Eq. 11 involves large-scale optimization. To address it, we consider an efficient Alternating Direction Method of Multipliers (ADMM) solution (Bertsekas and Tsitsiklis, 1989). The basic idea of ADMM is to take a decomposition-coordination procedure such that the solutions of the subproblems can be coordinated to find the solution to the original problem. We decompose G into M disjoint subsets, i.e., {G_1, G_2, ..., G_M}, and Eq. 11 is then converted into the following form: EQUATION (12)

The surrogate augmented Lagrangian function (LF) is introduced for Eq. 12, which is cast into the following form: EQUATION (13)

where α denotes the Lagrange multipliers and β is the penalty parameter. The updating process for Eq. 13 is shown in Algorithm 1.

Algorithm 1: Parameter updating process.
1: Decompose the data set G into M disjoint subsets {G_1, G_2, ..., G_M}. Set iteration i = 0.
2: Initialize {w^0_0, w^1_0, ..., w^M_0, α^1_0, ..., α^M_0}
3: while not converged do
4:    Set i = i + 1
5:    Update w^0_i and {w^m_i, α^m_i}_{m=1}^{M} as:
         {w^m_i}_{m=1}^{M} = argmin_{w^1,...,w^M} LF(w^0_{i−1}, {w^m_{i−1}, α^m_{i−1}}_{m=1}^{M}, β)
         w^0_i = argmin_{w^0} LF(w^0, {w^m_{i−1}, α^m_{i−1}}_{m=1}^{M}, β)
         α^m_i = α^m_{i−1} + β(w^m_i − w^0_i), ∀m = 1, 2, ..., M
6: end while
Output: final w^0

4 Experiments

4.1 Setup

We evaluate the proposed approach on two real-world corpora, one at the document level and the other at the sentence level.

Sina Social News (News) was collected from the Sina news Society channel, where readers can choose one of six emotions (Amusement, Touching, Anger, Sadness, Curiosity, and Shock) after reading a news article. As Sina is one of the largest online news sites in China, it is sensible to carry out experiments on it to explore readers' emotions (social emotions). News articles with fewer than 20 votes were discarded, since so few votes cannot be considered a proper representation of social emotion. In total, 5,586 news articles published from January 2014 to July 2016 were kept, together with the readers' emotion votes.

The Ren-CECps corpus (Blogs) (Quan and Ren, 2010) contains 34,719 sentences selected from blogs in Chinese. Each sentence is annotated with eight basic emotions from the writer's perspective, including anger, anxiety, expect, hate, joy, love, sorrow and surprise, together with emotion scores indicating the level of emotion intensity, which range from 0 to 1. Higher scores represent higher emotion intensity. The statistics of the two corpora are shown in Table 1.

The two corpora were preprocessed using word segmentation and filtering. The python jieba segmenter is used for segmentation, and stop words are removed based on a stop-word thesaurus. Words that appeared only once or in fewer than two documents were removed to alleviate data sparsity. We used a single-layer long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997) to extract the features of each text. LSTM is a kind of recurrent neural network which can capture sequence information from text and can represent the meanings of inputs in a reduced dimensional space. It treats text as a sequence of word embeddings and outputs a state vector over each word, which contains the information of the previous words. The final state vector can be used as the representation of the text. In our experiments, we set the dimension of each text representation to 100. During LSTM model training, we optimized the hyper-parameters using a development dataset built from external data. We train the LSTM using a learning rate of 0.001, a dropout rate of 0.3 and categorical cross-entropy as the loss function. The mini-batch (Cotter et al., 2011) size is set to 32. After that, the learned text representations are fed into the proposed system for relevant emotion ranking, as presented in the Methodology section.
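A rough PyTorch equivalent of the feature extractor described above (100-dimensional text representation, learning rate 0.001, dropout 0.3, cross-entropy loss, mini-batch 32); the 300-dimensional input embeddings, the vocabulary size and the auxiliary emotion head used here for pre-training are assumptions.

import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Single-layer LSTM over word embeddings; the final hidden state (dim 100)
    is used as the text representation fed to the ranking model."""
    def __init__(self, vocab_size, emb_dim=300, hidden=100, n_emotions=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(0.3)
        self.out = nn.Linear(hidden, n_emotions)   # auxiliary head for pre-training

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.emb(token_ids))
        rep = self.drop(h_n[-1])                    # (batch, 100) text representation
        return rep, self.out(rep)

model = TextEncoder(vocab_size=20000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()                     # categorical cross-entropy
x = torch.randint(0, 20000, (32, 50))               # mini-batch of 32 sentences
rep, logits = model(x)
loss = loss_fn(logits, torch.randint(0, 8, (32,)))
loss.backward(); opt.step()
print(rep.shape)   # torch.Size([32, 100])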
2
NMT systems can assign a conditional translation probability to an arbitrary sentence pair. Filtering based on this (Junczys-Dowmunt, 2018) won the WMT 2018 shared task on parallel corpus filtering (Koehn et al., 2018). Intuitively, we could score every pair of source and target sentences using a translation system in quadratic time, then return pairs that score highly for further filtering. We approximate this with beam search.

We build a prefix tree (trie) containing all sentences in the target language corpus (Figure 1). Then we translate each sentence in the source language corpus using the trie as a constraint on output in the target language. NMT naturally generates translations one token at a time from left to right, so it can follow the trie of target language sentences as it translates. Formally, translation typically uses beam search to approximately maximise the probability of a target language sentence given a source language sentence. We modify beam search to restrict partial translations to be a prefix of at least one sentence in the target language. The trie is merely an efficient data structure with which to evaluate this prefix constraint; partial translations are augmented to remember their position in the trie. We consider two places to apply our constraint.

In post-expansion pruning, beam search creates hypotheses for the next word, prunes hypotheses to fit in the beam size, and then requires that they be prefixes of at least one target language sentence. In practice, most sentences do not have a translation in the corpus, and search terminates early if all hypotheses are pruned.

In pre-expansion pruning, a hypothesis in the beam generates a probability distribution over all tokens, but only the tokens corresponding to children of the trie node can be expanded by the hypothesis. The search process is guaranteed to find at least one target sentence for each source sentence. Downstream filtering removes false positives.

Algorithm 1: Trie-constrained beam search with maximum output length L, beam size B, vocabulary V and a pre-built trie trie.

    beam_0 ← {<s>}
    match ← {}
    for time step t in 1 to L do
        beam_t ← {}
        for hypothesis h in beam_{t−1} do
            V_t ← V
            if pre-expansion then                      (v2)
                V_t ← V_t ∩ Children(trie, h)          (v2)
            beam_t ← beam_t ∪ Continue(h, V_t, B)
        beam_t ← NBest(beam_t, B − |match|)
        if post-expansion then                         (v1)
            beam_t ← beam_t ∩ trie                     (v1)
        Move full sentences from beam_t to match
        if beam_t is empty then return match
    return match

Algorithm 1 presents both variants of our modified beam search algorithm: besides canonical beam search, lines marked (v1) correspond to post-expansion pruning, while lines marked (v2) correspond to pre-expansion pruning. The modified beam search algorithm allows us to efficiently approximate the comparison between a source sentence and M target sentences. We let B denote the beam size and L the maximum output length. Given each source sentence, our NMT decoder only expands the top B hypotheses intersecting with the trie, at most L times, regardless of M. With N source sentences, our proposed method reduces the comparison complexity from O(MN) to O(BLN), where BL ≪ M.

Pre-expansion pruning leaves each source sentence with an output, which needs to be filtered out if it is not parallel. We propose to use two methods. When NMT generates an output, a sentence-level cross-entropy score is computed too. One way to perform filtering is to only keep sentences with a better per-word cross-entropy than a certain threshold.
Another way is to use Bicleaner, an off-the-shelf tool which scores sentence similarity at sentence pair level (Sánchez-Cartagena et al., 2018) . Filtering is optional for post-expansion pruning.The trie used in our NMT decoding should be fast to query and small enough to fit in memory. We use an array of nodes as the basic data structure. Each node contains a key corresponding to a vocabulary item, as well as a pointer to another array containing all possible continuations in the next level. Binary search is used to find the correct continuations to the next level. With byte pair encoding (BPE) (Sennrich et al., 2016) , we can always keep the maximum vocabulary size below 65535, which allows us to use 2-byte integers as keys, minimising memory usage.To integrate the trie into the decoder, we maintain external pointers to possible children nodes in the trie for each active hypothesis. When the hypotheses are expanded at each time step, the pointers are advanced to the next trie depth level. This ensures that cross-referencing the trie has a negligible effect on decoding speed.
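A minimal Python sketch of such a trie with sorted child keys and binary search; the production implementation lives inside the decoder and stores keys as 2-byte integer arrays, which plain Python lists only approximate.

import bisect

class TrieNode:
    """Node keyed by a BPE id (fits in 2 bytes); children kept sorted for binary search."""
    __slots__ = ("keys", "children", "is_end")
    def __init__(self):
        self.keys, self.children, self.is_end = [], [], False

    def child(self, token_id):
        i = bisect.bisect_left(self.keys, token_id)
        if i < len(self.keys) and self.keys[i] == token_id:
            return self.children[i]
        return None

    def add_child(self, token_id):
        i = bisect.bisect_left(self.keys, token_id)
        if i < len(self.keys) and self.keys[i] == token_id:
            return self.children[i]
        node = TrieNode()
        self.keys.insert(i, token_id)
        self.children.insert(i, node)
        return node

def build_trie(target_corpus):
    root = TrieNode()
    for sentence in target_corpus:          # each sentence: a list of BPE token ids
        node = root
        for tok in sentence:
            node = node.add_child(tok)
        node.is_end = True
    return root

root = build_trie([[12, 7, 99], [12, 7, 3], [55, 1]])
print(root.child(12).child(7).keys)         # allowed continuations: [3, 99]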
2
Entropy-based Sampling. In order to sample documents that contain hard-to-classify spans from the target domain, we use an uncertainty-based sampling method that uses entropy (Shannon, 1948) to discover documents containing targets the model is uncertain about. Let D_s and D_t represent the training data for the source and target domains, respectively. For each document in D_t, we predict the probability distribution over the 3 sentiment labels for each target, using a model trained on D_s, and compute the entropy per target prediction. The average entropy across all targets of the document indicates the overall uncertainty for the document. This aims to select documents based on informativeness.

Relative Salience (RS) based Sampling. We use Relative Salience (Mohammad, 2011) as a way to extract sentiment expressions that are more representative of the target domain when compared to the source domain. Based on the simplifying assumption that sentiment towards target spans is expressed through adjectives, we first extract all adjectives from each dataset using a Parts-of-Speech tagger. For each cross-domain experiment, we compute the RS of an adjective w as

RS(w | D_s, D_t) = (f_t / N_t) / (f_s / N_s),

where f_s and f_t denote the frequency of w in D_s and D_t, and N_s and N_t are the corresponding corpus sizes (see Table 3). For each cross-domain scenario, we select documents from the target training set that contain any of the top 10 adjectives with the highest RS score.

RS+Entropy Sampling. Our proposed method of sampling involves selecting documents collected from both the Relative Salience and Entropy-based methods in different proportions for model training. Given the number of documents we wish to sample, the combinations we experiment with include selecting 50%-50%, 30%-70% and 20%-80% from the RS and entropy-based strategies, respectively. Depending on the combination, we first pick the top k documents ordered from highest to lowest entropy score, followed by the remaining number of documents picked from the RS set. In Table 4, we provide a few document samples picked by RS and Entropy. As expected, the RS method picks examples containing sentiment expressions that are more relevant to the target domain. With L (source) → R (target), we see sentiment expressions such as friendly, delicious and romantic that are more representative of the Restaurant domain (see Table 3). Meanwhile, the Entropy-based approach selects examples that the model is most uncertain about. For example, targets such as Lobster bisque are unlikely to be present in the Laptops domain and result in the model's uncertainty in predictions. A similar behavior is observed with R→L and L→T.
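A compact sketch of the two sampling signals and their combination; the add-one smoothing in the RS estimate and the way the RS pool is filtered are our own choices, not taken from the paper.

import math
from collections import Counter

def doc_entropy(target_prob_dists):
    """Average prediction entropy over all targets in a document."""
    ent = lambda p: -sum(pi * math.log(pi) for pi in p if pi > 0)
    return sum(ent(p) for p in target_prob_dists) / len(target_prob_dists)

def relative_salience(adj_src, adj_tgt):
    """RS(w) = (f_t / N_t) / (f_s / N_s) for every adjective seen in the target domain."""
    cs, ct = Counter(adj_src), Counter(adj_tgt)
    Ns, Nt = sum(cs.values()), sum(ct.values())
    return {w: (ct[w] / Nt) / ((cs[w] + 1) / (Ns + 1)) for w in ct}   # +1 smoothing assumed

def sample(docs, entropies, rs_top_adjs, n, frac_rs=0.3):
    """Pick the top (1 - frac_rs) * n documents by entropy, then fill up from the RS pool."""
    by_entropy = sorted(range(len(docs)), key=lambda i: -entropies[i])
    k_ent = int(n * (1 - frac_rs))
    chosen = list(by_entropy[:k_ent])
    rs_pool = [i for i in range(len(docs))
               if i not in chosen and any(a in docs[i].lower() for a in rs_top_adjs)]
    return chosen + rs_pool[: n - k_ent]

docs = ["The lobster bisque was delicious", "friendly staff and romantic vibe",
        "battery life is terrible", "the screen flickers sometimes"]
ents = [doc_entropy([[0.4, 0.35, 0.25]]), 0.2, 0.9, 1.0]
print(sample(docs, ents, rs_top_adjs={"delicious", "friendly", "romantic"}, n=3))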
2
Before diving into the details of our proposed model, we begin by introducing the basic mathematical notation and terminology for the task of CER. The goal of CER is to infer the emotion label (happy, sad, neutral, angry, excited, and frustrated) for each utterance in a conversation. Given a CER dataset D, an example of the dataset is denoted as {U_i, s_i, y_i}_{i=1}^{n}, where U = {U_1, U_2, ..., U_n} represents the n utterances of the conversation, with each utterance U_i containing l_i words. Assuming there are M participants, the speaker corresponding to the i-th utterance U_i is represented as s_i ∈ {0, ..., M − 1}, and y_i ∈ {0, ..., N − 1} indicates the emotion label for utterance U_i. An overview of our proposed SumAggGIN model is shown in Figure 2, which consists of an Encoding module, a Summarization Graph, an Aggregation Graph and a Classification module.

For words in utterances, we convert them into 300-dimensional pretrained 840B GloVe word embeddings (Pennington et al., 2014). A TextCNN (Kim, 2014) is applied to capture n-gram information from each utterance U_i. We use convolution filters of sizes 3, 4 and 5, with each filter containing 50 feature maps. The outputs of the convolutions are further processed by max-pooling and ReLU activation (Nair and Hinton, 2010). We concatenate these activation results and feed them to a 150-dimensional fully connected layer, whose outputs are denoted as {u_i}_{i=1}^{n}. Subsequently, based on the local utterance features from the TextCNN, we apply a bidirectional LSTM (BiLSTM) to capture sequential contextual information. We denote v_i ∈ R^d as the sequential context-aware utterance representation for the i-th utterance, where d is the hidden size of the BiLSTM.

In this section, a heterogeneous Summarization Graph is constructed to recognize topic-related emotional phrases so as to explicitly model topic-related emotional interactions throughout the entire conversation. Phrases are extracted from utterances by TextRank (Mihalcea and Tarau, 2004). Through the exchange of information between utterances and phrases on the Summarization Graph, the utterance representation can be enhanced with summarized phrase-level semantic connections from a global perspective. We denote our Summarization Graph as G_sum = (V, E_sum), where V = V_o ∪ V_u represents a node set composed of phrase nodes and utterance nodes, and E_sum stands for the edges between nodes. V_o = {o_1, ..., o_m} and V_u = {U_1, ..., U_n} represent the m key phrases of the utterances and the n utterances in the conversation, respectively. e_ij ≥ 0 denotes the weight of the edge between the i-th phrase and the j-th utterance. In particular, e_ij = 0 indicates that the i-th phrase does not appear in the j-th utterance. Self-loops are included to ensure that the original features of each node can be preserved in the course of message propagation. For phrase nodes in V_o, their feature vectors are initialized by averaging the pretrained GloVe embeddings of the constituting words. As for utterance nodes U_i ∈ V_u, they are initialized with the corresponding sequential context-aware utterance representation v_i obtained from the BiLSTM.
Therefore, the feature matrices of phrase and utterance nodes are denoted as X o ∈ R m×dw and X u ∈ R n×2d respectively, where d w is the dimension of the word embedding. In our experiments, we have d w = 2d.To infuse relation importance about the edge between a phrase node and an utterance node, we use TF-IDF weights of the phrase in the utterance as suggested by Yao et al. (2019) . Term frequency is the number of times phrase o i occurs in an utterance U j , while inverse document frequency represents the logarithmically scaled inverse fraction of the number of utterances containing the phrase o i .We apply a variant of Graph Attention Network (GAT) (Veličković et al., 2018) to propagate information among nodes in the Summarization Graph. The hidden states of input nodes are denoted as g i ∈ R 2d×1 , i ∈ {1, ..., (m + n)}. A Multi-Layer Perceptron (MLP) is applied to compute attention coefficients between a node i and its neighbor j (j ∈ N i ) at layer t:EQUATIONwhere W t a and W t b are trainable parameters at the t-th layer, ⊕ denotes the concatenation operation, and N i denotes the set of neighbors of node i. Subsequently, the coefficients are normalized using the softmax function:α (t) ij = softmax j (p (t) ij ) = exp(p (t) ij ) k∈N i exp(p (t) ik ).( 2)Finally, we utilize the normalized attention coefficients to compute a linear combination of the neighbouring features. The updated feature vector for node i at the t-th layer is formulated as:g (t) i = j∈N i α (t) ij g (t−1) j .(3)Although utterance nodes are not directly connected, stacking 2 layers of GAT enables the indirect exchange of information between pairs of utterances through co-appearing phrases. Inspired by Transformer (Vaswani et al., 2017) , we further apply a position-wise feed-forward (FFN) layer after each GAT layer. The summarized representation for the i-th utterance after propagation is denoted as g i = g(2)i .The utterance representation obtained from Summarization Graph mainly captures global topic-related emotional interactions throughout the whole conversation. To further explore short-term emotional effects between neighbouring utterances, we construct an Aggregation Graph for modeling speaker-related context dependencies from a local perspective.An Aggregation Graph can be denoted as G agg = (V u , E agg , R), where V u represents the node set containing utterance nodes solely, E agg stands for edges between nodes, and R denotes the type of the edges. Each utterance node U i ∈ V u is initialized with the corresponding summarized utterance representation g i obtained from the Summarization Graph.To explicitly model speaker dependencies between utterances, we divide edges in E agg into 2 categories, i.e. edges towards the same speaker and edges towards a different speaker. To capture emotional patterns only from neighbouring utterances, we construct the edges by keeping a context window size of W . As a result, each utterance node U i only links to W utterances in the past (U i−W , U i−W +1 , ..., U i−1 ) and W utterances in the future (U i+1 , U i+2 , ..., U i+W ). The edge weights z ij are obtained from the cosine similarity between the feature vectors h i and h j of the two utterance nodes U i and U j :EQUATIONTo ensure that for each utterance node, the incoming set of edges receives a total weight contribution of 1. 
The edge weights are further normalized by the softmax function: EQUATION. To pass messages between neighbouring utterance nodes, graph convolution is performed on the basis of the Aggregation Graph. A message-passing strategy concerning different types of edges, following (Schlichtkrull et al., 2018), is adopted: EQUATION, where N_i^r represents the set of neighbors of node i under edge type r ∈ R, and β_{i,j} and β_{i,i} are the normalized edge weights. The normalization constant c_{i,r} is set to |N_i^r|, the number of neighbouring nodes of node i under edge type r. The aggregated utterance representation h_i from the Aggregation Graph is fed into a fully-connected network to obtain the final prediction results for emotion classification: EQUATION, where W_c and b_c are trainable parameters of the classifier.
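For concreteness, here is a small NumPy sketch of the attention-based propagation on the Summarization Graph (Eqs. 1-3 above): an MLP-style function scores each edge, the scores are softmax-normalized over a node's neighbours, and neighbour features are aggregated. The scalar reduction of the scoring MLP, the activation, and the shapes are our own simplifications.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def gat_layer(G, adj, Wa, Wb):
    """One attention layer over node states G (num_nodes x dim).
    adj[i, j] > 0 marks an edge (self-loops included); Wa, Wb stand in for
    the trainable matrices of the scoring MLP in Eq. (1)."""
    n, _ = G.shape
    out = np.zeros_like(G)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        # p_ij: score the (g_i, g_j) pair, reduced here to a scalar
        scores = np.array([np.tanh(Wa @ G[i] + Wb @ G[j]).sum() for j in nbrs])
        alpha = softmax(scores)                     # Eq. (2)
        out[i] = (alpha[:, None] * G[nbrs]).sum(0)  # Eq. (3)
    return out
```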
2
A large quantity of training data is necessary for machine learning tasks, but labeled data are not easy to obtain. Snorkel (Ratner et al., 2017b) provides a solution to this bottleneck by using labeling functions to generate a large amount of labeled data. As stated in (Ratner et al., 2017b), based on both theory and experiments, Snorkel has proven effective for training high-accuracy machine learning models, even from potentially lower-accuracy inputs. It has recently been applied to high-level NLP tasks such as discourse parsing (Badene et al., 2019). Weakly supervised tools like Snorkel allow extensive data to be labeled quickly with minimal but expert manual involvement. Using Snorkel amounts to writing labeling functions (LFs) that produce useful labeled training data. A labeling function is a rule that assigns a label to some subset of the training data set. Snorkel then trains a model that combines all the defined rules, estimating their accuracies along with the overlaps and conflicts among different labeling functions.

The workflow of Snorkel differs from traditional machine learning approaches; it is based on a data programming paradigm. Briefly, it is composed of two phases: the first produces estimated labels using a generative model, and the second uses these labels to train the ultimate model, a discriminative model. Within this design philosophy, the system design of Snorkel can be divided into three phases: first, pre-processing the data for later use, such as word segmentation and POS tagging; second, writing labeling functions (these do not need to be entirely accurate or exhaustive and can be correlated, since Snorkel automatically estimates their accuracies and correlations in a provably consistent way, as introduced in (Ratner et al., 2016)); third, after the evaluation and calibration of the LFs, deciding on an optimal set of LFs to produce a set of labels to train a model.
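As an illustration of what writing labeling functions looks like in practice, here is a minimal sketch using the Snorkel 0.9-style Python API (labeling_function, PandasLFApplier, LabelModel); the rule bodies, label names, and toy data are hypothetical and not taken from the paper:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

@labeling_function()
def lf_contains_great(x):
    return POSITIVE if "great" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_contains_awful(x):
    return NEGATIVE if "awful" in x.text.lower() else ABSTAIN

df_train = pd.DataFrame({"text": ["great service", "awful wait", "ok I guess"]})
applier = PandasLFApplier(lfs=[lf_contains_great, lf_contains_awful])
L_train = applier.apply(df_train)   # label matrix: one column per LF

# Generative step: combine noisy LF votes into probabilistic labels,
# which can then be used to train any discriminative end model.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=100)
probs = label_model.predict_proba(L_train)
```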
2
Our approach is inspired by the fact that many languages are primarily oral, with writing systems that represent spoken sounds. We convert both text and audio into single common representation of sounds, or "phones," represented using the International Phonetic Alphabet, or IPA. Then, we perform both language model pre-training and the training of models for downstream tasks in this phonetic representation. Well-tested architectures, such as BERT-style transformer models (Vaswani et al., 2017) , are thus flexibly extended to either speech or audio data.Regarding the conversion process of text and audio data, we leverage recent advances to transliterate this data into corresponding sounds represented by IPA phonetic symbols. This transliteration is possible for speech/audio data using tools such as the Allosaurus universal phone recognizer, which can be applied without additional training to any language , though it can benefit from fine-tuning (Siminyu et al., 2021) . To convert text data to phonemes we can use tools such as the Epitran grapheme-to-phoneme converter , which is specifically designed to provide precise phonetic transliterations in low-resource scenarios. Fig. 1 shows how downstream models for certain NLP tasks, like Named Entity Recognition (NER), are performed in the phonetic representation. Labeled data sets for NLP tasks need to be mapped or encoded into the phonetic representation to train downstream models. However, once this mapping is accomplished, models trained in the phonetic representation can perform tasks with audio input that are typically restricted to processing text input.One complication arising from direct speech-tophone transcription is the loss of word boundaries in the transcription. This is expected, as natural speech does not put any pauses between the words in an utterance. This does, however, result in mixing text data sets containing clear word boundaries with speech data sets containing no clear word boundaries.Borrowing from techniques used on languages that do not indicate word boundaries by the use of whitespace, we address the problem by removing all whitespace from our data sets after phone transliteration. We train character-based language models over the resulting data. Character-based models such as CharFormer (Tay et al., 2021) or ByT5 (Xue et al., 2021) have shown promise in recent years for language modeling, even if this approach is known to have some trade offs related to shorter context windows.The transliteration of text and audio data into phonetic representations presents several other challenges related to potential loss of information or injection of noise:Figure 1: Our approach: input from either modality can be converted by phone recognition, e.g. Epitran for text, Allosaurus for speech. Then we test on several downstream tasks which we designate NER1, NER2, NER3.1. Loss of suprasegmental information: In some languages, meaning may be encoded through tones, or pitch changes across sounds (aka across segments, or "suprasegmental"). Particularly for tonal languages such as Mandarin Chinese [cmn], this loss can represent a significant informational loss particularly for homophones with different tones, as seen in (Amrhein and Sennrich, 2020). While IPA symbols can represent these intricacies, it adds complexity 2. 
Phone/phoneme differences: As noted in , speech sounds which are physically different (different phones) may be perceived as the same (one phoneme) by speakers of one language, while the same sounds could be distinguished by speakers of another language. For example, the French words bouche and bûche contain phones (/u/ vs. /y/) which may sound "the same" to English speakers, but are semantically distinct for French speakers. In other words, in English, both phones map to the same phoneme perceptually. As the Allosaurus phone recognizer recognizes the actual phones/sounds, not their perceived phonemes, it would transcribe these two phones to different representations even for English speech. This can be mitigated to an extent by customizing the output of Allosaurus on a per-language basis (see Sec. 4.3). 3. Simple errors in phone recognition: As noted in (Siminyu et al., 2021), even the best-trained Allosaurus models, fine-tuned on language-specific data, have a non-trivial Phone Error Rate (PER). An important question, therefore, is whether these added sources of noise and information loss are outweighed by the potential benefits in terms of flexibility. Does working in a phonetic representation cause a prohibitive amount of information loss? We constructed our experiments and data sets in order to answer this question.
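A rough sketch of the text-side preprocessing described above: transliterate to IPA, drop word boundaries, and project token-level task labels onto the phone string. It assumes the documented epitran interface (Epitran(lang_code).transliterate) with the relevant language mode installed, and a per-phone labeling scheme of our own choosing; the audio side would instead pass through a phone recognizer such as Allosaurus.

```python
import epitran

def text_to_phones(sentence, lang="eng-Latn"):
    """IPA-transliterate a sentence and remove whitespace, since speech
    transcripts carry no word boundaries either."""
    epi = epitran.Epitran(lang)
    return "".join(epi.transliterate(sentence).split())

def project_labels(tokens, labels, lang="eng-Latn"):
    """Naively give every phone character of a word that word's NER label,
    so a character-level model can be trained on the phone string."""
    epi = epitran.Epitran(lang)
    phone_str, phone_labels = [], []
    for tok, lab in zip(tokens, labels):
        phones = "".join(epi.transliterate(tok).split())
        phone_str.append(phones)
        phone_labels.extend([lab] * len(phones))
    return "".join(phone_str), phone_labels
```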
2
As shown in Figure 2 , our approach aims to effectively encode the textual description and formulas, and fuse these two kinds of information for understanding math problems. In what follows, we first present the base models for encoding math problems, and then introduce the devised syntax-aware memory network and continual pre-training tasks.Encoding Math Text. We use BERT (Devlin et al., 2019) as the PLM to encode the math text, i.e., the textual description d.Given d = {t 1 , t 2 , • • • , t L }of a math problem, the PLM first projects these tokens into corresponding embeddings. Then, a stack of Transformer layers will gradually encode the embeddings to generate the l-th layer representations {h(l) 1 , h (l) 2 , • • • , h (l)L }. Since the textual description d may contain specific math symbols that were not seen during pre-training, we add them into the vocabulary of the PLM and randomly initialize their token embeddings. These new embeddings will be learned during continual pre-training.Encoding Math Syntax Graph. We incorporate a graph attention network (GAT) (Veličković et al., 2018) to encode the math syntax graph, which is composed of an embedding layer and a stack of graph attention layers. Given a math syntax graph G with N nodes, the GAT first maps the nodes into a set of embeddings{n 1 , n 2 , • • • , n N }.Then each graph attention layer aggregates the neighbors' hidden states using multi-head attentions to update the node representations as:EQUATION2 https://stanfordnlp.github.io/stanza/ where n(l+1) iis the representation of the i-th node in the l + 1 layer, ∥ denotes the concatenation operation, σ denotes the sigmoid function, K is the number of attention heads, N i is the set of neighbors of node i in the graph, W(l)k is a learnable matrix, and α k ij is the attention value of the node i to its neighbor j in attention head k.To improve the semantic interaction and fusion of the representations of math text and the syntax graph, we add k syntax-aware memory networks between the last k layers of PLM and GAT. In the memory network, node embeddings (from the math syntax graph) with dependency relations are considered as slot entries, and we design multi-view read/write operations to allow token embeddings (e.g., explanation tokens or hints) to attend to highly related node embeddings (e.g., math symbols).Memory Initialization. We construct the memory network based on the dependency triplets and node representations of the math syntax graph. Given the dependency triplets {(h, r, t)}, we treat the head and relation (h, r) as the key and the tail t as the value, to construct a syntax-aware key-value memory. The representations of the heads and tails are the corresponding node representations from GAT, while the relation representations are randomly initialized and will be optimized by continual pre-training. Finally, we concatenate the representations of heads and relations to compose the representation matrix of Keys asK (l) = {[n (l) h 1 ; r 1 ], [n (l) h 2 ; r 2 ], • • • , [n (l) h N ; r N ]},and obtain the representation matrix of Values asV (l) = {n (l) t 1 , n (l) t 2 , • • • , n (l) t N }.Multi-view Read Operation. We read important semantics within the syntax-aware memory to update the token representations from PLM. Since a token can be related to several nodes within the math syntax graph, we design a multi-view read operation to capture these complex semantic associations. 
Concretely, via different bilinear transformation matrices{W S 1 , W S 2 , • • • , W S n }, we first generate multiple similarity matrices {S 1 , S 2 , • • • , S n} between tokens and keys (head and relation) within the memory, and then aggregate the values (tail) to update the token representations. Given the token representations from the l-th layer of PLM Figure 2 : Illustration of our COMUS. We encode the textual description and the math syntax graph using PLM and GAT, respectively, and insert the syntax-aware memory networks in the last k layers to fuse their representations. In the syntax-aware memory network, we utilize the token representations and the node representations as the queries and values, respectively, and implement the read and write operations to update them.H (l) = {h (l) 1 , h (l) 2 , • • • , h (l) L },the similarity matrix S i is computed asS i = H (l) W S i K (l) ⊤ (2)where W S i is a learnable matrix, and an entry S i [j, k] denotes the similarity between the j-th token and the k-th key in the i-th view. Based on these similarity matrices, we update the token representations by aggregating the value representations asĤEQUATIONwhere W O is a learnable matrix and α i is the attention score distribution along the key dimension.In this way, we can capture the multi-view correlations between tokens and nodes, and the token representations can be enriched by the representations of multiple semantic-related nodes. After that, the updated token representationsĤ (l) are fed into the next layer of PLM, where the Transformer layer can capture the interaction among token representations to fully utilize the fused knowledge from the syntax graph.Multi-View Write Operation. After updating the token representations, we update the representations of nodes from GAT via memory writing. We still utilize the multi-view similarity matrices{S 1 , S 2 , • • • , S h }.Concretely, we compute the attention score distribution β using softmax function along the token dimension of the similarity matrices, and then aggregate the token representations asEQUATIONwhere W R is a learnable matrix. Based on the aggregated token representations, we incorporate a gate to update the representations of the values asEQUATIONwhere W A and W B are learnable matrices. The updated node representationsV (l) are also fed into the next layer of GAT, where the graph attention mechanism can further utilize the fused knowledge from the text to aggregate more effective node representations.Continual pre-training aims to further enhance and fuse the math text and math syntax graph. To achieve it, we utilize the masked language model and dependency triplet completion tasks to improve the understanding of math text and math syntax graph, respectively, and the text-graph contrastive learning task to align and fuse their representations.Masked Language Model (MLM). Since the math text contains a number of special math symbols, we utilize the MLM task to learn it for better understanding the math text. Concretely, we randomly select 15% tokens of the input sequence to be masked. Of the selected tokens, 80% are replaced with a special token [MASK] , 10% remain unchanged, and 10% are replaced by a token randomly selected from the vocabulary. The objective is to predict the original tokens of the masked ones as:EQUATIONwhere V mask is the set of masked tokens, and p(t i ) denotes the probability of predicting the original token in the position of t i .Dependency Triplet Completion (DTC). 
In the math syntax graph, the correlation within the dependency triplet (h, r, t) is essential to understand the complex math logic of the math problem. Thus, inspired by TransE (Bordes et al., 2013) , we design the dependency triplet completion task to capture the semantic correlation within a triplet. Specifically, for each triplet (h, r, t) within the math syntax graph, we minimize the DTC loss byL DT C = max γ+d(n h +r, n t )−d(n h +r ′ , n t ), 0(10) where γ > 0 is a margin hyper-parameter, d(•) is the euclidean distance, and r ′ is the randomly sampled negative relation embedding. In this way, the head and relation embeddings can learn to match the semantics of the tail embeddings, which enhances the node and relation representations by capturing the graph structural information.Text-Graph Contrastive Learning (TGCL). After enhancing the representations of the math text and math syntax graph via MLM and DTC tasks respectively, we further align and unify the two types of representations. The basic idea is to adopt contrastive learning to pull the representations of the text and graph of the same math problem together, and push apart the negative examples. Concretely, given a text-graph pair of a math problem(d i , G i ),we utilize the representation of the [CLS] token h d i as the sentence representation of d i , and the mean pooling of the node representations n G i as the graph representation of G i . Then, we adopt the cross-entropy contrastive learning objective with in-batch negatives to align the two representationsEQUATIONwhere f (•) is a dot product function and τ denotes a temperature parameter. In this way, the representations of the text and graph can be aligned, and the data representations from one side will be further enhanced by another side.Overview. Our approach focuses on continually pre-training PLMs to improve the understanding of math problems. Given the math text and math syntax graph of the math problem, we adopt PLM and GAT to encode them, respectively, and utilize syntax-aware memory networks in the last k layers to fuse the representations of the text and graph.In each of the last k layers, we first initialize the queries and values of the memory network using the representations of tokens and nodes, respectively, then perform the read and write operations to update them using Eq. 3 and Eq. 8. After that, we feed the updated representations into the next layers of PLM and GAT to consolidate the fused knowledge from each other. Based on such an architecture, we adopt MLM, DTC and TGCL tasks to continually pre-train the model parameters using Eq. 9, Eq. 10 and Eq. 11. Finally, for downstream tasks, we fine-tune our model with specific data and objectives, and concatenate the representations of text h d and graph n G from the last layer for prediction.Discussion. The key of our approach is to deeply fuse the math text and formula information of the math problem via syntax-aware memory networks and continual pre-training tasks. Recently, Math-BERT (Peng et al., 2021) is proposed to continually pre-train BERT in math domain corpus, which applies the self-attention mechanism for the feature interaction of formulas and texts, and learns similar tasks as BERT. As a comparison, we construct the math syntax graph to enrich the formula information and design the syntax-aware memory network to fuse the text and graph information. 
Via the syntax-aware memory network, the token from math text can trace its related nodes along the relations in the math syntax graph, which can capture the fine-grained correlations between tokens and nodes. Besides, we model the math syntax graph via GAT, and devise the DTC task to improve the associations within triplets from the graph, and the TGCL task to align the representations of the graph and text. In this way, we can better capture graph structural information and fuse it with textual information. It is beneficial for understanding logical semantics from formulas of math problems.

Dataset statistics:
Task    Train     Dev      Test
KPC     8,721     991      1,985
QRC     10,000    2,000    4,000
QAM     14,000    2,000    4,000
SQR     250,000   11,463   56,349
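A minimal PyTorch-style sketch of the multi-view memory read (Eqs. 2-3 above): each view scores tokens against the (head, relation) keys with its own bilinear matrix, attends over the keys, and aggregates the tail values. The residual connection, dimensions and initialization are our assumptions, since the exact update in Eq. (3) is not fully spelled out in the text.

```python
import torch
import torch.nn as nn

class MultiViewMemoryRead(nn.Module):
    def __init__(self, d_tok, d_key, d_val, n_views):
        super().__init__()
        # One bilinear scoring matrix W_S_i per view (Eq. 2).
        self.W_S = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(d_tok, d_key)) for _ in range(n_views)])
        self.W_O = nn.Linear(n_views * d_val, d_tok)

    def forward(self, H, K, V):
        # H: (L, d_tok) tokens; K: (N, d_key) head+relation keys; V: (N, d_val) tails.
        reads = []
        for W in self.W_S:
            S = H @ W @ K.t()           # similarity matrix S_i (Eq. 2)
            alpha = S.softmax(dim=-1)   # attention along the key dimension
            reads.append(alpha @ V)     # aggregate tail representations
        return H + self.W_O(torch.cat(reads, dim=-1))   # updated tokens (cf. Eq. 3)
```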
2
The distress call recognition is to be performed in the context of a smart home which is equipped with e-lio 1, a dedicated system for connecting elderly people with their relatives, as shown in Figure 1 (microphone position in the smart home). e-lio is equipped with one microphone for video conferencing. The typical setting and the distress situations were determined after a sociological study conducted by the GRePS laboratory [17] in which a representative set of seniors were included. From this sociological study, it appears that this equipment is set on a table in the living room in front of the sofa. In this way, an alert could be given if the person falls because of the carpet or cannot stand up from the sofa. This paper presents only the audio part of the study; for more details about the global audio and video system, the reader is referred to [18].

The audio processing was performed by the software CIRDOX [19], whose architecture is shown in Figure 2. The microphone stream is continuously acquired and sound events are detected on the fly by using a wavelet decomposition and an adaptive thresholding strategy [20]. Sound events are then classified as noise or speech and, in the latter case, sent to an ASR system. The result of the ASR is then sent to the last stage, which is in charge of recognizing distress calls. In this paper, we focus on the ASR system and present different strategies to improve the recognition rate of the calls. The remainder of this section presents the methods employed at the acoustic and decoding levels.

The Kaldi speech recognition toolkit [21] was chosen as the ASR system. Kaldi is an open-source state-of-the-art ASR system with a large number of tools and strong support from the community. In the experiments, the acoustic models were context-dependent classical three-state left-right HMMs. Acoustic features were based on Mel-frequency cepstral coefficients: 13 MFCC coefficients were first extracted and then expanded with delta and double-delta features and energy (40 features). Acoustic models were composed of 11,000 context-dependent states and 150,000 Gaussians. The state tying is performed using a decision tree based on a tree-clustering of the phones. In addition, off-line fMLLR linear transformation acoustic adaptation was performed. The acoustic models were trained on 500 hours of transcribed French speech composed of the ESTER 1&2 (broadcast news and conversational speech recorded on the radio) and REPERE (TV news and talk-shows) challenges, as well as 7 hours of transcribed French speech from the SH corpus (SWEET-HOME) [22], which consists of records of 60 speakers interacting in the smart home, and 28 minutes of the Voix-détresse corpus [23], which is made of records of speakers eliciting a distress emotion.

The GMM and Subspace GMM (SGMM) both model the emission probability of each HMM state with a Gaussian mixture model, but in the SGMM approach, the Gaussian means and the mixture component weights are generated from the phonetic and speaker subspaces along with a set of weight projections. The SGMM model [24] is described by the following equations:

p(x | j) = Σ_{m=1}^{M_j} c_{jm} Σ_{i=1}^{I} w_{jmi} N(x; µ_{jmi}, Σ_i),
µ_{jmi} = M_i v_{jm},
w_{jmi} = exp(w_i^⊤ v_{jm}) / Σ_{i'=1}^{I} exp(w_{i'}^⊤ v_{jm}),

where x denotes the feature vector, j ∈ {1..J} is the HMM state, i is the Gaussian index, m is the substate and c_{jm} is the substate weight.
Each state j is associated with a vector v_{jm} ∈ R^S (S is the phonetic subspace dimension) which derives the means µ_{jmi} and mixture weights w_{jmi}, and it has a shared number of Gaussians, I. The phonetic subspace M_i, weight projections w_i^⊤ and covariance matrices Σ_i, i.e., the globally shared parameters Φ_i = {M_i, w_i^⊤, Σ_i}, are common across all states. These parameters can be shared and estimated over multiple recording conditions. A generic mixture of I Gaussians, denoted the Universal Background Model (UBM), models all the speech training data for the initialization of the SGMM.

Our experiments aim at obtaining SGMM shared parameters using the SWEET-HOME data (7h), Voix-détresse (28mn) and clean data (ESTER+REPERE, 500h). Regarding the GMM part, the three training data sets are simply merged into a single one. [24] showed that the model is also effective with large amounts of training data. Therefore, three UBMs were trained respectively on the SWEET-HOME data, Voix-détresse and the clean data. These three UBMs contained 1K Gaussians each and were merged into a single one mixed down to 1K Gaussians (the closest Gaussian pairs were merged [25]). The aim is to bias the acoustic model specifically towards the smart-home and expressive speech conditions.

The recognition of distress calls consists in computing the phonetic distance of a hypothesis to a list of predefined distress calls. Each ASR hypothesis H_i is phonetized, and every voice command T_j is aligned to H_i using the Levenshtein distance. The deletion, insertion and substitution costs were computed empirically, while the cumulative distance γ(i, j) between H_j and T_i is given by Equation 1:

γ(i, j) = d(T_i, H_j) + min{γ(i − 1, j − 1), γ(i − 1, j), γ(i, j − 1)}  (1)

The decision to select or not a detected sentence is then taken according to a detection threshold on the aligned symbol (phoneme) score of each identified call. This approach accommodates some recognition errors such as word endings or light variations. Moreover, in many cases, a mis-decoded word is phonetically close to the correct one (due to the close pronunciation). From this, the CER (Call Error Rate, i.e., distress call error rate) is defined as: EQUATION. This measure was chosen because of the content of the Cirdo-set corpus used in this study. Indeed, this corpus is made of sentences and interjections. All sentences are calls for help, without any other kind of sentences such as home automation orders or colloquial sentences, and therefore it is not possible to determine a false alarm rate in this framework.
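To make the detection criterion of Equation 1 above concrete, here is a small Python sketch of the cumulative phonetic alignment distance and the threshold decision; the cost values and the length normalization are placeholders, since the paper sets the deletion, insertion and substitution costs empirically.

```python
def cumulative_distance(hyp_phones, call_phones, sub=1.0, ins=1.0, dele=1.0):
    """Levenshtein-style cumulative distance gamma(i, j) between a
    phonetized ASR hypothesis and a predefined distress call (cf. Eq. 1)."""
    n, m = len(call_phones), len(hyp_phones)
    g = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        g[i][0] = g[i - 1][0] + dele
    for j in range(1, m + 1):
        g[0][j] = g[0][j - 1] + ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 0.0 if call_phones[i - 1] == hyp_phones[j - 1] else sub
            g[i][j] = d + min(g[i - 1][j - 1], g[i - 1][j], g[i][j - 1])
    return g[n][m]

def detect_call(hyp_phones, calls, threshold):
    """Flag a distress call if the best length-normalized alignment score
    falls below a detection threshold."""
    scores = {c: cumulative_distance(hyp_phones, p) / max(len(p), 1)
              for c, p in calls.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] <= threshold else None
```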
2
The basic concept behind our negative keyword generation system is to create context vectors for all senses of an ambiguous key phrases, then to identify components of the context vectors which correlate highly with negative senses and poorly with the positive sense. This is not complete WSD, since the concern is only explicitly identifying one sense, while all other senses are grouped together as negative senses.The basic steps of the algorithm are shown in Figure 1 , while sections 3.1-3.5 describe each step in more detail.The method can be applied to a set of positive key phrases or to a single key phrase; most steps only consider a single key phrase at a time, but step 4 is intended to improve processing of sets of key phrases. When processing a set of key phrases steps 1-3 are executed for all key phrases, and then step 4 uses all the resulting information.Given a positive key phrase, we find all possible senses (Wikipedia articles). To do this, we find all the links containing the key phrase. Then from those links, we collect all the final destination pages, also accounting for redirected pages. The set of destination pages for the key phrase is considered the set of possible senses; each sense includes a frequency metric, that is the number of links to the page that used the given key phrase. To optimize this step, we created an indexed database table of all the links in Wikipedia. We recommend flattening this table by storing not just the link destinations, but, if the destination page is a redirect, the redirected destination page.Consider the keyword "Corolla"; imagine that the word "Corolla" appears in links on pages A, B, C, D. The links on pages A, B and C go to the Toyota Corolla article, while the links on page D go to flower petals. Thus for Corolla the possible senses are Toyota Corolla and flower petal, with frequencies of 3 and 1 respectively.Our context vectors are generated from all unigrams (though larger n-grams can be considered) in all paragraphs containing links to a possible sense. In other words, for each possible sense we use the database table of all the links to find all the pages referring to a particular sense. We then tokenize each of the paragraphs containing a link to the sense being considered. All the words are recorded and counted as a dimension in the vector. Continuing our previous example, imagine a Toyota Corolla article also has references on pages X and Y (perhaps the link text is "Toyota small car"); while the flower petal article is referred to on page Z (with link text "flower petals"). We would generate a context vector for Toyota Corolla from pages A, B, C, X, and Y; and a context vector for flower petal from pages D and Z. Generating the context vector simply involves counting the words, in the paragraphs where the links appears.There are many ways that the intended sense can be assigned, depending on the resources available. WSD could be applied to an example context if one is available; in our case examples are likely the ads from the advertising campaign.A simple WSD method that can be used when no examples are available is selecting the most frequent sense of the key phrase; this can be deter-mined using the frequency information from step 1. We found that this method works quite well when multiple key phrases are being processed because step 4 will compensate for a few mislabeled senses. 
When examples are available, another simple WSD method is to compare the context vectors of a sense to the example contexts (in our case advertisements) and choose the sense with the most similar vector.This step is only relevant if multiple key phrases are being processed. This step requires all intended senses for all key phrases. We collect all the intended senses of all key phrases into what we call the broad scope intended sense list. There are a number of cases where a key phrase may have more than one intended sense, using this method we collect all the intended senses and avoid blocking secondary intended senses. False positive senses will generate unwanted impressions, which are undesirable, but false negative senses are more problematic because an ad may not be shown to the intended audience. There are often multiple positive key phrases assigned to any single sense and thus, by collecting all the intended senses, we reduce the risk of assigning a false negative sense. We observed that, even if a single key phrase is mislabeled (in our case due to choosing the most frequent sense), the correct label was consistently identified by other keywords.Furthermore, the collection of these senses could be used with clustering or other techniques that might reveal additional senses that should have been considered. These additional senses may even provide new positive key phrases.Consider setting up an advertising campaign for Toyota Vehicles. A small selection of key phrases that might be used in this campaign is: "Corolla", "Sienna", "Toyota minivan". If each key phrase was assigned the following senses, respectively, then the broad scope intended sense list would be: "Toyota Corolla", "Sienna Miller", "Toyota Sienna". "Sienna Miller" (an actress) is in fact a mislabeled sense, but due to other keywords, the correct sense has been included in the broad scope intended sense list, thus avoiding a false negative.We divide all senses of a positive key phrase into two sets of senses: the positive set (anything in the broad scope sense list), and the negative set (everything else). We evaluate all components of the context vectors from all senses: first we evaluate the components (unigram, bigram, etc.) using tf-idf (Salton, 1989) , where tf is simply the frequency from Step 1 and idf has been precalculated from the Wikipedia corpus. We then select the N highest valued (tf-idf) components above a minimum threshold, from the negative set, and then confirm that each component either never appears as a component in the positive set, or that the positive set tf-idf is below a choosen threshold.
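A small sketch of the final selection step described above: score the context-vector components of the negative senses with tf-idf, keep the top N above a minimum threshold, and drop any component that is also salient for the positive senses. The data structures and threshold values here are our own placeholders.

```python
def negative_keywords(neg_vectors, pos_vectors, idf, n_top=50,
                      min_tfidf=1.0, max_pos_tfidf=0.1):
    """neg_vectors / pos_vectors: dicts mapping term -> frequency, built
    from the paragraphs around links to negative / positive senses
    (Steps 1-2); idf: precomputed inverse document frequencies."""
    neg_scores = {t: f * idf.get(t, 0.0) for t, f in neg_vectors.items()}
    pos_scores = {t: f * idf.get(t, 0.0) for t, f in pos_vectors.items()}
    candidates = sorted((t for t, s in neg_scores.items() if s >= min_tfidf),
                        key=neg_scores.get, reverse=True)[:n_top]
    # Keep only terms that never (or barely) appear for the positive senses.
    return [t for t in candidates if pos_scores.get(t, 0.0) <= max_pos_tfidf]
```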
2
To run a comprehensive evaluation of paraphrase techniques, we create many paraphrases of a common data set using multiple methods, then evaluate using human direct assessment as well as automatic diversity measurements.Input data was sampled from two sources: Reddit provides volumes of casual online conversations; the Enron email corpus represents communication in the professional world. 2 Both are noisier than usual NMT training data; traditionally, such noise has been challenging for NMT systems (Michel and Neubig, 2018) and should provide a lowerbound on their performance. It would definitely be valuable, albeit expensive, to rerun our experiments on a cleaner data source. As an initial filtering step, we ran automatic grammar and spell-checking, in order to select sentences that exhibit some disfluency or clear error. Additionally, we asked crowd workers to discard sentences that contain any personally identifiable information, URLs, code, XML, Markdown, and non-English sentences. The crowd workers were also encouraged to select noisy sentences containing slang, run-ons, contractions, and other behavior observed in informal communications.Expert human monolingual paraphrase. We hired trained linguists (who are native speakers of English) to provide paraphrases of the given source sentences, targeting highest quality rewrites. These linguists were also encouraged to fix any misspellings, grammatical errors, or disfluencies.Crowd-worker monolingual paraphrase. As a less expensive and more realistic setting, we asked English native speaking crowd workers who passed a qualification test to perform the same task.Human round-trip translation. For the first set of translation-based paraphrases, we employed human translators who translated the source text from English into some pivot language and back again. The translations were provided by a human translation service, potentially using multiple different translators (though the exact number was not visible to us). In our experiments we focused on a diverse set of pivot languages, namely: Arabic, Chinese, French, German, Japanese, and Russian.While French and German seem like a better choice for translation from and back into English, due to the close proximity of English as part of the Germanic language family and its shared vocabulary with French, we hypothesize that the use of more distant pivot languages may result in a greater diversity of the back translation output.We employed professional translators-native in the chosen target language-who were instructed to generate translations from scratch, without the use of any online translation tools. Translation from English into the pivot languages and back into English were conducted in separate phases, by different translators. Post-edited round-trip translation. Second, we created round-trip translation output based on human post-editing of neural machine translation output. Given the much lower post-editing cost, we hypothesize that results contain only minimal edits, mostly improving fluency but not necessarily fixing problems with translation adequacy.Neural machine translation. We kept the NMT output used to generate post-editing-based paraphrases, without further human modification. 
Given the unsupervised nature of machine translation, we hypothesize that the resulting output may be closer to the source syntactically (and hopefully more diverse lexically), especially for those source sentences which a human editor would consider incomplete or low quality.

Crowd-worker monolingual paraphrase grounded by translation. Finally, we also use a variant of the crowd-worker monolingual paraphrase technique where the crowd worker is grounded by a translation-based paraphrase output. The crowd worker is then asked to modify the translation-based paraphrase to make it more fluent than the source, and as adequate.

Intuitively, one assumes that human translation output should achieve both the highest adequacy and fluency scores, while post-editing should result in higher adequacy than raw neural machine translation output. Considering translation fluency scores, NMT output should be closer to both post-editing and human translation output, as neural MT models usually achieve high levels of fluency (Bojar et al., 2016; Castilho et al., 2017; Läubli et al., 2018). We hypothesize that translation helps to increase the diversity of the resulting back-translation output, irrespective of the specific method.

We measure four dimensions of quality: paraphrase adequacy (Par A), paraphrase fluency (Par F), paraphrase diversity (Par D), and translation adequacy (NMT A). Paraphrase evaluation campaigns referred to source and candidate text as "candidate A" and "B", respectively. Translation evaluation campaigns used "source" and "candidate text" instead.

Paraphrase adequacy. For adequacy, we ask annotators to assess the semantic similarity between source and candidate text, labeled as "candidate A" and "B", respectively. The annotation interface implements a slider widget to encode perceived similarity as a value x ∈ [0, 100]. Note that the exact value is hidden from the human and can only be guessed based on the positioning of the slider. Candidates are displayed in random order, preventing bias.

Paraphrase fluency. For fluency, we use a different priming question, implicitly asking the human annotators to assess fluency for candidate "B" relative to that of candidate "A". We collect scores x ∈ [−50, 50], with −50 encoding that candidate "A" is much more fluent than "B", while a value of 50 denotes the polar opposite. Intuitively, the middle value 0 encodes that the annotator could not determine a meaningful difference in fluency between both candidates. Note that this may mean two things: 1. the candidates are semantically equivalent and similarly fluent or non-fluent; or 2. the candidates have different semantics. We observe that annotators have a tendency to fall back to "neutral" x = 0 scoring whenever they are confused, e.g., when the semantic similarity of both candidates is considered low.

Translation Adequacy. We measure translation adequacy using our own implementation of source-based direct assessment. Annotators do not know that the source text shown might be translated content, and they do not know about the actual goal of using back-translated output for paraphrase generation. Except for the labels for source and candidate text, the priming question is identical to the one used for paraphrase adequacy evaluation. Notably, we have to employ bilingual annotators to collect these assessments. Scores for translation adequacy are again collected as x ∈ [0, 100].

Paraphrase diversity. Additionally, we measure the diversity of all paraphrases (both monolingual and translation-based) by computing the average number of token edits between source and candidate texts.
To focus our attention on meaningful changes as opposed to minor function-word rewrites, we normalize both source and candidate by lower-casing and excluding any punctuation and stop words using NLTK (Bird et al., 2009).

We adopt source-based direct assessment (src-DA) for human evaluation of adequacy and fluency. The original DA approach (Graham et al., 2013, 2014) is reference-based and thus needs to be adapted for use in our paraphrase assessment and translation scoring scenarios. In both cases, we can use the source sentence to guide annotators in their assessment. Of course, this makes translation evaluation more difficult, as we require bilingual annotators. Src-DA has previously been used, e.g., in (Cettolo et al., 2017; Bojar et al., 2018). Direct assessment initializes mental context for annotators by asking a priming question. The user interface shows two sentences:
- the source (src-DA, reference otherwise); and
- the candidate output.
Annotators read the priming question and both sentences and then assign a score x ∈ [0, 100] to the candidate shown. The interpretation of this score considers the context defined by the priming question, effectively allowing us to use the same annotation method to collect human assessments with respect to the different dimensions of quality as defined above. Our priming questions are shown in Table 3.

Some source segments from Reddit contain profanities, which may have affected the results reported in this paper. While a detailed investigation of such effects is outside the scope of this work, we want to highlight two potential issues which could be introduced by profanity in the source text: 1. Profanity may have caused additional monolingual rewrites (in an attempt to clean the resulting paraphrase), possibly inflating diversity scores; 2. Human translators may have performed similar cleanup, increasing the likelihood of back translations having a lower adequacy score.

Table 4: Results by paraphrasing method. Adequacy (Par A) and fluency (Par F) are human assessments of paraphrases; paraphrase diversity (Par D) is measured by the average string edit distance between source and paraphrase (higher means greater diversity); NMT A is a human assessment of translation quality.
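The diversity measure (Par D) as we read it: token-level edit distance after lower-casing and dropping punctuation and NLTK stop words, averaged over the test set. A small sketch follows; it assumes the NLTK stopwords corpus has been downloaded, and no length normalization is applied since none is specified above.

```python
import string
from nltk.corpus import stopwords   # requires the NLTK stopwords corpus

STOP = set(stopwords.words("english"))

def normalize(text):
    toks = text.lower().translate(str.maketrans("", "", string.punctuation)).split()
    return [t for t in toks if t not in STOP]

def token_edit_distance(src, cand):
    """Token-level Levenshtein distance between the normalized source and
    candidate; averaging this over a test set gives the Par_D score."""
    a, b = normalize(src), normalize(cand)
    dp = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, tb in enumerate(b, 1):
            cur = min(dp[j] + 1, dp[j - 1] + 1, prev + (ta != tb))
            prev, dp[j] = dp[j], cur
    return dp[len(b)]
```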
2
A common characteristic of CLWE methods that apply the orthogonality constraint is that they optimise using ℓ2 loss (see § 2). However, outliers have disproportionate influence under ℓ2 since the penalty increases quadratically, and this can be particularly problematic with noisy data since the solution can "shift" towards them (Rousseeuw and Leroy, 1987). The noise and outliers present in real-world word embeddings may affect the performance of ℓ2-loss-based CLWEs. The ℓ1-norm cost function is more robust than ℓ2 loss as it is less affected by outliers (Rousseeuw and Leroy, 1987). Therefore, we propose a refinement algorithm for improving the quality of CLWEs based on ℓ1 loss. This novel method, which we refer to as ℓ1 refinement, is generic and can be applied post hoc to improve the output of existing CLWE models. To our knowledge, the use of alternatives to ℓ2-loss-based optimisation has never been explored by the CLWE community.

To begin with, analogous to ℓ2 OPA (cf. Eq. (1)), ℓ1 OPA can be formally defined and rewritten as EQUATION (3), where tr(•) returns the matrix trace, sgn(•) is the signum function, and M ∈ O denotes that M is subject to the orthogonality constraint. Compared to ℓ2 OPA, which has a closed-form solution, solving Eq. (3) is much more challenging due to the discontinuity of sgn(•). This issue can be addressed by replacing sgn(•) with tanh(α(•)), a smoothing function parameterised by α, such that

argmin_{M ∈ O} tr[(AM − B)^⊤ tanh(α(AM − B))].  (4)

Larger values for α lead to closer approximations of sgn(•) but reduce the smoothing effect. This approach has been used in many applications, such as the activation function of long short-term memory networks (Hochreiter and Schmidhuber, 1997). However, in practice, we find that Eq. (4) remains unsolvable in our case with standard gradient-based frameworks, for two reasons. First, α has to be sufficiently large in order to achieve a good approximation of sgn(•). Otherwise, relatively small residuals will be down-weighted during fitting and the objective will become biased towards outliers, similar to ℓ2 loss. However, satisfying this requirement (i.e., a large α) will lead to the activation function tanh(α(•)) becoming easily saturated, resulting in an optimisation process that becomes trapped during the early stages. In other words, the optimisation can only reach an unsatisfactory local optimum. Second, the orthogonality constraint (i.e., M ∈ O) also makes the optimisation more problematic for these methods.

We address these challenges by adopting the approach proposed by Trendafilov (2003). This method explicitly encourages the solver to explore only the desired manifold O, thereby reducing the ℓ1 solver's search space and the difficulty of the optimisation problem. We begin by calculating the gradient ∇ of the objective in Eq. (4) through matrix differentiation: EQUATION (5), where Z = α(AM − B) and ⊙ is the Hadamard product. Next, to find the steepest descent direction while ensuring that any M produced is orthogonal, we project ∇ onto O, yielding

π_O(∇) := (1/2) M (M^⊤∇ − ∇^⊤M) + (I − MM^⊤)∇.  (6)

Here I is an identity matrix with the shape of M. With Eq. (6) defining the optimisation flow, our ℓ1 loss minimisation problem reduces to an integration problem, as EQUATION (7), where M_0 is a proper initial solution of Eq. (3) (e.g., the ℓ2-optimal mapping obtained via Eq. (2)). Empirically, unlike the aforementioned standard gradient-based methods, by following the established policy of Eq. (6), the optimisation process of Eq. (7) will not violate the orthogonality restriction or get trapped during the early stages. However, this ℓ1 OPA solver requires an extremely small step size to generate reliable solutions (Trendafilov, 2003), making it computationally expensive. Therefore, it is impractical to perform ℓ1 refinement in an iterative fashion like ℓ2 refinement without significant computational resources.

Previous work has demonstrated that applying ℓ1-loss-based algorithms from a good initial state can speed up the optimisation. For instance, Kwak (2008) found that feature spaces created by ℓ2 PCA were severely affected by noise. Replacing the cost function with ℓ1 loss significantly reduced this problem, but required expensive linear programming. To reduce the convergence time, Brooks and Jot (2013) exploited the first principal component from the ℓ2 solution as an initial guess. Similarly, when reconstructing corrupted pixel matrices, ℓ2-loss-based results are far from satisfactory; using ℓ1-norm estimators can improve the quality, but they are too slow to handle large-scale datasets (Aanaes et al., 2002). However, taking the ℓ2 optima as the starting point allowed less biased reconstructions to be learned in an acceptable time (De La Torre and Black, 2003).

Inspired by these works, we make use of ℓ1 refinement to carry out post-hoc enhancement of existing CLWEs. Our full pipeline is described in Algorithm 1 (ℓ1 refinement; input: CLWEs {X_LA, X_LB}; output: updated CLWEs {X_LA M, X_LB}). During the integration of Eq. (7), we record the ℓ1 loss per iteration and check whether either of the following two stopping criteria has been satisfied: (1) the updated ℓ1 loss exceeds that of the previous iteration; (2) the on-the-fly M has non-negligibly departed from the orthogonal manifold, which can be indicated by the maximum value of the disparity matrix as EQUATION, where the threshold is a sufficiently small value. The resulting M can be used to adjust the word vectors of L_A and output refined CLWEs. A significant advantage of our algorithm is its generality: it is fully independent of the method used for creating the original CLWEs and can therefore be used to enhance a wide range of models, in both supervised and unsupervised settings.
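A NumPy sketch of one run of the ℓ1 solver as we understand it from Eqs. (4)-(7): compute the gradient of the smoothed ℓ1 objective, project it onto the tangent space of the orthogonal manifold with Eq. (6), and take small steps from the ℓ2-optimal initial mapping. The gradient expression, step-size policy and stopping thresholds here are our own simplified assumptions, not the paper's exact recipe.

```python
import numpy as np

def l1_opa_refine(A, B, M0, alpha=100.0, step=1e-4, n_iter=500, tol=1e-3):
    """Refine an orthogonal mapping M (initialised with the l2-optimal M0)
    by descending tr[(AM-B)^T tanh(alpha(AM-B))] along the projected
    direction of Eq. (6)."""
    M = M0.copy()
    d = M.shape[0]
    prev_loss = np.inf
    for _ in range(n_iter):
        R = A @ M - B
        Z = alpha * R
        loss = np.abs(R).sum()                       # true l1 loss, used for stopping
        # Gradient of the smoothed objective with respect to M.
        grad = A.T @ (np.tanh(Z) + Z * (1.0 - np.tanh(Z) ** 2))
        # Projection onto the orthogonal manifold, Eq. (6).
        proj = 0.5 * M @ (M.T @ grad - grad.T @ M) + (np.eye(d) - M @ M.T) @ grad
        M = M - step * proj
        # Stopping criteria: rising l1 loss or departure from orthogonality.
        if loss > prev_loss or np.abs(M.T @ M - np.eye(M.shape[1])).max() > tol:
            break
        prev_loss = loss
    return M
```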
2
We aim to predict the valence of each sentence using information extracted from the history preceding that sentence. For this purpose, we train machine learning models that assign an emotion value to each sentence given information available in the preceding context. There are three key challenges that need to be addressed. First, identifying the features of the preceding context that are relevant to this sentence-by-sentence valence assignment task. Second, identifying what size of context history is most informative. And third, determining the type of machine learning model which performs best in predicting these sentence valences. As a first step, we investigate the degree to which the relationship between current sentence valence and sentence context history information can be modelled using linear methods. We apply two models to this task -linear regression and a linear support vector regressor. In the second part of the study, we investigate whether the application of non-linear methods to the same feature sets can better model the relationship between the sentence context history and the current sentence valence. We implement these non-linear models using a random forest regressor.To train these models we explore a number of different feature combinations, to determine which kinds of information are most important for predicting sentence-level valence.We explore the scope of context relevant to inferring sentence valence, investigating different sizes of sentence context history and a variety of feature sets of different dimensionalities. This first stage of our study therefore focuses on the exploration of eighteen different feature sets combined in the following ways: (1) a history of sentence valence scores only (over a number of history window sizes, spanning 10, 50 and 100 sentences), and (2) a history of sentence valence combined with semantic information (i.e. pre-trained semantic word embeddings in the form of 50, 100, 200 and 300 dimension GloVe word embeddings (Pennington et al., 2014) , and 300 dimension FastText word embeddings (trained on subword information) (Bojanowski et al., 2017) again over the same number of context history window sizes (10, 50 and 100 sentences). The 18 different feature set combinations investigated correspond to the rows of the results table below (Table 1) .
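As an illustration of the feature construction and the three regressors, here is a small scikit-learn sketch. The window sizes mirror the description above, while the data loading, the use of averaged word vectors as sentence embeddings, and all hyperparameters are placeholder assumptions of ours.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.ensemble import RandomForestRegressor

def build_features(valences, sent_embeddings, window):
    """For each sentence, concatenate the previous `window` valence scores
    with the previous sentence embeddings (zero-padded at the start)."""
    X, y = [], []
    d = sent_embeddings.shape[1]
    for i in range(1, len(valences)):
        vals = valences[max(0, i - window):i]
        embs = sent_embeddings[max(0, i - window):i]
        pad = window - len(vals)
        X.append(np.concatenate([np.zeros(pad), vals,
                                 np.zeros((pad, d)).ravel(), embs.ravel()]))
        y.append(valences[i])
    return np.array(X), np.array(y)

models = {
    "linear": LinearRegression(),
    "svr": LinearSVR(C=1.0),
    "rf": RandomForestRegressor(n_estimators=100),
}
# X, y = build_features(valences, embeddings, window=10)
# for name, m in models.items():
#     m.fit(X, y)
```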
2
To learn mappings between Proto-Slavic etyma and the Slavic reflexes that descend from them, we use an LSTM Encoder-Decoder with 0th-order hard monotonic attention (Wu and Cotterell, 2019), trained on all languages in our data set. The basic model architecture used for the experiments in this study has the following structure (schematized in Figure 1): a trainable language-level embedding is concatenated to a one-hot representation of each input segment at each input time step; each concatenation is fed to a Dense layer (with no activation) to generate an embedding for each time step that encodes information about the input phoneme and the language ID of the reflex; these embeddings are subsequently fed to the encoder-decoder in order to generate the output. The parameters of the encoder-decoder architecture are shared across languages in the data set; the sole language-specific variable employed is the language-level embedding fed to the model. In all experiments, we set the dimension of the language-level embedding and the language/character embedding to 128, and the hidden layer dimension to 256. In our experiments, we employ different representations of the language-level embedding, including a dense layer with no activation (DENSE model), a dense layer with sigmoid activation (SIGMOID model) and a dense layer with straight-through activation (ST model), which uses the Heaviside step function (negative values map to 0, non-negative values to 1). We train our model for 200 epochs with a batch size of 256 using the Adam optimizer with a learning rate of 0.001, with the objective of minimizing the mean categorical cross-entropy between the predicted and observed distributions of the output. To evaluate model performance, we carry out K-fold cross-validation (K = 10), randomly holding out 10% of the forms in each language and greedily decoding the held-out forms using the trained model. For additional analyses regarding the interpretability of the learned embeddings, we train the model on all forms in the data set. Models are implemented in Keras (Chollet, 2015) and Larq (Geiger and Team, 2020).
2
There are two components to our topic-adjusted algorithm for ideology prediction. First, we focus on n-grams and skipgrams that are most correlated with ideology in the training data. For each topic within a topic mapping, we count the total number of times each phrase is used by all left-and all right-leaning economists. Then, we compute Pearson's χ 2 statistic and associated p-values and keep phrases with p ≤ 0.05. As an additional filter, we split the data into ten folds and perform the χ 2 test within each fold. For each topic, we keep phrases that are consistently ideological across all folds. This greatly reduces the number of ideological phrases. For LDA50, the mean number of ideological phrases per topic before the cross validation filter is 12,932 but falls to 963 afterwards. With the list of ideological phrases in hand, the second step is to iterate over each topic and predict the ideologies of economists in our test set. To compute the predictions we perform partial least squares (PLS): With our training data, we construct the standardized frequency matrix F t,train where the (e, p)-th entry is the number of times economist e used partisan phrase p across all of e's papers in t. This number is divided by the total number of phrases used by e in topic t. For papers with multiple authors, each author gets same count of phrases. About 5% of the papers in our dataset are written by authors with differing ideologies. We do not treat these differently. Columns of F t,train are standardized to have unit variance. Let y be the vector of ground-truth ideologies, test set ideologies are predicted as follows:1) Compute w = Corr(F t,train , y), the correlations between each phrase and ideology 2) Project to one dimension: z = F t,train w 3) Regress ideology, y, on the constructed variable z: y = b 1 z 4) Predict ideologyŷ e of new economist bŷ y e = b 1fe w, (f e is scaled frequency vector) To avoid over-fitting we introduce an ensemble element: For each t, we sample from the list of significant n-grams in t and sample with replacement from the authors who have written in t. 2 PLS is performed on this sample data 125 times. Each PLS iteration can be viewed as a vote on whether an author is left-or right-leaning. We calculate the vote as follows. For each iteration, we predict the ideologies of economists in the training data. We find the threshold f that minimizes the distance between the true and false positive rates for the current iteration and the same rates for the perfect classifier: 1.0 and 0.0, respectively. Then, an author in the test set is voted left-leaning if y t,test ≤ f and right-leaning otherwise.For a given topic mapping, our algorithm returns a three-dimensional array with the (e, t, c)-th entry representing the number of votes economist e received in topic t for ideology c (left-or right-leaning). To produce a final prediction, we sum across the second dimension and compute ideology as the percentage of right-leaning votes received across all topics within a topic-mapping. Therefore, ideology values closer to zero are associated with a left-leaning ideology and values closer to one are associated with a rightward lean.To recap, we start with a topic mapping and then for each topic run an ensemble algorithm with PLS at its core. 3 The output for each topic is a set of votes. We sum across topics to compute a final prediction for ideology.
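A compact sketch of the per-topic PLS-plus-voting procedure described above, assuming the standardized frequency matrix and ground-truth ideologies are already built; the resampling scheme and the threshold choice are simplified relative to the paper (which uses an ROC-based cut rather than the median used here).

```python
import numpy as np

def pls_vote(F_train, y_train, F_test, n_votes=125, rng=None):
    """Ensemble of simple PLS passes (Steps 1-4 above): each pass resamples
    authors and phrases, projects onto the phrase-ideology correlation
    direction, fits a slope, picks a threshold on the training scores, and
    casts one left/right vote per test author."""
    rng = rng or np.random.default_rng(0)
    n_auth, n_phr = F_train.shape
    votes = np.zeros((F_test.shape[0], 2), dtype=int)     # columns: left, right
    for _ in range(n_votes):
        a = rng.integers(0, n_auth, n_auth)                # bootstrap authors
        p = rng.choice(n_phr, max(1, n_phr // 2), replace=False)  # sample phrases
        Ftr, ytr = F_train[a][:, p], y_train[a]
        w = np.nan_to_num(np.array(
            [np.corrcoef(Ftr[:, j], ytr)[0, 1] for j in range(Ftr.shape[1])]))
        z = Ftr @ w                                        # 1-D projection (Step 2)
        b1 = (z @ ytr) / (z @ z + 1e-12)                   # slope of y on z (Step 3)
        thr = np.median(b1 * z)                            # stand-in for the ROC-based cut
        y_hat = b1 * (F_test[:, p] @ w)                    # Step 4 predictions
        votes[:, 1] += (y_hat > thr).astype(int)           # right-leaning vote
        votes[:, 0] += (y_hat <= thr).astype(int)          # left-leaning vote
    return votes[:, 1] / votes.sum(axis=1)                 # share of right-leaning votes
```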
2
For this task, we employ a neural architecture utilising structural features to predict semantic parsing tags for each sentence. The system maps a sentence from the source language to a probability distribution over the tags for all the words in the sentence. Our architecture consists of a GCN layer (Kipf and Welling, 2017), a bidirectional LSTM, and a final dense layer on top. The inputs to our system are sequences of words, alongside their corresponding POS and named-entity tags. 2 Word tokens are represented by contextualised ELMo embeddings (Peters et al., 2018), and POS and named-entity tags are one-hot encoded. We also use sentence-level syntactic dependency parse information as input to the system. In the GCN layer, the convolution filters operate based on the structure of the dependency tree (rather than the sequential order of words). Graph Convolution. Convolutional Neural Networks (CNNs), as originally conceived, are sequential in nature, acting as detectors of N-grams (Kim, 2014), and are often used as feature-generating front-ends in deep neural networks. Graph Convolutional Networks (GCNs) have been introduced as a way to integrate rich structural relations such as syntactic graphs into the convolution process. In the context of a syntax tree, a GCN can be understood as a non-linear activation function f and a filter W with a bias term b: $c_v = f\big(\sum_{u \in r(v)} W x_u + b\big)$, where r(v) denotes all the words in relation with a given word v in a sentence, and c represents the output of the convolution. Using adjacency matrices, we define graph relations as mask filters for the inputs (Schlichtkrull et al., 2017). In the present task, information from each graph corresponds to a sentence-level dependency parse tree. Given the filter W_s and bias b_s, we can therefore define the sentence-level GCN as follows: $C = f\big(W_s X^{\top} A + b_s\big)$, where X_{n×v}, A_{n×n}, and C_{o×n} are tensor representations of words, the adjacency matrix, and the convolution output, respectively. 3 In Kipf and Welling (2017), a separate adjacency matrix is constructed for each relation to avoid over-parametrising the model; by contrast, our model is limited to the following three types of relations: 1) the head to the dependents, 2) the dependents to the head, and 3) each word to itself (self-loops), similar to Marcheggiani and Titov (2017). The final output is the maximum of the weights from the three individual adjacency matrices. The model architecture is depicted in Figure 1.
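The three-relation convolution and element-wise maximum described above can be sketched as follows (one possible reading, not the authors' code):

```python
import torch
import torch.nn as nn

class DependencyGCN(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # one filter per relation type: head->dependent, dependent->head, self-loop
        self.filters = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(3)])

    def forward(self, x, adjs):
        # x: (n_words, in_dim); adjs: list of three (n_words, n_words) adjacency masks
        outs = [torch.relu(a @ f(x)) for a, f in zip(adjs, self.filters)]
        return torch.stack(outs, dim=0).max(dim=0).values  # element-wise max over the three relations
```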
2
We aim to generate a perturbed sample by adding discrete noise that incurs the highest divergence of the model's prediction logits from the original one without significant changes in its semantics. Our augmentation is made on-the-fly depending on the current model to push the decision boundary during training effectively.Virtual Adversarial Discrete Noise We develop the consistency training framework by perturbing inputs with virtual adversarial discrete noise, called VAT-D. We want to perturb a given sentencex = (x 1 , . . . , x M ) ∈ V M of se- quence length M into a new sentence x = (x 1 , . . . , x M ) ∈ V M of the same length, where Vis the word vocabulary. In contrast with the continuous case, we constrain that x differs from x in only small portion of positions changing their surface forms, i.e. N eighbor(x) = { x − x H /M ≤ τ }where H denotes hamming distance in the tokenlevel and τ is the replacement ratio. In this work, we only focus on the replacement for simplicity.The white-box approaches having an access to the training model's internal states, mostly rely on the gradient vectors of the loss function with respect to the input embeddings for finding adversarial discrete noise (Ebrahimi et al., 2017) . However, for acquiring such gradient information under the framework of consistency training as in Eq. 1, naively resorting to the linear approximation of the loss function with respect to the input embeddings like in previous works (Ebrahimi et al., 2017; Michel et al., 2019; Cheng et al., 2019) does not hold since the first-order term from Taylor expansion is zero when the label information is substituted to model's predictions (Miyato et al., 2018) . We bypass the obstacle by sharpening the distribution of original examples' predictions to enable the linear approximation. Sharpening the distribution makes high probabilities higher and lower probabilities lower while not changing their relative order. By sharpening the distribution of the original inputs' predictions, the first-order term does not result in zero, hence can be utilized for the approximation. This is because the modified divergence loss is not zero when x = x indicating the non-negative divergence is not necessarily minimum at r = x − x = 0 (Note that the derivative of f (x) is zero when the f (x) is minimum at x). The optimizing objective of Eq. 1 is modified tõL(x, x ) = D[p sharp (• | x)), p(• | x )] (2)by sharpening the predicted distribution given an original input by the pre-defined temperature T asp sharp (• | x) = p(•|x) 1 T / p(•| x) 1 T 1 .Virtual Adversarial Token Replacement Consequently, the optimization problem to find a virtual adversarial discrete perturbation changes tôx = argmax x ∈N eighbor(x)L (x, x ).Finally, we train the modified consistency loss function from Eq. 2 with obtained discrete perturbation. The replacement operation of m-th token x m to the arbitrary token x can be written asδ(x m , x) := e(x) − e(x m ), where e(•) denotes embedding look-up. We induce a virtual adversarial token by the following criteria (Ebrahimi et al., 2017; Michel et al., 2019; Cheng et al., 2019; Wallace et al., 2019; Park et al., 2020) :x m = argmax x∈top_k(xm,V ) δ(x m , x) • g xm (3)whereg xm = ∇ e(xm)L (x, x )| x =x gxm is the gradient vector of the sharpened consistency loss from Eq. 2 with respect to the m-th token. In brief, we replace the m-th original token x m with one of the candidates x that approximately maximizes the consistency loss. 
We randomly select token indexes to perturb and replace them simultaneously. To bound the semantics similarity between the original sentence and the perturbed one, we use a masked language model (MLM) (Devlin et al., 2019; to restrict a set of possible candidates to replace x m . We filter top-k candidates (Cheng et al., 2019) , denoted as top_k(x m , V ), from the vocabulary having the highest MLM probability at position m when an original sentence x is given to the MLM. More training details are in Appendix A.4 Experimental Setup
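A condensed, illustrative sketch of the replacement criterion in Eq. (3); the model and embedding-layer interfaces (a HuggingFace-style classifier accepting inputs_embeds), and the temperature value, are assumptions:

```python
import torch
import torch.nn.functional as F

def pick_replacement(model, emb_layer, input_ids, position, cand_ids, T=0.5):
    """input_ids: (1, M); cand_ids: top-k MLM candidates for the chosen position."""
    embeds = emb_layer(input_ids)                        # (1, M, d)
    embeds.retain_grad()
    logits = model(inputs_embeds=embeds).logits          # model prediction p(.|x)
    p_sharp = F.softmax(logits.detach() / T, dim=-1)     # sharpened distribution p_sharp(.|x)
    loss = F.kl_div(F.log_softmax(logits, dim=-1), p_sharp, reduction="batchmean")
    loss.backward()
    g = embeds.grad[0, position]                                               # gradient w.r.t. e(x_m)
    deltas = emb_layer.weight.detach()[cand_ids] - embeds[0, position].detach()  # e(x) - e(x_m)
    return cand_ids[int(torch.argmax(deltas @ g))]       # candidate maximizing delta . g (Eq. 3)
```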
2
This study takes 2,802 single words in CVAW 2.0 (Yu et al., 2016) and 2,250 multi-word phrases, both annotated with valence-arousal ratings, as training material. At word level, we use E-HowNet (Chen et al., 2005), a system that is designed for the purpose of automatic semantic composition and decomposition, to extract synonyms of the words from CVAW 2.0, and expand it to 19,611 words with valence-arousal ratings, called WVA. Fig. 2 illustrates the proposed framework. In order to cope with the problem of unknown words, we separate words in WVA into 4,184 characters with valence-arousal ratings, called CVA. The valence-arousal score of an unknown word can be obtained by averaging the matched CVA. Moreover, previous research suggested that it is possible to improve the performance by aggregating the results of a number of valence-arousal methods (Yu et al., 2015). Thus, we use two sets of methods for the prediction of valence: (1) prediction based on WVA and CVA, and (2) a kNN valence prediction method. The results of these two methods are averaged as the final valence score. First, we describe the prediction of valence values. As shown in Fig. 3, the word "完成" in the test data exists in the WVA, so we can directly obtain its valence value of 7.0. However, another word "通知" does not exist in the WVA, so we search in CVA and calculate a valence value of 5.6. Additionally, we propose another prediction method for the valence value, as shown in Fig. 4, based on kNN. We begin by computing the similarity between words using word embeddings (Mikolov et al., 2013). Then, the 10 most similar words are selected and their scores are averaged according to Eq. 1: $\text{Valence}_{kNN} = \frac{\sum_{i=1}^{N} V_{NN_i}}{N}$ (1), where $V_{NN_i}$ is the valence of the i-th nearest neighbour and N is the number of neighbours. (Figure 4: Word valence prediction method based on kNN.) As for the arousal prediction, we propose two methods: (1) linear regression, and (2) support vector regression (SVR); the linear regression and SVR predictions are averaged as the final arousal score. As shown in Fig. 6, this study considers the linear regression equation in each range according to the valence-arousal value of words in WVA. According to our observation of the data, valence values are generally distributed in the range of 3-7. In order to boost learning of different ranges of data, we distribute them into two categories. For example, the word "殺害" has a valence value of 1.6. By our design, it will be distributed to the categories with valence values of 1 and 2. When the linear regression training is finished, we can predict the corresponding arousal score according to the valence value of the word. As for the SVR-based approach, we first train 300-dimensional word embeddings for all words in WVA using an online Chinese news corpus 1 . As shown in Fig. 6, L is the label of the sample, and Dim represents the dimension of the features. We then predict the value of arousal through SVR. Finally, we aggregate the arousal scores predicted by these two methods by taking an average. We observe that the values obtained by linear regression are convergent, while the SVR values are more divergent. So, averaging the two values can overcome the shortcomings of these methods. At phrase level, we first experiment with using the proposed word-level model to predict the valence and arousal values. Unfortunately, the results are not satisfactory. We then explore the possibility to incorporate linguistic knowledge into the model.
Structurally, phrases can be split into the adverb (ADV) and the adjective (ADJ). An adverb is a word that modifies an adjective in a phrase. For instance, "開 心" (happy) with a preceding "非常 (very)" becomes "非常開心 (very happy)," which we consider has an increased degree of happiness. Following this line of thought, we explore ADVs as weighting factors for ADJs. The ADVList and ADVWeight List are extracted from 2,250 multi-word phrases. We employ them to split phrases into ADV and ADJ parts. Subsequently, the valence and arousal values of an ADJ is determined by the word-level prediction model, while those of the ADV is used as an offset. An illustration of our phrase-level prediction process is in Fig. 7 . As shown in Fig. 7 , in order to obtain the weight of the ADV word "最," we need to use ADVList to split phrases that contain "最" into the format of "[ADV] [ADJ] ." Then, our word prediction model is used to obtain valence (VA) value of the ADJ part. It will be deducted from the VA of the corresponding phrases, and then the remainders are averaged to become the final ADV weight of the word "最". That is, ADVWeight(最) = mean(VA Phrase − VA ADJ ). Most importantly, we hypothesize that ADVs have different effects on phrases with different ADJs, namely, those with valence values ≥ 5.0 and < 5.0. Thus, we have to consider them separately. In the end, there will be four weights for the ADV "最": Positive valence offset, Positive arousal offset, Negative valence offset, and Negative arousal offset.
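A small sketch of the phrase-level rule, assuming simple in-memory data structures and a given word-level predictor; the separate positive/negative offsets follow the description above:

```python
from statistics import mean

def learn_adv_offsets(phrases, predict_word_va):
    """phrases: list of (adv, adj, phrase_valence); predict_word_va: word-level model."""
    offsets = {}                                    # adv -> {"pos": [...], "neg": [...]}
    for adv, adj, v_phrase in phrases:
        v_adj = predict_word_va(adj)
        key = "pos" if v_adj >= 5.0 else "neg"      # positive vs. negative ADJs handled separately
        offsets.setdefault(adv, {"pos": [], "neg": []})[key].append(v_phrase - v_adj)
    # ADVWeight = mean(VA_phrase - VA_adj) over the training phrases containing that ADV
    return {adv: {k: mean(v) if v else 0.0 for k, v in d.items()} for adv, d in offsets.items()}

def predict_phrase_valence(adv, adj, offsets, predict_word_va):
    v_adj = predict_word_va(adj)
    key = "pos" if v_adj >= 5.0 else "neg"
    return v_adj + offsets.get(adv, {"pos": 0.0, "neg": 0.0})[key]
```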
2
Our contribution is an unsupervised method with the use of a web search engine as a way to maximize the chances of finding all the slang words, abbreviations, and non-standard expressions that classic corpora will not include. The method is to calculate the sentiment score for a term w from the sentiment lexicons as shown in Equation 1 (Kiritchenko et al., 2014): SentSc(w) = PMI(w, pos) − PMI(w, neg) (1). PMI stands for pointwise mutual information; it measures the degree of statistical dependence between two terms. It is used in our work to calculate the degree of statistical dependence between a term and a class (negative or positive): $PMI(w, pos) = \log_2 \frac{freq(w, pos) \times N}{freq(w) \times freq(pos)}$ (2), where freq(w, pos) is the number of times a term w occurs as positive or in a positive tweet, freq(w) is the total frequency of term w in sentiment lexicons and labeled tweets, freq(pos) is the total number of positive terms in sentiment lexicons and labeled tweets, and N is the total number of terms in the data-set (Kiritchenko et al., 2014). PMI(w, negative) is calculated similarly. For the English language, we have done our testing using the following manually constructed sentiment lexicons:
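A minimal sketch of Equations 1-2, assuming the required frequency counts have already been collected from the lexicons and labeled tweets (the back-off to 0 for unseen terms is our assumption):

```python
import math

def pmi(freq_w_class, freq_w, freq_class, n_total):
    if freq_w_class == 0 or freq_w == 0:
        return 0.0  # assumption: undefined PMI backed off to 0
    return math.log2((freq_w_class * n_total) / (freq_w * freq_class))

def sentiment_score(w, counts):
    """counts: dict holding freq(w,pos), freq(w,neg), freq(w), freq(pos), freq(neg), N."""
    return (pmi(counts["freq_w_pos"].get(w, 0), counts["freq_w"].get(w, 0), counts["freq_pos"], counts["N"])
            - pmi(counts["freq_w_neg"].get(w, 0), counts["freq_w"].get(w, 0), counts["freq_neg"], counts["N"]))
```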
2
The algorithm is composed of 8 steps:
1. Identification and expansion of abbreviations.
2. Splitting the content of the document into m sentences.
3. Identification of the n unique terms in the document that are potential keyphrases.
4. Creation of a m × n sentence-term matrix X to identify the occurrences of the n terms within a collection of m sentences.
5. Dimensionality reduction to transform data in the high-dimensional matrix X to a space of fewer dimensions.
6. Data clustering performed in the reduced space. The result of the clustering is used to build a new representation of the source document, which is now considered as a set of clusters, with each cluster consisting of a bag of terms.
7. Execution of LDA on the new document representation.
8. Selection of best keyphrases by analyzing LDA's results.
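Steps 4-7 could be realized, for example, with scikit-learn as in the compressed sketch below (an illustration under assumed hyperparameters, not the authors' code):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation
from sklearn.cluster import KMeans

def cluster_then_lda(sentences, n_dims=50, n_clusters=10, n_topics=5):
    vec = CountVectorizer(ngram_range=(1, 3))
    X = vec.fit_transform(sentences)                            # step 4: m x n sentence-term matrix
    X_red = TruncatedSVD(n_components=n_dims).fit_transform(X)  # step 5: assumes n_dims < n terms
    labels = KMeans(n_clusters=n_clusters).fit_predict(X_red)   # step 6: clustering in reduced space
    # rebuild the document as one bag of terms per cluster
    cluster_docs = np.asarray(np.vstack([X[labels == c].sum(axis=0) for c in range(n_clusters)]))
    lda = LatentDirichletAllocation(n_components=n_topics).fit(cluster_docs)  # step 7
    return vec, lda
```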
2
The methodology applied in this task consists of training and using prediction values of four models: MultiFiT, BERT, ALBERT, and XLNet. After retrieving prediction values, our ensemble calculates an average of all softmax values from these four models, as shown in Figure 1 . Models BERT, ALBERT, and XLNet were trained on a DGX-1, while MultiFiT was trained on a GTX 1070 Ti 8GB. The hyperparameters of the four models are described in Table 1 . This step consists in eliminating noises and terms that have no semantic significance in the sentiment prediction. For this, we perform the removal of links, removal of numbers, removal of special characters, and transform text in lowercase.Nowadays, there are many advances in NLP, but the majority of researches is based on the English language, and those advances can be slow to transfer beyond English. The MultiFiT (Eisenschlos et al., 2019) method is based on Universal Language Model Fine-tuning (ULMFiT) (Howard and Ruder, 2018) and the goal of this model is to make it more efficient for modeling languages others than English.There are two changes compared to the old model: it utilizes tokenization based on sub-words rather than words, and it also uses a QRNN (Bradbury et al., 2016) rather than an LSTM. The model architecture can be seen in Figure 2 .The architecture of the model consists of a subword embedding layer, four QRNN layers, an aggregation layer, and two linear layers. In special this architecture, subword tokenization has two very important properties:• Subwords more easily represent inflections and this includes common prefixes and suffixes. For morphologically rich languages this is well-suited.• It is a common problem out-of-vocabulary tokens and Subword tokenization is a good solution to prevent this problem. Bidirectional Encoder Representations from Transformers (also abbrevitaed as BERT) (Devlin et al., 2018) is a model designed to pre-train deep bidirectional representations from unlabeled data. The pre-trained BERT model can be fine-tuned with just one single additional output layer, which can be used in sentiment analysis and others NLP tasks. The implementation of BERT there has two steps: pre-training and fine-tuning. In the pre-training step, the model is trained on unlabeled data over different pre-training tasks using a corpus in a specific language or in multiples corpus with different languages. For the fine-tuning step, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the specific tasks.Dataset repositories like NLP-progress 4 track different model results and progress in many Natural Language Processing (NLP) benchmarks, and also the current state for the most common NLP tasks. When doing a comparison between results available for reference in such repositories, BERT was able to achieve state-of-the-art in many NLP-related tasks, which gives an excellent reason to use BERT in our architecture, even while many reasons of BERT state-of-art performance are not fully understood (Kovaleva et al., 2019) (Clark et al., 2019) .Recent language models had shown a tendency to increase in size and quantity of parameters for training. They often offer many improvements in many NLP tasks, but they suffer as a consequence of the need for many hours of training, which consequently increases its costs of operation. 
ALBERT (Lan et al., 2019) : A Lite BERT for Self-supervised Learning of Language Representations, offers an alternative of parameters reduction to solve this problem.There are two changes to reduce the size of the model based on BERT. The first is a factorized embedding parameterization, this decomposing the large vocabulary embedding matrix into two small matrices.This decomposition approach reduces the trainable parameters and reduces a significant time during the training phase. The second change is the share parameter cross-layer, which also prevents the parameter from growing with the depth of the network.XLNet (Yang et al., 2019a) is a model that uses a bidirectional learning mechanism, doing that as an alternative to word corruption via masks implemented by BERT. XLNet uses a permutation operation over tokens in the same input sequence, being able to use a single phrase through different training steps while providing different examples. Phrase permutation in training fixes token position, but iterating in every token in training phrases, rendering the model able to deal with information gathered from tokens and its positions in a given phrase. XLNet also draws inspiration from Transformer-XL , relying specially in pre-training ideas.
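The ensemble step itself reduces to averaging the four models' softmax outputs; a minimal sketch, assuming the per-model probabilities have already been computed:

```python
import numpy as np

def ensemble_predict(prob_multifit, prob_bert, prob_albert, prob_xlnet):
    """Each argument: (n_examples, n_classes) softmax probabilities from one model."""
    avg = np.mean([prob_multifit, prob_bert, prob_albert, prob_xlnet], axis=0)
    return np.argmax(avg, axis=1)   # final sentiment prediction per example
```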
2
3.1 Kernel KNN A key component of Q2R is direct clas- sification through kernel KNN. Let Z = {(x 1 , y 1 ), (x 2 , y 2 ) . . . , (x N , y N )}be the training set of query-document pairs, where x i is the text of a query and y i the identifier of the document that was matched to each query in a curated dataset. We emphasize that y i here refers to the document identity only, and not its content. Note that there may be more than one historical document y i for any given historical query x i . Furthermore, there are often many examples x i , x j , x i = x j with the same document label y i = y j ; this motivates the use of a kernel-weighted voting paradigm. For now, we assume that a feature function f is given and f (x) ∈ Φ is defined for each x, where Φ is a finite-dimensional Euclidean space.The kernel KNN is a generative model for clas-sification where the class conditional distributions p(X|Y ) are represented by a mixture:p(X = x|Y = y) = 1 |Z y | (x ,y )∈Zy ψ f (x) − f (x )where Z y = {(x , y ) ∈ Z : y = y} and ψ : Φ → R is a kernel function, with the following properties:ψ(u) ≥ 0, Φ ψ(u)du = 1.A frequently used, smooth kernel function is the Gaussian kernel ψ(u) ∝ exp{− u 2 2 }. Given a query x, classification is done based on the posterior, given by:p(Y = y|X = x) ∝ p(X = x|Y = y)p(Y = y) =   1 |Z y | (x ,y )∈Zy ψ f (x) − f (x )   |Z y | N ∝ (x ,y )∈Zy ψ f (x) − f (x ) .In practice, the computation of the posterior p(Y |X) is restricted to only the K nearest neighbors of x in the feature space Φ. Let Z K (x) ⊂ Z be the set of K nearest neighbors of x in Φ based on f (x) and Z K y (x) = Z K (x) ∩ Z y , then the kernel KNN relevance score between x and y is defined ass K (x, y) := (x ,y )∈Z K y (x) ψ f (x) − f (x ) . (1)Here, K is a hyperparameter that is optimized using a separate validation set. The feature function f plays a critical role and is optimized through metric learning on Z (Section 3.2). Notice that the relevance score s K between a query x and a document y as defined in (1) depends only on the features of x (the query) and x (training queries) and never on the contents of the document y.Q2R improves the relevance score s K in (1) by fixing the kernel ψ and optimizing the feature function f through metric learning. Assume that f is parameterized by θ ∈ Θ, and denote the particular instance f θ . In general, Θ can be a space of neural networks, and f θ can range from linear to very complex nonlinear mappings.The objective is to find f such that f (x) is close to f (x ) if both (x, y) and (x , y) are in Z. In other words, queries that have the same answers should be close to each other in the feature space. We use the triplet loss (2), a widely used objective function for metric learning (Weinberger and Saul, 2009; Schroff et al., 2015) .The idea is to create a set T of "triplets" (x a , x p , x n ) from Z. Each triplet contains an anchor example x a , a positive example x p that belongs to the same class as x a and a negative example x n that belongs to a different class. Given T , we find:EQUATIONFor large Z, the number of triplets can be huge. We propose an iterative sampling approach similar to that in Xiong et al. (2021) to optimize f θ as follows:1. Initialize θ randomly.2. Set T ← ∅.(x, y) ∈ Z, (a) Sample (x , y ) from Z y − {(x, y)} with weight ψ(f θ (x) − f θ (x )); let x p ← x . (b) Sample (x , y ) from Z − Z y with weight ψ(f θ (x) − f θ (x )); let x n ← x . (c) Let x a ← x, add (x a , x p , x n ) to T . 4. Solve (2) for θ * ; let θ ← θ * .5. 
Evaluate f_θ on the validation set. Stop if there is no improvement after sufficiently many iterations. 6. Otherwise, go to step 2. In Step 3, note that both the positive and the negative examples are sampled based on their similarities to the anchor example, preferring the more similar ones. Empirically we find that this approach performs better than always choosing a "hard" triplet, i.e., picking the most dissimilar positive examples and the most similar negative examples. One reason could be that positive examples form clusters that are far from each other, and including such distant examples in the triplet may actually harm the learning process. (Table: dataset statistics for the Twitter, Telco, and IBM corpora.) Step 3 can be repeated for each (x, y) ∈ Z to produce multiple triplets. In our implementation, for each anchor, we sample one triplet using the weighted distribution as described above and another triplet using uniform weights. For large Z, one can use a subset of Z in Step 3. For general neural networks, (2) can be solved using stochastic gradient descent or its variants on minibatches from T. As noted in Section 3.1, the content of a document y is never used in the direct classification approach via kernel KNN, only its identity. This relies on the presence of a sufficient number of labeled examples (x, y) in Z for each y ∈ D. In practice, this will be possible for some, but not all, y, especially when the collection of documents, D, is large. To provide answers for previously unseen or under-represented documents in Z, Q2R makes use of a standard content-based retrieval approach in conjunction with the kernel KNN method. This is done through the Q2R Orchestrator. Suppose that s_C is the relevance score for a content-based approach while s_K is the relevance score for kernel KNN described above. Suppose that the objective is to return the top R documents. For a given query x, let Y_C = {y_C(1), ..., y_C(R)} ⊂ D be the top R documents based on s_C and, respectively, Y_K = {y_K(1), ..., y_K(R)} ⊂ D the top R documents based on s_K. The question of interest is formulated as a binary decision: decide whether to select Y_C or Y_K as the set of results to provide to the user. The Q2R Orchestrator thus trains a binary classifier to make this decision. For efficiency we use a linear classifier trained using logistic regression. To construct the training set, we identify examples in the validation set where the ground truth is contained in Y_C or Y_K, but not both. The input features for the classifier minimally include (s_C(x, y_C(1)), ..., s_C(x, y_C(R)), s_K(x, y_K(1)), ..., s_K(x, y_K(R))), and may include other features such as confidence intervals.
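A small numpy sketch of the kernel-KNN relevance score of Eq. (1), assuming a Gaussian kernel and precomputed query features:

```python
import numpy as np

def kernel_knn_scores(f_x, train_feats, train_labels, K=50):
    """f_x: (d,) query features; train_feats: (N, d); train_labels: document ids of training queries."""
    dists = np.linalg.norm(train_feats - f_x, axis=1)
    nn_idx = np.argsort(dists)[:K]                      # K nearest neighbours in the feature space
    scores = {}
    for i in nn_idx:
        w = np.exp(-0.5 * dists[i] ** 2)                # Gaussian kernel psi
        scores[train_labels[i]] = scores.get(train_labels[i], 0.0) + w
    return scores                                       # document id -> relevance s_K(x, y)
```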
2
We use the current stable release (v3) of Moses, a state-of-the-art statistical phrase-based machine translation system.We trained translation models using the Europarl corpus (Koehn, 2005) , using the latest available versions (v7 for German-English and Czech-English, and v8 for Finnish-English), as well as the Common Crawl corpus and News Commentary (v10) corpus for German-English and Czech-English, and the Wiki Headlines corpus for Finnish-English.We trained a back-off language model (LM) with modified Kneser-Ney smoothing (Katz, 1987; Kneser and Ney, 1995; Chen and Goodman, 1998) on the English Gigaword v5 corpus (Parker et al., 2011) using lmplz from KenLM (Heafield et al., 2013) .
2
Our main objective is to use a cross-lingual word embedding and a thesaurus in a source language to generate a thesaurus in a target language without relying on parallel data. The source embeddings can be trained exclusively with data extracted from our ongoing analysis tasks, which is already available, easy to obtain and closely matches the characteristics of the information that will be analysed in the actual downstream use of the translated thesaurus. In this section we will describe the particular features of the knowledge base used in the application of this method, as well as characterise the bilingual lexicon induction techniques ap-plied and our evaluation strategies. Finally, we propose an optimisation strategy for manual validation applied to the translated thesaurus.In order to structure the non-financial information, a thesaurus in English has been manually created by experts in sustainability matters. The thesaurus is a formal specification of concepts related to 100 NFR disclosure topics. For example, air pollutant, air pollution, dust emission are some of the concepts covering the topic Air Emissions. Our terms are both words or multi-word expressions and there are a significant quantity of noun phrases. The thesaurus groups into topics more than 6000 terms in an ongoing effort that spans over five years. The terms of the thesaurus are expressed as lexical patterns to build a knowledge base of a matching algorithm responsible to automatically detect the mention of topics in different textual resources. The patterns were created using the spaCy NLP library (Honnibal and Johnson, 2015) . spaCy provides a rule-based matching feature that scans the text at token level according to some predefined patterns. These patterns are built considering some word-level features such as lemmatization, parts-of-speech, dependency parser and others. The matching algorithm compares the token attributes, specified by the pattern, with the attribute of the token in the input text to match or not the sequence. See below examples of the patterns we used.[ {"LOWER" : "dust"}, {"LOWER" : "emission"}, {"LOWER" : "diesel"}, { "LOWER" : "emissions"}, {"LOWER" : "air"}, {"LOWER" : "pollutant"} ] In the above patterns, each element inside the square brackets represents one or more words that should appear consecutively. Each element inside the curly brackets represents a token. The LOWER: diesel means that we want to match a word whose lower form is diesel. For example, any of the following sequences will be annotated with the second pattern: diesel emissions or DIESEL Emissions or Diesel emissions.Due to a lack of a Spanish thesaurus, we initially considered different alternatives to extract topics from Spanish texts: (1) maintaining a parallel thesaurus between source and target languages, which is a non-scalable process and required experts in target language;(2) using a Commercial Machine Translation System to translate the Spanish text into English. Although using a translation service seems a technically sound solution with adequate quality results, it is not financially feasible; or (3) training our own MT model, which requires too much effort and is also very costly. 
As a result, we moved on to BLI techniques to derive a Spanish thesaurus.To generate embeddings that can be used in the CLE method that we have selected for our translation purpose, we need two monolingual datasets: one in the source language in which our original thesaurus was built and another in the tar- get language to which we want to migrate the said thesaurus. We apply lowercase and tokenization to both datasets, with which we then train two fastText embeddings with default hyperparameters and limiting the vocabulary to the 200,000 most frequent tokens as per Artetxe et al. 2019, although any Word2vec-based toolkit should suffice. The English and Spanish spaCy models were used to apply lowercase and to tokenize the datasets in both languages.To obtain an inducted bilingual dictionary from monolingual data, we recreated the VecMap projection-based CLE method (Artetxe et al., 2018a) using the word embeddings mentioned in the previous section and mapped them into a shared space. We then extracted a source-to-target and target-to-source-phrase table using the technique described in Artetxe et al. (2018b) . A bilingual dictionary is directly induced from the sourceto-target phrase-table by ranking the entries of a given term according to their corresponding likelihood in the phrasetable, thus transforming the quantitative ranking of the phrase-table into a bilingual dictionary with positional ranking. Figure 1 shows a fragment of the phrase-table obtained for the English term market and its Spanish candidate translations. Terms with higher likelihood will appear first in the entry for market in the induced bilingual dictionary dictionary. This dictionary is used to translate the terms that make up our thesaurus. This approach maintains equivalence between source and target at the token level. However, many of the thesaurus terms are multi-word expressions. To cover this limitation and in order to build sensible combinations using the translated words, some heuristics are considered. As a result, token-level equivalence is often ignored.Using the cross-mapping embeddings we obtain a bilingual dictionary containing exclusively unigrams, which means that some techniques have to be applied in order to translate multi-word terms. In this section, we will outline several heuristic techniques that are applied to increase the coverage of the first bilingual dictionary. These heuristics use the phrase-table to generate new terms.Literal translation Multi-word expressions are translated term by term, maintaining their original structure. The chosen translation for each word is the first-ranked one in the bilingual dictionary, or a special symbol if there is no possible translation for that term. This is the crudest possible form of translation using a bilingual unigram dictionary, and Table 1 : Mean Reciprocal Rank that evaluates a bilingual dictionary against the full English to Spanish bilingual dictionary found in MUSE (Lample et al., 2018) .it serves as the baseline for all other heuristic approaches to building expressions. For example, for the English term diesel emissions, the literal translation that is obtained is diesel emisiones, which can be represented as the following pattern: [{"LOWER" : "emisiones"},{ "LOWER" : "diésel"}]Permutations Expressions are first translated term by term, after which all of their possible permutations are added into the thesaurus. 
In languages that do not share a very similar grammatical structure, translating the expressions maintaining their original order may produce incorrect sentences. Moreover, this technique may help capture all possible variations in languages that present a flexible word order, such as Romance languages, Hungarian, etc. See below an example of the pattern obtained for the English term diesel emissions after obtaining its literal translation in Spanish and applying the permutation heuristic explained in this paragraph.[{ "LOWER" : "diésel"},{"LOWER" : "emisiones"}]Lemmatized terms with permutations All terms are translated in their original order, then lemmatized. Finally, like in the previous case, every possible permutation is considered. We lemmatize all terms in an attempt to reduce the variability that morphologically rich languages (that commonly also have a rather flexible word order) might bring, which is often a source of problems for unsupervised bilingual dictionary induction methods, as per Søgaard et al. (2018b) . The following example shows the patterns generated using the current heuristic.[ {"LEMMA" : "emisión"},{ "LEMMA" : "diésel"}, { "LEMMA" : "diésel"},{"LEMMA" : "emisión"} ]Lemmatized terms with permutations and wildcard inclusion We use the same setup as in the aforementioned approach, but adding a wildcard match before and after every word with the intent of boosting the coverage of the annotation. The longest possible match for each wildcard is selected, where the match can contain multiple tokens, and its sequence within the analysed text is no longer eligible for new matches. That is, we avoid overlap between different term matches. This logic might reduce the overall precision of the system, since overlap between the terms belonging to different labels is possible. We chose to operate in this manner to preserve the structure of our original thesaurus, as it does not present any overlaps between the terms of different labels. See below an example of one of the patterns generated adding the wildcard heuristic.[ {"LEMMA" : "emisión"}, { "OP" : "*", "IS_ALPHA" : true}, {"LEMMA" : "diésel"} ] The corpora necessary to build the initial monolingual word embeddings were generated using a preexisting collection of news articles from different online sources that are used in the Datamaran platform 1 . We chose to build these embeddings from news corpora because the English thesaurus that we intend to translate is used within Datamaran to analyse the content of online news, which would also be the purpose of this new translated Spanish thesaurus. Therefore, the domain of the corpora from which the monolingual word embeddings are built matches that of the text analysed in the downstream application of our system. The contents of the employed corpora are detailed below:• Source language corpus, which contains 220,000 English news published during 2019, and more than 137,000,000 tokens.• Target language corpus, composed by 260,000 Spanish news that appeared in online press during 2018-2019, containing around 118,000,000 tokens.To validate the quality of the generated Spanish thesaurus we proposed a multi-label document classification task, that will be explained in Section 4.2.2. For that purpose, Version 7 of the English-Spanish Europarl corpus (Koehn, 2005) was used, as it contains a sufficient amount of terminology included in our particular thesaurus (datasets with very sparse annotation would not be very informative). 
Figure 2 shows an English sentence extracted from the Europarl corpus that mentions the topic Workforce changes 2 (WFChges). The Europarl corpus contains documents published on the European Parliament's official website; therefore it does not belong to the same domain as the corpus used to build the embeddings, which is a corpus of the news domain. This ensures that the performance obtained in the evaluation task never surpasses what would be achieved when operating over a dataset that closely matched the information used to generate the embeddings, thus providing a pessimistic estimation of the effectiveness of the evaluated translated thesaurus. The results by phrase composition heuristic (Precision / Recall / KLD) are:
VecMap (Artetxe et al., 2018a), Literal translation: 0.3871 / 0.2505 / 5.1149
VecMap (Artetxe et al., 2018a), Permutations: 0.5295 / 0.4590 / 1.6293
VecMap (Artetxe et al., 2018a), Permutations and lemmatization: 0.4236 / 0.5045 / 1.2235
VecMap (Artetxe et al., 2018a), Permutations, lemmatization and wildcards: 0.4580 / 0.6976 / 0.8027
Commercial Machine Translation System, None (the whole document is translated): 0.8209 / 0.8005 / 0.0233
We find this property desirable, as it allows us to estimate the quality of the translation in the worst cases with a higher confidence level. Additionally, it can reveal faulty translations that could go undetected in a corpus of the same domain because of context similarities. For instance, the term "typhoons" is translated as "aviones" ("airplanes") in the bilingual dictionary generated with the techniques detailed in Section 3.3 using the aforementioned datasets. This could be because in news about typhoons it is usually mentioned that there will be delays or cancellations in commercial flights that operate in the affected region. However, airplanes are not necessarily mentioned next to typhoons in the Europarl corpus nearly as often, which means that when performing a multi-label document classification task it will be possible to appreciate that articles that only discuss the effects of reducing commercial flights in pandemics or passenger rights issues are getting labelled as if they were related to natural disasters.
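For illustration, the permutation, lemmatization and wildcard heuristics can be expressed as spaCy Matcher patterns roughly as follows (the Spanish pipeline name and the example sentence are assumptions, and the sketch only inserts wildcards between words):

```python
from itertools import permutations
import spacy
from spacy.matcher import Matcher

def build_patterns(translated_terms, lemmatized=True, wildcard=True):
    patterns = []
    for words in permutations(translated_terms):             # permutation heuristic
        pat = []
        for i, w in enumerate(words):
            pat.append({"LEMMA": w} if lemmatized else {"LOWER": w})
            if wildcard and i < len(words) - 1:               # wildcard heuristic
                pat.append({"OP": "*", "IS_ALPHA": True})
        patterns.append(pat)
    return patterns

nlp = spacy.load("es_core_news_sm")                           # assumed Spanish pipeline
matcher = Matcher(nlp.vocab)
matcher.add("AirEmissions", build_patterns(["emisión", "diésel"]))
matches = matcher(nlp("Las emisiones de los motores diésel aumentaron."))
```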
2
Given a set of AMR/English pairs, divided into train, development, and test sets, we follow these steps: Construct token-level alignments: We use the method proposed in (Pourdamghani et al., 2014) to construct alignments between AMR and English tokens in the training set. Extend training data: We use special realization components for names, dates, and numbers found in the dev/test sets, adding their results to the training corpus. Linearize AMR graphs: We learn to convert AMR graphs into AMR strings in a way that linearized AMR tokens have an English-like order (Section 3). Clean AMR strings: We remove variables, quote marks, and sense tags from linearized AMRs. We also remove *-quantity and *-entity concepts, plus these roles: :op*, :snt*, :arg0, :arg1, :arg2, :name, :quant, :unit, :value, :year, :domain-of.Phrase-Based Machine Translation: We use Moses (Koehn et al., 2007) to train and tune a PBMT system on string/string training data. We then use this system to produce English realizations from linearized development and test AMRs.
2
Inspired by Li et al. 2020, we build a cross-domain concept-resource graph G = (X, A) that includes resource nodes and concept nodes from both the source and target domains (Figure 2). To obtain the node feature matrix X, we use either BERT (Devlin et al., 2019) or Phrase2Vec (Artetxe et al., 2018) embeddings. We consider four edge types to build the adjacency matrix A: A_{c,s}: edges between source concept nodes; A_{rc}: edges between all resource nodes and concept nodes; A_r: edges between resource nodes only; and A_{c,t}: edges between target concept nodes. In unsupervised prerequisite chain learning, A_{c,s} (concept relations of the source domain) are known, and the task is to predict A_{c,t} (concept relations of the target domain). For A_{rc} and A_r, we calculate cosine similarities based on node embeddings, consistent with previous works (Li et al., 2019; Chiu et al., 2020). Cross-Domain Graph Encoder. VGAE (Kipf and Welling, 2016) contains a graph convolutional network (GCN) encoder (Kipf and Welling, 2017) and an inner product decoder. In a GCN, the hidden representation of a node i in the next layer is computed using only the information of direct neighbours and the node itself. To account for cross-domain knowledge, we additionally consider the domain neighbours for each node i. These domain neighbours are a set of common or semantically similar concepts from the other domain. 1 We define the cross-domain graph encoder as $h_i^{(l+1)} = \sigma\left(\sum_{j \in N_i} W^{(l)} h_j^{(l)} + W^{(l)} h_i^{(l)} + \sum_{k \in N_i^D} W_D^{(l)} h_k^{(l)}\right)$, where N_i denotes the set of direct neighbours of node i, N_i^D is the set of domain neighbours, and W_D and W are trainable weight matrices. To determine the domain neighbours, we compute cosine similarities and match the concept nodes only from the source domain to the target domain: cosine(h_s, h_t). The values are then normalized into the range [0, 1], and we keep the top 10% of domain neighbours. 2 DistMult Decoder. We optimize the original inner product decoder from VGAE. To predict the link between a concept pair (c_i, c_j), we apply the DistMult (Yang et al., 2015a) method: we take the output node features from the last layer, X̂, and define the following score function to recover the adjacency matrix by learning a trainable weight matrix R: $\hat{A} = \hat{X} R \hat{X}^{\top}$. A Sigmoid function is used to predict positive/negative labels from Â.
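A minimal PyTorch sketch of the decoder described above, assuming R is a full trainable matrix and the encoder's output features are given:

```python
import torch
import torch.nn as nn

class DistMultDecoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.R = nn.Parameter(torch.randn(dim, dim) * 0.01)  # trainable relation matrix R

    def forward(self, x_hat):
        # x_hat: (n_nodes, dim) output node features of the cross-domain graph encoder
        scores = x_hat @ self.R @ x_hat.t()     # A_hat = X_hat R X_hat^T
        return torch.sigmoid(scores)            # probability of a prerequisite link per concept pair
```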
2
We define a dialogue corpus D = {D 1 , . . . , D M } has M dialogue samples, and each dialogue sample D m has T turns of conversational exchange {U 1 , S 1 . . . , U T , S T } between a user and a system. For every utterance U t or S t , we have human-annotated domain, user intent, slot, and dialogue act labels. We first feed all the utterances to a pre-trained model and obtain user and system representations. In this section, we first discuss how we design our classifier probe and then introduce our mutual information probe's background and usage.We use a simple classifier to transform those representations for a specific task and optimize it with annotated data.EQUATIONwhereE i ∈ R d B is the output representation with dimension d B from a pre-trained model, F F N ∈ R N ×d Bis a feed-forward layer that maps from dimension d B to a prediction with N classes, and A is an activation layer. For domain identification and intent detection, we use a Softmax layer and backpropagate with the cross-entropy loss. For dialogue slot and act prediction, we use a Sigmoid layer and the binary cross-entropy loss since they are multi-label classification tasks.We first cluster utterances in an unsupervised fashion using either K-means (Lloyd, 1982) or Gaussian mixture model (GMM) (Reynolds, 2009) with K clusters. Then we compute the adjusted mutual information score (Vinh et al., 2010) between the predicted clustering and each of the true clusterings (e.g., domain and intent) for different hyperparameters K. Note that the predicted clustering is not dependent on any particular labels.K-means is a common clustering algorithm that aims to partition N samples into K clusters A = {A 1 , . . . , A K } in which each sample is assigned to a cluster centroid with the nearest mean.EQUATIONwhere µ i is the centroid of the A i cluster and the algorithm is updated in an iterative manner. On the other hand, GMM assumes a certain number of Gaussian distributions (K mixture components). It takes both mean and variance of the data into account, while K-means only consider the data's mean. By the Expectation-Maximization algorithm, GMM first calculates each sample's probability belongs to a cluster A i during the E-step, then updates its density function to compute new mean and variance during the M-step.In our experiments, we cluster separately for user utterances U and system response S. Note that K is a hyper-parameter since we may not know the true distribution in a real scenario. To avoid the local minimum issue, we run multiple times (typically ten runs) and use the best clustering result for mutual information evaluation.To evaluate two clusterings' quality, we compute the ANMI score between a clustering and its ground-truth annotation. ANMI is adjusted for randomness, which accounts for the bias in mutual information, giving high values to the clustering with a larger number of clusters. ANMI has a value of 1 when two partitions are identical, and an expected value of 0 for random (independent) partitions.More specifically, we assume two label clusterings, A and B, that have the same N objects. The mutual information (MI) between A and B is defined byEQUATIONwhere P (i, j) = |A i ∩B j |/N is the probability that a randomly picked sample falls into both A i and B j classes. Similarly, P (i) = |A i |/N and P (j) = |B j |/N are the probabilities that the sample falls into either the A i or B j class. 
The normalized mutual information (NMI) normalizes MI with the mean of the entropies, and is defined as $NMI(A, B) = \frac{MI(A, B)}{\mathrm{mean}(H(A), H(B))}$, where $H(A) = -\sum_{i=1}^{|A|} P(i) \log(P(i))$ is the entropy of the A clustering, which measures the amount of uncertainty of the partition set. MI and NMI are not adjusted for chance and will tend to increase as the number of clusters increases, regardless of the actual amount of "mutual information" between the label assignments. Therefore, the adjusted normalized mutual information (ANMI) is designed to correct the NMI score with its expectation, and is defined as $ANMI(A, B) = \frac{MI(A, B) - E[MI(A, B)]}{\mathrm{mean}(H(A), H(B)) - E[MI(A, B)]}$.
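The probing procedure (cluster, then score against an annotation with adjusted mutual information) can be sketched with scikit-learn as follows; selecting the best of ten runs by the clustering objective is our reading of the description:

```python
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_mutual_info_score

def probe_anmi(reprs, true_labels, k, method="kmeans", n_runs=10, seed=0):
    """reprs: (n_utterances, d) representations from a pre-trained model; true_labels: e.g. domains."""
    best_model, best_obj = None, None
    for run in range(n_runs):                      # several runs to avoid poor local minima
        if method == "kmeans":
            m = KMeans(n_clusters=k, random_state=seed + run).fit(reprs)
            obj = -m.inertia_                      # higher is better
        else:
            m = GaussianMixture(n_components=k, random_state=seed + run).fit(reprs)
            obj = m.score(reprs)                   # mean log-likelihood
        if best_obj is None or obj > best_obj:
            best_model, best_obj = m, obj
    pred = best_model.predict(reprs)
    return adjusted_mutual_info_score(true_labels, pred)
```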
2
We rely on an adversarial text generator as the backbone of our method. However, we still need data to pre-train the generator. Since we assume access to a general purpose OOD data, we delineate general principles to extract a training set from this source. Finally, we apply KD on a combination of the OOD training data and the adversarial training data. Figure 1 gives a visual illustration of the proposed ZSKD method.Our ZSKD method assumes that we do not have the original training data on which the teacher model is trained as well as any other task specific data. Similar to (Krishna et al., 2019) , we construct an out-of-domain (OOD) dataset. The idea is that using a general purpose corpus of text, we randomly sample sentences from the text. Then depending on the task we add simple heuristics to make the text suitable for the problems at hand. We summarize a list of targeted tasks all taken from the GLUE benchmark (Wang et al., 2018) .Sentiment Classification (SST-2). We do not modify the sampled sentences for this task but simply feed them to the teacher to get the sentiment output distribution, even though most sentences in the sampled text would have neutral sentiment.Pairwise Sentence Classification The training sequence typically consists of two input sentences. Depending on the task these can be:• In Natural Language Inference (NLI), the two input sentences are the hypothesis and the premise. Depending on the task, the goal can be to determine whether the hypothesis is true (entailment), false (contradiction), or undetermined (neutral) given the premise (MNLI) or whether the hypothesis entails the premise in the form of binary classification (RTE). For these tasks, we generate the OOD data by randomly extracting a sentence from the corpus to serve as the premise and then by random chance construct the hypothesis to either be a slightly changed version of the premise or be a completely new random sentence.• In tasks such as Quora Question Pair (QQP) and Microsoft Research Paraphrase Corpus (MRPC), the goal is to determine if the two input sentences are semantically equivalent or not. We follow a strategy similar to NLI tasks but for the QQP task we post-process the generated sentences by appending a question mark at the end.Question NLI. The goal of this task is to determine if the given paragraph contains the answer to the input question. We sample a paragraph from our corpus and, randomly, either sample a segment from within the paragraph to form a question or sample an unrelated sentence from the corpus.Then, we randomly append a questioning word such as Who, Where, What etc. to the start of the segment and a question mark at the end.Inspired by (Micaelli and Storkey, 2019) and on the promise of adversarial training for NLP (Zhu et al., 2019) , the key ingredient of our proposed method is to learn a generator that generates training samples. Our adversarial generation is close to adversarial training and we consider the adversarial samples to be perturbations of the OOD training data D. Therefore we pre-train the generator to follow the distribution of D. Specifically, our generator G is a masked language model, such as BERT, which can generate text from noise such that:EQUATIONwhere x p is the output of the generator and is a sequence of tokens, φ is the set of generator parameters and z ∼ N (0, std).The generator is pre-trained by minimizing the following loss function:EQUATIONwhere H CE is the cross-entropy loss and x k is a sample from the OOD training set D. 
Note that the noise z matches the length and dimension of the embedding of x k , with the classification token (CLS) added at the beginning and the separator tokens (SEP) inserted at the same locations as in x k .Most methods in adversarial training for NLP (Zhang et al., 2020) perturb the word embeddings instead of generating text due to the discreteness problem of text. In order to generate text, we need an argmax operation which breaks end-to-end differentiability. Since our goal is KD, embedding perturbation introduces the problem of size mismatch between the student and teacher embedding. Instead we generate text and sample from the argmax by using the Gumbel-Softmax distribution (Kusner and Hernández-Lobato, 2016; Jang et al., 2016), a continuous distribution over the simplex that can approximate one-hot samples from a discrete distribution.Once pre-trained, the generator is trained with two losses. The first loss maximises the KL-divergence between the teacher and student model on the generated data. The teacher and student model parameters are fixed. The goal is to generate training samples where the teacher and student diverge the most. However, this can lead to degenerate samples which are not useful for transferring teacher knowledge. The second loss is the same as Equation 3 and prevents the generator from diverging too much from the OOD training data. The overall loss, L T , for generator training is thus:EQUATIONwhere T is the teacher, S is the student, x k is a sample from the OOD training set, x p is the softmax output of the generator and x p , the one-hot output, is defined as:EQUATIONHere σ Gumbel is the Gumbel-Softmax and x l are the logits of the generator.In each training loop we train the generator for n G steps and the student for n S steps. Specifically, the student is optimized using a joint KD loss between the data samples generated from the generator G and the data samples coming from the OOD dataset. Overall the student is trained on:L G = D KL (T (x p ) || S(x p )) L OOD = D KL (T (x k ) || S(x k )) L = α • L G + (1 − α) • L OOD (6)where x k and x p are as defined above and α is a weight interpolation parameter. Note that unlike regular KD where we have a hard loss and a soft loss, here we have two soft losses. One matches the student and the teacher output on adversarially augmented data and the other on OOD data respectively. Algorithm 1 presents all the steps of our procedure.Algorithm 1: Zero-shot KD (Complete)pretrain: T (•) dataset: D initialize: G(•; φ) initialize: S(•; θ) # Pre-train Generator for k ← 1, 2, ..., N do z ← {z0, . . . , zl} ∼ N (0, std) x k ∈ D xp ← G(z; φ) LGP ← HCE(x k , xp) φ ← φ − λ ∂LGP ∂φ decay λ end # Adversarial Train for k ← 1, 2, ..., N do x k ← D # Adversarial Step for 1, 2, ..., nG do z ← {z0, . . . , zl} ∼ N (0, std) xlogits ← G(z; φ) xp ← Gumbel-Softmax(xlogits) LA ← −DKL(T (x p ) || S(x p )) LF ← HCE(x k , xp) LT ← L A +L F 2 φ ← φ − η ∂LT ∂φ end # Knowledge Distillation for 1, 2, ..., nS do z ← {z0, . . . , zl} ∼ N (0, std) xlogits ← G(z; φ) xp ← Gumbel-Softmax(xlogits)LG← DKL(T (x p ) || S(x p )) LOOD ← DKL(T (x k ) || S(x k )) L ← α • LG + (1 − α) • LOOD θ ← θ − η ∂L ∂θ end decay η end
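A small sketch of the student objective in Eq. (6); the optional temperature T is an assumption, and the logits are presumed to come from the teacher and student forward passes of Algorithm 1:

```python
import torch.nn.functional as F

def student_loss(teacher_logits_gen, student_logits_gen,
                 teacher_logits_ood, student_logits_ood, alpha=0.5, T=1.0):
    def kd(t, s):
        # KL(teacher || student), as in L_G and L_OOD
        return F.kl_div(F.log_softmax(s / T, dim=-1),
                        F.softmax(t / T, dim=-1), reduction="batchmean")
    l_g = kd(teacher_logits_gen, student_logits_gen)     # L_G on generator samples x_p
    l_ood = kd(teacher_logits_ood, student_logits_ood)   # L_OOD on OOD samples x_k
    return alpha * l_g + (1 - alpha) * l_ood
```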
2
We denote four kinds of features as F = {WORD, SENSE, S SENSE, SENTI} where WORD is a set of word-level features, SENSE is a set of sense-level features, S SENSE is a set of Regarding senselevel feature, we applied two different Word-Net based WSD algorithms, SimLesk and MostFreq (Miller et al., 2013) . Correspondingly, instead of SENSE, we have two different feature sets WN-S-LESK and WN-MFS. Thus, we finally have the feature list of F =Regarding semantic features, we focus on extracting topic information given input texts from different people.We firstly recognize lexical knowledge by applying Word-Net semantic labels 6 . For example, based on the given personal texts, after extracting word n-grams, the topic information is detected and organized in the form of pos.suffix. Here, pos denotes part-of-speech and suffix organizes groups of synsets into different categories (e.g., a tiger can be categorized into noun.animal and a tree is categorized into noun.plant). In this paper, DKPro Uby (Gurevych et al., 2012) is further employed to extract all above required information to represent in pos and suffix from given texts.For sentiment features, we extracted emotional information, which are extremely important to characterize personality according to Pennebaker and King (1999) . For example, neurotics use more negative emotion words (e.g., ugly and hurt) than positive emotion (e.g., happy and love). In details, we applied the sentiment word disambiguation algorithm (i.e., SentiWordNet) to match the disambiguated word senses for each term with three scores, Positive (P), Negative (N) and Objective/Neutral (O) scores. Finally, we obtained the individually final P, N and O scores for each personal text, which were averaged by the total number of sentiment features.Above, we have discussed and presented feature extraction for APC. However, one primary challenge in feature extraction is word sense ambiguity. To address this challenge, word sense disambiguation (WSD) is broadly applied to match the exact sense of an ambiguous word in a particular context. For word, sense, supersense, and sentiment features, it is necessary to first disambiguate the words to reduce the semantic gap.However, due to the high ambiguity of words, it is extremely challenging to detect the exact sense in a certain context. Postma et al. (2016) showed that current WSD systems perform an extremely poor performance on low frequent senses. To address this challenge, we propose an algorithm Selective.WSD to reduce the side effect of WSD by finding senses of a word subset rather than all possible words in the BoW model. Selective.WSD is presented in Algorithm 1. The algorithm takes a wordlevel document as an input to return a mixture of word-level and sense-level feature list. The wordLevelFeature(f) function in the algorithm will return a word-level feature (e.g., bank) of a sense-level feature (e.g., bank%1) by removing the extra notation (e.g., %1). The function of wsd.annotateSenses in the algorithm is implemented based on DKPro WSD (Miller et al., 2013) -annotating the exact sense of a disambiguated word in a context. In the following experimental study section, we will show the impact of WSD on personality prediction.Feature selection is naturally motivated by the need to automatically select the best determinants for each personality trait. Thus, we can derive a qualitative description of the state f eaturesL f return f eaturesL characteristics. In this way, the noisy features are filtered out. 
We used the χ 2 feature selection algorithm before feeding the features (i.e., word, sense, supersense, and sentiment features) to a classifier. The feature selection strategy was chosen empirically based on our preliminary experiments on training dataset, where we compared χ 2 with three other state-of-the art feature selection methods for the supervised classification (i.e., Information Gain, Mutual Information, and Document Frequency thresholding (Yang and Pedersen, 1997) ), and χ 2 outperformed.
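As an illustration, the χ2 selection step can be expressed with scikit-learn as below; the number of kept features and the downstream classifier are placeholders, since the excerpt does not specify them:

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def build_classifier(k_best=1000):
    # chi2 requires non-negative features, which holds for word/sense counts and sentiment scores
    return make_pipeline(SelectKBest(chi2, k=k_best), LinearSVC())

# clf = build_classifier(); clf.fit(X_train_features, y_train_trait)
```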
2
In this paper, we mainly investigate the following research questions:• How important are the self-attention patterns of different heads for bridging anaphora resolution?• Whether pre-trained LMs capture information beneficial for resolving bridging anaphora in English?• How does distance between anaphorantecedent and context influence pre-trained language models for bridging inference?We designed a series of experiments to answer these questions which will be detailed in the coming sections. In these experiments, we used Py-Torch (Wolf et al., 2020) implementation of BERTbase-cased, BERT-large-cased, ROBERTA-base and ROBERTA-large pre-trained transformer language models with the standard number of layers, attention heads, and parameters. In the attention head-based experiments, we have limited our investigation only to the BERT-base-cased model as it is relatively smaller compared to other models and findings of this model can be generalized to other models as well.Probing Dataset We used ISNotes (Markert et al., 2012) dataset for all experiments. We choose this corpus because it contains "unrestricted anaphoric referential bridging" annotations among all available English bridging corpora (Roesiger et al., 2018) which covers a wide range of different relations. ISNotes contains 663 bridging anaphors but only 622 anaphors have noun phrase antecedents. 1 In our experiments, we only consider these 622 anaphors for investigation. For any anaphor, the predicted antecedent is selected from the set of antecedent candidates. This set is formed by considering all the mentions which occur before the anaphor. We obtained the candidate set for each anaphor by considering "gold mentions" annotated in ISNotes. Further, we observed that only 531 anaphors have antecedents in either previous 2 sentences from the anaphor or the first sentence of the document. Therefore, in the experiments when antecedent candidates are considered from the window of previous two sentences plus the document's first sentence, only 531 anaphors are considered. In all the experiments, accuracy is measured as the ratio between correctly linked anaphors to the total anaphors used in that particular experiment (not total 663 anaphors).
2
For solving the challenges proposed in MAMI, we build a system whose architecture is depicted in Figure 1. In a nutshell, our system works as follows. First, we select some of the documents of the MAMI training dataset to create a custom validation dataset. Second, we extract a subset of language-independent linguistic features (LF), non-contextual sentence (SE) and word embeddings (WE) from fastText, contextual word embeddings from BERT (BF), and image embeddings from BEiT (BI). Third, we train several neural network models through a hyperparameter tuning process. The models evaluated include one model per feature set as well as models based on several feature sets combined. Besides, we evaluate two ensembles based on soft voting (mode) and on averaging all the probabilities (mean) of the neural networks trained with each feature set. To handle the multi-label challenge, we repeat this process per trait; that is, we treat the problem as a binary classification problem per trait. Next, some insights into the feature sets involved are given. As memes are images with overlaying text, this shared task has a multi-modal perspective. Our proposal uses several feature sets based on the texts and one based on the images. First, we use the UMUTextStats tool to obtain a set of relevant psycho-linguistic features (LF). This tool has already been used in studies related to misogyny, such as (García-Díaz et al., 2021, 2022a). The LF include low-level linguistic categories concerning phonetics and syntax, and high-level features related to semantics and pragmatics, including features proper to figurative language (del Pilar Salas-Zárate et al., 2020). Moreover, these kinds of features have proven to be effective for other automatic classification tasks such as irony and satire identification (García-Díaz and Valencia-García, 2022). As some of the dictionaries of UMUTextStats are not translated to English, we select a subset of language-independent linguistic features based on linguistic metrics, Part-of-Speech features, and the usage of social media jargon. Second, we extract sentence and word embeddings using the pre-trained fastText model (Joulin et al., 2016). Third, we use contextual sentence embeddings from BERT (Devlin et al., 2018), whose sentence embeddings are obtained in a similar manner as described for S-BERT (Reimers and Gurevych, 2019). Fourth, we use visual embeddings from BEiT (Bao et al., 2021), which is a self-supervised model trained on ImageNet-21k with more than 21,000 labels. BEiT learns image embeddings as a sequence of fixed-size patches using relative positions. This allows us to perform the classification using a mean-pooling strategy over the final hidden states of the patches instead of placing a linear layer on top of the final classification token. However, the suggested way to fine-tune the model for downstream tasks is to attach a new linear layer that uses the last hidden state of the classification token. Once all features are obtained, we train a neural network per feature set. The training of each neural network is performed with hyperparameter optimisation.
Each training run involved: (1) 20 shallow neural networks, which are multi-layer perceptrons (MLPs) composed of one or two hidden layers with the same number of neurons per layer, connected with one activation function (linear, ReLU, sigmoid, or tanh); and (2) 5 deep networks, which are MLPs with between 3 and 8 hidden layers, in which the neurons are arranged per layer in different shapes, namely brick, triangle, diamond, rhombus, and funnel, connected with an activation function (sigmoid, tanh, SELU, or ELU). The learning rate of the deep-learning models is 10e-03 or 10e-04. Besides, for the neural networks with the pre-trained word embeddings from fastText (WE), we also evaluate 10 convolutional neural networks (CNN) and 10 bidirectional recurrent neural networks (BiLSTM). In all experiments, we evaluate two batch sizes, 16 and 32; these small values were selected because the training split was balanced. We also evaluate a dropout mechanism ([False, .1, .2, .3]) for regularisation. Apart from the neural networks trained with each feature set separately, we evaluate different ways of combining the strengths of the feature sets in the same system. The combination of the feature sets is performed using two strategies: (1) knowledge integration, in which all feature sets are used as input to the same neural network; for this, we train another neural network, repeating the hyperparameter optimisation stage; and (2) ensemble learning, in which the outputs of the neural networks trained with each feature set are combined by averaging the predictions or taking the mode of the predictions.
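To make the ensembling step concrete, the following sketch (our own, not the authors' code) combines the per-feature-set networks for one binary trait using the two strategies mentioned above: averaging the probabilities (mean) and voting on hard labels (mode).

```python
import numpy as np

def ensemble_predict(prob_matrix, strategy="mean", threshold=0.5):
    """prob_matrix: shape (n_models, n_samples), positive-class probability per sample
    from each per-feature-set network (e.g. LF, SE, WE, BF, BI)."""
    prob_matrix = np.asarray(prob_matrix)
    if strategy == "mean":                       # average the probabilities, then threshold
        return (prob_matrix.mean(axis=0) >= threshold).astype(int)
    if strategy == "mode":                       # majority vote over the models' hard labels
        votes = (prob_matrix >= threshold).astype(int)
        return (votes.sum(axis=0) > prob_matrix.shape[0] / 2).astype(int)
    raise ValueError(f"unknown strategy: {strategy}")

probs = [[0.9, 0.2], [0.6, 0.4], [0.7, 0.8], [0.3, 0.1], [0.8, 0.2]]  # 5 models, 2 memes
print(ensemble_predict(probs, "mean"), ensemble_predict(probs, "mode"))
```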
2
Our 2-megabyte test sample consisted of 42 selections of hand-segmented, grammatically tagged Thai text (LINKS). The original text was split into some 415,844 words over 53,242 lines, leaving 362,602 potential error points. We removed spaces and tags, then replaced English text, numbers, and punctuation (unambiguous breakpoints) with newlines. A dictionary-based method resegmented the text, generating all possible parse trees in the process. We intentionally used a very large word list, over 70,000 entries including all words from the text sample, to maximize opportunities for ambiguous partitions and to ensure that every sentence would be segmentable. Finally, special-purpose software selected outcomes that involved alternative partitions at least two words long. Given the string topend, we would select top_end / to_pend as an ambiguous partition. However, given toolbox, we would not select tool_box / tool_box as alternatives, since the two partitions are identical. With a few notable exceptions (cos nam dii = good water vs. lig ndrndii = bile), these are not open to ambiguous interpretation unless the context is at least three words long (which gives the central word the opportunity of binding either left, right, or not at all). Moreover, the exocentric exceptions should be found in any ordinary dictionary, while very, very large numbers of unambiguous compounds are an inescapable artifact of any large corpus-based word list. The procedure described above produced some 36,267 candidate sequences, of which 9,253 were distinct (available on-line, along with most of the derived data discussed here, at the Southeast Asian Language Data Archives, http://seasrc.th.net/sealda). We investigated three groups in detail: the most frequent 5%, 5% selected at random from the remainder, and 5% taken at random from single-appearance entries.
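The selection criterion can be illustrated with the small sketch below, using a toy English word list in place of the 70,000-entry Thai lexicon; it enumerates all dictionary segmentations of a string and flags the string when at least two distinct partitions of two or more words exist.

```python
LEXICON = {"top", "end", "to", "pend", "tool", "box", "toolbox"}  # toy stand-in for the Thai word list

def segmentations(s):
    """All ways of splitting s into dictionary words."""
    if not s:
        return [[]]
    parses = []
    for i in range(1, len(s) + 1):
        if s[:i] in LEXICON:
            parses += [[s[:i]] + rest for rest in segmentations(s[i:])]
    return parses

def is_ambiguous(s):
    """Ambiguous partition: at least two distinct parses, each at least two words long."""
    multiword = {tuple(p) for p in segmentations(s) if len(p) >= 2}
    return len(multiword) >= 2

print(is_ambiguous("topend"))   # True:  top_end vs to_pend
print(is_ambiguous("toolbox"))  # False: tool_box is the only multi-word partition
```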
2
In this section, we detail our word-level adversarial attack model. It incorporates two parts, namely the sememe-based word substitution method and PSO-based adversarial example search algorithm.The sememes of a word are supposed to accurately depict the meaning of the word (Dong and Dong, 2006) . Therefore, the words with the same sememe annotations should have the same meanings, and they can serve as the substitutes for each other. Compared with other word substitution methods, mostly including word embedding-based (Sato et al., 2018) , language model-based (Zhang et al., 2019a) and synonym-based methods (Samanta and Mehta, 2017; Ren et al., 2019) , the sememe-based word substitution method can achieve a better trade-off between quality and quantity of substitute words.For one thing, although the word embedding and language model-based substitution methods can find as many substitute words as we want simply by relaxing the restrictions on embedding distance and language model prediction score, they inevitably introduce many inappropriate and low-quality substitutes, such as antonyms and semantically related but not similar words, into adversarial examples which might break the semantics, grammaticality and naturality of original input. In contrast, the sememe-based and, of course, the synonym-based substitution methods does not have this problem.For another, compared with the synonym-based method, the sememe-based method can find more substitute words and, in turn, retain more potential adversarial examples, because HowNet annotates sememes for all kinds of words. The synonymbased method, however, depends on thesauri like WordNet (Miller, 1995) , which provide no synonyms for many words like proper nouns and the number of a word's synonyms is very limited. An empirical comparison of different word substitution methods is given in Section 4.6.In our sememe-based word substitution method, to preserve grammaticality, we only substitute content words 1 and restrict the substitutes to having the same part-of-speech tags as the original words. Considering polysemy, a word w can be substituted by another word w * only if one of w's senses has the same sememe annotations as one of w * 's senses. When making substitutions, we conduct lemmatization to enable more substitutions and delemmatization to avoid introducing grammatical mistakes.Before presenting our algorithm, we first explain what the concepts in the original PSO algorithm correspond to in the adversarial example search problem. Different from original PSO, the search space of word-level adversarial example search is discrete. A position in the search space corresponds to a sentence (or an adversarial example), and each dimension of a position corresponds to a word. Formally,x n = w n 1 • • • w n d • • • w n D , w n d ∈ V(w o d ),where D is the length (word number) of the original input, w o d is the d-th word in the original input, and V(w o d ) is composed of w o d and its substitutes. The optimization score of a position is the target label's prediction probability given by the victim model, where the target label is the desired classification result for an adversarial attack. Taking a binary classification task as an example, if the true label of the original input is "positive", the target label is "negative", and vice versa. In addition, a particle's velocity now relates to the position change probability, i.e., v n d determines how probable w n d is substituted by another word. Next we describe our algorithm step by step. 
First, for the Initialize step, since we expect the adversarial examples to differ from the original input as little as possible, we do not use random initialization. Instead, we randomly substitute one word of the original input to determine the initial position of a particle. This operation is actually the mutation operation of genetic algorithms, which has also been employed in some studies on discrete PSO (Higashi and Iba, 2003). We repeat this mutation N times to initialize the positions of N particles. Each dimension of each particle's velocity is randomly initialized between −V_max and V_max. For the Record step, our algorithm is the same as the original PSO algorithm. For the Terminate step, the termination condition is that the victim model predicts the target label for any of the current adversarial examples. For the Update step, considering the discreteness of the search space, we follow Kennedy and Eberhart (1997) and adapt the updating formula of the velocity to EQUATION where ω is still the inertia weight, and I(a, b) is defined as EQUATION. Following Shi and Eberhart (1998), we let the inertia weight decrease as the number of iterations increases, aiming to make the particles highly dynamic so that they explore more positions in the early stage and gather around the best positions quickly in the final stage. Specifically, EQUATION where 0 < ω_min < ω_max < 1, and T and t are the maximum and current iteration numbers. The updating of positions also needs to be adapted to the discrete search space. Inspired by Kennedy and Eberhart (1997), instead of performing addition, we adopt a probabilistic method to update the position of a particle towards the best positions. We design a two-step position update. In the first step, a new movement probability P_i is introduced, with which a particle determines whether it moves to its individual best position as a whole. Once a particle decides to move, the change of each dimension of its position depends on the same dimension of its velocity, specifically with probability sigmoid(v^n_d). Whether or not a particle has moved towards its individual best position, it is processed in the second step. In the second step, each particle determines whether to move to the global best position with another movement probability P_g, and the change of each position dimension again relies on sigmoid(v^n_d). P_i and P_g vary with the iteration to enhance search efficiency by adjusting the balance between local and global search, i.e., encouraging particles to explore more space around their individual best positions in the early stage and to search for better positions around the global best position in the final stage. Formally, EQUATION where 0 < P_min < P_max < 1. Besides, to enhance the search in unexplored space, we apply mutation to each particle after the update step. To avoid excessive modification, mutation is conducted with probability EQUATION where k is a positive constant, x^o represents the original input, and E measures the word-level edit distance (the number of different words between two sentences). E(x^n, x^o)/D is defined as the modification rate of an adversarial example.
Table 1: Details of the datasets and the accuracy of the victim models. "#Class" is the number of classes, "Avg. #W" the average sentence length (number of words), "Train", "Val" and "Test" the instance numbers of the training, validation and test sets, and "BiLSTM %ACC" / "BERT %ACC" the classification accuracy of BiLSTM and BERT.
After mutation, the algorithm returns to the Record step.
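A minimal sketch of the two-step probabilistic position update and the post-update mutation is given below. It is our illustrative reading of the description above; in particular, the mutation probability schedule is an assumed decreasing function of the modification rate, since the exact formulas appear only in the elided equations.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_position(x, velocity, p_best, g_best, P_i, P_g):
    """Two-step update for one particle (a list of words).
    Step 1: with probability P_i, move towards the individual best position;
    step 2: with probability P_g, move towards the global best position.
    Each dimension d actually changes with probability sigmoid(velocity[d])."""
    x = list(x)
    if random.random() < P_i:
        x = [b if random.random() < sigmoid(v) else w for w, b, v in zip(x, p_best, velocity)]
    if random.random() < P_g:
        x = [b if random.random() < sigmoid(v) else w for w, b, v in zip(x, g_best, velocity)]
    return x

def mutate(x, x_orig, substitutes, k=2.0):
    """Mutation applied after the update step. The probability below (decreasing in the
    modification rate E/D, clipped at 0) is an assumption, not the paper's exact formula."""
    D = len(x_orig)
    E = sum(a != b for a, b in zip(x, x_orig))   # word-level edit distance
    if random.random() < max(0.0, 1.0 - k * E / D):
        d = random.randrange(D)
        options = list(substitutes(x_orig[d]))   # sememe-based substitutes of the original word
        if options:
            x = list(x)
            x[d] = random.choice(options)
    return x
```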
2
The proposed policy network adopts an encoder-decoder architecture (Figure 1). The input to the encoder is the current-turn dialogue state, which follows Li et al. (2018)'s definition. It contains the policy actions from the previous turn, the user dialogue acts from the current turn, the user requested slots, the user informed slots, the agent requested slots, and the agent proposed slots. We treat the dialogue state as a sequence and adopt a GRU (Cho et al., 2014) to encode it. The encoded dialogue state is a sequence of vectors E = (e_0, ..., e_l) and the last hidden state is h_E. The CAS decoder recurrently generates tuples at each step. It takes h_E as the initial hidden state h_0. At each decoding step, the input contains the previous (continue, act, slots) tuple (c_{t-1}, a_{t-1}, s_{t-1}). An additional vector k containing the number of results from the knowledge base (KB) query and the current turn number is given as input. The output of the decoder at each step is a tuple (c, a, s), where c ∈ {<continue>, <stop>, <pad>}, a ∈ A (one act from the act set), and s ⊂ S (a subset of the slot set). As shown in Figure 2, the gated CAS cell contains three sequentially connected units for outputting continue, act, and slots, respectively (Figure 2: the gated CAS recurrent cell contains three units, the continue unit, the act unit, and the slots unit; the three units use a gating mechanism and are sequentially connected; the KB vector k is not shown for brevity).
The Continue unit maps the previous tuple (c_{t-1}, a_{t-1}, s_{t-1}) and the KB vector k into x^c_t. The hidden state from the previous step h_{t-1} and x^c_t are the inputs to a GRU^c unit that produces output g^c_t and hidden state h^c_t. Finally, g^c_t is used to predict c_t through a linear projection and a softmax:
x^c_t = W^c_x [c_{t-1}, a_{t-1}, s_{t-1}, k] + b^c_x,
(g^c_t, h^c_t) = GRU^c(x^c_t, h_{t-1}),
P(c_t) = softmax(W^c_g g^c_t + b^c_g),
L_c = − Σ_t log P(c_t). (1)
The Act unit maps the tuple (c_t, a_{t-1}, s_{t-1}) and the KB vector k into x^a_t. The hidden state from the continue unit h^c_t and x^a_t are the inputs to a GRU^a unit that produces output g^a_t and hidden state h^a_t. Finally, g^a_t is used to predict a_t through a linear projection and a softmax:
x^a_t = W^a_x [c_t, a_{t-1}, s_{t-1}, k] + b^a_x,
(g^a_t, h^a_t) = GRU^a(x^a_t, h^c_t),
P(a_t) = softmax(W^a_g g^a_t + b^a_g),
L_a = − Σ_t log P(a_t). (2)
The Slots unit maps the tuple (c_t, a_t, s_{t-1}) and the KB vector k into x^s_t. The hidden state from the act unit h^a_t and x^s_t are the inputs to a GRU^s unit that produces output g^s_t and hidden state h^s_t. Finally, g^s_t is used to predict s_t through a linear projection and a sigmoid. Let z^i_t be the i-th slot's ground truth. EQUATION The overall loss is the sum of the losses of the three units: L = L_c + L_a + L_s.
An example of the same dialogue turn under the different output representations:
annotation: inform(moviename=The Witch, The Other Side of the Door, The Boy; genre=thriller), multiple choice(moviename)
classification: inform+moviename, inform+genre, multiple choice+moviename
sequence: 'inform' '(' 'moviename' '=' ';' 'genre' '=' ')' 'multiple choice' '(' 'moviename' ')' '<eos>'
CAS sequence: (<continue>, inform, {moviename, genre}) (<continue>, multiple choice, {moviename}) (<stop>, <pad>, {})
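A minimal PyTorch sketch of the gated CAS cell described above is shown next. Hidden sizes, the flat one-hot/multi-hot encoding of (c, a, s, k), and the use of greedy predictions inside the cell (rather than feeding ground-truth c and a with teacher forcing during training) are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCASCell(nn.Module):
    """Continue, act, and slots units, sequentially connected as in Figure 2."""
    def __init__(self, n_acts, n_slots, kb_dim, hidden=128):
        super().__init__()
        in_dim = 3 + n_acts + n_slots + kb_dim        # [c, a, s, k] concatenated
        self.xc = nn.Linear(in_dim, hidden)
        self.xa = nn.Linear(in_dim, hidden)
        self.xs = nn.Linear(in_dim, hidden)
        self.gru_c = nn.GRUCell(hidden, hidden)
        self.gru_a = nn.GRUCell(hidden, hidden)
        self.gru_s = nn.GRUCell(hidden, hidden)
        self.out_c = nn.Linear(hidden, 3)             # <continue>, <stop>, <pad>
        self.out_a = nn.Linear(hidden, n_acts)
        self.out_s = nn.Linear(hidden, n_slots)

    def forward(self, c_prev, a_prev, s_prev, k, h_prev):
        # continue unit
        h_c = self.gru_c(self.xc(torch.cat([c_prev, a_prev, s_prev, k], -1)), h_prev)
        p_c = torch.softmax(self.out_c(h_c), -1)
        c = F.one_hot(p_c.argmax(-1), 3).float()
        # act unit (conditioned on the predicted c)
        h_a = self.gru_a(self.xa(torch.cat([c, a_prev, s_prev, k], -1)), h_c)
        p_a = torch.softmax(self.out_a(h_a), -1)
        a = F.one_hot(p_a.argmax(-1), p_a.size(-1)).float()
        # slots unit (multi-label, conditioned on the predicted c and a)
        h_s = self.gru_s(self.xs(torch.cat([c, a, s_prev, k], -1)), h_a)
        p_s = torch.sigmoid(self.out_s(h_s))
        return p_c, p_a, p_s, h_s
```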
2
Firstly, we perform the pre-training of a NMT model until the convergence using the standard log-likelihood (LL) training on the supervised dataset (c.f. Table 1: (A)). The model, thus obtained, acts as our referenced MT system/actor. To demonstrate the improvements brought by the proposed curriculum-based AC fine-tuning over the above LL-based baseline in the sentiment preservation and machine translation tasks, we carry out the task-specific adaption of the pre-trained LL-based MT model (actor) by re-using a subset of the supervised training samples. It is worth mentioning here that, in the fine-tuning stage, the actor does not observe any new sentence, rather re-visit (randomly) a few of the supervised training samples which are now additionally annotated with their sentiment (c.f. Section 4). Actor-critic Overview : Here, we present a brief overview of our AC framework which is discussed at length in the subsequent section. In the AC training, the actor (NMT) receives an input sequence, s, and produces a sample translation,t, which is evaluated by the critic model. The critic feedback is used by the actor to identify those actions that bring it a better than the average reward. In the above context, a feedback of a random critic would be useless for training the actor. Hence, similar to the actor we warm up the critic for one epoch by feeding it samples from the pre-trained actor, while the actor's parameters are frozen. We then fine-tune these models jointly so that -as the actor gets better w.r.t its action, the critic gets better at giving feedback (see Section 4.2 for the dataset and reward used in the pre-training and fine-tuning stages). The details of the loss functions that the actor and critic minimizes are discussed in Section 3.1. Furthermore, to better utilize the data, we finally integrate CL into our AC framework (our proposed approach). Empirical results (Section 5.1) show that during fine-tuning, presenting the data in an easy-to-hard fashion yields a better learned actor model over the one obtained via vanilla (no-curriculum based) fine-tuning. Our proposed framework brought improvements over several baselines without using any additional new training data in the two translation tasks, i.e. (i). English-Hindi 4 and (ii). French-English 5 . Since our proposed framework is a combination of RL via AC method and CL, we first present the details of the main components of the AC model alongside their training procedure in Section 3.1. The details of the reward model are presented in Section 3.2, and then introduce the plausibility of CL in Section 3.3. Finally, we describe our proposed CL-based AC framework in Algorithm 1.The architecture of our AC-based framework is illustrated in Figure 1 . It has three main components viz. (i). an actor : the pre-trained neural agent (NMT) whose parameters define the policy and the agent takes action, i.e. sample translations according to the policy (ii). a reward model : a score function used to evaluate the policy. It provides the actual (true) estimated reward to the translations sampled from the model's policy. To ensure the preservation of sentiment and content in translation, the chosen reward model gives two constituent rewards -a classifier-based score and a SBLEU score (Section 3.2), respectively, and (iii). a critic : a deep neural function approximator that predicts an expected value (reward) for the sampled action. This is then used to center the true reward (step (ii)) from the environment (see Equation 2). 
Subtracting critic estimated reward from the true reward helps the actor to identify action that yields extra reward beyond the expected return. We employ a critic with the same architecture as of the actor.We see from the lower-left side of Figure 1 that, for each input sentence (s), we draw a single sample (t) from the actor, which is used for both estimating gradients of the actor and the critic model as explained in the subsequent section. Critic Network training: During the RL training, we feed a batch of source sentences, B j (s), to the critic encoder and the corresponding sampled translations obtained from the actor, B j t , to the decoder of the critic model. The critic decoder then predicts the rewards (i.e. value estimates, V φ , predicted for each time step of the decoder), and accordingly updates its parameters supervised by the actual (or true) rewards, R(t, s) 6 (steps to obtain this reward is discussed in Section 3.2) from the environment.The objective of the critic network is, thus, to find its parameter value φ that minimizes the mean square error (MSE) between the true reward (see R in Figure 1 ) from the environment, and the critic estimated reward (i.e. values predicted by the critic, see V φ in Figure 1 ). Accordingly, the MSE loss that the critic minimizes is as in Equation 1, where τ being the critic decoding step.EQUATIONNote that in this work we explore the setting, where the reward, R, is observable only at time step τ = n of the actor (a scalar for each complete sentence). Thus, to calculate the difference terms in Equation 1 for n steps, we use the same terminal reward, R, in all the intermediate time steps of the critic decoder.Actor Network training: To update the actor (G) parameters, θ, we use the policy gradient loss; weighted by a reward which is centered via the critic estimated value (i.e. the critic estimated value, V , is subtracted from the true reward, R, from the environment), as in equation 2. The updated reward is finally used to weigh the policy gradient loss, as shown in (3), where τ being the decoding step of the actor.EQUATIONThe actor and the critic both are global-attention based recurrent neural networks (RNN). Algorithm 1 summarizes the overall update framework. We run this algorithm for mini-batches.As our primary goal is to optimize the performance of the pre-trained NMT system towards sentiment classification and machine translation tasks, accordingly we investigate the utility of the following three reward functions (i.e. true reward, R in Equation 1 as R 1 , R 2 , R 3 ) for optimization through our vanilla AC method. Please note, for brevity we only choose the reward that serves the best to our purpose (i.e. harmonic reward as it ensures both, an improved cross-lingual sentiment projection, and a high quality translation with our vanilla AC approach, as discussed in Section 5.1) for our subsequently proposed curriculum-based experiment. The three types of feedbacks we explored are: (i). Sentence-level BLEU as a reward to ensure the content preservation, also referred as R 1 , is calculated following the Equation (4)EQUATION(ii). Element-wise dot product between the gold sentiment distribution and predicted sentiment distribution (e.g. [1, 0, 0] and [0.2, 0.1, 0.7] in Figure 1 evaluates to scalar value 0.2) taken from the softmax layer of the target language classifier to ensure sentiment preservation, also referred as R 2 . To simulate the target language classifier, we fine-tune the pre-trained BERT model (Devlin et al., 2019) . 
The tuned classifier (preparation steps discussed in Section 4.1) is used to obtain the reward R 2 as in Equation 5.EQUATIONand, (iii). Harmonic mean of (i) and (ii) as a reward, also referred to as R 3 to ensure the preservation of both sentiment and semantic during the translation, as in Equation 6.R 3 = (1 + β 2 ) (2 • R 1 • R 2 ) (β 2 • R 1 ) + R 2 (6)where β is the harmonic weight which is set to 0.5.The core of CL is (i). to design an evaluation metric for difficulty, and (ii). to provide the model with easy samples first before the hard ones.In this work, the notion of difficulty is derived from the harmonic reward, R 3 , as follows.Let, X = {x i } N i=1 = (s i , t i ) N i=1denotes the RL training data points. To measure the difficulty of say, i th data point, (s i , t i ), we calculate the reward, R 3 using (t i , s i ). In order to obtain the corresponding sample translation,t i , we use the LL-based model (pre-trained actor). We do this for the N data points. Finally, we sort the RL training data points from easy, i.e., with high harmonic reward, to hard as recorded on their translations. In the fine-tuning step, the entire sorted training data points are divided into mini-batches, B = [B 1 , ..., B M ], and the actor processes a mini-batch sequentially from B. Hence, at the start of each epoch of training, the actor will learn from the easiest examples first followed by the hard examples in a sequential manner until all the M batches exhaust. Another alternative is the use of pacing function f pace (s), which helps to decide the fraction of training data available for sampling at a given time step s, i.e. f pace (s)|D train |. However, we leave it to explore in our future work. The Pseudo-code for the proposed CL-based AC framework including pre-training is described by Algorithm 1.
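For concreteness, the three rewards could be sketched as below. The NLTK smoothing choice is ours, and the F-beta-style weighted harmonic form used for R3 is one reading of Eq. 6, not necessarily the paper's exact formulation.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def reward_bleu(hyp_tokens, ref_tokens):
    # R1: sentence-level BLEU for content preservation (smoothed for short sentences)
    return sentence_bleu([ref_tokens], hyp_tokens,
                         smoothing_function=SmoothingFunction().method1)

def reward_sentiment(gold_dist, pred_dist):
    # R2: dot product of the gold and predicted sentiment distributions,
    # e.g. [1, 0, 0] . [0.2, 0.1, 0.7] = 0.2
    return sum(g * p for g, p in zip(gold_dist, pred_dist))

def reward_harmonic(r1, r2, beta=0.5):
    # R3: beta-weighted harmonic combination of R1 and R2
    if r1 == 0.0 or r2 == 0.0:
        return 0.0
    return (1 + beta ** 2) * r1 * r2 / (beta ** 2 * r1 + r2)

r1 = reward_bleu("it was a very good film".split(), "it was a great movie".split())
r2 = reward_sentiment([1, 0, 0], [0.7, 0.2, 0.1])
print(r1, r2, reward_harmonic(r1, r2))
```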
2
We modify the vector representations of the 2-MRD WSD algorithm using four different vector representations: SVD, PCA, and word embeddings using continuous bag of words (CBOW) and skip-gram. Explicit vectors are word-by-word cooccurrence vectors, and are used as a baseline. The disadvantage of explicit vectors is that the wordby-word co-occurrence matrix is sparse and subject to noise introduced by features that do not distinguish between the different senses of a word. The goal of the dimensionality reduction techniques is to generate vector representations that reduce this type of noise. Each method is described in detail here.In this section we describe the 2-MRD WSD algorithm at a high level: a vector is created for each possible sense of an ambiguous word, and the ambiguous word itself. The appropriate sense is then determined by computing the cosine similarity between the vector representing the ambiguous word and each of the vectors representing the possible senses. The sense whose vector has the smallest angle between it and the vector of the ambiguous word is chosen as the most likely sense.To create a vector for a possible sense, we first obtain a textual description of sense from the UMLS, which we refer to as the extended definition. Each sense, from our evaluation set, was mapped to a concept in the UMLS, therefore, we use the sense's definition plus the definition of its parent/children and narrow/broader relations and associated synonymous terms. After the extended definition is obtained, we create the second-order vector by first creating a word by word co-occurrence matrix in which the rows represent the content words in the extended definition, and the columns represent words that cooccur in Medline abstracts with the words in the definition. Each word in the extended definition is replaced by its corresponding vector, as given in the co-occurrence matrix. The centroid of these vectors constitutes the second order co-occurrence vector that is used to represent the sense.The second-order co-occurrence vector for the ambiguous word is created in a similar fashion, only rather than using words in the extended definition, we use the words surrounding the word in the instance. Second-order co-occurrence vectors were first described by (Schütze, 1998) and extended by (Purandare and Pedersen, 2004) and (Patwardhan and Pedersen, 2006) for the task of word sense discrimination. Later, adapted these vectors for the task of disambiguation rather than discrimination.Singular Value Decomposition (SVD), used in Latent Semantic Indexing, is a factor analysis technique to decompose a matrix, M into a product of three simpler matrices, such thatM = U • Σ • V T .The matrices U and V are orthonormal and Σ is a diagonal matrix of eigenvalues in decreasing order. Limiting the eigenvalues to d, we can reduce the dimensionality of our matrix toM d = U d • Σ d • V T d .The columns of U d correspond to the eigenvectors of M d . Typically this decomposition is achieved without any loss of information. Here though, SVD reduces a word-by-word cooccurrence matrix from thousands of dimensions to hundreds, and therefore the original matrix cannot be perfectly reconstructed from the three decomposed matrices. The intuition is that any information lost is noise, the removal of which causes the similarity and non-similarity between words to be more discernible (Pedersen, 2006) .Principal Component Analysis (PCA) is similar to SVD, and is commonly used for dimensionality reduction. 
The goal of PCA is to map data to a new basis of orthogonal principal components. These principal components are linear combinations of the original features, and are ordered by their variance. Therefore, the first principal components capture the most variance in the data. Under the assumption that the dimensions with the most variance are the most discriminative, dimensions with low variance (the last principal components) can safely be removed with little information loss.PCA may be performed in a variety of ways, however the implementation we chose makes the parallels between PCA and SVD clear. First the co-occurrence matrix, M is centered to produce the matrix C. Centering consists of subtracting the mean of each column from values in that column. PCA is sensitive to scale, and this prevents the variance of features with higher absolute counts from dominating. Mathematically, this allows us to compute the principal components us-ing SVD on C. This is because C T C is proportional to the covariance matrix of M , and is used in the calculation of SVD. Applying SVD to C, such that C = U • Σ • V T , the principal components are obtained by the product of U and Σ (e.g. M P CA = U • Σ). For dimensionality reduction all but the first d columns of M P CA are removed. This captures as much variation in the data with the fewest possible dimensions.The word embeddings method, proposed by (Mikolov et al., 2013) , is a neural network based approach that learns a representation of a wordword co-occurrence matrix. The basic idea is that a neural network is used to learn a series of weights (hidden layer with in the neural network) that either maximizes the probability of a word given the surrounding context, referred to as the continuous bag of words (CBOW) approach, or to maximize the probability of the context given a word, referred to as the Skip-gram approach;For either approach, the resulting hidden layer consists of a matrix where each row represents a word in the vocabulary and columns a word embedding. The basic intuition behind this method is that words closer in meaning will have vectors closer to each other in this reduced space.
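A minimal NumPy sketch of the two factorization variants and the 2-MRD decision rule described above; matrix shapes and inputs are placeholders.

```python
import numpy as np

def reduce_svd(M, d):
    # truncated SVD of the word-by-word co-occurrence matrix: keep the top-d components
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :d] * S[:d]                      # rows are the reduced word vectors

def reduce_pca(M, d):
    # PCA as centering followed by SVD; the principal components are U * Sigma of the centered matrix
    C = M - M.mean(axis=0)
    U, S, _ = np.linalg.svd(C, full_matrices=False)
    return (U * S)[:, :d]

def choose_sense(instance_vec, sense_vecs):
    # 2-MRD rule: pick the sense whose second-order vector has the largest cosine
    # similarity (smallest angle) with the ambiguous word's instance vector
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(sense_vecs, key=lambda s: cos(instance_vec, sense_vecs[s]))

M = np.random.rand(50, 200)                      # toy co-occurrence matrix (50 words x 200 contexts)
vecs = reduce_svd(M, d=10)
print(choose_sense(vecs[0], {"sense_1": vecs[1], "sense_2": vecs[2]}))
```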
2
We implemented a selection of corpus similarity measures based on n-gram language models and topic models of the corpora, for comparison with the benchmark χ 2 similarity set by Kilgarriff (2001) . To evaluate our selection of corpus similarity measures, we assembled a suite of KSC collections, including KSC collections provided by Kilgarriff (2001) , and new KSC collections constructed using the same method.χ 2 similarity is a statistic that compares the corpus frequencies of words directly. Kilgarriff (2001) justifies this choice on the grounds that "reliable statistics depend on features that are reliably countable". In basing our additional similarity measures on generative language models, we too have a foundation in reliably countable phenomena, but aim to better capture syntactic and semantic differences between corpora. In this section we detail the similarity measures we studied and their metaparameters.We implemented χ 2 similarity as defined in Kilgarriff (2001) , however we varied the cap on the lexicon size to the top N words for N ∈ {200, 500, 1000, 2000, 4000}. We also tested χ 2 similarity with an uncapped lexicon (that is, all word types in the corpus contribute to the statistic). We did not discard words outside the top N completely. Instead, we counted them all as tokens of a single wordform OTHER . This ensures the χ 2 similarity is calculated as a sum across the contingency table of an entire event space.We implemented perplexity similarity using the SRILM language modelling toolkit (Stolcke et al., 2011) . To calculate the similarity between two corpora A and B, our perplexity similarity measure first builds an n-gram language model of each: M A and M B respectively. The final similarity is:− P (B, M A ) + P (A, M B ) 2where P (C, M ) is the perplexity of model M with respect to corpus C. The score is negated because high perplexity is indicative of difference, not similarity. Note that the perplexity similarity measure implemented by Kilgarriff (2001) had a much more complicated algorithm, for the sake of symmetry with the paired n-fold crossvalidation based homogeneity measure he used. We do not require a measure of corpus homogeneity for the knownsimilarity corpora we consider here. Rather than just use trigram language models as Kilgarriff (2001) did, we tested the perplexity similarity measure using n-gram models for n ∈ {1, 2, 3, 4, 5}. We applied SRILM in its default configuration which produces models with Good-Turing discounted estimates and uses the Katz backoff method (Stolcke, 2002) .Our final measure, topic similarity, combines the documents in the two corpora to be compared, and builds a topic model of the complete set of contained documents. It then builds a vector representation of each corpus and compares the resulting vectors to derive the similarity between the corpora.The vector representation a of corpus A has a dimension for each topic in the topic model. The value of a i is the number of tokens in A assigned topic i by the topic model. We used three vector similarity measures to compare corpus topic vectors.1. Euclidean similarity− ( a − b)The distance is negated to give more similar vectors a "greater" similarity. 3. 
Jensen-Shannon similarity: 1 − D_JS(a/‖a‖_1, b/‖b‖_1), where ‖·‖_1 is the L1 norm (sum of absolute values) and D_JS is the Jensen-Shannon divergence:
D_JS(a, b) = (D_KL(a, m) + D_KL(b, m)) / 2,
where m = (a + b)/2 and D_KL is the Kullback-Leibler divergence, or relative entropy:
D_KL(a, b) = Σ_i a_i log(a_i / b_i).
We subtract D_JS from 1 to make the measure a positive value that increases with similarity. In our experiments, we used topic models with T topics, for T ∈ {10, 50, 100, 500, 1000}.
Here, we give a short description of our implementation of the known-similarity corpus construction method. A full treatment of the method for constructing KSC can be found in Kilgarriff (2001). When constructing KSC from source corpora A and B, we construct N = 11 KSC in all cases, meaning that the percentages of B in the individual KSC are exactly 0%, 10%, 20%, ..., 100% (and similarly the percentages of A are 100%, 90%, 80%, ..., 0%). We split the source corpora at the token level, assigning the same number of tokens to each KSC. However, we do preserve sentence and document boundaries for the purpose of the topic similarity measure, introducing artificial boundaries when splits occur mid-sentence or mid-document. Except where otherwise stated, text is assigned to KSC in contiguous chunks from the source corpora, in a size appropriate to the position of the KSC in the KSC set. For example, if 60k words are assigned from corpus A to KSC_4 then the next 50k words of A will be assigned to KSC_5. When a source corpus consists of multiple files, the order is determined by the lexicographical sort order of the source file names.
Table 1: KSC based on the BNC from Kilgarriff (2001).
KSC set | Number of corpora in set
acc gua | 10
art gua | 11
bmj gua | 9
env gua | 9
gua tod | 11
We evaluated each of our measures on a subset of the KSC used in Kilgarriff (2001), referred to as KILGARRIFF, comprising the text type pairs indicated in Table 1. Within each KSC set, the number of words in the corpora varies between 111k and 114k. The three-letter codes refer to subsets of the BNC and are described in Kilgarriff (2001). The WeSearch Data Collection ("WDC") is a collection of user-generated text designed to capture differences in both subject matter and writing style (Read et al., 2012). It contains text on the separate topics of NLP and the Linux operating system, taken from blogs, Wikipedia, software reviews and forums. Using Linux as a fixed topic and varying the writing style by varying the source across blogs, reviews and forums, we created three KSC with differences in genre. Then, using forums as a fixed source, we created additional KSC by mixing the topics of NLP and Linux. Table 2 shows details of each WDC KSC we constructed. The Gigaword corpus (Parker et al., 2009, "GIGAWORD") is a collection of date-stamped newswire text. We used the L.A. Times/Washington Post ("ltw") subset and the New York Times ("nyt") subset to create large KSC sets with regional differences for the same time period, and to compare time differences for the same region. Details of the GIGAWORD KSC we created can be found in Table 3. nyt jun consists of texts from the nyt subset for June 2005 and 2006. This time-differentiated KSC set consists of corpora that are roughly an order of magnitude larger than those used by Kilgarriff (2001). nyt 5678 is similar, but consists of texts from May-August, and as such provides even larger corpora.
ltw nyt consists of texts from the ltw and nyt subsets for June 2006, while ltw nyt long consists of texts from the same subsets for May-July 2006. These corpora allow us to compare region-differentiated corpora at two different sizes, both of which are again much larger than those used by Kilgarriff (2001) . Our standard implementation of the KSC construction method takes samples from the source corpora from the start of the dataset. If one of the source corpora is larger, it is effectively truncated by this selection policy. Since GIGAWORD source files are sorted chronologically and the NYT portion is larger than the LTW portion, a shorter timespan would be selected from the NYT portion for the KSC. To alleviate this, for GIGAWORD we alter the sampling method slightly: after each KSC has received its allo- cation from a source corpus, we skip forward in that corpus to a position proportional to the amount taken so far. This ensures that the final samples come from near the end of the time range. Note that although our method ensures that mixed LTW/ NYT KSC contain text drawn from aligned timespans, it will still be the case that the earlier KSC in the set come from earlier time periods than later KSC in the set. This means that our location-differentiated KSC are also somewhat time differentiated. We mitigate this by limiting the total timespan from which samples are drawn for location-differentiated KSC to three months, whereas timedifferentiated KSC are separated by one year.
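Returning to the topic similarity measure described earlier, the sketch below shows the negated-Euclidean and Jensen-Shannon comparisons of two per-topic token-count vectors; the toy counts are illustrative only.

```python
import numpy as np

def euclidean_similarity(a, b):
    return -np.linalg.norm(a - b)                 # negated distance: larger means more similar

def js_similarity(a, b):
    p, q = a / a.sum(), b / b.sum()               # divide by the L1 norm to get distributions
    m = (p + q) / 2
    def kl(x, y):                                 # Kullback-Leibler divergence, 0*log(0) treated as 0
        mask = x > 0
        return float(np.sum(x[mask] * np.log(x[mask] / y[mask])))
    return 1.0 - (kl(p, m) + kl(q, m)) / 2        # 1 minus the Jensen-Shannon divergence

a = np.array([30.0, 5.0, 15.0])                   # tokens assigned to each topic in corpus A
b = np.array([25.0, 10.0, 15.0])                  # tokens assigned to each topic in corpus B
print(euclidean_similarity(a, b), js_similarity(a, b))
```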
2
We will use SVMs to learn lexico-syntactic patterns in our corpora corresponding to known properties in order to find new ones. Training an SVM requires a labelled training set. To generate this set we harness our already-known concepts/features (and their relationships) from the McRae norms to find instantiations of said relationships within our corpora. We use parsed sentence information from our corpora to create a set of attributes describing each relationship, our learning patterns. In doing so, we are assuming that across sentences in our corpora containing a concept/feature pair found in the McRae norms, there will be a set of consistent lexico-syntactic patterns which indicate the same relationship as that linking the pair in the norms.Thus we iterate over our chosen corpora, parsing each concept-containing sentence to yield grammatical relation (GR) and part-of-speech (POS) information from which we can create a GR-POS graph relating the two. Then for each triple, we find any/all paths through the graph which link the concept to its feature and use the corresponding relation to label this path. We collect descriptive information about the path in the form of attributes describing it (e.g., path nodes, labels, length) to create a training pattern specific to that concept relation feature triple and sentence. It is these lists of attributes (and their relation labels) which we employ as the labelled training set and as input for our SVM.We employ two corpora for our experiments: Wikipedia and the UKWAC corpus (Ferraresi et al., 2008) . These are both publicly available and webbased: the former a source of encyclopedic information and the latter a source of general text. Our Wikipedia corpus is based on a Sep 2009 version of English-language Wikipedia and contains around 1.84 million articles (>1bn words). Our UKWAC corpus is an English-language corpus (>2bn words) obtained by crawling the .uk internet domain.Our experiments use a British-English version of the McRae norms (see Taylor et al. (2011) for details). We needed to recode the free-form McRae properties into relation-classes and features which would be usable for our learning algorithm. As we will be matching the features from these properties with individual words in the training corpus it was essential that the features we generated contained only one lemmatised word. In contrast, the relations were merely labels for the relationship described (they did not need to occur in the sentences we were training from) and therefore needed only to be single-string relations. This allowed prepositional verbs as distinct relations, something which has not been attempted in previous work yet can be semantically significant (e.g., the relations used-in, used-for and used-by have dissimilar meanings).We applied the following sequential multi-step process to our set of free-form properties to distill them to triples of the form concept relation feature, where relation can be a multi-word string and feature is a single word:1. Translation of implicit properties to their correct relations (e.g., pig an animal → pig is an animal).2. Removal of indefinite and definite articles.3. Behavioural properties become "does" properties (e.g., turtle beh eats → turtle does eats).4. Negative properties given their own relation classes (e.g., turkey does cannot fly → turkey doesnt fly).5. All numbers are translated to named cardinals (e.g., spider has 8 legs → spider has eight legs).6. 
Some of the norms already contained synonymous terms: these were split into separate triples for each synonym (e.g., pepper tastes hot/spicy → pepper tastes hot and pepper tastes spicy).7. Prepositional verbs were translated to one-word, hyphenated strings (e.g., made of → made-of ).8. Properties with present participles as the penultimate word were split into one including the verb as the feature and one including it in the relation (e.g., envelope used for sending letters → envelope usedfor-sending letters and envelope used-for sending).9. Any remaining multi-word properties were split with the first term after the concept acting as the relation (e.g., bull has ring in its nose → bull has ring, bull has in, bull has its and bull has nose).10. All remaining stop-words were removed; properties ending in stop-words (e.g., bull has in and bull has its) were removed completely.This yielded 7,518 property-triples with 254 distinct relations and an average of 14.7 triples per concept.We parsed both corpora using the C&C parser (Clark and Curran, 2007) as we employ both GR and POS information in our learning method. To accelerate this stage, we process only sentences containing a form (e.g., singular/plural) of one of our training/testing concepts. We lemmatise each word using the WordNet NLTK lemmatiser (Bird, 2006) . Parsing our corpora yields around 10Gb and 12Gb of data for UKWAC and Wikipedia respectively.The C&C dependency parse output contains, for a given sentence, a set of GRs forming an acyclic graph whose nodes correspond to words from the sentence, with each node also labelled with the POS of that word. Thus the GR-POS graph interrelates all lexical, POS and GR information for the entire sentence. It is therefore possible to construct a GR-POS graph rooted at our target term (the concept in question), with POS-labelled words as nodes, and edges labelled with GRs linking the nodes to one another. An example graph can be seen in Figure 1 .We use SVMs (Cortes and Vapnik, 1995) for our experiments as they have been widely used in NLP and their properties are well-understood, showing good performance on classification tasks (Meyer et al., 2003) . In their canonical form, SVMs are nonprobabilistic binary linear classifiers which take a set of input data and predict, for each given input, which of two possible classes it corresponds to. There are more than two possible relation-labels to learn for our input patterns, so ours is a multi-class classification task. For our experiments we use the SVM Light Multiclass (v. 2.20) software (Joachims, 1999) which applies the fixed-point SVM algorithm described by Crammer and Singer (2002) to solve multi-class problem instances. Joachims' software has been widely used to implement SVMs (Vinokourov et al., 2003; Godbole et al., 2002) .Previous techniques for our task have made use of lexical, syntactic and semantic information. We are deliberately avoiding the use of manually-created semantic resources, so we rely only on lexical and syntactic attributes for our learning stage (i.e., the GR-POS paths described earlier).A table of all the categories of attributes we extract for each GR-POS path are in Table 2 .4, together with attributes from the path linking turtle and reptile in our example sentence (see Figure 1) .We ran our experiments with two vector-types which we call our 'verb-augmented' and our 'nonaugmented' vector-types. 
The sets are identical except the verb-augmented vector-type will also contain an additional attribute category containing an attribute for every instance of a relation verb (i.e., a verb which is found in our training set of relations, e.g., become, cause, taste, use, have and so on) in the lexical path. We do this to ascertain whether this additional verb-information might be more informative to our system when learning relations (which tend to be composed of verbs). Table 2 : An example vector for an instance of the relation-label is. The attributes are distinguished from one another by their attribute category. Relation verbs only appear in the verb-augmented vector-type and no such verbs appear in our example sentence, so this category of attribute is empty. All attributes in the table will receive the value 1.0 except the LEN attribute which will have the value 0.2 (the reciprocal of the path length, 5).We considered allocating a 'no-rel' relation label to those sets of attributes corresponding to paths through the GR-POS graph which did not link the concept to a feature found in our training data; however our initial experiments indicated the SVM model would assign every pattern we tested to the 'no-rel' relation. Therefore we used only positive instances in our training pattern data.We cycle through all training concepts/features, finding sentences containing both. For each such sentence, our system generates the attributes from the GR-POS path linking the concept to the feature (the linking-path) to create a pattern for that pair, in the form of a relation-labelled vector con-taining real-valued attributes. The system assigns 1.0 to all attributes occurring in a given path and the LEN value receives the reciprocal of the path-length. 1 Each linking-path is collected into a relation-labelled, sparse vector in this manner. In the larger UKWAC corpus this corresponds to over 29 million unique attributes across all found linkingpaths (this figure corresponds to the dimensionality of our vectors). We then pass all vectors to the learning module 2 of SVM Light to generate a learned model across all training concepts.Having trained our model, we must now find potential features and relations for our test concepts in our corpora. We again only examine sentences which contain at least one of our test concepts. Furthermore, to avoid a combinatorial explosion of possible paths rooted at those concepts we only permit as candidates those paths whose anchor node is a singular or plural noun and whose target node is either a singular/plural noun or adjective. This filtering corresponds to choosing patterns containing one of the three most frequent anchor node POS tags (NN, NNS and NNP) and target node POS tags (NN, JJ and NNS) found during our training stage. These candidate patterns constitute 92.6% and 87.7% of all the vectors, respectively, from our training set of patterns (on the UKWAC corpus). This pattern pre-selection allows us to immediately ignore paths which, despite being rooted at a test concept, are unlikely to contain property norm-like information.We next classified our test concepts' candidate patterns using the learned model. SVM Light assigns each pattern a relation-class from the training set and outputs the values of the decision functions from the learned model when applied to that particular pattern. The sign of these values indicates the binary decision function choice, and their magnitude acts as a measure of confidence. 
We wanted those vectors which the model was most confident in across all decision functions, so we took the sum of the absolute values of the decision values to generate a pattern score for each vector/relation-label. Table 3 : Parameter estimation both with and without relation, using our augmented and non-augmented vector-types and across our two corpora and the combined corpora set.From these patterns we derived an output set of triples where the concept and feature of a triple corresponded to the anchor and target nodes of its pattern and the relation corresponded to the pattern's relation-label. Identical triples from differing patterns had their pattern scores summed to give a final 'SVM score' for that triple.A brief qualitative evaluation of our system's output indicates that although the higher-ranked (by SVM score) features and relations were, for the most part, quite sensible, there were some obvious output errors (e.g., non-dictionary strings or verbs appearing as features). Therefore we restricted our features to those which appear as nouns or adjectives in WordNet and excluded features containing an NLTK (Bird, 2006) corpus stop-word. Despite these exclusions, some general (and therefore less informative) relation/feature combinations (e.g., is good, is new) were still ranking highly. To mitigate this, we extract both log-likelihood (LL) and pointwise mutual information (PMI) scores for each concept/feature pair to assess the relative saliency of each extracted feature, with a view to downweighting common but less interesting features. To speed up this and later stages, we calculate both statistics for the top 1,000 triples extracted for each concept only.PMI was proposed by Church and Hanks (1990) to estimate word association. We will use it to measure the strength of association between a concept and its feature. We hope that emphasising conceptfeature pairs with high mutual information will render our triples more relevant/informative. We also employ the LL measure across our set of concept-feature pairs. Proposed by Dunning (1993) , LL is a measure of the distribution of linguistic phenomena in texts and has been used to contrast the relative corpus frequencies of words. Our aim is to highlight features which are particularly distinctive for a given concept, and hence likely to be features of that concept alone.We calculate an overall score for a triple, t, by a weighted combination of the triple's SVM, PMI and LL scores using the following formula:score(t) = β PMI •PMI(t)+β LL •LL(t)+β SVM •SVM(t)where the PMI, SVM and LL scores are normalised so they are in the range [0, 1]. The relative β weights thus give an estimate of the three measures' importance relative to one another and allows us to gauge which combination of these scores is optimal.We also wanted to ascertain the extent to which the output from both our corpora could be combined to improve results, balancing the encyclopedic but somewhat specific nature of Wikipedia with the generality and breadth of the UKWAC corpus. We combined the output by summing individual SVM scores of each triple from both corpora to yield a combined SVM score. PMI and LL scores were then calculated as usual from this combined set of triples.
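A minimal sketch of the weighted combination of the normalized SVM, PMI, and LL scores; the beta weights and the example triples are placeholders rather than the tuned values.

```python
def normalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {t: (v - lo) / (hi - lo) if hi > lo else 0.0 for t, v in scores.items()}

def combined_score(pmi, ll, svm, b_pmi=0.3, b_ll=0.3, b_svm=0.4):
    # score(t) = beta_PMI * PMI(t) + beta_LL * LL(t) + beta_SVM * SVM(t), each normalized to [0, 1]
    pmi, ll, svm = normalize(pmi), normalize(ll), normalize(svm)
    return {t: b_pmi * pmi[t] + b_ll * ll[t] + b_svm * svm[t] for t in svm}

svm = {("turtle", "is", "reptile"): 4.2, ("turtle", "is", "good"): 3.9}
pmi = {("turtle", "is", "reptile"): 6.1, ("turtle", "is", "good"): 0.4}
ll  = {("turtle", "is", "reptile"): 80.0, ("turtle", "is", "good"): 2.0}
print(combined_score(pmi, ll, svm))
```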
2
An overview of the proposed MSPAN is shown in Fig. 2 . The input is a short video and a question sentence, while the output is the produced answer.Video representation N frames are uniformly sampled to represent the video. Then we use the pre-trained ResNet-152 (He et al., 2016) to extract video appearance features for each frame. And, we apply the 3D ResNet-152 (Hara et al., 2018) pre-trained on Kinetics-700 (Carreira et al., 2019) dataset to extract video motion features. Specifically, 16 frames around each frame are placed into the 3D ResNet-152 to obtain the motion features around this frame. Finally, we get a joint video representation by concatenating appearance features and motion features. By using a fully-connected layer to reduce feature dimension, we obtain video representation asV = {v i : i ≤ N, v i ∈ R 2048 }.Question representation All words in question are represented as 300-dimensional embeddings initialized with pre-trained GloVe vectors (Pennington et al., 2014) . And a 512-dimensional question embedding is generated from the last hidden state of a three-layer BiLSTM, i.e., q ∈ R 512 .Each object in the video corresponds to a different number of frames, but previous methods (Seo et al., 2020; Lei et al., 2021) cannot treat various levels of visual information separately. Therefore, we construct clips of different lengths to express the visual information in the video delicately, and regard the length attribute as a scale.We use max-pools of different kernel-sizes to aggregate frame-level visual features, and kernelsize is the scale attribute of these clips. In this way, clip-level visual features are obtained, as follows:EQUATIONWhere K is the range of scales, and K ≤ N . Thus, we construct M i = N − i + 1 clips at scale i:V i = {v i j : 1 ≤ j ≤ M i , v i j ∈ R 2048 } (3)In order to reason the relationships between different objects in a video, we separately build a graph for each scale. Each node in a graph represents the clip-level visual features. Only when two nodes contain overlapping or adjacent frames, an edge will be connected between them. Frame interval of the j-th clip at scale i is [j, j + i − 1], so all edges in the K graphs can be expressed as:E i = {(x, y)|x − i ≤ y ≤ x + i} (4)Finally, these multi-scale graphs constructed in this paper can be denoted asG i = {V i , E i }.Before cross-scale feature interaction, the original node features of K graphs are copied asV o i = V i .Interaction at the same scale. For all nodes with the same scale, we apply a two-layer graph convolutional network (GCN) (Kipf and Welling, 2017) to perform relational reasoning over the K graphs. The process of graph convolution is represented as:X l+1 =D − 1 2ÂD − 1 2 X l W l (5)Where is the input adjacency matrix, X l is the node feature matrix of layer l, and W l is the learnable weight matrix. The diagonal node degree ma-trixD is used to normalizeÂ. Due to the small number of nodes in each graph, we decide to share the weight matrix W l when K graphs are updated.Interaction at top-down scale. We realize the interaction of adjacent scale graphs from small scale to large scale. Therefore, visual information is understood step by step from details to the whole through the interaction of top-down scale. Guided by the question, the nodes in graph G i are used to update the nodes in graph G i+1 . 
Visual features at different scales show hierarchical attention to the question, so we call it progressive attention.If the clip corresponding to node x in graph G i has the same frames as the clip corresponding to node y in graph G i+1 , there will exist a directed edge from x to y. Therefore, we can use the edge to fuse the cross-scale features of these same frames.Firstly, visual features and question embedding are fused to capture the joint features of each node in graph G i . Then, the process of message passing from graph G i to graph G i+1 can be expressed as:m xy = (W 1 v i+1 y ) ⊗ ((W 2 v i x ) (W 3 q)) T (6)Where ⊗ is the outer product, is the hadamard product. After receiving the delivery messages, the attention weights of these messages are calculated:EQUATIONWhere N y is the set of all neighbor nodes in graph G i through cross-scale edges. Consequently, all the messages passed into node y are summed to derive the update of node y, as follows:v i+1 y = x∈Ny w xy • ((W 4 v i x ) (W 5 q)) (8) V u i+1 = {ṽ i+1 y : y ≤ M i+1 ,ṽ i+1 y ∈ R 2048 } (9)When updating all nodes in graph G i+1 , we consider the new features V u i+1 and the original features V o i+1 . Therefore, we use the residual connection to preserve original information of the video:EQUATIONWhere [; ] is the concatenation operator. Above W 1 ∼ W 6 are learnable weights, and they are shared in the update of graphs G 2 ∼ G K . To summarize, the update of K − 1 graphs is a progressive process from small scale to large scale, hence it is referred to as top-down scale interaction.Interaction at bottom-up scale.After an overall understanding of the video, people can accurately find all details related to the question at the second time they watch the video. Therefore, we achieve an understanding of the video from global to local through bottom-up scale interaction. After the previous interaction, we realize the interaction of adjacent graphs from large scale to small scale.Following the same method as top-down scale interaction from Eq. 6 to Eq. 10, we apply graph G i to update graph G i−1 in this interaction. But the weights W 1 ∼ W 6 are another group in the update of graphs G K−1 ∼ G 1 . After this interaction, graph G 1 can grasp the all-scale video features related to the question by progressive attention.After T iterations of cross-scale feature interaction, we read out all the nodes in graph G 1 . Then, a simple attention is used to aggregate the N nodes. And, final multi-modal representation is given as:EQUATIONWhere ELU is activation function, above W 7 ∼ W 11 are learnable weights and b is learnable bias. We can find the answer by applying a classifier (two fully-connected layers) on multi-modal representation F . Multi-label classifier is applied to open-ended tasks, and cross-entropy loss function is used to train the model. Due to repetition count is a regression task, we use the MSE loss function. For the multi-choice task, each question corresponds to R answer sentences. We first get the embedding of each answer in the same way as the question embedding. Then we use the multi-modal fusion method in Eq. 11∼13 to fuse the answer embedding with node features. After using two fully-connected layers, the answer scores{s i } R i=1have appeared. This model is trained by minimizing the hinge loss (Jang et al., 2017) of pairwise comparisons between answer scores{s i } R i=1 .3 Experiments 3.1 Datasets TGIF-QA (Jang et al., 2017) is a widely used largescale benchmark dataset for VideoQA. 
Four task types are covered in this dataset: repeating action (Action), repetition count (Count), video frame QA (FrameQA), and state transition (Trans.). MSVD-QA (Xu et al., 2017) and MSRVTT-QA (Xu et al., 2016) are open-ended datasets whose question-answer pairs are generated from video descriptions. In both datasets, questions are divided into five types according to the question word: what, who, how, when, and where.
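Returning to the cross-scale interaction described above (Eqs. 6-10), the following is a simplified PyTorch-style sketch; it collapses the outer-product message of Eq. 6 to a dot-product score and assumes the layer dimensions, so it illustrates the mechanism rather than reproducing the authors' implementation.

import torch
import torch.nn as nn

class CrossScaleMessage(nn.Module):
    # Question-guided message passing from graph G_i to G_{i+1}; weight names
    # W1-W6 follow the text, but dimensions and the scoring form are assumptions.
    def __init__(self, d=2048, dq=512):
        super().__init__()
        self.W1, self.W2, self.W4 = (nn.Linear(d, d) for _ in range(3))
        self.W3, self.W5 = (nn.Linear(dq, d) for _ in range(2))
        self.W6 = nn.Linear(2 * d, d)

    def forward(self, v_src, v_dst, q, neighbors):
        # v_src: (M_i, d) nodes of G_i; v_dst: (M_{i+1}, d) nodes of G_{i+1};
        # q: (dq,) question embedding; neighbors[y] lists the G_i nodes feeding node y.
        out = []
        for y, nbrs in enumerate(neighbors):
            src = v_src[nbrs]                                 # (|N_y|, d)
            joint = self.W2(src) * self.W3(q)                 # question-guided fusion
            scores = joint @ self.W1(v_dst[y])                # simplified form of Eq. 6
            w = torch.softmax(scores, dim=0)                  # attention over messages (Eq. 7)
            msg = (w.unsqueeze(1) * (self.W4(src) * self.W5(q))).sum(0)   # Eq. 8
            out.append(self.W6(torch.cat([v_dst[y], msg])))   # residual-style update (Eq. 10)
        return torch.stack(out)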
2
First, we describe how we obtained various PTMs and created DSM embeddings for different models as well as how we fine-tuned/trained the classifiers for the studies. Then, we discuss the datasets and the study design.We use the following PTMs: BERT, RoBERTa, ALBERT, XLNet, ELMo, Word2Vec, and GloVe.Deep PTMs: We get the BERT base model (uncased) for sequence classification from the Hugging Face library (Wolf et al., 2020) . The embedding vectors are 768-dimensional. This BERT PTM adds a single linear layer on top of the BERT base model. The pretrained weights of all hidden layers of the PTM and the randomly initialized weights of the top classification layer are adapted during fine-tuning using a target dataset. The XL-Net is obtained from the Hugging Face library (Wolf et al., 2020) and fine-tuned similar to BERT. Its embedding vectors are 768-dimensional. The RoBERTa (obtained from (Wolf et al., 2020) ) and ALBERT (obtained from (Maiya, 2020)) are used by first extracting embeddings from their final layer and then adding linear layers. While the RoBERTa embeddings are 768-dimensional, the ALBERT embeddings are 128-dimensional.We get the ELMo embeddings from TensorFlow-Hub (Abadi et al., 2015) . Each embedding vector has a length of 1024. The Word2Vec embeddings are obtained from Google Code (Google Code, 2013) . The embedding vectors are 300-dimensional. We get the GloVe pretrained 300-dimensional embeddings from (Pennington et al., 2014) .CNN: The ELMo, Word2Vec, and GloVe embeddings are used to train a CNN classifier with a single hidden layer (Kim, 2014) . The first layer is the embedding layer. Its dimension varies based on the dimension of pretrained embeddings. The second layer is the one-dimensional convolution layer that consists of 100 filters of dimension 5 x 5 with "same" padding and ReLU activation. The third layer is a one-dimensional global maxpooling layer, and the fourth layer is a dense layer with 100 units along with ReLU activation. The last layer is the classification layer with softmax activation. We use this setting for the CNN architecture as it was found empirically optimal in our experiments. We use cross-entropy as loss function, Adam as the optimizer, and a batch size of 128. The embedding vectors are kept fixed during the training (Kim, 2014) .We create the DSMs using two approaches: graphbased and non-graph. For the graph-based approach, we use the following models: Text GCN and VGCN-BERT. For training the Text GCN model, we pre-process the data as follows. First, we clean the text by removing stop words and rare words whose frequencies are less than 5. Then, we build training, validation, and test graphs using the cleaned text. Finally, we train the GCN model using training and validation graphs and test the model using a test graph. During the training, early stopping is used. For training the VGCN-BERT model, first, we clean the data that includes removing spaces, the special symbols as well as URLs. Then, the BERT tokenizer is used to create BERT vocabulary from the cleaned text. The next step is to create training, validation, and the test graphs. The last step is training the VGCN-BERT model. During the training, the model constructs embeddings from word and vocabulary GCN graph.For the non-graph approach, we create em-beddings from the target dataset by using the Word2Vec and GloVe models. First, we preprocess the raw text data by converting the text (i.e., a list of sentences) into a list of lists containing tokenized words. 
During tokenization, we convert words to lowercase, remove words that are only one character, and lemmatize the words. We add bigrams that appear 10 times or more to our tokenized text. The bigrams allow us to create phrases that could be helpful for the model to learn and produce more meaningful representations. Then, we feed our final version of the tokenized text to the Word2Vec and the GloVe model for creating embeddings. After we obtain the embeddings, we use them to train the CNN classifier described in the previous sub-section, except that the domainspecific word embeddings are adapted during the training.We use two COVID-19 datasets for the study, i.e., CoAID (Cui and Lee, 2020) and CMU-MisCov19 (Memon and Carley, 2020). The CoAID dataset contains two types of data: true information and misinformation. We use this dataset to investigate the generalizability of the models along three dimensions.• Temporal dimension: Train a model using data from an earlier time, then test its generalizability at different times in the future.• Size dimension: Train models by varying the size of the training dataset.• Length dimension: Train models by varying the length of the samples, e.g., tweet (shortlength data) and news articles (lengthy data).The CMU-MisCov19 dataset is used to analyze a model's performance in fine-grained classification.The CoAID dataset (Cui and Lee, 2020) is used for binary classification since it has only two labels: 0 for misinformation and 1 for true information. This dataset contains two types of data: online news articles on COVID-19 and tweets related to those articles. Datasets of these two categories were collected at four different months in 2020: May, July, September, and November. Thus, the total number of CoAID datasets is 8. The class distribution is heavily skewed with significantly more true information samples than misinformation samples. Sample distribution per class (both for the tweets and news articles) is given in the appendix.The CMU-MisCov19 dataset contains 4,573 annotated tweets (Memon and Carley, 2020). The tweets were collected on three days in 2020: March 29, June 15, and June 24. The categories are finegrained comprising of 17 classes with skewed distribution. This dataset does not have any true information category. Its sample distribution per class is given in the appendix.We use the CoAID dataset to understand whether the context of the COVID-19 text evolves. To detect a change in the context over time, we investigate how the distribution of the high-frequency terms evolve for the two categories of the data: tweets and news articles. For each category, we select the top 10 high-frequency words from the 4 non-overlapping datasets belonging to 4 subsequent months, i.e., May, July, September, and November in 2020. Our goal is to determine whether there exists a temporal change in the distribution of high-frequency words. Figure 1 shows context evolution in the tweets category. We see that during May, the two highfrequency words were covid and coronavirus. The frequent words represent broader concepts such as health, disease, spread, etc. However, over time the context shifted towards more loaded terms. For example, in July two new high-frequency words, such as mask and support, emerged. Then, in September words like contact, school, child, and travel became prominent. Finally, during November, we observe a sharp change in the nature of the frequent words. 
Terms with strong political connotations (e.g., trump, fauci, campaign, and vaccine) started emerging. The evolution in the high-frequency words indicates a temporal shift in the context in the tweets dataset. We observe similar context evolution in the news articles dataset, reported in the Appendix with additional analysis.We describe the design of the studies for comparing the NLP approaches for misinformation detection. Study 1 is designed to explore a model's generalizability in the temporal dimension of the data. We fine-tune/train a model using CoAID data collected from May 2020 and test it using data obtained from 3 different months in "future": July, September, and November. The following models are tested in this study: BERT (Mixed-domain Transfer Learning), ELMo (Mixed-domain Transfer Learning), Word2Vec (Mixed-domain Transfer Learning and DSM-based), GloVe (Mixed-domain Transfer Learning and DSM-based), Text-GCN (DSM-based), and VGCN-BERT (DSM-based).Study 2 is designed to test the performance of a model along the length dimension of the data. We use both short-length data (tweets) and lengthy data (news articles). Specifically, we train a model using the CoAID Twitter dataset to understand a model's performance on the short-length data. Then, we train a model using the CoAID news articles dataset to study a model's performance on the lengthy data. The models used in this study are the same as in Study 1.In study 3, we evaluate a model along the size dimension of the data. We replicate studies 1 and 2 using a large target dataset, which is created by merging the datasets from May, July, and September. The November dataset is used as the test set. We experiment with two models for this study: BERT (Mixed-Domain Transfer Learning) and Word2Vec (DSM-based).To further study the effectiveness of the Transformer-based mixed-domain transfer learning approach, we experiment with two variants of the BERT PTM, i.e., RoBERTa and ALBERT. In addition to this, we study the performance of an autoregressive model XLNet that induces the strength of BERT. For this study, we only use the news articles dataset.Study 5 is designed to test a model's performance on the fined-grained CMU-MisCov19 dataset. The models tested are the same as in Study 1.
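As a concrete illustration of the non-graph DSM pipeline described earlier (bigram phrases added to the tokenized text, Word2Vec embeddings trained on the target corpus, then the single-hidden-layer CNN classifier), here is a hedged sketch; hyperparameters not stated in the text, such as the sequence length, are assumptions.

from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser
from tensorflow.keras import layers, models
from tensorflow.keras.initializers import Constant

def train_dsm(tokenized_docs, dim=300):
    # Add bigrams that occur 10 times or more, then train Word2Vec on the target corpus.
    bigram = Phraser(Phrases(tokenized_docs, min_count=10))
    docs = [bigram[d] for d in tokenized_docs]
    return docs, Word2Vec(sentences=docs, vector_size=dim, min_count=5)

def build_cnn(vocab_size, emb_matrix, num_classes, max_len=100):
    # CNN classifier described above: embedding -> Conv1D (100 filters, width 5,
    # 'same' padding, ReLU) -> global max-pooling -> Dense(100, ReLU) -> softmax.
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, emb_matrix.shape[1],
                         embeddings_initializer=Constant(emb_matrix),
                         trainable=True),                    # DSM embeddings are adapted
        layers.Conv1D(100, 5, padding="same", activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(100, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model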
2
We manually prepared a dataset of queries on the Academic domain of our university. The university database was used as the source of in-formation. Examples of tables in the database schema are courses, labs, students consisting of attributes like course name, course id, student name, lab name etc. The database has relationships like register (between student and course), teach (between professor and course) etc. Each token in the sentence is given a tag and a set of features. If a token is an attribute, it is assigned a tag which corresponds to a SQL clause to which the attribute belongs. If a token is not an attribute, it is given a NULL (O) tag. The tagging was done manually. Our tag set is simple and consists of only 4 tags, where each tag corresponds to a SQL clause. The tags are SELECT, WHERE, GROUP BY, HAVING. Formally, our task is framed as assigning label sequences to a set of observation sequences. We followed two guidelines while tagging sentences. Sometimes it is possible that an attribute can belong to more than one SQL clause. If an attribute belongs to both SELECT and GROUP BY clause, we tag the attribute as a GROUP BY clause attribute. This is done with an aim to identify higher number of GROUP BY clause attributes as SELECT clause attributes are very common and are comparatively easier to identify. The second guideline that we followed was, if an attribute belongs to both the SELECT and the WHERE clause, we tag the attribute as a SELECT clause attribute. This is done because the WHERE clause attributes can often be identified through a domain dictionary. Table 2 shows an example of the tagging scheme. Each token in a sentence is given a set of features and a tag. In Table 2 , we have shown only one feature due to space constraints. We trained our data and created models for testing. We used Conditional Random Fields (Lafferty et al., 2001 ) for the machine learning task. The next subsection describes the features employed for the classification of explicit attributes in a NL query.The following features were used for the classification of explicit attributes in a NL query. Token-based Features These features are based on learning of tokens in a sentence. The isSymbol feature checks whether a token is a symbol (>, <) or not. Symbols like > (greater than), < (less than) are quite commonly used as aggregations in NL queries. This feature captures such aggregates. We also took lower case form of a token as a feature for uniform learning. We considered a particular substring as a feature. If that substring is found in the token, we set the feature to 1 else 0 (for example, in batch wise or batchwise, the attribute batch is identified as GROUP BY clause attribute using substring wise). Grammatical Features POS tags of tokens and grammatical relations (e.g. nsubj, dobj ) of a token with other tokens in the sentence were considered. These features were obtained using the Stanford parser 2 (Marneffe et al., 2006) . Contextual Features Tokens preceding and following (local context) the current token were also considered as features. In addition, we took the POS tags of the tokens in the local context of the current token as features. Grammatical relations of the tokens in local context of the current token were also considered for learning. Other Features: isAttribute This is a basic and an important feature for our problem. If a token is an attribute, we set the feature to 1, else 0. Presence of other attributes This feature aims to identify the GROUP BY clause attributes only. 
In SQL, the HAVING clause generally contains a condition on the GROUP BY clause. If a NL query is very likely (>95%) to have a HAVING clause attribute, then the SQL clause will certainly have a GROUP BY clause as well. This feature is marked as 1 for an attribute if it has a local context which may trigger a GROUP BY clause and at the same time if the NL query is very likely to have the HAV-ING clause attribute. The likeliness of the HAVING clause attribute is again decided based on the local context of the attribute. Thus, GROUP BY clause attribute is not just identified using its local context, but also depending on the presence of HAV-2 http://nlp.stanford.edu/software/lex-parser.shtml ING clause attribute. In simple terms, this feature increases the weight of an attribute to belong to the GROUP BY clause of the SQL query.Trigger words An external list is used to determine whether a word in the local context of an attribute may trigger a certain SQL clause for the attribute. (eg., the word each may trigger GROUP BY clause).Until now, we have only identified attributes and their corresponding SQL clauses. But this is not sufficient to get a complete SQL query. In this section, we describe how we can generate a complete SQL query after the classification of attributes. To build a complete SQL query we would require: 1. Complete attribute and entity information. 2. Concepts 3 of all the tokens in the given query. 3. Mapping of identified entities and relationships in the Entity Relationship schema to get the joint conditions in WHERE clause.Our system can extract attribute information using explicit attribute classifier for explicit attributes and domain dictionaries for implicit attributes. Sometimes, we may not have complete attribute information to form a SQL query. That is, there can be attributes other than explicit attributes and implicit attributes in a SQL query. For example, consider: Example 1: Which professor teaches NLP ? Example 2: Who teaches NLP ? The SQL query for both the examples is: SELECT professor name FROM prof, teach, course WHERE course name= NLP AND course id=course teach id AND prof teach id=prof id .In example 1, our system has complete attribute information to form the SQL query. Since professor is explicitly mentioned by the user in the query, here professor name is identified as a SE-LECT clause attribute by our system. But in example 2, we do not have complete attribute information. Here identifying the SELECT clause attribute professor name is a problem, as there is no clue (neither explicit attribute nor implicit attribute) in the query which points us to the attribute professor name. To identify attributes which cannot be identified as implicit attributes or explicit at-tributes, Concepts Identification (Srirampur et al., 2014) is used. In Concepts Identification, each token in the NL query is tagged with a concept. Using Concepts Identification, we can directly identify Who as professor name. These attributes are known as the Question class attributes. Most of the times, since question words are related to the SELECT clause, the attribute professor name can be mapped to the SELECT clause, thereby giving us complete information of attributes. We also use Concepts Identification to identify relationships in the NL query. In both the examples, teach which is a relationship in the Entity Relationship schema can be identified through Concepts Identification (CI). Once the attributes are identified, entities can be extracted. 
For example, entities for the attributes course name, professor name are COURSES and PROFESSOR respectively. The identified entities and relationships are added to the FROM clause.All the identified entities and relationships can now be mapped to the Entity Relationship (ER) schema to get the joint conditions (Arjun Reddy Akula, 2015) in the WHERE clause. We create an ER graph using the ER schema of the database with entities and relationships as vertices in the ER graph. We apply a Minimum spanning tree (MST) algorithm on the ER graph to get a noncyclic path connecting all the identified vertices in the ER graph. With this, we get the required join conditions in the WHERE clause. Arjun Reddy Akula (2015) discusses the problem of handling joint conditions in detail. Note that new entities and relationships can also be identified while forming the MST. These extra entities and relationships are added to the FROM clause in the SQL query. We now have a complete SQL query.6 Experiments and Discussions
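To illustrate the join-condition step, the sketch below builds a toy ER graph and extracts a non-cyclic subgraph connecting the identified entities and relationships; the schema shown is a simplified assumption, and a Steiner-tree approximation is used here as one concrete way to realise the MST-style connection described above.

import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy ER graph for the academic domain (entities and relationships as vertices);
# this schema is an assumption, not the full university database.
er = nx.Graph()
er.add_edges_from([
    ("STUDENT", "register"), ("register", "COURSES"),
    ("PROFESSOR", "teach"), ("teach", "COURSES"),
    ("STUDENT", "LABS"),
])

def join_path(er_graph, identified):
    # Return a non-cyclic subgraph connecting all identified entities/relationships;
    # its nodes populate the FROM clause and each edge yields a join condition.
    tree = steiner_tree(er_graph, identified)
    return list(tree.nodes), list(tree.edges)

# "Who teaches NLP?" -> identified: PROFESSOR (question class), COURSES, teach (via CI)
nodes, joins = join_path(er, ["PROFESSOR", "COURSES", "teach"])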
2
In this section, we first introduce the MIML framework, and then describe the model architecture we propose for relation extraction, which is shown in Figure 1 .In MIML, the set of text sentences for the single entity pair or multiple entity pairs 2 (maximum two entity pairs in this paper) is denoted by X = {x 1 , x 2 , ..., x n }. Assumed that there are E predefined relations (including NA) to extract. Formally, for each relation r, the prediction target is denoted by P (r|x 1 , ..., x n ).Input Representation: For each sentence x i , we use pretrained word embeddings to project each word token onto the d w -dimensional space. We adopt the position features as the combinations of the relative distances from the current word to M entities and encode these distances in M d p -dimensional vectors 3 . For single entity pair relation extraction, M = 2; for multiple entity pairs relation extraction, we limit the maximum number of entities in a sentence to four (i.e. two entity pairs). As three entities in one instance is possible when two tuples have a common entity, we set the relative distance to the missing entity to a very large number. Finally, each sentence is transformed into a matrixx i = {w 1 , w 2 , ..., w L } ∈ R L×V ,where L is the sentence length with padding andV = d w + d p * M .Bi-LSTM Layer: We make use of LSTMs to deeply learn the semantic meaning of a sentence. We concatenate the current memory cell hidden state vector h t of LSTM from two directions as the output vectorh t = [ − → h t , ← − h t ] ∈ R 2Bat time t, where B denotes the dimensionality of LSTM.We import word-level attention mechanism as only a few words in a sentence that are relevant to the relation expressed (Jat et al., 2018) . The scoring function is g t = h t × A × r, where A ∈ R E×E is a square matrix and r ∈ R E×1 is a relation vector. Both A and r are learned. After obtaining g t , we feed them to a softmax function to calculate the final importance α t = sof tmax(g t ). Then, we get the representationx t = α t h t .For a given bag of sentences, the learning is done using the setting proposed by (Zeng et al., 2015) , where the sentence with highest probability of expressing the relation in a bag is selected to train the model in each iteration.Primary Capsule Layer: Suppose u i ∈ R d denotes the instantiated parameters set of a capsule, where d is the dimension of the capsule. Let W b ∈ R 2×2B be the filter shared across different windows. We have a window sliding each 2-gram vector in the sequencex ∈ R L×2B with stride 1 to produce a list of capsules U ∈ R (L+1)×C×d , totally C × d filters.EQUATIONwhere 0 ≤ i ≤ C × d, 0 ≤ j ≤ L + 1, squash(x) = ||x|| 2 0.5+||x|| 2 x ||x|| , bc j|i =â j|i • sof tmax(b j|i ) 6:for all capsule j in layer l + 1 do7: v j = squash( i c j|iûj|i ), a j = ||v j || 8:for all capsule i in layer l and capsule j in layer l + 1 do 9:b j|i = b j|i +û j|i • v j return v j , a jDynamic Routing: We explore the transformation matrices to generate the prediction vector u j|i ∈ R d from a child capsule i to its parent capsule j. The transformation matrices share weights W c ∈ R E×d×d across the child capsules, where E is the number relations (parent capsules) in the layer above. Formally, each corresponding vote can be computed by:u j|i = W c j u i +b j|i ∈ R d (2)The basic idea of dynamic routing is to design a nonlinear map:{û j|i ∈ R d } i=1,...,H,j=1,...,E → {v j ∈ R d } E j=1where H = (L + 1) × C. 
Inspired by (Zhao et al., 2018), we use the probability of existence of the parent capsules to iteratively amend the connection strengths, as summarized in Algorithm 1. The length of the vector $v_j$ represents the probability of each relation. We use a separate margin loss $L_k$ for each relation capsule $k$: $L_k = Y_k \max(0, m^+ - \|v_k\|)^2 + \lambda (1 - Y_k) \max(0, \|v_k\| - m^-)^2$ (3), where $Y_k = 1$ if relation $k$ is present, $m^+ = 0.9$, $m^- = 0.1$ and $\lambda = 0.5$. The total loss is $L_{total} = \sum_{k=1}^{E} L_k$. For single-entity-pair relation extraction, we take the length of the vector $v_j$ as the probability of each relation. For multiple-entity-pair relation extraction, we choose the two relations with the highest probabilities, provided they exceed a threshold (empirically set to 0.7); this yields one or two predicted relations $r$. Given an entity pair $(e_1, e_2)$, to decide which relation the tuple belongs to, we adopt the pretrained embeddings of entities and relations 4 and compute $r^* = \arg\min_k \|t - h - r_k\|$, where $t$ and $h$ are the embeddings of entities $e_1$ and $e_2$ respectively and $r_k$ is the relation embedding. The relation whose embedding is closest to the difference of the entity embeddings is the predicted category.
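For concreteness, here is a minimal PyTorch sketch of the squash function, a routing loop, the margin loss of Eq. (3), and the embedding-based relation selection; the routing shown is the standard agreement update, whereas Algorithm 1 additionally weights updates by the parent capsule's probability of existence, and all tensor shapes are assumptions.

import torch

def squash(x, dim=-1, eps=1e-8):
    # squash(x) = (||x||^2 / (0.5 + ||x||^2)) * x / ||x||, as used above.
    n2 = (x ** 2).sum(dim, keepdim=True)
    return (n2 / (0.5 + n2)) * x / (n2.sqrt() + eps)

def dynamic_routing(u_hat, iters=3):
    # u_hat: (H, E, d) votes from H child capsules to E relation capsules.
    H, E, _ = u_hat.shape
    b = torch.zeros(H, E)
    for _ in range(iters):
        c = torch.softmax(b, dim=1)                       # coupling coefficients
        v = squash((c.unsqueeze(-1) * u_hat).sum(0))      # (E, d) relation capsules
        b = b + (u_hat * v.unsqueeze(0)).sum(-1)          # agreement update
    return v

def margin_loss(v, y, m_pos=0.9, m_neg=0.1, lam=0.5):
    # Separate margin loss per relation capsule (Eq. 3); y is a multi-hot label vector.
    lengths = v.norm(dim=-1)
    pos = y * torch.clamp(m_pos - lengths, min=0) ** 2
    neg = lam * (1 - y) * torch.clamp(lengths - m_neg, min=0) ** 2
    return (pos + neg).sum()

def pick_relation(t, h, rel_embs):
    # For an entity pair, choose the relation whose embedding is closest to t - h.
    return int(torch.argmin((t - h - rel_embs).norm(dim=-1)))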
2
The system we developed to evaluate our research idea was completed in collaboration with an in-dustrial partner, an online news company 1 . The purpose of the "#GE11 Twitter Tracker" was to allow users, and our partner's journalists, to tap into the content on Twitter pertaining to the election, through an accessible dashboard-style interface. To that end, the "Twitter Tracker" featured a number of abstractive and extractive summarization approaches as well as a visualisation of volume and sentiment over time (see Figure 1) .The Irish General Election took place on 25th February, 2011. Between the 8th of February and the 25th we collected 32,578 tweets relevant to the five main parties: Fianna Fáil (FF), the Green Party, Labour, Fine Gael (FG) and Sinn Féin (SF). We identified relevant tweets by searching for the party names and their abbreviations, along with the election hashtag, #ge11. For the purposes of the analysis presented here, we do not consider the independent candidates or the minority parties 2 . Tweets reporting poll results were also filtered out.The standard measure of error in predictive forecasting is Mean Absolute Error (MAE), defined as the average of the errors in each forecast:M AE = 1 n n i=1 |e i | (1)where n is the number of forecasts (in our case 5) and e i is the difference in actual result and predicted result for the i th forecast. MAE measures the degree to which a set of predicted values deviate from the actual values. We use MAE to compare Twitter-based predictions with polls as well as with the results of the election. To provide a reference point for our analysis, we use nine polls which were commissioned during the election. These polls guarantee accuracy to within a margin of 3% and in comparison to the final election results, had an average MAE of 1.61% with respect to the five main parties. There have been varying reports for Twitter-based predictions in the literature where the observed error can vary from very low (1.65%) (Tumasjan et al., 2010) to much higher (17.1% using volume, 7.6% using sentiment) (Gayo-Avello et al., 2011).It is reasonable to assume that the percentage of votes that a party receives is related to the volume of related content in social media. Larger parties will have more members, more candidates and will attract more attention during the election campaign. Smaller parties likewise will have a much smaller presence. However, is this enough to reflect a popularity at a particular point in time, or in a given campaign? Is measuring volume susceptible to disproportionate influence from say a few prominent news stories or deliberate gaming or spamming? We define our volume-based measure as the proportional share of party mentions in a set of tweets for a given time period:SoV (x) = |Rel(x)| n i=1 |Rel(i)| (2)where SoV (x) is the share of volume for a given party x in a system of n parties and |Rel(i)| is the number of tweets relevant to party i. This formula has the advantage that the score for the parties are proportions summing to 1 and are easily compared with poll percentages. The sets of documents we use are:• Time-based: Most recent 24 hours, 3 days, 7 days• Sample size-based: Most recent 1000, 2000, 5000 or 10000 tweets• Cumulative: All of the tweets from 8th February 2011 to relevant time• Manual: Manually labelled tweets from pre-8th February 2011When we draw comparison with a poll from a given date, we assume that tweets up until midnight the night before the date of the poll may be used. 
The volume of party mentions was approximately consistent in the approach to the election, meaning the cumulative volume function over time is linear and monotonically increasing.Our previous research has shown that supervised learning provides more accurate sentiment analysis than can be provided by unsupervised methods such as using sentiment lexicons (Bermingham and Smeaton, 2010). We therefore decided to use classifiers specifically trained on data for this Positive Negative Neutral Mixed Total Week 1 255 1,248 1,218 47 2,768 Week 2 629 1,289 2,411 106 4,435 Total 884 2,537 3,629 153 7,203 Table 1 : Annotation counts election. On two days, a week apart before the 8th of February 2011, we trained nine annotators to annotate sentiment in tweets related to parties and candidates for the election. The tweets in each annotation session were taken from different time periods in order to develop as diverse a training corpus as possible.We provided the annotators with detailed guidelines and examples of sentiment. Prior to commencing anntoation, annotators answered a short set of sample annotations. We then provided the gold standard for these annotations (determined by the authors) and each answer was discussed in a group session. We instructed annotators not to consider reporting of positive or negative fact as sentiment but that sentiment be one of emotion, opinion, evaluation or speculation towards the target topic. Our annotation categories consisted of three sentiment classes (positive, negative, mixed), one non-sentiment class (neutral) and the 3 other classes (unannotatable, non-relevant, unclear) . This is in line with the definition of sentiment proposed in (Wilson et al., 2005) .We disregard unannotatable, non-relevant and unclear annotations. A small subset (3.5%) of the documents were doubly-annotated. The interannotator agreement for the four relevant classes is 0.478 according to Krippendorff's Alpha, a standard measure of inter-annotator agreement for many annotators (Hayes and Krippendorff, 2007) . We then remove duplicate and contradictory annotations leaving 7,203 document-topic pairs (see Table 1 ). Approximately half of the annotations contained sentiment of some kind.The low level of positive sentiment we observe is striking, representing just 12% of the documenttopic pairs. During this election, Ireland was in a period of economic crisis and negative political sentiment dominated the media and public mood. This presents a difficulty for supervised learning. With few training examples, it is difficult for the learner to identify minority classes. To mitigate this effect, when choosing our machine learning algorithm we optimise for F-measure which balances precision and recall across the classes. We Table 2 : Accuracy for 3-class sentiment classification disregard the mixed annotations as they are few in number and ambiguous in nature.Our feature vector consists of unigrams which occur in two or more documents in the training set. The tokenizer we use (Laboreiro et al., 2010) is optimised for user-generated content so all sociolinguistic features such as emoticons (":-)") and unconventional punctuation ("!!!!") are preserved. These features are often used to add tone to text and thus likely to contain sentiment information. 
We remove all topic terms, usernames and URLs to prevent any bias being learned towards these.Unsatisfied with the recall from either Support Vector Machines (SVM) or Multinomial Naive Bayes (MNB) classifiers, we evaluated a boosting approach which, through iterative learning, upweights training examples from minority classes, thus improving recall for these classes. We used Freund and Schapire's Adaboost M1 method with 10 training iterations as implemented in the Weka toolkit 3 (Freund and Schapire, 1996) . Following from this, we use an Adaboost MNB classifier which achieves 65.09% classification accuracy in 10-fold cross-validation for 3 classes (see Table 2 ).It is difficult to say how best to incorporate sentiment. On the one hand, sentiment distribution in the tweets relevant to a single party is indicative of the sentiment towards that party. For example, if the majority of the mentions of a party contain negative sentiment, it is reasonable to assume that people are in general negatively disposed towards that party. However, this only considers a party in isolation. If this negative majority holds true for all parties, how do we differentiate public opinion towards them? In a closed system like an election, relative sentiment between the parties perhaps has as much of an influence.To address the above issues, we use two novel measures of sentiment in this study. For inter-party sentiment, we modify our volume-based measure, SoV , to represent the share of positive volume, SoV p , and share of negative volume, SoV n :EQUATIONFor intra-party sentiment, we use a log ratio of sentiment:EQUATIONThis gives a single value for representing how positive or negative a set of documents are for a given topic. Values for Sent(x) are positive when there are more positive than negative documents, and negative when there are more negative than positive for a given party. 1 is added to the positive and negative volumes to prevent a division by zero. The inter-party share of sentiment is a proportional distribution and thus prediction error can be easily measured with M AE. Also, as it is nonparametric it can be applied without any tuning. We fit a regression to our inter-party and intraparty measures, trained on poll data. This takes the form:y(x) = β v SoV (x) + β p SoV p (x) + β n SoV n (x) +β s Sent(x) + εThis builds on the general model for sentiment proposed in (Asur and Huberman, 2010) . The purpose of fitting this regression is threefold. Firstly, we wish to identify which measures are the most predictive and confirm our assumption that both sentiment and proportion of volume have predictive qualities. Secondly, we want to compare the predictive capabilities of our two sentiment measures. Lastly, we want to identify under optimum conditions how a Twitter-model for political sentiment could predict our election results. For many applications there is little to be gained from measuring sentiment without being able to explain the observed values. We conclude our study with a suggestion for how such sentiment data may be used to explore Twitter data qualitatively during an election.
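The volume and sentiment measures above reduce to a few lines of code; the sketch below implements MAE, SoV and the intra-party log-ratio sentiment, with purely illustrative counts rather than figures from the paper.

import math

def mae(predicted, actual):
    # Mean absolute error between predicted and actual vote shares (Eq. 1).
    return sum(abs(predicted[p] - actual[p]) for p in actual) / len(actual)

def share_of_volume(counts):
    # SoV(x): a party's proportional share of relevant (or positive/negative) mentions.
    total = sum(counts.values())
    return {party: c / total for party, c in counts.items()}

def intra_party_sentiment(pos, neg):
    # Sent(x) = log((pos + 1) / (neg + 1)); positive when positive tweets dominate.
    return math.log((pos + 1) / (neg + 1))

# Illustrative mention counts only, not figures from the paper.
mentions = {"FF": 6200, "FG": 9800, "Labour": 7400, "SF": 5400, "Green": 3778}
print(share_of_volume(mentions))
print(intra_party_sentiment(pos=120, neg=480))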
2
In this section, we describe the details of our proposed methods, including data preprocessing, neural networks and ensemble strategy.As data released by WASSA2018 is crawled from the internet, raw tweets may contain a lot of useless (even misleading) information, such as some punctuations and abbreviations. Therefore, we perform a few preprocessing steps to improve the quality of raw data for the ongoing study: (1) The positions in the tweets where the emotion words have been removed are marked with [#TRIG-GERWORD#] (see Figure 1 ), so we remove them from the raw data. (2) We remove the useless link "http : //url.removed" and some meaningless punctuations such as semicolon and colon. 3We restore some abbreviations in the tweets, e.g., substituting "have" for "'ve". (4) All characters are then transformed into lowercase. 5The TweetTokenizer 2 tool is used to split tweets into a list of words. We try to remove stopwords via nltk.corpus 3 , but there is no performance improvement, so we ignore this processing. Our models consist of an embedding layer, a L-STM or BiLSTM layer, an attention layer and two dense layers. Figure 2 shows the architecture of the BiLSTM-Attention model. For the LSTM-Attention model, it shares the same architecture with the BiLSTM-Attention model, except that the BiLSTM layer is replaced with the LSTM layer.To extract the semantic information of tweets, each tweet is firstly represented as a sequence of word embeddings. Denote s as a tweet with n words and each word is mapping to a global vector (Mikolov et al., 2013) , then we have:EQUATIONwhere vector e i represents the vector of i-th word with a dimension of d. The vectors of word embeddings are concatenated together to maintain the order of words in a tweets. Consequently, it can overcome deficits of bag-of-words techniques. For our methods, Word2vec-twitter-model, a pretrained word embedding model using Word2vec technique (Mikolov et al., 2013) on tweets is exploited. The embedding dimension of Word2vectwitter-model is d=400.In this emotion classification task, we model the twitter messages using Recurrent Neural Network (RNN), to be exact, we respectively examine L-STM and Bidirectional LSTM (Zeng et al., 2016) to process the tweets. LSTM firstly introduced by (Hochreiter and Schmidhuber, 1997) has proven to be stable and powerful for modeling long-time dependencies in various scenarios such as speech recognition and machine translations. Bidirectional LSTM (Graves and Schmidhuber, 2005; Graves et al., 2013) is an extension of traditional LST-M to train two LSTMs on the input sequence.The second LSTM is a reversed copy of the first one, so that we can take full advantage of both past and future input features for a specific time step. We train both LSTM and Bidirectional LST-M networks using back-propagation through time (BPTT) (Chen and Huo, 2016) . After the embedding layer, the sequence of word vectors is fed into a single-layer LSTM or Bidirectional L-STM to achieve another representation of h = LST M/BiLST M (s). In order to maintain consistency of dimensions, the number of neurons is configured as 400 in both the LSTM Layer and the BiLSTM Layer.Generally, not all words in a tweet contribute equally to the representation of tweet, so we leverage word attention mechanism to capture the distinguished influence of the words on the emotion of tweet, and then form a dense vector (Yang et al., 2017) considering the weights of different word vectors. 
Specifically, we have:u ti = tanh(W h ti + b), α ti = exp(u T ti uw) n j=1 exp(u T tj uw) , s t = i α ti h ti .(2) t represents t-th tweet, i represents i-th word in the tweet and n is the number of words in a tweet. h ti represents the word annotation of the i-th word in the t-th tweet which fed to a one-layer MLP to get u ti as a hidden representation of h ti . More specifically h ti is the concatenation output of the LST-M/BiLSTM layer in our model. W is a weight matrix of the MLP, and b is a bias vector of the MLP. Then we measure the importance of words through the similarity between u ti and a word level context vector u w which is randomly initialized. And after that, we get a normalized importance weight α ti through a softmax function. α ti is the weight of the i-th word in the t-th tweet. The bigger α ti is, the more important the i-th word is The attention layer is followed by two dense layers with different sizes of neurons. The output of attention layer is fed into the first dense layer with 400 hidden neurons. The activation function of this layer is tanh. And in order to avoid potential overfitting problem, dropout is utilized between these two dense layers. And we try different dropout rates to find the best configurations. The output is then fed into the second dense layer with 6 hidden neurons, and the activation function in this layer is softmax. So we can obtain the probability that the excluded word belongs to each of the six classes.Ensemble strategies (Dietterich, 2000) have been widely used in various research fields because of their ascendant performance. Ensemble strategies train multiple learners and then combine them to achieve a better predictive performance. Many ensemble strategies have been proposed, such as Voting, Bagging, Boosting, Blending, etc 4 . In our methods, a simple but efficient ensemble strategy called soft voting is utilized. It means that for a classification problem, soft voting returns the class label of the maximum of the weighted sum of the predicted probabilities. We assign a weight equally to each classifier, then the probability that a sample belongs to a certain class is the weighted sum of probabilities that this sample belongs to this class predicted by all classifiers. And the class with the highest probability is the final classification result. It can be defined as Eq.3 (Zhou, 2012):EQUATIONi represents i-th classifier, T is the total number of classifier. j is the class label where j is an integer between 0 and 5, because there are 6 classes in our task.x is a sample. h j i (x) represents the i-th classifier's predictive probability towards the sample x on the j-th class label, it is a probability which is between 0 and 1. Finally, H j (x) represents the probability that the sample x belongs to j-th class after ensembling.
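A hedged Keras-style sketch of the BiLSTM-Attention classifier and the soft-voting ensemble described above; the sequence length and dropout rate are assumptions, and the attention layer follows Eq. (2).

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

class WordAttention(layers.Layer):
    # Word-level attention (Eq. 2): u = tanh(W h + b), alpha = softmax(u . u_w),
    # s = sum_i alpha_i * h_i.
    def build(self, input_shape):
        d = int(input_shape[-1])
        self.W = self.add_weight(name="W", shape=(d, d), initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(d,), initializer="zeros")
        self.u_w = self.add_weight(name="u_w", shape=(d, 1), initializer="glorot_uniform")
    def call(self, h):
        u = tf.tanh(tf.matmul(h, self.W) + self.b)            # (batch, n, d)
        alpha = tf.nn.softmax(tf.matmul(u, self.u_w), axis=1)  # (batch, n, 1)
        return tf.reduce_sum(alpha * h, axis=1)                # (batch, d)

def build_model(vocab_size, emb_dim=400, max_len=50, dropout=0.5):
    # Embedding -> BiLSTM (400 units) -> attention -> Dense(400, tanh) -> dropout
    # -> Dense(6, softmax), mirroring the architecture described above.
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(vocab_size, emb_dim),
        layers.Bidirectional(layers.LSTM(400, return_sequences=True)),
        WordAttention(),
        layers.Dense(400, activation="tanh"),
        layers.Dropout(dropout),
        layers.Dense(6, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    return model

def soft_vote(probabilities):
    # Soft voting (Eq. 3): average the classifiers' predicted probabilities,
    # then pick the class with the highest averaged probability.
    return np.argmax(np.mean(np.stack(probabilities), axis=0), axis=-1)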
2
We explore patterns by alternating between analysis based on linguistic and logical knowledge and computational analysis and synthesis using the tools of the workbench. In this paper, we began with an analytic phase, transforming the English description of a pattern into a logical description. During this translation process, one can use the abstract characterizations of the complexity classes as a guide to the form needed to express a given constraint. Assuming this translation is successful, these constraints provide an upper-bound on the complexity of the pattern as a whole.Alternatively, for patterns already described by automata, one can start with a computational analytic phase, using the workbench to extract systems of SL, SP, coSL and coSP constraints from those automata (Rogers and Lambert, to appear) . If the pattern is not simply SL+SP+coSL+coSP the result of these computational methods is an approximation that is.Another alternative is to work directly from a corpus of annotated examples using learning algorithms based on , which are currently being incorporated into the workbench. Again, the result is a (possibly exact) approximation that is SL + SP + coSL + coSP.In all three cases, the workbench implements a computational synthesis phase which represents these systems of constraints as automata. If an automaton is provided for the pattern the correctness and completeness of the constraints can be checked computationally against this automaton by constructing an automaton that recognizes the symmetric difference between the two, which can either be examined directly or used to generate examples of strings that satisfy the constraints but should be excluded or those that should not be excluded but fail to satisfy the constraints. If there is no existing automaton to work against, one can generate strings up to a given length-bound and look for inconsistencies. In any case, if there is under-or overgeneration the structures of these residues guide a return to the analytical phase, adding or modifying constraints in order to account for the differences.Once the conjunction of logical constraints correctly describes the pattern in question, the workbench can minimize the description by removing constraints that are logically implied by others. Because these subregular classes form a proper hierarchy and are all closed under intersection, the complexity of the stringset is simply the maximum of the complexity of the constraints that describe it.Our workbench can find all of the minimal independent subsets of a set of constraints constraints that describe the same pattern as the full set. However, if a constraint of higher complexity is implied by a set of lower-complexity constraints a smaller subset in which the higher-complexity constraint is explicit will be not be an accurate indication of the complexity of the stringset, rather a larger set of lower-complexity constraints would be preferred. So the workbench can also check sets of constraints provided by the user, such as the constraints identified earlier in the process either from analysis of other patterns or from theoretical analysis of the linguistic phenomena under study, against the rest of the set and find independent subsets that are minimally complex. It should be noted that minimal descriptions from a linguistic perspective may well not be the same as the minimal descriptions from a complexity-theoretic perspective. But as long as they are logically equivalent, the complexity result is still valid. 
One can use this same mechanism to determine whether a pattern satisfies a putative universal constraint by merely checking whether this constraint is implied by those that describe the pattern in question. When the corpus of patterns of StressTyp2 was tested for both obligatoriness and culminativity, it was discovered that while every lect satisfies culminativity, there are two that do not satisfy obligatoriness, namely Seneca and Cayuga. These are languages that Hyman identifies as not satisfying "the more accent-like properties of obligatoriness and culminativity" (Hyman, 2009). When working with a set of lects, one will collect a library of non-strict constraints. The workbench can use this to automatically determine if a new pattern can be completely described by a conjunction of strict constraints and some subset of this library. When the StressTyp2 corpus was analyzed, only five non-strict constraints were needed to describe the entire set of patterns. 10 Yidin: An example. Yidin is described as: (5) a. In words of all sizes, primary stress falls on the left-most heavy syllable, else on the initial syllable. b. In words of all sizes, secondary stress falls iteratively on every second syllable in both directions from the main stress. c. Light monosyllables do not occur. Table 1 shows the constraints we derived from this description. The left column is an English gloss of the constraint, while the remaining two columns note the complexity class in which the constraint falls on both the local and piecewise branches of the hierarchy. Table 1 (constraint: local class, piecewise class): ▹ < one σ (LTT1,2; PT2), obligatoriness (coSL1; PT1), culminativity (LTT1,2; SP2), no H before H (SF; SP2), no *H with Ĺ (LT1; SP2), nothing before Ĺ (SL2; SP2), alternation (SL2; SF), no light monosyllables (SL3; PT2). Alternation, SF on the piecewise side, is only SL2 on the local side. Similarly, "no H before H" is SF on the local side, but only SP2 on the piecewise side. Thus, considering either branch of the hierarchy in isolation brings the conclusion that the pattern is SF, but using a mix of constraints from both sides shows that the pattern is SL3 + coSL1 + SP2.
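To give a feel for what the lowest classes in the hierarchy involve, the sketch below checks a strictly k-local (SL_k) constraint as a set of forbidden k-factors over a boundary-padded string; the stress alphabet and the particular factors used to illustrate alternation are our assumptions, not the workbench's encoding.

def satisfies_sl(word, forbidden, k):
    # A strictly k-local (SL_k) constraint holds iff none of the forbidden
    # k-factors occurs in the word, padded with boundary markers.
    padded = ">" + word + "<"
    factors = {padded[i:i + k] for i in range(len(padded) - k + 1)}
    return factors.isdisjoint(forbidden)

# Illustrative SL_2 rendering of "alternation": no two adjacent stressed syllables
# and no two adjacent unstressed syllables, over a toy alphabet where
# "S" marks a stressed syllable and "s" an unstressed one.
ALTERNATION = {"SS", "ss"}
print(satisfies_sl("SsSs", ALTERNATION, k=2))   # True
print(satisfies_sl("Sss", ALTERNATION, k=2))    # False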
2
For the LF task, it was straightforward to turn dependency structures into LFs. Since the LF formalism does not attempt to represent the more subtle aspects of semantics, such as quantification, intensionality, modality, or temporality (Rus, 2002) , the primary information encoded in a LF is based on argument structure, which is already well captured by the dependency parses. Our LF generator traverses the dependency structure, turning POStagged lexical items into LF predicates, creating referential variables for nouns and verbs, and using dependency labels to order the arguments for each predicate. We make one change to the dependency graphs originally produced by the parser. Instead of taking coordinators, such as and, to modify the constituents they coordinate, we take the coordinated constituents to be arguments of the coordinator.Our LF generator builds a labeled directed graph from a dependency structure and traverses this graph depth-first. In general, a well-formed dependency graph has exactly one root node, which corresponds to the main verb of the sentence. Sentences with multiple independent clauses may have one root per clause. The generator begins traversing the graph at one of these root nodes; if there is more than one, it completes traversal of the subgraph connected to the first node before going on to the next node.The first step in processing a node-producing an LF predicate from the node's lexical item-is taken care of in the graph-building stage. We use a base form dictionary to get the base form of the lexical item and a simple mapping of Penn Treebank tags into 'n', 'v', 'a', and 'r' to get the suffix. For words that are not tagged as nouns, verbs, adjectives, or adverbs, the LF predicate is simply the word itself.As the graph is traversed, the processing of a node depends on its type. The greatest amount of processing is required for a node corresponding to a verb. First, a fresh referential variable is generated as the event argument of the verbal predication. The out-edges are then searched for nodes to process. Since the order of arguments in an LF predication is important and some sentence constitutents are ignored for the purposes of LF, the out-edges are chosen in order by label: first particles ('VP|PRT'), then arguments ('S|NP-SBJ', 'VP|NP', etc.), and finally adjuncts. We attempt to follow the argument order implicit in the description of LF given in (Rus, 2002) , and as the formalism requires, we ignore auxiliary verbs and negation. The processing of each of these arguments or adjuncts is handled recursively and returns a set of predications. For modifiers, the event variable also has to be passed down. For referential arguments and adjuncts, a referential variable also is returned to serve as an argument for the verb's LF predicate. Once all the arguments and adjuncts have been processed, a new predication is generated, in which the verb's LF predicate is applied to the event variable and the recursively generated referential variables. This new predication, along with the recursively generated ones, is returned.The processing of a nominal node proceeds similarly. A fresh referential variable is generatedsince determiners are ignored in the LF formalism, it is simply assumed that all noun phrases correspond to a (possibly composite) individual. Outedges are examined for modifiers and recursively processed. Both the referential variable and the set of new predications are returned. 
Noun compounds introduce some additional complexity; each modifying noun introduces two additional variables, one for the modifying noun and one for composite individual realizing the compound. This latter variable then replaces the referential variable for the head noun.Processing of other types of nodes proceeds in a similar fashion. For modifiers such as adjectives, adverbs, and prepositional phrases, a variable (corresponding to the individual or event being modified) is passed in, and the LF predicate of the node is applied to this variable, rather than to a fresh variable. In the case of prepositional phrases, the predicate is applied to this variable and to the variable corresponding to the object of the preposition, which must be processed, as well. The latter variable is then returned along with the new predications. For other modifiers, just the predications are returned.The rules for handling dependency labels were written by hand. Of the roughly 1100 dependency labels that the parser assigns (see Section 2), our system handles 45 labels, all of which fall within the most frequent 135 labels. About 50 of these 135 labels are dependencies that can be ignored in the generation of LFs (labels involving punctuation, determiners, auxiliary verbs, etc.); of the remaining 85 labels, the 45 labels handled were chosen to provide reasonable coverage over the sample corpus provided by the task organizers. Extending the system is straightforward; to handle a dependency label linking two node types, a rule matching the label and invoking the dependent node handler is added to the head node handler.On the sample corpus of 50 sentences to which our system was tuned, predicate identification, compared to the provided LFs, including POS-tags, was performed with 89.1% precision and 87.1% recall. Argument identification was performed with 78.9% precision and 77.4% recall. On the test corpus of 300 sentences, our official results, which exclude POS-tags, were 82.0% precision and 78.4% recall for predicate identification and 73.0% precision and 69.1% recall for argument identification.We did not get the gold standard for the test corpus in time to perform error analysis for our official submission, but we did examine the errors in the LFs we generated for the trial corpus. Most could be traced to errors in the dependency parses, which is unsurprising, since the generation of LFs from dependency parses is relatively straightforward. A few errors resulted from the fact that our system does not try to identify multi-word compounds.Some discrepancies between our LFs and the LFs provided for the trial corpus arose from apparent inconsistencies in the provided LFs. Verbs with particles were a particular problem. Sometimes, as in sentences 12 and 13 of the trial corpus, a verb-particle combination such as look forward to is treated as a single predicate (look forward to); in other cases, such as in sentence 35, the verb and its particle (go out) are treated as separate predicates. Other inconsistencies in the provided LFs include missing arguments (direct object in sentence 24), and verbs not reduced to base form (felt, saw, and found in sentences 34, 48, 50).
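A minimal sketch of the depth-first LF generation described above, run on a toy dependency fragment; the node format, dependency handling and example sentence are assumptions made for illustration.

import itertools

POS_SUFFIX = {"NN": "n", "VB": "v", "JJ": "a", "RB": "r"}

def predicate(base, tag):
    # Map a base form and Penn tag to an LF predicate, e.g. ("see", "VBD") -> "see:v".
    suffix = POS_SUFFIX.get(tag[:2])
    return f"{base}:{suffix}" if suffix else base

def lf_from_node(node, var_counter, predications):
    # Depth-first traversal: nouns and verbs get fresh referential variables;
    # arguments are processed recursively and their variables fill argument slots.
    v = f"x{next(var_counter)}"
    args = [lf_from_node(child, var_counter, predications)
            for child in node.get("args", [])]
    predications.append((predicate(node["base"], node["tag"]), [v] + args))
    return v

# Toy dependency fragment for "John saw Mary" (structure and labels are assumptions).
tree = {"base": "see", "tag": "VBD",
        "args": [{"base": "John", "tag": "NNP"}, {"base": "Mary", "tag": "NNP"}]}

preds = []
lf_from_node(tree, itertools.count(1), preds)
# preds: [('John:n', ['x2']), ('Mary:n', ['x3']), ('see:v', ['x1', 'x2', 'x3'])]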
2
After paraphrase extraction we have paraphrase pairs (f_1, f_2) together with a score S(f_1, f_2), and we can induce new translation rules for OOV phrases using the steps in Algo. (1): 1) a graph of source phrases is constructed as in (Razmara et al., 2013); 2) translations are propagated as labels through the graph as explained in Fig. 2; and 3) the new translation rules obtained from graph propagation are integrated with the original phrase table.

We construct a graph G(V, E, W) over all source phrases in the paraphrase database and the source-language phrases from the SMT phrase table extracted from the available parallel data. V corresponds to the set of vertices (source phrases), E is the set of edges between phrases, and W assigns each edge a weight using the score function S defined in Sec. 2. V has two types of nodes: seed (labeled) nodes V_s, taken from the SMT phrase table, and regular nodes V_r. Note that OOVs are among these regular nodes, and in the propagation step we try to find translations for all of the regular nodes: during graph construction and propagation we do not know which phrasal nodes will correspond to OOVs in the dev and test sets. Fig. 2 shows a small slice of the actual graph used in one of our experiments; this graph is constructed using the paraphrase database on the right side of the figure. Filled nodes have a distribution over translations (the possible "labels" for that node). In our setting, we consider the translation e to be the "label", so we take the labeling distribution p(e|f) from the corresponding feature function of the SMT log-linear model (i.e., from the SMT phrase table) and propagate this distribution to the unlabeled nodes in the graph.

Considering the translation candidates of known phrases in the SMT phrase table as the "labels", we apply a soft label propagation algorithm in order to assign translation candidates to the "unlabeled" nodes in the graph, which include our OOV phrases. As described by the example in Fig. 2, we want two outcomes: 1) transfer of translations (or "labels") from labeled nodes to unlabeled nodes (OOV phrases), and 2) smoothing of the label distribution at each node. We use the Modified Adsorption (MAD) algorithm (Talukdar and Crammer, 2009) for graph propagation. Suppose we have m different possible labels plus one dummy label; a soft label Ŷ_v ∈ Δ^{m+1} is then an (m+1)-dimensional probability vector. The dummy label is used when there is low confidence in the correct labels. Following MAD, we find soft label vectors for each node by optimizing the objective function below:

    min_Ŷ Σ_{v∈V} [ μ_1 P_{1,v} ||Ŷ_v − Y_v||² + μ_2 P_{2,v} Σ_u W_{vu} ||Ŷ_v − Ŷ_u||² + μ_3 P_{3,v} ||Ŷ_v − R_v||² ]

In this objective function, the μ_i and P_{i,v} are hyperparameters (∀v : Σ_i P_{i,v} = 1), and R_v ∈ Δ^{m+1} is our prior belief about the labeling. The first component keeps the new distribution close to the original distribution for the seed nodes; the second component ensures that nearby neighbours have similar distributions; and the final component makes sure that the distribution does not stray from the prior distribution. At the end of propagation, we obtain a label distribution for our OOV phrases. We describe in Sec. 4.2.2 the reasons for choosing MAD over other graph propagation algorithms. MAD graph propagation generalizes the approach used in (Razmara et al., 2013). The Structured Label Propagation (SLP) algorithm was used in (Saluja et al., 2014; Zhao et al., 2015), which additionally uses a graph structure over the target-side phrases. However, we found that in our diverse experimental settings (see Sec. 5) MAD had two properties we needed compared to SLP: its use of graph random walks allowed us to control translation candidates, and it can penalize nodes with a large number of edges (also see Sec. 4.2.2).

After propagation, for each potential OOV phrase we have a list of possible translations with corresponding probabilities. A potential OOV is any phrase that does not appear in training but could appear in unseen data; we do not look at the dev or test data to produce the augmented phrase table. The original phrase table is then augmented with new entries providing translation candidates for potential OOVs; the last column in Table 2 shows how many entries were added to the phrase table for each experimental setting. A new feature is added to the standard SMT log-linear discriminative model and introduced into the phrase table. This feature is set to 1.0 for the phrase table entries that already existed, and to the log probability (from graph propagation) of translation candidate i for the entries added for potential OOVs. In case the dummy label has high probability or the label distribution is uniform, an identity rule is added to the phrase table (the source is copied over to the target).

4 Analysis of the Framework
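The propagation step can be pictured with a small numerical sketch. The following is a simplified iterative label propagation over a toy phrase graph; it captures the spirit of spreading p(e|f) from seed nodes to an OOV node, but it is not the full MAD algorithm (it omits the dummy label, the random-walk probabilities P_{i,v}, and the prior term), and the weights and distributions are invented for illustration.

import numpy as np

# Toy graph: nodes 0 and 1 are seed phrases from the phrase table, node 2 is an OOV phrase.
W = np.array([[0.0, 0.4, 0.8],
              [0.4, 0.0, 0.6],
              [0.8, 0.6, 0.0]])              # symmetric edge weights from the paraphrase score S(f_1, f_2)
labels = ["translation_A", "translation_B"]  # target-side translation candidates ("labels")
Y = np.array([[0.7, 0.3],                    # p(e|f) for seed node 0
              [0.2, 0.8],                    # p(e|f) for seed node 1
              [0.0, 0.0]])                   # the OOV node starts with no label distribution
is_seed = np.array([1.0, 1.0, 0.0])

mu_seed, mu_nbr = 1.0, 1.0                   # weights on the seed-fidelity and smoothness terms
Y_hat = Y.copy()
for _ in range(50):                          # iterate the updates until (approximately) converged
    for v in range(len(W)):
        nbr = W[v] @ Y_hat                   # weighted sum of the neighbours' label scores
        denom = mu_seed * is_seed[v] + mu_nbr * W[v].sum()
        Y_hat[v] = (mu_seed * is_seed[v] * Y[v] + mu_nbr * nbr) / denom
        Y_hat[v] /= Y_hat[v].sum()           # renormalize to a probability distribution

print(dict(zip(labels, Y_hat[2].round(3))))  # propagated translation distribution for the OOV phrase

The propagated distribution for the unlabeled node is what would then be turned into new phrase-table entries, with its log probability supplying the additional feature described above.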
2
To demonstrate how the given dataset can be used to classify troll memes, we defined two experiments with four variations of each. We measured the performance of the proposed baselines using precision, recall, and F1-score for each class, i.e., "troll" and "not-troll". We used ResNet (He et al., 2016) and MobileNet (Howard et al., 2017) as baselines for the experiments. We give insights into their architecture and design choices in the sections below.

ResNet won the ImageNet ILSVRC 2015 (Russakovsky et al., 2015) classification task. It is still a popular method for classifying images and uses residual learning, which connects low-level and high-level representations directly through skip connections that bypass the layers in between. This improves the performance of ResNet by mitigating the vanishing gradient problem; the underlying assumption is that a deeper network should not produce a higher training error than a shallow one. In this experiment, we used a ResNet architecture with 176 layers. As it was trained on the ImageNet task, we removed the classification (last) layer and used GlobalAveragePooling in place of a fully connected layer to save computational cost. We then added four fully connected layers, ending with a classification layer that has a sigmoid activation function. This architecture is trained either with or without pre-trained ImageNet weights.

We likewise trained MobileNet with and without ImageNet weights. The model has a depth multiplier of 1.4 and an input dimension of 224×224 pixels. This provides a 1,280 × 1.4 = 1,792-dimensional representation of an image, which is then passed through a single hidden layer of dimensionality 1,024 with ReLU activation, before being passed to a hidden layer with an input dimension of (512, None) and no activation to provide the final representation h_p. The main purpose of MobileNet is to optimize convolutional neural networks for mobile and embedded vision applications. It is less complex than ResNet in terms of the number of hyperparameters and operations. It uses a separate convolutional filter for each channel (depthwise convolution), which allows parallel computation on each channel; the features extracted from these per-channel filters are then combined using a pointwise convolution layer, the two together forming a depthwise separable convolution. We used MobileNet to reduce the computational cost and to compare it with the more computationally intensive ResNet.
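As a concrete illustration of the MobileNet baseline described above, a Keras sketch might look like the following; the choice of MobileNetV2 as the backbone (whose 1,280 × 1.4 = 1,792 feature size matches the description), the optimizer, and the loss are assumptions rather than the authors' exact setup.

import tensorflow as tf

def build_mobilenet_baseline(pretrained: bool = True) -> tf.keras.Model:
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        alpha=1.4,                                   # depth multiplier: 1280 * 1.4 = 1792 features
        include_top=False,
        weights="imagenet" if pretrained else None,  # train with or without ImageNet weights
        pooling="avg",                               # global average pooling over the final feature map
    )
    x = tf.keras.layers.Dense(1024, activation="relu")(backbone.output)
    h_p = tf.keras.layers.Dense(512, activation=None)(x)        # final representation h_p
    out = tf.keras.layers.Dense(1, activation="sigmoid")(h_p)   # troll vs. not-troll
    model = tf.keras.Model(backbone.input, out)
    model.compile(
        optimizer="adam",                            # assumed; not specified in the text
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
    )
    return model

model = build_mobilenet_baseline(pretrained=True)
model.summary()

The ResNet baseline would be assembled analogously, swapping in a ResNet backbone with global average pooling in place of a fully connected layer and stacking four dense layers before the sigmoid classifier.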
2