Introduction
This post is a collection of best practices for using neural networks in Natural Language Processing. It will be updated periodically as new insights become available, in order to keep track of our evolving understanding of Deep Learning for NLP.
There has been a running joke in the NLP community that an LSTM with attention will yield state-of-the-art performance on any task. While this has been true over the course of the last two years, the NLP community is slowly moving away from this now standard baseline and towards more interesting models.
However, we as a community do not want to spend the next two years independently (re-)discovering the next LSTM with attention. We do not want to reinvent tricks or methods that have already been shown to work. While many existing Deep Learning libraries already encode best practices for working with neural networks in general, such as initialization schemes, many other details, particularly task- or domain-specific considerations, are left to the practitioner.
This post is not meant to keep track of the state-of-the-art, but rather to collect best practices that are relevant for a wide range of tasks. In other words, rather than describing one particular architecture, this post aims to collect the features that underlie successful architectures. While many of these features will be most useful for pushing the state-of-the-art, I hope that wider knowledge of them will lead to stronger evaluations, more meaningful comparisons to baselines, and inspiration by shaping our intuition of what works.
I assume you are familiar with neural networks as applied to NLP (if not, I recommend Yoav Goldberg's excellent primer [43]) and are interested in NLP in general or in a particular task. The main goal of this article is to get you up to speed with the relevant best practices so you can make meaningful contributions as soon as possible.
I will first give an overview of best practices that are relevant for most tasks. I will then outline practices that are relevant for the most common tasks, in particular classification, sequence labelling, natural language generation, and neural machine translation.
Disclaimer: Treating something as best practice is notoriously difficult: Best according to what? What if there are better alternatives? This post is based on my (necessarily incomplete) understanding and experience. In the following, I will only discuss practices that have been reported to be beneficial independently by at least two different groups. I will try to give at least two references for each best practice.
Best practices
Word embeddings
Word embeddings are arguably the most widely known best practice in the recent history of NLP. It is well-known that using pre-trained embeddings helps (Kim, 2014) [12]. The optimal dimensionality of word embeddings is mostly task-dependent: a smaller dimensionality works better for more syntactic tasks such as named entity recognition (Melamud et al., 2016) [44] or part-of-speech (POS) tagging (Plank et al., 2016) [32], while a larger dimensionality is more useful for more semantic tasks such as sentiment analysis (Ruder et al., 2016) [45].
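As a minimal sketch of this practice (not from the original post), the following builds an embedding matrix that copies pre-trained vectors where available and falls back to small random vectors for out-of-vocabulary words; the function name, the toy vectors, and the initialization range are all illustrative choices:

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=42):
    """Initialise an embedding matrix from pre-trained vectors.

    `vocab` maps word -> row index; words missing from the
    pre-trained vocabulary keep a small uniform random vector,
    a common fallback for unseen words.
    """
    rng = np.random.default_rng(seed)
    matrix = rng.uniform(-0.05, 0.05, size=(len(vocab), dim))
    for word, idx in vocab.items():
        if word in pretrained:
            matrix[idx] = pretrained[word]
    return matrix

# Toy example with hypothetical 4-dimensional vectors.
pretrained = {"good": np.ones(4), "bad": -np.ones(4)}
vocab = {"good": 0, "bad": 1, "unseen": 2}
emb = build_embedding_matrix(vocab, pretrained, dim=4)
```

The resulting matrix can then be used to initialise the embedding layer of whichever framework you work in, optionally kept frozen or fine-tuned with the rest of the model.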
Depth
While we will not reach the depths of computer vision for a while, neural networks in NLP have become progressively deeper. State-of-the-art approaches now regularly use deep Bi-LSTMs, typically consisting of 3-4 layers, e.g. for POS tagging (Plank et al., 2016) and semantic role labelling (He et al., 2017) [33]. Models for some tasks can be even deeper, cf. Google's NMT model with 8 encoder and 8 decoder layers (Wu et al., 2016) [20]. In most cases, however, the performance gains from making the model deeper than 2 layers are minimal (Reimers & Gurevych, 2017) [46].
These observations hold for most sequence tagging and structured prediction problems. For classification, deep or very deep models perform well only with character-level input; shallow word-level models are still the state-of-the-art (Zhang et al., 2015; Conneau et al., 2016; Le et al., 2017) [28, 29, 30].
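As an illustration (not from the post), a deep Bi-LSTM backbone of the kind used for sequence tagging can be instantiated in a single call in PyTorch; the hyperparameters below (3 layers, 100-dimensional inputs, 256 hidden units per direction) are placeholder choices:

```python
import torch
import torch.nn as nn

# Sketch of a 3-layer Bi-LSTM backbone for sequence tagging.
# All hyperparameters here are illustrative, not prescribed.
lstm = nn.LSTM(
    input_size=100,      # e.g. word embedding dimensionality
    hidden_size=256,     # hidden units per direction
    num_layers=3,        # "deep" in the 3-4 layer sense above
    bidirectional=True,
    batch_first=True,
)

x = torch.randn(8, 20, 100)   # (batch, seq_len, emb_dim)
out, _ = lstm(x)              # (batch, seq_len, 2 * hidden_size)
```

A per-token tag classifier would then be a linear projection of `out`, whose last dimension is twice the hidden size because the forward and backward states are concatenated.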
Layer connections
For training deep neural networks, some tricks are essential to avoid the vanishing gradient problem. Different layers and connections have been proposed. Here, we will discuss three: i) highway layers, ii) residual connections, and iii) dense connections.
Highway layers
Highway layers (Srivastava et al., 2015) [1] are inspired by the gates of an LSTM. First let us assume a one-layer MLP, which applies an affine transformation followed by a non-linearity $g$ to its input $\mathbf{x}$:

$$\mathbf{h} = g(W\mathbf{x} + b)$$
A highway layer then computes the following function instead:
$$\mathbf{h} = \mathbf{t} \odot g(W\mathbf{x} + b) + (1 - \mathbf{t}) \odot \mathbf{x}$$
where $\odot$ is elementwise multiplication, $\mathbf{t} = \sigma(W_T\mathbf{x} + b_T)$ is called the transform gate, and $(1 - \mathbf{t})$