The recent advances introduced by neural machine translation (NMT) are rapidly expanding the application fields of machine translation, as well as reshaping the quality level to be targeted. In particular, if translations have to fit some given layout, quality should be measured not only in terms of adequacy and fluency, but also of length. Exemplary cases are the translation of document files, subtitles, and scripts for dubbing, where the output length should ideally be as close as possible to the length of the input text. This paper addresses, to the best of our knowledge for the first time, the problem of controlling the output length in NMT. We investigate two methods for biasing the output length with a transformer architecture: i) conditioning the output on a given target-source length-ratio class and ii) enriching the transformer positional embedding with length information. Our experiments show that both methods can induce the network to generate shorter translations, while also acquiring interpretable linguistic skills.
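To make the second method more concrete, the sketch below shows one plausible way to enrich a sinusoidal positional embedding with length information: instead of encoding the absolute position, it encodes the number of tokens remaining in the target, so the decoder can track its length budget at every step. This is a minimal illustrative sketch, not the paper's exact formulation; the function names and the countdown variant shown are our own assumptions.

```python
import numpy as np

def sinusoidal_encoding(positions, d_model):
    """Standard transformer sinusoidal encoding for a vector of positions."""
    angles = positions[:, None] / np.power(
        10000.0, 2 * (np.arange(d_model)[None, :] // 2) / d_model
    )
    enc = np.zeros((len(positions), d_model))
    enc[:, 0::2] = np.sin(angles[:, 0::2])  # even dims: sine
    enc[:, 1::2] = np.cos(angles[:, 1::2])  # odd dims: cosine
    return enc

def length_countdown_encoding(target_len, d_model):
    """Hypothetical length-aware variant: encode (target_len - pos),
    i.e. the tokens still available, rather than the absolute position."""
    positions = np.arange(target_len)
    remaining = (target_len - positions).astype(float)  # counts down to 1
    return sinusoidal_encoding(remaining, d_model)

# Example: embeddings for a 10-token target with model dimension 512.
emb = length_countdown_encoding(10, 512)
print(emb.shape)  # (10, 512)
```

Under this countdown formulation the embedding of the final token is identical for every sentence, which is one intuitive way such a signal could let the network learn when to stop generating.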