Posts

Showing posts from January, 2018

Literary Text: What Level of Quality can Neural MT Attain?

Here are some interesting results from guest writer Antonio Toral, who gave us a good broad overview last year of how NMT was doing relative to PBMT. His latest research investigates the potential for NMT to assist with the translation of literary texts. While NMT is still a long way from human quality, it is interesting to note that NMT very consistently beats SMT even at the BLEU score level. At the research level this is a big deal. Given that BLEU scores tend to naturally favor SMT systems, this is especially promising, and the differences are likely to be even more striking when the output is compared by human reviewers. I have also included another short post Antonio did on the detailed human review of NMT vs. SMT output, for those who still doubt that NMT is the most likely way forward for any MT project today. ---------------------- Neural networks have revolutionised the field of Machine Translation (MT). Translation quality has improved drastically over that of the previous dominant ap...
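Since the comparison above hinges on BLEU, here is a minimal sketch of how such a score comparison is typically run, using the sacreBLEU library (the library choice, sentences, and system outputs are my own illustration, not from Antonio's study):

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical single-sentence example: score NMT and PBMT output
# against the same human reference translation.
references = [["The old man closed the book and looked out of the window."]]
nmt_output = ["The old man closed the book and looked out the window."]
pbmt_output = ["The old man closed book and looked out of window ."]

# corpus_bleu takes the system outputs and a list of reference streams.
nmt_bleu = sacrebleu.corpus_bleu(nmt_output, references)
pbmt_bleu = sacrebleu.corpus_bleu(pbmt_output, references)

print(f"NMT  BLEU: {nmt_bleu.score:.1f}")
print(f"PBMT BLEU: {pbmt_bleu.score:.1f}")
```

In real evaluations the lists would contain thousands of aligned segments, and, as noted above, BLEU's n-gram matching tends to reward the more literal phrasing typical of SMT, which is why a consistent NMT win on BLEU is noteworthy.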

2018: Machine Translation for Humans - Neural MT

This is a guest post by Laura Casanellas @LauraCasanellas describing her journey with language technology. She raises some good questions for all of us to ponder over the coming year. Neural MT is all the rage and now appears in almost every translation industry discussion we see today. It is sometimes depicted as a terrible job-killing force and sometimes as a savior, though I would bet it is neither. Hopefully, the hype will subside and we will start focusing on solving the issues that enable high-value deployments. I have been interviewed by a few people about NMT technology in the last month, so expect to see even more on NMT, and we continue to see GAFA and the Chinese/Korean giants (Baidu, Alibaba, Naver) introduce NMT offerings of their own. Open source toolkits for NMT proliferate, training data is easier to acquire, and hardware options for neural net and deep learning experimentation continue to expand. It is very likely that we will see even more generic NM...