Posts

Showing posts from April, 2017

Optimizing LSP Performance in the Artificial Intelligence Landscape

Artificial Intelligence and Machine Learning have been all over the news of late, and we see that all the internet giants are making huge investments in acquiring AI expertise and/or using "machine intelligence," the term now used to describe how these two areas come together in business applications. AI is said to be attracting more venture capital than any other single area at the moment, and people now regularly claim that AI-guided machines will dominate, or at least deeply influence and transform, much of our lives in future, perhaps dangerously so. This overview of AI and this detailed overview of Neuralink are quite entertaining and give one a sense of the velocity of learning, knowledge acquisition, and transmission that we currently face; in my opinion both are worth skimming through at least. However, machines learn from data and find ways to leverage patterns in data in innumerable ways. The value of this pattern le

LSP Perspective: MT Post-Editing Means a Drastic Reduction in Translation Cost

This is a short guest post by @translationguy, also known as Ken Clark. These initial preamble comments in italics are mine. Today, many LSPs and enterprises are working with MT, and there is enough evidence that MT works even when you don't really know what you are doing. Unfortunately, many agencies still try to do it themselves with Moses, and most of these DIY experiments either completely fail or produce systems that are not as good as the public systems produced by Microsoft and Google, which defeats the whole point of doing it. MT as a technology only provides business leverage if you have a superior MT system and have aligned your processes to take advantage of it. Ken differentiates between light and full post-editing in his view of post-editing, and I would like to add another dimension to this discussion. In my experience, full post-editing is done with smaller (in MT terms) projects, or when it is critical to get the translated information right. Thus, in a kn

The Problem with BLEU and Neural Machine Translation

There has been a great deal of public attention and publicity given to the subject of Neural Machine Translation in 2016. While experimentation with Neural Machine Translation (NMT) has been going on for the last several years, 2016 proved to be the year that NMT broke through, became a big deal, and became more widely understood to be of great merit outside of the academic and research community, where its promise had already been recognized for some years. The sometimes excessive exuberance around NMT is largely based on BLEU (not BLUE) score improvements on test systems, which are sometimes validated by human quality assessments. However, it has been understood by some that BLEU, which is still the most widely used measure of quality improvement, can be misleading when used to compare some kinds of MT systems. The basis for the NMT optimism is related both to the very slow progress in recent years with improving
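To make the discussion of BLEU concrete, here is a minimal sketch of sentence-level BLEU in Python: clipped (modified) n-gram precision up to 4-grams, combined by a geometric mean and scaled by the brevity penalty. This is an illustrative simplification, not a replacement for standard tooling; real evaluations use corpus-level BLEU with smoothing and standardized tokenization (e.g. the sacrebleu package), and the function and variable names here are my own.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU of a candidate string against one reference.

    A sketch: whitespace tokenization, clipped n-gram precision,
    geometric mean over n = 1..max_n, and the brevity penalty.
    """
    cand = candidate.split()
    ref = reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        # Without smoothing, any zero precision drives BLEU to zero --
        # one reason sentence-level BLEU is unreliable on short segments.
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

Because the score rewards surface n-gram overlap with a fixed reference, a fluent NMT output that paraphrases the reference can score lower than a stilted phrase-based output that happens to share more n-grams, which is exactly the comparison problem discussed above.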