In the 215th Tool Box Newsletter, issue 12-11-215, Jost Zetzsche reported on the EU Directorate-General for Translation’s (DGT) choice of its translation environment tool. According to the DGT official who answered Jost’s questions, “DGT intends to use the product as provided, thanks to the good APIs provided by SDL.” As a matter of fact, the availability of APIs is relatively recent news for SDL, meant to counter the emergence of new competitors, and it is hard to believe that “the reason for not awarding the contract in the previous restricted procedure was that no tender had been found to fulfill the minimum quality criteria”.
In fact, even though the public notice and the scope statement for the procurement were both intricate as usual, a legacy of dirigisme was noticeable, with a subtle aim for continuity. Not surprisingly, according to the same DGT official, “in the negotiated procedure that followed, several tenders improved their products with the result that they fulfilled the compulsory requirements.” What was the rationale for a negotiated procedure? Finally, the same DGT official acknowledged that “integration with DGT’s information systems will be necessary, as well as automation of some processes.” Again, why then opt for just another commercial software product rather than requiring compliance with standards and selecting an open platform? Integration with the existing information systems could well result in a ‘stranglehold’.
The DGT official above is hardly credible when saying that “the Commission has been using for many years commercial products without any problem.” As the problems with Systran confirm, a much too close relationship with one technology provider could ultimately prove harmful. Customizing a commercial software product usually means developing many pieces of ad hoc software, mostly elaborate Word macros in the case of Trados.
Again not surprisingly, DGT released two embarrassing booklets, on clear writing (in all EU languages) and on writing for translation, only to be blatantly blamed later, from the inside, for misusing English terminology in EU publications.
Trados’s success in the 1990s was mostly due to its endorsement as the DGT translator’s workbench. Most LSPs working with DGT shifted much of the new technological burden onto freelancers, forcing them to use Trados and thus helping it spread.
If not the translation software market, is DGT altering the information market, at least from a linguistic point of view?
Like much other commercial translation software today, the ‘new’ DGT purchase can be connected with remote linguistic data: large collections of translation memories, corpora, and term bases, including non-EU ones.
Some contend that this linguistic data, including the EU’s, is to be considered an asset. Will DGT use it anyway?
The same commercial translation software usually provides machine translation ‘suggestions’ when no matches are found in the translation memories.
At the tenth biennial conference of the Association for Machine Translation in the Americas (AMTA) in San Diego, California, Don De Palma reported that Common Sense Advisory’s data shows that 7.3% of freelancers use machine translation (MT) on every project, noting that productivity has stagnated at 2,684 words/day in spite of technological innovation.
Will DGT translators be allowed — and willing — to use the MT-suggestion feature?
As Franco Urzì reported for The Big Wave, the EU corpus is characterized by a high degree of mutually referencing texts. Most often this results in a validation of linguistic choices made upstream and based on specific ‘house rules’, even unstated, and in a sort of ‘controlled language’, although without the strict rules of a typical controlled language. These features, together with the immediate incorporation in promptly available multilingual corpora, make the EU texts MT-fit, especially for SMT.
Huge, and steadily increasing, volumes of digitized information are flooding all channels every day. At least some of it could be translated, and yet many organizations simply don’t have enough money, or translators, or even the time needed. According to Common Sense Advisory, only 0.00000067% of the data created is translated.
This is the rationale for the (early) adoption of machine translation: possibly doubling the number of words translated with the same budget and continuing content expansion.
A reflection should be made on how DGT is altering the translation industry from an economic point of view as well, with respect to standards (as the European Commission did by adopting commercial office software), to education, and to compensation.
Beyond being another sign of dirigisme, the EMT program is short-sighted and hypocritical, mostly targeting DGT’s interests (and possibly meeting only the ambitions of some vainglorious academics). According to many DGT senior translators, the bouquet of competences for professional translators, experts in multilingual and multimedia communication, profiled in the EMT program can hardly be found among DGT’s translation vendors and would possibly be even harder to find among new graduates of EMT universities. In any case, it could prove wasted on the new trends in the translation industry.
At the 53rd ATA conference in San Diego, California, just preceding the AMTA conference, during an informal lunch, Corinne McKay asked three questions of the software companies exhibiting at the conference. All the answers were enlightening, especially three of them:
- most users don’t take advantage of any training materials available;
- people want something that’s cheap, feature-rich and requires no training to learn;
- translation agencies put too much of the technological burden on freelancers.
Recently, Catherine Christaki reported on the joint CIoL-ITI rates and salaries survey for translators and interpreters. It makes very interesting reading, especially for the average age of respondents (45.8, with 53% falling in the 40-59 range) and the average rate reported by respondents (€0.04 per word).
However, when calculated by dividing the average annual gross income (€27,600) by the average annual productivity (215,500 words), the average rate comes out at more than three times as much (€0.13 per word).
Catherine Christaki noticed that the average output per year for translators sounds way too low. Using Common Sense Advisory’s reported average productivity of 2,684 words a day, a professional translator working 22 days a month (with no sick days), 11 months a year, should produce (242 × 2,684 =) 649,528 words a year, for a gross income of €25,981.
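The arithmetic behind these figures can be checked with a short calculation; the numbers are those quoted above, and the 22-day month and 11-month year are the working assumptions used in the estimate:

```python
# Figures from the CIoL-ITI survey and Common Sense Advisory, as quoted above.
gross_income = 27_600        # average annual gross income, EUR
reported_output = 215_500    # average annual productivity per the survey, words
implied_rate = gross_income / reported_output
print(f"Implied rate: {implied_rate:.2f} EUR/word")  # ~0.13, vs. the reported 0.04

# Plausible output at Common Sense Advisory's 2,684 words/day
days_per_year = 22 * 11                      # 242 working days, no sick days
words_per_year = days_per_year * 2_684
print(words_per_year)                        # 649528
print(round(words_per_year * 0.04))          # gross income at 0.04 EUR/word: 25981
```

The gap between the implied €0.13 rate and the reported €0.04 rate is the inconsistency at issue here.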
This data would suggest that respondents were most probably lying, following the old adage “I’m not reporting what I actually charge/produce, but what I’d like to charge/produce (in my dreams).”
On the other hand, it is clear that, although demand for translation services is on the rise, the average price has been falling: over 30% since 2010 and over 40% since 2008.
According again to Common Sense Advisory, a deconstruction of translation rates into their component parts is supposed to have affected pricing.
In reality, translation demand grows much faster than the industry’s capacity to meet it and to produce (good) translators.
Translation demand is increasing so fast as to make current prices unaffordable, especially given content durability and general turnaround requirements.
Also, an old, unresolved, and harmful problem of the translation industry, mostly attributable to industry players as well as academics, lies in the arrogance reserved for customers and end users. The main effect is the poor regard for translation: translations are often subject to criticism, usually from outsiders who smugly claim they could have done, and could do, better.
It is no coincidence that industry events are only rarely attended by customers, and industry publications generally do not reach laymen.
Yet it is well known that discontented users complain openly while content users do not, and industry organizations should address unhappy users and potential detractors with targeted awareness and dissemination campaigns.
In this respect, it would be interesting, though unfortunately impossible, to know how many customers and end users would buy and read Found in Translation, the book by Nataly Kelly and Jost Zetzsche.
Since a typical translation buyer can’t assess a translation effort, offerings are usually compared on price. And since happy users usually do not manifest any appreciation for a translation, taking it for granted, the typical translation buyer can’t see why a translation should be that expensive. This is why it’s increasingly hard to prevent a decrease in translation prices.
The seemingly inconceivable expensiveness of translations leads buyers to require that every single piece of content consist of repeatable units to be efficiently reused, even in translation. Cost effectiveness, not content, is still king.
All the above is the result of the typical information asymmetry of the translation industry, and yet it is really hard to convince people in this industry that it is applicable to translation, although this should be apparent.
People in the translation industry should easily see how information asymmetry occurs in their industry: sellers usually know more about their products than buyers, with the consequent imbalance of power and the risk for transactions to go awry.
On the other hand, even though George Akerlof’s paper introducing information asymmetry (through the used-car market) is one of the most-cited papers in modern economic theory (Akerlof was awarded the Nobel Prize for his information asymmetry theory), it was not welcomed initially. Critics argued that a lemons market did not actually exist in used vehicles, since consumers can themselves seek ways to assure the quality of a car and a used-car salesperson may work to maintain his reputation rather than pass off a ‘lemon’, and that, if his theory were correct, no goods could be traded at all.
Replacing ‘car’ with ‘translation’ should be enough to see the validity of Akerlof’s assumption for the translation market in 2012 as well.
The translation industry is a ‘lemons market’ for the following reasons:
- information asymmetry (buyers cannot accurately assess the value of the product/service through examination before the sale is made, while sellers can more accurately assess the value of the same product/service prior to sale, and this holds for every transaction pair);
- an incentive exists for the seller to pass off a low-quality product as a higher-quality one; decreasing profit margins are leading sellers to resort to cheaper resources, in an endless downward spiral;
- sellers have no credible disclosure technology; most innovations in the translation industry have always come from the outside and often from outsiders/underdogs;
- buyers are sufficiently pessimistic about the seller’s quality; it’s no news that translations are generally marked as bad, just because good ones are ‘invisible’;
- lack of effective public quality assurances; how can the highest quality be achieved at the lowest cost? The answer is “highest quality is lowest cost”, but translation quality standards are irrelevant to buyers, as they serve mere self-contentment.
Even the remedies to information asymmetry are commonly applied in the translation industry. Major translation buyers with huge expenditure capacity usually resort to Michael Spence’s solution, signaling. Nevertheless, since the main topic in this case is money, the adverse selection model ensuing from information asymmetry is pushed down onto SLVs and ultimately onto freelancers. MLVs, in turn, resort to Joseph E. Stiglitz’s solution, screening, with SLVs.
In the not too distant future, more and more translations will be done with more and more sophisticated (and possibly remotely accessible) translation tools, to maximize cost effectiveness. Professional human translation (HT) is and will remain expensive, while further improvements in MT will make HT look no less faulty than MT, at least for certain text types. Not everything can be machine translated, and this will probably remain true, but “information wants to be free”, and from now on it can possibly be translated almost instantly.
In his book Information Wants to Be Shared, the Australian economist Joshua Gans reminds us that the origin of this saying was a remark by Stewart Brand to Steve Wozniak at the Hackers Conference in 1984. Brand said: “On the one hand, information wants to be expensive, because it is so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”
Gans also writes that “in its entirety, [Brand’s comment] is a statement regarding what price information might sell for and it is agnostic regarding whether that price might be high or low. [...] The cost of distributing information is falling and is now arguably costless. [...] If distribution costs are zero (or nearly so), then a price of zero will not stand in distribution’s way.” The misunderstanding is right in the logic that “since it isn’t costing anything for you to release information to me, I might expect to be able to get it without having to pay.” And if information/content producers are facing users arguing that information must be free, why should they not require the same from their vendors?
This is the foundation of commodities, produced and distributed at relatively low marginal cost, and yet great demand is not reflected in the final price. Both supply of and demand for translation have increased, although differently. Quoting Gans again: “Economics 101 tells us that, under these circumstances, one cannot say whether its price should rise or fall.”
In less than a decade, music publishers have abandoned digital rights management (DRM) entirely. DRM persists only in the traditional publishing industry, which is struggling to move to digital publishing. Just as the music industry worked out how to price music once the costs of distribution had fallen to zero, traditional translation purveyors should be able to profit from technology rather than struggle to curb it in a doomed effort to protect their business model.
“From the consumer’s perspective, there is a huge difference between cheap and free,” Chris Anderson wrote in Free. “Give a product away, and it can go viral. Charge a single cent for it and you’re in an entirely different business… The truth is that zero is one market and any other price is another.”
The magic of the word free creates instant demand among consumers, so free represents an enormous business opportunity: money should be made around the thing being given away. Anderson himself cautions that this philosophy of embracing the free involves moving from a scarcity mind-set to an abundance mind-set.
Not surprisingly, Anderson’s book (in both print and e-book formats) and the abridged audiobook are behind a paywall, whereas the unabridged audiobook is given away free of charge. The rationale for charging for the abridged audiobook lies in the service offered: cutting three hours from its length without losing key concepts.
If a deconstruction of translation rates into their component parts has really affected pricing, a freemium model should be devised for translation services: provide the basic product free of charge, and charge for extra/advanced features, services, or products.
Distribution still requires an expensive infrastructure, accounting for most of the final cost of a product. Production costs are only a small part of that, so disintermediation could be a way to reverse the downward trend of translation prices.