A small gem from the telegram era
Over the gorgeously sunny August Bank Holiday weekend I had the pleasure of camping with old friends. We visited the truly spectacular gardens at Stourhead, one of the leading historic properties managed by the National Trust.
I shall not recount the history of the place, since it is readily available online. Whilst inside the old mansion, I saw a collection of telegrams from 1917.
What caught my eye was not the handwritten messages of war and loss, but the boilerplate text.
Here is what it says in small print:
If the Receiver of an Inland Telegram doubts its accuracy, he may have it repeated on payment of half the amount originally paid for its transmission, any fraction of 1d. [i.e. a penny] less than ½d. being reckoned as ½d.; and if it be found that there was any inaccuracy, the amount paid for repetition will be refunded. Special conditions are applicable to the repetition of Foreign Telegrams.
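The rounding rule in that small print is precise enough to compute. As a sketch (assuming prices are quoted in halfpence, the smallest unit the tariff mentions), the repetition fee is half the original price, with any fraction of a penny below a halfpenny reckoned as a full halfpenny:

```python
import math

def repetition_fee(original_halfpence: int) -> int:
    """Half the original price, with any fraction of 1d. less than
    a halfpenny reckoned (i.e. rounded up) as a halfpenny."""
    return math.ceil(original_halfpence / 2)

def fmt(halfpence: int) -> str:
    """Render a halfpence amount in pre-decimal notation, e.g. 4½d."""
    pence, half = divmod(halfpence, 2)
    return f"{pence}{'½' if half else ''}d."

# A 9d. telegram: half is exactly 4½d., no rounding needed.
print(fmt(repetition_fee(18)))  # → 4½d.
# A 9½d. telegram: half is 4¾d., reckoned as 5d.
print(fmt(repetition_fee(19)))  # → 5d.
```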
On what basis would one have ‘doubts of accuracy’? How would you make the cost-benefit calculation of whether to gamble on a retry? How common were errors, and where in the system were they introduced?
It’s the ‘best effort’ of its day, whereby the telegramco is guaranteed the revenue of at least one message. However, the cost of failure is pushed onto the end user. If there were never any errors, nobody would ever ask for a retry, so there would be no ‘retried and no error’ revenue uplift.
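The retry gamble above can be framed as a simple expected-value calculation. This is a hypothetical model, not anything the tariff spells out: since the fee is refunded when an error is found, retrying pays off when the chance of catching an error, weighted by the value of a corrected message, outweighs the chance of losing the fee on an accurate telegram.

```python
def retry_worthwhile(p_error: float, fee: float, value_of_fix: float) -> bool:
    """Hypothetical model: retry when the expected gain from catching
    an error (fee refunded, message corrected) exceeds the expected
    loss of the fee when the telegram turns out to be accurate."""
    expected_gain = p_error * value_of_fix
    expected_loss = (1 - p_error) * fee
    return expected_gain > expected_loss

# With a 4½d. fee (9 halfpence) and an assumed correction worth 60:
print(retry_worthwhile(0.05, 9, 60))  # low error rate: not worth it
print(retry_worthwhile(0.20, 9, 60))  # high error rate: worth it
```

On this model the break-even error probability is fee / (fee + value): the cheaper the repetition relative to what a garbled message costs you, the lower the error rate at which gambling on a retry makes sense.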
That means the telegramco revenue maximisation strategy would by necessity involve some level of transcription, transmission or relaying error. This creates possibly perverse incentives. Would you no longer care about quality? How did the error rate for foreign telegrams compare to national ones on the intertelegramnet?
The parallels to modern broadband are clear. Poor packet scheduling is today’s “doubt of accuracy”. Rather than variable information loss, we have variable timing. Email is fine, as its quantity and quality demands are low. It is the bulky or urgent information deliveries that suffer.
In doing so, we too have created perverse incentives. Broadband service providers deliver low-quality transport and depend on sheer quantity to compensate: forced network idleness, packet retransmission, forward error correction, and duplicate content storage and delivery.
These in turn all create a perceived need for “speed” as the quack doctor treatment for the “inaccuracy” illness. Then users pay through the nose for the failure of the transport provider to manage quality right in the first place!
A century has passed, and we’re still figuring out the business model for “repetition” to fix quality problems. The more times change, the more they stay the same in the virtual post and digital telegram business.
You can hardly tell 1917 from 2017, as we grapple with similar omnipresent quality problems and performance pricing. And to top it all, fault isolation in digital supply chains is still not solved in the real world, especially those crossing national and network borders!
About Martin Geddes
I am a computer scientist, telecoms expert, and consultant. I collaborate with leading practitioners in the communications industry to create game-changing new technologies and businesses.