Musings from a Tech-Noob: Protocols & Connections

Yeo Yong Kiat
Government Digital Services, Singapore
5 min read · Jan 27, 2023

(It’s been about five months since I joined a tech organisation, GovTech Singapore, and I must say that understanding communication and behaviour here has been an eye-opener. I’ve been observing many of the interactions between software engineers, DevOps specialists and even data analysts, and it recently dawned on me that most of them, unknowingly, speak in terms of paradigms.

Frontend developers, who code in ReactJS, speak of managing project structures in terms of composability and reusability. Delivery managers and product owners who have a background in computer science apply separation of concerns when it comes to product backlog estimations and structuring scrum teams. Even managers operate in paradigms of declarative programming, when they leave it to self-managing teams to figure out the best way to develop a certain set of features.

Quite a fascinating and coherent world this sector is, and one could learn a thing or two about communicating with developers and delivery teams just by picking up something about the technical world they operate in.

Which brings me to this “tech-noob” mini-series exploring some of the important concepts every tech-sector entrant should at least get their feet wet in. In this first article of what I hope to be many, I focus on a preliminary description of the ubiquitous TCP/IP protocol suite.)

What are Protocols?

One thing that is true of humans, and of computers (which are a human construct through and through), is that effective communication (by which we mean a consistent experience with an unambiguous interpretation) comes down to two things:

  • Establishing a common set of behaviours; and
  • Constructing a common language set to codify the behaviour.

This creates what is termed a protocol in computing parlance, which is not too far off from everyday human protocols: negotiating a business deal, instructing our children on completing a certain task, or even purchasing something from a merchant.

Obviously there will be groups of related protocols — for example, if I’m a teacher, there are certainly protocols for engaging my students and their parents within the same space of interaction. These related protocols can all be categorised into what we call a protocol suite.

TCP/IP Protocol Suite

If you’ve never heard of TCP/IP, it stands for the Transmission Control Protocol/Internet Protocol suite, which has three foundational protocols:

  • Transmission Control Protocol (TCP)
  • Internet Protocol (IP)
  • User Datagram Protocol (UDP)

Broadly speaking, this protocol suite sets out the rules and language for communication on the Internet and other similar computer networks. And it has stayed true to the spirit of an open standard: its specifications and many of its implementations are publicly available at little or no cost, and it forms the basis of our Internet today.

If you break down the objectives of TCP/IP, the overarching goal was a highly efficient set of rules for moving data across the different branches of interconnected computer networks.

But principles and objectives alone are rarely enough to narrow down the execution options! As with most open development efforts, there were many ways to construct the TCP/IP protocol suite, and many competing frameworks in those days.

What’s important for us to learn is that critical development trajectories very often go down the trail of a few design or paradigm options that happen to gain momentum. In tech development work, it is common to have entire systems arise from the spirit of the day, which may explain why developers also like to work in extensive open-source communities.

Historical Development

You see, for much of the Internet’s development, whenever we tried to conceptualise a way to formalise communications, we were all caught up in the model of a telephone network, which has the following conceptual peculiarities:

  • You have users who connect to one another, in what is termed a call — of course, for telephones, we deal with physical circuits of connecting wires
  • Service-wise, the call duration and the identity of the connection endpoints were used to bill the users
  • Tech-wise, the connection gave users bandwidth to transmit information between themselves via the call — data coming in from one connection endpoint would predictably emerge from the other endpoint in sequence, with some latency

And this describes much of what a TCP/IP connection is about! Every connection starts with a client “dialing” a server; at the receiving end, a server must be listening for these calls, ready to pick up the connection when one comes in.
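The dial-and-listen analogy maps almost directly onto the Berkeley socket API. Here is a minimal sketch using Python’s standard `socket` module, with an arbitrary local address chosen for the demo; the server listens in a background thread while the client “dials” in:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # arbitrary local address for this demo

ready = threading.Event()

def server():
    # The server side "listens for calls" on a known address.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                 # start waiting for incoming "calls"
        ready.set()                  # tell the client it can dial in now
        conn, _addr = srv.accept()   # blocks until a call comes in
        with conn:
            data = conn.recv(1024)   # bytes arrive in order, as sent
            conn.sendall(b"echo: " + data)

t = threading.Thread(target=server)
t.start()
ready.wait()

# The client side "dials" the server to establish the connection.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

t.join()
print(reply.decode())  # echo: hello
```

Note how the roles are asymmetric, just as in the telephone picture: `listen`/`accept` on one side, `connect` on the other, and only then can data flow in both directions.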

The key difference, however, is that TCP/IP runs over a connectionless network. Data is transmitted as packets, each a small chunk of the overall message. Every packet is sent towards the destination server along whatever path is currently best, and different packets may take different paths (unlike telephone circuits, which are linked by dedicated physical wires).

Suppose a link breaks mid-conversation and a packet is lost: the other packets automatically follow different paths. Instead of losing the entire conversation, you suffer only a brief aberration in data flow, while the rest of the conversation is re-routed and continues.
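If packets can take different paths, they can also arrive out of order, so each one must carry enough information for the receiver to reassemble the stream. Here is a toy illustration of that idea (it is not the real TCP algorithm, just the core trick of sequence numbering):

```python
import random

def to_packets(message: bytes, size: int = 4):
    # Chunk the message and tag each chunk with a sequence number,
    # much as TCP tags segments so the receiver can put them in order.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive in any order (each took its own route);
    # sorting by sequence number restores the original byte stream.
    return b"".join(chunk for _, chunk in sorted(packets))

pkts = to_packets(b"hello, connectionless world")
random.shuffle(pkts)  # simulate packets arriving via different paths
print(reassemble(pkts))  # b'hello, connectionless world'
```

Sequence numbers also let the receiver notice a gap, which is how a lost packet can be detected and retransmitted without restarting the whole conversation.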

The End-to-End Paradigm

I personally find the architectural principles underpinning the TCP/IP design fascinating, and a window into the mind of network developers. You see, when designing large systems such as operating systems or, in this case, a protocol suite, we often ask ourselves at which layer of the architecture we should build certain features.

  • Should we have a “dumb network”, and “smart systems” connected to a dumb network?
  • Or should we code in decision making within the network, and build in features there?

There is a certain end-to-end paradigm in network development that goes like this:

Features can only be completely and correctly implemented with the help of applications built at the end points of a communication system.

Let’s ponder for a while what this paradigm entails:

  • This doesn’t mean that providing features within the network itself is impossible. It just means that any such feature risks being incomplete.
  • It is only at the end points that the needs of a communication system can be properly determined, so that a feature can be coded correctly: think of encryption, acknowledgement of message receipt, or error control.
  • This also means that important features will never be perfected by implementing them at overly low layers of a solution architecture. Low-level features should not aim for perfection.

Development along this paradigm tends to support designs with a “dumb network”, prioritising “smart systems” that are connected to the network to enable features to be developed completely and correctly.
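To make the paradigm concrete, here is a small sketch of end-to-end error control (all names are hypothetical, invented for this illustration): the sending endpoint attaches a digest, a stand-in “dumb network” merely forwards (and may corrupt) the bytes, and only the receiving endpoint can verify that the transfer succeeded:

```python
import hashlib

def send_with_digest(payload: bytes):
    # The sending *endpoint* attaches an end-to-end integrity check;
    # the network in between knows nothing about it.
    return payload, hashlib.sha256(payload).hexdigest()

def unreliable_network(payload: bytes, digest: str, corrupt: bool = False):
    # Stand-in for the "dumb network": it may flip bits in transit,
    # and it never inspects or repairs the data it carries.
    if corrupt:
        payload = b"X" + payload[1:]
    return payload, digest

def verify_at_receiver(payload: bytes, digest: str) -> bool:
    # Only the receiving endpoint can confirm the transfer end to end,
    # and ask for a retransmission if the check fails.
    return hashlib.sha256(payload).hexdigest() == digest

ok = verify_at_receiver(*unreliable_network(*send_with_digest(b"important data")))
bad = verify_at_receiver(
    *unreliable_network(*send_with_digest(b"important data"), corrupt=True))
print(ok, bad)  # True False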

So the next time you hear of packets, protocols, end-to-end design and the like, hopefully this gives you some context to understand the conversation.

Yeo Yong Kiat
Government Digital Services, Singapore

Teacher | Data Analyst | Policy Maker: currently exploring the tech sector