How the Results of Disruption Change the Discussion Between Aficionados of Specific Languages and Environments

Jupyter data-story tool (Python)
Grafoscopio data-story tool (Pharo)

The previous articles give only a broad outline of the results of disruption, along with a single example. If those results are accepted as broadly the case, however, they change the ‘argument’, or discussion, as I prefer to think of it, particularly the discussion between aficionados of environments that, while niche, are nevertheless general-purpose, stable and reliable.

Firstly, those environments are as good as they are not despite being niche, but due to being niche. Making them mainstream would likely only cause them to experience the disruptive tendencies that have prevented more mainstream environments from becoming as good.

Secondly, the niches are very different, and as a result even similar libraries and frameworks built on each are very different in their way of working. A good example is Grafoscopio in Pharo compared with Jupyter in Python. While both are very good, they stress different things, and which is preferable in a given case depends on the specifics of that case.

At the same time, as languages, they are not all that dissimilar, though certain key differences account for the different niches. Learning either makes the other relatively easy to learn as a result. In addition, the ability to call Python code directly using Pharo Smalltalk syntax and vice versa implies no need for them to compete. Both qualify as reliable core technologies that can be built on, technologies that complement rather than compete with one another.
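The mechanics of such cross-language calling can be sketched in miniature. The following Python sketch is a hedged illustration, not the actual bridge protocol: it assumes a simple JSON envelope naming a receiver, a selector (the Smalltalk term for a message name) and arguments, which is the general shape such bridges take. All names here are mine.

```python
import json

# Hypothetical registry of Python objects exposed to the other runtime.
# The envelope fields below are illustrative, not the real bridge protocol.
EXPORTS = {"math": __import__("math")}

def handle_request(line: str) -> str:
    """Decode a one-line JSON envelope, perform the call, encode the reply."""
    req = json.loads(line)
    receiver = EXPORTS[req["receiver"]]          # look up the exposed object
    method = getattr(receiver, req["selector"])  # resolve the selector
    result = method(*req.get("args", []))
    return json.dumps({"id": req["id"], "result": result})

# A Pharo-side expression like `math sqrt: 16` could be serialized as:
request = json.dumps({"id": 1, "receiver": "math", "selector": "sqrt", "args": [16]})
reply = json.loads(handle_request(request))
print(reply["result"])  # → 4.0
```

The point of the sketch is that once calls are reduced to a small, language-neutral envelope, neither runtime needs to know anything about the other's internals, which is why the two environments complement rather than compete.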

The decisions, rather than debates, that thus become relevant include the following:

1. As engineers, whether to use mainstream tools or, as far as possible, to use reliable core technologies while interfacing with mainstream artifacts.

2. The above in turn somewhat depends on:

a. Whether individually one can afford not to use mainstream tools, given that many paid roles require them.

b. The degree to which the annoyances and difficulties of making working software with mainstream tools overwhelm the basic desire to make things that work.

c. The degree to which niche, but reliable core technologies can be utilized within mainstream roles to augment and enhance mainstream products.

These decisions are individual, and depend on specific circumstances and specific experiences.

That cooperation rather than competition in core technologies is advantageous can be seen in the semiconductor industry, where companies, rather than competing on core technologies, jointly fund a cooperative institute to advance those technologies gradually, while competing on the products built on them.

An outright refusal to use mainstream tools is not practical for most engineers, and not possible for most ‘coders’, for whom learning a very different paradigm would be more difficult. A reluctance to use them, expressed more strongly where circumstances allow, will nevertheless affect the way those environments are seen. Over time that reluctance, whether strongly expressed or left implicit, will affect management’s view, since almost by definition it is the best engineers who are the most reluctant: those who are most aggravated by things that don’t work well, and who have the most latitude to be overtly reluctant.

Outside North America things may be different, but within North America, this appears to be a reasonable assessment of the situation.

Ways in which more reliable core technologies can augment mainstream ones, without the hassles of JNI/JNA or of linking to other VMs such as V8 via C-library-style linkages, include MoM (message-oriented middleware), lighter protocols such as those used by Vert.x, and other integration technologies that already exist and have reached a decent level of reliability in mainstream environments. Specific MoM technologies include lightweight, fast service buses such as Synapse, rather than the heavyweight ESBs. Mainstream technologies such as JINI (Apache River) can extend the reach of niche technologies without huge development effort. Other ways of integrating, such as Tarantalk, also have their place. Finally, accessing technologies such as Hadoop through the PostgreSQL overlay to HDFS and Xtreams in Pharo provides a means of accomplishing what Storm or Spark accomplish, without any need to write further infrastructure, and with levels of concurrency, speed and reliability that Storm or Spark can’t match.
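To give a sense of how lightweight such wire protocols are compared with in-process linking, here is a minimal Python sketch of STOMP-style frame encoding and decoding. The frame layout follows the STOMP convention of command, header lines, blank line, body and NUL terminator; the helper names are my own, and a real client would of course use an existing library against a real broker.

```python
def encode_frame(command: str, headers: dict, body: str = "") -> bytes:
    """Build a STOMP-style frame: command, header lines, blank line, body, NUL."""
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return ("\n".join(lines) + "\n\n" + body).encode() + b"\x00"

def decode_frame(frame: bytes) -> tuple:
    """Split a frame back into (command, headers, body)."""
    head, _, body = frame.rstrip(b"\x00").decode().partition("\n\n")
    command, *header_lines = head.split("\n")
    headers = dict(line.split(":", 1) for line in header_lines)
    return command, headers, body

frame = encode_frame("SEND", {"destination": "/queue/quotes"}, "EURUSD 1.0832")
print(decode_frame(frame))
# → ('SEND', {'destination': '/queue/quotes'}, 'EURUSD 1.0832')
```

A protocol this simple can be spoken from almost any environment, niche or mainstream, which is precisely why message-level integration avoids the fragility of binary, in-process bridges.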

Much of this work has been or is already being done in Pharo and Python, as two examples I’m personally familiar with, and there’s no reason to think it won’t continue. Grafoscopio and Jupyter, in different ways, include connections to a wide variety of data sources. In Pharo’s case, combining these with GLORP and Voyage provides object-relational and object-document mapping (for NoSQL stores) to complement simple access. Stamp connects Pharo to MoM, while VertStix connects it to Vert.x more reliably than the other Vert.x clients, and RediStix similarly provides more reliable access to Redis. Other integrations include Atlas, mentioned above, which allows Pharo code to call Python code in Pharo syntax and vice versa; a shared-memory Pharo/C++ bridge; wrapping of COM objects for use in Pharo applications on Windows; and others at earlier stages of development.
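What an object-document mapper like Voyage buys you is easy to show in miniature. In the following Python sketch the class, helper names and dict-backed store are all mine, with the dict standing in for a real NoSQL collection; a real mapper adds identity, indexing and lazy references on top of exactly this save/load shape.

```python
class Quote:
    """An ordinary domain object with no persistence code of its own."""
    def __init__(self, symbol, price):
        self.symbol = symbol
        self.price = price

STORE = {}  # stand-in for a document database collection

def save(obj, key):
    """Flatten the object's attributes into a plain document."""
    STORE[key] = dict(obj.__dict__)

def load(cls, key):
    """Rebuild an object of the given class from its stored document."""
    obj = cls.__new__(cls)          # allocate without calling __init__
    obj.__dict__.update(STORE[key]) # restore the stored attributes
    return obj

save(Quote("EURUSD", 1.0832), "q1")
restored = load(Quote, "q1")
print(restored.symbol, restored.price)  # → EURUSD 1.0832
```

The appeal, in Pharo as in Python, is that domain objects stay oblivious to storage: the mapping layer, not the object, decides how documents are shaped.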

Integrations with other common tools, such as Iceberg, which provides the ability to use Git repositories while maintaining the Smalltalk way of using code repositories, are another huge help.

Providing interfaces via Pharo or Python to more industry-specific or software-type-specific niche tools, such as Lisp, Prolog and Erlang and the facilities written in them, would be another means of extending mainstream environments more reliably than a direct interface could. To that end I have integrated a version of Lisp, allowing Pharo to use ACT-R cognitive modeling, and a version of Prolog, allowing Pharo to access WordNet for semantic applications.
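Embedding such a language need not be heavyweight. As a hedged sketch in Python (not the Pharo integration described above, and far short of a real Lisp), here is a minimal s-expression parser and evaluator, enough to suggest how a Lisp dialect can live inside a host environment:

```python
import math, operator

# Minimal environment of built-in operations; purely illustrative.
ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul, "sqrt": math.sqrt}

def parse(src: str):
    """Turn '(+ 1 (* 2 3))' into the nested list ['+', 1.0, ['*', 2.0, 3.0]]."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        if tokens[pos] == "(":
            expr, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                expr.append(item)
            return expr, pos + 1
        tok = tokens[pos]
        try:
            return float(tok), pos + 1   # numbers become floats
        except ValueError:
            return tok, pos + 1          # everything else is a symbol
    return read(0)[0]

def evaluate(expr):
    """Evaluate a parsed s-expression against the built-in environment."""
    if isinstance(expr, list):
        fn = ENV[expr[0]]
        return fn(*[evaluate(arg) for arg in expr[1:]])
    return expr if isinstance(expr, float) else ENV[expr]

print(evaluate(parse("(sqrt (+ 9 7))")))  # → 4.0
```

A host-language facility like ACT-R or WordNet access then amounts to seeding that environment with the right primitives, rather than maintaining a fragile foreign-function boundary.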

All of this requires thinking creatively, and by doing so solving problems that, in mainstream environments alone, would require more work and more testing, assuming they could be solved at all. Those are the things that define engineers and distinguish them from mechanics, whose roles are otherwise virtually identical.

In that sense, the appropriate difference between a coder and a software engineer mirrors the difference between a mechanic and an engineer in other industries. Neither needs to be less appreciated for their abilities, but they do need to be differentiated, so that each does what they’re good at and most interested in.

Like what you read? Give Andrew Glynn a round of applause.
