Connecting the dots

Pavel Vlasov · Published in Nasdanika · Jul 5, 2024

Modern enterprise systems and architectures are complex. I've never seen a single system capable of capturing all entities and relationships while also supporting the temporal aspect of things. Perhaps, even if such a system were possible, it would be useless because it would be too complex to comprehend and manage.

This is why some people say that it is impossible to have a comprehensive model of an enterprise — there will always be "you need to know who to talk to". That might have been OK in the past, when a good level of tribal knowledge was enough to glue together disparate information systems. Nowadays it poses a problem — GenAI is not yet capable of asking around, setting up meetings, escalating, and squeezing information out of people, so without that tribal knowledge any GenAI insights are bound to be incomplete. Connecting the dots in the enterprise is therefore more important today than it was a few years ago.

This story explains an approach to managing complexity which I have been using for more than a decade. It is based on Eclipse EMF and Domain-Driven Design.

The goal of this story is to show that there is "water" (some problems which seem unsolvable are actually solvable) and a direction to that "water". Whether your "thirst" for holistic insights into what you have is strong enough to walk to that "water" and "drink" is up to you!

The approach does not use a single system — raw data is loaded from multiple sources (files, URLs, source and binary repositories, …) and then cross-referenced/federated.

Recently I published the Executable (computational) graphs & diagrams story. This story is complementary to it, with some overlap. The approach explained here includes building executable graphs and loading data from diagrams. However, you may build executable graphs and diagrams without using the other techniques explained in this story, and you don't have to use graphs and diagrams to "connect the dots". As a matter of fact, they are a relatively recent addition to the Nasdanika arsenal of solution building blocks.

Table of Contents

· Layers
· Raw Data
· URI Handlers
URL
Classpath
GitLab
Maven
· Metamodels & Resource Factories
Drawio
Excel
Java
Coverage
PDF
Architecture
GitLab
Function Flow
Maven
· Model
· Graph
· Graph Processors
· Interfaces
· Applications
Application modernization
Cross-system analysis
Architecture as code
Process documentation
· Conclusion

Layers

Click to open an interactive diagram

The above diagram shows complexity management building blocks organized into layers. As mentioned above, some layers are optional.

Raw Data

Raw data is the bottom layer. It doesn't contain any building blocks, just source information in a variety of formats stored in a variety of sources.

URI Handlers

URI handlers are responsible for converting a URI (not URL) to an InputStream for loading a model from the resource identified by the URI, and to an OutputStream for saving the model to that resource.
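To illustrate the contract, below is a minimal sketch of a custom URI handler based on the standard EMF URIHandler API (org.eclipse.emf.ecore.resource.URIHandler). The "example" scheme and the resolution logic are hypothetical; this is not one of the Nasdanika handlers.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.impl.URIHandlerImpl;

// A hypothetical handler for URIs with the "example" scheme
public class ExampleURIHandler extends URIHandlerImpl {

    @Override
    public boolean canHandle(URI uri) {
        return "example".equals(uri.scheme());
    }

    @Override
    public InputStream createInputStream(URI uri, Map<?, ?> options) throws IOException {
        // Resolve the URI to an InputStream, e.g. over a REST API or a ClassLoader.
        // createOutputStream(URI, Map) would be overridden in the same way for saving.
        throw new IOException("Resolution of " + uri + " is out of scope of this sketch");
    }
}
```

Such a handler can be registered with a resource set's URI converter, e.g. resourceSet.getURIConverter().getURIHandlers().add(0, new ExampleURIHandler()).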

URL

Eclipse EMF provides URI handlers to load/save models from/to files and URLs, e.g. HTTP URLs.

Classpath

Nasdanika provides a URI handler for loading from ClassLoader resources. This makes it possible to distribute models in Maven JARs and reuse them by adding dependencies to pom.xml.

GitLab

The GitLab URI handler is built on top of the GitLab4J™ API. It allows loading resources from GitLab without cloning a repository. In the future it will also support committing modifications made to the resources to a new branch.

You can find examples of using the GitLab URI handler here.
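As an illustration of what such a handler builds on, here is a minimal sketch of fetching a single file over the GitLab API with GitLab4J, without cloning the repository. The host, token, project path, file path, and branch are placeholders, and method names may vary slightly between GitLab4J versions.

```java
import java.util.Base64;

import org.gitlab4j.api.GitLabApi;
import org.gitlab4j.api.GitLabApiException;
import org.gitlab4j.api.models.RepositoryFile;

public class GitLabFileSketch {

    public static void main(String[] args) throws GitLabApiException {
        // Connect with a personal access token (placeholder values)
        GitLabApi gitLabApi = new GitLabApi("https://gitlab.example.com", "<access-token>");

        // Fetch a single file from a branch without cloning the repository
        RepositoryFile file = gitLabApi.getRepositoryFileApi()
                .getFile("my-group/my-project", "model/architecture.drawio", "main");

        // The file content is returned Base64-encoded
        byte[] content = Base64.getDecoder().decode(file.getContent());
        System.out.println("Loaded " + content.length + " bytes");
    }
}
```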

Maven

The Maven URI handler is a planned building block. It will load resources from Maven repositories using Maven coordinates.

Metamodels & Resource Factories

Metamodels define the structure of models, similar to how Java classes define the structure of Java objects and database schemas define the structure of data in tables.
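For example, here is a minimal sketch of defining a metamodel class programmatically with the Ecore API (in practice metamodels are usually defined in .ecore files and Java code is generated from them). The "Person" class and "name" attribute are made-up examples.

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class MetamodelSketch {

    public static EPackage createFamilyPackage() {
        EcoreFactory factory = EcoreFactory.eINSTANCE;

        // A "Person" class with a "name" attribute
        EClass person = factory.createEClass();
        person.setName("Person");

        EAttribute name = factory.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        person.getEStructuralFeatures().add(name);

        // The package holding the class
        EPackage familyPackage = factory.createEPackage();
        familyPackage.setName("family");
        familyPackage.setNsURI("urn:example:family");
        familyPackage.setNsPrefix("family");
        familyPackage.getEClassifiers().add(person);
        return familyPackage;
    }
}
```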

Resource factories load (create) resources from URIs. EMF Ecore provides XMIResourceFactoryImpl and BinaryResourceImpl to load/save models from/to XMI and binary formats respectively.
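A minimal sketch of registering a resource factory and loading a model with the standard EMF API; the file extension and path are placeholders.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class LoadModelSketch {

    public static Resource load(String path) {
        ResourceSet resourceSet = new ResourceSetImpl();
        // Use the XMI resource factory for *.xmi resources
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("xmi", new XMIResourceFactoryImpl());
        // The resource set picks the factory based on the URI and loads the resource
        return resourceSet.getResource(URI.createFileURI(path), true);
    }
}
```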

The following sections provide an overview of the metamodels and resource factories provided by Nasdanika. The metamodels may be used AS-IS or as a base for creating custom metamodels. E.g. you may have a metamodel with a Project class extending the GitLabProject class and adding org-specific details such as linking to the org structure.

Drawio

Nasdanika Drawio modules provide Java APIs and an Ecore model to work with Drawio diagrams — read, traverse, create, and write them.

You can work with Drawio diagram elements directly or map them to semantic models representing your problem domain as explained in the Beyond diagrams book.

A few examples:

Excel

The Excel metamodel represents the structure of MS Excel and CSV documents. There are several factories for loading MS Excel and CSV files into the metamodel or mapping them to semantic models.

The Family Excel demo provides an example of a resource factory mapping a workbook to the family metamodel.

Java

The Java metamodel represents high-level (referenceable) constructs of the Java language — classes, methods, … Currently it doesn't have elements for statements, expressions, etc.

There is a resource factory which loads .java files. There is also support for loading test coverage information from jacoco.exec files.

I plan to add support for loading Java models from bytecode using ASM in the future.

Coverage

The Coverage metamodel represents JaCoCo results. Models are loaded from jacoco.exec files and bytecode (class files). This metamodel is complementary to the Java metamodel and is used in the JUnit Test Generation practice.

PDF

The PDF metamodel represents the structure of PDF documents as loaded by Apache PDFBox. There is a resource factory to load .pdf files. I used this metamodel and factory in my RAG experiments.
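For context, below is a minimal sketch of what reading a PDF with Apache PDFBox 2.x looks like, i.e. the library the PDF loading builds on; it is not the Nasdanika resource factory itself, and the file is a placeholder.

```java
import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PdfSketch {

    public static String extractText(File file) throws IOException {
        // Load the document and extract its text content (PDFBox 2.x API)
        try (PDDocument document = PDDocument.load(file)) {
            System.out.println("Pages: " + document.getNumberOfPages());
            return new PDFTextStripper().getText(document);
        }
    }
}
```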

Architecture

The Architecture metamodel can be used to describe architectures and generate documentation. I have used it on a number of occasions, both AS-IS and extended, as will be explained below.

GitLab

The GitLab metamodel represents some GitLab and Git objects. There is a loader to load the model over the GitLab REST API.

I used this metamodel to load information about thousands of repositories for cross-referencing with information from other sources.

Function Flow

The Function Flow metamodel is a specialization of the Architecture metamodel for creating executable flow diagrams. It is a young product with initial functionality; there is still work to be done to create an initial set of implementations (types of elements).

Maven

Currently the Maven metamodel has just two classes, so it is more of a "placeholder". I plan to extend it using the Maven Model, provide integrations with the Java metamodel, and implement a URI handler and a ClassLoader which load resources/classes from Maven repositories.

Model

The diagram above shows a model loaded from multiple sources using multiple metamodels and URI handlers. The Java class is shown with gradient colors to indicate that it is a sub-class of both Java compilation unit and Text Repository File. Such a combination can be achieved using multiple inheritance (is-a): e.g. JavaTextRepositoryFile would extend both CompilationUnit and TextRepositoryFile — it is both a compilation unit and a repository file. Or it may be achieved by using "facets" (has-a): a CompilationUnit may have a TextRepositoryFile facet or vice versa. Multiple inheritance is the preferred way, but it may result in conflicts if some features have the same name or different semantics. In that case the facet approach should be used.
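A sketch of the two options in plain Java terms, using hypothetical interfaces named after the classes mentioned above:

```java
// Stub declarations standing in for the metamodel-generated interfaces
interface CompilationUnit {}
interface TextRepositoryFile {}

// Option 1 - multiple inheritance (is-a): a single type that is both things
interface JavaTextRepositoryFile extends CompilationUnit, TextRepositoryFile {}

// Option 2 - "facet" (has-a): a compilation unit exposing its repository file aspect
interface CompilationUnitWithFacets extends CompilationUnit {
    TextRepositoryFile getTextRepositoryFileFacet();
}
```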

Graph

Sometimes it might be more convenient to operate on a graph created from the model than on the model itself. Graph elements may contain additional information needed for the task at hand, or the logic may be easier to express in terms of operations on a graph than on the underlying model.

The org.nasdanika.graph.emf package contains classes and interfaces for building graphs on top of Ecore models.

Graph Processors

One more step in managing complexity is the creation of graph element processors, as explained in Executable (computational) graphs & diagrams, the Compute Graph demo, and the online documentation. You may create different sets of processors for different tasks at hand, e.g. processors to generate code, perform simulations, or search, …

For example, the metamodel documentation referenced above was generated by creating graphs on top of the metamodels and then creating graph processors which generate documentation.

Interfaces

You may build HTML user interfaces on top of the lower layers using whatever technology you like. I usually generate an HTML Application model and then generate HTML from it. This approach allows me to think about interactions with humans in terms similar to programming interfaces — a collection of actions (methods/operations).

Command line interfaces can be built with Nasdanika CLI (also see Beyond PicoCLI story) or any other means of your choice. If you decide to use Nasdanika CLI, there are mix-ins and base command classes for working with models, generating documentation, …
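For illustration, here is a minimal plain PicoCLI command sketch (not the Nasdanika CLI base classes or mix-ins); the command name and option are made-up examples.

```java
import java.util.concurrent.Callable;

import picocli.CommandLine;
import picocli.CommandLine.Command;
import picocli.CommandLine.Option;

// A hypothetical command generating documentation from a model
@Command(name = "generate-doc", description = "Generates documentation from a model")
public class GenerateDocCommand implements Callable<Integer> {

    @Option(names = {"-m", "--model"}, description = "Model URI", required = true)
    String modelUri;

    @Override
    public Integer call() {
        // Load the model from modelUri and generate documentation here
        System.out.println("Generating documentation for " + modelUri);
        return 0;
    }

    public static void main(String[] args) {
        System.exit(new CommandLine(new GenerateDocCommand()).execute(args));
    }
}
```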

And finally, APIs. You may create a Java API and publish it as a Maven module. Should you need a REST API, you may create a CLI command using classes from the HTTP module.

Applications

The previous sections provided an overview of how a complex problem involving data in multiple formats stored in a variety of sources can be solved by using metamodels, models, graphs, graph processors and different sorts of interfaces.

The following sections explain real-life applications of the above.

Application modernization

I used the modeling approach to application modernization twice — the first time more than a decade ago, and the second time recently.

In both cases there was a large codebase — Java and other technologies. Thousands of classes; in the first case more than a thousand packages.

In the second case I used the Java model; in the first, its predecessor loaded from bytecode using ASM.

Once the model was loaded it was used to create an HTML UI with generated class diagrams, sequence diagrams and other visualizations. It was also used for semi-automated refactoring of a large monolithic codebase into modules.

I also used Drawio API to generate diagrams from a legacy diagramming format. The diagrams were used to generate documentation. Function Flow was used to demonstrate that such diagrams can be used as runtime components as well.

Cross-system analysis

In this effort I built an organization-specific metamodel on top of the GitLab metamodel. Then I loaded information about thousands of projects from GitLab and cross-referenced it with the organization hierarchy, information from other systems exported as JSON and Excel, and a manually maintained Excel tracker with a few thousand rows. The loaded model was used to generate an HTML documentation site providing insights into the alignment of GitLab assets to the org hierarchy. The site featured visualizations generated using ECharts-Java. The model was also used to generate an Excel report.

In the future I plan to extend the metamodel, load information from pom.xml files and resolve dependency relationships between projects and, transitively, between organizational units.

Architecture as code

Architecture as code is a big topic which deserves its own story, a book chapter, and maybe eventually its own book. This section is a very brief overview of what Architecture As Code is, how I used it, and how it may be used.

In a nutshell, Architecture As Code is an approach to architecture where architecture descriptions (models) are treated as software assets — they are stored in a version control system, releases (versions) are published as Maven modules to binary repositories, documentation is generated, and architecture artifacts may also be used as runtime artifacts (executable graphs) or to generate code, …

With this approach there is no single repository of architecture information — the model is loaded from multiple sources (federated), in pretty much the same way a Java application is assembled from multiple (Maven) modules.

I used Architecture as code on a number of occasions:

  • Converted a Visio architecture diagram to Drawio, mapped diagram elements to the Architecture metamodel, generated documentation, and published it to intranet. This allowed me to have focused discussions with different stakeholders and capture their input at a diagram element level, republish architecture documentation and share it with all stakeholders.
  • Documented POC’s — infrastructure, components — with detailed descriptions of each element.

In my opinion, Architecture As Code can be of great value for a technology organization because it allows SMEs to own architecture descriptions of the assets they are responsible for and keep them together with those assets. For example, an architecture description of a microservice can be stored in a sub-folder of the same source repository as the microservice. It can be updated together with the code or even before it (design documentation), not as a post factum activity. As mentioned above, architecture descriptions may be published to binary repositories and federated.

Architecture As Code can be adopted gradually and evolve along multiple dimensions as shown on the above diagram. For example, a team may create initial documentation using the C4 Model. I’m planning to add C4 support to the architecture model in July. The C4 Model focuses just on visualizations, but with the architecture as code approach a team may also have documentation associated with architecture elements.

Initially the architecture description may stop at, say, the Maven project level and not go down to classes.

If the team decides to expand along the technological depth dimension, they may add code elements to the description. It may be done automatically by analyzing source/byte code. The team may load test coverage, vulnerability, and other information from different systems and associate it with architecture elements.

Similarly, the team may decide to load information about runtime components. E.g. load information from cloud/Kubernetes into the model.

Expanding along the technological breadth dimension means creating sub-classes of base architecture classes. Say, representing a Maven component with a specialized metamodel class rather than a generic "Component".

And finally, the organizational dimension(s) represent adoption of Architecture As Code by org units. Say, one team may produce very detailed architecture descriptions, another just high-level ones. These descriptions may reference each other. A department may produce aggregated architecture documentation which would reference/contain architecture descriptions from multiple teams.

Federated architecture documentation may be used for analysis, search, RAG.

In my opinion the value of Architecture As Code should grow exponentially in the organization due to the network effect.

Process documentation

During my entire career I haven't seen comprehensive documentation of IT processes. I've seen hundreds if not thousands of diagrams, typically useless without somebody explaining what a diagram means. I've seen long documentation pages and documentation sites, very often outdated. In a nutshell, corporate IT is typically a "foggy box" — people know their stuff, and in order to build a holistic picture one needs to know who to talk to. It's like traveling before the invention of maps some eight millennia ago, or navigating uncharted territory — talk to the locals and maybe they'll tell you how to get where you need to go.

In my opinion, providing a “public good” solution for documenting processes and federating such documentation can be of great value. It may be used together with Architecture As Code and be a part of it. E.g. an architecture description of a system or component may contain or reference processes associated with the system/component — release, deployment, …

In the past I built a prototype diagram editor with Eclipse Sirius and used it to document the end-to-end development process. The process had more than a hundred activities and dozens of roles and tools. The organization where I worked at the time did not have an "Eclipse ecosystem", i.e. I could not proceed beyond the prototype stage because there was no "pipeline" for Eclipse-based solutions.

So I switched to a Drawio/Maven-based approach to documenting processes. I documented a few POC processes by mapping a diagram to the HTML Application model as shown in this old demo. This approach is simple, but it does not establish bi-directional links between, say, an activity and the tools used in that activity, or transitive links like "all roles using a specific tool, all tools used by a specific role".

Mapping to the Flow metamodel would provide deeper insights, once the metamodel is ready — I need to refactor it to extend the Architecture metamodel.

Below is a list of potential uses of a process model:

  • Documentation
  • Search on top, including RAG
  • Improvement opportunities collected at the activity level
  • Automated generation of issues in issue trackers — instances of activities. Issues may have links to activity documentation.
  • Execution stats may be pulled from issue trackers if issues have information about process activities they are instances of. Stats may be visually reported on process diagrams using representation filtering.
  • Simulation — it would help to prioritize improvements

You may say “Wait a minute, there are tons of workflow tools, including open source tools, with simulation etc.” Yep! However, I haven’t seen them widely used to document IT processes. Below is a list of possible reasons:

  • Not a public good — a tool needs to be installed, maintained. The approach explained here operates on top of technology infrastructure which is typically already available in software development organizations — a source repository, a binary repository proxying Maven Central, ability to edit Drawio diagrams — VS Code plug-in (this is what I use at work), Confluence plug-in, in-browser editor (can be hosted on the intranet), or desktop editor (this is what I use at home).
  • Light on activity documentation. Tools I’ve seen would provide a small text box with plain text or rich text at best to document an activity. It is not enough most of the time. With the approach explained here you can have rich markdown documentation, see example.
  • Requires knowledge of a workflow notation. Workflow notations are designed for automated execution, so such a notation constrains process documenters in many ways — they need to have pre-existing knowledge of how to document processes, they need to know notation elements which are of no use to them because they are documenting human processes, not designing processes to be executed by an engine, and the notation may not provide enough expressive power. In contrast, with the approach explained here a diagram does not have to comply with any notation at the beginning — SMEs may draw it any way they like and then it can be mapped to some metamodel. Over time the organization may develop its own notation relevant to how things are done in the organization. That notation may include colors. E.g. in UML and BPMN colors have no semantics (to the best of my knowledge) — they are styling for humans. At the same time colors have a lot of meaning for humans. So, there is a gap. With the mapping approach shape styling can be used to select the element type, as demonstrated in SyncProcessorFactory.createReferenceProcessor — the processor type is determined by the shape color.

One interesting potential feature of process documentation is process inheritance — similar to inheritance in Java and the way container (Docker) images are built. E.g. there might be a high-level process maintained by a central function with "abstract" or "default" activities. Org units/teams would extend that process, implement abstract activities, and customize "default" activities as needed. There might be "final" activities not to be customized — like final methods/fields/classes in Java. This way a process for a particular organization would be assembled from multiple pieces owned by different teams, as sketched below.
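A hypothetical sketch of this idea expressed in Java terms; the process and activity names are made up.

```java
// A centrally maintained base process
public abstract class BaseReleaseProcess {

    // "Abstract" activity: every team must define how it is performed
    public abstract void securityReview();

    // "Default" activity: teams may customize it
    public void codeReview() {
        // organization-wide default
    }

    // "Final" activity: not to be customized, like a final method in Java
    public final void changeApproval() {
        // centrally owned step
    }
}

// A team-specific specialization of the base process
class PaymentsTeamReleaseProcess extends BaseReleaseProcess {

    @Override
    public void securityReview() {
        // team-specific implementation of the abstract activity
    }
}
```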

Conclusion

Some of you may think that what I explained in this story is a utopia and that it would never get adopted in your organization. And you might be quite right — for any change, however good, easy, and well thought out, there might be people who lose if the change is implemented — Luddites being a classic example. See TOGAF Stakeholder Management for additional information.

However, what is described in this story is a “derisked utopia”:

  • The underlying technology works — it has been proven in a number of applications and demos
  • It does not require tooling/infrastructure expenditure/approvals
  • It can be adopted gradually — no huge up-front investments

If you are a hands-on person, you may use the approach explained in this story to increase your personal or your team's effectiveness and efficiency. If you are a leader who is fed up with being fed PowerPoints or Visios as "architecture documentation" — know that there is a different approach!

You may start adoption of things like Architecture As Code by using it in POCs, research, and prototypes. However, the biggest bang for the buck, in my opinion, would be in applying it to product lines.
