JSON object which isn’t available out of the box.
The script that adds that object is nothing more than plain JS (IBM BPM’s embedded JS engine, Rhino, supports up to ES5), which means you can create your own internal libraries of functions, increasing the DRY-ness of your application.
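As an illustration of such an internal library, a managed server JS file in a toolkit could hold small ES5-safe helpers like these (the `strUtils` name and both functions are invented for this sketch, not taken from any real toolkit):

```javascript
// Hypothetical managed server JS file (ES5 only: no let/const, no arrow functions,
// since Rhino in IBM BPM supports up to ES5).
// Functions defined here become reusable across services that include the toolkit.
var strUtils = {
  // Pad a number to two digits, e.g. for building timestamps.
  pad2: function (n) {
    return (n < 10 ? '0' : '') + n;
  },
  // Simple ES5-safe template substitution: fmt('{0}-{1}', 'a', 'b') -> 'a-b'
  fmt: function (template) {
    var args = Array.prototype.slice.call(arguments, 1);
    return template.replace(/\{(\d+)\}/g, function (match, index) {
      var value = args[Number(index)];
      return value !== undefined ? String(value) : match;
    });
  }
};
```

Centralising helpers like this is exactly the DRY win: every service script in every app that includes the toolkit calls the same tested code.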
But maybe you already knew this and you’re a big proponent of writing good JS in your BPM apps. However, do you know under what conditions your bit of JS lives and dies?
- When is your server side JS run?
- How often does it re-initialise?
(For this post, I’ll be using the term ‘JS context’ to refer to a single JS execution context. For example, if you were caching values, this context is where your values live. Separate contexts do not share values.)
I had done some experimenting on my own and had some basic answers, but after reading this comment on dW Answers I had to conduct a full and thorough investigation!
Our setup has the following parts:
- A toolkit with the managed server JS file (“A Server JS File” toolkit)
- Two identical process apps (“App 1” and “App 2”)
The server JS file
A simple log wrapper. It stores its creation timestamp and a random number (since multiple contexts can be created in the same millisecond, the timestamp alone isn’t unique).
When the exported function is called, it prints out the thread name, the random number, timestamp, and the optional label.
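The post doesn’t reproduce the file itself, so here is a minimal sketch of what such a wrapper might look like. The `jssnap` name comes from the calls shown later; everything inside the IIFE, including the `typeof java` guard that lets the sketch run outside Rhino, is my assumption:

```javascript
// Hypothetical sketch of the 'jssnap' managed server JS file (ES5, Rhino-compatible).
// The file is evaluated once per JS context, so 'created' and 'marker'
// together identify the context executing a given call.
var jssnap = (function () {
  var created = new Date().getTime();               // context creation timestamp
  var marker = Math.floor(Math.random() * 1000000); // disambiguates contexts created in the same ms

  function threadName() {
    // On the server, Rhino exposes Java objects via LiveConnect; the guard
    // lets this sketch also run in a plain JS engine for testing.
    return (typeof java !== 'undefined')
      ? String(java.lang.Thread.currentThread().getName())
      : 'unknown-thread';
  }

  return {
    log: function (label) {
      var line = threadName() + ' | ' + marker + ' | ' + created +
                 (label ? ' | ' + label : '');
      if (typeof console !== 'undefined') { console.log(line); }
      return line;
    }
  };
})();
```

Because `created` and `marker` are fixed when the file is evaluated, every call from the same context prints the same pair, which is what makes the log lines correlatable.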
The Process(es) and Service
The identical processes in each of the apps (note: the new ‘Process’ type, not a BPD, though I doubt either behaves differently) are simple:
- first step is a system service — its implementation just has a script to make a call to the JS method
`jssnap.log('service call from process X')`, where ‘X’ is whichever process app it was in.
- second step is a process-level script which makes a similar call
`jssnap.log('process script call in process X')`
Both steps are configured as a multi-instance loop of 5, to test whether concurrent contexts are created and used when demand is high enough.
The service in step 1 will also be executed separately to test standalone behaviour.
I’m running on 8.6.0, in a SingleCluster configuration for both my Process Center and my (online) Process Server.
On the Process Center:
- Run Process in App 1. Run Process in App 2.
- Run standalone service in App 1. Also for App 2.
- Trigger a new snapshot (make a non-functional change: update documentation, or similar) and save.
- Repeat steps 1 and 2.
On the Process Server:
- Install V1
- Run Process in App 1. Run Process in App 2.
- Run standalone service in App 1, and in App 2.
- Install V2 (no functional changes, as above); leave existing instances as they are, do not migrate them
- Repeat steps 2 and 3 for V2
- Repeat steps 2 and 3 for V1
The upcoming Part 2 will contain the results of the Process Server portion of the test. The Process Center results are below.
Behold, the results (part 1, at least)!
The original logs are also available here. The logs below have been trimmed and emoji-fied for easier correlation.
Let’s break this down:
- When a Process is run, two contexts are created, for threads named ‘WebContainer’ and ‘WorkManager’ (lazily instantiated)
- Running a Process (or service) from a new snapshot will create new contexts
- ‘WebContainer’ and ‘WorkManager’ are just thread names; contexts are seemingly portable between threads (e.g. follow the 🍊)
- Contexts exist for each snapshot concurrently (follow the markers below between the initial and re-runs for both apps) and are not shared
- If the demand requires it, multiple contexts named ‘WorkManager’ can be created. Below we can see two always being created, but the second is never actually used
- Standalone service calls are always executed on a thread named ‘WebContainer’.
- (Bonus: this isn’t reflected in the logs below, just trust me :) ) After a new snapshot, running a standalone service will create a WebContainer context without creating a WorkManager context.
The takeaway (a.k.a. what do I do with this info?)
- If you are using any caching mechanisms, or are writing potentially memory-leaky code, the short lifetimes of the JS contexts mean you’re very unlikely to run into any issues on a Center. On a Server, however, you may want to make sure your JS isn’t leaky.
- Unless you have a particularly massive library of server JS in your apps/toolkits, I wouldn’t sweat any ‘JS context creation’ performance hits. It would seem to be a much bigger deal for a Process Center than Servers, in any case, due to the limited number of new snapshots on Servers.
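To make the caching point concrete, here is a hedged sketch of what a context-scoped cache in a managed server JS file looks like. All names here are mine; the point is that because the file is re-evaluated in every new JS context (e.g. per snapshot), the `store` object below is discarded along with its context, which is what keeps leaks short-lived on a Center:

```javascript
// Hypothetical ES5 cache inside a managed server JS file.
// 'store' lives only as long as the JS context that evaluated this file;
// a new snapshot means a new context, and therefore an empty cache.
var lookupCache = (function () {
  var store = {};
  return {
    // Return the cached value for 'key', computing it on first use in this context.
    get: function (key, computeFn) {
      if (!store.hasOwnProperty(key)) {
        store[key] = computeFn(key);
      }
      return store[key];
    },
    // Number of cached entries (useful when checking for unbounded growth).
    size: function () {
      var n = 0;
      for (var k in store) { if (store.hasOwnProperty(k)) { n++; } }
      return n;
    }
  };
})();
```

On a long-lived Process Server context, `store` can only grow, so a pattern like this is exactly where you’d want to verify your keys are bounded.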
Bottom line: if your JS is well written, I wouldn’t worry about performance. And if you have so much in-house JS that loading it genuinely is a concern, you likely have bigger problems than the performance hit from loading it.
When are these contexts destroyed?
Stay tuned for this in part 2.
What is the lifetime of a context in a snapshot with a long uptime?
I’ll cover this in Part 2, as it’s much more relevant to Process Servers.
What are the performance implications?
For me to fully answer this question, I’ll also need to run the Process Server tests.
From a RAM usage perspective, I’ll have to do some digging around and report back.
Also, stay tuned for a micro-benchmarks post!
Also, perhaps someone can enlighten me as to which Rhino optimisation level IBM BPM uses?