Imandra does interesting work on using SMT-based verification for testing and auditing complex models. Grant Passmore, co-founder, got his PhD at the University of Edinburgh and gave a talk today.
One of the key points I took from his talk is that most applications can be neatly stacked into layers, which by itself is not surprising, but that the lower layers tend to involve high-frequency IO, while the higher levels of abstraction involve low-frequency, usually human-driven interaction. A verification tool is perhaps best placed between these two extremes.
I like their punchline: reasoning as a service.
PS In financial trading there is a specification of when trading happens, but this specification is often given as a 100+ page document. The instructions for executing against that specification look very logical, e.g. do X subject to Y, provided there is Z left in resources.
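A minimal sketch of what such an instruction might look like once made explicit (this is a hypothetical encoding, not Imandra's actual API; the state fields and function names are my own illustration):

```python
# Hypothetical sketch: a trading instruction of the form
# "do X subject to Y, provided there is Z left in resources",
# written as an explicit guarded rule over a small state.
from dataclasses import dataclass


@dataclass
class State:
    budget: float        # the "Z": resources remaining
    market_open: bool    # the "Y": a side condition


def execute_order(state: State, cost: float) -> tuple[State, bool]:
    """Do X (place the order) subject to Y (market open),
    provided there is Z (enough budget) left in resources."""
    if state.market_open and state.budget >= cost:
        # the "X": spend the cost and report success
        return State(state.budget - cost, state.market_open), True
    # guard failed: state is unchanged
    return state, False
```

Because the rule is a pure function of the state, properties such as "the budget never goes negative" become mechanically checkable, which is presumably what makes these 100+ page specifications attractive targets for formal tools.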
PPS A previous lab member, Lewis Hammond, interned at Imandra a couple of years ago.
Every paper began with an idea that was developed, refined, and tested to yield the final result. If, on reading the paper, the idea is not obvious, do the refinements and tests really matter?
I tend to find Stephen Wolfram's blog posts self-serving and territorial, meaning no disrespect, but the union he suggests is precisely the kind of logic-learning, neuro-symbolic bridge that the StarAI community has been advocating for decades. Given his resources, I expect he might have a successful product soon.
I was in a session with philosophers, linguists, lawyers, and social scientists. The diversity of perspectives should make everyone wake up to the benefits of cross-disciplinary research.
The question on everyone's mind is this: for any socio-technical entity, which party is held accountable? Which party should bear the burden of understanding the assumptions and error profiles of devices? And who should distill these into something that management can process and act on?