Karim: On the TML side, Juan has continued his work on the built-ins for arithmetic transformation, or Bit Universe Transformation, and he has also taken over the work of Murisi, who has unfortunately left the team. Lucca has finished his implementation of a persistent and canonical union-find data structure for the 2-CNF extraction he has been working on since he started. He also implemented a canonical shared data structure for 2-CNF and continued his research on Agda.

Tomas has made some big accomplishments this month with his delivery of an updated TML IDE (Integrated Development Environment). You can find it, and we encourage you to play with it, at TML.IDNI.org. There you'll find the latest version of TML and sample programs that you can run right from your browser. He updated it to the latest version, fixed many compiler warnings and a few bugs with the command-line options, and continued his work on persistence.

On the TML parsing side, Umar started working on the Earley parsing techniques that Ohad recommended and thinks they are going to be very beneficial for TML. He worked on techniques to extract the parse tree from the Earley parser and fixed some bugs with null-symbol processing. He has also started working on a method to visualize the parse trees, which should be very helpful.

Andrei, on the Agoras Live side, has been working on refactoring the backend with an eye towards DevOps issues. He separated the database server from the backend (application) server to make it easier to scale the application when we go live. He also finished writing the Swagger documentation for the whole backend, which will make it easier to switch frontends, or to implement several frontends for the application, including mobile frontends. On my side, I've been working with Fola on fine-tuning our financial model and pitch deck, and I've been interviewing several candidates. We have several job postings out on all the recruiting websites; we're looking for several candidates and have interviewed quite a few this month.

Prof Benzmüller: I have a couple of bullet points I've been working on this month. One was a study of the long-standing proposal by Ohad on how we could handle the consistency predicate. The modeling I looked at, which gives an alternative view to the existing proposals but in the end also shares some commonalities with them, was to provide a consistency predicate at an outer layer, the higher layer of a logic, and then make it usable at the object language, but without it being a primitive notion at the object level itself. This can get us out of trouble, similar to the other approaches, which do something not so different: namely, they also provide a layer of logic that comes without a consistency predicate and then add it at a more abstract layer. There are commonalities here, which we then also discussed in the academic panel, and I think we have reached some interesting conceptual agreement on this kind of separation: where the consistency predicate is provided and where it is not, and where it can be used without being made available as a primitive notion. On the other hand, I was again working on an encoding of this alternative proposal, which is actually very close to Ohad's approach, the Rudeanu approach, in Isabelle/HOL.
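To make the layering idea concrete, here is a small toy sketch in Python (entirely illustrative, not the Isabelle/HOL encoding or the actual Tau logic): the object language is plain propositional logic with no consistency connective, and consistency is decided one layer up, as ordinary satisfiability over the object-level formulas.

```python
from itertools import product

# Toy object language: formulas are nested tuples such as
# ("atom", "p"), ("not", f), ("and", f, g), ("or", f, g).
# There is deliberately no "consistent" connective at this level.

def atoms(formula):
    """Collect the atom names occurring in a formula."""
    kind = formula[0]
    if kind == "atom":
        return {formula[1]}
    if kind == "not":
        return atoms(formula[1])
    return atoms(formula[1]) | atoms(formula[2])  # "and" / "or"

def evaluate(formula, assignment):
    """Evaluate a formula under a truth assignment to its atoms."""
    kind = formula[0]
    if kind == "atom":
        return assignment[formula[1]]
    if kind == "not":
        return not evaluate(formula[1], assignment)
    if kind == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)

def consistent(formulas):
    """Meta-level (outer-layer) consistency predicate: the set of
    object-level formulas is consistent iff jointly satisfiable."""
    names = sorted(set().union(*(atoms(f) for f in formulas)))
    return any(
        all(evaluate(f, dict(zip(names, values))) for f in formulas)
        for values in product([False, True], repeat=len(names))
    )
```

The point of the sketch is only the separation: `consistent` lives outside the object language and talks about it, rather than being a connective inside it.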

Enrico presented an alternative depiction of this approach, and I was able to encode it relatively quickly and also provide short documentation. I wrote up the first part in a document, which was then uploaded to Slack, and did some experiments there. I'm still working on that, because there's an interesting question of whether the hypothesis restricting variable assignments to the base logic layer is possible under the well-foundedness conditions, and I'm still working on that proof. I'll continue to look at whether I can do that formally, just to get assurance here. I mentioned the academic discussions we participated in, but I also compiled, as we agreed last week, an overview of related papers on contract languages, which I uploaded. It provides an overview of the state of the art in contract languages and related projects, including projects like Ethereum, and it is available in Slack. On the MSO side, I studied where to start in terms of choices about the basic concepts: which signature to choose, and whether I should support a primitive notion of functions, because that has implications for how rich my encoding language has to be. I haven't made a clear decision yet, but I need to come to one before I can move further into implementation.
It also depends a little bit on essential choices in the project, because you want to have that aligned. So, before you say "I want to provide a notion of MSO and implement it," it should be clear whether we, for instance, want to natively support primitive function symbols or not.

Dr. Marcos Cramer: I will report a bit on some of the things we have attained this month in our academic panel discussions, focusing on some points that were not discussed in detail by Prof Benzmüller. One thing that has become clearer this month is that the different approaches developed by different members of the academic panel, including Ohad himself, really complement each other. So, rather than having to choose one of these approaches, I think we now understand better that we have to combine them to get the best of each.

So, let me just sketch briefly what I mean by that. For example, Ohad had for a long time proposed a method based on Boolean algebras to avoid certain problems that we have to deal with when developing the logic of the Tau system, in order to enable it to talk about its own consistency and about the laws of changing the laws, that is, how users can change the system from within. Recently, Professor Enrico Franconi formalized a somewhat simplified fragment of Ohad's ideas, and in the discussion about that formalization it became clear that this approach is really necessary to ensure that the semantic definition of the quantifiers in that logic is well-founded, so that we really have clear semantics here. That was something that my previous approach, for example, was overlooking. At the same time, there are still things that my previous approach handles which Ohad's approach does not deal with properly. So, I think the take-home message is that we really need to combine these things. This is something that has come out of our recent work, and that I think we will be working on in the next few meetings.

Tomas: Regarding the IDE, I've added more settings to the settings tab. These enable some new features, including state blocks and the bit universe. As Karim already said, I've deployed the current version of TML, plus more regression tests and examples, to TML.IDNI.org, so anyone can try these features in their browser right now. Regarding TML itself, I fixed its behavior when there is an unknown command-line option and removed some compiler warnings. I've added a JS build target which produces an ES6 module that works well on the web. And I've continued to work on persistence.

Juan: This month, I kept working on our built-ins and on the integration of the Bit Universe Transform. In particular, I have been addressing some issues that we had in one of our arithmetic built-ins, namely addition over BDDs. I've completed the design and implementation of that feature, with new comprehensive tests that are already committed to the repository. In that sense, we are extending our arithmetic capabilities in a way that is compatible with the Bit Universe Transform, and this is, in several ways, making TML's handlers feature, which we also call built-ins, stronger.
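To give a flavor of the bit-level arithmetic involved, here is a plain-Python sketch of the relation a BDD-based adder under a bit-universe transform encodes symbolically (the real TML built-in operates on BDD nodes, not Python lists; function names are illustrative):

```python
def add_bits(a, b):
    """Ripple-carry addition of two little-endian bit lists.

    Sketch only: each output bit is a Boolean function of the input
    bits, which is what makes the operation expressible over BDDs.
    """
    width = max(len(a), len(b)) + 1          # one extra bit for carry-out
    a = a + [0] * (width - len(a))
    b = b + [0] * (width - len(b))
    out, carry = [], 0
    for x, y in zip(a, b):
        out.append(x ^ y ^ carry)            # sum bit
        carry = (x & y) | (carry & (x ^ y))  # carry bit
    return out

def to_bits(n, width):
    """Little-endian bit list of n, padded to width."""
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(bit << i for i, bit in enumerate(bits))
```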

So, this work will continue with integrating the Bit Universe Transform with the new BDD less-or-equal operator. We also have a BDD-based multiplier in there that we are looking forward to picking up and making compatible with the binary universe transformation that Umar put together a couple of months ago. Finally, I have also been reviewing some of Murisi's efforts, where he had identified some performance improvements to be made in our core libraries. I've been following up on that work to make sure that we are not losing anything there, so I've also been paying attention to some of the core BDD operators that we already have in our engine.
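As with addition, the less-or-equal comparison is a Boolean function of the input bits. A toy sketch of that relation (again, not the actual BDD operator, which works on BDD nodes):

```python
def leq_bits(a, b):
    """Unsigned less-or-equal on equal-length little-endian bit lists."""
    result = 1  # two empty prefixes are equal, hence <=
    for x, y in zip(a, b):
        # The prefix of a is <= the prefix of b iff the current bit of a
        # is strictly smaller, or the bits are equal and the relation
        # already held on the shorter prefix.
        result = ((1 - x) & y) | ((1 - (x ^ y)) & result)
    return bool(result)

def to_bits(n, width):
    """Little-endian bit list of n, padded to width."""
    return [(n >> i) & 1 for i in range(width)]
```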

Lucca: This month I continued to work on the 2-CNF universe, that is, the 2-CNF extraction feature. It is also supposed to somehow store the 2-CNF that we have extracted from the BDD, and to do that in an economical way, we need new data structures. A 2-CNF can be seen as composed of an equality part, a constant part (variables that are just true or false), and implications. Each of these is very useful to have right at hand, so we are storing them in these three different ways, and each way of storing needs a good storage method. What I've done this month is create a nice storage for the equality part, which consists of a persistent canonical union-find data structure. For the constant part, the true and false variables, these are simply sets, so we want a shared canonical set data structure. These two are in place; the one for the implications is for next month. On my type theory journey, I've been diving deeper into Agda and really getting used to writing Agda code. This has been a fun journey, and it will also continue next month.
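To illustrate the idea of a persistent union-find (every union yields a new version while old versions stay valid, and a canonical choice of representative makes equal partitions compare equal), here is a deliberately naive copy-on-write sketch in Python; the actual TML structure is shared and far more economical than copying the whole parent map:

```python
class PersistentUF:
    """Naive persistent union-find via copy-on-write (illustrative only)."""

    def __init__(self, parent=None):
        self.parent = dict(parent or {})

    def find(self, x):
        # Walk to the canonical representative. No path compression,
        # so existing versions are never mutated.
        while self.parent.get(x, x) != x:
            x = self.parent[x]
        return x

    def union(self, x, y):
        """Return a NEW version in which x and y are merged."""
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return self
        # Canonical choice: the smaller representative wins, so the
        # same partition is always stored the same way.
        if ry < rx:
            rx, ry = ry, rx
        new = dict(self.parent)
        new[ry] = rx
        return PersistentUF(new)
```

For example, after `uf2 = PersistentUF().union("a", "b").union("b", "c")`, all three elements share the representative `"a"`, while the original empty version still keeps them apart.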

Andrei: Last month I finally finished a very important part of our Agoras Live development: writing documentation for the Agoras Live backend. I also did a full refactoring of the backend code. It is now clean and well-structured, and that readable code will allow us to get more developers working on Agoras Live. The Swagger documentation opens up something very important: not only can we now test Agoras Live the right way, but with a well-documented backend we can also easily switch the frontend from what we have now to some other frontend, and we can develop a mobile application and connect it to our backend. That is not in the very distant future, because the Agoras Live backend was built as a stateless system. At some point, this will even allow us to move to a fully decentralized system, where the Agoras Live backend and frontend run on the user's side without any centralized servers. Last month I also completely separated the frontend, backend, and database onto different servers, because we are heading for production and this architecture will allow us to easily scale the system according to user load. It took a lot of time, of course, to do all these things, but it is finally done. Next month I'm looking to hand the documentation to a tester to test the Agoras Live platform; then we will do beta testing, a closed public release, and then production.
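The scaling argument above can be sketched in a few lines (hostnames and the `Deployment` type are hypothetical illustrations, not the real Agoras Live setup): once each tier lives on its own host and the backend holds no session state, backend replicas can be added freely, all pointing at the same database.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    """Illustrative three-tier split: each tier on its own host."""
    frontend_host: str
    backend_host: str
    database_host: str

    def scale_backend(self, replicas):
        # Stateless backends can be replicated without coordination:
        # every replica uses the same database and holds no sessions.
        return [f"{self.backend_host}-{i}" for i in range(replicas)]

prod = Deployment("front.example.org", "api.example.org", "db.example.org")
```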

Professor Umar: In the earlier weeks of this month, I was working on visualizing the proofs that we get from TML. I also found a few bugs with the null grammar production when it gets translated into the proof item. These proofs can be visualized as a dot file; this has been checked into the master branch and can be used by anyone who wants to look at a visual representation of the proofs. The second thing I've been working on is understanding the Earley parsing techniques, and the challenge has been how to convert the recognizer into a parser.

So, the literature underreports the work on this conversion, although there are some blogs, papers, and other material that mention how to do it. However, while going through them, I discovered some points that were missing, and we need to take care of them while building the parse forest from the Earley items. I've started implementing the first version of the approach I have in mind. The coding is ongoing, and it's based on the recognizer that Ohad shared. Hopefully this will be completed soon, and I'll be testing it with some basic examples and some complex examples from the literature.
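For reference, here is a minimal Earley recognizer in Python (a textbook sketch, not Ohad's recognizer or the TML implementation). The recognizer-to-parser conversion discussed above amounts to additionally recording, for each completed item, which items produced it; the parse forest is then read off those back-pointers.

```python
def earley_recognize(grammar, start, tokens):
    """Minimal Earley recognizer.

    grammar maps each nonterminal to a list of right-hand sides
    (tuples of symbols); symbols absent from the grammar are
    terminals. Items are (lhs, rhs, dot, origin).
    """
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        added = True
        while added:  # iterate predict/complete to a fixed point
            added = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs) and rhs[dot] in grammar:
                    # Predict: expand the nonterminal after the dot.
                    for prod in grammar[rhs[dot]]:
                        new = (rhs[dot], prod, 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); added = True
                elif dot == len(rhs):
                    # Complete: advance items waiting on this nonterminal.
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            new = (l2, r2, d2 + 1, o2)
                            if new not in chart[i]:
                                chart[i].add(new); added = True
            if i < len(tokens):
                # Scan: move the dot over a matching terminal.
                for lhs, rhs, dot, origin in chart[i]:
                    if dot < len(rhs) and rhs[dot] == tokens[i]:
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[len(tokens)])
```

The fixed-point loop at each position also handles null (epsilon) productions correctly, the case where the bugs mentioned above tend to hide.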

Ohad: I've been involved, to a large part, in a lot of what has been said. I also dealt a lot with how to turn our logics into logics that specify programs, in the specific sense that works for us, so that we can have consensus-oriented programming and so on. We need to study the existing approaches in the literature, but I also sketched a new approach based on our assumption that we can have logical formulas as input and output. Building on that, I could add a form of comprehension to our languages, comprehension as in the comprehension axiom from set theory. This might be yet another approach to consider for how to specify Tau in Tau.
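For readers who want the set-theoretic reference point: the comprehension axiom mentioned here is the familiar schema asserting, for every formula $\varphi$, the existence of the collection of exactly the objects satisfying $\varphi$ (shown below in its restricted, separation form):

```latex
% Comprehension (separation) schema: for every formula phi(x),
\forall z \, \exists y \, \forall x \,
  \bigl( x \in y \;\leftrightarrow\; x \in z \wedge \varphi(x) \bigr)
```

On one reading of the proposal above, $\varphi$ plays the role of a specification and $y$ collects the objects satisfying it, though how exactly this is carried over to Tau's languages is left open in the remarks here.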
