Sand Dunes & Digital Mires cont.

Ceilidh Gray · Published in safenetwork · 5 min read · Mar 27, 2019

An Interview With a Community DApp Developer / part 2

Missed part 1? Read here.

Photo by Ryan Stone on Unsplash

Great, thank you for that overview. Who do you imagine using your app?

Everyone. Your grandma can keep her list of the roses she has in her garden, her will, etc. Journalists can keep their investigations into state corruption on it. Companies can keep their data on it. You can have your receipts, unpublished memoirs, and your code or inventions on it, etc.

I think we’re ready to get stuck into this now. You’ve talked about the users, but what other kinds of use are you envisioning or aiming to support?

SAFE.AppendOnlyDb is basically a simple event store. It is built on AppendableData, which I implemented on top of MutableData, following the design the native type seems likely to eventually take.
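To make the idea concrete, here is a minimal sketch of what an append-only event stream looks like from application code. The interface and method names are illustrative assumptions for this article, not the actual SAFE.AppendOnlyDb API.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical shape of an append-only event stream: entries can only be added,
// never changed or removed, which is what makes it usable as an event store.
public interface IEventStream
{
    // Append a serialized event and return the index it was stored at.
    Task<ulong> AppendAsync(byte[] eventPayload);

    // Read all events from a given index onwards (e.g. everything after a snapshot).
    Task<IReadOnlyList<byte[]>> ReadFromAsync(ulong startIndex);
}
```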

SAFE.DataStore is more of an experimental document database, with rudimentary indexing and searching capabilities. They will be continually improved to enable easier and more capable usage in applications. They could even become end-user products, like data management applications.
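As a rough illustration of what a document database with rudimentary indexing enables, the sketch below stores a small document and looks it up by a field. All of the types and method names here are hypothetical stand-ins, not the actual SAFE.DataStore API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Threading.Tasks;

// Hypothetical document database surface, for illustration only.
public interface IDocumentDb
{
    Task AddAsync<T>(T document);
    Task<IReadOnlyList<T>> FindAsync<T>(Expression<Func<T, bool>> predicate);
}

public class Note
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public static class DataStoreSketch
{
    public static async Task Demo(IDocumentDb db)
    {
        await db.AddAsync(new Note { Id = "1", Title = "Roses", Body = "Prune in early spring." });

        // With a rudimentary index on Title, a lookup like this can avoid scanning every document.
        IReadOnlyList<Note> hits = await db.FindAsync<Note>(n => n.Title == "Roses");
        Console.WriteLine(hits.Count);
    }
}
```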

I have ideas for various applications that could use them, but there’s still plenty to be done with SAFE.NetworkDrive to keep me occupied :)

That’s actually a large part of what motivated the development of the data storage solutions; I saw them as the first building blocks for making end-user applications.

I have also put an auth client into its own component, which I think can be used by any application, and rather easily. It encapsulates both browser-flow auth and direct auth with credentials. All of these (AppendOnlyDb / AuthClient / DataStore) can be found as (pre-release) NuGet packages, so they can be experimented with in applications today.
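As a sketch of the split between the two flows, an auth component encapsulating both might expose something along these lines. The interface below is an assumption made for illustration, not the actual AuthClient surface.

```csharp
using System.Threading.Tasks;

// Hypothetical auth component exposing both flows behind one abstraction.
public interface IAuthClient
{
    // Browser flow: hand the auth request off to the SAFE Authenticator/Browser
    // and wait for the granted response.
    Task<ISession> AuthenticateViaBrowserAsync(string appId, string appName, string appVendor);

    // Direct auth: log in with credentials, useful for headless or automated scenarios.
    Task<ISession> AuthenticateWithCredentialsAsync(string locator, string secret);
}

// Stand-in for whatever connected-session object the real component returns.
public interface ISession { }
```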

How about files with a long history of changes? Is there any kind of checkpoint every so often, so that reconstructing a long history is not needed to get the latest version (I think you call this ‘snapshotting’)?

That’s right, snapshotting. In the implementation of AppendableData that I have, each AD is an endless sequence of values, split into segments of 1,000 entries (where the first entry is metadata and the rest are data). Each such segment is based on an MD, which has its own distinct location in the network. It makes sense to base the snapshotting on that number (though a smaller or bigger number would work just fine), and what it means is that you never need to fetch more than one segment, yet you always have all the data of the AppendableData, no matter how long the history is.
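A minimal sketch of that segment arithmetic, assuming the numbers described above (segments of 1,000 entries, with the first entry reserved for metadata):

```csharp
// Illustrative segment arithmetic; the constants come from the description above,
// and the helper itself is not part of any published library.
public static class SegmentMath
{
    public const int EntriesPerSegment = 1000;
    public const int DataEntriesPerSegment = EntriesPerSegment - 1; // entry 0 holds metadata

    // Which segment (and therefore which underlying MD location) a data index lives in.
    public static long SegmentOf(long dataIndex) => dataIndex / DataEntriesPerSegment;

    // The slot within that segment, offset by one to skip the metadata entry.
    public static long SlotWithinSegment(long dataIndex) => dataIndex % DataEntriesPerSegment + 1;
}
```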

Event sourcing is a way of storing data that captures every incremental change, instead of the current state of things. Since each entry in a segment is an incremental change, you can take all the entries from the beginning and apply them through the function that builds your current state; then you store that current state as a snapshot. The next time you want the current state, you don’t need to go all the way back to the beginning: you just go to the latest snapshot and apply all the new events that came after it.

Let’s take an example:

Say you have 999 events in the very first segment of your AD. These are withdrawals from and deposits into your account. Adding one more entry would make it overflow. At that point a snapshot is made and passed to the next segment, to be stored in its metadata (or rather, a pointer to the snapshot is stored). The function that builds the current state performs addition for a deposit and subtraction for a withdrawal. You calculate the result of all of them and get a balance of 345 euros. So your snapshot is just that: “Balance: 345 euros”.

Then you do another 123 deposits and withdrawals. The next time you want your balance, you don’t fetch all 999 + 123 events, only the snapshot and the 123 new deposits and withdrawals, and from those you can calculate your current balance. And the next time you reach the end of a segment, you make a new snapshot.
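Here is the same bank-account example as a small sketch: a fold that builds the balance from events, read either from scratch or from a snapshot. The event and method names are invented for illustration and are not taken from the libraries discussed here.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative events for the bank-account example above.
public abstract record AccountEvent(decimal Amount);
public sealed record Deposited(decimal Amount) : AccountEvent(Amount);
public sealed record Withdrawn(decimal Amount) : AccountEvent(Amount);

public static class BalanceProjection
{
    // The function that builds current state: addition for a deposit, subtraction for a withdrawal.
    public static decimal Apply(decimal balance, AccountEvent e) =>
        e is Deposited ? balance + e.Amount : balance - e.Amount;

    // Without a snapshot: replay every event from the very beginning.
    public static decimal FromScratch(IEnumerable<AccountEvent> allEvents) =>
        allEvents.Aggregate(0m, Apply);

    // With a snapshot (e.g. "Balance: 345 euros"): start from the snapshotted balance
    // and apply only the events appended after it (e.g. the 123 new ones).
    public static decimal FromSnapshot(decimal snapshotBalance, IEnumerable<AccountEvent> eventsAfterSnapshot) =>
        eventsAfterSnapshot.Aggregate(snapshotBalance, Apply);
}
```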

It works the same with SAFE.NetworkDrive, but the current state is what the file system hierarchy currently looks like: which folders and subfolders exist, which files, and what the current pointers to the file content are. Every change is something like: folder added, folder renamed, file content set, file deleted, and so on. This also means that the drive can be restored to any state it has ever been in, just by replaying the events up to that point.
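A rough sketch of that idea, with drive events along the lines of those listed above and a replay that rebuilds the state at any point in history. The event types and the replay helper are simplified illustrations, not the actual SAFE.NetworkDrive model.

```csharp
using System.Collections.Generic;

// Simplified drive events, mirroring the kinds of changes listed above.
public abstract record DriveEvent;
public sealed record FolderAdded(string Path) : DriveEvent;
public sealed record FolderRenamed(string OldPath, string NewPath) : DriveEvent;
public sealed record FileContentSet(string Path, string ContentPointer) : DriveEvent;
public sealed record FileDeleted(string Path) : DriveEvent;

// The current state: which folders exist and where each file's content lives.
public sealed class DriveState
{
    public HashSet<string> Folders { get; } = new();
    public Dictionary<string, string> Files { get; } = new();

    public void Apply(DriveEvent e)
    {
        switch (e)
        {
            case FolderAdded a: Folders.Add(a.Path); break;
            case FolderRenamed r: Folders.Remove(r.OldPath); Folders.Add(r.NewPath); break; // children omitted for brevity
            case FileContentSet s: Files[s.Path] = s.ContentPointer; break;
            case FileDeleted d: Files.Remove(d.Path); break;
        }
    }

    // Restoring the drive to any historical state is just replaying up to that event.
    public static DriveState ReplayUpTo(IEnumerable<DriveEvent> events, int eventCount)
    {
        var state = new DriveState();
        foreach (var e in events)
        {
            if (eventCount-- <= 0) break;
            state.Apply(e);
        }
        return state;
    }
}
```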

Does this become difficult to accomplish given the event-sourced nature of how files are stored? If so, what are your thoughts on adopting RDF somehow to support portability of data across apps (this has already been briefly brought up on the forum thread)? For example, storing files which I can then publish as a website/webapp, or being able to read the stored files with any other SAFE app which manages files.

It is a distinct way of handling data, and right out of the box you cannot process event-sourced data with applications that expect a current-state model.

The way this is handled is by introducing projections. Let’s say that SAFE.NetworkDrive had an opt-in RDF Projections module. If you enabled it, you would divert some CPU, memory and bandwidth, and spend some PUTs, to produce a projection of the events into the RDF format. The RDF data model would be eventually consistent.

The file system that is built in memory is basically also a projection of the event stream, so there is already a projections module implemented. Events-to-RDF would just be another one.
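As a sketch of what “just another projection” means in code: the same event stream is folded into different read models through a common shape. The interface below is a hypothetical illustration; an events-to-RDF module would simply be one more implementation of it, updated as events arrive and therefore eventually consistent.

```csharp
using System.Threading.Tasks;

// Hypothetical projection contract: each implementation folds the same events
// into its own read model (an in-memory file system, an RDF graph, and so on).
public interface IProjection<TEvent>
{
    Task ApplyAsync(TEvent @event);
}
```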

Apart from supporting other platforms like Linux and OSX, which I guess must be on your roadmap, what other ideas do you have for the future of the project that you can share to inspire others?

Yes, Linux and OSX are on the roadmap, and should be given fairly high priority now that there is an alpha for Windows.

There is actually something else brewing: an extension to this project which I think will be quite popular. It’s an end-user thing, not a technical detail. I’ll focus on multiple platforms before announcing it, though :)

Great, thanks Edward!

Head over to the forum to join the conversation, and as always, if you have any questions or comments, or a project you’re working on that you’d like to share with the team, drop us a line!
