SATIS 2018 — Day 3

Javier Bustos
Aug 17, 2018 · 6 min read

Day 2 report: https://medium.com/@madestro/satis-2018-day-2-9222dac84495

It is a rainy day here in Sommarøy. Talks started at 9:30, but some of the students left for the airport at 4:30.

The first talk was from Fred Schneider, “Security from Tags”, a joint work with Elisavet Kozyri.

Reference monitors typically enforce security policies by intercepting operation invocations — the policy to be enforced is decomposed into operation-specific checks. Fred’s lecture discussed a data-centric alternative. Here, labels are attached to data, and each label gives a policy that describes how the associated value may be used.

Mediate operation invocations: employ a reference monitor for operations; checks are typically performed at runtime by the OS, and this handles all “safety properties”.

Mediate uses of values: associate tags with values, often implemented by type checking, which handles some “hyper-safety properties”. But a property like confidentiality can’t be checked by looking at a single execution: confidentiality is not a safety property.

Mediate uses of values:

  • Confidentiality: Restrict which principals are allowed to “learn” information about v
  • Integrity: Restrict which principals must be trusted in order for v to be trusted
  • Privacy: Restrict allowed uses of v.

Enforcement: access control. The identity of the requester is the basis for restricting access. There is access control on containers (which requires understanding system internals) and on contents (which keeps the policy independent of the internals and can provide “end-to-end” guarantees). To support access control on contents he used tags and values: a tag Tv is associated with every value v and variable v; it specifies the allowed uses of v, and the implementation enforces the restrictions in the tags. What happens with the restrictions for derived values? They could depend on the restrictions of the variables, the semantics of the operation, and/or the values of the variables. So we need a way to introduce tags into derived values for a function F used to combine values.

[Tx]F.i is the tag of variable X when it is passed as i-th parameter to F

There is also a restrictiveness relation on tags (written ⊑), defining when a value y is allowed to influence a value x. This is very interesting, for instance, to study newspaper biases. Tags form a join semi-lattice.
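To make the semi-lattice concrete, here is a minimal sketch under my own assumptions (not the talk’s implementation): a confidentiality tag is the set of principals allowed to learn the value, the join is set intersection (a derived value is at least as restricted as each input), and t1 ⊑ t2 holds when t2 is at least as restrictive as t1.

```python
class Tag:
    """A confidentiality tag: the set of principals allowed to read."""
    def __init__(self, allowed):
        self.allowed = frozenset(allowed)

    def join(self, other):
        # Join in the semi-lattice: a derived value may only be read
        # by principals allowed to read BOTH inputs.
        return Tag(self.allowed & other.allowed)

    def leq(self, other):
        # self ⊑ other: other is at least as restrictive, so a value
        # tagged `self` may influence a value tagged `other`.
        return other.allowed <= self.allowed

salary = Tag({"alice", "hr"})            # hypothetical principals
name   = Tag({"alice", "hr", "public"})

derived = salary.join(name)              # tag of a value combining both
assert derived.allowed == {"alice", "hr"}
assert name.leq(salary)                  # public data may flow to secret
assert not salary.leq(name)              # but not the other way around
```

With sets and intersection, the join is trivially computable and ⊑ is decidable, which is exactly what the enforcement story below relies on.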

Flow derived restrictions: Restrictions associated with a value v are determined only from restrictions associated with values v’ that influenced v. Lattice of tags specifies when “influence” is allowed.

The idea, then, is to associate tags with restrictions and enforce flow-derived restriction policies on programs, using type checking or dynamic enforcement, provided the join is computable and there is a sound (no false positives), if not complete (false negatives possible), decision procedure for restrictiveness. That yields a “checkable” class of tags. A new class of reactive information flow (RIF) labels is needed to fully support this view.
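A toy dynamic monitor illustrates the “sound but not complete” trade-off (again with my illustrative tags-as-reader-sets, not the talk’s notation): every assignment is checked at runtime, and a flow is rejected whenever the target’s tag is not restrictive enough, even if a cleverer static analysis might have accepted the program.

```python
class FlowError(Exception):
    pass

def guarded_assign(store, tags, dst, srcs, value):
    # Join of the source tags: readers allowed to see ALL inputs.
    derived = frozenset.intersection(*(tags[s] for s in srcs))
    # dst must be at least as restrictive as the derived tag.
    if not tags[dst] <= derived:
        raise FlowError(f"flow into {dst} not allowed")
    store[dst] = value

tags = {"secret": frozenset({"hr"}),
        "pub":    frozenset({"hr", "web"}),
        "out":    frozenset({"hr", "web"})}   # hypothetical variables
store = {}

guarded_assign(store, tags, "out", ["pub"], 42)        # allowed
try:
    guarded_assign(store, tags, "out", ["secret"], 7)  # rejected
except FlowError:
    rejected = True
assert store["out"] == 42 and rejected
```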

Values and variables are tagged with labels p and subject to restrictions R(p); a derived value [v]F has tag T(p,F) and is subject to restrictions R(T(p,F)). The restrictiveness relation can now be defined in terms of R.

Schneider discussed the design of these RIF labels and described a static enforcement scheme, giving examples of their use and presenting JRIF (a RIF extension of Jif for Java; further information here). He also discussed the need for label chains when run-time enforcement is employed, giving some foundational results that characterize the trade-offs.


Group Photo Pause

The last (but not least) talk of the School was Idit Keidar with “Distributed Storage Fundamentals”.

Do you really know where your data is?

With the increase in storage capacity demands, scalable systems are increasingly adopting distributed storage solutions, where storage nodes communicate over a network.

If you talk about systems, you have to talk about failures. Can we provide the illusion of reliable atomic shared memory in a message-passing system? In an asynchronous system? Where clients and servers can fail?

This talk discussed the principles of building fault-tolerant distributed storage. For instance, in the failure-free case we can implement state machine replication with atomic broadcast to propagate updates. But if a process can crash, this solution no longer works.

So, let’s start with a simple Single-Reader Single-Writer register. We will have to move to a sort of ABD [Attiya, Bar-Noy, Dolev 95] algorithm for emulating reliable shared storage using fail-prone storage nodes and the quorum-replication approach.

ABD assumes up to f < n/2 processes can fail; it stores the value at a majority of processes before a write completes, and reads from a majority. Every read quorum intersects every write quorum, hence a read sees the latest value. What is the linearization order when a read overlaps a write? What if 2 reads overlap?
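The quorum-intersection argument can be checked exhaustively for a small cluster; this sketch (my illustration, with n = 5 as an example) verifies that any two majorities share at least one node, which is why a read always meets a node holding the latest completed write.

```python
from itertools import combinations

n = 5
majority = n // 2 + 1          # 3 of 5 nodes form a quorum
nodes = set(range(n))

# Any write quorum and any read quorum must intersect.
for write_q in combinations(nodes, majority):
    for read_q in combinations(nodes, majority):
        assert set(write_q) & set(read_q), "quorums must intersect"

# The overlap is guaranteed by counting: 2*majority - n >= 1.
min_overlap = 2 * majority - n
assert min_overlap == 1
```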

When the reader receives (“read-ack”, v, tag) from a majority, it chooses the value v associated with the largest tag, stores these values in x, t, and sends (“propagate”, x, t) to all (except the writer). Upon receiving (“propagate”, v, tag) from process i, a process updates its copy if the incoming one is newer and sends back an ack to i.

But what if two writes use the same tag for writing different values? We need to ensure unique tags. What if a later write uses a smaller tag than an earlier one? We need to perform a “read before write” (a 2-phase write).
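A common way to get both properties (a sketch under the usual multi-writer ABD convention, not code from the talk) is to make a tag a (sequence number, writer id) pair compared lexicographically: the writer id breaks ties, making tags unique, and the read-before-write phase lets a writer pick a sequence number larger than anything already stored.

```python
def next_tag(highest_seen, my_id):
    """Read-before-write phase result -> a strictly larger, unique tag."""
    seq, _ = highest_seen
    return (seq + 1, my_id)

# Hypothetical read-phase acks from a majority: (tag, value) pairs.
acks = [((3, "w1"), "a"), ((5, "w2"), "b"), ((4, "w1"), "c")]

latest_tag, latest_val = max(acks)   # tuples compare lexicographically
new_tag = next_tag(latest_tag, "w1")

assert latest_val == "b"             # value with the largest tag wins
assert new_tag == (6, "w1")
assert new_tag > latest_tag          # ties on seq are broken by writer id
```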

Can we solve shared-memory consensus in a general way? No: it is impossible in an asynchronous system where even one process may crash.

It is all based on the Ω leader election; the leader election only fails if there is contention. In shared-memory systems, Ω is called a contention manager. The leader always proposes its own value, or one previously proposed as an earlier value.

Optimization: the first write (of b) does not write consensus values, so a leader running multiple consensus instances can perform the first write once and for all, and then perform only the second write for each consensus instance.

Idit Keidar presented atomic transactions over shared storage using a combination of ABD and shared-memory Paxos. Then she followed with some open problems and challenges:

See: “Dynamic Reconfiguration: Abstraction and Optimal Asynchronous Solution”. Alexander Spiegelman, Idit Keidar and Dahlia Malkhi, DISC 2017.

And space bounds on distributed storage, as used in reliable storage: what happens when there is concurrency? See Spiegelman et al. at PODC 2016, and Berger et al. at DISC 2018.


Final remarks: I would like to thank the organizers of SATIS 2018; everything was perfect, and I will be glad to attend the second school (if it takes less than 20 hours of flying).
