<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by SpiderOak on Medium]]></title>
        <description><![CDATA[Stories by SpiderOak on Medium]]></description>
        <link>https://medium.com/@SpiderOak?source=rss-50bb10e05ffe------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*-phi6f2TQkrMi2fXqI9h6w.jpeg</url>
            <title>Stories by SpiderOak on Medium</title>
            <link>https://medium.com/@SpiderOak?source=rss-50bb10e05ffe------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 02:44:30 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@SpiderOak/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Zero Trust]]></title>
            <link>https://medium.com/@SpiderOak/zero-trust-f480b44f5092?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/f480b44f5092</guid>
            <category><![CDATA[zero-trust]]></category>
            <category><![CDATA[software]]></category>
            <category><![CDATA[security]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Mon, 30 Mar 2020 20:14:26 GMT</pubDate>
            <atom:updated>2020-04-01T16:01:15.563Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*R54mKscpxQI3wwR9LI4vjQ.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@jeremyperkins?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Jeremy Perkins</a> on <a href="https://unsplash.com/s/photos/zero?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption></figure><p>For the last thirty years, the prevailing approach to securing IT has been to secure the network from the outside world: “build a tall, strong wall with well-guarded gates.” From many perspectives this has been a good choice; InfoSec teams focus their efforts and budget on ingress-egress points without having to manage the complexity and churn of an organization’s internal affairs. Unfortunately, it also means that any breach of the perimeter often leads to catastrophic failure.</p><p>In practice, organizations do watch the inside of their networks for threat actors, both insider and external, who might mean them harm; but even this approach still largely trusts the IT network.</p><p>More recently a new concept has gained popularity: <strong>Zero Trust Networks</strong>. With this approach, all services on the network are mutually distrustful and require authentication and authorization amongst themselves. This approach is a large leap forward from the perspective of operations and InfoSec teams; a single breach of an IT system is not a game-over event… unless it is.</p><p>What happens, from a user’s perspective, if the breached system holds the information they need protected? What happens if the breached system is the one upon which a user depended, or, worse yet, a key system like the directory service or network filesystem server? The problem is not the idea behind Zero Trust Networks, but that Zero Trust Networks don’t go far enough.</p><p>What if IT systems are not trusted at all? 
This has become popular in the consumer market with end-to-end encryption (e2e), protecting messages and files from the sender’s device all the way to the recipient’s device. In e2e systems, even if service operators wish to eavesdrop on customers’ communications, they can’t. This is the end game for Zero Trust, where IT systems and their operators are part of the threat model. An administrator of the communications system will not see the contents of the communications, not because the operator is following the rules/policies/compensating controls, but because there are technical measures that protect data from all but the intended parties.</p><p>This is not a dream; it can be done today. The tools are ready for <strong>Zero Trust Infrastructure</strong> to be deployed to protect data in an enterprise environment without trusting anyone but the owners of the data.</p><h3>Articles in this series</h3><ol><li><a href="https://medium.com/@SpiderOak/the-spideroak-vision-and-mission-a37981bb6d64">SpiderOak Mission and Vision</a></li><li>Zero Trust (this article)</li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f480b44f5092" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The SpiderOak Vision and Mission]]></title>
            <link>https://medium.com/@SpiderOak/the-spideroak-vision-and-mission-a37981bb6d64?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/a37981bb6d64</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[software]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Mon, 30 Mar 2020 19:56:36 GMT</pubDate>
            <atom:updated>2020-03-30T20:15:07.768Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*veifkIn1-ndiMx5LSpJPXQ.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@nasa?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">NASA</a> on <a href="https://unsplash.com/photos/Q1p7bh3SHj8">Unsplash</a></figcaption></figure><p>SpiderOak’s vision is to Secure the World’s Data.</p><p>Our goal is to reduce the complexity of the security surface to the point where it can be reasoned about, so that assumptions are few, and those that remain are well understood. This means reducing both lines of code and the number of people who must be trusted to enable secure data storage and access.</p><p>As a customer-focused company, we understand that technology alone cannot solve all problems. Products and support that <em>work with</em> your existing investments are required.</p><p>This is our new mission. We are a thirteen-year-old company which provides tens of thousands of businesses and consumers with best-in-class secure backup and messaging. Now with new management we have extended our expertise to deliver a whole new class of capabilities for securing shared data and managing authority.</p><p>The world is a safer place when its data is secure.</p><p>How do we intend to accomplish this? Our key innovation allows data to be stored and shared using untrusted networks and infrastructure. 
We use a decentralized approach to authority, integrity, and confidentiality, which combines new ideas from end-to-end encryption, digital ledgers, access control, and policy-based security.</p><p>Over the coming weeks and months we will share more about the future of our data security.</p><h3>Articles in this series</h3><ol><li>SpiderOak Mission and Vision (this article)</li><li><a href="https://medium.com/@SpiderOak/zero-trust-f480b44f5092">Zero Trust</a></li></ol><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a37981bb6d64" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Little-Known Story of the Navajo Code Talkers: US Cryptography in WWII]]></title>
            <link>https://medium.com/@SpiderOak/the-little-known-story-of-the-navajo-code-talkers-us-cryptography-in-wwii-f647cc868172?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/f647cc868172</guid>
            <category><![CDATA[cryptography]]></category>
            <category><![CDATA[navajo]]></category>
            <category><![CDATA[wwii]]></category>
            <category><![CDATA[communication]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Thu, 20 Dec 2018 20:37:18 GMT</pubDate>
            <atom:updated>2018-12-20T20:37:18.659Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A9OuFNKpKcK_T13BNkHxBQ.jpeg" /></figure><p>Cryptography, the art and science of encrypting sensitive information, is becoming increasingly commonplace in our day-to-day lives. From iPhones to bank accounts, most of us already interact with cryptography daily, and increasing numbers of people are recognising <a href="https://thebestvpn.com/cryptography/">the value of VPNs</a> when it comes to protecting their privacy. Computers and the internet have allowed the development of a public encryption standard (DES) and the invention of public-key cryptography, two processes which have hauled cryptography, traditionally the preserve of governments and militaries, into the public domain.</p><h3>A Brief History of Cryptography</h3><p>The history of cryptography and encryption can be traced much further into the past than most people might think, certainly beyond the dawn of the computer age. Evidence of cryptography has been discovered in Ancient Egypt, and <a href="https://www.dcode.fr/caesar-cipher">Julius Caesar developed a cipher for his personal communications</a>. Al-Kindi, an Arab polymath, developed cryptography as we recognise it today; however, it remained a slow and clumsy method of communication. As recently as the Second World War, US soldiers were forced to decide whether to wait for hours to send or receive an encrypted message, or to share the information with enemy eavesdroppers in the hope that allied forces would react more quickly. It was, however, during this conflict that a new encryption method was developed, based on the Navajo language, which remains the only spoken military code never to have been deciphered.</p><h3>The Second World War and the Navajo</h3><p>In June 1942, the Second World War was not going well for the Allies. 
Great Britain had survived the Battle of Britain but was yet to score a significant victory against the Axis powers, the US suffered the first invasion of American soil in 128 years following the occupation of Attu and Kiska by the Japanese, and Case Blue, the <em>Wehrmacht </em>plan to capture Stalingrad and the Caucasus, had begun. However, on the other side of the globe in sunny California a promising, top-secret new weapon <a href="http://edition.cnn.com/2017/11/28/us/navajo-code-talkers-trump-who/index.html">was being developed</a>. Based at Camp Elliott, near San Diego, Platoon 382 of the US Marine Corps represented this new weapon. Composed of 29 young Navajo men selected for their skills with both Navajo and English, Platoon 382 was charged with devising an unbreakable code based on the Navajo language and becoming “Code Talkers” in the USMC.</p><p>The Navajo language was viewed favourably for several reasons. It bears very little resemblance to English, had no written form at the time, has a complex grammar, and, depending on pronunciation, a Navajo word can have four distinct meanings. Platoon 382 developed a code based on their language at Camp Elliott, and then proved its speed and accuracy to various high-ranking officers — all within 13 weeks. The code had two levels, the first being a 26-letter alphabet adapted from the Joint Army/Navy Phonetic Alphabet, and the other being a collection of terms common to a US Marine. As a result, a bomber was encoded under the Navajo word for “Buzzard”, which is <em>jeeshóó.</em></p><p>In November 1942, members of Platoon 382 found themselves wading to shore among floating bodies at <a href="http://www.history.com/topics/world-war-ii/battle-of-guadalcanal">Guadalcanal</a>. After finding their units and digging in, the men of Platoon 382 were given a test by their commanding officer to prove their worth. 
They were put up against the existing method, the Shackle protocol, in which a machine would encode a message into a jumble of numbers and letters, which was then verbally transmitted to a receiver, who would then use a cipher to decode the message. Their commanding officer estimated it would take 4 hours to encrypt, send, and decrypt the message using the Shackle protocol, and challenged the Navajo men to beat it. They did so easily, with the receiver transmitting the decrypted message word for word in two and a half minutes.</p><p>Platoon 382 developed its skills during the Guadalcanal campaign, becoming an invaluable unit for directing artillery and mortar fire quickly and accurately, sending warnings to unaware formations, and reporting US troop movements and casualty figures. Such was their importance at Guadalcanal that half of the Platoon was asked to stay after their Division rotated to Australia. They had become a vital and integral part of US Military operations against the Japanese on their first deployment.</p><p>Two and a half years later Navajo Code Talkers were still impressing their brothers in arms, this time during the fiercest and bloodiest fighting seen in the Pacific Theatre during the Second World War, at Iwo Jima. Major Howard Connor, a signal officer in the USMC 5th Division, had six Navajo Code Talkers working around the clock for the first two days of the battle, in which time they sent and received over 800 messages — with 100% accuracy. Connor later noted “Were it not for the Navajos, the Marines would never have taken Iwo Jima.”</p><p>Navajo code talkers continued to be used by the USMC throughout the Korean War and into the early years of Vietnam. 
The Navajo code remains the only spoken military code never to have been deciphered.</p><p><strong>About the author</strong></p><p><em>Sam Bocetta is a retired engineer who spent over 35 years specializing in electronic warfare and advanced computer systems. Past projects include development of EWTR systems, the Antifragile EW project, and development of Chaff countermeasures. Sam now writes for The Strategy Bridge and </em><a href="https://gunnewsdaily.com/"><em>Gun News Daily</em></a><em> as an independent correspondent, and teaches at Algonquin Community College in Ottawa, Canada as a part-time engineering professor.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f647cc868172" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Coding with security in mind: interfaces]]></title>
            <link>https://medium.com/@SpiderOak/coding-with-security-in-mind-interfaces-6ea2d66c360f?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/6ea2d66c360f</guid>
            <category><![CDATA[security]]></category>
            <category><![CDATA[spideroak-engineering]]></category>
            <category><![CDATA[golang]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Wed, 24 Oct 2018 15:01:00 GMT</pubDate>
            <atom:updated>2018-11-13T19:39:08.971Z</atom:updated>
<content:encoded><![CDATA[<p><em>This post is by Tomás Touceda, a member of SpiderOak’s Engineering team.</em></p><p>At SpiderOak we do a lot of coding in Go. I personally really like Go because it feels like Python with type-safety guard rails and almost magical concurrency helpers. But the point of this post is not to do yet another deep dive into why Go might be better than some other language, or how we use it at SpiderOak. The goal is to talk briefly about subtleties in coding that mean the difference between a secure environment and an insecure one.</p><h3>Hashing</h3><p><em>DISCLAIMER: This finding is not mine, but Frank Sievertsen’s.</em></p><p>Hashing is an operation used in many different scenarios, some not even really security related. A popular hashing algorithm is SHA-256; let’s take that as an example.</p><p>Here’s how you calculate the SHA-256 sum of a sequence of bytes in Go:</p><pre>sum := sha256.Sum256([]byte(&quot;hello world\n&quot;))</pre><p>Super simple.</p><p>Ok, so what’s the problem? Nothing with that line above, but how about this code:</p><pre>h := sha256.New()
sum := h.Sum([]byte(&quot;hello world\n&quot;))</pre><p>If you read the documentation closely, you’ll see that that’s wrong. And a lot of people would argue that you <strong>need</strong> to read the documentation in a lot of detail, especially when you are using cryptographic primitives.</p><p>But… if we all did everything the way we are supposed to, this world would be extremely different.</p><p>In case the problem above is not clear, the issue is that sha256.New() returns something that implements the hash.Hash interface. That interface has a Sum([]byte) []byte method, but here&#39;s the <a href="https://golang.org/pkg/hash/#Hash">documentation for it</a>:</p><pre>type Hash interface {
    ...
    // Sum appends the current hash to b and returns the resulting slice.
    // It does not change the underlying hash state.
    Sum(b []byte) []byte
    ...
}</pre><p>So instead of hashing our data, we are appending the hash of an empty input to the data itself, and treating the result as the hash sum. Not good at all.</p><p>You could say that’s a silly mistake, and in some ways you’d be correct, but when it comes to cryptography, I prefer to design things to prevent misuse as much as possible.</p><p>Of course, there’s no easy way to detect <em>really bad</em> use, but since appending to a byte slice is not a complicated thing to write, maybe the helper behavior in Sum([]byte) []byte is not really needed.</p><h3>Encryption</h3><p>A really well-known person in the cryptography community is <a href="https://cr.yp.to/">djb</a>. I know close to nothing about the mathematical background behind basic cryptography, and even less about djb’s work, but one thing I really like about the way djb designs his cryptographic primitives is that they tend to be really hard to implement in an unsafe way (keywords here: side-channel attack resistance). I won’t dive into the specifics; maybe that’ll be a future post, although there’s a lot of material online already.</p><p><a href="https://nacl.cr.yp.to/">NaCl</a> is among the many gems that came from djb et al. It’s an encryption library that’s amazingly fast and really secure.</p><p>One of the main reasons it’s secure is the abstractions it works with. When you use it, you don’t care about specifics: you care about the functionality and trust the writers of the library to make the right call for you because, chances are, they know better.</p><p>This library introduces the abstractions of “box” and “secret box”. The idea is that a “box” is something you can “seal” in a way that only somebody else can “open”. If the box is tampered with, the receiver can detect this.</p><p>A “secret box” is similar, but the way the keys work is different. For more details, please see the official documentation.</p><p>Since NaCl uses primitives that are easy to implement securely, there are a lot of implementations. 
In our case, we’ll focus on two: Go’s official implementation and libsodium (actually, the Go wrappers for libsodium).</p><p>NaCl itself is implemented in the most efficient way possible, so there’s quite a bit of assembler code in it. This is the reason why <a href="https://download.libsodium.org/doc/">libsodium</a> exists: it’s a compatible library implemented in a more portable way.</p><p>Let’s see how you would generate a key pair to be used when sealing/opening boxes in <a href="https://godoc.org/golang.org/x/crypto/nacl">Go’s official implementation</a>:</p><pre>publicKey, privateKey, err := box.GenerateKey(crypto_rand.Reader)</pre><p>Super simple, great! That’s the point!</p><p>Let’s assume you start implementing whatever you’re working on and you realize that for some reason you’d rather migrate to libsodium. You open up your favorite search engine, type “libsodium go” and chances are you’ll find <a href="https://github.com/GoKillers/libsodium-go/">this version of it</a>.</p><p>So you change your key generation:</p><pre>publicKey, privateKey, err := cryptobox.CryptoBoxKeyPair()</pre><p>And that’s it! … right?</p><p><strong>WRONG</strong>. 
CryptoBoxKeyPair returns the secret key first, so it should be like this:</p><pre>privateKey, publicKey, err := cryptobox.CryptoBoxKeyPair()</pre><p>One option to address this is to make use of Go’s type system and return distinct PublicKey and PrivateKey types, which can be []byte underneath but will be checked by the compiler wherever they are used.</p><p><a href="https://pynacl.readthedocs.io/en/stable/public/">PyNaCL</a>, the Python bindings for libsodium (not NaCl), does this differently:</p><pre>from nacl.public import PrivateKey
sk = PrivateKey.generate()
pk = sk.public_key</pre><p>This is a lot harder to get wrong.</p><p><a href="https://github.com/dchest/tweetnacl-js#public-key-authenticated-encryption-box">Tweetnacl.js</a> returns an object with publicKey and privateKey as members.</p><p>Finally, we have <a href="https://github.com/google/tink">Tink</a>, which was developed specifically to “(…) provide simple and misuse-proof APIs for common cryptographic tasks”.</p><p>Let’s look at how this library lets you generate a key and encrypt data (example copied from the README):</p><pre>import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadFactory;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// 1. Generate the key material.
KeysetHandle keysetHandle = KeysetHandle.generateNew(
    AeadKeyTemplates.AES128_GCM);

// 2. Get the primitive.
Aead aead = AeadFactory.getPrimitive(keysetHandle);

// 3. Use the primitive.
byte[] ciphertext = aead.encrypt(plaintext, aad);</pre><p>That looks quite amazing. 
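The typed-key idea mentioned above can be sketched in a few lines of Go. All names here (generateKeyPair, seal) are hypothetical stand-ins, not the API of any library discussed in this post:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// Distinct named types: the compiler now rejects code that passes a
// private key where a public key is expected, even though both are
// []byte underneath.
type PublicKey []byte
type PrivateKey []byte

// generateKeyPair is a stand-in for a real key-generation routine;
// only its signature matters for this example.
func generateKeyPair() (PublicKey, PrivateKey, error) {
	pk := make(PublicKey, 32)
	sk := make(PrivateKey, 32)
	if _, err := rand.Read(sk); err != nil {
		return nil, nil, err
	}
	// (A real implementation would derive pk from sk.)
	if _, err := rand.Read(pk); err != nil {
		return nil, nil, err
	}
	return pk, sk, nil
}

// seal is a placeholder that just returns the message unchanged;
// the point is that it only accepts a PublicKey.
func seal(msg []byte, pk PublicKey) []byte { return msg }

func main() {
	pk, sk, err := generateKeyPair()
	if err != nil {
		panic(err)
	}
	_ = sk
	// seal(msg, sk) would fail to compile: cannot use sk (PrivateKey)
	// as a PublicKey argument.
	fmt.Println(len(seal([]byte("hi"), pk)))
}
```

With []byte everywhere, swapping the two return values compiles silently; with distinct named types, the mistake becomes a compile-time error.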
Inspecting the Go version of it, key generation for ECDSA at first glance <a href="https://github.com/google/tink/blob/master/go/signature/signature_factory_test.go#L89">looks too complicated</a>, but closer inspection shows <a href="https://github.com/google/tink/blob/3e40262213916338ed9d1d5715437e15b986d7d4/go/signature/proto_util.go#L24">that it doesn’t simply use []byte</a> and they introduce the idea of a <a href="https://github.com/google/tink/blob/3e40262213916338ed9d1d5715437e15b986d7d4/go/signature/ecdsa_sign_key_manager.go">key manager</a>.</p><p>If we look at the interfaces it presents to the user, it’s pretty straightforward:</p><pre>type Aead interface {
    // Encrypt encrypts {@code plaintext} with {@code additionalData} as additional
    // authenticated data. The resulting ciphertext allows for checking
    // authenticity and integrity of additional data ({@code additionalData}),
    // but does not guarantee its secrecy.
    Encrypt(plaintext []byte, additionalData []byte) ([]byte, error)

    // Decrypt decrypts {@code ciphertext} with {@code additionalData} as additional
    // authenticated data. The decryption verifies the authenticity and integrity
    // of the additional data, but there are no guarantees wrt. secrecy of that data.
    Decrypt(ciphertext []byte, additionalData []byte) ([]byte, error)
}</pre><p>Clearly there are a lot of ways to implement the same thing. 
The goal is perfection, but since that is unreachable by definition, you want to push the bar as high as you can to make misuse harder.</p><p>So… if you are a user of cryptographic libraries, read the documentation really carefully!</p><p>If you are implementing a cryptography library, please treat the requirement “hard to misuse” as being on the same level as “provide secure primitives”.</p><p><em>Originally published at </em><a href="https://engineering.spideroak.com/2018/10/24/coding-with-security-in-mind-interfaces/"><em>engineering.spideroak.com</em></a><em> on October 24, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6ea2d66c360f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Estimation & Planning]]></title>
            <link>https://medium.com/@SpiderOak/estimation-planning-8941d65b8a64?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/8941d65b8a64</guid>
            <category><![CDATA[spideroak-engineering]]></category>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Wed, 03 Oct 2018 11:48:00 GMT</pubDate>
            <atom:updated>2018-11-07T19:59:04.983Z</atom:updated>
<content:encoded><![CDATA[<p><em>This post is by Renee Jackson, a member of SpiderOak’s Product Management team.</em></p><p>It’s 4:30PM on a Friday and the Sales representative working with your department says “I have a client meeting Monday — can you tell me how long it will take to implement the new feature we talked about today?” <em>Groan</em>.</p><p>Off-the-cuff estimate requests can be stressful; are you better off overestimating and risking a long explanatory conversation? Giving a quick estimate and crossing your fingers you won’t be held to it if you discover there was a lurking dependency? Does “I don’t know” send the message that you are unable to plan reliably?</p><p><strong>Planning Theory</strong><br> There are many brilliant, in-depth articles out there that explain the theory behind planning successes and failures more eloquently and in greater detail. An excellent example from Mary Poppendieck can be found <a href="https://chrisgagne.com/1255/mary-poppendiecks-the-tyranny-of-the-plan/">here</a>. In her presentation, Poppendieck suggests that meticulous plans spanning months into the future fall apart when cascading dependencies or changes in direction subvert careful calculations. Interruptive work, the newest hair-on-fire priority, pushes planned items behind schedule; or, more simply, estimates can be wildly inaccurate.</p><p>So why plan at all? There are parts of your business that will always need timelines. Your board will want to know when you expect to go to market. Sales will need to reliably tell a prospect when they might expect a prototype. Marketing will need to plan campaigns to line up with a major release. 
Other departments with dependencies based on your work will need to know when to expect your team’s portion, so you may as well foster harmony and learn to estimate accurately.</p><p><strong>Timelines are hard to predict accurately</strong><br> You can’t plan for everything, and interruptive work skews carefully made predictions, but if we allow every fire that crops up to push out planned work, we cannot reliably deliver on what we’ve committed to during planning. If we hold fast to planning, the machine runs smoothly, but a given fire may burn for a while before it’s resolved. Additionally, any schedule has the potential for cascading delays based on dependencies. Waterfall methods, which necessitate one team finishing its work before another even starts, can push a careful year-long plan off target every time something is delayed by even one day.</p><p>Consider this scenario: Design promises completed wireframes, specs, and components in two weeks. After three days’ effort the team realizes there’s a cleaner, more user-friendly option and they have to rework some of what they’ve already completed. Now, Engineering is going to get the work two days later than expected, but they still need the three weeks they quoted and don’t have wiggle room to “save” the two days back. Worse, some stomach bug is sweeping the office and now the whole project is five days behind schedule.</p><p>Estimation is plainly and simply a skill that can (and needs to) be honed.</p><p><strong>The Fibonacci Method</strong><br> One of the benefits of sprints and other short planning periods is the very fact that you have a built-in opportunity to stop, reassess the situation, and change direction as needed to reduce the impact of cascading delays. 
We use the Atlassian stack, so we’ve adopted their terminology (Sprint, Scrum, Project, and Board are in heavy use in my vocabulary).</p><p>At SpiderOak we’re running on the theory that “time” is inherently trickier to predict than “effort”. “Time” may vary for similarly difficult tasks depending on how much concentrated time is devoted to it, or whether the difficult part is the planning stage or actual implementation. Additionally, if work changes hands at any point, one person’s “week of work” could be twice as much time as someone else’s. We instead have opted to track “level of effort” (LOE) as our back-of-the-napkin estimation technique. When we plan a sprint, every ticket is assigned “story points”. We use the Fibonacci sequence (1, 2, 3, 5, 8, 13, with our scale ending at 21) to determine “level of difficulty” in story points. The idea is that it’s much easier to decide quickly if something is five versus eight points than it is to decide if something is five rather than six points. If a developer is unsure whether something is “five” or “eight”, we encourage them to err on the side of caution, especially when they are first getting used to this system.</p><p>Ivan Bienco, one of our developers, created the following table to help explain the Fibonacci Method to new onboards:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8zP4Ecv4UlyfM6e0sY510w.png" /></figure><p>Points and what they measure:</p><ul><li><strong>1:</strong> Easy</li><li><strong>2:</strong> Slightly harder</li><li><strong>3:</strong> Level-2 work with “risk”</li><li><strong>5:</strong> This may take a week to complete</li><li><strong>8:</strong> Likely greater than a week’s worth of work; if the task proves any bigger it will need to be broken down</li><li><strong>13:</strong> I have an idea, but this needs to be broken down</li><li><strong>21:</strong> OH MY GOSH I HAVE NO IDEA (whether because no research has been done, the problem is too complex, and/or it may involve research and exploratory work)</li></ul><p>The rule of thumb we use is that each senior developer can action about five points worth of work in a week’s time. 
It follows, then, that a developer who is onboarding or at a junior level should plan to take two to three points per week until they are fully up to speed. This allows us to estimate level of effort somewhat statically across a squad (this terminology is borrowed from Spotify. Full explanation <a href="https://blog.crisp.se/wp-content/uploads/2012/11/SpotifyScaling.pdf">here</a>), while still accounting for differences in skill level across participants.</p><p>At this point in our process, we’re using three week sprints and planning for two weeks worth of work. We do this knowing that while we’ve gotten quite good at estimating developer and design effort, we are not quite confident yet in our ability to estimate other factors such as the time it takes to do code review or to triage and resolve bugs found by QA. Additionally, we acknowledge that some active time will be taken up by meetings that, while valuable, reduce the amount of time available for coding.</p><p>Another thing we rely heavily on is breaking tasks down to their smallest parts. We’ve found that at higher LOE estimates, the level of risk is higher. For example, our likelihood of running into unforeseen and unforeseeable items on a ticket of 13 points — which we define as “more research needed” — is much higher than a ticket of two points. Our developers have found that breaking 13 and 21-point tickets down into smaller actionable tasks makes planning much more reliable.</p><p><strong>Who predicts level of effort?</strong><br> Each of our departments is responsible for predicting the effort involved for their own work. We operate under the theory that only the people who will be responsible for completing the work are aware enough of the intricacies involved to give a good prediction. If I am responsible for predicting a timeline in a meeting without squad representatives present, I will refer directly to their scrum board to estimate future effort, based off their own previous predictions. 
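As a rough illustration of the capacity arithmetic above, here is a small sketch (the function name and squad sizes are made up; the per-week point figures are the ones from this post):

```go
package main

import "fmt"

// sprintCapacity applies the rule of thumb from the post: a senior
// developer handles about five story points per week, a junior two to
// three (2.5 here as a midpoint); sprints are three weeks long but
// planned for two weeks of work.
func sprintCapacity(seniors, juniors int, plannedWeeks float64) float64 {
	const seniorPointsPerWeek = 5.0
	const juniorPointsPerWeek = 2.5
	return (float64(seniors)*seniorPointsPerWeek +
		float64(juniors)*juniorPointsPerWeek) * plannedWeeks
}

func main() {
	// e.g. a squad of three seniors and two juniors, planning two weeks:
	// 3*5*2 + 2*2.5*2 = 40 points.
	fmt.Println(sprintCapacity(3, 2, 2))
}
```

Nothing here is prescriptive; the point is only that a shared, explicit formula makes the sprint-planning conversation concrete.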
If LOE estimates are not yet available for any reason, I will state that I cannot accurately predict how long the feature will take. I may give an estimate of how long until I will know, or give historical precedent for a similar feature as a guideline, but confidence in our predictions is dependent on making sure the people doing the work are the ones predicting timelines.</p><p><strong>So what’s the takeaway?</strong><br> Off-the-cuff estimate requests can be stressful, but they don’t need to be. If your system provides a shared vocabulary for estimation, you can reduce the panic that comes with a 4:30PM question. Our advice is to codify what “effort” means within your organization, practice estimation regularly, and encourage your staff to take some time investigating before committing to timelines. So maybe now you have some tools to answer that “… can you tell me how long it will take to implement…?” question. “I’m not sure yet. Without looking at the tickets more carefully I’ll have to call it an 8. It should be a fairly simple fix, but we haven’t investigated it yet, so I’d prefer to give you an estimate that allows for risk.”</p><p><em>Originally published at </em><a href="https://engineering.spideroak.com/2018/10/03/estimation-planning/"><em>engineering.spideroak.com</em></a><em> on October 3, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8941d65b8a64" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Agile workflow: Sprint planning]]></title>
            <link>https://medium.com/@SpiderOak/agile-workflow-sprint-planning-7fbdd5d6de03?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/7fbdd5d6de03</guid>
            <category><![CDATA[sprint-planning]]></category>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[spideroak-engineering]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Wed, 12 Sep 2018 14:26:00 GMT</pubDate>
            <atom:updated>2018-10-10T19:55:32.276Z</atom:updated>
            <content:encoded><![CDATA[<p><em>This post is by Ben Zimmerman, a member of SpiderOak’s Engineering team.</em></p><p>In my last post I talked about planning features at a high level. This time I’ll talk about how we plan for an individual sprint. The goal with sprint planning is to accurately plan what you can accomplish while working at a sustainable pace.</p><h3>Points system</h3><p>For sprint planning we use a point system based on the Fibonacci sequence: 1, 2, 3, 5, 8, 13, 21. The idea is that as estimates get larger, uncertainty grows, so the gaps between estimates grow too. Each point is roughly one day’s worth of work.</p><p>Most tasks should be broken down until the subtasks are five points or less. It’s very difficult to accurately estimate something that’s more than a few days of work.</p><p>Eight points means a task is more than one week of work, but probably less than three weeks. We recently assigned eight points to porting a fairly large app to Python 3. With that sort of task we knew it probably wasn’t huge, but there was no real way to break it into smaller tasks.</p><p>13 points means we have a reasonable idea of what needs to be done, but probably won’t finish in a single sprint. This should only be used for tasks that can’t be broken into smaller pieces.</p><p>21 points is used when we have no idea how long something will take. This usually means that more investigation is required. Examples would be starting to use a new language or framework, or a difficult bug that we haven’t been able to reproduce.</p><h3>Sprint planning</h3><p>We work in three-week sprints with approximately 10 points per developer, so we’re scheduling about 3 points per week. The missing two points per week are there to account for the inevitable distractions we’ll run into. I have one day a week where I use the morning for planning and I have three meetings in the afternoon.
I intentionally pack most of my meetings into one day so I have more large blocks of time to work on other days. The other extra day is there for emergencies, assisting other developers, and inaccurate estimates.</p><p>With this schedule I’m usually able to complete all of my tasks. Often, one of my tasks will end up being bigger than expected and fill most of my cushion. When everything goes smoothly I can finish by Wednesday or Thursday of the final week of the sprint. I’ll then use the extra time for long-term planning, refactoring, or I’ll pick up an extra ticket.</p><p>You should take your circumstances into account when assigning points. I work remotely and we’re pretty good at minimizing meetings, so most of my time is available for development. If you have meetings every day it may only be realistic to take one or two points per week. Hopefully, if you need to do that, it’ll highlight how many distractions you have and you can push to reduce them.</p><h3>Be flexible</h3><p>The goal of planning is to accurately estimate what will be accomplished. It’s not to push people to get more done. Therefore, it’s important to take into account anything that’ll take away from development time. For example, we recently had the primary support person for one of our products leave. I scheduled slightly fewer points for the next two sprints so I could be available to answer questions and train his replacement. If you don’t take things like this into account in your planning, people will either neglect important tasks or be overwhelmed by the unreasonable workload.</p><p><em>Originally published at </em><a href="https://engineering.spideroak.com/2018/09/12/sprint-planning/"><em>engineering.spideroak.com</em></a><em> on September 12, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7fbdd5d6de03" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using WebAssembly to Accelerate Markdown Rendering]]></title>
            <link>https://medium.com/@SpiderOak/using-webassembly-to-accelerate-markdown-rendering-c64184470cec?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/c64184470cec</guid>
            <category><![CDATA[react]]></category>
            <category><![CDATA[markdown]]></category>
            <category><![CDATA[webassembly]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[spideroak-engineering]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Wed, 29 Aug 2018 10:10:00 GMT</pubDate>
            <atom:updated>2018-10-10T19:56:12.895Z</atom:updated>
            <content:encoded><![CDATA[<p><em>This post is by Chip Black, a member of SpiderOak’s Engineering team.</em></p><p>Markdown rendering is very important to the performance of Semaphor — every message you send and read is a Markdown document — so we’re always looking for ways to make that rendering faster. A couple of months ago Jonathan Moore and I wondered how easy it would be to integrate WebAssembly into a React component, replacing the render() function, and we thought that moving Markdown parsing into Rust would be a great way to test this idea out.</p><p>What we came up with is <a href="https://github.com/SpiderOak/react-wasm-bridge">react-wasm-bridge</a>, an experimental component that passes props into a Rust WebAssembly module and provides an interface to build React render trees (and more!).</p><p>Rust was a natural choice because it has moved very quickly towards supporting WebAssembly. Originally it supported compilation through Emscripten, which produced really large, bloated binaries. Emscripten was designed to support existing C/C++ code, and includes basic POSIX support, including a virtual filesystem. It’s great when you want to <a href="https://archive.org/details/msdos_One_Must_Fall_2097_1994">port DOSBOX to the web</a>, but it’s a bit much when all you want to do is some calculation. Thankfully, WebAssembly was eventually supported <a href="https://www.hellorust.com/news/native-wasm-target.html">as a direct target</a>, which allowed for much slimmer modules.</p><p>One of the standout efforts in Rust’s WebAssembly support is <a href="https://github.com/rustwasm/wasm-bindgen">wasm-bindgen</a>. WebAssembly implements a very basic machine akin to low-level physical hardware. The only data type it really understands is numbers. Programmers of course understand that everything is just numbers, but the translation between JavaScript’s high-level concepts and WebAssembly’s low-level concepts creates a pain point.
It’s not just tedious — it’s very easy to get wrong! My very first implementation allocated and copied strings manually, and because I forgot to null-terminate the string, it would crash. Add the #[wasm_bindgen] attribute to your function, and wasm-bindgen generates cross-language bindings that handle this drudgery for you. You really shouldn&#39;t be writing a Rust/WebAssembly project without it.</p><p>We wanted a couple of things out of a decent Markdown renderer:</p><ol><li>It should be <em>safe</em> — parsing user input is a dangerous game, and the more we can do to isolate it, the better</li><li>It should be <em>fast</em> — rendering messages is most of what Semaphor does</li></ol><p>Semaphor’s current Markdown renderer is <a href="https://github.com/markdown-it/markdown-it">markdown-it</a>. It’s a very robust and surprisingly fast implementation, but using it with React is not entirely safe. Since markdown-it outputs an HTML string, we have to inject it into a &lt;div&gt; with dangerouslySetInnerHTML. We&#39;ve never really been happy with that solution.</p><p>So one of the goals of this new implementation was that it wouldn’t involve any HTML injection. It would create elements (or element representations) directly. To this end we created a Builder class (again, using very cool <a href="https://rustwasm.github.io/wasm-bindgen/design/importing-js-struct.html">wasm-bindgen features</a>) that allows Rust code to construct a React element tree through a stateful procedural interface (we do want to create a declarative interface, but this was easier for the proof of concept and maps especially well to Markdown parsing). The fun thing about this Builder interface is that it can theoretically be used to build any kind of tree, like a JS object or DOM nodes (more on that later).
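</p><p>To make the shape of that stateful, procedural interface concrete, here is a plain-Rust sketch of a tree builder with the same open/text/close flow. This is purely illustrative: the real Builder emits React elements or DOM nodes across the wasm boundary via wasm-bindgen, while the types and method names here are invented.</p>

```rust
// Illustrative only: a builder with the open/text/close shape described
// above, producing a plain Rust tree rather than React elements.

#[derive(Debug)]
pub struct Node {
    pub tag: String,
    pub text: String,
    pub children: Vec<Node>,
}

pub struct Builder {
    stack: Vec<Node>,
}

impl Builder {
    pub fn new() -> Self {
        // A synthetic root node collects the top-level elements.
        Builder {
            stack: vec![Node { tag: "root".to_string(), text: String::new(), children: Vec::new() }],
        }
    }

    /// Open a new element; later calls nest inside it until close().
    pub fn open(&mut self, tag: &str) {
        self.stack.push(Node { tag: tag.to_string(), text: String::new(), children: Vec::new() });
    }

    /// Append text to the currently open element.
    pub fn text(&mut self, t: &str) {
        self.stack.last_mut().unwrap().text.push_str(t);
    }

    /// Close the current element and attach it to its parent.
    pub fn close(&mut self) {
        let done = self.stack.pop().expect("close() without matching open()");
        self.stack.last_mut().unwrap().children.push(done);
    }

    pub fn finish(mut self) -> Node {
        self.stack.pop().expect("unbalanced open()/close() calls")
    }
}

fn main() {
    // A Markdown parser walking "**hi**" might drive the builder like so:
    let mut b = Builder::new();
    b.open("p");
    b.open("strong");
    b.text("hi");
    b.close();
    b.close();
    let tree = b.finish();
    assert_eq!(tree.children[0].tag, "p");
    assert_eq!(tree.children[0].children[0].text, "hi");
}
```

<p>The appeal of the design is that nothing in the parser depends on what the builder produces; swapping the output from one tree type to another only means swapping the builder.</p><p>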
And for the security-conscious, you can make a restricted Builder that refuses to output certain elements or attributes.</p><p>In addition to the safety of more restrictive element generation, the WebAssembly environment acts as a sandbox. It has no access to JavaScript except via functions exported to it, making any code execution exploit in the parsing code far less useful to an attacker.</p><p>And of course we wanted it to be faster. It seems like you could gain speed by removing the HTML parsing step. But you must always <em>bench it</em>. How much faster is it <em>really</em>? Well, the first time I benchmarked it, it wasn’t faster at all. And even after working on it for a bit, the answer is still “It depends.”</p><p>The first problem is that loading and instantiating WebAssembly isn’t as fast as it could be. Browsers are making strides in streamlining this process, but the initial load will still take some time compared to JS, which is very well optimized. If you only want to render one Markdown document, this would be a very poor approach.</p><p>The second problem is that the way React builds DOM is slow. The original Builder called React.createElement to make a tree suitable for returning from a React component&#39;s render function. But this turned out to be about 50% slower than the markdown-it solution. We were excited about the potential security advantages, but half again slower is a bitter pill to swallow.</p><p>After some discussion, we decided to try taking React out of the loop and creating a Builder that outputs DOM nodes directly. After all, Semaphor’s messages are immutable, so there’s never a need to re-render them. And it’s a slightly fairer comparison — our markdown-it approach <em>also</em> skips React. Adopting that approach made it far more competitive.</p><p>The final problem is that WebAssembly is still on the bleeding edge.
Initially I was only able to test in Firefox because wasm-bindgen and Webpack didn’t yet support asynchronous loading, and Chrome prohibited synchronous loading of WASM modules over 4KB. But when that was fixed, the results were surprising. In Firefox, markdown-it is still slightly faster. In Chrome, our WASM approach came out way ahead. All of these results are measured from componentWillMount to componentDidMount, in production/release mode, rendering 100 test documents.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/592/1*_GXruBSSMyK1PWZCVuUcFg.png" /></figure><p>As you can see, it’s not a clear win if you need broad browser support. But the technology is improving every day, and I expect this will change rapidly. And there are still improvements to be made to the bridge.</p><p>You can take a look at the Markdown implementation at <a href="https://github.com/SpiderOak/react-markdown-wasm">https://github.com/SpiderOak/react-markdown-wasm</a> (which of course is built on react-wasm-bridge at <a href="https://github.com/SpiderOak/react-wasm-bridge">https://github.com/SpiderOak/react-wasm-bridge</a>).</p><p>Lessons learned:</p><ol><li>Understand your problem. You can make useful optimizations when you nail down your use case.</li><li>Don’t assume WebAssembly is faster. Bench it. And bench it tomorrow, because it’ll probably be different.</li><li>WebAssembly can provide useful isolation for security purposes.</li><li>React is still not fast at rendering deep trees of trivial HTML elements.</li></ol><p>I hope this was interesting, and thank you for reading!</p><p><em>Originally published at </em><a href="https://engineering.spideroak.com/2018/08/29/using-webassembly-to-speed-up-message-rendering/"><em>engineering.spideroak.com</em></a><em> on August 29, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c64184470cec" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Revamped SpiderOak Support Site]]></title>
            <link>https://medium.com/@SpiderOak/the-revamped-spideroak-support-site-3b2294aff49d?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/3b2294aff49d</guid>
            <category><![CDATA[support]]></category>
            <category><![CDATA[customer-service]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Mon, 20 Aug 2018 17:09:57 GMT</pubDate>
            <atom:updated>2018-10-05T15:37:02.178Z</atom:updated>
            <content:encoded><![CDATA[<p>Providing you with tools that help you get the most value out of SpiderOak products is a top priority for our customer success team, and our Help Center is a key component of this mission. We recently made some updates to our support site that we want to tell you about.</p><h3>New URL</h3><p>SpiderOak’s Help Center is now hosted at <a href="https://spideroak.support/">https://spideroak.support</a>. This is a small change from the original address, support.spideroak.com, but it brings a big benefit for our users in terms of the speed and responsiveness of our Help Center. Direct links to support articles will all redirect to the new domain. If you do happen to find a URL that doesn’t load correctly, please be sure to let us know at <a href="https://spideroak.support/hc/en-us/requests/new">https://spideroak.support/hc/en-us/requests/new</a>.</p><h3>New self-service tools</h3><p>We believe that you should always have a clear path to contact our support team; some issues need help from our experts. However, in many cases it’s possible to self-solve using the information in our Help Center. To help you do so, we have two new resources:</p><ul><li><a href="https://spideroak.support/hc/en-us/articles/115003637003-SpiderOak-ONE-and-SpiderOak-Groups-Troubleshooter">Interactive Troubleshooter for SpiderOak One</a>: Follow a step-by-step diagnostic process to resolve issues</li><li>Auto answers: When a new support ticket is opened, our system will automatically suggest relevant Help Center articles based on the text of your request.</li></ul><h3>We’re here for you</h3><p>One of the things we pride ourselves on is the quality of our support.
We want you to have a successful experience with all SpiderOak products, and our team of support experts is available to help you with anything from simple questions to complex issues.</p><p><em>Originally published at SpiderOak.com on August 20, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3b2294aff49d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Agile workflow: Planning for a new feature]]></title>
            <link>https://medium.com/@SpiderOak/agile-workflow-planning-for-a-new-feature-f056f0dfba6e?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/f056f0dfba6e</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[agile]]></category>
            <category><![CDATA[spideroak-engineering]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Wed, 15 Aug 2018 10:05:00 GMT</pubDate>
            <atom:updated>2018-10-10T19:58:56.560Z</atom:updated>
            <content:encoded><![CDATA[<p><em>This post is by Ben Zimmerman, a member of SpiderOak’s Engineering team.</em></p><p>Whenever I see discussions of agile workflows, I read developers complaining about how they don’t work. They’re constantly pushed to do more and to do it faster. That leads to buggy code and missed deadlines. It also makes working as a developer miserable. In this series I’m going to talk about some of the things we’ve done to make using an agile process enjoyable. I don’t expect this exact process to work for everyone. However, it does work for my team.</p><p>First up is planning a new feature. This involves writing a specification and providing a rough estimate. The purpose of these is to communicate with non-technical people within your company. You need everyone to agree on what you’re going to build and to know roughly when it’ll be ready.</p><h3>Writing a Spec</h3><p>The purpose of writing a spec is to achieve consensus on what is going to be built. It’s very helpful to have a complete picture of what you’re going to build in one place. It’s impossible to make intelligent technical tradeoffs if you don’t know what you’re going to be building or what its constraints are.</p><p>Don’t try to design everything in this initial spec. The purpose is to get a rough idea of what you’re going to build and any constraints. For example, if you’re designing some dynamic web pages, you would want to know:</p><ul><li>What pages will there be?</li><li>What information will be displayed on each page?</li><li>Where will the information come from?</li><li>What actions can be performed on each page?</li><li>Are there any constraints such as a minimum response time?</li></ul><p>It’s helpful to include the motivation for the project. If you know why you’re building something, you may be able to think of better ways to solve the problem. Also, include any help you expect to need from others.
If you’ll need some graphics designed or a new service to be deployed, it’s best to let those people know as soon as possible.</p><p>Finally, get all the stakeholders involved to review and sign off on your spec. This allows you to get feedback and clarify the design. Include a representative from every department that will be involved. This might include business, development, QA, OPS, and design.</p><h3>Estimation</h3><p>It’s not fun to have to estimate something that’s only vaguely defined, but it’s often necessary. If someone on the business side wants a new feature, they need to know the rough level of effort to know whether it’s worth doing. A feature that’s a huge win if it takes three months may not be worth doing at all if it takes a year.</p><p>A key here is to add plenty of padding to account for the inevitable changes and distractions. It’s easy to think about how long something would take if you focused on it exclusively, but that’s not realistic. You’re going to have meetings, emergency bug fixes, and some features will take longer than expected.</p><p>Here’s how I might go about putting together an estimate. At first glance I estimate the spec will take two to four weeks to implement. I know it’s likely the design will change somewhat as we go, so I increase the estimate to six weeks to account for that. I increase it to two months because I expect to get pulled away for something else at least once during the project. I then add an extra month to give some padding even in the worst-case scenario. So, my final estimate is three months.</p><p>That might seem like a crazy high estimate to you, but there’s a purpose in doing it this way. The goal is not to give an accurate estimate of how long the development alone will take. It’s to give your stakeholders an upper bound on how long the project will take to end up in users’ hands. This may include design creating assets, development writing the code, QA testing, and OPS deploying the feature.
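</p><p>That padding walk-through can be written down as a toy calculation. To be clear, the step sizes below are just the ones from the worked example above, not a general-purpose formula the team uses:</p>

```rust
// Toy re-creation of the padding walk-through above. The increments are
// lifted from the worked example (4 weeks -> 6 -> 8 -> 12), purely for
// illustration.

/// Start from the upper bound of the gut-feel range, then pad it out.
fn padded_estimate_weeks(upper_bound_weeks: u32) -> u32 {
    let with_design_churn = upper_bound_weeks + 2; // the design will change as you go
    let with_interruptions = with_design_churn + 2; // expect to be pulled away at least once
    with_interruptions + 4 // a final month of worst-case padding
}

fn main() {
    // "Two to four weeks" of development becomes a three-month commitment:
    println!("{} weeks", padded_estimate_weeks(4)); // prints "12 weeks"
}
```

<p>The exact increments matter less than the habit: pad from the pessimistic end of the range, not the optimistic one.</p><p>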
If you’re able to do this well, the people waiting on your work will be able to confidently plan around the new feature.</p><p>Also, if there’s some reason that estimate won’t work, the best time to discuss that is before you start. Your team may realize that the project isn’t worth doing. Or you can discuss what tradeoffs you can make to get done faster. This might include cutting features or pulling in someone else to take over some of your usual responsibilities.</p><p>It can be difficult to give as long an estimate as you ought to. However, you’ll find that if you’re able to consistently hit your estimates, others will love you for it. Also, having a reasonable timeline takes away a lot of the pressure to cut corners. If you keep cutting corners, you eventually end up with a mess that makes everything take longer anyway.</p><p><em>Originally published at </em><a href="https://engineering.spideroak.com/2018/08/15/agile-workflow-planning-for-a-new-feature/"><em>engineering.spideroak.com</em></a><em> on August 15, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f056f0dfba6e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Transparency Report is a Canary | SpiderOak]]></title>
            <link>https://medium.com/@SpiderOak/a-transparency-report-is-a-canary-spideroak-62cb235016b1?source=rss-50bb10e05ffe------2</link>
            <guid isPermaLink="false">https://medium.com/p/62cb235016b1</guid>
            <category><![CDATA[encryption]]></category>
            <category><![CDATA[spideroak-canary]]></category>
            <dc:creator><![CDATA[SpiderOak]]></dc:creator>
            <pubDate>Mon, 06 Aug 2018 17:36:35 GMT</pubDate>
            <atom:updated>2018-12-19T18:16:39.281Z</atom:updated>
            <content:encoded><![CDATA[<p>Over the weekend there has been chatter on the internet about the change at SpiderOak from a <a href="https://spideroak.com/canary">Warrant Canary</a> to a <a href="https://medium.com/@SpiderOak/transparency-report-94d8c0170285">Transparency Report</a>. We understand that some people are concerned that this is a signal that we have in some way been compromised. To be completely clear: nothing has changed, other than the way we report our interactions with the government, since the first time we posted a canary in August 2014.</p><p><a href="https://spideroak.com/transparency/">We have received: 0 Search Warrants, 0 Subpoenas, 0 Court Orders, and 0 National Security Letters</a>.</p><p>Even better for our customers, we couldn’t hand over their data even if we were asked to. The <a href="https://spideroak.com/no-knowledge/">No Knowledge</a> approach that SpiderOak uses means that we don’t have the keys to decrypt the data you trust us to store for you.</p><p>Thank you for choosing SpiderOak and deciding to trust in cryptography instead of promises.</p><p><em>Originally published at SpiderOak.com on August 6, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=62cb235016b1" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>