Written by Gonzalo Buteler

I love when intern candidates ask how we run internships. And if they don’t ask, I tell them anyway and proceed to explain why they should ask. What does the answer to “how does a company run internships” tell you? It’s simple: it tells you how much time you will spend working closely with experienced members of the organization, from engineers to product managers to UX designers. And that time matters, because working closely with great engineers is the fastest way to become a great engineer.

How a company runs internships is one of…


Written by Remi Carton

About two years ago it became apparent that our frontend architecture was showing its limits. We dreaded adding new features because of side-effect bugs. Changing a dropdown could break unrelated parts of the UI. These issues occurred mostly because of the way we managed our application state, or actually didn’t: the DOM was the state.

It’s always the state

It’s easy to fall into this trap: you start with a simple frontend application with little to no JavaScript, because you set out to make an application that could work without JavaScript enabled. Then one day you need some form of…


Written by Carlos Monroy Nieblas

Garbage Collection can take a big toll on any Java application, so it’s important to understand its behavior and impact. After a JVM upgrade of Knewton’s Cassandra database, we needed a tool to compare the performance and impact of different garbage collection strategies, but we couldn’t find an existing tool that would parse gc logs and analyze them.

This post explains the process followed and discusses some results that we believe may be useful while evaluating Java garbage collection strategies.
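A parser of this kind typically starts by extracting the stop-the-world pause time from each log line. Here is a minimal sketch; the class name and regex are ours, and the sample line follows the classic -XX:+PrintGCDetails layout rather than any specific Knewton log:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch: extract the stop-the-world pause duration from a
// ParNew/CMS-style GC log line. A real tool must handle many more
// log-format variants than this.
public class GcPauseParser {
    private static final Pattern PAUSE = Pattern.compile("(\\d+\\.\\d+) secs\\]");

    // Returns the pause in seconds, or -1 if the line has no pause.
    // Nested collector entries appear first, so the outermost (total)
    // pause is the last match on the line.
    public static double parsePauseSeconds(String line) {
        Matcher m = PAUSE.matcher(line);
        double last = -1;
        while (m.find()) {
            last = Double.parseDouble(m.group(1));
        }
        return last;
    }

    public static void main(String[] args) {
        String line = "2016-02-12T10:15:42.123+0000: 1234.567: "
            + "[GC [ParNew: 54321K->1234K(59008K), 0.0273120 secs] "
            + "60000K->8000K(120000K), 0.0280450 secs]";
        System.out.println(parsePauseSeconds(line)); // prints 0.028045
    }
}
```

Aggregating these per-line pauses over a full run is what lets you compare latency and throughput between collection strategies.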

Evolution of Garbage Collection on Java

One of the biggest challenges for any programming language is how it allocates memory while executing a task. Garbage collection is the process of reclaiming sectors of memory that are no longer in use. Garbage collection mechanisms are usually evaluated in three dimensions, known as the three legs of the performance stool: latency, throughput, and footprint. In real life there is always a tradeoff, and if all the stars align, you can optimize for two of the three dimensions.


Source: Explorations of the three legged performance stool

Over the course of…


Written by Seth Charlip-Blumlein

Photo by Michael VH via Flickr (CC BY 2.0)

Introduction

So you’ve built an important new service. It satisfies current use cases and is also forward-looking. Pretty soon it’s mission-critical, so you optimize where you can and keep it humming along.

Then the product changes, and your service needs to do something different. You don’t have time to rewrite from scratch (and you’re never supposed to do that anyway), so you refactor what you can, then start bolting stuff on.

A couple of years later, the product changes again. This time, refactoring isn’t going to cut…


Written by Josh Wickman

As discussed previously, Knewton has a large Cassandra deployment to meet its data store needs. Despite our best efforts to standardize configurations across the deployment, the systems are in near-constant flux. In particular, upgrading the version of Cassandra may be quick for a single cluster, but doing so for the entire deployment is usually a lengthy process, due to the overhead involved in bringing legacy stacks up to date.

At the time of this writing, Knewton is in the final stages of a system-wide Cassandra upgrade from Version 1.2 to 2.1. Due to a breaking change…


Written by Jeff Berger

Everyone who works in tech has had to debug a problem. Hopefully it is as simple as looking into a log file, but many times it is not. Sometimes the problem goes away and sometimes it only looks like it goes away. Other times it might not look like a problem at all. A lot of factors will go into deciding if you need to investigate, and how deep you need to go. …


Written by Jemma Issroff

Why Build a Client Library?

As part of Knewton’s mission to personalize learning for everyone, Knewton strives to provide an easy-to-use API that our partners can leverage in order to experience the power of adaptive learning. Knewton’s API follows industry norms and standards for authentication and authorization protocols, error handling, rate limiting, and API structure.

To authenticate incoming API requests from client applications, Knewton uses two-legged OAuth 2.0. Currently, each partner has to implement OAuth 2.0 on the client side from scratch, which increases the potential for authentication failures when users try to log in.
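For reference, the client-credentials (“two-legged”) flow that partners currently implement boils down to a single token request, per RFC 6749. A sketch of building that request in Java follows; the token URL and credentials are placeholders, not Knewton’s actual endpoint or keys:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the OAuth 2.0 client-credentials ("two-legged") token
// request from RFC 6749. The endpoint and credentials are placeholders.
public class OAuthClientCredentials {

    // HTTP Basic authorization header built from a client id and secret.
    public static String basicAuthHeader(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
            .encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Build (but do not send) the token request.
    public static HttpRequest tokenRequest(String tokenUrl, String id, String secret) {
        return HttpRequest.newBuilder(URI.create(tokenUrl))
            .header("Authorization", basicAuthHeader(id, secret))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString("grant_type=client_credentials"))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = tokenRequest("https://example.com/oauth/token",
                                       "my-client-id", "my-client-secret");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Small as it is, getting details like the Basic header encoding and the form-encoded grant type right is exactly the kind of boilerplate a client library would take off partners’ hands.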

Partners also need to be…


Written by John Thornton

The team is in agreement: the Flimflamulator service is a mess of tech debt and something needs to be done. Other software teams at the company write code that depends on this service, however. They’ve built workarounds, and any changes will propagate into their systems. Flimflamulator provides some customer-facing functionality; the Product team will want to weigh in too.

How do you make sure you’re not creating new problems? What if someone has already thought through solutions to this? There are so many stakeholders that a design review meeting would be chaos. …


Written by Paul Sastrasinh, Robert Murcek

The Knewton API gives students and teachers access to personalized learning recommendations and analytics in real time. In this post, we will pull back the covers of our API to explain how we handle user requests. You will first learn how to build an edge service with Netflix Zuul, the framework we chose for its simplicity and flexibility. Then, we’ll dive into the Knewton edge service to show you how it improves API simplicity, flexibility, and performance.

What’s in an Edge Service

An edge service is a component that is exposed to the public internet. It acts as a…
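Zuul structures an edge service as ordered chains of “pre”, “route”, and “post” filters; real filters extend com.netflix.zuul.ZuulFilter and implement filterType, filterOrder, shouldFilter, and run. Below is a dependency-free sketch of that filter pattern; the Filter interface and the map-based request are simplified stand-ins of our own, not Zuul’s actual types:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Dependency-free sketch of the Zuul filter pattern. Real Zuul 1.x
// filters extend com.netflix.zuul.ZuulFilter; the interface and the
// map-based "request" here are simplified stand-ins.
public class EdgeFilterSketch {

    public interface Filter {
        String filterType();                     // "pre", "route", or "post"
        int filterOrder();                       // lower values run first
        boolean shouldFilter(Map<String, String> request);
        void run(Map<String, String> request);
    }

    // Example pre-filter: marks requests that carry an Authorization header.
    public static final Filter AUTH_FILTER = new Filter() {
        public String filterType() { return "pre"; }
        public int filterOrder() { return 1; }
        public boolean shouldFilter(Map<String, String> req) {
            return req.containsKey("Authorization");
        }
        public void run(Map<String, String> req) {
            req.put("X-Authenticated", "true");
        }
    };

    // Run every filter of the given type, in order, skipping opt-outs.
    public static void runFilters(List<Filter> filters, String type,
                                  Map<String, String> request) {
        filters.stream()
            .filter(f -> f.filterType().equals(type))
            .filter(f -> f.shouldFilter(request))
            .sorted(Comparator.comparingInt(Filter::filterOrder))
            .forEach(f -> f.run(request));
    }

    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        request.put("Authorization", "Bearer token");
        runFilters(List.of(AUTH_FILTER), "pre", request);
        System.out.println(request.get("X-Authenticated")); // prints true
    }
}
```

The appeal of this shape is that cross-cutting concerns (auth, rate limiting, routing) become small, independently ordered units instead of one monolithic request handler.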


Written by Prithvi Raj

Previous blog posts have explained Knewton’s motivation for implementing distributed tracing and the architecture we put together for it. At Knewton, the major consumers of tracing are ~80 engineers working on ~50 services. A team of three engineers designed and deployed the tracing infrastructure detailed in the previous posts. This post highlights our experience and the challenges of running Zipkin at Knewton over the past few months.

Technical Hurdles

We elected to use Redis on Amazon ElastiCache as our storage for spans, which are detailed in the previous post. This caused us a few problems:

Redis was inexplicably slow

Our initial rollout…
