We believe that data science should be treated as software engineering.
It’s easy and fun to ship a prototype, whether that’s in software or data science. What’s much, much harder is making it resilient, reliable, scalable, fast, and secure. We’ve spent five years building and running our platform, and want to share some thoughts about what we’ve learnt along the way. Above all else, we believe that data science should be treated as software engineering.
Our mission at Ravelin is to generate accurate predictions of risk at speed and scale, and we apply our predictions to the task of stopping fraud online. We are purposely not dogmatic about the methods employed to make those predictions, and use a combination of Expert Rules, Graph Databases and Machine Learning (ML). …
This article attempts to set out some of the rules for a productive, cohesive and enjoyable working environment for tech teams. It is a working document that I will add to over time.
Tidy, readable, simple code is paramount. It makes reviewing PRs easier, it makes coming back to the code after 18 months easier, and it makes your peers like you more.
Favour constructive conversations over background criticism. Call criticism out when you see it (even in other parts of the company). If you feel the need to complain, make a difference and give feedback instead.
During busy times, when you are deep in code and stress is mounting, it is easy to forget to Treat Others as You Would Expect to be Treated and be “approachable”. Others should not be worried about talking to you. Definitely no jerks, even if they are geniuses. …
Deadline: 31st October 2017 at 23:59 GMT
To keep your integration with Ravelin secure, we are removing support for the following old technologies: SHA-1, TLS 1.0, and TLS 1.1 (TLS is the protocol that currently powers the ‘Secure’ in ‘HTTPS’). All Ravelin APIs and websites will be served only over TLS 1.2 or TLS 1.3 as of 1st November 2017. Connections from browsers and operating systems that only support TLS 1.1 or earlier will be refused.
Ravelin currently supports TLS 1.2 and SHA-256, but falls back to SHA-1, TLS 1.0, or TLS 1.1 only if the client is using an older browser or OS. When web traffic is encrypted with TLS, users see the green padlock in their browser window and “https” in the address bar. …
I write this post as an overview of each aspect of Ravelin’s technology stack; a summary that can hardly do justice to the hours and days each component has taken to implement. This overview will not attempt to deep-dive into any given topic but will, where possible, link out to sources and reference material. The intention is to write more specifically and in far greater detail over the coming months.
The Ravelin stack was entirely green field, which meant no legacy systems to maintain and complete freedom with technology choices. Think back to the end of 2014: Kubernetes, gRPC and microservice frameworks didn’t exist or, if they did, were in alpha/beta. The term “containers” had just become mainstream and Docker was the new hot thing on the block. Google Cloud Platform (GCP) was basically just App Engine, Compute and Storage. …
Because security matters
External pen testers are useful, but they only have a limited amount of time to get to know our systems and are not intimately familiar with the Ravelin codebase. Engineers and data scientists within Ravelin, on the other hand, know the codebase inside out (they wrote it, after all), and they understand the API and the expected functionality. So why not train all technical staff in pen testing? Not only will they gain security awareness, leading to more secure code, but they will also have a bit of fun during War Games.
For those not familiar with War Games, the general idea is to have an attacking team and a defending team. The purpose is to test the infrastructure and application in order to discover unknown vulnerabilities and put fixes in place. It is also useful training for staff who are required to be on-call: during war games the infrastructure can become very unstable, and diagnosing issues quickly can make the difference between winning and losing. …
The demise of the permanent address
A “permanent address” has become less relevant as society is driven forward by technology. I no longer need (or want) physical post: at best it is an inconvenience, and at worst it causes me to miss important information. I’m removing my postbox.
As I see it, the only reason we seem to need a permanent address is as proof of address (PoA) for borrowing. But banks, payday lenders and credit card companies mostly give out unsecured loans over the phone or internet, so is it really that useful? Moreover, they trust credit agencies, who trust electoral registers and other financial institutions to have the right address. When asked to send PoA to whomever requests it, you are typically asked for a utility bill or driving licence. The Driver and Vehicle Licensing Agency (DVLA) gets its proof by posting you the licence, and I haven’t received a physical copy of a utility bill in two years. …
I’m always on the lookout for great new “how to” videos or books on setting up and marketing web apps. I’ve always liked the presentations and talks from Aaron Patzer of Mint.com, which he started and sold for $170 million within 3 years. His talks are very open and frank, and I think they make a lot of sense.
Just found this great talk from FOWA Miami 2010. Some of the key points Aaron makes are:
I’m not sure how many videos related to business I’ve watched (probably hundreds) but only three come to mind when asked to recommend some.
How to Start a Movement | Derek Sivers
Start With Why | Simon Sinek
How To Run A Company With (Almost) No Rules | Ricardo Semler
Ravelin moved from a multi-repository set-up to a mono-repository.
We run a microservices architecture, originally with a separate Git repository for each service. The implicit dependencies between different versions of different services were not expressed anywhere, which led to various problems in building, continuous integration, and, notably, repeatable builds.
As a first step in our journey to improve the stability, predictability and reliability of our build system, we merged all the different services into one repository, called core.
To this end, we created and open-sourced a program which merges multiple separate repositories into one big repository, each into their own…
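The core trick behind merging repositories while keeping each one's history is a git subtree merge: record the merge of the unrelated histories, then graft the incoming tree under its own subdirectory. A self-contained sketch with plain git (the `core`, `svc-a` and file names are hypothetical stand-ins, not our actual services):

```shell
# Sketch: fold one service repo into a mono-repo via a subtree merge.
set -e
work=$(mktemp -d) && cd "$work"
export GIT_AUTHOR_NAME=x GIT_AUTHOR_EMAIL=x@x \
       GIT_COMMITTER_NAME=x GIT_COMMITTER_EMAIL=x@x

git init -q svc-a                      # a stand-in service repository
(cd svc-a && echo 'package svca' > svca.go && git add . && git commit -qm init)

git init -q core                       # the mono-repo that absorbs it
(cd core && echo '# core' > README.md && git add . && git commit -qm init)

cd core
git fetch -q ../svc-a HEAD
# Record the merge without touching the working tree, then graft the
# service's files under their own subdirectory.
git merge -s ours --no-commit --allow-unrelated-histories FETCH_HEAD
git read-tree --prefix=svc-a/ -u FETCH_HEAD
git commit -qm 'Merge svc-a into svc-a/'
ls svc-a                               # -> svca.go
```

After the merge, `git log` in `core` reaches both original root commits, so the service's full history survives inside the mono-repo.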