You Got This, Team!

Lessons from my Tour of Duty with our Security Engineering team.

Thilina Ratnayake
26 min read · May 29, 2018

There I was, in the first week of my tour of duty, asking the question:

“So, What IS dev ops?”

THOUGHT. LEADERSHIP.

I’ve been relatively quiet since my last post because I’ve been working hard on my second tour of duty, which I’m sad to say wrapped up two Fridays ago. I was with the IAM (Identity Access Management) team for approximately 3 months and I listened, laughed, led (some thought leadership) and most importantly, learned from a great bunch of folks.

But before I go into detail about what I learned, I figured it might be a good idea to provide an explanation of what a Tour of Duty is.

So, What IS a Tour Of Duty?

Photo by Goh Rhy Yan on Unsplash

A Tour of Duty is a home-grown concept that came out of (I’m told) OpenDNS days. There’s no “formal” definition of what it is, but in its essence — it involves people embedding with other teams for a period of time to learn more and try something new.

These tours of duty are great for all parties because they:

  • Allow the touring member to experience new processes and gain new ideas by being really embedded within a team for a prolonged period of time.
  • Increase collaboration and reduce information silos within the organization, which also has the added benefit of building inter-team cohesion.
  • Allow both teams to learn new things from other parts of the organization and bring them back to improve their existing processes.

In my case, since I'm at the point in my career where I'm ready to make the transition from Support to Engineering, these tours of duty are great because they:

  • Allow me an opportunity to sample other teams in great detail by really working alongside them on the same things they're working on, and to learn whether that specific team would be a good fit for me, and conversely;
  • Allow teams to try me out and, if I'm a good fit, fill a position with someone who has already proven they are a good fit and can do the work.

How does it work?

Usually, an engineer just arranges with the two managers to be loaned out to another team for a week or two. In my case, since my role involves being on the phones and providing coverage for the North American time zone, it's not usually possible for me to be away from the team full-time, because that would leave us one short on phones. However, thanks to my supportive manager(s), I've been working under the following arrangement:

  • Being present within the team for morning stand-up meetings.
  • Allocating roughly 1 hour per day to working within my engineering team.
  • Attending extra meetings / events if my schedule and work-load allowed for it.

So what do you do?

The usual pattern that's emerged is that the first few weeks involve getting acclimated with the team's processes, architecture and codebase (which means asking a LOT of annoying questions), and then slowly but surely finding out if there are specific features, tasks or jobs that I can work on.

Sometimes, as in the case of my first tour of duty with Quadra, an opportunity presents itself for me to work more on a project, using my daily hour to sync up with the engineers and power through any problem or area I'm stuck on. The benefit of working with Quadra was that their stack was more in line with my background in network engineering and packet crafting, which allowed me to work a bit more independently.

In other teams, where the complexity of the work is much higher, I would mainly observe, ask questions and try my best to contribute where I could. In this tour, the arrangement was for me to ramp up with the team using 10% time and then finally spend one week embedded with the team full-time.

Are you a full member of the team?

Yes! I went into these tours of duty worried that I'd be a drain on the team's resources or an annoying house-guest. In fact, I was fully expecting to stay out of the way in the corner, but at every turn I was met with positivity, with people answering my questions and helping me learn. I'm very thankful that all of the teams really took it upon themselves to make me feel like a full member of the team, whether it be inviting me to dinners or even incorporating me into the intra-team memes, traditions and jokes.

You can read more about my first Tour of Duty with Quadra, here.

Tour of Duty #2 — SecEng IAM

My second Tour of Duty was with the Cloud Infrastructure Engineering — Security Engineering (SecEng) Identity Access Management (IAM) team.

What do they do?

They handle the management of identities and access for an internal platform. This means building the actual APIs that their users interact with, but also working on the infrastructure, monitoring and operations as well. One day you could be working on code for a feature in the API, and the next day be wearing a completely different hat, working on the CI/CD pipeline or the infrastructure orchestration platform. The fact that we operate on almost the full stack means that there's a lot of variety in what we get to do, and no two days feel the same.

Left to Right, Adriano, Tal, Hugh, Gary

Who are they?

The team is currently made up of 3 engineers — Hugh, Adriano and Gary — who are led by their Technical Lead, Tal, under the management of Chris.

The first thing I need to note about this team is that they are all extremely smart. But above all else, all of them are extremely passionate about what they do, positive, and (this is where I find a lot of engineers can sometimes falter) good at explaining what they do.

While I could go into a lot of detail about each team member and what I learned about and from them, I’ll try to keep my assessments brief:

Hugh is the most visible, and failing that, the most audible member of the team. I was drawn to this team from frequently hearing his laugh across the office and being curious about "the dude who juggles and wears a lot of space stuff". On any given day, you can see him decked out in either NASA/SPACEX/JPL merch or (hilarious) communications/security t-shirts while standing at his desk juggling bean bags and talking his way through problems. While the rest of SecEng are mainly systems engineers, Hugh comes from a sys-admin and operations background, which I found very valuable when building a service that you yourself will need to operate. You'll remember that it was Hugh's laugh that made me curious about the team, but it was his frequent explanations to Adriano, which I witnessed while walking past, that made me really want to join this team, because I knew that I would learn a lot.

Adriano was my desk mate and the one who convinced me that I should definitely do my tour of duty with IAM next. He's currently an intern from UBC and the youngest on the team, and my favourite thing about him is that he's an extremely fast thinker who's able to analyze problems and come up with initial solutions on the fly. His background is Computer Engineering (CENG) at UBC, building the actual chips, but his passion is in CI/CD, which was a huge asset in helping me learn the pipeline.

Gary is one of the other full-time engineers within IAM, and he brings the experience of having worked as a Software Engineer and of currently being an instructor at BCIT. This combination really came through in our weekly "team training" sessions, where his lesson on Kubernetes allowed me to get caught up quickly on how it works and how we use it.

Tal is our technical lead, and if there's one way I can describe him, it's that he is very cool, calm and collected. In addition to his super power of being able to thrive in a flood of notifications, he's very knowledgeable in a whole slew of technologies and concepts. I think what impressed me the most is that whenever we were tackling a new problem, he was usually able to say "Hmm, what if we tried X", or direct us to a resource where the answer could be found. I guess that's implied for a technical lead, but I was surprised that there wasn't a moment when he seemed stumped. Which is also good, because in times of pressure he's able to calmly analyze what's happening and determine the next course of action.

Chris is the manager for SecEng, and while I didn't get to work or talk that much with him, what I take away from my interactions is his openness in letting me join the team and his belief and trust in the team. I think what stood out the most was what I noticed from when he wasn't interacting with the team: taking the brunt of the "boring manager things" so that we could focus on doing the fun things, such as building our product. I did notice that he was always visible and active on our chat, though, and there to help out, offer direction and guidance, and take on tasks whenever we needed.

What did I do, what did I learn?

While the purpose and goal of my Tour of Duty was mainly to observe and learn from the team, I'm very happy that I was also able to write some code and contribute to the team's work stack. To better outline the things I learned, I think it might be good to frame it all against the things I got to do.

A typical day!

1. Wrote and shipped code within an organized team while working on a relatively large codebase.

On my tour, I got the chance to work on — and ship! — two code changes tackling different parts of our service. Seeing as how I plan to become an engineer, this is probably my proudest achievement and where I feel I learned the most.

Even though they were both relatively “minor” changes with regards to lines of code, the lessons learned were comprehensive as I got to touch so many different areas.

Photo by Kelly Sikkema on Unsplash

Gathering Requirements

For example, even in gathering requirements I learned that there's usually more to the change than what's initially on the Trello card. I also learned that it's very normal to need to ask more questions and do more research about the high-level description of the work to be done, and more importantly, that it's normal for those requirements to change over the course of the week or sprint.

Reflecting back, I remember how in one of my initial weeks, after getting assigned a card for one of the "small" CLI change features, the PR went back and forth between multiple people discussing the merits, disadvantages and alternatives for the changes I made. Initially, I found this frustrating, as I had estimated this change would take only a few days, but the discussion was starting to (1) extend the timeline past a week and (2) require more and more changes on my part! What I learned, however, was that sometimes this is normal and it's worth slowing down and revising your change to make sure you get it right. Even though pushing this change out took a lot longer due to all these conversations, I feel more confident now because it was thoroughly vetted (by multiple people) and because, after this process, I have a much deeper understanding of why we engineered things the way we did.

Using Version Control (properly)

In school I “learned” about version control, but I’d never really used it properly. Most of my work in academia as well as personal projects mainly consisted of creating a repo and simply making commits to the master branch. One cool takeaway from this Tour of Duty was getting to see Version Control utilized properly through the use of branches, merges and Pull Requests.

Pull Requests specifically were very new to me, but after this tour, I can understand how they were (and are) such a big part of the Open Source Software community. They make collaborating on code in teams very easy by providing a central, chronological interface to view and discuss changes in the same context as the code. I can't imagine ever going back to a system of making comments on a Trello card or over email again.

One lesson that, thinking back, should have been more obvious, but that I'm nonetheless happy to have learned, is how to review a PR. First off, I was flattered that I would even be invited to review a PR (mainly because I felt that everyone else on the team was so much smarter than me), but also very nervous because I was worried I'd have no idea how to make sense of some of these changes if I didn't know enough about what we were doing. Fortunately, I learned that (1) getting people to review PRs is good (even if they don't have a full grasp of the whole team's activities) just for the effect of having "eyes-on" and being able to detect small issues and inefficiencies, and (2) sometimes it's better if you don't have a grasp of the full picture, because the code you're reviewing should do a good job of explaining what it does.

And, if at any point in the two steps written above things don't make sense, the beauty of PRs is that they're meant for asking questions and discussing the changes. So if it's your first "eyes-on" the code, or you don't understand what's happening, the solution is simply to ask more questions, which is baked into the mechanism of how PRs work.

Designing Code

I remember in the same change referenced above, I went in (as most new engineers do) guns blazing and thought I was “done” within a day. I was super confident that I’d completed all the items on the Trello card checklist and all that might be needed was some slight tweaks. Boy was I in for a nice awakening when I learned that there’s a difference between “done” and “Done Done”.

You see, “done” is subjective, and something that you as the engineer who wrote the code might perceive. However “Done Done” only happens when your PR builds successfully and has been accepted by your reviewers. In this tour, I learned that the gap between “done” and “Done Done” can get excruciatingly large, especially if you went in guns-blazing to simply write code to “check off” features.

I remember submitting my first PR and deflating as Tal went through it for a "first pass" that consisted of quite a few questions and recommendations for changes. To be clear, I didn't think that I would pass on my first try (in fact, I learned that it's very rare for a PR to get approved right away), but I did learn that it's good to spend more time designing your solution before creating it. From this experience, I was able to extract some things that I could employ in the design stage of my next features, and indeed I noticed that on my subsequent PRs I didn't make the same mistakes.

If I could sum those lessons up, it’d be:

  • Challenge and optimize your solution in the design stage so that you can fend off those questions/change requests before they’re made.
  • It’s easier to ask questions and “strengthen” your solution before you write code than after.

Working on a Large Code Base

This was the first time I got to work on a codebase created by a team larger than a group of 5 students, and over more than a few weeks, which at first seemed very intimidating. Take for example one of my features, where I had to essentially make sure that a check happened prior to another event. I remember pondering: "Where do I even start? What should my entry point be?" Initially, I decided to fire up my IDE with the debugger and step through the code, and that got me where I needed to be, at least for the first pass. But I couldn't help but feel that I was missing some occurrences.

As a solution, Hugh recommended I use a tool called ag, which lets you grep through a code base for invocations of functions (e.g. search for "foo(") while also ignoring unnecessary directories (e.g. dependencies). Using this method, I could be sure that I had found and coded my check into all the pertinent occurrences of that function, without feeling like I'd "missed a spot".

Testing Code

A lot of people don’t like testing; it’s the “eating your vegetables” of Software Engineering — But it’s one of the most crucial steps to ensure that what you’re building actually..well..works. I think that I had a sour taste in my mouth for testing from using some outdate and archaic testing setups in the past (Gradle and Java anybody?) but what I was pleased to find is that GoLang makes testing VERY EASY — to the point that I can see “Test Driven Development” being a viable option for even fast projects with Go.

However, since the "nitty gritty" of testing was taken care of by the language I was using, the real lessons came from learning to ask the questions "Do I need to test this? How do I test this?" Sure, your code only does some simple data validation and massaging of attributes, but how do you test that it'll work with your database? Here I learned that while unit testing is relatively easy, it's testing with external dependencies (i.e. databases, external services) that gets tricky (and annoying) very fast, and the solution is to mock or "shim" just those pieces to imitate what your external dependencies should do, as sketched below.
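
Here's a minimal sketch of that shimming idea, assuming a hypothetical UserStore dependency (none of these names come from the team's actual service): because the code under test depends only on a small interface, the test can swap in an in-memory fake instead of a real database.

```go
package users

import (
    "errors"
    "testing"
)

// UserStore is the narrow slice of the database this code actually needs.
// Depending on an interface (rather than a concrete DB client) is what makes
// the shimming possible.
type UserStore interface {
    Exists(id string) (bool, error)
}

// EnsureUser is the code under test: validate the id, then check the store.
func EnsureUser(store UserStore, id string) error {
    if id == "" {
        return errors.New("empty user id")
    }
    ok, err := store.Exists(id)
    if err != nil {
        return err
    }
    if !ok {
        return errors.New("unknown user")
    }
    return nil
}

// fakeStore is the "shim": it imitates the database with a plain map, so the
// test never needs a real connection.
type fakeStore struct{ ids map[string]bool }

func (f fakeStore) Exists(id string) (bool, error) { return f.ids[id], nil }

func TestEnsureUser(t *testing.T) {
    store := fakeStore{ids: map[string]bool{"alice": true}}

    if err := EnsureUser(store, "alice"); err != nil {
        t.Errorf("expected alice to be found, got %v", err)
    }
    if err := EnsureUser(store, "bob"); err == nil {
        t.Error("expected an error for the unknown user bob")
    }
}
```

For a lot of cases, a hand-written fake like this is all you need before reaching for a full mocking library.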

Secure Coding

Engineering at Cisco makes use of something called the Cisco Secure Development Lifecycle (CSDL), also known as "how do you write and operate your system without getting owned". In school I learned about things like input sanitization and the principle of least privilege, but the application seemed a little vague. Again, one of the best things about working with SecEng was seeing how it worked "in real life", and I guess the takeaway is that you just need to constantly be evaluating how you could be owning yourself. In SecEng this goes from designing code all the way to conducting fire drills and considering security even in post-mortems. But the thing I found most impressive is thinking about security even when it's not part of the process. Specifically, I recall a few occasions where either Tal or Adriano would come into the morning stand-up saying "So I was thinking about the way we were doing ____ and I think there might be a way to make it more secure".

It’s clear, security is key. All. The. Time.

Writing A Change Notice

I got to go through the full "process" from start to finish and write the change notice that would be sent out to our customers. The lesson I learned from this is to be concise, and I'm happy that I was able to draw upon experience from wearing my CSR hat.

I remembered that our customers (and especially us as support staff) only really care about how a change will affect the majority of our use cases, and that most of the time, simply explaining the main effects goes a long way toward being able to understand and diagnose any related issues.

One good takeaway I got from Hugh, was that as verbose as I want to be, I should make sure that my change notice is concise because “as much as people will (or most likely won’t) read your full change notice when you send it out, they will definitely be reading it when shit breaks, and that’s when you want to make sure you have the most important details, and only the most important details”.

He also made a good point that while pretty formatting is nice and good, one should always be prepared for the possibility that written documentation might need to be rendered in a CLI.

Speaking of fire drills, I had the unique opportunity to participate in one and "break" one of our internal environments. I learned that the difference between "Red Team/Blue Team" exercises and "Fire Drills" is that the former frames the scenario as malicious intent (a bad actor) with the goal of compromising or penetrating the system, whereas the latter is a more common outage scenario meant to test the ability to detect and remediate. One of the more notable experiences I remember is breaking our environment and watching our mounted TV monitor to see the effects take shape, and then observing the process as our "on-call" team mates (Hugh and Gary) worked the problem in their live outage doc.

2. Agile At Work!

Again, in school we learned about Agile and how it is a good project management methodology for most projects. As a quick recap, it's considered good because it places an emphasis on visibility and knowing exactly what state you're in, by routinely assessing and re-prioritizing the three pillars of Time, Scope and Cost (of which you can realistically only fix two at a time).

What I’ve learned in the “real world” both from reading online and talking to my class-mates is that a lot of different people have different ideas of how Agile works in practice. Some people think it’s just scrums and stand-ups (where they’re really just sitting-down), and some people use it as a crutch to load engineers up with work on terrible timelines.

One thing I was very happy to see with SecEng was a very solid adherence to "agile" (at least, the best I've encountered). I noticed that their process consisted of a few pieces:

A planning session at the beginning of quarter to determine some high level goals.

  • This was usually a meeting with all members, including Chris, where we considered our long-term goals and tried to firm up some of those goals into things we could ship. One takeaway I got from this session was Tal's "filter diagram": instead of saying "We're going to do X, Y and Z by time A", it's more realistic to outline multiple different things that could be done by that time, which lines our planning up with the scattershot that is usually the execution of said planning (due to things that come up, such as re-prioritizations, external dependencies, etc.).

A weekly planning session

  • This is usually done on Monday mornings and involves taking a look at our Trello board to determine the state of things and determining what needs to be done in the week.
  • The first of two things I took away from this was (specifically from Hugh) routinely asking the question "How do we feel about this? Do we think we can get these things done?". The point being that what matters is getting better at estimating our actual performance, versus committing to unrealistic expectations.
  • The second takeaway was the value of checklists. Again, Hugh is a big proponent of really refining what it is that we're doing, and while implementation is up to the engineer, what better way to get down and dirty with defining the scope of what needs to be done than by using checklists!

Daily “stand-ups”.

  • These were usually done on Tuesdays-Thursdays either standing up at our pod with a talking ball, or via Cisco Teams video chat if any of us were out of the office.
  • The point of these meetings was to talk about what we did the previous day and what we were working on, and again, the real value came from being able to get an idea of where everyone was and provide some higher-level feedback to each other ("Oh, you should check out ____, I heard it might help with that"), versus trying to boast about what you had or hadn't done.

Weekly Retrospectives

  • Retros were usually done at the end of the week and sometimes at our local coffee shop (shout-out to Quantum, the unofficial coffee shop for IAM).
  • The point of retro was to discuss how the week went and really express how we felt, without blame. (I really enjoy the concept of "blameless retros".) I actually got to lead the retro during my full-time week!
  • In retros, there’s the usual “What went well, what did we not like, how can we make it better next week” but also what I found interesting was discussing experiments and gratitude.

  • Experiments are things related to "what's something we can try in the way that we work?", and members can bring up things to try in the upcoming week. One example from a retro was to do more pairing. By contrast, in the next week we decided not to try any new experiments and continue with what we were already doing. The point being that as a team we're constantly trying new (small) things to see if they help us become better (and stay fresh!)

  • Gratitude is one of my favourite parts of the retro, and it answers the question "What are we thankful for?" Usually, this is where members will talk about some internal or external collaboration that happened during the week and take the time to acknowledge and appreciate those efforts, because ultimately, it's the relationships that make this worth it!

Outside of the scheduling-related process, one thing I noticed about SecEng was their attention to the "little things" in Agile.

For example, Hugh is a very big proponent of using a talking ball/stick/marker to ensure that only one person is talking at a given time. Initially I found it a little silly, but over time I came to really appreciate the structure of one person talking at a time.

Or there’s actually standing up for stand-up, or staying on track/time with the meetings — while I don’t think any of us uttered the words “Lets take this offline” (at least, outside of making a joke), I did value the fact that I didn’t ever feel like my eyes were glazing over while listening to an overly verbose status update or derailed discussion. Meetings had a clear purpose and usually resulted in all the desired outcomes.

3. Handled an Outage!

While I went into my full-time week excited to write more code, we were immediately hit with an outage right after our planning session! You'll notice that I'm writing a whole point on this outage, and that's because it affected us greatly, seeing as we build and own the whole service we provide. This means that while we're product engineering, we're also the operations and support personnel when things hit the fan.

Being from support, I'm happy that I already had an idea of what goes on during an outage (determining an Incident Commander, coordinating messaging, etc.), but the interesting part was that I got to see the other side of the hat: specifically, what the operations and engineering personnel do in an outage.

Which I learned is a lot of communication, information parsing, analysis and exercising the ability to stay calm.

The last point is interesting to me because, as front-line support in an outage, your job is pretty simple: you're mainly grouping customer tickets together and providing outage messaging.

However as operations and engineering, you’re trying to:

  • Determine the scope of impact.
  • Communicate within and outside of the team.
  • Visualize the problem.
  • Test to verify that it really is the problem.
  • And finally, solve the problem.

It’s like playing 4D chess while the table’s on fire and your Apple Watch won’t stop notifying you that the smoke alarm is going off.

One take-away I got from this experience was watching Hugh essentially step up as the Incident Commander (IC) and co-ordinate with the rest of the team to assign checks/roles to figure out what was going on. Coming from a military background, I have a high sense of respect for teams that are able to organize quickly in a stressful situation and I felt that IAM did just that. I also enjoyed watching over Hugh’s shoulder as his fingers danced across his keyboard spinning up multiple terminals to get into resources — though the best part was that he was constantly communicating with us to explain what he was looking for and seeing. It was kind of like Mission Control.

I’m happy that even though I couldn’t examine the infrastructure or retrieve logs, I was able to make myself useful by starting and updating a timestamped log of events, which ended up coming in handy later on.

Thankfully, within a few hours we’d handled and rectified the outage, but our work was not over. The real learning and work came within the next 48 hours, where we all worked on documenting what happened and why, in order to present it in a retro to specific stakeholders (and any other interested engineers) on Wednesday. The team prioritized the post-mortem doc, and I did not envy them the task of having to present our failure to other people and have it picked apart. Nevertheless, I watched them piece together the document and expertly craft the tale describing the series of unfortunate events.

Fast forward to a bright and early 0800 WebEx on Wednesday morning, when it was time for the outage retro with our team members, assorted managers and even engineers from other teams. At first I was curious why other engineers would want to be present, but I understand now that it comes from the desire to learn from others.

Despite what I feared was going to happen, I thought that it went very well. Hugh took everyone through the doc, explaining what happened while pausing for questions and discussion from everyone in attendance. Two things that I really like about our outage retro process are the concepts of Blameless Retros and the "5 Whys". The first means that the emphasis is on talking about what really happened without fear of retribution; the second is about getting to the core of our assumptions, problems and actions. I feel that at the end of that retro, all parties came out with a better understanding of what happened, and most importantly, ideas on how to ensure that it doesn’t happen again.

One takeaway from this experience was a quote that Hugh shared in the retro:

“Good judgement comes from experience, and experience comes from bad judgement” -Rita Mae Brown

What I’d Do Differently Next Time

  1. Schedule a 1–1 "discovery" session with the group/group members early on, to get a 10,000-foot view of the different moving pieces. I found that I could have used my time in the initial weeks a lot better by asking questions and getting the information from the builders and maintainers, instead of mentally piecing together the whole puzzle by reading the code and documents.
  2. Ask more questions without hesitation in my future tours. Similar to the above, I feel that I wasted a lot of time in the beginning because I was scared I’d be asking a lot of dumb questions or getting in the way, but as they constantly encouraged me: "It’s better to spend 5 minutes asking a question and figuring out your next steps than an hour trying to wade through things on your own".

Final Takeaways

Overall I had a great time on my Tour of Duty and feel that I got to learn and grow both professionally and personally.

Professionally, my technical ability and scope of knowledge was definitely levelled up.

Tooling

Before this tour, I had never touched GoLang, but I ended it having written two fixes that touched a back-end system and a CLI (and their associated tests too!). Similarly, I’d only interacted with single containers in my previous experience with Quadra and Docker, but I became confident with writing Dockerfiles and even orchestrating containers through tools like Kubernetes.

Architecture

I also got to see how a micro-service architecture comes together, along with its challenges (Hyrum’s Law, adherence to service contracts, and sometimes highly dependent services) and benefits (separation of concerns, domain expertise and more modularity).

Process

I got to follow a well-defined (yet nimble) process for writing code that focused on security, verification and the use of multiple environments. It is a far cry from my "push to master/prod" days, and after seeing how well (and sometimes not so well) a CI/CD pipeline can solve a lot of "easy" problems, I’m definitely going to keep working this way in the future.

Operations

One of my favourite parts of working for CIE is the fact that most of the teams are multi-disciplinary, and in that vein, I am very fortunate to have had the chance to work with Hugh, who brought a very strong operations background to the team. I learned loads from being able to look over his shoulder during multiple pairing sessions and especially the outage. Specifically, I was very fond of the workshop he ran for us in our Thursday "team training" session on "some good things to keep in mind during an outage (from my experience dealing with multiple outages)", where he drilled it down to the following points:

  • What do you think is happening?
  • Why do you think that is happening?
  • How do you know that it’s telling you the truth? What is your monitoring / data source’s point of view?
  • Can you test your assumptions?
  • Can you disprove your assumptions?

While these were things to keep in mind during an outage, I can see how they are also good to consider during normal operations, to ensure that my picture and understanding of a system has some strength to it.

Personally, I learned that it’s okay to not be the smartest person in the room as long as you’re willing to work hard, and above all, help out the team. Impostor syndrome is something I definitely struggle with, but they did a very good job of making me feel welcome and safe to learn.

Finally, I think my most valuable lessons are regarding the team in general.

It’s very apparent that this is a team that places importance on learning, teaching, and doing things “right” as opposed to just getting things done fast.

  1. Learning and teaching are closely related, and continuously striving to learn new things and then teach them to each other is something that seems to be baked into every facet of the team. I remember that what drew me to the team was hearing Hugh coach Adriano on various things, but I was happy to see that it was just as much Adriano teaching him, and the rest of us, the things that he learned too. In this vein, pairing and spiking on cards was a common occurrence and very much encouraged, and it especially came together nicely in the Thursday afternoon deep-dive "Team Training" sessions. It was a nice tie-in to my also learning about the Feynman technique separately in the same week, and realizing that it was constantly put into use at IAM.
  2. Going hand-in-hand with learning and teaching is creating an environment that values constant learning through questions and criticism. Even as a dude from support who was only on a tour of duty, I was constantly encouraged to ask questions and challenge the status quo.

“But why are we using X?”

“Could we not do Y instead?”

These are not questions that I would’ve thought were “my place” to ask, but I quickly learned the value of the statement:

“If your ideas can’t stand a challenge in explanation, how will it ever fare in production?”

  3. I remember in my first week being curious: "Wait… so what happens if this card doesn’t get done this week?" The response was: "Well, we just make sure it gets done next week. If we didn’t get it done, that means we were probably focusing on some other things that needed to be done, or that we need more time to do it right". This statement alone carries a lot of weight and shows that the value is placed on a strong and stable service, as opposed to a strict focus on deadlines.

Lastly, what I enjoyed most about this team was the super positive and supportive atmosphere towards each other and the genuine excitement about what they are doing. I enjoy how every member brings something interesting to the team, both technically and personally, and how each member’s quirks are (teased and) celebrated. Whether it’s Tal’s beard oil, Gary’s ability to make sure he’s GOT THIS, Adriano’s love for avocados (and QUANTUM), or Hugh’s general love for EVERYTHING SPACE and radios — everyone’s got a "thing". The level of cohesion in this team is high, and the benefits of this show in their work.

Security is hard stuff, and it’s imperative that you get it right, always. But even when things sometimes don’t go according to plan (even during an outage), I’m happy to see that the people building the things that keep our platform secure are a bunch of folks who are intelligent, positive and, above all, passionate about what they’re doing.


Thilina Ratnayake

Site Reliability Engineer, DJ, and a huge fan of Dogs. Thoughts on tech, leadership and music.