AWS re:Invent 2018 — Day 1 — What did I learn?

Tony Zoght
Solace PubSub+
Nov 27, 2018

Day 1 was an ultramarathon

A lot to take in, and calling it “overwhelming” is an understatement.

re:Invent spans 7 Las Vegas mega-hotels, with over 40K attendees and 2,545 sessions and demos.

To tackle this, you need a strategy; you cannot just go with the flow.

So here’s my strategy:

  1. Hear from the attendees, and gauge where they are on their Cloud journey, to better understand how Solace could help them. “Straight from the horse’s mouth”. That’s possible since Solace is also a sponsor and we have a booth at the Expo.
  2. Try to squeeze in as many sessions as I can (the ones I’m interested in) to hear what AWS is selling now.

Given the sheer volume of sessions and the fact that I wanted to stay in close proximity to the Solace booth, I’ve been succeeding on 1 and failing on 2. Tomorrow is another day.

So what did I learn from the re:Invent attendees?

I had the chance to engage with over 60 attendees. When I had the right audience, I went a bit beyond the Solace pitch (that was great practice) and simply asked them:

“Where are you now on the journey? What’s not going well? What’s going well?”

And in some cases I got a lot of info; the lead scanner was on fire :-)

So this is what I gathered from day 1:

  1. The majority knew the concept of “event-driven” and wanted to go there.
  2. 60% on-premises (OpenStack??), 20% hybrid, 10% in public clouds
  3. Data movement is one of their pain points: security, visibility, auditability
  4. Everybody knew what IoT is, but only a small percentage are building an IoT solution
  5. Less than 1 year of experience using AWS, and they’re all surprised by the cost :-)
  6. The move to the cloud was mandated by executives for the purpose of saving money and becoming agile (top down). I was surprised to see a handful of reluctant developers (reluctant to move to the cloud): Fear of Change is greater than Fear of Missing Out

So what did I learn from the Day 1 sessions?

ENT302: Optimizing Costs as You Scale on AWS (slideshare)

“The cloud offers a first-in-a-career opportunity to constantly optimize your costs as you grow and stay on the leading edge of innovation. By developing a cost-conscious culture and assigning the responsibility for efficiency to the appropriate business owners, you can deliver innovation efficiently and cost effectively. In this session, we share The Vanguard Group’s real-world experience of optimizing their costs, and we review a wide range of cost planning, monitoring, and optimization strategies”

What I learned:

  1. Everybody is initially surprised by the cost of running in a public cloud
  2. IT, Finance & Procurement are excluded from the buy decisions, and when they get to the party, they do so to cut costs or control security; not a good strategy
  3. Companies have no clear account strategy (AWS accounts, I mean). Possibilities are business unit, lifecycle, or project based. Pick one, any one :-)
  4. Tag, tag, tag… You need a tagging strategy (Environment, Project, Team, Application ID, and Cost Centre are all must-have tags). In addition, you can add Expiration Date, Automation Support (yes/no), and User
  5. AWS Cost Explorer is your friend, IF you have the right tags applied consistently
  6. RIs (Reserved Instances) and Spot Instances are your friends; don’t be shy to buy in bulk when you’re running at scale
  7. Interestingly, AWS was pushing 3rd-party tools for cost management. I was surprised not to see RightScale; I guess they don’t believe in Gartner reports
  8. Oh yeah, 2 more things:
  9. You need an enforcement strategy (this is why automation must support tags). Don’t be afraid to shut down systems with no tags, or ones that have overstayed their welcome (expiration date). The Email tag comes in handy: warn, warn, warn, then shut down :-)
  10. Where is bandwidth cost in this picture? Nobody said a thing about it; it’s all compute and storage. What about bandwidth? (I followed up with Keith Jarrett, stay tuned for his response :-))
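The tagging and enforcement ideas above can be sketched as a simple policy check. This is my own illustrative sketch, not AWS tooling or anything the speakers showed: the tag names and the 7-day warning window are assumptions, and in practice you would feed it instance metadata pulled from the EC2 API.

```python
from datetime import date

# Tags every resource must carry; names are assumptions, not an AWS standard.
REQUIRED_TAGS = {"Environment", "Project", "Team", "ApplicationID", "CostCentre"}

def enforcement_action(tags: dict, today: date) -> str:
    """Decide what to do with a resource based on its tags.

    Returns "stop" for untagged or expired resources, "warn" when
    expiration is near, and "ok" otherwise.
    """
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        return "stop"  # no tags: shut it down
    expires = tags.get("ExpirationDate")
    if expires:
        expiry = date.fromisoformat(expires)
        if today > expiry:
            return "stop"  # overstayed its welcome
        if (expiry - today).days <= 7:
            return "warn"  # email the owner via the Email tag
    return "ok"
```

Run on a schedule (e.g. a daily Lambda), this is the warn-warn-warn-then-shutdown loop: resources missing required tags or past their expiration date get stopped, ones nearing expiry trigger an email.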

NET312: Another Day in the Life of a Cloud Network Engineer (slideshare coming soon)

“Making decisions today for tomorrow’s technology — from DNS to AWS Direct Connect, ELBs to ENIs, VPCs to VPNs, the Cloud Network Engineering team at Netflix are resident subject matter experts for a myriad of AWS resources. Learn how a cross-functional team automates and manages an infrastructure that services over 125 million customers while evaluating new features that enable us to continue to grow through our next 100 million customers and beyond.”

What I learned:

  1. Huge estate on AWS (no surprise)
  2. It’s all based on Globally Unique IP Space and it’s now extended to containers
  3. They use Titus, not K8S
  4. VPC Peering everywhere
  5. They have a unique and sophisticated DNS resolution strategy (blew my mind — I can’t describe it in a few lines, but the video and slideshare will explain it)
  6. It’s all based on Eureka and Zuul (REST based discovery, not event-driven)
  7. Oh yeah, when faced with an issue (missing feature from a service in AWS), they build their own
  8. They run their own MPLS Internet Backbone for reachability to their AWS estate from their enterprise network
  9. They fail a whole region during testing; they have 3 regions (us-east, us-west, and Europe). My surprise: some customers on the East coast are served by the Europe region
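That last point can be illustrated with a toy traffic-shift. The region names match the talk, but the round-robin routing logic is my own sketch, not Netflix’s actual failover mechanism:

```python
def evacuate_region(assignments: dict, failed: str, regions: list) -> dict:
    """Reassign customers from a failed region to the surviving ones,
    round-robin. A toy model of a regional failover exercise."""
    survivors = [r for r in regions if r != failed]
    shifted = {}
    i = 0
    for customer, region in assignments.items():
        if region == failed:
            shifted[customer] = survivors[i % len(survivors)]
            i += 1
        else:
            shifted[customer] = region
    return shifted

# East-coast customers normally land on us-east; when us-east is failed in a
# test, they get spread over us-west and Europe, which is how an East-coast
# customer ends up served from the Europe region.
REGIONS = ["us-east", "us-west", "europe"]
```

Nothing here depends on geography; once the home region is evacuated, any survivor can pick up the traffic, however far away it is.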

Tech Leader, Architect, Builder and aspiring Data Scientist