CTO Corner #3: Test Your Vision on Willing Subjects
I work as a software developer at Jebbit, and I’ll be interviewing new engineering department heads for the CTO Corner. Doug Fields is the VP of Technology at SERMO, a technology company that delivers products to social network members, healthcare panelists, and corporate clients through the SERMO Platform. In this week’s edition of the CTO Corner, I talked with Doug about the challenges he has faced, the positive impact he has made as a VP, the importance of listening to and educating clients, and how SERMO evaluates new technologies and platforms.
How did you get into engineering?
When I was in maybe second or third grade, my father convinced the school to let me take a computer class with the “upper schoolers.” The school had a PDP-8 with a bunch of enormous monochrome terminals, and I distinctly remember one thing: writing a BASIC program that printed a US flag. The summer I was 9, he sent me to a week-long sleep-away camp where I learned to program on Apple ][s and TI-99/4As. Then on my 10th birthday, he got me an Apple ][+ with a Z80 co-processor running CP/M, along with Apple Logo, VisiCalc, and UCSD Pascal, plus quite a few books, not to mention Wizardry, Zork, and other games. I fell in love with Pascal and wrote many Infocom-style games over the course of the 80s in both Pascal and Apple BASIC.
Things took off from there. At nearly every gifting opportunity I was given something computer-related. The high point was a PS/2 Model 80 with twin 110 MB ESDI disks and a ton of RAM (which I think may have been 8 or 12 megabytes), along with SCO Xenix/386 (later SCO Unix). My father installed four phone lines and I ran an open-access Unix computer, BBS, and UUCP node. Among other things, I wrote a multi-user real-time interactive chat program similar to IRC but with character-by-character display, ANSI coloring, etc., all built on System V IPC. Later, in college, he got me a Gateway ‘486 on which I ran the earliest versions of Linux (0.09 sticks in my memory) and contributed some keyboard/console driver code to that project.
I had my first programming job at age 14, when I built a dedicated BBS for a client to distribute daily marine bunker fuel prices. This ran on an original IBM PC with two whole 5.25” floppy drives. I eventually studied Computer Science at college in a very mathematical and algorithmic manner, but didn’t really learn about software engineering as an engineering discipline unto itself until a Microsoft internship after my sophomore year of college. I learned more after graduation, doing distributed derivatives calculation systems for Lehman Brothers. So, I was a hacker and programmer long before I was an engineer.
The bottom line is that I got into engineering because of my father. Years after graduation, with the benefit of hindsight, I asked him why he got me computers for almost all my gifts. He said he read something in the late 70s or early 80s in the Wall Street Journal about how personal computing was going to be the next big revolution, and he played out that vision on a willing test subject. His foresight provided me with a natural career focus that has served me well.
What are the challenges of being a VP at a tech company?
I’ve been head of technology at companies at several different stages at this point in my career, and each stage has its own challenges. SERMO is a privately held company formed from the merger of the original SERMO social network for doctors and a successful healthcare-focused market research company. It’s now a technology company that delivers more and more products to social network members, healthcare panelists, and corporate clients through the “SERMO Platform.” So, the challenges here at SERMO are much different than those faced by a well-funded (or self-funded) startup.
One of the challenges SERMO faces stems from that merger. We have two different technology stacks operating various parts of the company. Furthermore, the stacks were integrated largely at the database layer rather than through service APIs, so dependencies tend to be relatively tightly coupled. Adding new functionality often requires new services to break these dependencies for our future’s sake, so implementation can be more complex than it would be at a green-field startup.
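As a hypothetical sketch of the kind of decoupling described above: rather than one stack querying the other stack’s tables directly, callers depend on a narrow service interface whose backing store can be swapped out later. The class and method names here are illustrative, not SERMO’s actual code:

```ruby
# Hypothetical service boundary: callers never touch the other stack's
# database tables directly; they go through this narrow interface instead.
class PanelistProfileService
  def initialize(adapter)
    # The adapter could be an HTTP client for a real service; here we
    # inject a simple in-memory store so the sketch is self-contained.
    @adapter = adapter
  end

  def profile_for(panelist_id)
    @adapter.fetch(panelist_id)
  end
end

# Stand-in for the real backing store.
class InMemoryAdapter
  def initialize(records)
    @records = records
  end

  def fetch(id)
    @records[id]
  end
end

service = PanelistProfileService.new(
  InMemoryAdapter.new({ 42 => { name: "Dr. A" } })
)
profile = service.profile_for(42)
```

Swapping the adapter (say, from a shared-database reader to an HTTP client) then requires no changes in the callers, which is what gradually breaks the tight database-layer coupling.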
We also have a huge amount of data, as you might imagine from a market research operation spanning a decade and a half and a social network going back almost a decade. One of our present challenges is making that data more readily available and actionable and mining it to guide our decision making.
What major changes have you made at SERMO that have had a positive impact?
There are three major changes that have come about since I joined SERMO. First, I have been fully aware of and fully aligned with the CEO’s vision for the future of the company since I signed on, and I communicate that vision to the entire engineering team regularly. Second, I have attempted to enforce a limitation of scope across the board on what the engineering teams are endeavoring to do at any time, along with a clear set of priorities for the items within that limited scope. Finally, I have reorganized the engineering personnel throughout the company into five relatively small teams so they can tailor their focus to these restricted scopes.
How important is it to listen to your clients? When do you think you need to teach/show clients what they need vs. build what the clients want?
My client is the CEO: my job is to define and implement the product vision of said client. In some cases the vision transcends the perspective of our clients or even the visionary’s own lieutenants and colleagues, and so for new products, the enumerated needs of customers may not be the ideal guide.
Our new products take the vision of the CEO and turn it into a shipping, production product. He is heavily involved in all aspects of these initial rollouts, and solicits input from all his leaders. Indeed, we also use our market research capabilities for our own purposes when necessary. That said, once the product finally hits the hands of the clients and members, we begin to receive feedback and quickly add relevant suggestions to our product roadmap.
For our SERMO Pages product release, we had a long but deliberately paced roadmap of enhancements planned. After the product hit the market, we had a flood of feedback from various client constituencies requesting features they needed in order to adopt the product and participate in our social network. Working with our engagement and marketing teams, we identified the most important features, and our CEO revised our roadmap to accelerate the deployment of a number of items so that these important clients could better leverage Pages. The CEO’s initial vision was spot on, but we had underestimated customer demand for follow-on functionality.
How does SERMO test new technologies and platforms? What is your approach? For example, how was this path taken with SERMO Pages?
In general, when we have a variety of ways of solving a problem we take some time to do a “research spike” and investigate in some depth several alternatives. Some examples recently were which front-end framework to use for a future product (e.g., Backbone, React, Angular 1/2, etc.), which pub/sub messaging infrastructure to use (e.g., Kafka, Kinesis, NSQ, etc.), which rich text editor to integrate, which SMTP relay service to use (e.g., SparkPost, SendGrid, etc.), and even which deployment mechanism to use. We try to build a small but realistic proof of concept using each technology and see how it performs.
In general, I try to limit the scope of these research spikes as engineers (including myself) love to learn and try new things, sometimes to the exclusion of doing the actual product work. One thing I took from business school years ago was a generalization of Andy Grove’s thinking: a good enough decision made in a timely fashion is often better than a perfect one made too late. Sometimes I’m a bit too forceful in this but I really abhor “analysis paralysis.”
We saw this directly in practice with SERMO Pages. We built a new Rails application from scratch using the latest best practices. We explored how best to deploy this new application in a scalable manner. In the end we chose AWS Elastic Beanstalk, although we did investigate several other options, such as Docker and custom deployments with CloudFormation. While I don’t love the vendor lock-in that Elastic Beanstalk entails, nor the slow pace at which Amazon makes new Ruby versions available, it has turned out to be a reasonably elegant solution for us, and we have used it as a model for our other applications under development.
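To make the deployment model concrete: Elastic Beanstalk environments are typically customized with small YAML files placed under `.ebextensions/` in the application source. The snippet below is a minimal, hypothetical sketch; the file name and option values are illustrative, not SERMO’s actual configuration:

```yaml
# .ebextensions/app.config — hypothetical example settings
option_settings:
  aws:elasticbeanstalk:application:environment:
    RAILS_ENV: production   # environment variable visible to the Rails app
  aws:autoscaling:asg:
    MinSize: 2              # keep at least two instances for availability
    MaxSize: 8              # cap scale-out
```

Keeping this configuration in version control alongside the application is part of what makes Elastic Beanstalk reusable as a template across multiple applications.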
Having spent the past 30 years being an engineer, where do you think tech is heading?
My creativity lies in how to solve specific problems rather than in deciding what problems we should strive to solve, so I’m not much of a prognosticator. (It also means I pair well with a visionary like our CEO.) That said, I think we will see a continued move away from object-oriented design and development. Momentum is building toward more functional ways of solving problems, which is something I strongly support. (I’ve been a Haskell acolyte for over two decades.) I think we’ll continue to move away from traditional multithreading to more easily understood and more easily reasoned-about forms of high performance computing. Despite this, I think the most effective high performance engineers will still need a solid understanding of what’s going on at the lowest levels of computing.
I think Rails has also shown the benefit of a focus on engineer productivity. We will continue to see more frameworks, and even language designs, that amplify the ability of engineers to solve problems in certain well-trod domains and that simplify concurrency. Some of these, like Elixir/Phoenix, will try to address problems of the past and provide a solid framework for moving forward.
Finally, and entirely obviously, I think we will see more and more use of the cloud and outsourced infrastructure. The savings in manpower and workload are just too great. Despite this, however, there will always be a need to pick and choose how to deploy things at the fringes of the performance envelope. For a real-world example, it may be possible to deploy a non-cloud database for a $70,000 one-time capital cost to a traditional data center with vastly better performance than even a $25,000/month cloud database.