How We Are Fixing Our Computers (Updated Aug 2023)

--

By Colt Whittall, Chief Experience Officer, Department of the Air Force. January 2023 (updated Aug 2023)

I get a lot of requests to speak in public about our approach to the User Experience of enterprise IT and software. I can’t speak to everyone, so I wanted to provide a brief overview of what we are doing, some of the results we are seeing, and several lessons learned and other observations based on my experience. Everything that follows reflects my personal views, not necessarily those of the Department of the Air Force.

Background. DAF IT organizations, in my opinion, are focused on ensuring reliable connectivity and security. These are mission critical, obviously, but they are not sufficient. Much of what affects user experience, resilience and performance occurs at higher levels of the technology stack. The network links can be perfectly adequate, yet our IT can be practically inoperable when the endpoint security, encryption, domain controllers, Cloud Access Points and other services bog down. On older PCs, users can open Task Manager and plainly see applications fighting for system resources, often spiking CPU and disk input/output to 100% for minutes or even hours at a time. I have been known to regale our vendors and contractors with videos of this behavior. Our users are clearly telling us the situation is unsatisfactory, as anyone can confirm with a few quick searches of LinkedIn, Reddit, Facebook, Discord, etc.
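For the curious, this contention is easy to observe for yourself. Here is a minimal sketch in Python that samples system-wide CPU and disk activity the way Task Manager does; it assumes the third-party psutil package and is purely an illustration, not our monitoring tooling:

```python
# Minimal sketch: watch for the CPU/disk contention described above.
# Assumes the third-party psutil package; not our actual tooling.
import psutil

def sample_system_load(samples: int = 10, interval: float = 1.0) -> None:
    """Print system-wide CPU utilization and disk I/O rates once per interval."""
    last_io = psutil.disk_io_counters()
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
        io = psutil.disk_io_counters()
        read_mb_s = (io.read_bytes - last_io.read_bytes) / 1e6 / interval
        write_mb_s = (io.write_bytes - last_io.write_bytes) / 1e6 / interval
        last_io = io
        print(f"CPU {cpu:5.1f}%  disk read {read_mb_s:6.1f} MB/s  write {write_mb_s:6.1f} MB/s")

if __name__ == "__main__":
    sample_system_load()
```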

User Experience Strategy. I summarize our User Experience Strategy as follows: Treat Airmen as customers. Measure and track the level of service we are providing as they perform the mission. Then proactively manage our service levels from their perspective. This is also known as an “outside-in” approach. It measures the service level actually experienced by the user and then marshals the organization’s resources to deliver increasingly better service. I see this as the only way to improve User Experience in an organization as large and decentralized as the DAF. I start with the assumption that we at the Pentagon can’t centrally control the entire experience, end to end. But we can set expectations and then put our “chips” where we need to in order to get the required, measurable effect. In other words, we are looking for “outcomes.” One of my colleagues refers to me as the “Chief Outcomes Officer,” and he is not wrong.

Measuring User Experience. So, how exactly do we measure and manage IT User Experience? Starting in January 2020, we added three main tools to our UX arsenal. They are:

Digital Experience Monitoring. This tracks performance of all the software on the computer and sends the data to a central collector for analysis. As of this writing, the Digital Experience Monitoring agents are on about 5% of our computers on the unclassified network, with a plan to expand to the classified network.
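Conceptually, the agent-to-collector pattern looks something like the sketch below. The collector URL and payload fields are hypothetical placeholders I made up for illustration; real Digital Experience Monitoring products define their own schemas:

```python
# Toy illustration of the agent-to-collector pattern. The collector URL
# and payload fields are hypothetical, not an actual product's schema.
import json
import platform
import time
import urllib.request

COLLECTOR_URL = "https://collector.example.mil/metrics"  # hypothetical endpoint

def report_metric(app: str, metric: str, value_ms: float) -> None:
    """POST one application-performance measurement to the central collector."""
    payload = {
        "host": platform.node(),
        "timestamp": time.time(),
        "app": app,
        "metric": metric,
        "value_ms": value_ms,
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # collector acknowledges receipt

# Example: report how long Outlook took to complete a user-visible action.
report_metric("outlook", "activity_response_time", 840.0)
```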

Enterprise Performance Management System (EPMS). This tracks performance of the Wide Area Network and Base Area Network at all layers of the stack (layers 1–7 of the OSI model, for the networking nerds). It uses practical tests such as the time to upload a 10MB file or get a response from a critical PKI-related service operated by the Defense Information Systems Agency (DISA). EPMS is tracking data at 75+ bases, with plans to expand and add new features.
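The “practical test” idea is easy to illustrate. The sketch below times the two kinds of operations mentioned above; both URLs are hypothetical placeholders:

```python
# Sketch of EPMS-style practical tests: time a 10MB upload and a round
# trip to a critical service. Both URLs are hypothetical placeholders.
import time
import urllib.request

UPLOAD_URL = "https://speedtest.example.mil/upload"     # hypothetical
SERVICE_URL = "https://pki-service.example.mil/status"  # hypothetical

def timed_upload(url: str, size_bytes: int = 10 * 1024 * 1024) -> float:
    """Return seconds to POST `size_bytes` of data to `url`."""
    data = b"\x00" * size_bytes
    req = urllib.request.Request(url, data=data, method="POST")
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=60) as resp:
        resp.read()
    return time.perf_counter() - start

def timed_response(url: str) -> float:
    """Return seconds for a simple GET round trip to `url`."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

print(f"10MB upload: {timed_upload(UPLOAD_URL):.2f}s")
print(f"Service round trip: {timed_response(SERVICE_URL):.2f}s")
```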

AF IT Pulse. This tracks user perception of and feedback on IT service. Every week we send about 15,000 invitations to respond to a short survey. There are three core questions: 1) Where do you primarily work (telework vs. on base)? 2) How satisfied are you with Air Force or Space Force IT? 3) What is the primary reason for your response? There are also several optional questions, including one that tracks impact to the user’s productivity and one that can be compared against a commercial benchmark from a major research company. All responses are tied to demographics about the respondent, so we can analyze the results by location, career field, whether the user works on base or remotely, and so on.
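To give a feel for how responses become analyzable data, here is a toy version of a survey record tied to demographics, with a simple slice by work location. The field names and values are illustrative, not the actual survey schema:

```python
# Toy model of an AF IT Pulse response joined to respondent demographics.
# Field names and values are illustrative, not the actual survey schema.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PulseResponse:
    work_location: str   # core question 1: "telework" or "on base"
    satisfaction: int    # core question 2: 1 (very dissatisfied) .. 5 (very satisfied)
    primary_reason: str  # core question 3: free text
    base: str            # demographics tied to each response
    career_field: str

def mean_satisfaction_by(responses: list[PulseResponse], key: str) -> dict[str, float]:
    """Average satisfaction grouped by any demographic attribute."""
    groups: dict[str, list[int]] = defaultdict(list)
    for r in responses:
        groups[getattr(r, key)].append(r.satisfaction)
    return {k: sum(v) / len(v) for k, v in groups.items()}

responses = [
    PulseResponse("on base", 4, "email is faster lately", "Base A", "cyber"),
    PulseResponse("telework", 2, "VPN is slow", "Base B", "logistics"),
    PulseResponse("telework", 3, "fine most days", "Base A", "cyber"),
]
print(mean_satisfaction_by(responses, "work_location"))
# {'on base': 4.0, 'telework': 2.5}
```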

What has all of this data done for us?

Provides insight into the user experience “landscape.” We can now quantify and track the overall topology of user experience: where it’s good, where it’s bad, and roughly the causal relationships at work. For example, we now know the cohort of users that gets the best user experience and the cohort that gets the worst. We know how the experience compares for on-base vs. VPN, CONUS vs. OCONUS, by computer age and specifications, by distance to DISA interchange points, and much more.

Informs proactive management of service levels. When performance drops at a particular base, we can see it and respond. It usually means something in the network changed, whether initiated by the base, another agency, or a vendor. We can also see which bases perform below the norm and respond, usually with additional bandwidth and/or a tech refresh.
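One simple way to “see it and respond” is to flag any base whose daily metric drifts well outside its own recent baseline. The sketch below is a bare-bones illustration of that idea; the window, threshold, and numbers are invented:

```python
# Sketch of proactive detection: flag days where a base's metric falls
# well outside its own recent baseline. Window/threshold are invented.
from statistics import mean, stdev

def flag_regressions(daily_values: list[float], window: int = 14, sigma: float = 3.0):
    """Yield (day_index, value) where a value exceeds mean + sigma * stdev
    of the preceding `window` days (higher = worse, e.g. response time)."""
    for i in range(window, len(daily_values)):
        baseline = daily_values[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if daily_values[i] > mu + sigma * sd:
            yield i, daily_values[i]

# Example: Outlook response time (ms) at one base; the last day jumps
# after something in the network changed.
series = [800, 790, 810, 805, 795, 820, 800, 810, 790, 805,
          800, 815, 795, 805, 810, 800, 1400]
for day, value in flag_regressions(series):
    print(f"day {day}: {value} ms is outside the 14-day baseline")
```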

Enables results orientation. We can take action to improve performance and then observe the results. We can and do, for example, challenge our IT providers to deliver a specific measurable result such as “let’s reduce the Activity Response Time of Outlook by 50% at a particular location.” It’s a specific, actionable, measurable challenge that is impactful to the user and permits accountability.
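Holding ourselves and our providers to a challenge like that is straightforward once the metric is in hand. A minimal check, with invented numbers:

```python
# Minimal check of a measurable challenge: did Activity Response Time
# drop by at least 50% at this location? Numbers are invented.
def target_met(before_ms: float, after_ms: float, required_reduction: float = 0.50) -> bool:
    """True if `after_ms` reflects at least the required fractional reduction."""
    return (before_ms - after_ms) / before_ms >= required_reduction

print(target_met(before_ms=1600.0, after_ms=750.0))  # True: a ~53% reduction
```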

Informs our IT investment and budgeting conversations. The data is helping connect user experience to productivity and mission impact. The data tells a story that is compelling in discussions about how to maximize return on investment of IT spend.

Actions. Informed by data, we have been working to improve user experience across the DAF. Not everything has produced measurable results. But the impactful actions to date include:

Accelerating replacement of older PCs. Starting in 2021, we deployed about twice as many PCs as the year before. We increased that pace in 2022 and are deploying close to that many again in 2023. New PCs meeting our updated standards deliver a 2.5X improvement in performance, and IT issues impacting productivity are cut by 50%, on the same network with the same software image.

Implementing “Cloud Device Management,” which keeps the operating system and office automation software consistently up to date. This is associated with improved performance, fewer crashes, and fewer other issues.

Monitoring of Wide Area Network service performance at the base boundary. Our team has identified 100+ issues so far in 2023 that would have resulted in degraded performance or outages had they not been proactively addressed.

Improving performance of endpoint security, including updates to the endpoint security technology.

Increasing bandwidth at bases with slow performance.

Tech refresh of network devices at bases with slow performance.

Other ongoing IT improvements, some of which we must do anyway, include replacing the administrative policies known as Group Policy Objects with rules implemented as part of Cloud Device Management, getting other key enterprise software updated consistently, streamlining boot scripts where we can, and more.

Another area of focus is user communications. Our users need to be educated on many things, such as when to ask for a new PC, when to submit a ticket, how to avoid problems (do required restarts, manage inbox size, etc.) and what alternatives are available, such as DAF365 Anywhere and Desktop Anywhere. We push these updates through a newsletter and a list of top user tips, which can be found on the DAF CIO website: https://www.safcn.af.mil/.

Results So Far.

User satisfaction with enterprise IT is improving. In 2020 and 2021, dissatisfied users outnumbered satisfied users. In 2023 that’s reversed. Satisfied users outnumber dissatisfied users about two to one. That is extraordinary progress for that period of time.

Activity Response Time of Outlook, a key metric we follow, has improved significantly on the vast majority of bases, often by 50% or more.

The User Experience Index of Outlook, a metric that reflects the performance and stability of this key client software, has improved from 2.9 stars in Feb 2022 to 3.6 stars today.
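For readers who wonder how “performance and stability” roll up into stars: the sketch below shows one plausible way, with weights and scaling I invented purely for illustration. The actual index is computed by our monitoring tooling, not by this formula:

```python
# Purely illustrative: one way performance and stability could combine
# into a 1-5 star index. Weights and scaling here are invented; the real
# User Experience Index is computed by the monitoring tooling.
def star_index(response_time_ms: float, crashes_per_week: float) -> float:
    perf = max(0.0, min(1.0, (2000 - response_time_ms) / 2000))   # 1.0 = instant
    stability = max(0.0, min(1.0, 1.0 - crashes_per_week / 5.0))  # 1.0 = no crashes
    return round(1 + 4 * (0.6 * perf + 0.4 * stability), 1)       # map to 1..5 stars

print(star_index(response_time_ms=800, crashes_per_week=0.5))  # ~3.9 stars
```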

Operationalizing User Experience. To scale these efforts, we have transitioned them under Air Combat Command’s 38th Engineering Squadron. This move puts the tools and data closer to the operators of the network, helping them be more responsive to user needs and creating a more direct path to implementing solutions.

Observations and Lessons Learned. This experience has taught us several things, including…

User experience data facilitates alignment of our technology vendors and contractors. I find conversations with technology vendors to be far more effective when we are looking at the same metrics and figuring out together how best to drive improvement.

We often don’t get the performance benefits we expect. Sometimes an improvement just does not prove out in the real world. Sometimes, we find, a change we tested and approved did not get fully implemented at scale. If we were not looking for the results in the data, we would never know.

We should empower more of our people with user experience data. For example, some of the data we track would be useful to, and actionable by, communications squadrons. We are working on mechanisms to put that data directly in their hands and in those of other organizations.

We have proven that this data informed approach delivers results on NIPRNet. We must next apply this approach to SIPRNet and other networks.

Conclusion. We are making progress at “fixing our computers.” Our results are measurable. Our strategy is sound, but we are flexible and will adjust it as needed. To deliver mission value, information technology must perform. This is the path to deliver the performance our mission requires.

--

Colt Whittall, Founder BRAVO17, Fmr Air Force CXO

Founder/CEO of BRAVO17, which does system design, development and optimization for the federal government, with a focus on delivering superior UX outcomes.