Speed has never been so vital to our clients. The COVID-19 crisis greatly disrupted their work, priorities, and execution on strategy. In a recent study published by HfS (HfS Webinar: Unraveling our New State of Everything), 55% of the 630 enterprise clients interviewed expect changes in their business priorities. Of those enterprises, 28% are identifying emerging opportunities and shifting their investment focus. In this very uncertain time, I find myself looking to that 28% and appreciating their optimism and willingness to move.
But these 28% — how nimble are they once they decide to move? Can they pivot? Do they have the infrastructure, data, talent, processes, and culture needed to make this shift quickly and to fully capitalize on the opportunities they have identified? In the IBM Garage, we work alongside our clients to get them to production-ready Minimum Viable Products (MVPs) in 4–8 weeks. One of the secrets to that speed is having the right blueprints and playbooks. For example, UI/UX designers who already have a Design Language System can ensure the final product meets marketing guidelines without waiting for extensive sign-offs. Data scientists with a common framework for developing and publishing AI and machine learning algorithms won't need to refactor their code once a software engineer gets their hands on it, because it will already be written to production-grade principles.
However, we see many data scientists start largely from scratch when developing a new MVP, pulling in open-source libraries as needed and perhaps working from code samples found on the internet. This approach has significant downsides for enterprise-grade AI/ML solutions — projects developed this way often fail to meet production requirements around testability, security, performance, and scalability. They also forgo development synergies across projects, which significantly slows time to value. Remember, Speed to Value is what we are after!
IBM Services has developed its own framework and accelerators for implementing both the workflow and the technology platforms required to enable production-grade machine learning — we call it AI at Scale. Our approach leans heavily on open-source principles and proven software development patterns. While it accounts for required software platform capabilities, AI at Scale is agnostic of tool stack (on-prem, cloud, open source, enterprise tools, etc.). It has been hardened "on the ground" through partnerships with clients across many markets and industries.
AI at Scale includes the implementation of project development blueprints, which enforce production standards in the development of solutions. Using development blueprints to build solutions provides significant benefits to the AI/ML workflow through:
- Use case templates — Pre-built data and model pipelines to support use cases like advanced classification and regression, recommendation engines, clustering, and others; users can take these templates as a starting point for building their own solutions
- Standardized AI/ML algorithms — An extendable library of automated model selection, feature selection, and feature transformation algorithms, curated over time with the latest techniques from within the organization, academia, and open-source communities
- Pipelines — Standardized, testable ML pipelines, such as model training and forecasting
- Opinionated project scaffolding — Automatically generated file scaffolding that enforces modern engineering standards
- Service adapters — Integrations with data and analytics services such as popular databases, managed Spark clusters, message queues, etc.
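To make the "standardized, testable pipelines" idea concrete, here is a minimal sketch of what such a blueprint might look like in practice. This is an illustrative example only — the class, step names, and toy "model" below are hypothetical and do not reflect IBM's actual AI at Scale APIs; the point is that each step is a small, named, individually testable unit composed into a reproducible whole.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Each step takes a context dict and returns an (updated) context dict,
# so steps can be unit-tested in isolation and composed freely.
Step = Callable[[Dict], Dict]

@dataclass
class Pipeline:
    """A blueprint pipeline: an ordered list of named, testable steps."""
    steps: List[Tuple[str, Step]] = field(default_factory=list)

    def add(self, name: str, step: Step) -> "Pipeline":
        self.steps.append((name, step))
        return self

    def run(self, context: Dict) -> Dict:
        for _name, step in self.steps:
            context = step(context)
        return context

# Illustrative "model train and forecast" steps (hypothetical)
def load_data(ctx: Dict) -> Dict:
    ctx["data"] = [1.0, 2.0, 3.0, 4.0]  # stand-in for a real data adapter
    return ctx

def train(ctx: Dict) -> Dict:
    # Toy "model": forecast the mean of the training data
    ctx["model"] = sum(ctx["data"]) / len(ctx["data"])
    return ctx

def forecast(ctx: Dict) -> Dict:
    ctx["forecast"] = ctx["model"]
    return ctx

pipeline = (
    Pipeline()
    .add("load", load_data)
    .add("train", train)
    .add("forecast", forecast)
)
result = pipeline.run({})
print(result["forecast"])  # 2.5
```

Because each step is a plain function with a uniform signature, swapping the toy steps for real data adapters and model code changes nothing about how the pipeline is assembled, tested, or run — which is what lets new projects start from the template instead of from scratch.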
When our IBM Garage clients don't have their own AI/ML framework, we happily share ours. The more automation and reusability our clients have, the faster they can deliver on those emerging opportunities and realize Speed to Value.