Case Study: Using Generative Design for Microprocessor Product Development

Learn how Manceps helped a microchip developer improve the AI solutions used in their research, design, and development process.

Executive Summary

A Fortune 500 microchip design company engaged Manceps to optimize and improve the existing AI and Machine Learning solutions used in their research, design, and development process. By refining their models and developing a private ML platform architecture to support rapid prototyping and inference, the company was able to accelerate its product development cycle and streamline deployments of reliably trained, tuned, and validated models at scale to its manufacturing customers.

The Problem

Microchip design is exceedingly complex, with several handoffs between engineering departments along the way.

First, computer scientists use synthetic data to work up theoretical designs, which are then given to engineers who are charged with mapping these capabilities onto silicon layers thousands of times thinner than a human hair.

Layer by layer, a design is etched out, refined, and optimized. Once completed, the nano-scale topographical architecture, if scaled up, would compare to a detailed diagram of a large city.

Given the complexity of these designs, the laws of physics, and the ever-increasing pressure to shrink the distance between transistors in each layer, our client turned to Manceps to enhance their ML models, infrastructure, and training and serving architecture across several steps in this process.

Special Considerations


Security

Due to the proprietary nature of their work, our client was extremely concerned about security. All datasets and Machine Learning models needed to be carefully secured, which meant designing and deploying solutions in air-gapped data centers with strong encryption throughout.


Reliable Infrastructure

Given the complexity of the data we were working with, Manceps needed to design a robust, reliable infrastructure from which to serve these models.

At-the-edge Inference

Deployed models needed to deliver accurate results on new data, which can be difficult when that data isn't processed in the cloud. Edge ML serving solutions allowed the client to continue optimizing their models without exposing the organization to additional security risks.
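The client's actual serving stack and models are proprietary and not described in this case study. Purely as an illustrative sketch of the edge-inference pattern, the hypothetical class below wraps a locally stored model so that every prediction is computed on the device itself and no data ever leaves the machine; the weights, file name, and logistic model are all assumptions for the example.

```python
import json
import math
import tempfile
from pathlib import Path

# Hypothetical sketch only: the model form (a simple logistic scorer) and the
# weights file are invented for illustration, not taken from the case study.

class EdgeClassifier:
    """Runs inference entirely on the local device: weights are loaded from
    disk at startup, and no request or feature data leaves the machine."""

    def __init__(self, weights_path: Path):
        params = json.loads(weights_path.read_text())
        self.weights = params["weights"]
        self.bias = params["bias"]

    def predict_proba(self, features):
        # Logistic model: sigmoid(w . x + b), computed locally.
        z = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return 1.0 / (1.0 + math.exp(-z))

# The trained weights ship alongside the application, e.g. as a JSON file.
weights_file = Path(tempfile.mkdtemp()) / "model_weights.json"
weights_file.write_text(json.dumps({"weights": [0.8, -0.4], "bias": 0.1}))

model = EdgeClassifier(weights_file)
score = model.predict_proba([1.0, 2.0])  # inference happens on-device
```

Because the model artifact is just a file on the device, it can be updated through the same controlled release channel as the application binary, which is one way edge deployments avoid opening a network path to a central inference service.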

Our Solution

After a detailed design discovery effort, our team went to work on several ML-related tracks to support various stages of the client’s product development process.

ML Modeling and Application Prototyping

The first step was to review the Machine Learning applications used in the design process and make it easier for researchers to explore and test novel chip design ideas. We inspected their existing ML code for improvement opportunities, and we facilitated the design review process by streamlining the ability of their design software to compute and retrieve results.

Infrastructure Redesign and Optimization

In addition to enhancing the ML models themselves, we architected the IT infrastructure required to reliably deploy these systems. This meant redesigning their computation platform around composable, portable, highly scalable, industry-standard clusters.


The Results

Continued research into model architectures and data-feature extraction improved performance and accuracy while yielding unexpected insights into physical modeling. At the same time, the client accelerated their research cycle by using a single, effective platform for both hardware and software work.

This case study originally appeared on the Manceps website. Manceps makes it easy for enterprise organizations to deploy AI solutions at scale. Explore our other case studies or download our ebook.



Luke A. Renner

Director of Marketing for Cyngn. Cyngn makes it easy for companies to bring self-driving capabilities to the fleets they already manage.