A-to-B Comparison Simulations: Efficiency-Driven Product Development
You have been tasked with developing a new product. It is a clean-sheet design with a reasonably large budget and enough resources to launch a robust, best-in-class product. You come up with several concept designs that seem to meet all the relevant design criteria, but which of them offer the greatest chance of success when you present them to engineering leadership? Which have the best chance of passing physical testing and making it to store shelves? You could always prototype them, but is it realistic (or prudent) to develop four or more concepts concurrently?
In some cases, engineers may not have the product, design, or validation experience to instinctively narrow down the concept designs they’ve come up with. On top of this, project budgets (and engineering project leadership) rarely allow multiple design alternatives to be vetted during the design phase of product development (typically referred to as “parallel path” designs) to determine which is best. Oftentimes, engineers must make their best guess up front with little to no background data to support it. This is where simulation, particularly A-to-B comparisons, comes into play.
Leveraging What You Know
At the front end of product development (i.e., the concepting stage), an engineer’s main focus is selecting the concept that has the best chance of meeting the design criteria. Ensuring that a given prototype performs to a given test criterion (physical testing) comes later in the process, and is what more intensive simulation and physical testing are for. As such, assessing the “best” concept needs to remain simple in nature: the goal is to get data. But what happens if you don’t have the exact test conditions or the final material properties? This is why A-to-B comparisons are so powerful: they don’t require exact, production-accurate simulation inputs. As long as both designs are simulated the same way with the same boundary conditions, meaningful data can be extracted.
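To make that concrete, here is a minimal sketch of the idea in Python, using a closed-form cantilever bending model in place of a full FEA run. The load, length, and cross-sections below are hypothetical placeholders, not production inputs; the point is that because both concepts see identical conditions, the ratio between their results is meaningful even though the absolute numbers are not.

```python
# Minimal A-to-B comparison sketch: two hypothetical cantilever bracket
# concepts evaluated under IDENTICAL, nominal boundary conditions.
# The load and dimensions are placeholders, not production inputs.

LOAD_N = 500.0    # nominal tip load in newtons (assumed)
LENGTH_M = 0.20   # cantilever length in meters, same for both concepts

def max_bending_stress(width_m: float, height_m: float) -> float:
    """Peak bending stress at the root of a rectangular cantilever:
    sigma = M*c/I, with M = F*L, c = h/2, I = w*h^3/12."""
    moment = LOAD_N * LENGTH_M
    c = height_m / 2.0
    inertia = width_m * height_m ** 3 / 12.0
    return moment * c / inertia

# Concept A: wide, thin cross-section; Concept B: narrower but deeper.
sigma_a = max_bending_stress(width_m=0.040, height_m=0.010)
sigma_b = max_bending_stress(width_m=0.025, height_m=0.015)

print(f"Concept A peak stress: {sigma_a / 1e6:.1f} MPa")
print(f"Concept B peak stress: {sigma_b / 1e6:.1f} MPa")
print(f"A-to-B stress ratio:   {sigma_a / sigma_b:.2f}")
# Doubling LOAD_N doubles both stresses and leaves the ratio (and the
# ranking) unchanged -- the property A-to-B comparisons rely on.
```

Swap in different nominal inputs and the absolute stresses move, but the A-to-B ranking holds, which is exactly the property the comparison relies on.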
“But That Data Doesn’t Make Sense”
This is a common complaint design and program managers have about A-to-B comparisons. If stresses, temperatures, or pressures don’t appear to be in the same ballpark as what they’d expect in real life, it calls the validity of the simulation data into question in their minds. The point that is often forgotten is that the intent of an A-to-B comparison isn’t to produce accurate, precise data, but rather to show which design fundamentally performs better or worse than another for a given set of parameters. If Design 1 has 60% lower fatigue life than Design 2, it doesn’t matter what those stresses are. What matters is that you aren’t investing any more engineering resources into a design that isn’t going to work. That point bears emphasizing. Too often, engineering teams rely on an engineering judgment that two designs “should work” instead of leveraging simulation to give them the confidence to push the better design forward. Parallel path designs are expensive and time consuming. On top of that, there is a sizable risk that the two designs selected on “gut feeling” aren’t actually the best designs, resulting in crippling redesign, retesting, and retooling costs.
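To see why the ranking survives imprecise inputs, consider a hedged sketch using Basquin’s stress-life relation with placeholder material constants (generic, steel-like values for illustration, not validated properties):

```python
# Why a shared, imprecise fatigue curve still yields a valid A-to-B
# ranking. Basquin's relation: sigma_a = sigma_f * (2N)**b, inverted
# to N = 0.5 * (sigma_a / sigma_f)**(1/b). The constants below are
# generic, steel-like placeholders, NOT validated material properties.

SIGMA_F = 900.0e6   # fatigue strength coefficient, Pa (assumed)
B_EXP = -0.09       # fatigue strength exponent (assumed)

def cycles_to_failure(stress_amplitude_pa: float) -> float:
    """Invert Basquin's relation for cycles to failure."""
    return 0.5 * (stress_amplitude_pa / SIGMA_F) ** (1.0 / B_EXP)

# Peak alternating stresses taken from the same comparative simulation
# (hypothetical values for illustration).
n1 = cycles_to_failure(163e6)   # Design 1
n2 = cycles_to_failure(150e6)   # Design 2

print(f"Design 1 life: {n1:.2e} cycles")
print(f"Design 2 life: {n2:.2e} cycles")
print(f"Design 1 reaches {n1 / n2:.0%} of Design 2's life")
# SIGMA_F appears in both calculations, so any error in it shifts both
# lives together; the ratio depends only on (163/150)**(1/b). A ~9%
# stress difference here costs roughly 60% of the fatigue life.
```

Because the shared material constants cancel out of the ratio, even a rough stress-life curve is enough to show that a modest stress difference translates into roughly 60% lower fatigue life for Design 1, and that ratio, not the absolute cycle count, is the decision-driving number.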
So When Should I Use A-to-B Comparisons?
The best window of opportunity for leveraging A-to-B comparisons is during the concepting and design phase of the product development cycle. If prototypes haven’t been made, prototype tooling hasn’t been discussed, or there are multiple designs you are proposing, it’s the right time to leverage some quick simulations to refocus the direction of your project. The time you invest in these simulations will pay dividends down the line when you are focusing on one design going to market instead of two or more.