In many organizations, simulation is a critical part of the product development process. The ability to predict and optimize a design virtually has been instrumental in reducing R&D spend and shortening project timelines, but engineers still face tradeoffs when simulating their systems. Simplifying models, choosing mesh types, and configuring hardware for simulation software can all speed up or slow down the simulation workflow. In this blog post, we’ll walk through some of these choices and how they can impact the simulation outcome.
Defeaturing: How Much Is Too Much, and How Little Is Too Little?
Any simulation engineer will tell you the CAD cleanup process can be time-intensive, even when the CAD visually appears to be “clean” (i.e., without a lot of complexity or a ton of surface errors). However, when it comes to model simplification, or "defeaturing," simulation engineers can struggle to decide which details to remove and which are important to keep when preparing the CAD model for meshing. Taking a figurative step back and looking at the overall problem you are trying to simulate helps answer these questions, as what features stay or go depends on what answer you are seeking.
If you are looking for a high degree of accuracy in one area of the model -- say, to determine skin temperature on an engine manifold -- defeaturing that component could very well impact the accuracy of the solution. But what about the rest of your model? Do you need a high degree of fidelity on a car roof if you are doing a shock and vibration analysis on the suspension? Not likely. When you aren’t sure, a mesh sensitivity study can help you gauge where to best “spend” your accuracy and how it impacts your simulation’s result.
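The bookkeeping behind a mesh sensitivity study is simple: refine the mesh, re-run, and watch how much the result of interest moves. Here is a minimal sketch of that check as a hypothetical helper (not part of any solver's API), using peak manifold temperature as the monitored quantity and invented element counts and values for illustration:

```python
# Hypothetical mesh sensitivity check: compare a scalar result of
# interest (e.g., peak skin temperature) across successively refined
# meshes, and flag convergence once the relative change between the
# last two refinements falls below a tolerance.

def mesh_converged(results, tol=0.01):
    """results: list of (element_count, value) pairs, coarsest mesh
    first. Returns (converged, list of relative changes)."""
    changes = [abs(curr - prev) / abs(prev)
               for (_, prev), (_, curr) in zip(results, results[1:])]
    converged = bool(changes) and changes[-1] < tol
    return converged, changes

# Illustrative data: peak manifold temperature (K) at three mesh levels
runs = [(50_000, 412.0), (200_000, 405.5), (800_000, 405.1)]
ok, deltas = mesh_converged(runs, tol=0.01)
```

In this made-up example the first refinement moves the answer by more than 1% while the second barely moves it, suggesting the mid-level mesh was already adequate for this quantity -- which is exactly the kind of evidence that tells you where extra mesh density is wasted.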
So, what is the impact on the simulation? Unfortunately, the answer is “it varies.” Defeaturing is a necessary evil in simulation, and throwing more horsepower at a simulation with a high-powered workstation to avoid CAD cleanup might not be an available or even prudent option. Errors accumulate throughout the CAD workflow and will always require some human intervention, because they stack much like tolerances do in geometric dimensioning and tolerancing (GD&T). For example, each design engineer will create a design differently, each CAD package has its own proprietary native file formats that either preserve or hinder fidelity, and each simulation CAD prep tool will load and interpret those CAD files slightly differently depending on how it reads the file. Each of those steps creates a potential “error” that needs to be addressed, whether a missing face, a warped surface, etc. With all the discussion around AI these days, I feel that integrating AI into CAD cleanup for simulation would be a logical next step in the coming years.
Mesh Elements: What Is the Best Option?
Again, selecting the right mesh type depends on your simulation. There is no one-size-fits-all answer, but some mesh types are certainly better places to start. For Computational Fluid Dynamics (CFD), the Fluent Mosaic mesh is a great option that blends hex-core and polyhedral elements, offering a balance between mesh count and fidelity. Hex-core elements can adequately represent bulk flow areas in your model while the polyhedral elements are excellent for the fine details. In Finite Element Analysis (FEA), tetrahedral elements mesh almost anything, while hexahedral elements are more efficient in terms of mesh count and solve speed.
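To see why that hex-vs-tet efficiency difference matters, a back-of-envelope count helps. The sketch below is illustrative only (not a mesher): it fills a box with a uniform hex mesh at a target edge length, then estimates the equivalent tet count using the common rule of thumb that each hex subdivides into roughly 5-6 tets. Real ratios depend heavily on the mesher and the geometry.

```python
# Illustrative element-count comparison: structured hex mesh of a box
# vs. an estimated tet mesh of the same box at the same edge length.
# Dimensions are integer millimetres so the division is exact.

def hex_count(lx_mm, ly_mm, lz_mm, edge_mm):
    """Element count for a uniform hex mesh of an lx x ly x lz box
    (edge_mm assumed to divide each side evenly)."""
    return (lx_mm // edge_mm) * (ly_mm // edge_mm) * (lz_mm // edge_mm)

def tet_count_estimate(lx_mm, ly_mm, lz_mm, edge_mm, tets_per_hex=6):
    """Rough tet count for the same box, using an assumed
    tets-per-hex subdivision ratio."""
    return hex_count(lx_mm, ly_mm, lz_mm, edge_mm) * tets_per_hex

hexes = hex_count(100, 100, 100, 5)          # 20 x 20 x 20 = 8,000
tets = tet_count_estimate(100, 100, 100, 5)  # ~48,000 at 6 tets/hex
```

Even at this toy scale, the tet mesh carries several times the element count for the same resolution -- which is where the solve-speed advantage of hex (and hex-core) meshes comes from.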
But what about those new default mesh settings in simulation tools? Simulation software is more accessible today than ever before, especially through CAD-embedded options that offer the convenience of one-click meshing. While these default mesh types and settings can be a good starting point, blindly trusting them isn’t ideal. Ask yourself whether the mesh makes sense for the results you are expecting from your simulation. Those in the simulation community like to say, “meshing is an art.” I feel a more accurate statement is that meshing takes practice: the more you do it and see the associated impacts on your results, the faster you’ll build that meshing intuition.
Simplified Physics: Balancing Speed and Precision
Deciding when to simplify the physics being used in simulation depends on the stage of product development and your desired level of accuracy. During the concepting stage, quick-and-dirty simulations can provide valuable insights for comparing design alternatives. Ansys Discovery does a great job of addressing this part of the development cycle. With a simplified graphical user interface (GUI), most design engineers can learn the tool quickly and can iterate through several concept designs before ever having to kick off tooling.
However, when the product is being prepared for physical testing that leads to certification and product launch, a more robust and holistic simulation is required. Cutting corners this late into the development cycle carries tremendous risk -- as it can lead to launch delays, retesting, redesigning, and even retooling. The key is to match simulation fidelity to the specific goals of the analysis: a quick design comparison, a full-blown validation effort, or somewhere in between.
Hardware Configuration: It’s Not Just About Horsepower
Throwing money at an extremely overpowered workstation is one way to cover all the different kinds of physics you might simulate, but that can get prohibitively expensive. Configuring hardware for optimal simulation performance comes down to what type of physics you want to simulate. Most Finite Element Analysis (FEA) simulations are very RAM-intensive, so it’s best to spec those machines with a memory bias. Computational Fluid Dynamics (CFD) simulations, on the other hand, are CPU- or GPU-intensive depending on the solver, and benefit greatly from high-performing CPUs or GPUs.
Regardless of your physics, it’s always best to reach out to your simulation provider, as most will have hardware recommendations that will unlock the most potential for the physics solver you are using. Taking this one step further, you can utilize High-Performance Computing (HPC) packs from Ansys. Leveraging parallel processing, HPC packs allow simulations to run faster by distributing the math of the simulation across multiple CPU cores simultaneously. It’s a great way to boost your simulation productivity when dealing with complex physics or shortened deadlines.
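It's worth knowing why adding cores gives diminishing returns: the speedup from parallel processing is bounded by whatever fraction of the job stays serial, which Amdahl's law makes concrete. A small sketch, assuming a hypothetical 5% serial fraction:

```python
# Amdahl's law: ideal speedup on N cores when a fraction `serial`
# of the work cannot be parallelized. Shows why a 32-core run is
# nowhere near 32x faster if even 5% of the solve is serial.

def amdahl_speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = amdahl_speedup(32, 0.05)  # roughly 12.5x, not 32x
```

Real HPC scaling also depends on interconnect, memory bandwidth, and how well the solver decomposes the problem, but the law sets the ceiling -- and it is why pairing more cores with a solver and model that actually parallelize well matters as much as the core count itself.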
Conclusion: Practice Makes Perfect
Whether it’s hardware selection, modeling, meshing, or solving processes, the decisions made during each of these steps to balance efficiency and accuracy can hinder or augment your simulation workflow and associated results. Each simulation is an opportunity to build your expertise, leverage the evolving capabilities of simulation tools, and drive innovation in product development through simulation.
If you’re looking for guidance on how to work through CAD cleanup or are looking for training, the Rand Simulation engineering team is here to help you. As an Ansys Elite Channel Partner, our team offers end-to-end consulting services for Ansys simulation in CFD, FEA, electronics, and optics and photonics. Contact us to get started with a free assessment.
About the Author: Krystian Link