• 03/03/2026
  • Report

The transparent die casting process — traceability and prediction through digitalization and AI

Anyone who wants to make the die casting process transparent needs reliable data. The Fraunhofer Institute for Mechanics of Materials IWM demonstrated what a digital twin can do in this regard at EUROGUSS 2026. There, the institute presented an AI-supported approach that systematically links material conditions, process parameters and component properties, thus enabling traceability and quality predictions.

Written by Editors EUROGUSS 365

Illustration: the die casting process. An integrated knowledge database of material and process data helps to meet the economic, technological, and ecological requirements for cast parts.

The digital die casting twin links information regarding the condition of materials to all sub-processes of die casting and creates a knowledge base for meeting economic, technological, and ecological requirements. To this end, knowledge graphs for various process steps were created and networked at Fraunhofer IWM using ontology-based semantic structures.

In an interview, Dr. Elena García Trelles, Group Lifetime Concepts and Thermomechanics, and Dr. Johannes Tlatlik, Group Crash Safety and Damage Mechanics, answer questions about the digital die casting process and its implementation in industrial practice.

How complex does a knowledge graph need to be to lead to better cast components?

Dr. Elena García Trelles: A knowledge graph does not have to be ‘complex,’ but rather must represent the relationships that are crucial for both the component and the process. For many questions, a few clearly defined information nodes are sufficient at first – for example, regarding the alloy, tool, process parameters, microstructure, and test results. A clean structure and expandability are important: you can start with a focused section and add further details to the graph step by step as new questions arise.

Graph showing the relationship between the parameters (pressure and thermal state) and the measured oxide number of the cast samples.

Analysis from the knowledge graph: The relationship between the process parameters and the oxide content of cast samples was examined. All samples were cast using identical nominal process parameters, but due to the nature of the process, the actual pressures and temperatures measured by sensors in the die casting mold vary from shot to shot. Machine learning was used to reduce the many casting parameters to general pressure and temperature profiles. Together, these two new parameters explain approximately 50% of the variance. The graph shows the relationship between the new parameters (x- and y-axis) and the measured oxide number (color scale) of the cast samples; each point represents one cast sample. Samples in the low-pressure, high-temperature range (bottom right) tend to have fewer oxides. This provides a starting point for process optimization and for predicting casting quality.
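The dimensionality reduction described here can be sketched with a plain principal component analysis on synthetic sensor data; the actual method and measurements at Fraunhofer IWM may differ:

```python
import numpy as np

# PCA sketch: many correlated pressure/temperature sensor readings per shot
# are compressed into two summary parameters. Data is synthetic.
rng = np.random.default_rng(0)
n_shots, n_sensors = 200, 12
latent = rng.normal(size=(n_shots, 2))           # hidden pressure/thermal state
mixing = rng.normal(size=(2, n_sensors))
X = latent @ mixing + 0.8 * rng.normal(size=(n_shots, n_sensors))  # noisy sensors

Xc = X - X.mean(axis=0)                          # center each sensor channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance ratio per component
scores = Xc @ Vt[:2].T                           # two new parameters per shot

print(f"variance explained by 2 components: {explained[:2].sum():.0%}")
```

Each row of `scores` gives a shot's coordinates on the two new axes, so per-sample quality measurements such as the oxide number can be plotted against them as in the figure above.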

What data does a digital twin need to be productive?

Dr. Elena García Trelles: A productive digital twin combines process data from machine control, sensor data (e.g., temperatures, pressures), materials and batch information, as well as test and inspection results. Simulation data and material models (e.g., damage models or service life models) are added to this. The decisive factor is not so much the ‘amount of data’ as the quality, traceability, and consistent linking of this data in the knowledge graph.
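As a minimal illustration of such linking, heterogeneous records from different sources can be joined by a shared casting ID before any analysis. All field names and values below are invented:

```python
# Sketch: merging machine data, batch information, and inspection results
# into one traceable record per casting, keyed by casting ID.
machine_data = {"C-1001": {"shot_pressure_bar": 615, "cycle_time_s": 48}}
batch_info   = {"C-1001": {"alloy": "AlSi10MnMg", "melt_batch": "B-77"}}
inspection   = {"C-1001": {"oxide_number": 3, "passed": True}}

twin_record = {
    cid: {**machine_data.get(cid, {}),
          **batch_info.get(cid, {}),
          **inspection.get(cid, {})}
    for cid in machine_data
}
print(twin_record["C-1001"]["alloy"])  # traceable from process back to melt batch
```

In a real knowledge graph these links are typed relations rather than flat dictionaries, but the point stands: consistent linking, not sheer data volume, is what makes the twin productive.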

How can the digital twin take economic and ecological aspects into account?

Dr. Elena García Trelles: Economic and ecological aspects are taken into account by supplementing the knowledge graph with key figures such as scrap rates, energy and material usage, cycle times, and carbon footprint. This allows different process variants to be compared not only in terms of quality, but also in terms of cost and sustainability. The digital twin thus supports decisions that keep an eye on product performance, resource efficiency, and climate targets at the same time.
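One way to sketch such a comparison: each process variant carries quality, cost, and CO2 key figures, and a weighted score ranks them. All figures, weights, and normalizations below are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical comparison of process variants on quality, cost, and
# carbon footprint, once those KPIs are linked in the knowledge graph.
@dataclass
class Variant:
    name: str
    scrap_rate: float   # fraction of rejected castings
    energy_kwh: float   # energy per casting
    co2_kg: float       # carbon footprint per casting
    cost_eur: float     # cost per casting

def score(v: Variant, w_quality=0.5, w_cost=0.3, w_co2=0.2):
    """Lower is better: weighted sum of roughly normalized penalties."""
    return (w_quality * v.scrap_rate
            + w_cost * v.cost_eur / 10
            + w_co2 * v.co2_kg / 5)

variants = [
    Variant("baseline",  scrap_rate=0.06, energy_kwh=4.2, co2_kg=2.1, cost_eur=3.8),
    Variant("optimized", scrap_rate=0.03, energy_kwh=3.9, co2_kg=1.8, cost_eur=4.1),
]
best = min(variants, key=score)
print(best.name)
```

Here the "optimized" variant wins despite a slightly higher unit cost, because the weighting rewards its lower scrap rate and footprint; the weights themselves encode the trade-off between product performance, resource efficiency, and climate targets.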

What data do I need as a foundry to make data-driven predictions and gain new insights from my production?

Dr. Johannes Tlatlik: For data-driven predictions in foundries, the more data, the better. It is not enough to simply document the classic target values. We recommend consistently measuring and storing what happens in the process – from machine data and material batches to environmental influences.

The variance of the collected actual values is crucial, because only the natural dispersion in the process provides the basis for valid evaluations. Our analyses clearly show that unexpected correlations between seemingly unimportant data often provide valuable insights.

What advantages do data-driven predictions offer me as a foundry?

Dr. Johannes Tlatlik: For data-driven predictions to pay off, the system must be able to use them to streamline, or even render obsolete, existing processes such as downstream quality checks. In addition, outliers and anomalies in the process can be detected.

A well-designed digital data structure uses the continuously collected actual measurements from ongoing production to provide a probability forecast of the quality of each casting. Foundries can then concentrate on those 'at-risk' components for which the algorithm predicts a higher risk of defects. This eliminates the need for expensive, rigid full inspection, saves time and resources in quality control, and lets specialists focus where the probability of a problem is highest.
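Such risk-based inspection can be sketched as simple thresholding of per-casting defect probabilities. The probabilities below are random stand-ins for the output of a trained prediction model:

```python
import numpy as np

# Sketch of risk-based inspection: each casting gets a defect probability
# from a (hypothetical) trained model; only castings above a threshold
# are routed to detailed inspection instead of testing 100% of production.
rng = np.random.default_rng(1)
castings = [f"casting_{i:03d}" for i in range(20)]
defect_prob = rng.uniform(0, 0.3, size=20)   # stand-in for model output
defect_prob[[4, 11]] = [0.72, 0.55]          # two at-risk parts

THRESHOLD = 0.5
at_risk = [c for c, p in zip(castings, defect_prob) if p > THRESHOLD]
print(at_risk)  # only these go to detailed quality inspection
```

In this toy run only two of twenty castings exceed the threshold, so inspection effort drops to a tenth of full testing; in practice the threshold would be tuned against the cost of missed defects.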

Author

EUROGUSS 365
Editors EUROGUSS 365
euroguss365@nuernbergmesse.de