
Machine Learning in Additive Manufacturing: A Materials Scientist on the Sidelines

From microscopes to machine learning—one materials scientist’s journey through the growing role of data-driven tools in the unpredictable world of additive manufacturing.

Working in additive manufacturing (AM) and deep in a world reliant on phase diagrams, EBSD maps, and microstructures, I was understandably quite sceptical when someone said ‘machine learning (ML) can optimise your builds’.

From behind my microscopes and mechanical testing rigs, I wasn’t quite sure how all these fancy data algorithms fit into the materials science I knew. But nonetheless, with all the background chatter going on, I’ve had no choice but to pay attention to what ML is doing for AM – and I want to share what I’ve learned from the sidelines.

🤔 Okay, but what even is machine learning?

As someone rather foreign to the field of data science and Python scripts, here’s what I gather ML to be. It’s essentially a branch of AI built around algorithms and statistical models that, in our context, are used to predict and correct manufacturing outcomes. Masses of data are fed into these algorithms, which analyse them for trends, patterns, and relationships between inputs and outputs. The model then makes decisions to ‘fix’ the process, without necessarily understanding the governing physics.

Instead of using kinetic models and conservation laws, you simply give it examples – a heck of a lot of them at that – and it figures out what to do.

Say you’re working with Laser Powder Bed Fusion (LPBF). You can log parameters such as laser power, scan speed, and resultant porosity, and eventually, with enough time and examples, the model will be able to predict porosity for future prints and, without intervention, adjust particular parameters to hit the required outcome. The same goes if you want to alter processing to achieve certain mechanical properties for different applications, or decide how many parts to print based on your previous inventory, and so on and so forth.
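To make that concrete, here’s a tiny sketch of the kind of model I’m describing, as I understand it from the sidelines. The numbers, the parameter choices, and the use of a random-forest regressor are entirely my own illustration, not anyone’s actual production pipeline.

```python
# Minimal sketch: learn porosity from logged LPBF parameters.
# The data below are invented for illustration; a real log would hold
# hundreds or thousands of builds.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [laser power (W), scan speed (mm/s)]; target: porosity (%)
X = np.array([
    [200, 800], [250, 900], [300, 1000],
    [350, 1100], [200, 1200], [300, 700],
])
porosity = np.array([0.8, 0.5, 0.3, 0.4, 1.6, 0.6])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, porosity)

# Predict porosity for a candidate parameter set we haven't printed yet
candidate = np.array([[275, 950]])
print(f"Predicted porosity: {model.predict(candidate)[0]:.2f} %")
```

Swap the six made-up rows for a few hundred logged builds and you have, in spirit, the porosity predictor I described above.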

Sounds convenient, right? But there is a catch. ML doesn’t necessarily understand why these trends occur. It won’t tell you what’s driving the solidification behaviour or where those dendrites are forming. It just knows that “These settings usually work, and these don’t.” …I don’t know about you, but as a researcher, that lack of why is rather unsettling 😬.

Still, in a field like AM—where the process window is vast and the cost of failure racks up quickly—having a tool like ML to help us navigate the minefield (even if it lacks deep understanding) shouldn’t be dismissed so hastily.

🤑 The Market’s Not Guessing — It’s Investing

For all the academic buzz around ML in AM, what really catches my eye is the industry pull towards it: not just at the R&D level, but in production, maintenance, and qualification.

The market is BOOMING!! 💥 Globally, the ML market was valued at USD 53.49 billion in 2023, and projections suggest it could hit USD 1,233.02 billion by 2032, a CAGR of 34.8% over 2025–2032. That’s not just academic funding or startup chatter; that’s big boy investment 💪.

In AM specifically, the picture is a little fuzzier. But according to AMPOWER’s 2024 report, more than 40% of industrial AM users are either exploring or actively rolling out data-driven systems for quality control and process optimisation. And almost all of those have ML at their core.

💸 Why the sudden surge, you may ask?

It’s because failure is expensive … and my word, does AM fail.

A failed metal build doesn’t just cost a few hours of time; it can literally mean thousands of pounds down the drain 😢. With AM being used in fancy industries like aerospace and the medical sector, parts are often complicated, materials aren’t cheap, and qualification hoops can be brutal. So when failure strikes, it strikes HARD. In LPBF, for example, failed builds can account for as much as 10% of production expenses. If ML can shave even a small percentage off that, the ROI is definitely worth it.

And it’s not just about fewer rejects:

✅ ML helps optimise parameters faster. Instead of spending weeks printing endless test cubes, companies are using ML to guide their design of experiments (DoE), basically zeroing in on “what might work” much earlier in the process (I’ve sketched roughly what that looks like just after this list). This has reduced DoE cycle times by roughly 4x for LPBF, depending on part complexity.

✅ Some suppliers even use ML to pre-screen builds. If the model thinks it won’t pass mechanical testing later, the plan won’t even get a second look. It’s not full digital certification yet—but it’s definitely a step in that direction. And considering how long aerospace qualification usually takes, this kind of shortcut (without compromising safety, of course 😇) could change everything.
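For the curious, here’s roughly what ML-guided DoE can look like in code. This is a toy sketch: a Gaussian process surrogate is fitted to the builds printed so far, and a simple lower-confidence-bound rule suggests the next parameter set to try. The data, the process window, and the acquisition rule are my own assumptions, not any particular commercial tool.

```python
# Toy sketch of ML-guided DoE: fit a surrogate to the builds printed so far,
# then pick the candidate parameters predicted to minimise porosity,
# with a small exploration bonus. All values are invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Builds printed so far: [laser power (W), scan speed (mm/s)] -> porosity (%)
X_done = np.array([[200, 800], [300, 1000], [350, 1200], [250, 700]])
y_done = np.array([1.2, 0.4, 0.9, 0.7])

# Anisotropic kernel so power (W) and speed (mm/s) get sensible length scales
kernel = RBF(length_scale=[100.0, 300.0], length_scale_bounds=(10.0, 1e4))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_done, y_done)

# Candidate grid over the process window we're allowed to explore
powers = np.linspace(180, 370, 20)
speeds = np.linspace(600, 1300, 20)
grid = np.array([[p, s] for p in powers for s in speeds])

mean, std = gp.predict(grid, return_std=True)
lcb = mean - 1.0 * std            # lower confidence bound: exploit + explore
next_build = grid[np.argmin(lcb)]
print("Next parameters to try:", next_build)
```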
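And pre-screening, at its simplest, is just a classifier trained on past build outcomes. Again, the features, labels, and decision threshold below are hypothetical; real qualification data would be far richer than this.

```python
# Toy sketch of pre-screening: flag build plans unlikely to pass mechanical
# testing before anything gets printed. Entirely invented data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Features per past build: [laser power (W), scan speed (mm/s), layer thickness (um)]
X_past = np.array([
    [200, 800, 30], [300, 1000, 30], [350, 1200, 60],
    [250, 700, 60], [320, 900, 30], [180, 1300, 60],
])
passed_testing = np.array([1, 1, 0, 1, 1, 0])   # 1 = passed, 0 = failed

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_past, passed_testing)

new_plan = np.array([[340, 1150, 60]])
p_pass = clf.predict_proba(new_plan)[0, 1]
if p_pass < 0.5:                  # the threshold is a judgement call
    print(f"Flag for review: {p_pass:.0%} predicted chance of passing")
else:
    print(f"Looks printable: {p_pass:.0%} predicted chance of passing")
```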

🙃 Where I Stand (for now)

I must be entirely honest… I do still have my reservations regarding ML.

What makes me cautious isn’t the technology itself—it’s the shift in mindset it demands. I worry we’re leaning too hard on a tool that doesn’t always know what it doesn’t know. In materials science, we’re trained to seek why. Why did a crack form there? Why is this grain orientation giving us poor fatigue strength? ML, at least in its current form, often skips the “why” and goes straight to “what usually works.” And that’s a pretty hard pill to swallow for me, I can’t lie.

But on the other hand, let’s be honest: AM doesn’t always behave the way our textbooks say it should. I’ve seen well-thought-out builds that on paper should’ve worked completely flop, and some questionable ones somehow function fine. The culprit? Nitty-gritty issues like non-uniform heat build-up, layer thickness variations, and so on, which are often too difficult for us to characterise and rectify in real time.

Caught between mistrusting the ML black box and knowing full well that AM could use a helping hand… I’m guardedly hopeful about physics-informed machine learning (PIML). These models don’t just learn from data—they’re trained within the guardrails of physical laws. They’re not throwing out thermodynamics or solidification theory; they’re building on it. To me, that feels more grounded and much more respectful of what we already know, and what we still need to learn.
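To give a flavour of what “guardrails of physical laws” means in practice, here’s a minimal, entirely illustrative sketch: a flexible curve is fitted to noisy cooling data, but the loss also penalises deviation from Newton’s law of cooling. The equation, the data, and the weighting are toy choices on my part, not a real PIML model from the AM literature.

```python
# Minimal sketch of the physics-informed idea: fit a flexible curve to noisy
# cooling data while penalising deviation from dT/dt = -k (T - T_env).
# All values are made up for illustration.
import numpy as np
from scipy.optimize import minimize

T_env = 25.0                                  # ambient temperature (degC), assumed known
t = np.linspace(0.0, 2.0, 21)                 # time points (s), hypothetical
T_true = T_env + 1500.0 * np.exp(-2.0 * t)
T_data = T_true + np.random.default_rng(0).normal(0.0, 20.0, t.size)

def model(params, t):
    # flexible, data-driven part: a cubic polynomial in time
    a, b, c, d, _ = params
    return a + b * t + c * t**2 + d * t**3

def loss(params):
    k = params[4]                             # cooling constant, learned jointly
    T_hat = model(params, t)
    data_term = np.mean((T_hat - T_data) ** 2)
    # physics residual: finite-difference dT/dt compared with -k (T - T_env)
    dT_dt = np.gradient(T_hat, t)
    physics_term = np.mean((dT_dt + k * (T_hat - T_env)) ** 2)
    return data_term + 0.1 * physics_term     # weighting chosen arbitrarily

fit = minimize(loss, x0=np.array([1500.0, -1000.0, 0.0, 0.0, 1.0]))
print("Fitted cooling constant k ≈", round(fit.x[4], 2))
```

The point is simply that the physics term stops the data-driven part from wandering into thermodynamic nonsense, which is exactly the kind of compromise I can get behind.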

🧭 Final Thoughts

So no, I still don’t trust ML blindly. But I’m starting to respect it as a tool, especially in areas where trial-and-error dominates, or where the data outputs are just too great for our measly selves to handle in due time.

It should be a supplement to, not a substitute for, physical models and engineering judgement: something that accelerates iteration, flags anomalies, and gives us a head start, not a replacement for the scientific rigour that built this field in the first place.

Until next time,
Amina Hussain