1 Introduction

1.1 Machine Learning in Additive Manufacturing

Machine learning (ML), a sub-branch of artificial intelligence (AI), has seen steady adoption in applications that require adaptive decision-making. Such applications include, but are not limited to, data modeling in mathematics, decision-making and logic structures in systems automation, and the focus of this study, defect detection in additive manufacturing (AM) [2]. AM is based on the principle of sequentially layering material filament according to a computer-aided design (CAD) model. This is the opposite of subtractive manufacturing, currently the prevalent fabrication method, which involves reducing a stock of material down to shape by cutting, milling, turning, etc. Research at the intersection of these two technologies has become popular in the past 5 years, with the focus directed toward improving part quality, minimizing operational and material costs, and optimizing the fabrication process [3]. A key step in achieving these outcomes is ensuring components can be reliably produced, known as printability [4]. This is what ML integration aims to do, using both printer and camera data to automatically predict and diagnose sources of error.

1.2 Applications of AM Defect Detection

Despite the rapid adoption and prevalence of AM, a method of ensuring consistency and detecting defects in 3D-printed parts is yet to be universally adopted [5]. This is a major challenge because defect occurrence depends not only on the printing parameters of the model, but also on the machine operator’s expertise and the reliability of the printer itself. Primary among the failure modes in 3D printing are filament entanglement (also known as spaghettification), warping, and surface defects that appear after the part has been fabricated. Ideally such errors are mitigated during manufacturing, and endeavors have indeed been made using live print camera-feed (in-situ) integration [6], but considerable value also lies in using failure diagnosis data to inform smarter part designs and operational methods.

ML in 3D-printing applications to date has emphasized process monitoring, with comparatively little attention paid to post-fabrication quality assurance. Recognizing this significant inconsistency in AM technologies, many adopters have begun integrating image-recognition ML into existing models or designing new models with this technology pre-installed. A popular method of using ML to classify these defect patterns is a convolutional neural network (CNN) [7]. CNNs are a class of artificial neural network structured similarly to neurons in the brain, with a connectivity pattern that allows each node in the network to respond only to signals from the other nodes that are relevant to it. Together with an automated part-scanning algorithm, an ‘ex-situ’ defect detection method has the potential to be integrated in place of the traditional in-situ model.

This was the motivation behind our study—how to leverage the power of programmable robotics, and continually improve a ML algorithm to automatically detect and classify defects in 3D-printed parts. Addressing this fundamental drawback in AM would allow future research pathways to consider the integration of not only the ML framework in classifying faults during the manufacturing process itself, but also to use the data to self-diagnose and improve the manufacturing process for future iterations of the model.

2 Literature Review

Significant progress has been made in introducing AM technologies to the consumer market over the past decade. The rapidly increasing userbase of 3D-printing technology, the growth and expansion of global manufacturing demand, and the push for efficiency and automation have increased the urgency for engineers and researchers to recognize and consolidate a solution for mitigating defects in printed parts. Our literature review confirmed that defect detection in AM is a problem that goes beyond small-scale production, so we present specific and general solutions, and conclude that a fundamental method of reducing 3D-printing defects is needed for the continued development and integration of AM worldwide.

2.1 Defining the Problem

It is well recognized that defects and failures caused by 3D-printing processes are a critical barrier to the widespread adoption of AM and deserve serious attention. In 2020, Wang et al. [8] comprehensively reviewed the state-of-the-art of ML applications across a variety of domains to identify the main use-cases in the research and development of AM. Those authors state that although current ML-in-AM research is intensively concentrated on print parameter optimization and in-process monitoring, ML research efforts are expected to be directed toward more rational manufacturing plans and automated feedback systems for AM. This is a vital step in pushing ‘smart’ AM into the near future. Their article highlighted the lack of reports on ML usage in AM with regard to determination of microstructure, material property prediction, and topology optimization, and they conclude that further research into these areas is crucial in determining printability, the design of ML algorithms, and optimal part designs with respect to materials.

A similar state-of-the-art review by Oleff et al. [5] focused on the challenge of industrializing AM by analyzing current monitoring trends to identify key weaknesses. These authors found that although anomaly detection has been evaluated across most in-situ process monitoring techniques, the emphasis was decidedly on part geometries and surface properties in terms of over- and underfill. These parameters refer to an ‘infill’ percentage, which dictates the density of the printed part. Oleff et al. noted that measurements of surface roughness and quality, as well as mechanical properties such as tensile strength, were addressed by only two of the 221 relevant publications in their survey. They conclude that inspection of such material characteristics remains largely unexplored, and thus represents a significant area for further investigation.

Researchers have begun to study the implications of implementing defect detection in AM. For example, Chen et al. [3] examined the future of surface defect detection methods by comparing the key AM part inspection technologies, testing both “traditional” detection methods, such as infrared imaging and eddy current testing, and ML-based techniques such as CNN image recognition and auto-encoder networks. Auto-encoder networks proved effective at learning fault features, rather than classifying them, but required consistent input and output data dimensions. It was highlighted that ML defect detection is based directly on the characteristics of the AM field, and as such is the most sustainable approach moving into the future. The authors conclude that, although more effective than traditional inspection technologies, NN-based methods of defect detection are ultimately deeply data-driven, and establishing a universal model applicable at scale requires further study into their respective advantages and disadvantages.

These findings were echoed in China by Qi et al. [9], who evaluated the effectiveness of different NN structures with the intention of optimizing the performance of AM parts. Qi et al. document the limitations in 3D-printing performance when applying numerical and analytical models to AM. Their paper reports not only that a ML approach to AM is valid due to its ability to perform complex pattern recognition without solving physical models, but also that NNs are effective in CAD model design, in-situ modeling, and quality evaluation. The validity and potential of linking AM and NN technologies is brought to light, whereby they can be effectively integrated from design phases to post-treatment; however, the authors conclude that key challenges remain in the areas of ML data collection and AM quality control.

2.2 Searching for Solutions

Petsiuk and Pearce [6] took a hybrid analytical approach to developing a 3D-printing defect detection method in their study aimed at supporting intelligent error-correction in AM. In their paper they present a comparison model, whereby an in-situ printing process was photographed layer-by-layer and compared with idealized reference images rendered by a physics engine, “Blender”. They found that the similarity comparison itself did not introduce significant operational delays, although time was required for the virtual environment and rendering processes. A notable strength of this approach is that similarity and failure thresholds can be fine-tuned, providing both flexibility and varying sensitivity to defects during manufacturing. The authors emphasize their model’s ability to scale with the number of parts manufactured, needing only to render a base image set per part, as well as its independence from training data. This presents a significant reduction in the resources required to implement the method under unfavorable printing conditions, such as miscalibrated printer components, contamination, or bed-leveling discrepancies, which are prevalent control issues in mainstream AM markets.

Similarly, Paraskevoudis et al. [10] outlined a method of assessing the quality of in-situ 3D printing using an AI-based computer vision system. By analyzing live video of the process with a deep CNN, a larger variant of a CNN, a primary mode of AM defect was determined to be “stringing”, which is caused by excess extruded material from the printer nozzle forming irregular protrusions on the printed surface, often causing issues when dimensional tolerances are critical. The deep CNN model was demonstrably effective at identifying stringing in the experimental data, but proved unreliable when applied to external data acquired from web-based sources. This identified a critical need for continuous model improvement through training on “new” data. Those authors comment on further applications of this form of error detection, stating that, provided a sustainable detection model is found for other key AM failure modes, the approach could be developed to adjust the printing process itself, whether by correcting its parameters or by terminating it. Such a feature would reduce the level of skill needed to operate such machinery, meaning fewer engineers and more technicians in practice.

These international findings were replicated in Singapore in a 2021 study by Goh et al. [11], who explored and summarized the various types of ML techniques and their current use in different aspects of AM, while elaborating on the challenges currently faced. From the perspective of in-situ monitoring of 3D-printing processes, Goh et al. found the high computational cost and large-scale data acquisition for ML training to be significant challenges in practice. They also found that CNNs better capture spatial features and are hence ideal for 2D image and 3D model applications. The authors similarly stress the importance of large datasets for achieving high detection accuracy, and state that the realization of predictive modeling in digital twins for AM ultimately depends on the ML algorithm’s classification accuracy, its input data quality, and its multi-task learning capabilities.

In most real-world 3D-printing applications, the longevity of AM as a reliable method of part production hinges on its ability to diagnose issues with print quality. A study by Meng et al. [12] reviewed parameter optimization and discrepancy detection in the AM field by comparing the performance of common ML algorithms. They documented an iterative training method for an ML model wherein data from printing parameters, in-situ images and telemetry, microstructural defects and roughness, and the part’s geometric deviation and mechanical strength were used to build predictive surrogate models to assist in-process optimization. This methodology is known as “active learning”, whereby input–output pairs of data are formed and used to train ML models while minimizing the labeling of new data. Practically speaking, this significantly alleviates the cost, time, and human labor of conducting dedicated experiments with input labeling. The authors emphasize the gap in research with regard to active-learning ML algorithms in AM applications and conclude that such an algorithm would be highly efficient in cases where a dataset is yet to be acquired.

Regardless of the defect analysis technique used, it is clear that the best outcomes are achieved with a form of ML NN that is accurate, scalable, and able to adapt its behavior according to previous performance. Han et al. [7] considered the direct application of ML to defect classification in AM to assess its viability in the context of real-world image datasets. Through localized segmentation of defects across image frames taken from in-situ print monitoring, a steady-state CNN model was established from 103 verification data sets. This model then accurately predicted defects in 101 “new” input images of 3D-printed parts. In practice, the authors’ model outperformed several established detection frameworks such as “Faster RCNN” and “SSD ResNet” in average precision, but lagged in detection speed. The consistency of the segmentation model can be attributed to the technique itself: image instance segmentation effectively distinguishes the differences and boundaries between similar surface defects, an approach that can help develop a similar strategy on a smaller scale.

2.3 Preparing for Tomorrow, Today

Today’s global manufacturing industry continues to grow and to pursue more time- and cost-effective means of production in AM. Unfortunately, research into “smart” AM, whereby sources of print failure are determined and used to inform better printing performance, has so far been both general and sparse. The literature we reviewed reflects that gap in knowledge. Particularly needed is a way to categorize the defects caused by erroneous printing parameters and to develop effective defect recognition in AM to address them.

3 Methods

3.1 Methodological Approach

Our goal was to evaluate the efficacy of using ML to automatically classify 3D-printing defects. We aimed to both describe the characteristic strengths and weaknesses of using such a method and gain a more in-depth understanding about its feasibility in the context of current 3D-printing applications.

We anticipated that a combination of quantitative and qualitative data would be required to achieve this aim; namely, photographic images and their derivatives such as those described by Petsiuk and Pearce [6], and Han et al. [7], to examine, decompose and categorize them as input data for the CNN model. The datasets would comprise these two data types, as the images would be parsed by a computer model and then verified by a human.

To maintain a controlled environment for testing the ML model’s behavior, all preliminary data used to construct and train the model were primary data. The experiments carried out by Han et al. [7] on similar CNN models made it apparent that such a model may be very sensitive to slight deviations in input data, so secondary data were omitted from our study; they remain a clear extension for model testing in later build iterations. Furthermore, to simplify the scope of testing, the main geometries chosen for the 3D-printed parts to be scanned were square- or rectangular-based. Limiting the sample space to this particular set of shapes drastically simplified the algorithm for rotating and capturing the surfaces of the part. This limitation did not affect the validity of what was being tested, but rather left the door open to research into converting this method into a universal model.

Finally, an equivalent in-situ model was created as a direct comparison with the ex-situ model, both to aid in accelerating the rate of data collection and to act as a benchmark for performance and accuracy.

3.2 Data Collection Methods

Data were collected after establishing the CNN model in YOLOv5, whereby samples of 3D parts printed locally in the Monash Smart Manufacturing Hub Staging Lab served as a proportion of the total part sample space. The main selection criteria for these samples were defect type and locality relative to the part’s geometry, chosen to assist faster pattern-recognition training and to create a fair comparison between the in-situ and ex-situ models.

The relevant tools, equipment, and materials used to gather data were a Creality Ender 3 v2 3D printer, PLA printer filament (colored and white PLA+; 1.75 mm diameter), a JAI HD camera, and a Windows-based operating system running PyTorch and the LabelImg software.

Images were captured in a controlled environment to ensure consistency not only within the scope of the experiment itself, but also to the degree that could be replicated by another researcher or engineer (Fig. 1). This environment comprised a fixed, white-balanced background calibrated against a control in equal lighting conditions (brightness and color-temperature).

Fig. 1
A photograph of an experimental setup with the KUKA arm, gripper, and JAI camera rig.

Preliminary data collection setup: KUKA arm, gripper, JAI camera rig

We anticipated encountering issues with object visibility when the color of the part being examined was identical to the reference background. In such cases, a unique background color was to be fixed or alternate backgrounds exchanged. Another potential issue was the base dataset not being sufficiently large. As mentioned by Chen et al. [3] and Goh et al. [11], a major limitation in mass-integration of NN-based models in AM is the sheer size of training dataset required. It is necessary to consider the effect of this when evaluating the performance of ML in this application. It was only feasible for us to attain < 200 samples of data for use in training and validation.
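With fewer than 200 samples available, a reproducible train/validation split helps keep evaluation fair across build iterations. A minimal sketch of such a split, where the file names and the 80/20 fraction are illustrative rather than taken from the study:

```python
import random

def split_dataset(samples, val_fraction=0.2, seed=42):
    """Shuffle reproducibly, then split a small sample list into
    training and validation subsets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_fraction))
    return shuffled[n_val:], shuffled[:n_val]

# With < 200 images, an 80/20 split keeps roughly 160 samples for training.
images = [f"part_{i:03d}.png" for i in range(190)]
train, val = split_dataset(images)
```

Fixing the seed means the same split can be regenerated when the model is retrained on later build iterations.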

3.2.1 KUKA Robotic Arm Algorithm

The arm was programmed to grab, move, and rotate test pieces to allow the mounted JAI camera to capture multiple isometric views of the part (Figs. 2, 3, 4 and 5). Specifically, two external isometric images of the part were captured. Based on the initial hypothesis, this method reduced the number of images needed to train the CNN for a particular defect, and thus reduced the overall duration of scanning. The advantage of a programmable robotic arm is the ability to maneuver the piece so that every surface is exposed in just a few images, each captured at an isometric angle containing multiple faces.
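The robot cell itself is programmed on the KUKA controller, but the pick-rotate-capture sequence above can be modeled abstractly. In this Python sketch every pose value, name, and the lift distance is a hypothetical placeholder, not the study’s actual robot program:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Simplified end-effector pose: position in mm, rotation about z in degrees."""
    x: float
    y: float
    z: float
    rz: float

def scan_sequence(pick: Pose, lift_mm: float = 300.0):
    """Model the sequence described above: lift the part, then present two
    opposite isometric corners to the fixed camera, so all external faces
    appear across just two captures."""
    lift = Pose(pick.x, pick.y, pick.z + lift_mm, pick.rz)
    first_iso = Pose(lift.x, lift.y, lift.z, 45.0)    # three faces visible
    second_iso = Pose(lift.x, lift.y, lift.z, 225.0)  # opposite three faces
    return [("lift", lift), ("capture", first_iso), ("capture", second_iso)]

steps = scan_sequence(Pose(0.0, 0.0, 0.0, 0.0))
```

The two capture orientations differ by a 180° rotation, which is what lets a cuboidal part expose all six faces in two isometric images.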

Fig. 2
A photograph of a hollow cube with rectangular cuts at the center of the sides.

Example testing part: white hollow cube measuring 80 × 80 × 80 mm

Fig. 3
A photograph of a robotic arm lifting the hollow cube.

Robotic arm in first position: working part is located and lifted

Fig. 4
A photograph of a robotic arm rotating the hollow cube.

Robotic arm in second position: working part is rotated to first isometric angle relative to hanging JAI camera

Fig. 5
A photograph of a robotic arm rotating the hollow cube. Some equipment can be seen in the background.

Robotic arm in third position: working part is rotated to second isometric angle relative to hanging JAI camera

Conditions surrounding the test rig were kept constant in terms of lighting (cool white light of 5300 K) and background (solid background sheet behind and below the sample) to limit the effect of extraneous variables.

Further stages of development are: calibration pieces (homing, reference plane, improved consistency in both programming and in result collection); hanging camera mounting at isometric angle; and surrounding part background.

3.2.2 YOLO Image Classification

Using the isometric images gathered with our scanning method, the YOLOv5 model was trained under human supervision, which involved digitally labeling the input data with the LabelImg software (Fig. 6). These annotations are the mechanisms through which the CNN learns to recognize defect features.
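LabelImg can export annotations in the YOLO text format that YOLOv5 trains on: one line per bounding box, containing a class index followed by center coordinates and box size, all normalized to the image dimensions. A small parser illustrating the format (the class ordering here is hypothetical; in practice it is fixed by the classes file used during labeling):

```python
def parse_yolo_label(line, class_names):
    """Parse one line of a YOLO-format annotation file:
    '<class_id> <x_center> <y_center> <width> <height>', with all four
    coordinates normalized to [0, 1] relative to the image size."""
    fields = line.split()
    class_id = int(fields[0])
    x, y, w, h = (float(v) for v in fields[1:5])
    return {"defect": class_names[class_id], "x": x, "y": y, "w": w, "h": h}

# Hypothetical class order; set by the classes file used in LabelImg.
CLASSES = ["layer_shift", "stringing", "warping"]
label = parse_yolo_label("2 0.50 0.40 0.30 0.20", CLASSES)
```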

Fig. 6
An isometric image of a cube includes 2 rectangles at the top left and bottom right corners and a vertical rectangle at the center.

Example of labeling of primary defect modes using LabelImg: layer shift, stringing and warping

An equivalent in-situ model provided image data that were also used for training. Further improvements to the training methodology include using a 1:9 ratio of true-negative to true-positive training data, such that 10% of the training images would not contain the corresponding defects, and supplementing primary data with open-source secondary datasets. As a CNN model’s performance generally improves with the volume of training input, these measures are expected to improve overall performance.
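The proposed 1:9 true-negative to true-positive ratio can be constructed by sampling defect-free images into the labeled set. A minimal sketch, with file names hypothetical:

```python
import random

def mix_training_set(positives, negatives, neg_fraction=0.1, seed=0):
    """Combine defect (positive) and defect-free (negative) images so that
    roughly neg_fraction of the final set contains no defects."""
    rng = random.Random(seed)
    # For a 1:9 negative-to-positive ratio: n_neg / (n_neg + n_pos) = 0.1
    n_neg = round(len(positives) * neg_fraction / (1.0 - neg_fraction))
    combined = list(positives) + rng.sample(negatives, min(n_neg, len(negatives)))
    rng.shuffle(combined)
    return combined

positives = [f"defect_{i}.png" for i in range(90)]
negatives = [f"clean_{i}.png" for i in range(30)]
train_set = mix_training_set(positives, negatives)
```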

Difficulties included larger forms of warping not being easily distinguished from curved features of the part’s geometry, which restricted the effectiveness of the preliminary model to similar cuboidal shapes. To mitigate this during labeling, larger examples of these features were bisected for better localization (Fig. 7).

Fig. 7
An isometric image of a cuboidal structure includes 2 rectangles at the bottom left and bottom right with curved corners depicting defects.

Bisection method for larger examples of defects. This particular example shows a large warping defect spanning the entire width of the part divided into two separate instances
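The bisection step can be expressed directly on YOLO-normalized boxes: a wide defect box is replaced by two half-width boxes sharing the same vertical extent. A minimal sketch of that split:

```python
def bisect_bbox(box):
    """Split a YOLO-normalized box (x_center, y_center, width, height) into
    left and right halves, as done when one warping defect spans the
    entire part width."""
    x, y, w, h = box
    half_w = w / 2.0
    left = (x - half_w / 2.0, y, half_w, h)
    right = (x + half_w / 2.0, y, half_w, h)
    return left, right

# Example: a defect box spanning 80% of the image width.
left, right = bisect_bbox((0.5, 0.8, 0.8, 0.1))
```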

3.3 Methods of Analysis

In both the quantitative and qualitative data analyses, outliers and anomalies were factored into the assessment of the model’s functionality and reliability, but were omitted from computational methods such as use as training input.

The main method of quantitative analysis was the passing of image data through a CNN and recording the corresponding output. This network functions as a filtering device, using a network of calibrated nodes that respond to their associated shape or pattern. The nodes make up the decision-making mechanism of the CNN and can be adjusted during experimentation.
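The filtering behavior described here, with each node responding only to its associated local pattern, can be illustrated with a toy convolution. This is not the YOLOv5 model itself, only the underlying principle, and the image and kernel values are illustrative:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Minimal 2-D 'valid' correlation: each output node responds only to the
    local patch of pixels under the kernel, mirroring a CNN's restricted
    connectivity pattern."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A vertical-edge kernel fires where intensity jumps left-to-right,
# e.g. along a layer-shift boundary (toy 6x6 image).
image = np.zeros((6, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]] * 3)  # 3x2 vertical-edge detector
response = conv2d_valid(image, edge_kernel)
```

The response is large only at the column where intensity changes, which is the sense in which a node "responds to its associated shape or pattern"; in a trained CNN these kernels are the calibrated, adjustable quantities.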

Qualitative analysis will comprise identification, verification and classification against the following main AM print failure categories [10]:

  • spaghettification/stringing

  • layer shift

  • warping.

These categories can then be linked with associated mechanical errors or defects:

  • under-extrusion

  • over-extrusion

  • nozzle blob

  • poor initial layer

  • poor bridging overhang

  • premature detachment of print

  • no extrusion

  • skirt issues.

Human verification of these defect features will be used to train the CNN.

3.4 Justifications

An immediate limitation of a camera-based setup is detecting printing defects that are internal to the structure of the printed part. If a 3D-printed component needs to pass a quality check under the method outlined here, there would be no way of verifying the integrity of the interior surfaces and whether they fulfill certain failure mode criteria. An example is a hollow cube, in which an interior surface contributes to the physical properties of the part, such as density and rigidity. Although it is possible to reveal internal print defects using higher-energy imaging such as X-ray [10], it would ultimately be more reliable and cost-efficient to simply detect such major print errors during the manufacturing process itself, as explored by Vosniakos et al. [2], Petsiuk and Pearce [6] and Oleff et al. [5].

4 Results and Discussion

4.1 Preliminary Results and Discussion

4.1.1 Deliverables

Our experimentation aimed to deliver a simple, working demonstration of analyzing a 3D-printed part for surface defects. The process achieved the following: (a) part retrieval from a dedicated storage location using a KUKA robotic arm, (b) an automated algorithm for part scanning using a high-definition JAI camera, (c) classification of surface defects if present and (d) percentage accuracy rating in comparison with an equivalent in-situ model.
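Deliverable (d) reduces to comparing predicted defect labels against human-verified ones. A minimal sketch of the percentage-accuracy computation; the example labels are hypothetical, not experimental results:

```python
def percentage_accuracy(predicted, actual):
    """Percentage of parts whose predicted defect label matches the
    human-verified label."""
    if not actual:
        return 0.0
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return 100.0 * correct / len(actual)

# Hypothetical example labels; 'none' marks a defect-free part.
ground_truth = ["warping", "stringing", "none", "layer_shift"]
ex_situ_pred = ["warping", "stringing", "none", "stringing"]
accuracy = percentage_accuracy(ex_situ_pred, ground_truth)
```

Running the same computation over the in-situ model’s predictions gives the benchmark figure the two pipelines are compared against.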

At the time of writing, initial findings and results have only been taken from training data and experiments with the in-situ testing rig.

YOLOv5 worked consistently, but is yet to prove a reliable ML solution, notably because of the sheer volume of training input required.

The gathering of data will be a significant priority in the future stages of this study, with measures already taken to divide the printing load across the ex-situ and in-situ models. Further, an additional two Ender 3 v.2 rigs will be established external to the laboratory for simultaneous data acquisition.

An initial accuracy of 18.9% from the accompanying in-situ rig was attributed to insufficient training data for the model.

The preliminary KUKA robotic arm scanning algorithm based on isometric images was effective only for square-based prisms in a sample of existing test print pieces.

4.2 Areas of Future Work

Although this study does tackle a prominent drawback with AM technology, it is considerably limited in scope, particularly for adapting this methodology to parts of any size or shape.

4.3 Study Limitations

  • The KUKA robotic arm programming is currently limited to cuboidal and rectangular-shaped parts. However, this is only a limitation of the scanning algorithm itself, and once addressed with an adaptable part-maneuvering strategy it will present no significant challenge to training the CNN.

  • The ability of this model to detect and classify minute defects or other small tolerancing issues may depend greatly on the resolution of the attached camera equipment. A solution is simply to upgrade to a higher-resolution camera befitting the tolerance requirements.

  • Gathering large samples of verified training data is an unavoidable hurdle in assuring a high rate of accuracy for ML-based models such as YOLOv5. This will be a significant contributor to the cost and time investment for this model to be adopted at scale.

  • Although defects can be reliably identified through the method outlined in this study, an ideal method of presenting these data as a useful diagnosis for users was ultimately not investigated due to the working constraints of the study.

  • The severity of defects was not considered. Although the type of defect may be successfully discerned, neither its degree nor magnitude can be quantified.

  • The in-situ experiment running parallel to this method captured footage on different image-capturing hardware. Although this did not affect the premise of this study to a great extent, as in-situ monitoring remains limited by its fixed angle, nevertheless a closer comparison should be made using identical equipment.