Analysis and Classification of Temperature Measurements during Melting and Casting of Alloys Using Neural Networks

In this article, we consider the organization of monitoring the thermal conditions of melting and casting alloys at foundries. It is noted that the least reliable method is the one in which both measuring and recording the temperature are assigned to the worker. On the other hand, a fully automatic approach is not always affordable for small foundries. In this regard, the expediency of an automated approach is shown, in which the measurement is performed by the worker while the values are recorded automatically. This method requires an algorithm for the automatic classification of temperature measurements based on an end-to-end array of data obtained in production series. The solution of this task is divided into three stages. In the first stage, the raw data are prepared for the classification process. In the second stage, the measurements are classified using the principles of artificial neural networks. Analysis of the artificial neural network results has shown its high efficiency and a high degree of correspondence with the actual situation at the work site. It is also noted that the use of artificial neural network principles makes the classification process flexible, since it can easily be supplemented with new parameters and neurons. The final stage is analysis of the results. Correctly classified data make it possible not only to assess compliance with technological discipline at the site, but also to improve the identification of the causes of casting defects. The proposed approach reduces the influence of the human factor in the analysis of the thermal conditions of melting and casting alloys with minimal costs for melting monitoring.

The temperature of the alloy at furnace tapping and during casting has a significant effect on the formation of a number of defects in cast billets. An excessively high alloy temperature contributes to the formation of shrinkage cavities, cold and hot cracks, a mismatch in mechanical properties, and other directly or indirectly related defects. Pouring metal at insufficient temperature leads to misruns and cold shuts, deterioration of the conditions for gas removal from the alloy, and a mismatch in the macro- and microstructure [1][2][3][4]. Therefore, reliable control of the alloy's thermal state is crucially important in foundry production.
To date, the most technologically advanced and accurate method of temperature measurement is the immersion thermocouple method [5][6][7]. Temperature measurement of liquid steels and irons is, as a rule, carried out as one-time acts in certain technological periods. In this case, special attention should be paid to the issue of fixing the thermocouple readings, which can be done either manually or automatically [8][9][10]. The manual approach assumes that the worker visually reads the instrument and enters the values into the factory database. This approach cannot minimize the risks associated with the human factor and does not correspond to the modern concept of production organization. An automatic method of temperature fixation requires linking the obtained values to the measurement site, which, as a rule, is solved by creating additional measuring units at each site connected to the factory network. In this case, the ambiguity of the fixation process is minimized, but the total cost of control increases. For large-scale metallurgical production this approach is certainly justified, but in foundries with relatively small melting units of several tons and compact production sites, increasing the number of measuring units may turn out to be economically and technologically inefficient. In this case, an approach that minimizes the human factor at the stage of fixing temperature values, while not requiring an increase in the fleet of measuring devices, is of undoubted interest.
The aim of the work is to develop and implement a method for automatic classification of temperature measurements based on an end-to-end data array obtained in the production flow using a single measuring unit.
In modern metallurgical and foundry industries, intelligent data processing systems are increasingly used. Their implementation makes it possible to perform a number of complex tasks for automation, control and analysis of production processes [11][12][13][14][15][16]. Separately, a class of data classification problems should be allocated, the solution of which either became possible only thanks to the use of intelligent systems, or due to moving the work results to a new qualitative level. An innovative example is the use of image recognition systems for production purposes, the implementation of which is successfully based on artificial neural networks (ANNs) [17][18][19][20].
When solving this problem, the use of the already well-proven principles of ANNs is also proposed. For this purpose, the general formulation of the problem must be subdivided into three stages: (1) Initial data preparation and feature space formation. (2) Classification of the measurements using an ANN.
(3) Thermal analysis of meltings. The aim of the first stage is to form a set of parameters by which the classification will be done. The initial data affecting the classification decision are the temperature values and the measurement time. Depending on the measuring equipment in operation, the user can obtain either a ready-made data subseries (measurement number/time/temperature) or a continuous flow of temperature values recorded with a given discreteness. In the second case, the data must be pre-processed in order to extract the true measurement temperature. For this purpose, various algorithms for analyzing thermograms are used, which make it possible to determine the region of thermal equilibrium [10], the value of which corresponds to the measurement temperature.
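Such a plateau search can be sketched in a few lines. This is only a minimal illustration, not the algorithm of [10]; the window length and the tolerance on the temperature spread are assumed values:

```python
def plateau_temperature(samples, window=5, tol=2.0):
    """Return the temperature of the first thermal-equilibrium plateau:
    the mean of the first run of `window` consecutive samples whose
    spread (max - min) stays within `tol` degrees."""
    for i in range(len(samples) - window + 1):
        run = samples[i:i + window]
        if max(run) - min(run) <= tol:
            return sum(run) / window
    return None  # no stable plateau found

# A stream rising to equilibrium around 1545 deg C:
stream = [900, 1200, 1400, 1500, 1544, 1545, 1546, 1545, 1544, 1300]
print(plateau_temperature(stream))  # -> 1544.8
```

In practice, the window length and tolerance would be chosen from the sampling rate of the device and the noise level of the sensor.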
The initial parameters (time and temperature) are not sufficient for correct classification. To solve this problem, it is necessary to increase the dimension of the feature space, provided that the parameters remain consistent. Thus, the initial array of measurements is supplemented with the differences between the current and previous recordings (Δτ, ΔT), the rate of temperature change (ΔT/Δτ), the serial number of the melting, and a code reflecting the order of measurements within the melting and flagging duplicate measurements.
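The augmentation of the raw series with the derived features can be sketched as follows (a minimal example; the record layout and the sample values are hypothetical):

```python
# Hypothetical raw records: (timestamp in seconds, temperature in deg C)
raw = [(0, 1560), (600, 1575), (640, 1572), (4000, 1490)]

def add_features(records):
    """Augment each record with the differences to the previous one
    (dtau, dT) and the rate of temperature change dT/dtau."""
    rows = []
    prev_t, prev_temp = records[0]
    for t, temp in records:
        dtau = t - prev_t
        dT = temp - prev_temp
        rate = dT / dtau if dtau else 0.0
        rows.append({"time": t, "temp": temp,
                     "dtau": dtau, "dT": dT, "rate": rate})
        prev_t, prev_temp = t, temp
    return rows

for r in add_features(raw):
    print(r)
```

The melting number and the measurement code discussed below are then appended to each row in the same way.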
To assign a melting number, a simple counter is used, which is triggered when the value Δτ exceeds a certain threshold reflecting the interval between meltings. The melting time is guaranteed to exceed any other technological intervals that are its constituent parts. Depending on the technological process, the melting time in foundries can be one hour or more. In addition, the time to finish processing the metal after it has melted should be minimized, and the time between subsequent technological operations rarely exceeds 10-15 minutes. The measurement code is introduced for preliminary identification of a measurement. In this case, the first, last, and duplicate measurements of the melting are allocated. According to the technological regulations, there should be at least two measurements: one in the furnace and one in the ladle. The measurement in the furnace shows the readiness of the metal and serves as a signal for its tapping. The measurement in the ladle confirms the correctness of all technological operations for tapping the metal and preparing the ladle and shows the readiness of the metal for casting. Subsequently, subject to strict adherence to the casting regulations, repeated measurements in the ladle are not required. Thus, there is a high probability that the first measurement of a melting is the one carried out in the furnace, and the last measurement is the one carried out in the ladle.
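The melting counter can be sketched as follows. The threshold value is an assumed placeholder; in practice it is chosen from the plant's melting cycle, above the 10-15 min inter-operation intervals and below the inter-melting pause:

```python
MELTING_GAP = 3600  # assumed threshold, s: intervals longer than this start a new melting

def assign_meltings(dtaus):
    """Assign a melting number to each measurement: the counter is
    incremented whenever the interval to the previous measurement
    exceeds the between-meltings threshold."""
    melting, numbers = 1, []
    for dtau in dtaus:
        if dtau > MELTING_GAP:
            melting += 1
        numbers.append(melting)
    return numbers

# First melting: three measurements, then a 2 h pause starts melting 2.
print(assign_meltings([0, 600, 40, 7200, 500]))  # -> [1, 1, 1, 2, 2]
```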
It should be noted right away that classification is impossible when a melting has only one measurement. Such meltings, carried out in violation of the regulations for monitoring technological parameters, should be considered separately, with identification of the reasons for the violation. In addition, the first measurement of a melting will always be taken as the measurement performed in the furnace.
A duplicate measurement is carried out when there is doubt about the correctness of the readings. For example, if the temperature readings after melting of the metal turn out to be lower than the initial values, it is very likely that the measurement was performed incorrectly and a repeat is required. The reasons for incorrect readings can be related both to the operation of the device and to the measurement technique. A large volume of duplicate measurements indicates either the need to check the equipment or unskilled actions of the steelmaker. The duplicate measurement code is assigned based on the time required to perform a repeat, provided that the interval Δτ is less than the minimum technologically justified interval. When performing a repeat measurement, the steelmaker must replace the thermocouple tip and repeat the measurement. Modern devices allow this to be done in 20-30 s [10]. No significant changes occur in the furnace bath over such a short period, so it is technologically inexpedient to carry out routine measurements with such a frequency. Thus, we can confidently assert that a measurement is a duplicate if the interval is less than 30 s. In the further analysis, the temperature value of the last repeat measurement is considered, while the previous "doubtful" values are ignored.
The measurement classification problem is solved with an ANN having a simple single-layer perceptron architecture [21]. Both threshold and linear activation functions of the network elements are used. Normalizing the values of the input parameters makes the ANN universally applicable. However, the model in this case is abstracted from the processes under study and is difficult to describe in terms of the interaction of the input and output parameters.
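The 30 s duplicate rule described above can be sketched as follows (a minimal example; the series values are hypothetical):

```python
DOUBLE_LIMIT = 30  # s: intervals shorter than this mark a duplicate measurement

def drop_duplicates(measurements):
    """Keep only the last value of each duplicate run: a measurement taken
    less than DOUBLE_LIMIT seconds after the previous one replaces it,
    since the earlier 'doubtful' reading is ignored."""
    kept = []
    for t, temp in measurements:
        if kept and t - kept[-1][0] < DOUBLE_LIMIT:
            kept[-1] = (t, temp)  # duplicate: overwrite the doubtful value
        else:
            kept.append((t, temp))
    return kept

series = [(0, 1560), (25, 1572), (700, 1498)]
print(drop_duplicates(series))  # -> [(25, 1572), (700, 1498)]
```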
If we omit the normalization stage, the model of a small ANN can be analyzed from the standpoint of the process technology; the applied threshold values acquire technological meaning and can be assigned directly from the regulatory documentation.
Consider the operation of the proposed ANN. The values of the features (Table 1) are fed to the input layer of neurons, which are activated in accordance with the corresponding threshold values. As a result, the inner layer of the ANN acquires an activation state, which takes discrete values of 0 and 1 or real values from 0 to 1, depending on the neuron type. In the inner layer, the activation states are multiplied by the appropriate weighting factors, after which the signals are fed to the adder, which acts as the output layer and represents the result of the ANN operation. As noted earlier, the threshold values of neuron activation can be assigned in accordance with the regulations and the physics of the technological process. Determining the weighting factors is the task of ANN training.
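The described forward pass can be sketched as follows. This is only a sketch: the thresholds and weights are hypothetical placeholders (in the article the weights are found by training and the thresholds come from the regulations), and only three of the neurons are shown:

```python
def step(x, threshold=0.0):
    """Threshold activation: 1 when the input reaches the threshold."""
    return 1.0 if x >= threshold else 0.0

TAP_TIME = 900  # assumed "tapping time" threshold, s
DROP_MIN = 30   # assumed minimum temperature drop on tapping, deg C

def classify(dtau, dT, prev_in_furnace, weights):
    """Single-layer perceptron sketch: threshold neurons on the input
    features feed a weighted adder. A negative sum means the measurement
    was taken in the furnace, a positive sum in the ladle, and a value
    near zero means the network is 'not sure'."""
    neurons = [
        step(TAP_TIME - dtau) * (1.0 if prev_in_furnace else 0.0),  # short gap after a furnace measurement
        step(dT),                # heating (dT >= 0): possible only in the furnace
        step(-dT, DROP_MIN),     # sharp drop: characteristic of tapping into the ladle
    ]
    s = sum(w * a for w, a in zip(weights, neurons))
    if abs(s) < 0.1:
        return "uncertain"
    return "furnace" if s < 0 else "ladle"

# Hypothetical weights; in the article they are determined by ANN training.
w = [-1.0, -1.5, 2.0]
print(classify(dtau=1200, dT=-60, prev_in_furnace=True, weights=w))   # -> ladle
print(classify(dtau=1800, dT=15, prev_in_furnace=False, weights=w))   # -> furnace
```

The sign convention of the adder matches the text below: negative output means "furnace", positive means "ladle".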
Consider the logic of assigning the threshold values and the operation of the network neurons. The first pair of neurons reflects the order of measurement during the melting process; in fact, the "measurement code" in Table 1 is the activation state of this pair. The sign of the values fed to the adder after multiplication by the weighting factors indicates the direction in which the balance of the classification process is shifted. Negative values at the output layer indicate that the measurement was performed in the furnace, while positive values indicate a measurement in the ladle. A zero or near-zero value means that the network is "not sure" of its answer; accordingly, the larger the modulus of the value, the higher the "confidence" in the correctness of the classification.
The parameter Δτ is processed by three neurons using the "tapping time" threshold. The value of this threshold reflects the minimum interval required to perform all operations necessary for preparing and tapping the alloy; it is the shortest possible time between measurements performed first in the furnace and then in the ladle. The first neuron is activated when two conditions are met: the value of Δτ does not exceed the "tapping time" threshold, and the previous measurement was performed in the furnace. In this case, the activity of the neuron indicates with a high probability that the measurement was performed in the furnace. The second and third neurons are introduced to counterbalance the first and reflect the remaining options: the interval is greater or less than the threshold value, but the previous measurement was performed in the ladle.
Three neurons are also used to assess the "temperature change" parameter ΔT. The first reflects the fact that tapping the alloy from the furnace into the ladle is accompanied by significant heat losses and a sharp decrease in the alloy temperature. The threshold value in this case depends dynamically on the temperature of the tapped alloy: the higher is the alloy temperature, the higher is the heat loss, and the greater is the modulus of the threshold. To determine its value, it is proposed to use a second-order polynomial, with the alloy temperature from the previous measurement (the temperature of the alloy in the furnace before tapping) as the independent variable. The coefficients of the equation are determined from the actual monitoring data of the technological process. The equation line should trace the minimum values of the alloy temperature drop during tapping from the furnace. Figure 1 plots an example of an equation obtained for the conditions of steel production from EAF-3.
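Such a dynamic threshold can be obtained, for example, with a least-squares quadratic fit. The monitoring data below are hypothetical and are not the EAF-3 data of Fig. 1:

```python
import numpy as np

# Hypothetical monitoring data: furnace temperature before tapping (deg C)
# and the minimum observed temperature drop during tapping (deg C).
T_furnace = np.array([1520.0, 1550.0, 1580.0, 1610.0, 1640.0])
min_drop = np.array([-35.0, -42.0, -51.0, -62.0, -75.0])

# Second-order polynomial giving the dynamic threshold of the
# temperature-drop neuron as a function of furnace temperature.
threshold = np.poly1d(np.polyfit(T_furnace, min_drop, deg=2))

T_prev = 1595.0  # alloy temperature from the previous (furnace) measurement
print(threshold(T_prev))  # activation threshold for the "sharp drop" neuron
```

The fitted curve reproduces the stated trend: the higher the furnace temperature, the larger the modulus of the expected drop.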
ΔT takes a positive sign when the alloy is heated, which is technically possible only in a furnace; hence, the second neuron is activated if ΔT is greater than or equal to zero. The remaining interval, from the minimum value of the temperature drop to zero, is expressed through the activation of the third neuron. Thus, four input parameters and at least eight neurons are used to solve the problem. The number of neurons can be increased when additional features of the technological process are allocated. Thus, when implementing this ANN for the conditions of tapping the alloy from the furnace in two steps, a neuron was introduced to estimate the ladle turnover time. To confirm that a measurement was made in the second ladle, the threshold value "minimum ladle turnover time" is applied: when the alloy is tapped in two stages, measurements in the second ladle can be made only after the casting operations are completed and the ladle is returned to tap the alloy remaining in the furnace. Accounting for these additional conditions increased the accuracy of the classification process, reducing the number of uncertain solutions.
The result of the network operation is a list of measurements attributed to the sites where they were performed. Obviously, the key parameters that determine the attribution of a measurement are the temperature change and the time interval; the other factors play a supporting role. To assess their contribution and the advantage of using an ANN in the classification process, the results were presented in the two-dimensional space (ΔT, Δτ) (Fig. 2).
The vertical solid line in Fig. 2 represents the "tapping time" threshold value, and the vertical dotted line represents the "minimum ladle turnover time" threshold. The frame marks the region of uncertainty of the temperature drop. These boundaries should divide the graph space into measurement areas. However, it is clearly seen that a small fraction of the markers penetrates into the conventionally selected neighboring regions. The volume of such inconsistencies is about 15%, which demonstrates the advantage of using an ANN that takes the auxiliary parameters of the process into account.
Analysis of the thermal regime is carried out on the basis of a summary table of statistical analysis of the melting section operation results for the period under consideration (Table 2). This presentation of the ANN operation results makes it possible to assess the observance of technological discipline at the site and to use it in identifying the causes of casting defect formation.
The efficiency of the neural network was evaluated by the volume of uncertain measurements, which amounted to no more than 10%. Analysis of the uncertainty cases made it possible to establish that they are mainly caused by deviations of the melting and tapping conditions from the technologically established regulations. Thus, the network operation results can additionally be used to detect violations of the technological process.
An undoubted advantage of using ANNs is also a high potential for increasing the efficiency of the classification process when the vector of features is expanded with parameters directly or indirectly related to the melting process, such as the electrical characteristics of the melting unit operation.

CONCLUSIONS
The proposed method for the classification of temperature measurements makes it possible to minimize expenses when organizing automated monitoring of the thermal modes of alloy melting in foundries.
The use of ANNs for solving the problem of classifying temperature measurements makes it possible to take into account the influence of indirect process factors, which provides a high degree of correspondence (about 90%) of the analysis results with the actual situation at the working sites.