A decision support tool for operational planning: a Digital Twin using simulation and forecasting methods

de Itajubá, Itajubá, MG, Brasil. *chenrique.santoss@gmail.com

Abstract

Paper aims: Propose a continuous decision support system, a Digital Twin, integrating two widely used techniques, Discrete Event Simulation and forecasting methods.

Originality: With the evolution of the industry, there is a growing need for increasingly agile and assertive decision support systems. Also, familiar tools and techniques tend to change over time to suit such a scenario, supporting new research on their use in the modern industry.

Research method: The proposed method allows the use of simulation, with the aid of forecasting methods, for continuous decision making, composing the so-called Digital Twin. The method was applied in a real process to validate it.

Main findings: The Moving Average, Single Exponential Smoothing, and Double Exponential Smoothing forecasting methods were used to supply the simulation model in order to test scenarios and guide decision making. The developed system enabled a virtual copy with a certain degree of intelligence that provides answers to support decision making.

Implications for theory and practice: The proposed method can be used for several operational problems, like headcount and production planning, and covers different levels of decision.


Introduction
Companies are experiencing an era of constant challenges in terms of adapting their processes to new technological developments and increasingly demanding market requirements (Vijayakumar et al., 2019). For Beregi et al. (2018), the existence of a Digital Twin is one of the most important requirements for dealing with problems that occur in the operation of a manufacturing system, providing decision support at the planning and execution levels. The Digital Twin can be understood as a virtual copy with a certain degree of intelligence, connected with real systems through its data and aiming to optimize the decision-making process (Wright & Davidson, 2020). To obtain such virtual copies, companies may resort to various techniques, such as the use of commercial platforms and integrated solutions of several tools, among others (Tao & Zhang, 2017; Mourtzis, 2020). In this case, simulation stands out as a key factor in the design of the so-called Digital Twin, precisely because of its capacity for virtualization and decision support (Rodič, 2017; Terkaj et al., 2019).

Digital Twins and their role in decision-making support
The Digital Twin concept was created by Shafto et al. (2010) and referred primarily to virtual copies of physical systems belonging to the North American aerospace context through NASA. According to its first definition, the Digital Twin is a multi-physical, multi-scale, and probabilistic system simulation tool that uses automatically collected real data to mirror physical behavior through a virtual model, aiming to evaluate and recommend changes to optimize the real systems. Tao et al. (2018) emphasize that the main characteristics of the Digital Twin are the ability to mirror and optimize the physical system in a synchronized and faithful manner through the connection with several data sources. In recent years, the Digital Twin concept has been incorporated into industrial activities and is among the key factors for the design of the industry of the future (Wright & Davidson, 2020).
The Digital Twin can be characterized by three main components (Tao et al., 2018): (1) physical systems, which we want to mirror; (2) virtual systems, which represent the physical in a detailed and sufficient manner; (3) synchronism between both systems. Figure 1 illustrates the Digital Twin structure. For Lu et al. (2019), the Digital Twin has proved to be a practical method to integrate the physical and virtual world of manufacturing operations and to support manufacturing strategies in terms of more efficient and intelligent decision-making. When referring to the industrial context, the Digital Twin can provide guidelines for decision-making at both the management and the shop-floor level (Wright & Davidson, 2020). In this case, Tao & Zhang (2017) proposed the Shop-floor Digital Twin, as a Digital Twin focused on operational decisions regarding the planning and monitoring of factory floor activities and operations.
Despite being widely disseminated in recent years, Wright & Davidson (2020) note that the Digital Twin has been applied in several ways and its characteristics may vary. Therefore, similar sectors may present different versions and approaches to the concept. In this case, Alam & El Saddik (2017) highlight that the Digital Twin can be used for monitoring, diagnosis, and prognosis of processes, and it is expected that some characteristics will depend on the function performed by the Digital Twin. Although initially conceived as a virtual model capable of connecting in real-time with sensors and technological devices, Wright & Davidson (2020) highlight that, for some applications, real-time is less critical and "real-time" can mean hours instead of seconds. Alam & El Saddik (2017) agree, reinforcing that, in cases where data collection via sensors is not possible or feasible, as in the case of manual processes, we can consider a near real-time approach for Digital Twins. Regarding the connectivity of Digital Twins with real data, we can notice connections with databases and cloud data (Alam & El Saddik, 2017), Enterprise Resource Planning (ERP) systems (Steringer et al., 2019), sensors (Lu et al., 2019), Programmable Logic Controllers (PLC) (Vijayakumar et al., 2019), among others, to guide them in mirroring the real system.

Simulation as a Digital Twin
According to Law (2014), simulation is one of the most used techniques in the area of Operational Research, with numerous applications such as the design and analysis of manufacturing systems, financial system assessments, and the design and analysis of transport, service, and military systems, among others. Banks et al. (2010) also highlight that the behavior of systems can be analyzed through simulation models, built from observations and inferences. According to the authors, such inferences can be expressed by mathematical, logical, and symbolic techniques and relationships. With the advent of computational resources, the use of simulation grew quickly, gaining new resources and fields of action over the years (Goldsman et al., 2010). Fishman (2001) reports that modeling complex systems has become a means of survival for many areas and, in this sense, modeling can provide decision-making answers at relatively low cost, a fact that drives the wide utilization of the technique. In this case, due to the increasing complexity of the systems, simulation has become an indispensable technique for their modeling.
Regarding the use of simulation in manufacturing operations, Negahban & Smith (2014) report the need for more efficient techniques to cope with the increasing complexity of manufacturing operations. The emergence of hybrid approaches, where simulation is coupled with one or more techniques, is extremely important. However, despite the great applicability of simulation in various areas and contexts, Rodič (2017) reports that simulation, like any other tool inserted in this context, will undergo modifications in the context of the modern industry. Uriarte et al. (2018) claim that simulation will become an even more important technique, being a key technology in the context of the industry of the future. In the current scenario, there is a need to obtain increasingly efficient manufacturing systems, and simulation has proved to be a powerful technique to design and evaluate them due to its low cost, speed, and low analysis risk (Mourtzis, 2020). For the authors, simulation is living the Digital Twin era, favoring constant support to decision-making. For Rodič (2017), simulation can fulfill the role of the Digital Twin in the modern industry. Different from traditional simulation applications, where specific analyses with limited scope prevail, simulation as a Digital Twin refers to its connection with the real process, aiming at constant analysis to aid decision-making. Uriarte et al. (2018) agree that virtualization is directly linked to the Digital Twin and, in this context, simulation plays an important role in achieving such virtualization. Kritzinger et al. (2018) complement that the Digital Twin explores the interface between the physical and the virtual, based on the real environment through its data and using tools such as simulation to evaluate scenarios and support decision-making.
Digital Twins can provide a drastic change in the decision-making life cycle, since the simulation model is expected to be used continuously in parallel with the real manufacturing system, supporting decision-makers (Beregi et al., 2018).
In the literature, it is possible to highlight several applications involving the use of simulation as a Digital Twin (Steringer et al., 2019; Terkaj et al., 2019; Lu et al., 2019; Vijayakumar et al., 2019). However, although the Digital Twin is a highly promising technology that can greatly affect the digitized factories of the future, it is still not mature, and not even its full capacity has been explored in the literature (Mourtzis, 2020). There is a need for research addressing applications involving integration with tools and technologies aimed at the continuous improvement of Digital Twins.

Forecasting methods and techniques
Forecasting methods predict future values based on a given time series dataset, making assumptions about the future by evaluating historical data. They can be applied in many areas for the decision-making process, such as operations management, marketing, finance and risk management, economics, industrial process control, and demography. In operations management, the purpose is to schedule production, control inventory, manage the supply chain, and plan capacity (Montgomery et al., 2008). Arvan et al. (2019) report that forecasts are a critical input for decision-making related to supply, purchasing, production, inventory, logistics, finance, among others. In the literature, there is a large scope of forecasting methods, involving applications in headcount planning (Safarishahrbijari, 2018), product demand (Teunter et al., 2011), stock safety planning (Beutel & Minner, 2012), among others.
As far as forecasting methods are concerned, several are available for use, and the choice will depend on the characteristics of the data and the desired level of accuracy. In this context, the Moving Average (MA), according to Montgomery et al. (2008), is one of the simplest and most widely used forecasting methods. It predicts values based on a fixed-size window of data over time, rolling the window forward by adding a new value while dropping the oldest. This window size is also called the "span" or "length", defined by the interval over which the moving average is computed. For the authors, the MA has less variability than the original observations and can be used to predict values through Equation 1.

$M_t = \frac{y_t + y_{t-1} + \dots + y_{t-N+1}}{N}$ (1)

In which y is the observed time series and N is the length of the MA at period t.
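As a minimal illustration of Equation 1, the moving-average forecast can be sketched in Python. The function name and the example demand values are hypothetical, not taken from the study:

```python
def moving_average_forecast(series, span):
    """Forecast the next value as the mean of the last `span` observations,
    mirroring Equation 1: M_t = (y_t + y_{t-1} + ... + y_{t-N+1}) / N."""
    if len(series) < span:
        raise ValueError("series is shorter than the moving-average span")
    window = series[-span:]        # the most recent N observations
    return sum(window) / span      # their arithmetic mean is the forecast

# Hypothetical daily demand of one kanban station
demand = [12, 15, 11, 14, 13, 16, 13]
print(moving_average_forecast(demand, span=3))  # forecast for the next day
```

Note that a larger span smooths the series more but reacts more slowly to changes in demand, which is the usual trade-off when choosing N.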
On the other hand, the so-called Exponential Smoothing methods are capable of predicting by extracting the level, growth, and seasonal components from the observed time-series data (Matsumoto & Komatsu, 2015). In this context, some fluctuations are smoothed by the weighting of the data, with an exponential weighting factor applied; recent data receive greater weight. Moreover, according to the authors, exponential smoothing can be classified into three types: Single, Double, and Triple. In Single Exponential Smoothing (SES), only the "level" component is taken into consideration. In Double Exponential Smoothing (DES), the "level" and "growth" components are taken into consideration. Finally, Triple Exponential Smoothing (TES) takes into account, in addition to the other two components, the "seasonal" one. These components and the predicted value are obtained by the following equations:

$\ell_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(\ell_{t-1} + b_{t-1})$ (2)

$b_t = \beta (\ell_t - \ell_{t-1}) + (1 - \beta) b_{t-1}$ (3)

$s_t = \gamma (y_t - \ell_{t-1} - b_{t-1}) + (1 - \gamma) s_{t-m}$ (4)

$\hat{y}_{t+h|t} = \ell_t + h b_t + s_{t+h-m(k+1)}$, with $k = \lfloor (h-1)/m \rfloor$ (5)

In which y is the observed time series, m is the length of seasonality, $\ell_t$ represents the level of the series, $b_t$ denotes growth, $s_t$ is the seasonal component, and $\hat{y}_{t+h|t}$ is the forecast for h periods ahead of t based on all data up to time t. The $\alpha$, $\beta$, and $\gamma$ are the smoothing parameters of the models. In addition to the methods previously presented, there are other, more complex ones that can take other variables into account for greater forecast accuracy.
In this case, there are specific applications where they are most suitable. Other methods such as ARIMA, Neural Networks, variations of Exponential Smoothing methods, among others, are widely used in the literature (Balestrassi et al., 2009;Moon et al., 2012;Poloni & Sbrana, 2015;Rego & Mesquita, 2015;Matsumoto & Komatsu, 2015;Tratar et al., 2016). However, since it is difficult to measure and evaluate the ability of companies and organizations to apply forecasting techniques effectively, Kalchschmidt (2012) attests that using simpler techniques such as Moving Average and Exponential Smoothing is often preferable. The author defends the tendency to keep forecasting methods as simple as possible.
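The SES and DES recursions discussed above can be sketched as follows. Initializing the level with the first observation and the trend with the first difference is a common simplification, assumed here only for illustration:

```python
def ses(series, alpha):
    """Single Exponential Smoothing: tracks only the level component."""
    level = series[0]                       # simple initialization choice
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level                            # one-step-ahead forecast

def des(series, alpha, beta):
    """Double Exponential Smoothing (Holt): tracks level and growth."""
    level, trend = series[0], series[1] - series[0]
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend                    # one-step-ahead forecast

demand = [10, 12, 13, 15, 16, 18]           # hypothetical demand series
print(round(ses(demand, alpha=0.5), 3))
print(round(des(demand, alpha=0.5, beta=0.5), 3))
```

For the trending series above, DES extrapolates the growth component and therefore forecasts a higher value than SES, which lags behind the trend.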
Considering the use of more than one forecasting method, there are several ways to compare them. In this context, Hyndman & Koehler (2006) point out that, when comparing different methods, one of the most commonly used metrics is the Mean Absolute Percentage Error (MAPE). In addition to MAPE, the Mean Absolute Deviation (MAD), and Mean Squared Deviation (MSD) can also be used to compare two or more methods and these metrics are present in several statistical software, such as Minitab®.

Proposed approach
The proposed approach provides a constant decision-making aid system, a Digital Twin. This system, represented by the composition of the dashboard with the forecasting and simulation models, should be able to obtain data from the operation's ERP, transform this data into information through forecasting and simulation techniques and, finally, suggest decision metrics. The object of this study concerns a process of supplying materials in kanban stations of an aeronautical industry. The historical materials demand of each of the kanban stations will be the main information obtained from ERP. Moreover, the dashboard must have an interface capable of forecasting the future demand of each kanban station through the widely used forecasting methods, the Moving Average, Single Exponential Smoothing, and Double Exponential Smoothing methods. Finally, the dashboard opens and runs the simulation model automatically, allowing the evaluation of all possible scenarios aiming at a better operational decision. Figure 2 illustrates the proposed architecture for this work.

Case study
The next topics will address the application of the framework suggested, shown in Figure 3, in a real study object.

Definition of the study object and system objectives
The object of the study refers to a process based on the supply of materials consumed by four production lines. Each line has a small local stock, called a "kanban station". As the materials are consumed, there is a need for replenishment by a logistics operator, who collects the materials from the central stock and supplies them to the kanban stations. The production plant has a relatively large built-up area, allowing the logistics operators to choose one of numerous possible supply routes. The choice of which supply route to follow varies according to the consumption of each of the production lines and, in this case, an aid tool to choose the best route can save time, directly impacting daily productivity. Moreover, since it is a highly variable demand process, many times one operator alone is unable to meet daily demand. Meeting demand within a maximum Lead Time of one day is one of the operational requirements and therefore requires proper planning as to the number of employees needed to meet daily demand. It is worth mentioning that, although several works in the literature explore Kanban systems, the proposed approach brings a new perspective on decision-making regarding operational planning. In this way, decisions regarding headcount and supply routes are more accurate and constantly updated. Finally, it is important to mention that this work proposes a complementary approach to traditional Kanban systems. Figure 4 illustrates the floor plan of the company object of this study, highlighting the kanban stations present in each production line.

The real process records the materials consumed directly in the operation's ERP system. Such registration occurs at the moment of consumption by the production lines, supplying the process database. The dashboard automatically accesses the standard ERP system reports and analyzes the data to transform it into information, allowing the forecasting methods to be executed automatically. The dashboard then compares the results obtained through the forecasting techniques to choose the result with the lowest error, based on three metrics: Mean Absolute Percentage Error (MAPE), Mean Absolute Deviation (MAD), and Mean Squared Deviation (MSD). Finally, based on the expected demand, which is stratified for each of the kanban stations, the system automatically opens and runs the simulation model. The simulation model tests the possible supply route scenarios based on the expected demands and provides the results for the Supply Lead Time and the number of resources (operators) required for the Supply Lead Time to equal one day, the ideal scenario for the operation. These analyses are possible thanks to an optimizing tool integrated with the simulation software. Finally, the dashboard provides the decision-maker with important information, such as the expected demand for each of the kanban stations, the best route to follow to minimize the distance traveled and, consequently, the delivery time, as well as Supply Lead Time metrics and the optimal headcount to meet demand in one working day. By automating the system with the aid of VBA-programmed subroutines, we can offer a user-friendly interface, making the decision-making process more efficient, accurate, and assertive, without the need for highly skilled people with extensive technical knowledge. Moreover, it is important to highlight that the dashboard can operate over wireless networks, favoring its use and allowing greater versatility to the proposal.

To enable such architecture, we proposed to build the system in three major phases: (I) Simulation model construction, (II) Forecasting model construction, and, finally, (III) Integration and Decision Support interface construction. The first step is the simulation model construction and validation. Then, it is possible to start the construction of the forecasting models, which should allow, based on historical data, forecasts that are sufficiently good to supply the simulation model. Finally, the last step is to create an interface that enables the communication between the real environment, the forecasting model, and the simulation model, allowing decision-making support for the end-user. The last stage of the proposed approach is constant decision-making, and it is exactly this stage that characterizes the Digital Twin. Figure 3 illustrates the described research structure in more detail. Regarding the objectives of the proposed system, we should pay attention to the desired answers from the decision-makers' point of view. Since the operational decisions, in this case, are related to the number of logistics operators needed to meet daily demand, as well as the most efficient route to follow, such responses make up the system objectives. We can summarize the objectives as follows:

- Building a Digital Twin, consisting of a simulation model and forecasting techniques, able to connect with the real process and support systems, such as ERP, to direct and support the operational planning of a materials supply process. Different from traditional simulation models, it is worth noting that we proposed a constantly updating model, aiming at its adaptation to the real process;
- Providing a constant decision aid tool regarding the planning of the supply route and headcount, aiming for more efficient decisions and meeting the demand according to the premises required by the company, i.e., a delivery Lead Time of a maximum of one day.

Simulation model construction

For the simulation model construction, according to Chwif & Medina (2015), we can use three computational representation techniques: programming language, simulation language, or simulation software. In this case, we chose the simulation software due to the more interactive modeling, as well as the greater graphic detailing of the model. Rodič (2017) highlights, among other factors, the high graphic resolution of the computational model and the high level of detail as essential points for the adequacy of simulation in the context of the Digital Twin.
Given the need to build models that increasingly exploit the graphic and visual resources, we decided to use the FlexSim® software. This choice was also due to its main features such as the possibility of building 3D models, an extension for Virtual Reality, among others. It was necessary to import some 3D creations, representing aircraft and equipment, already built and available on the 3D Warehouse® platform. It is noteworthy that the type of simulation chosen refers to the so-called Discrete Event Simulation and concerns the modeling of systems that evolve instantly at separate points in time (Law, 2014). Uriarte et al. (2018) reveal that this type of simulation is the most popular simulation technique to support decision-making in manufacturing systems.
Furthermore, it should be noted that the simulation model was built in such a way that it is updated periodically according to the demand of the kanban stations. When considering an adaptive model, some disadvantages of traditional simulation can be avoided, such as the high time and effort needed to modify models in an attempt to update them (Rodič, 2017). Figure 5 illustrates the computational model built in 3D.

Simulation model validation
For Sargent (2013), the validation of the computational model consists of proving that the model, in its scope of application, presents results with satisfactory accuracy. The first step for model validation is to choose the validation parameter. In this context, it should be noted that the material supply process is carried out periodically. Therefore, the logistics operator starts the process by collecting materials in stock and proceeds to supply the production lines. After finishing the supply, the operator returns to stock and starts the material collection process again, closing the first supply round. To validate the model, the unit supply time (UT) was chosen as the validation parameter. To obtain the UT, the total time of each supply round was divided by the total materials supplied in the round, as shown in Equation 6.

$UT(s) = \frac{\text{Total round time}}{\text{Quantity of materials supplied in the round}}$ (6)

Thus, the model was validated by comparing the real UT with the simulated UT. For this experiment, the simulation model was preprogrammed to run 17 replications in each round, providing one-minute precision at the desired 95% confidence level (Chwif & Medina, 2015). Regarding the real data, the times of fifteen supply rounds were collected, a number capable of providing a test power of 0.8. Finally, the fifteen rounds were simulated using the created model, each round with the 17 replications previously defined, allowing a hypothesis test to verify the model's validity. To compare the real operation data with the results of the model replications, we chose the analysis of variance (ANOVA). Since the P-value of 0.584 is above 5%, it can be stated with a 95% confidence level that no significant difference between the real data and the simulation data could be proved (Montgomery & Runger, 2012). Figure 6 illustrates the boxplot comparing the real data and the simulation.
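The validation logic can be illustrated with a short sketch: Equation 6 gives the unit supply time, and a one-way ANOVA F statistic compares the real and simulated samples. The UT values below are invented for illustration, and the p-value lookup (done in the paper with statistical software) is omitted:

```python
def unit_supply_time(total_round_time_s, materials_supplied):
    """Equation 6: UT = total round time / quantity of materials supplied."""
    return total_round_time_s / materials_supplied

def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA comparing the means of the groups."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical real vs. simulated UT samples (seconds per material)
real_ut = [31.2, 29.8, 30.5, 31.0, 30.1]
sim_ut = [30.7, 30.2, 31.1, 29.9, 30.4]
print(round(one_way_anova_f(real_ut, sim_ut), 4))
```

A small F statistic (p-value above 0.05) means the group means cannot be distinguished, which is the outcome that supports model validity here.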

Definition of inputs and outputs required for the model
The simulation model is expected to change over time from changes in the real process, allowing the simulation to be always up to date and to reflect the real environment, acting as a Digital Twin. Zhong et al. (2017) point out that connected systems, as proposed, enable communication through data inputs and outputs, allowing interaction between the physical environment and a Digital Twin response. For the authors, such digital mirroring makes up the current and future industrial landscape.
As the demand for materials at each kanban station is the key factor for real process changes, this parameter was chosen as the information needed to update the simulation model. Thus, the model was programmed to receive the expected demand for materials from each kanban station, aiming to enable analysis and assist in decision-making.
As for the outputs needed to support the decision, two main answers were defined. The first refers to the most efficient supply route. In this case, choosing the most efficient route saves time and directly impacts process productivity. Thus, the distance traveled by the logistics operator, as well as the number of materials supplied in one day, must be taken into account. The second answer required from the model concerns the number of employees needed to meet demand within a maximum of one day, considering a scenario where a supply round needs more than one day to be completed with only one operator. In this case, the simulation shall provide the supply Lead Time for just one operator and, when necessary, the number of operators required if the Lead Time is greater than one day. Table 1 summarizes the model inputs and outputs.

Selection of initial data for the construction of forecasting methods
The first step of the Forecasting Phase refers to the selection of the initial data needed to construct the forecasting methods. In this case, the objective is to verify the behavior of the data over time aiming at the best selection of the forecasting methods to be constructed. Thus, data were collected regarding the daily demand of each of the kanban stations, considering a hundred days. Since such data are only for initial analysis to guide the choice of the most appropriate forecasting methods, this time period was considered sufficient. Table 2 illustrates how data is structured in the operation's database.  Once collected, it was possible to obtain the demand curve for each of the kanban stations, as shown in Figure 7. Demands appear to oscillate around a fixed point, a characteristic that, according to Montgomery et al. (2008), suggests stationary series. For the authors, a stationary series implies a behavior of statistical stability, and such characteristic is extremely important for the definition of the next steps aiming at the selection of the most adequate forecasting methods. To help with this analysis, Figure 8 presents the ACF (Autocorrelation Function) curves considering lags from 1 to 25. The ACF analyzes the dependence of a point on its previous one. In this sense, considering the presented ACFs, we concluded that the demands represent stationary series, whose samples are uncorrelated. This means that the demand in a period does not allow conclusions about the next values (Montgomery et al., 2008).
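The stationarity check described above can be approximated with a hand-rolled sample ACF; the ±1.96/√n limits are the usual large-sample significance bounds. This is a simplified sketch, not the statistical-software computation used in the study:

```python
import math

def acf(series, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((y - mean) ** 2 for y in series)   # n times the lag-0 autocovariance
    return [
        sum((series[t] - mean) * (series[t - k] - mean) for t in range(k, n)) / c0
        for k in range(1, max_lag + 1)
    ]

def looks_uncorrelated(series, max_lag):
    """True if every ACF value lies inside the +/-1.96/sqrt(n) bounds."""
    bound = 1.96 / math.sqrt(len(series))
    return all(abs(r) <= bound for r in acf(series, max_lag))
```

For a trending series the lag-1 autocorrelation is large and `looks_uncorrelated` returns False; demand series whose ACF values all fall inside the bounds, as in Figure 8, support the uncorrelated, stationary reading.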

Definition of appropriate forecasting methods
Once the main characteristics of the demand data for each kanban station were understood, the next step was to define the appropriate forecasting methods for estimating future demand values. Thus, we tested three smoothing methods to forecast demand: Moving Average (MA), Single Exponential Smoothing (SES), and Double Exponential Smoothing (DES). It should be noted that the chosen methods are not necessarily the best possible concerning forecasting errors. However, the characteristics of this proposal, which uses forecasting as a technique to support the simulation, and the simplicity in the construction of such methods justify their choice.
It is noteworthy that, as the purpose of this paper is the creation of a continuous use tool for decision support, it must be ensured that the forecasting methods are also continuously tested. Since the methods will provide different prediction values, there must be a logic that compares the methods and chooses the best result. Therefore, we decided to compare the methods as to their respective MAPE, MAD, and MSD. Figure 9 illustrates this logic for more assertive decision-making regarding the best forecasting method.
$MAPE = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$   $MAD = \frac{1}{n}\sum_{t=1}^{n}\left|y_t - \hat{y}_t\right|$   $MSD = \frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2$

In which $y_t$ is the real value at time t, $\hat{y}_t$ is the forecast value at time t, and n is the number of observations.
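A minimal sketch of the three error metrics and the comparison logic of Figure 9 follows. Ranking primarily by MAPE, with MAD and MSD as tie-breakers, is an assumption made here for illustration, since the text does not specify how the three metrics are combined when they disagree:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 / len(actual) * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def msd(actual, forecast):
    """Mean Squared Deviation."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def best_method(actual, forecasts_by_method):
    """Pick the method minimizing MAPE (ties broken by MAD, then MSD)."""
    return min(
        forecasts_by_method,
        key=lambda name: (mape(actual, forecasts_by_method[name]),
                          mad(actual, forecasts_by_method[name]),
                          msd(actual, forecasts_by_method[name])),
    )

actual = [10, 12, 11, 13]   # hypothetical observed demand
forecasts = {"MA": [11, 11, 12, 12], "SES": [10, 12, 11, 14], "DES": [9, 13, 10, 15]}
print(best_method(actual, forecasts))
```

The same three metrics are reported by common statistical packages, so the spreadsheet results can be cross-checked against them, as done in the validation step.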

Forecasting models construction and validation
For the forecasting model construction, the first decision was to choose the structure to be used for this purpose. In this case, commercial software and programming code can be used to formulate the methods. However, since demand forecasting will be a daily and continuous task, a way to create forecast subroutines should be sought, able to change input data, redo the prediction, and provide updated results. Moreover, such a routine should not require the decision-maker to know the method's rules or the subroutine, nor require the purchase or preparation of specific software packages. Taking these considerations into account, we decided to build the models using Microsoft Excel® software, with the aid of subroutines programmed in a language already present in the software package, Visual Basic for Applications (VBA). Thus, first of all, the three forecasting models (MA, SES, and DES) were built for each of the kanban stations.
Finally, to validate the models, the prediction results obtained through the Excel sheet were compared with the predictions obtained through the statistical software Minitab®. For this comparison, the demand data initially collected and presented in section 4.5 were used. Since the results were equal for the predicted values as well as for the MAPE, MAD, and MSD errors, we can affirm the validity of the built models.

Planning and implementation of an integration and decision support interface
The interface that establishes the connection between the real process and the created system can be constructed and implemented in different ways, depending on the connectivity characteristics of the real systems. Thus, we chose a management dashboard built using Microsoft Excel® software. Moreover, VBA subroutine programming was also used to automate interface commands. The choice of Excel is justified by the versatility of the software, which is widely used and has easy connectivity with the real systems, the simulation model, and the forecasting models. The dashboard initially functions as a database, importing the demand history of each of the kanban stations through reports from the operation's ERP system. Then, using another command on the dashboard, the user can run the forecasting models and obtain demand forecasts for each kanban station. Finally, the dashboard updates the simulation model with the expected demands and simulates the scenarios through an optimization algorithm already present in the simulation software. This way, with only simple commands from the decision-maker, the created system provides suggestions for more assertive decision-making. The dashboard, through buttons and interactive graphics, eliminates the need for a person who has mastery of the created system and the tools that compose it, becoming a valuable tool for decision support at several hierarchical levels. This whole process is performed semi-automatically, simply by selecting the button corresponding to the desired option in the dashboard itself. Figure 10 illustrates the created dashboard as well as its main elements.
a) Control Buttons: using these buttons, the user can import the demand history from the ERP system, run the forecasting models, run the simulation model, and finally, update the dashboard results for more assertive decision-making;
b) Forecasting Results: after updating and running the forecasting models, the dashboard provides the forecast demand for each kanban station. It is noteworthy that the results come from the forecasting method with the lowest error, chosen from the comparison between the MA, SES, and DES models. Through the demand curves, it is possible to verify the behavior of the demand over the last days and the demand forecast for the current day;
c) Simulation Results: after updating and running the simulation model, it is possible to make decisions based on its results. The "Simulation Results" area of the dashboard presents the decision-maker with the most efficient supply route to follow, taking into account the route time and the amount of materials delivered. The dashboard also provides an estimate of the Lead Time for an operator to deliver the materials and, when this Lead Time exceeds the one-day logistics goal, the headcount required to meet the one-day Lead Time.
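The lowest-error selection among the three forecasting models can be sketched as below. This is a hedged illustration, not the dashboard's VBA code: the window size, smoothing parameters, and backtest split are illustrative choices, not the values used in the real process.

```python
# Illustrative one-step-ahead comparison of MA, SES and DES forecasts.
# The method with the lowest MAPE over a backtest of the history "wins".

def ma_forecast(series, window=3):
    # Moving Average: mean of the last `window` observations
    return sum(series[-window:]) / window

def ses_forecast(series, alpha=0.3):
    # Single Exponential Smoothing: level updated observation by observation
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def des_forecast(series, alpha=0.3, beta=0.1):
    # Double Exponential Smoothing (Holt): level plus trend component
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend  # one-step-ahead forecast

def best_method(series):
    # Backtest: refit each method on the data seen so far and score its
    # forecast for the next observation; return the lowest-MAPE method.
    methods = {"MA": ma_forecast, "SES": ses_forecast, "DES": des_forecast}
    errors = {}
    for name, f in methods.items():
        errs = [abs((series[t] - f(series[:t])) / series[t])
                for t in range(4, len(series))]
        errors[name] = 100.0 * sum(errs) / len(errs)  # MAPE (%)
    return min(errors, key=errors.get)
```

In the real dashboard this comparison runs per kanban station, so different stations may be served by different forecasting methods on the same day.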
Furthermore, it is worth mentioning that all dashboard automation was performed using the "SendKeys" command available on the VBA platform. In this way, it is possible to automate routines by making the Excel® spreadsheet interact with other software and platforms simply and effectively. The routine performed by each control button on the dashboard is summarized in Figure 11.

Constant decision-making based on the results of the system created
To enable constant decision-making, the management dashboard is expected to be operated daily. Thus, the forecasting and simulation models should be executed at predetermined periods so that the main characteristics of the real process remain represented. In this way, it was possible to build a virtual system that reflects the real environment through its data, enabling the created Digital Twin to adapt to changes in the physical system, aiming to predict and optimize decisions for the real system. Concerning the time interval between dashboard updates, it should be noted that the material supply process runs throughout the daily workday and, at the end of each supply round, the process resumes. Therefore, at the end of each round, a new decision about the operation planning is made, and this is when the Digital Twin is operated again. Finally, the Digital Twin does not necessarily need to be a real-time operating system, in agreement with Kunath & Winkler (2018); the present work proposes a Digital Twin that operates in near real-time.
In the material supply process, the decision-makers are the operation coordinator and the logistics operator(s). The coordinator is responsible for defining the number of operators for the supply round and, in some cases, also for informing the production lines of the estimated supply Lead Time. Therefore, the operation coordinator is responsible for updating the Digital Twin. In addition, the management dashboard is shown on a display on the shop floor, and the operator(s) should check the most efficient supply route before starting the supply process. Figure 12 illustrates the decision-making process through the proposed Digital Twin.

Results and discussions
It is important to justify the proposed Digital Twin in light of the chosen study object, since there are only 4 kanban stations, which results in only 24 different route options.
Moreover, the operation's headcount decision is relatively simple when demand is balanced. However, when considering uncertain and variable demands, correct and efficient decision-making is fundamental. Furthermore, the complexity of this problem grows quickly as kanban stations are added: with one or two additional stations, the number of possible routes rises to 120 and 720, respectively. Therefore, there is a need to seek solutions to assist in decision-making and, through optimization via simulation, we can obtain good solutions in time for the decision. In this sense, the simulation model was built to allow easy and quick modification to add more kanban stations. Thus, the objective is not only to solve a current issue but also to provide a basis for the growth of the process with respect to operational decision-making. The possibilities highlighted in this article were not analyzed in terms of the required computational power or the time spent on the analyses. In this case, further research is necessary, highlighting the use of auxiliary techniques such as Artificial Intelligence to obtain better results.
To highlight the possible gains expected from the proposed Digital Twin, some analyses were performed. The decision related to the headcount aims to ensure that the materials demand is met, while the decision related to the supply route aims to make the process more efficient; therefore, the expected gain is related to the supply routes. An experiment was carried out over 12 supply rounds in order to collect real data and compare them with the Digital Twin results. The demands of each round were similar. The distance traveled in each supply round was collected, considering the routes chosen without the aid of the Digital Twin. Each supply round was then repeated on the Digital Twin to obtain the best possible distance to be traveled. For this experiment, the forecasting models were temporarily disabled and the Digital Twin was updated each round with the real demands. Furthermore, the load capacity of the operator was not considered, since it is not a critical feature of the process. When deciding on more than one operator, the decision-maker aims to carry out more than one supply round at the same time. The experiment indicated a potential reduction in the operator's movement when following the Digital Twin guidelines. Table 3 summarizes the quantitative results. The financial impact is presented as a percentage, given the confidentiality of the process information.

Conclusions
Through the present work, we explored the use of integrated tools aiming at the creation of a continuous decision support system, a Digital Twin. For its conception, we used Discrete Event Simulation to describe and optimize the behavior of a process, with the aid of forecasting models based on the Moving Average, Single Exponential Smoothing, and Double Exponential Smoothing methods. The objective of the Digital Twin is to assist in operational planning through its integration with the real process. In this case, the forecasting models act to predict certain behaviors of the real system and provide more accurate inputs to the simulation model. To enable such integration, the Digital Twin was structured around a management dashboard built using Excel® software and automated using subroutines written in the VBA programming language.
The proposed approach was applied to a real study object, a material supply process for kanban stations in an aeronautical industry. In this case, the Digital Twin assists in decisions involving the supply route to be followed and the number of employees needed to meet the demand within the maximum Lead Time of one day. The developed system is capable of connecting with the operation's ERP system to obtain the historical demand of the kanban stations, of using forecasting methods to estimate daily demand, of simulating and optimizing scenarios through a simulation model, and, finally, of providing important metrics for operational planning.
It concludes the great versatility of both techniques, simulation, and forecasting methods, regarding their use in an integrated way, forming a Digital Twin. Moreover, we emphasize that the use of jointly expands the horizon of the use of both techniques, allowing its adaptation to the precepts of modern industry, figured by an environment that needs increasingly assertive and constant decisions. Finally, it is worth highlighting the possibility of creating Digital Twins even in manual processes. In this case, the use of tools such as simulation and forecasting techniques proved to be great alternatives in the conception of the Digital Twin.