Jaco-Ben Vosloo

The Simulation Model Life Cycle (Part 4) - Verification and Validation

Updated: Apr 6, 2022

From start to finish, best practices and practical advice for doing a simulation-based project.


This post is part 4 of a 7-part series about the Simulation Model Life Cycle. You can catch up on the previous post here.




Life cycle of a simulation model

  1. Defining the Problem

  2. Working the Data

  3. Building the Model

  4. Verification and Validation

  5. Experimentation

  6. Analyze the Results

  7. Report on the Findings

This post focuses on the fourth step, and future posts will cover the remaining steps - you can start watching the lecture from step 4 here, or continue reading the blog post for more.


Note: Although one does go "back" during later steps to revisit activities from previous steps, you by no means redo the entire step. The steps listed above are guidelines for the general flow that a typical project follows. At any step, you can go backward and revisit previous steps, but it is unlikely that you will skip a step or move forward before completing the current step to at least 80%-90%.

For a more insightful discussion on the matter, read here.


4) Verification and Validation


First things first - let's agree on the definitions of these two terms, since there can be a lot of baggage that comes with these words - see Stefan's post here.


Verification: Checking if the model does what you designed it to do

This should include checking the quality of the model design and architecture, as well as the structure of the code. This can be done through testing, inspection, design analysis, etc., and should usually be done first by the modeler and then by someone else - perhaps another team member or even an external person.


The most basic form of verification is unit testing, which we covered in the previous post, but you can also use animation to visually verify that your model is doing what it is supposed to. The drawback of visual inspection is that it is not automated and cannot run without a knowledgeable person "watching" the simulation. Another key part is to review the output data of the model by doing some summary analysis on it. This not only verifies that you are recording the required outcomes, as defined in your model scope, but also checks that you are recording them correctly and, lastly, verifies the model logic to some degree - see the small sketch below.
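As a quick illustration - just a sketch in the spirit of those unit tests, where the variable names (queueWaitingTime, agent.arrivalTime) are assumptions based on the retailer model used later in this post - a simple runtime check on a recorded output could look like this:

// A recorded queue time can never be negative, nor longer than the
// customer's total time in the system (variable names are assumptions).
if (queueWaitingTime < 0 || queueWaitingTime > time() - agent.arrivalTime)
    throw new RuntimeException("Invalid queue time recorded: " + queueWaitingTime);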


Verification is relatively objective. Your model either passes or fails a unit test. It either correctly records and calculates a metric or key performance indicator, or it does not. It either does what it is supposed to do, or it does not. For verification, nothing less than 100% should be acceptable.


This is where you make sure that whatever you are doing, you are doing right! This is super important! Everything that is worth doing is worth doing correctly, otherwise you are wasting your own time as well as the time of everybody involved.


Trust is good... Check is better!

Unknown


Validation: Checking if it (appropriately) matches reality

This is the step where we check whether the model outputs match, where possible, historical and expected outcomes. This is typically done through the use of detailed output reports containing very low-level data to check the internal logic of the model, as well as high-level summary data to validate the overall expected outcomes.


It is imperative that every person on the team knows that the validation phase is very subjective. You cannot, and will never, match the historical output or the expected reality 100%. Especially in environments with high volatility and abnormal events, trying to match some arbitrary set of output data exactly is impossible.


Some of the factors that will influence the validation process are:

  • The environment of the system being modeled

    • e.g. the volatility, uncertainty, complexity and anything else that can (and probably should) never be captured 100% in a simulation model.

    • Remember that the model is always an abstraction of the system.

  • The timeframe being simulated

    • Is the time frame of the comparison data long enough to capture all the statistical fluctuations and abnormal events?

    • Typically, the longer the time frame, the better, but this depends on the environment and the number of abnormal events present in the system.

  • The reliability of the comparison data

    • Is the recorded data 100% accurate, or were there manual interventions, power outages, assumed measurements, or simplified calculations?

    • Only in a model can you ever record everything 100%; in real life there are always some anomalies or variability.



Validation is essential as it sets the boundaries of how your model results can, and should, be used. It defines the lens through which the results should be interpreted. If you created a model of some hypothetical future scenario, then the validation will be against expected outcomes and assumptions based on our current understanding of the future system. This will frame the way we look at the results, knowing that the accuracy of the outcome rests on some current assumptions.


On the flip side, if your model is a replica of a current system and you will be testing changes to that system, the level of validation, and thus the confidence in the model results, will be based on the accuracy of the input data and the recorded output data.


I could easily write an entire post on just this one subject with detailed examples, but in the end, validation is more art than science. It is argumentative, subjective, and relies on the overall experience and expertise of the team and on the objective the model is trying to achieve.


Feel free to write to us or add your comments on the blog post or social media posts about this very interesting topic.


Now... let's look at an example of validation, since verification was partly done in the previous post where we covered unit testing!


Retailer Example - Validating outputs

Let's look at an example of how to do this inside AnyLogic using our example model from the previous posts in this series. You can start working from the model we provided in step 3. In this example, we will look at how you can easily create a low-level data report from your model that you can use for your data validation phase. We will only be doing a single run, but for a more statistically valid exercise, you need to run multiple iterations - something we will cover in a future post, so watch this space.


Step 1: Add and set up a txt file object

By using one of the more advanced features of AnyLogic, you can simply drag a text file object onto your Main agent and set its mode to "Write".



You can also set up a header line in the text file by printing a single line using the println() function. In this example, we only have one column, so we write just a plain String as the first entry, "Customer Queue Time", but if you have a number of columns, you can separate the values using delimiters like "," for CSV or "\t" for tab, etc. For example, a tab-separated file can be created with println("Customer Queue Time \t Customer ID"). A minimal sketch is shown below.
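As a minimal sketch - assuming the text file object is named customerQueueTime (as used later in this post) and that the header is written in Main's "On startup" action:

// Write the header line once, before any data rows.
customerQueueTime.println("Customer Queue Time");

// With multiple columns, separate the values with a delimiter, e.g. tabs:
// customerQueueTime.println("Customer Queue Time\tCustomer ID");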



Step 2: Output your results to the text file

Now we need to save the data we want in the text file. For our very simple example, we simply record the queue waiting time as: current time - the arrival time - the service time. The time that is left is the time the customer spent in the queue, waiting to be served. (This was already recorded and added to the model in the previous post; a sketch of the calculation is shown below.)
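As a sketch, assuming the variable names from the previous post (agent.arrivalTime stamped on the customer when it entered the system, and serviceTime for the service duration), the calculation could look like this in the service block's "On exit" action:

// Queue time = current model time - arrival time - service time.
double queueWaitingTime = time() - agent.arrivalTime - serviceTime;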


Now, the only thing that we add is to print a line in our text file with the data: queue waiting time.

customerQueueTime.println(queueWaitingTime);



NB! - We must also remember to close the text file if we want to be able to access it without closing the model!
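For example - the exact placement is an assumption; it could be triggered by a button during the run or at the end of the simulation:

// Closing the file flushes any buffered output to disk, so the file
// can be opened without first closing the model.
customerQueueTime.close();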



Pro Tip: Add a download button for your output file

If you are using the Professional version of AnyLogic, you will have access to download buttons, which add a nice touch to the model.

Here are just some of the benefits of using this approach:

  • The output file will automatically be given a unique name inside your download folder (No overwriting of previous outputs)

    • You can run the model on the cloud and download the output file (the file just needs to be in the root folder - no subfolders)

    • Users can access the output easily and straightforwardly without having to go into the model folder (and potentially break things!)

If you have lots of separate files you want to download, you can also just zip all of them together before downloading them - more on this in a future blog post ;-) (If you want to know the answer now, simply Google it... that is the nice thing about using a Java-based simulation platform.) A quick sketch is shown below.
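For the curious, here is a minimal sketch in plain Java (the file names are just placeholders) of zipping a set of output files from the model's root folder into one archive:

import java.io.*;
import java.util.zip.*;

public final class ZipOutputs {

    // Zip the given files (in the current/root folder) into one archive.
    public static void zip(String zipName, String... fileNames) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zipName))) {
            byte[] buffer = new byte[8192];
            for (String name : fileNames) {
                zos.putNextEntry(new ZipEntry(name));
                try (FileInputStream fis = new FileInputStream(name)) {
                    int len;
                    while ((len = fis.read(buffer)) > 0)
                        zos.write(buffer, 0, len);
                }
                zos.closeEntry();
            }
        }
    }
}

// Usage: ZipOutputs.zip("outputs.zip", "customerQueueTime.txt");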


Step 3: Validation

Now that you have the data, you can easily compare it to historical data or your expected outcomes, or use it to test some assumptions. The sketch below shows one way to summarize the recorded data for such a comparison.
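As a sketch of such a summary (plain Java; the file name is an assumption based on the text file object above), you can read the recorded queue times back and compute basic statistics to put next to the historical figures:

import java.io.*;
import java.util.*;

public final class QueueTimeSummary {
    public static void main(String[] args) throws IOException {
        List<Double> times = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader("customerQueueTime.txt"))) {
            in.readLine(); // skip the "Customer Queue Time" header line
            String line;
            while ((line = in.readLine()) != null)
                if (!line.trim().isEmpty()) times.add(Double.parseDouble(line.trim()));
        }
        // Mean and sample standard deviation of the recorded queue times.
        double mean = times.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double var = times.stream().mapToDouble(t -> (t - mean) * (t - mean)).sum()
                / Math.max(1, times.size() - 1);
        System.out.printf("n=%d, mean=%.2f, sd=%.2f%n", times.size(), mean, Math.sqrt(var));
    }
}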


Here are some sample output comparisons of simulated versus actual data for our model.



NB (again) - these are all from a single run. For a statistically valid comparison and a more insightful analysis, you need to run multiple replications and compare the combined or average output to the actuals.


P.S. As we mentioned previously, validation is very subjective, so we are not going to delve into details here; but for the sake of this example and the validity of future posts, we assume that these results were acceptable to the project team ;-)


If you would like access to the model, please contact us at admin@theanylogicmodeler.com.


Summary

In this post, we looked at the difference between verification and validation and discussed why this is such an important step in the simulation model life cycle. We went down a notch on the complexity scale compared to the previous post and worked through a very simple example of how you can easily record detailed output data from your model and make it readily available to the user.

In the next post, we will look at the experimentation phase and at slightly more advanced methods to run multiple experiments fairly quickly and record their data.

 

Looking forward to the next post in the series?

Remember to subscribe so that you can get notifications via email, or join the mobile app here!


Watch the full lecture here; the video below starts at step 4.




What next?

If you liked this post, you are welcome to read more by following the links above to similar posts. Why not subscribe to our blog or follow us on any of our social media accounts for future updates? The links are in the menu bar at the top or the footer at the bottom.


If you want to contact us for some advice, a potential partnership or project, or just to say "Hi!", feel free to get in touch here and we will get back to you soon!


