Sensitivity Analysis for Distributions
When building a simulation model, you usually need to make assumptions about the arrival times of Work Items, the cycle times of Activities, the availability of Resources, and so on. These assumptions usually take the form of a distribution. Data from the real system can be analyzed and distributions derived that closely approximate its behavior. However, it is often desirable to test your assumptions and ensure that your simulation is not sensitive to changes in them - otherwise your simulation may be unstable and could produce erroneous results.
The Sensitivity Analysis for Distributions feature lets you analyze how sensitive your results are to changes in each of your named distributions.
Once you have built your simulation, open the Sensitivity Analysis feature from either the Results menu or the Professional menu. You will be presented with a list of all of the named distributions you have created in your model. A check box beside each distribution lets you select the distributions on which to carry out the sensitivity analysis. The usual procedure is to first run the analysis on all of your distributions and then deselect those that prove to have little impact on your results.
The Apply button saves your selections in the list so that you can run the sensitivity analysis at a later time. The Analyze button runs the sensitivity analysis immediately.
Once complete, the results are presented in a sheet called "Simul8 Sensitivity Analysis Report", created in the Information Store so that you can refer to it whenever you need. The following is provided for each result you have added to the KPI Summary (in columns):
Normal trial results (with all distributions as designed):
- row 1 = Low value for the result from the KPI Summary
- row 2 = Mean value for the result from the KPI Summary
- row 3 = High value for the result from the KPI Summary
For each distribution that is checked in the list:
- first row for distribution = the trial mean with the distribution reduced by 10%
- second row for distribution = the trial mean with the distribution increased by 10%
- third row for distribution = a measure of the sensitivity of the result to this distribution (0 = not sensitive; 1 = highly sensitive, meaning the 20% range moves the result across the full span between the low and high confidence limits)
- fourth row for distribution = a 0/1 flag indicating whether the sensitivity test takes the result outside its confidence limits; a 1 suggests you should review the validity of this assumption
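The sensitivity measure and the 0/1 flag described above can be sketched as follows. This is a minimal reconstruction based only on the description in this article: the function names and the exact formula (perturbation spread divided by confidence-interval width, capped at 1) are assumptions, not Simul8's actual implementation.

```python
def sensitivity_score(low, high, mean_down, mean_up):
    """Assumed formula: spread of the trial mean under the +/-10%
    perturbation, divided by the width of the confidence interval,
    capped at 1.0 (0 = not sensitive, 1 = the 20% range spans the
    full interval between the low and high confidence limits)."""
    ci_width = high - low
    if ci_width <= 0:
        # Degenerate interval: any movement counts as fully sensitive.
        return 1.0 if mean_up != mean_down else 0.0
    return min(abs(mean_up - mean_down) / ci_width, 1.0)


def outside_conf_limits(low, high, mean_down, mean_up):
    """Return 1 if either perturbed trial mean falls outside the
    confidence limits of the unperturbed trial, else 0."""
    inside = (low <= mean_down <= high) and (low <= mean_up <= high)
    return 0 if inside else 1


# Hypothetical example: unperturbed trial gave limits 95..105, and the
# -10%/+10% runs gave trial means of 98 and 104 for this result.
print(sensitivity_score(95, 105, 98, 104))    # 0.6 of the interval
print(outside_conf_limits(95, 105, 98, 104))  # 0: still inside limits
print(outside_conf_limits(95, 105, 98, 108))  # 1: review the assumption
```

A flag of 1 corresponds to the report's "review the validity" signal: the assumption moves the result further than the trial's own statistical noise.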
The order of the distributions in the list does not matter.
If one or more of your distributions proves to significantly affect your results, this is an indication that your assumptions may need refining. Alternatively, this finding allows you to report that the results of the simulation depend on this specific assumption.