Monday, July 21, 2014

Benchmarking Part 3 - Executing Model Blocks

Introduction

My first post on the topic of benchmarking discrete event simulation software identified three processes that could potentially bottleneck a typical simulation model:
  1. Event scheduling and processing
  2. Creating and destroying entities
  3. Executing model blocks that do not involve simulated time
This post deals with the last item on the list - the time required to execute a model block that does not involve simulated time. This benchmark may seem a bit vague since there are many types of model blocks that do not advance simulated time. Its intent is to capture the overhead associated with moving an entity from one block to the next in a process-flow type simulation model.

In a perfect world, we would benchmark a wide variety of blocks for each simulation software package. No doubt, the efficiency of each software package will vary with the type of block. Software A might be much more efficient than software B for one block, but much less efficient for another. To get started, we chose to benchmark the blocks that seize and release a resource. These blocks are commonly used in simulation models and are implemented in one form or another in every simulation software package. Very little computation is required to seize or release a resource - only statistics collection - so we expect this benchmark to provide an approximate measure of the overhead time to move an entity from one block to another.

Model Block Execution Benchmark

The model used to benchmark the execution of model blocks that do not advance simulated time is shown in the following figure.


In this model, two entities are created at time zero and directed to the Seize block. The first entity seizes the resource and executes a one second delay. The second entity enters the Seize block's queue to wait for the resource. On completing the one second delay, the first entity releases the resource and is returned to the Seize block. This process continues endlessly, with one entity completing the delay during each second of simulated time. Two entities were used in the model to ensure that the Seize block always had an entity to process, avoiding a potential source of inefficiency for some software packages.
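
For readers who like to see the logic spelled out, the following sketch reproduces the structure of the benchmark model in Java. It is not JaamSim's implementation (or any package's actual Seize/Release code); it is just a minimal, self-contained illustration of how two entities keep a single resource continuously busy, producing one release per simulated second.

    import java.util.ArrayDeque;
    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Minimal sketch of the benchmark model's logic (not JaamSim code):
    // two entities cycle endlessly through seize -> 1 s delay -> release.
    public class SeizeReleaseSketch {

        static double now = 0.0;                       // simulated time
        static boolean resourceBusy = false;           // single-capacity resource
        static final ArrayDeque<Integer> seizeQueue = new ArrayDeque<>();
        static final PriorityQueue<double[]> eventList =   // each event is {time, entityId}
                new PriorityQueue<>(Comparator.comparingDouble((double[] e) -> e[0]));
        static long releaseCount = 0;

        static void seize(int entity) {
            if (resourceBusy) {
                seizeQueue.add(entity);                // wait in the Seize block's queue
            } else {
                resourceBusy = true;
                eventList.add(new double[]{now + 1.0, entity});  // schedule end of 1 s delay
            }
        }

        static void release(int entity) {
            releaseCount++;
            resourceBusy = false;
            if (!seizeQueue.isEmpty()) {
                seize(seizeQueue.poll());              // next waiting entity seizes immediately
            }
            seize(entity);                             // released entity returns to the Seize block
        }

        public static void main(String[] args) {
            seize(1);                                  // both entities arrive at time zero
            seize(2);
            while (now < 1_000_000.0) {                // run one million seconds of simulated time
                double[] ev = eventList.poll();
                now = ev[0];
                release((int) ev[1]);
            }
            System.out.println("Releases executed: " + releaseCount);
        }
    }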

As with the previous benchmarks, the average time to seize and release a resource was measured by running the model for 60 seconds of real time (using a stopwatch) and counting the number of times the Release block was executed. To allow for the effect of computer speed, the calculated time was converted into clock cycles. All measurements were made using my laptop computer, which has a second-generation (Sandy Bridge) Core i5 processor running at 2.5 GHz.
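
As a concrete example of the conversion, the snippet below walks through the arithmetic with made-up numbers; the release count and the resulting figure are purely illustrative, not measured results.

    // Illustrative only: the release count below is hypothetical, not a measured value.
    public class CyclesPerLoop {
        public static void main(String[] args) {
            long releaseCount = 3_000_000L;   // Release executions counted in 60 s (hypothetical)
            double wallSeconds = 60.0;        // real time measured with a stopwatch
            double clockGHz = 2.5;            // processor clock speed

            double secondsPerLoop = wallSeconds / releaseCount;      // real time per seize/delay/release cycle
            double cyclesPerLoop = secondsPerLoop * clockGHz * 1e9;  // convert to clock cycles
            System.out.printf("%.0f clock cycles per loop%n", cyclesPerLoop);
        }
    }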

Performance Results

The results of the benchmark for Arena, Simio and JaamSim are shown in the following bar chart. 


The time to execute a model block was calculated by taking the average execution time per entity for the benchmark, subtracting the time to execute the delay, and dividing by two. The time for the delay was taken from the first benchmark for each software package. It was necessary to divide by two since two blocks were executed for each trip through the benchmark - a Seize block and a Release block.
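
Spelled out with placeholder numbers (neither figure is taken from the measured results), the calculation looks like this:

    // Illustrative only: both input figures below are hypothetical placeholders.
    public class CyclesPerBlock {
        public static void main(String[] args) {
            double cyclesPerLoop = 20_000.0;   // total cycles per entity per trip through the benchmark
            double cyclesForDelay = 8_000.0;   // cycles for the 1 s delay, taken from the first benchmark

            // Two blocks (Seize and Release) are executed per trip, hence the division by two.
            double cyclesPerBlock = (cyclesPerLoop - cyclesForDelay) / 2.0;
            System.out.println("Cycles per block: " + cyclesPerBlock);
        }
    }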

Only the result for JaamSim2014-20 is shown - there was no difference between the values for the three versions shown in the previous posts.

The benchmark results show that JaamSim requires very little time to process simple blocks such as Seize and Release.

Concluding Remarks

This post concludes the series of three on the topic of benchmarking. Thanks to the hard work carried out by Harvey Harrison and Matt Chudleigh, JaamSim is now significantly faster.
