## 1. Introduction

The purpose of this chapter is to demonstrate various techniques that Tcl has to offer. Besides object-oriented programming, you can structure your program using multiple threads, events or coroutines, or combinations of these, to name just a few possibilities. Which technique to use is sometimes a matter of taste or experience; sometimes the problem you are trying to solve dictates it.

Here we will study a fairly basic queueing system - basic, because we leave out all manner of practical details that would only clutter the discussion. If you want to, you can bring them in yourself or design a different type of queueing system. The variations are almost endless.

## 2. A model system

Consider a bank, or a post office, or customs at an airport: each of these locations involves one or more people at a desk who deal with other people lining up and waiting for their turn. Depending on the number of people joining the queue or queues, the time they need to be handled and the number of people at the desks, the queue may be long or short, the waiting time may be long or short, or the people at the desks may be idle. These are all interesting properties of the system. The mathematical models that you can develop for such a system are fiendishly difficult to solve analytically. So: rely on computer-aided modelling instead!

Our model system consists of the following:

• We have a fixed number of positions where the clients are helped.

• Each client who enters the system is assigned a certain handling time - some require 1 minute, others 5 or 20 minutes. The probability of a client requiring 1, 5 or 20 minutes is fixed at 50%, 40% and 10%, respectively.

• Clients come in randomly, but the mean rate is fixed, say 10 or 15 people per hour.

• They join a single queue and the one at the head of the queue goes to a free position if there is one.

The properties of this system we want to know are:

• The mean waiting time

• The mean length of the queue

• The mean idle time (if any) of the positions

## 3. Analysis of the system

We need to closely examine what is happening in this system before we can implement a model for it. We will use object-oriented programming techniques to implement the various elements - the persons dealing with the clients, the queue and so on.

If a person dealing with clients is idle (let's call him/her the handler), he/she will ask the first client in line to proceed - if there is anyone. The client will require a certain amount of time to get the request handled, and during that time the handler is busy.

If there is no one waiting, the handler is idle, until a client shows up. We need to record how long the handler is actually idle.

For clients it is important to record the time of arrival, the length of the queue at that time and the time until he/she is helped. But we are only interested in the mean values (or perhaps the maximum values too). Let us use an object representing the queue to record which clients are waiting and how long they have been waiting. As the clients do not have many properties or behaviours - just the arrival time and the duration of their request - it does not seem necessary to model them as separate objects.

First we try and implement the "handler" objects:

• A method `accept` is used to determine at what time the next client can be accepted. If the system time is earlier than that, no client can be accepted:

```
method accept {} {
    set system_time [system time]
    if { $busy_until <= $system_time } {
        set next_client [queue next]
        #
        # Do we have a client? If not, record the idle time.
        # In any case, update the time the handler is busy until.
        #
        if { [llength $next_client] == 0 } {
            incr total_idle \
                [expr {$system_time - $busy_until}]
            set busy_until $system_time
        } else {
            set request_time [lindex $next_client end]
            set busy_until   \
                [expr {$system_time + $request_time}]
        }
    }
}
```
• A method `report` reports the idle time of the handler:

```
method report {} {
    puts "Total idle time: $total_idle"
}
```

Clients have two properties: the time of arrival and the time their request will take. We can implement them as a list of these two values - they do not require any particular behaviour. All behaviour is instead taken care of by the `queue` object. This object must deal with the arrival of clients, with their being helped by the handlers, and with keeping track of the output parameters. So we need:

• A method `next` that returns the next client (removing them from the queue at the same time). Of course, a client must have arrived before they are "visible" in the queue:

```
method next {} {
    if { [llength $clients] > 0 } {
        set next [lindex $clients 0]
        set arrival_time [lindex $next 0]
        if { $arrival_time <= [system time] } {
            set clients [lrange $clients 1 end]
            my UpdateProperties $next
            return $next
        } else {
            return {}
        }
    } else {
        return {}
    }
}
```
• The private method `UpdateProperties` keeps track of the various properties we want to record for the clients. The method `report` reports them:

```
method UpdateProperties {client} {
    set system_time [system time]
    #
    # Count the number of waiting clients
    #
    set number_waiting 1
    foreach c $clients {
        set arrival_time [lindex $c 0]
        if { $arrival_time <= $system_time } {
            incr number_waiting
        } else {
            break
        }
    }
    incr total_waiting $number_waiting
    incr count_waiting
    #
    # Waiting time
    #
    set arrival_time       [lindex $client 0]
    set total_waiting_time \
        [expr {$total_waiting_time + $system_time - $arrival_time}]
}

method report {} {
    puts "Mean waiting time: \
        [expr {$total_waiting_time/double($count_waiting)}]"
    puts "Mean queue length: \
        [expr {$total_waiting/double($count_waiting)}]"
}
```
• A method to generate new clients. This could be invoked from within the constructor of the class - giving a fixed batch of clients to handle - or from within the `system` object, so that the simulation can go on indefinitely. Let us use the first approach:

```
constructor {rate number} {
    set clients {}
    #
    # Generate the clients:
    # - The number of clients is fixed
    # - The rate is expressed as the mean number of
    #   clients per hour
    # - Sort the arrival times
    #
    set period [expr {3600 * $number / double($rate)}]

    set times {}
    for {set i 0} {$i < $number} {incr i} {
        lappend times [expr {$period * rand()}]
    }
    set times [lsort -real $times]
    foreach time $times {
        lappend clients [list $time [requestLength]]
    }
}
```
The choice to generate the complete list of clients at the start of the simulation has consequences for the implementation of the queue object, but also for the system object, as that controls the simulation.
• As time is measured in seconds (merely a convention), the procedure `requestLength` has to return 60, 300 or 1200 seconds. As there is no particular reason to make it a method of the queue class, it is implemented as an ordinary procedure:

```
proc requestLength {} {
    set r [expr {rand()}]
    if { $r < 0.5 } {
        return 60
    } elseif { $r < 0.9 } {
        return 300
    } else {
        return 1200
    }
}
```
• If there are no clients left, the simulation stops. The method `clients` simply reports how many remain:

```
method clients {} {
    return [llength $clients]
}
```

The final object in our system is the `system` object: it controls the whole simulation. It has the following tasks:

• Make sure all handlers get a chance to accept a client. So iterate over the handlers until there are no clients left.

• Make sure time passes in the system. We use an increment of 1 minute. An alternative would be to move from one event to the next, but this strategy requires us to ask the objects for the next event. Simply increasing the time is crude but simpler.

Here is the code for the constructor and the advancing method:

```
constructor {list_handlers} {
    set time     0
    set handlers $list_handlers
}
```

```
method advance {} {
    while { [queue clients] > 0 } {
        foreach handler $handlers {
            $handler accept
        }
        incr time 60
    }

    #
    # Report the results
    #
    queue report
    foreach handler $handlers {
        $handler report
    }
}
```

Defining the system and running the simulation is done like this:

```
#
# Create three handlers
#
set handlers [list [handler new] [handler new] [handler new]]

#
# Create the queue with 40 clients per hour and
# 200 clients in total
#
queueClass create queue 40 200

#
# Create the system
#
systemClass create system $handlers

#
# Run the simulation
#
system advance
```

## 4. Some results

We can run the program and examine the results a bit closer:

```
Mean waiting time: 1087.2610543026872
Mean queue length: 11.56
Total idle time: 180
Total idle time: 2220
Total idle time: 1680
```

The period that is covered by the simulation is 300 minutes, or 5 hours. This means that the handlers are idle for roughly 1 hour each. But the clients have to wait more than 15 minutes on average.

The mean time to handle a client's request, given the distribution in `requestLength`, is 4.5 minutes. Theoretically, the three handlers should therefore be able to handle 40 clients per hour - coincidentally the same as the rate we specified.
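These figures are easy to verify. Here is a quick cross-check (in Python, used purely as a calculator for the numbers above, not as part of the Tcl program):

```python
# Cross-check of the capacity figures quoted above.
# Request-length distribution: 1 min (50%), 5 min (40%), 20 min (10%)
mean_request_min = 0.5 * 1 + 0.4 * 5 + 0.1 * 20   # mean minutes per client
clients_per_handler = 60 / mean_request_min       # clients per handler per hour
capacity = 3 * clients_per_handler                # three handlers

print(mean_request_min)   # 4.5
print(capacity)           # 40.0
```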

If the number of clients per hour increases to 50, then the idle time drops to about 5 minutes and the waiting time increases to more than 40 minutes, with an average queue length of 25 people:

```
Mean waiting time: 2558.366997557118
Mean queue length: 24.985
Total idle time: 120
Total idle time: 360
Total idle time: 360
```

We are beginning to see a problem with the handling capacity!

If you run the program several times, you will see that the results vary considerably. This is a consequence of the rather short simulation: longer simulations (with more clients) will lead to more representative results. How long these simulations need to be is a question that is not easily answered analytically. It is best answered empirically: repeat the simulations and increase their length if the variation is too large. Large variation is actually an indication that the system is operating near or beyond its capacity: small differences in the details of the list of clients (one extra client with a request of 20 minutes, say) may influence the results.
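The repeat-and-compare experiment can be sketched outside the Tcl program as well. The following is a minimal stand-in (in Python, with the same arrival model and request-length distribution; the helper `simulate` is invented here, not part of the chapter's code) that makes the run-to-run spread visible:

```python
import random

def request_length():
    # 60 s (50%), 300 s (40%), 1200 s (10%) - as in requestLength
    r = random.random()
    return 60 if r < 0.5 else 300 if r < 0.9 else 1200

def simulate(rate=40, number=200, handlers=3):
    """One run of the model; returns the mean waiting time in seconds."""
    period = 3600 * number / rate
    arrivals = sorted(period * random.random() for _ in range(number))
    free_at = [0.0] * handlers            # model time each handler is free again
    total_wait = 0.0
    for t in arrivals:
        i = min(range(handlers), key=lambda k: free_at[k])
        start = max(t, free_at[i])        # the client waits if no handler is free
        total_wait += start - t
        free_at[i] = start + request_length()
    return total_wait / number

# Repeating the run shows how much the outcome varies between runs
results = [simulate() for _ in range(10)]
print(min(results), max(results))
```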

## 5. Alternative implementations

### 5.1. Using the Tcl event loop

In the current implementation we let the `system` object take care of everything. But we can also use the builtin event-handling features of Tcl to control the model. Instead of an explicit loop in the `advance` method of the `system` object, we schedule events:

```
::oo::class create handler {
    constructor {} {
        after 0 [list [self] accept]
    }
    method accept {} {
        set system_time [system time]
        if { $busy_until <= $system_time } {
            ...
        }
        after 0 [list [self] accept]
    }
    ...
}
```

and similarly:

```
::oo::class create systemClass {
    constructor {} {
        set time 0
        after 0 [list [self] advance]
    }
    method advance {} {
        if { [queue clients] > 0 } {
            incr time 60
            after 0 [list [self] advance]
        } else {
            set ::end 1
        }
    }
}
```

Instead of invoking the `system advance` method explicitly we use the following code:

```
#
# Run the simulation by entering the Tcl event loop
#
vwait end

#
# Now report
#
queue report

foreach handler $handlers {
    $handler report
}
```

The `after` command schedules a command - or rather, a script - to be run at an appointed time in the future. By entering the event loop via the `vwait` command, which waits for a global variable to be written and then stops the event loop, we let the objects take care of themselves.

The advantage is that the structure of the simulation is no longer implemented explicitly in the `system` object's methods, but is controlled by the objects taking part in the simulation. Very complex control structures could easily be implemented by letting the objects decide whether or not to generate and schedule such events.
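The idea of self-scheduling objects is not unique to Tcl. Here is a minimal analogy in Python's asyncio (the names `handler`, `state` and `log` are invented for this sketch): each handler keeps giving control back to the event loop, much like `after 0 [list [self] accept]` reschedules the `accept` method, until the queue is empty.

```python
import asyncio

async def handler(name, state, log):
    # Each handler keeps taking clients as long as any are left,
    # yielding to the event loop after every one - the analogue of
    # rescheduling itself with "after 0".
    while state["clients"] > 0:
        state["clients"] -= 1
        log.append(name)
        await asyncio.sleep(0)      # give the other handlers a turn

async def main():
    state = {"clients": 5}
    log = []
    await asyncio.gather(handler("h1", state, log),
                         handler("h2", state, log))
    return log

log = asyncio.run(main())
print(log)   # the five clients are handled alternately by h1 and h2
```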

### 5.2. Using multiple threads

Since most computers in use nowadays have two or more processors (or cores), many applications are built using multithreading techniques to take advantage of this. Here we will illustrate how the `Thread` package can be used to make our queueing model multithreaded. Our model does not really need multithreading to improve its performance, but in other applications improved performance is of course the main reason to use it.

In our queueing model we have the handlers handling the clients in parallel - as long as they do not ask for the next client, they can work independently. When that moment comes, however, they do interfere: the handler which finished first, in terms of the model time, not the wall-clock time, gets the client which is next in line, if there is one.

Each handler object can be assigned to a separate thread, so that the objects can indeed work independently. The implementation problem is that they need to synchronise when the queue object has to decide which handler gets the client. To solve this, we will let each handler object send a notice to the queue object, indicating the moment (model time, of course) when the handler finished and thus can accept a new client. The handler object that finished first will get the next client.

Note that with this line of reasoning we have moved from a fixed time increment to an event-driven time-advancing scheme. It is simply more natural here, as we have to gather information from the handler objects.

The `Thread` package uses a so-called apartment programming model. This means, roughly, that threads are strictly separated and communicate via messages. Shared data are guarded automatically by mutexes. As an implementation detail, a thread may contain one or more interpreters, but an interpreter belongs to exactly one thread: the thread in which it was created. Each thread is characterised by a thread ID, returned by the `::thread::create` command when the thread is created or by the `::thread::id` command within the thread itself. This ID is needed to send messages to the thread - the primary means of making threads interact.

In our implementation the queue object takes the decisions: it waits for messages from all three handlers, determines the one that finished first (in terms of model time) and sends the waiting client to that handler. Only when all three handlers have reported back does the queue object go through this procedure. After all, a handler may be handling a client whose request requires a lot of time; in the meantime other clients can be helped by the other handlers.

Each handler object is contained in a separate thread and the queue object also lives in its own thread.

The handler objects are the simplest to implement, so we will start with these:

```
#
# Create the threads for the handler objects
#
set handler_tids {}
for { set i 0 } { $i < 3 } { incr i } {
    lappend handler_tids [::thread::create {
        ::oo::class create handlerClass {
            variable total_idle
            variable busy_until
            variable queue_tid

            constructor {queue} {
                set total_idle 0
                set busy_until 0
                set queue_tid  $queue
            }
            ...
        }
        #
        # Wait for things to happen
        #
        ::thread::wait
    }]
}
```

The constructor now takes a single argument: the ID of the queue thread, as we need to send it messages. Because the queue thread does not exist yet, we can not instantiate the handler objects immediately. The queue object/thread in turn needs the IDs of the handler threads, so we can not simply create the queue thread first and then the handler threads. The solution is to postpone the creation of the actual 'objects' until all the threads have been created.

Each handler thread (not the object) waits for messages to arrive via the `::thread::wait` command, which is similar to the Tcl `vwait` command, except that it does not wait for a particular variable.

Our handler object will accept a client via the `accept` method. The difference from the previous versions is that the client always arrives when the handler is ready - a consequence of the way the queue object sends off the clients. The code looks like this:

```
method accept {client} {
    lassign $client arrival_time request_time
    #
    # If the client arrived later than the last time the
    # handler was busy, then the handler was idle
    #
    if { $busy_until <= $arrival_time } {
        set total_idle \
            [expr {$total_idle + $arrival_time - $busy_until}]
        set busy_until \
            [expr {$arrival_time + $request_time}]
    } else {
        set busy_until [expr {$busy_until + $request_time}]
    }
}
```

When the bookkeeping is done, the handler calls the method `ready` to inform the queue object that it is ready for the next client. This method just sends a message to the queue object:

```
method ready {} {
    #
    # Tell the queue object we are ready
    # for a new client
    #
    ::thread::send -async $queue_tid \
        [list queue isready [::thread::id] $busy_until]
}
```

It uses the `-async` option of the `::thread::send` command because the queue object has to gather the information from all handlers before it can proceed. This way the method returns immediately and the handler object waits for the next message.

In this case the message is that the queue thread must run the `queue isready` command. The arguments that are passed indicate which handler sent the message and at what (model) time it can accept the next client.

 A message in the `Thread` package is a script that is to be run in the target thread. As the script is constructed in the sending thread, you can pass any information you need this way. The `Thread` package also offers commands to share data directly, but these are not discussed here.
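The message-as-script pattern can be sketched in other languages too. Here is a rough Python analogy (the names `Apartment` and `send_async` are invented for this sketch, not part of any package): each worker thread owns a mailbox and executes whatever callables are posted to it, much as a Tcl thread runs the scripts sent to it.

```python
import threading, queue

class Apartment:
    """A thread that only runs work posted to its own mailbox -
    roughly the role of a thread in Tcl's Thread package."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            job = self.mailbox.get()    # like ::thread::wait
            if job is None:             # sentinel: stop the thread
                break
            job()                       # run the posted "script"

    def send_async(self, job):          # like ::thread::send -async
        self.mailbox.put(job)

    def join(self):                     # like ::thread::join
        self.mailbox.put(None)
        self.thread.join()

results = []
worker = Apartment()
worker.send_async(lambda: results.append("handled client 1"))
worker.send_async(lambda: results.append("handled client 2"))
worker.join()
print(results)   # ['handled client 1', 'handled client 2']
```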

The queue thread is started in a slightly different way:

```
set queue_tid [::thread::create -joinable {
    ::oo::class create queueClass {
        ...
    }
}]
```

That is: it is started with the `-joinable` option. Here are the design considerations that led to this:

• The handler 'threads' wait for the queue thread to send them the information on the clients, but the program needs to run as long as needed.

• When the last client has been handled, the handler 'objects' must report the statistics, so that the handler 'threads' can not simply finish.

• We therefore need to wait for the queue thread to finish and keep the handler threads alive until they have reported their statistics.

• One way of achieving this is to wait for the queue thread to finish - in multithreading parlance, wait for it to 'join' the main thread. Then we can send a message to the handler threads to report their results and finish.

There are - of course - other means to achieve the same thing, but this code takes care of these matters in the main thread (the one that runs at start-up before any new thread is created):

```
set queue_tid [::thread::create -joinable {...}]

#
# Create the handler objects and get them started
#
foreach tid $handler_tids {
    ::thread::send $tid [list handlerClass create handler $queue_tid]
}
::thread::send $queue_tid [list queueClass create queue 40 200 $handler_tids]

#
# Let each handler report that it is ready for its first client
#
foreach tid $handler_tids {
    ::thread::send -async $tid [list handler ready]
}

#
# Wait for the queue thread to finish
#
::thread::join $queue_tid

#
# Now report the results of the handlers
#
foreach tid $handler_tids {
    ::thread::send $tid [list handler report]
}
```

The queue thread has to wait for all handlers to have indicated at what model time they are ready for the next client. When all three have indicated this, the one which is ready before the others gets the next client:

```
method isready {tid time} {
    #
    # Register the moment the handler is ready
    #
    lappend handler_ready($tid) $time
    #
    # Check that we have data for all handlers.
    # Then we can pass on the next client.
    #
    set first_time   1.0e20
    set next_handler {}
    set keep_waiting 0
    foreach handler $handler_tids {
        if { [llength $handler_ready($handler)] > 0 } {
            set ready_time [lindex $handler_ready($handler) 0]
            if { $ready_time < $first_time } {
                set first_time   $ready_time
                set next_handler $handler
            }
        } else {
            set keep_waiting 1
        }
    }
    if { !$keep_waiting && [llength $clients] > 0 } {
        set next_client [lindex $clients 0]
        set clients     [lrange $clients 1 end]
        set handler_ready($next_handler) \
            [lrange $handler_ready($next_handler) 1 end]
        my UpdateProperties $first_time $next_client
        ::thread::send $next_handler \
            [list handler accept $next_client]
    }
    if { [llength $clients] == 0 } {
        my report
    }
}
```

• Most probably, appending the incoming time to a list is unnecessary, as there can be only one "ready time" per handler at any moment. It simply seems safer to keep them in a list, treated as a first-in-first-out data structure; in other applications this might be required for correct operation.

• When there are no more clients waiting, the queue object/thread can report the findings and stop.

• When the next client has been sent off, the information from the other handlers is retained, as they are still waiting for a new client.

### 5.3. Modelling via coroutines

'Coroutines' are a feature that was introduced in Tcl 8.6. They are control constructs that allow a kind of lightweight cooperative multitasking. But since they are relatively unknown - not many programming languages support them out of the box - perhaps one or two examples are called for.

#### 5.3.1. Two examples

Normally, a procedure has to return before another procedure can continue, and at that point all its local variables are lost. With 'coroutines', however, you can suspend the execution, allow some other code to run, and later resume the original procedure from where it left off. This is done via the `yield` command.

Consider the following procedure:

```
proc nextSquare {} {
    set i 0
    yield ;# Run in the coroutine command

    while {1} {
        incr i
        yield [expr {$i**2}]
    }
}
```

Via the `yield` it can pass the square to the calling procedure and pick up where it left off the next time it is called. This is done using code like:

```
coroutine square nextSquare

for {set n 0} {$n < 10} {incr n} {
    puts [square]
}
puts "And another batch ..."
for {set n 0} {$n < 10} {incr n} {
    puts [square]
}
```
The first `yield` command in `nextSquare` makes sure that nothing is lost, as the `coroutine` command actually starts the procedure. If you waited until the loop was running for the first `yield`, that first value would be returned via the `coroutine` command itself and would therefore likely be lost.

The output is:

```
1
4
9
16
25
36
49
64
81
100
And another batch ...
121
144
169
196
225
256
289
324
361
400
```

The `coroutine` command creates the proper "context" for this behaviour: a new command `square` in which the local variables (in this case `i`) are preserved and execution is frozen until the next call.

The `yield` command passes its one argument to the caller (the for-loops). Upon the next call to `square`, the next iteration is run.

As the context command accepts one argument, you can pass back any information you want from the caller to the wrapped command. The argument becomes the 'return value' of the `yield` command. This is shown in the next example:

```
proc incd {} {
    set x 0
    while 1 {
        incr x [yield $x]
    }
}

coroutine accumulator incd

for {set i 0} {$i < 10} {incr i} {
    puts "$i -> [accumulator $i]"
}
```

This program produces the following:

```
0 -> 0
1 -> 1
2 -> 3
3 -> 6
4 -> 10
5 -> 15
6 -> 21
7 -> 28
8 -> 36
9 -> 45
```

As you can see, the increment is exactly the value of `i`.
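For comparison, the same two-way communication exists in Python generators via the `send` method (an analogy only, not part of the Tcl example): `yield` hands the current value to the caller, and `send` hands the increment back.

```python
def incd():
    # Python equivalent of the Tcl incd coroutine:
    # "yield x" passes x to the caller; the value given to
    # send() becomes the result of the yield expression.
    x = 0
    while True:
        x += yield x

accumulator = incd()
next(accumulator)   # prime: run to the first yield, like the
                    # coroutine command starting the procedure
print([accumulator.send(i) for i in range(10)])
# [0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
```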

#### 5.3.2. The queueing model revised

Within our queueing model, we can use coroutines in the following way:

• The method `accept` of the handler object yields to the queue object, passing the time when it is free again. The queue object will then pass in the next client, if it is indeed the turn for this handler object:

```
method accept {} {
    yield ;# Get past the coroutine command
    #
    set busy_until 0
    #
    # Loop until we have no more clients
    #
    while {1} {
        set client [yield $busy_until]
        if { [llength $client] == 0 } {
            break
        }
        lassign $client arrival_time request_time
        #
        # If the client arrived later than the last time the
        # handler was busy, then the handler was idle
        #
        if { $busy_until <= $arrival_time } {
            set total_idle [expr {$total_idle + $arrival_time - $busy_until}]
            set busy_until [expr {$arrival_time + $request_time}]
        } else {
            set busy_until [expr {$busy_until + $request_time}]
        }
    }
}
```
• As the variable `busy_until` is only used in the `accept` method and that method remains active (!), there is no longer any need to make it an object variable.

• The queue object does not need direct interaction with the handler objects. All it needs to do is use the coroutine contexts for the handler objects' `accept` method (to avoid name clashes the names of the objects are `Handler1` etc.):

```
set handlers {handler1 handler2 handler3}

foreach handler $handlers {
    handlerClass create [string totitle $handler]
    coroutine $handler [string totitle $handler] accept
}

queueClass create queue 40 200 $handlers
```
• The method `run` of the queue object handles the entire simulation:

```
method run {} {
    #
    # Hand out the first clients - no competition here
    #
    foreach handler $handler_objects {
        $handler {} ;# Run the coroutine up to its main loop
        set next_client [my next]
        set handler_ready($handler) [$handler $next_client]
        my UpdateProperties 0.0 $next_client
    }
    #
    # Now we loop until we have no clients left.
    # Hand out the next client to the first handler that
    # is available
    #
    while { [llength $clients] > 0 } {
        set first_time   1.0e20
        set next_handler {}
        foreach handler $handler_objects {
            if { $handler_ready($handler) < $first_time } {
                set first_time   $handler_ready($handler)
                set next_handler $handler
            }
        }
        #
        # Pass on the next client.
        #
        set next_client [my next] ;# Save it for the statistics
        set handler_ready($next_handler) [$next_handler $next_client]
        my UpdateProperties $first_time $next_client
    }
}
```

The simulation is now started and the results are reported via a few method calls:

```queue run
queue report```
```foreach handler \$handlers {
[string totitle \$handler] report ;# The actual handler objects!
}```

Of course, this is just a very simple application of coroutines. The effect in this implementation is that the queue object needs to know nothing at all about the methods contained in the handler objects. All it needs to know is that invoking the wrapped handler command returns the value it is most interested in - the time at which the handler is available for the next client - and that it must pass in the next client.

## 6. Complete source code

While the examples discussed in this chapter are each only approximately 200 lines long, together they amount to a long listing. Hence they are presented separately.

## 7. Final remarks

During the development of these examples the author made quite a few mistakes, which led to the program not doing what it was supposed to do. Of course, that is how things go in programming. The good news is that Tcl tries very hard to give clear and to-the-point error messages. For instance, multithreaded programs can be fiendishly difficult to debug, but the error messages that were printed made it easy to detect where the error in the code was. The same is true for the coroutine example.

So, the take-home message is: trust Tcl’s error messages, they make life a lot easier.