
Co-expression performance

Introduction

On a couple of other pages (here and here) I mentioned a technique for avoiding the cost of procedure suspension and resumption over a deeply nested stack of calls.

The question arises: how much time does this save?

Here is a simple test program which tries to answer this.

Download bypass.icn

import io, util

global lim

procedure do_suspend(n)
   if n = 0 then
      suspend 1 to lim
   else
      suspend do_suspend(n - 1)
end

procedure do_coact(n)
   if n = 0 then
      every coact(1 to lim)
   else
      do_coact(n - 1)
end

procedure do_act_source(n)
   if n = 0 then
      every (1 to lim)@&source
   else
      do_act_source(n - 1)
end

procedure main(a)
   local depth, e

   depth := integer(a[1]) | 100
   lim := integer(a[2]) | 1000000

   &maxlevel := depth + 1000

   write("Depth=", depth, "   Generate=", lim)

   note_time()
   every do_suspend(depth)
   note_time("procedure")

   note_time()
   e := create do_coact(depth)
   while @e
   note_time("coact()")

   note_time()
   e := create do_act_source(depth)
   while @e
   note_time("@&source")

   write("Exit")
end

It creates a procedure call chain of a given depth, and produces a given number of results from the deepest level.

Three methods are used to produce the results :-

1. do_suspend - conventional generation, with each level suspending the results of the level below it.
2. do_coact - each result is transmitted directly to the consumer using the coact() function.
3. do_act_source - each result is transmitted directly to the consumer using the @ operator, targeting &source.

The timing results are calculated and printed (in milliseconds) using the simple library procedure util.note_time: a call with no argument marks a starting point, and a later call with a label prints the time elapsed since then against that label.

Run with the defaults for depth and number of results, I get the following :-

$ oit -s ./bypass.icn -x
Depth=100   Generate=1000000
1522: procedure
193: coact()
49: @&source
Exit

With a shallow depth of 10 (and more results to give clearer timings), I get :-

$ oit -s ./bypass.icn -x 10 20000000
Depth=10   Generate=20000000
4009: procedure
3768: coact()
997: @&source
Exit

and with a depth of 1000, I get :-

$ oit -s ./bypass.icn -x 1000
Depth=1000   Generate=1000000
16506: procedure
174: coact()
50: @&source
Exit

These results show that at a depth of about 10, the procedure call chain is about the same speed as coact(). In fact, if you try it for depths less than 10, you will find the procedure call chain is rather faster. However, as the depth increases coact() pulls well ahead: about 8 times faster at a depth of 100, and nearly 100 times faster at a depth of 1000.

Co-expression transmission with the @ operator seems to be consistently about 3-4 times faster than with the coact function. This is just because invoking a builtin function is a relatively expensive operation when compared to invoking a builtin operator. However, the coact function is more flexible than the @ operator. Particularly important is its ability to transmit a value without affecting the target co-expression’s &source value; this is vital in avoiding co-expression black holes.
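
To make the &source point concrete, here is a minimal sketch of my own (the procedure name helper is mine, not from the programs above). It uses the same one-argument form of coact seen in bypass.icn, which transmits to &source by default. The reply sent with coact() leaves the activator's &source alone, while the reply sent with the @ operator redirects it :-

import io

# Reply to our activator twice: first with coact(), then with @.
procedure helper()
   coact("via coact")     # transmit to &source without changing its &source
   "via @" @ &source      # transmit to &source, resetting its &source to us
end

procedure main()
   local e
   e := create helper()
   write(@e)                                                    # "via coact"
   write(if &source === e then "redirected" else "unchanged")   # unchanged
   write(@e)                                                    # "via @"
   write(if &source === e then "redirected" else "unchanged")   # redirected
end

If main's own return path had depended on its &source (for example, if it were itself transmitting results upwards with @&source), the @-based reply would have redirected that path, which is how the black holes mentioned above arise; coact() sidesteps the problem.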

Traversing a tree

A more practical problem involves traversing a binary tree. Here is the next test program.

Download nodes.icn

import io, util

class Node()
   public const data
   private l, r

   public insert(val)
      if val < data then
         (/l := Node(val)) | l.insert(val)
      else
         (/r := Node(val)) | r.insert(val)
   end

   public traverse()
      (\l).traverse()
      coact(self)
      (\r).traverse()
   end

   public traverse2()
      (\l).traverse2()
      self@&source
      (\r).traverse2()
   end

   public gen()
      suspend (\l).gen() | self | (\r).gen()
   end

   public depth()
      local dl, dr
      dl := (\l).depth() | 0
      dr := (\r).depth() | 0
      return 1 + max(dl, dr)
   end

   public new(i)
      self.data := i
      return
   end
end

procedure main(a)
   local root, e, r, n, i, v, nt

   # n is the number of elements to insert
   n := integer(a[1]) | 250000

   # v is the range of values to insert (1..v)
   v := integer(a[2]) | (n / 2)

   # nt is the number of times to run each traversal (larger obviously
   # gives more accuracy but takes longer).
   nt := max(2000000 / n, 1)

   # Make sure the procedure traversal doesn't exceed the call depth
   # limit.
   &maxlevel := n

   # Create the tree
   write("Creating tree with ", n, " entries in range 1..", v)
   r := create |?v
   root := Node(@r)
   every i := 1 to n do {
      root.insert(@r)
      if i % 1000 = 0 then
         writes("\e[K   ", (i * 100) / n, "% complete\r")
   }
   write("\e[KComplete")

   write("Tree depth = ", root.depth(), ", number of traversals = ", nt)

   note_time()
   every 1 to nt do
      every root.gen()
   note_time("procedure")

   note_time()
   every 1 to nt do {
      e := create root.traverse()
      while @e
   }
   note_time("coact()")

   note_time()
   every 1 to nt do {
      e := create root.traverse2()
      while @e
   }
   note_time("@&source")
end

This creates a simple binary tree and traverses its elements, again using the three techniques from the first example.
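
As a usage aside (not part of the timing test), the coact-based traversal can be consumed as a stream of Node objects, which arrive in ascending order of their data fields. The procedure below is my own sketch (the name print_sorted is invented), and assumes it is added to nodes.icn, where the Node class and the io import are already available :-

# Print a tree's values in ascending order by draining a co-expression
# wrapped around the coact-based in-order traversal.
procedure print_sorted(root)
   local e, n
   e := create root.traverse()
   while n := @e do
      write(n.data)
end

This is the same driving pattern as nodes.icn's main, except that it actually uses the transmitted nodes rather than discarding them.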

With the default tree size, I get the following output :-

$ oit -s ./nodes.icn -x
Creating tree with 250000 entries in range 1..125000
Complete
Tree depth = 41, number of traversals = 8
1467: procedure
893: coact()
612: @&source

Here is a much larger tree (this takes some time to build).

$ oit -s ./nodes.icn -x 5000000
Creating tree with 5000000 entries in range 1..2500000
Complete
Tree depth = 55, number of traversals = 1
4389: procedure
2408: coact()
1686: @&source

Finally, here is a small tree with only 200 entries.

$ oit -s ./nodes.icn -x 200
Creating tree with 200 entries in range 1..100
Complete
Tree depth = 16, number of traversals = 10000
769: procedure
657: coact()
388: @&source