Computing perfect numbers with bigloop

Objective

In this page, I explain how to use bigloop to compute the perfect numbers less than 2^25. The main routine is completely naive; it is just a pedagogical example. Recall that a perfect number is a positive integer equal to the sum of its proper positive divisors; check the wiki pages if you really want to know more, but it is not necessary.
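For instance, 6 = 1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14 are perfect, whereas 12 is not, since 1 + 2 + 3 + 4 + 6 = 16.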

Client code

Here is a copy of the client code. The client checks the perfectness of the integers in the range [first:last[. The segments are submitted to the client by the loop manager. Along the loop, the client sends the perfect values it discovers to the bigloop manager. When a segment is complete, it sends information about the effort spent: the work factor.
#include "bigloop.h"
#include 

ullong wkf;

int isperfect( ullong z )
{ ullong d;
  ullong s=1;
  for( d = 2; d*d <=z; d++ )
    if ( z % d == 0 ) {
        wkf++;
	s+= d;
	if ( d*d != z ) s+= z/d;
	if ( s > z ) return 0;
    }
  if ( s == z ) return 1;
  return 0;
}


int main(int argc, char *argv[])
{
    ullong s;

    if (!bigloopargs(argc, argv))       /* parse the bigloop options */
        exit(1);

    initbigloop();

    if (biglregister())                 /* register with the loop manager */
        while (getbigljob()) {          /* receive a segment [first:last[ */
            wkf = 0;
            s = first;
            while (s < last) {
                if ( isperfect( s ) )
                    sendvalue( s );     /* report a perfect value to the manager */
                s++;
            }
            closeloop(wkf);             /* report the effort of the segment */
            sendend();
        }
    return 0;
}
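By the way, the divisor-sum test can be tried without any bigloop machinery. Here is a minimal standalone sketch (only for local testing; the small range and the output format are my own choices, not part of bigloop):

/* standalone test of the naive divisor-sum check, no bigloop involved */
#include <stdio.h>

typedef unsigned long long ullong;

int isperfect( ullong z )
{
    ullong d, s = 1;
    for ( d = 2; d*d <= z; d++ )
        if ( z % d == 0 ) {
            s += d;
            if ( d*d != z ) s += z/d;
            if ( s > z ) return 0;
        }
    return s == z;
}

int main(void)
{
    ullong z;
    for ( z = 2; z < 10000; z++ )      /* a tiny local range */
        if ( isperfect( z ) )
            printf( "%llu\n", z );     /* prints 6, 28, 496, 8128 */
    return 0;
}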

Typical result

bigloop finds 6 perfect numbers, 1, 6, 28, 496, 8128 and 33550336, in the interval [1:2^25[. (Strictly speaking, 1 is not perfect; the naive routine reports it because the sum s is initialised to 1, so s equals z when z = 1.) The running time was about 2 minutes, using 6 hosts and 22 processors, for a total CPU time of 1118 seconds.
departure     : Sat Mar 17 21:24:47 2012

arrival       : Sat Mar 17 21:27:06 2012

elapse time   : 139
cpu time      : 1118
job count     : 128
processors    : 22
work factor   : 135671103
sec / wkf     : 1.0000
 perf ms            ip                     wkf     cpu  job cpu
   0.692    10.2.73.58                34535842     197   33  17 imath-gpu
   0.752    10.2.73.60                51336406     318   48  28 imath01
   1.404   10.2.178.62                17804199     206   17  18 bleue.univ-tln.fr
   1.486   10.2.178.81                 5306941      65    5   5 icosapus.univ-tln.fr
   1.510     10.2.81.2                26687715     332   25  29 wavester.univ-tln.fr

Two nice pictures describe the work factor and the computation time along the loop:
[ work factor ] and [ time report ]
As we can see, neither the work factor distribution nor the running time distribution is anywhere near flat! This aspect will be taken into account in future developments.

Terminal shot

Now, I comment on the commands used to carry out the computation.
Let us look at the configuration file:
[demo]$ cat bigloop.conf 
# bigloop configuration
#ou812.univ-tln.fr
CLIENT=bigl.exe
address=10.2.73.86
port=31415
ident=25
first=1
#uppercase means a power of 2
Last=25
Step=18
This means that the bigloop manager will run on my computer ou812.univ-tln.fr, listening on the PI port (31415); 25 identifies the loop, and it will loop from 1 to 2^25 by steps of 2^18, that is 128 steps. The client bigl.exe, this configuration file and the client program need to be installed on the hosts listed in the grappe file.
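To check the arithmetic, 2^25 / 2^18 = 2^7 = 128 segments of 2^18 consecutive integers each. A small illustrative sketch of this segmentation (the variable names and the output below are mine, not the bigloop internals):

/* illustrative only: count the segments of [1 : 2^25[ cut by steps of 2^18 */
#include <stdio.h>

int main(void)
{
    unsigned long long last  = 1ULL << 25;   /* Last=25 in bigloop.conf */
    unsigned long long step  = 1ULL << 18;   /* Step=18 in bigloop.conf */
    unsigned long long lo, count = 0;

    for ( lo = 1; lo < last; lo += step )    /* first=1 in bigloop.conf */
        count++;
    printf( "%llu segments\n", count );      /* prints 128 segments */
    return 0;
}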
[demo]$ cat grappe.grp 
#imathgpu
10.2.73.58   langevin  6  std
#hexapus 16 
10.2.178.62  langevin  4  std
#icosapus  16
10.2.178.81  langevin  2 std
#imath01  12
10.2.73.60   langevin 12  std
#imath02  24
#10.2.73.204 langevin  12 std
#wavester
10.2.81.2 langevin 4 std
#cluster wavester
10.2.81.2 langevin 4 sge
#maitinfo 16
#10.9.185.217-232 langevin  2 std
As usual, # marks a comment. With such a configuration, I plan to start 6 processes on imathgpu, 4 on hexapus, 2 on icosapus, etc. The std flag means that bigloop runs the processes in the usual way; some processes will run behind the cluster wavester managed by SGE, which is what the sge flag means.
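For completeness, each non-comment grappe line can be read as four fields: an address, a user name, a number of processes and a flag. A minimal parsing sketch (the field names are my own, only for illustration):

/* illustrative parser for one grappe.grp line: "address user nproc flag" */
#include <stdio.h>

int main(void)
{
    const char *line = "10.2.73.58   langevin  6  std";
    char address[64], user[64], flag[16];
    int nproc;

    if ( line[0] == '#' )     /* skip comment lines */
        return 0;
    if ( sscanf( line, "%63s %63s %d %15s", address, user, &nproc, flag ) == 4 )
        printf( "start %d process(es) on %s as %s, %s mode\n",
                nproc, address, user, flag );
    return 0;
}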
Now, we install the code.
[demo]$ biglinst.sh 
target bigloop-0.2/demo
#imathgpu
10.2.73.58 langevin 6 std
 langevin 6 
sending incremental file list

sent 54 bytes  received 12 bytes  132.00 bytes/sec
total size is 332  speedup is 5.03
sending incremental file list

sent 58 bytes  received 12 bytes  46.67 bytes/sec
total size is 152  speedup is 2.17
sending incremental file list

sent 95 bytes  received 13 bytes  216.00 bytes/sec
total size is 1631  speedup is 15.10
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#hexapus 16
10.2.178.62 langevin 4 std
10.2.73.58 langevin 4 
sending incremental file list

sent 54 bytes  received 12 bytes  132.00 bytes/sec
total size is 332  speedup is 5.03
sending incremental file list

sent 58 bytes  received 12 bytes  46.67 bytes/sec
total size is 152  speedup is 2.17
sending incremental file list

sent 95 bytes  received 13 bytes  216.00 bytes/sec
total size is 1631  speedup is 15.10
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#icosapus 16
10.2.178.81 langevin 2 std
10.2.178.62 langevin 2 
sending incremental file list

sent 54 bytes  received 12 bytes  132.00 bytes/sec
total size is 332  speedup is 5.03
sending incremental file list

sent 58 bytes  received 12 bytes  140.00 bytes/sec
total size is 152  speedup is 2.17
sending incremental file list

sent 95 bytes  received 13 bytes  216.00 bytes/sec
total size is 1631  speedup is 15.10
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#imath01 12
10.2.73.60 langevin 12 std
10.2.178.81 langevin 12 
building file list ... done

sent 85 bytes  received 20 bytes  210.00 bytes/sec
total size is 332  speedup is 3.16
building file list ... done

sent 89 bytes  received 20 bytes  218.00 bytes/sec
total size is 152  speedup is 1.39
building file list ... done

sent 120 bytes  received 20 bytes  280.00 bytes/sec
total size is 1631  speedup is 11.65
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#imath02 24
#10.2.73.204 langevin 12 std
#wavester
10.2.81.2 langevin 4 std
10.2.73.60 langevin 4 
building file list ... done

sent 85 bytes  received 20 bytes  70.00 bytes/sec
total size is 332  speedup is 3.16
building file list ... done

sent 89 bytes  received 20 bytes  218.00 bytes/sec
total size is 152  speedup is 1.39
building file list ... done

sent 120 bytes  received 20 bytes  93.33 bytes/sec
total size is 1631  speedup is 11.65
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#cluster wavester
10.2.81.2 langevin 4 sge
10.2.81.2 langevin 4 
building file list ... done

sent 85 bytes  received 20 bytes  210.00 bytes/sec
total size is 332  speedup is 3.16
building file list ... done

sent 89 bytes  received 20 bytes  218.00 bytes/sec
total size is 152  speedup is 1.39
building file list ... done

sent 120 bytes  received 20 bytes  93.33 bytes/sec
total size is 1631  speedup is 11.65
make: Entering directory `/home/langevin/bigloop-0.2/demo'
make: Nothing to be done for `all'.
make: Leaving directory `/home/langevin/bigloop-0.2/demo'
#maitinfo 16
#10.9.185.217-232 langevin 2 std

Let us look at the activity on the hosts of the grappe.
[demo]$ biglcmd.sh 
target bigloop-0.2/demo
#imathgpu
10.2.73.58 langevin 6 std
 20:47:05 up 30 days,  8:55,  7 users,  load average: 4.73, 3.97, 2.32
#hexapus 16
10.2.178.62 langevin 4 std
 20:48:07 up 136 days,  9:59,  0 users,  load average: 11.06, 10.58, 9.67
#icosapus 16
10.2.178.81 langevin 2 std
 20:47:06 up 136 days, 10:02,  0 users,  load average: 18.95, 18.77, 18.38
#imath01 12
10.2.73.60 langevin 12 std
 20:47:06 up 31 days, 11:17,  3 users,  load average: 11.99, 10.15, 5.80
#imath02 24
#10.2.73.204 langevin 12 std
#wavester
10.2.81.2 langevin 4 std
  8:47pm  up 83 days  0:59,  0 users,  load average: 3.11, 2.54, 1.31
#cluster wavester
10.2.81.2 langevin 4 sge
  8:47pm  up 83 days  0:59,  0 users,  load average: 3.11, 2.54, 1.31
#maitinfo 16
#10.9.185.217-232 langevin 2 std
6 actions
We see that icosapus is the busiest, and some people are logged in on imathgpu. It does not matter much: all the processes will be niced.
We start the bigloop manager.
using server : 10.2.73.86-31415
identificator: 25 
loop from 1 to 33554432 by 262144 (127 steps)
usually it takes a while...

The manager waits for the clients. Let us invite the hosts of the grappe to join the loop!
[demo]$ bigljoin.sh 
looping demo
#imathgpu
10.2.73.58
10.2.73.58 langevin 6
6 cpu detected
charge 0.07 1.09 2.48
2.48
starting 4 processus
 [ 4 ] client started
 [ 3 ] client started
 [ 2 ] client started
 [ 1 ] client started
#hexapus
10.2.178.62
10.2.178.62 langevin 4
16 cpu detected
charge 8.00 8.63 9.65
9.65
starting 4 processus
 [ 4 ] client started
 [ 3 ] client started
 [ 2 ] client started
 [ 1 ] client started
#icosapus
10.2.178.81
10.2.178.81 langevin 2
24 cpu detected
charge 18.00 18.16 18.42
18.42
starting 1 processus
 [ 1 ] client started
#imath01
10.2.73.60
10.2.73.60 langevin 12
12 cpu detected
charge 1.00 2.76 5.80
5.80
starting 6 processus
 [ 6 ] client started
 [ 5 ] client started
 [ 4 ] client started
 [ 3 ] client started
 [ 2 ] client started
 [ 1 ] client started
#imath02
#10.2.73.204
#wavester
10.2.81.2
10.2.81.2 langevin 4
4 cpu detected
charge 0.01 0.49 1.34
1.34
starting 3 processus
 [ 3 ] client started
 [ 2 ] client started
 [ 1 ] client started
#cluster
10.2.81.2
10.2.81.2 langevin 4
0 are running
starting 4 processus
Your job 5656 ("bigl.exe") has been submitted
[ 4 ] 
Your job 5657 ("bigl.exe") has been submitted
[ 3 ] 
Your job 5658 ("bigl.exe") has been submitted
[ 2 ] 
Your job 5659 ("bigl.exe") has been submitted
[ 1 ] 
#maitinfo
#10.9.185.217-232

Let us check the output of the manager:
....
step=128/127 actifs : 2
GET 25:4 0 0 0 :  10.2.178.62:2014
STP 25:4 0 0 0 :  10.2.178.62:2014
step=128/127 actifs : 1
VAL 25:8 33292289 33554432 33550336 :  10.2.178.81:43690
step=128/127 actifs : 1
END 25:8 33292289 33554432 1111435 :  10.2.178.81:57830
19 sec on 10.2.178.81
step=128/127 actifs : 1
GET 25:8 0 0 0 :  10.2.178.81:49548
STP 25:8 0 0 0 :  10.2.178.81:49548
step=128/127 actifs : 0
waiting for zombies...
eoj
Oops! It is already finished...
[demo]$ ls log/
info-25.log  output-25.log  proc-25.log
[demo]$ cat log/output-25.log 
value=1
value=6
value=28
value=496
value=8128
value=33550336

departure     : Sat Mar 17 21:24:47 2012

arrival       : Sat Mar 17 21:27:06 2012

elapse time   : 139
cpu time      : 1118
job count     : 128
processors    : 22
work factor   : 135671103
sec / wkf     : 1.0000
 perf ms            ip                     wkf     cpu  job cpu
   0.692    10.2.73.58                34535842     197   33  17 imath-gpu
   0.752    10.2.73.60                51336406     318   48  28 imath01
   1.404   10.2.178.62                17804199     206   17  18 bleue.univ-tln.fr
   1.486   10.2.178.81                 5306941      65    5   5 icosapus.univ-tln.fr
   1.510     10.2.81.2                26687715     332   25  29 wavester.univ-tln.fr

Well, imathgpu appears to be the most powerful computer of the grappe. Now, we draw the pictures.
[demo]$ biglgraph.sh 25
making pictures
Could not find/open font when opening font "arial", using internal non-scalable font
Could not find/open font when opening font "arial", using internal non-scalable font
A last look at our computer friends:
[demo]$ biglstat.sh 
target bigloop-0.2/demo
grappe.grp for status
#imathgpu
10.2.73.58
#hexapus
10.2.178.62
#icosapus
#10.2.178.81
#imath01
10.2.73.60
#imath02
#10.2.73.204
#wavester
10.2.81.2
#cluster
10.2.81.2
#maitinfo
#10.9.185.217-232

5 hosts
This means that no client is running there; all of them were properly stopped. The traces of the computations are in the data directory:
[demo]$ biglcmd.sh -c "ls data/*"
target bigloop-0.2/demo
#imathgpu
10.2.73.58 langevin 6 std
data/trace-25-0.log
data/trace-25-1.log
data/trace-25-2.log
data/trace-25-3.log
data/trace-25-4.log
data/trace-25-6.log
#hexapus 16
10.2.178.62 langevin 4 std
data/trace-25-5.log
data/trace-25-7.log
data/trace-25-8.log
data/trace-25-9.log
#icosapus 16
#10.2.178.81 langevin 1 std
#imath01 12
10.2.73.60 langevin 12 std
data/trace-25-10.log
data/trace-25-11.log
data/trace-25-12.log
data/trace-25-13.log
data/trace-25-14.log
data/trace-25-15.log
data/trace-25-16.log
data/trace-25-17.log
data/trace-25-18.log
data/trace-25-19.log
data/trace-25-20.log
data/trace-25-22.log
#imath02 24
#10.2.73.204 langevin 12 std
#wavester
10.2.81.2 langevin 4 std
data/trace-25-21.log
data/trace-25-23.log
data/trace-25-24.log
data/trace-25-25.log
data/trace-25-26.log
data/trace-25-27.log
data/trace-25-28.log
data/trace-25-29.log
#cluster wavester
10.2.81.2 langevin 4 sge
data/trace-25-21.log
data/trace-25-23.log
data/trace-25-24.log
data/trace-25-25.log
data/trace-25-26.log
data/trace-25-27.log
data/trace-25-28.log
data/trace-25-29.log
#maitinfo 16
#10.9.185.217-232 langevin 2 std
5 actions
But we do not need these data anymore:
[demo]$ biglcmd.sh -c "rm data/*"
target bigloop-0.2/demo
#imathgpu
10.2.73.58 langevin 6 std
#hexapus 16
10.2.178.62 langevin 4 std
#icosapus 16
#10.2.178.81 langevin 1 std
#imath01 12
10.2.73.60 langevin 12 std
#imath02 24
#10.2.73.204 langevin 12 std
#wavester
10.2.81.2 langevin 4 std
#cluster wavester
10.2.81.2 langevin 4 sge
rm: cannot remove `data/*': No such file or directory
#maitinfo 16
#10.9.185.217-232 langevin 2 std
5 actions

Philippe Langevin, last modification on 20 March 2012.