The primary factor that influences the runtime of celldetect is the size of the dataset. Two secondary factors are the number of cell sizes to examine and the actual number of detected sources.
The three most important factors that affect the runtime of wavdetect are: (1) the size of the dataset, (2) the number of wavelet scales, and (3) the number of background cleaning iterations. The first two factors influence both wtransform and wrecon; the third affects only wtransform. An additional factor is the size of the wavelet: at large scales the dataset must be padded, which artificially increases the size of the data.
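The effect of padding on the dataset size can be illustrated with a toy calculation. The padding rule below (a few wavelet scales on each side) is an assumption chosen purely for illustration, not wavdetect's actual padding algorithm:

```python
import math

def padded_size(n_pixels: int, scale: float, support: int = 3) -> int:
    """Hypothetical padding rule: extend each side of the image by
    `support` wavelet scales so the wavelet fits without wrapping."""
    pad = math.ceil(support * scale)
    return n_pixels + 2 * pad

# A 1024-pixel-wide image grows noticeably at large scales:
for scale in (1, 4, 16):
    print(scale, padded_size(1024, scale))
```

Under this rule, a 1024-pixel image analyzed at scale 16 would grow to 1120 pixels on a side, i.e. the pixel count (and hence memory and runtime) increases even though the data themselves do not.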
In general, wavdetect uses a lot of computer memory; pixels should be considered the practical limit for the size of the input dataset.
While the runtime of vtpdetect does not depend on the spatial size of the dataset, it does depend on many other factors and is rather unpredictable. The most important factors are the number of unique photon locations and the overall number of events.
vtpdetect will run quickly if the number of photons is low and there is high contrast between the background and the sources. In the opposite scenario (i.e. a large number of photons and a large number of faint sources), a vtpdetect run can be very long, since fitting the background becomes an arduous task. In such situations, tests have shown that vtpdetect becomes very slow if the observation contains more than photons.
To give the reader a rough idea of the performance, we ran all three tools on various subsets of two simulated HRC-I observations. One simulation contained a set of sources on top of a flat background. The second contained only a flat background, but had an exposure 10 times longer than the first and therefore roughly 10 times as many background events.
Table 3.1 shows the runtimes on a Sun Ultra 1 with a 167 MHz processor and 124 MB of RAM. The same runs on a Sun SPARCstation 5 with a 70 MHz CPU and 64 MB of RAM were 3 times slower. The runtimes of celldetect and wavdetect are not affected by the increase in the number of events; the major factor is the size of the dataset. For vtpdetect, however, the number of events is the primary factor; vtpdetect slows down considerably when the number of events is large.
In the runs reported in Table 3.1, only one wavelet scale was analyzed for wavdetect, and only one background iteration was performed for both wavdetect and vtpdetect. To a first approximation, the execution time of wrecon and vtpdetect increases linearly with the number of iterations. Also, each additional scale in wavdetect increases the runtime by roughly the time required for one scale.
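These scaling rules can be summarized in a back-of-the-envelope runtime model. The sketch below assumes, as described above, that each scale costs about one single-scale run and that iteration cost is linear; the baseline timings in the example are placeholders, not measurements from Table 3.1:

```python
def estimated_wavdetect_runtime(t_one_scale: float,
                                n_scales: int,
                                t_iteration: float,
                                n_iterations: int) -> float:
    """Rough runtime model: each wavelet scale costs about one
    single-scale run, and each background-cleaning iteration adds a
    roughly constant extra amount of time."""
    return n_scales * t_one_scale + n_iterations * t_iteration

# Placeholder timings (seconds): one scale = 60 s, one cleaning
# iteration = 20 s.  Five scales and three iterations:
print(estimated_wavdetect_runtime(60.0, 5, 20.0, 3))  # -> 360.0
```

Such a model is only a first approximation, but it makes clear why a five-scale, multi-iteration run is several times more expensive than the single-scale, single-iteration runs reported in Table 3.1.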
As described in Section 2.4, we chose ObsID 2405 as a test field because we have determined all the real sources in the full exposure. ObsID 2405 is approximately 10% of the size of the full CDFS. For celldetect, we used an S/N threshold of 3, an encircled energy of 0.8, and the PSF library of April 2001. For wavdetect, we used scales of 1, 2, 4, 8, and 16 and sigthresh = 10^-6. For vtpdetect, we used scale = 1 and a maximum probability of being a false source (limit) equal to 10^-5 and 10^-6. The results are shown in Table 3.2.
The distinction between "spurious" and "edge" is somewhat subjective. In the case of wavdetect, the edge detections were not actually on the edge, but close to it. The percentage of spurious detections excludes the edge detections (i.e. if we had used an exposure map, they probably would not have been detected). The values presented here are indicative; each tool can be made to perform differently by adjusting its parameter values.