On hypothesis testing, trials factor, hypertests and the BumpHunter
Abstract
A detailed presentation of hypothesis testing is given. The “look elsewhere” effect is illustrated, and a treatment of the trials factor is proposed with the introduction of hypothesis hypertests. An example of such a hypertest is presented, named BumpHunter, which is used in ATLAS [1], and in an earlier version also in CDF [2], to search for exotic phenomena in high energy physics. As a demonstration, the BumpHunter is used to address Problem 1 of the Banff Challenge [3].
1 Introduction
The goal of the BumpHunter is to point out the presence of a local data excess, like those caused by resonant production of massive particles in particle physics [1]. Such features are colloquially called “bumps”, hence the name “BumpHunter”. More specifically, the BumpHunter is a test that locates the most significant bump, namely the region where the data are most deviant from the null hypothesis. Based on this bump, the test returns a $p$-value, corresponding to its Type-I error probability. This is done in a way that accounts for the “trials factor”.
For the reader who may not be familiar with the terminology of hypothesis tests, a thorough discussion follows. Another account of hypothesis testing can be found in [4]. A similar discussion on trials factor can be found in [5].
In the following paragraphs we spell out issues that are often misunderstood, such as the interpretation of $p$-values and the issue of the “trials factor”. A solution is provided to account for the latter, by introducing the notion of a hypothesis hypertest. The discussion that follows is not limited to the BumpHunter; the latter is just a practical application.
After presenting the BumpHunter algorithm, a demonstration is made, based on Problem 1 of the Banff Challenge [3].
1.1 Hypothesis tests and $p$-values
There are several statistical tests to evaluate whether some data are consistent with a specific hypothesis. Two famous examples are Pearson’s $\chi^2$ test, and the Kolmogorov-Smirnov (KS) test. The BumpHunter is one more test in this category.
In all tests of this kind, often called “hypothesis tests” or “goodness of fit tests”, one has some data $D$ and a hypothesis, which typically is the “null”, or “0-signal”, or “background” hypothesis, denoted $H_0$. One could test the consistency of $D$ with any hypothesis, but $H_0$ is usually chosen, because typically a discovery can be claimed by establishing that the data are inconsistent with the “standard” theory, without necessarily having to show that they are consistent with some alternative theory. Once inconsistency with $H_0$ is established, several alternative signal hypotheses can be tested to characterize the discovery. For example, we can assume that the signal follows a specific distribution, and estimate its amount, either by Bayesian inference, or by defining frequentist confidence intervals (CIs). It helps, conceptually, to distinguish hypothesis tests, like $\chi^2$ or the BumpHunter, from Bayesian inference and frequentist CI-setting methods^{1}^{1}1 There is, actually, a connection between hypothesis tests and frequentist CIs, which is explained in this footnote in the hope of avoiding confusion. One can assume any kind of signal, and set a lower limit on the amount of this signal that may exist in $D$, using the classical Neyman construction, where the statistic of some test is used as observable; to be specific, let’s say $\chi^2$ is used to construct the Neyman band. If the resulting CI doesn’t contain the value 0 for signal, then $H_0$ is excluded, in the frequentist sense, namely in the sense that 0-signal is not contained in a CI characterized by some Confidence Level (CL). The smallest CL for which the corresponding semi-infinite CI includes the value 0 for signal is determined by the $p$-value of the hypothesis test (the $\chi^2$ test in this case) which compares $D$ to $H_0$. This is the case for any assumed signal shape.
One can use as observable the value of $\chi^2$ or of the BumpHunter statistic (defined below) to make a Bayesian inference, or to set a frequentist CI on the amount of a specific signal that may exist in the data, but the BumpHunter is designed to address a different question, for which only $D$ and $H_0$ are required, and no specific signal is assumed; hence its model-independence.
All hypothesis tests, including the BumpHunter, work as follows:
1. $D$ is compared to $H_0$, and their difference is quantified by a single number. This number is called “the statistic” of the test, or “test statistic”, and in this document it is denoted by $t$. For example, in the $\chi^2$ test, the statistic is
$$\chi^2 = \sum_{i} \frac{(d_i - b_i)^2}{b_i} \tag{1}$$

where $d_i$ denotes the observed events in bin $i$, and $b_i$ the events expected by $H_0$ in the same bin. The statistic in the KS test is the biggest difference between the cumulative distribution of the data and the cumulative distribution expected by $H_0$. We will present later the exact definition of the BumpHunter statistic, but it follows the same logic: the bigger the difference between the data and $H_0$, the bigger the test statistic.
2. Pseudodata are generated, following the expectation of $H_0$. In each pseudodata spectrum, the same test statistic is computed, comparing the pseudodata to $H_0$. The distribution of test statistics from pseudo-experiments is made. The achievement of Pearson, Kolmogorov and Smirnov was that they calculated analytically the distribution of the statistic of their tests under $H_0$. For example, Pearson showed that, under some assumptions of gaussianity, his statistic follows a $\chi^2$ distribution. Nowadays, computers make it possible to estimate numerically the distribution of any test statistic.
3. Calculate the $p$-value of the test. The $p$-value is the probability that, when $H_0$ is assumed, the test statistic will be equal to, or greater than^{2}^{2}2The convention used is that the test statistic becomes greater as the discrepancy increases; otherwise the $p$-value would be defined as $P(t \le t_{obs})$. the test statistic obtained by comparing the actual data ($D$) to $H_0$:
$$p\text{-value} = P(t \ge t_{obs}) \tag{2}$$

where the test statistic $t$ is a random variable, since it depends on how pseudodata fluctuate around $H_0$, and $t_{obs}$ is the observed statistic from comparing $D$ to $H_0$. If the exact probability density function (PDF) of $t$ under $H_0$ is known, $f(t)$, then the $p$-value is computed exactly as $\int_{t_{obs}}^{\infty} f(t)\,dt$. When the $p$-value is estimated using pseudo-experiments, as is the case for the BumpHunter, then the $p$-value is estimated as a binomial success probability. Using Bayes’ theorem, if $N$ pseudo-experiments are produced, of which $S$ had $t \ge t_{obs}$, we infer
$$P(p\text{-value} \mid S) = k \binom{N}{S} (p\text{-value})^{S} (1-p\text{-value})^{N-S}\, \pi(p\text{-value}) \tag{3}$$

where $k$ is a normalization constant, and $\pi(p\text{-value})$ is the prior assumed. If we assume a flat prior, $\pi(p\text{-value}) = 1$, which is a reasonable choice, the result becomes
$$P(p\text{-value} \mid S) = \frac{(N+1)!}{S!\,(N-S)!} (p\text{-value})^{S} (1-p\text{-value})^{N-S} \tag{4}$$

According to this posterior distribution, the most likely $p$-value is $\frac{S}{N}$.
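As a concrete illustration of the first step, the two example statistics just described can be computed for binned counts with a few lines of Python. This is only a sketch; the helper names are ours, not part of any standard package:

```python
import numpy as np

def chi2_statistic(d, b):
    """Pearson's chi-squared statistic (eq. 1): sum over bins of (d_i - b_i)^2 / b_i."""
    d, b = np.asarray(d, float), np.asarray(b, float)
    return float(np.sum((d - b) ** 2 / b))

def ks_statistic(d, b):
    """KS-like statistic for binned data: the largest difference between
    the two cumulative distributions, each normalized to 1."""
    d, b = np.asarray(d, float), np.asarray(b, float)
    return float(np.max(np.abs(np.cumsum(d) / d.sum() - np.cumsum(b) / b.sum())))
```

In both cases, a larger value signals a larger discrepancy between the data and $H_0$, as the convention of footnote 2 requires.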
So, the final product of a hypothesis test of this kind is a $p$-value. Ideally, the $p$-value would be precisely computed, but in practice it has to be estimated from a finite set of pseudo-experiments. We will explain next how the $p$-value can be interpreted, and why it is so useful.
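The pseudo-experiment procedure of steps 2 and 3 can likewise be sketched in code, with the $p$-value estimated as the mode $S/N$ of eq. 4. A sketch under simplifying assumptions: Poisson-fluctuating bins, and any test statistic passed in as a function (the names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2_statistic(d, b):
    """Example statistic; any other test statistic could be used instead."""
    return float(np.sum((np.asarray(d, float) - b) ** 2 / b))

def estimate_pvalue(d_obs, b, statistic, n_pe=10_000):
    """Estimate the p-value as the fraction S/N of pseudo-experiments,
    drawn from H0, whose statistic is >= the observed one."""
    b = np.asarray(b, float)
    t_obs = statistic(d_obs, b)
    t_pe = [statistic(rng.poisson(b), b) for _ in range(n_pe)]
    S = sum(t >= t_obs for t in t_pe)
    return S / n_pe
```

With a finite number of pseudo-experiments the result carries the binomial uncertainty implied by the posterior of eq. 4.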
1.2 What does the $p$-value mean?
It will be shown that the $p$-value is interpretable as a false-discovery probability. To reach that interpretation systematically, and to clarify what it means, we will first prove a simple theorem.
1.2.1 A simple theorem about $p$-values
Assume a decision algorithm which declares discovery (i.e. rules out $H_0$) if $p\text{-value} \le \alpha$, where $\alpha$ is an arbitrary parameter of the algorithm. It will be shown that the probability of this algorithm to wrongly rule out $H_0$ is $\alpha$, no matter what hypothesis test the $p$-value is coming from, under one condition: that there be a solution $t_\alpha$ to the equation $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$, where $f(t)$ is the PDF followed by the test statistic ($t$) under $H_0$.
The probability to wrongly rule out $H_0$, which is named “Type-I error” probability, is the probability to find $p\text{-value} \le \alpha$ while $H_0$ holds, namely
$$P(\text{Type-I error}) = P(p\text{-value} \le \alpha \mid H_0) \tag{5}$$
which can be spelled out more clearly, using the definition of the $p$-value (eq. 2):
$$P(\text{Type-I error}) = P\left( P(t \ge t_{obs} \mid H_0) \le \alpha \mid H_0 \right) \tag{6}$$
In eq. 2, $t$ was a random variable and $t_{obs}$ was a fixed number, which depended on the real data $D$ and on $H_0$. In eq. 6, both $t$ and $t_{obs}$ are random variables, because we don’t have a fixed observed dataset $D$; we are instead trying to calculate the probability that $D$ will be such that $t_{obs}$ will satisfy $P(t \ge t_{obs} \mid H_0) \le \alpha$. In other words, eq. 6 is the probability of drawing a random variable $t_{obs}$, such that the random variable $t$ will have probability less than $\alpha$ to be greater than $t_{obs}$. That happens if $t_{obs} \ge t_\alpha$, where $t_\alpha$ is defined by $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$, with $f$ being the PDF followed by $t$.^{3}^{3}3 The equation $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$ needs to have a solution; if $t_\alpha$ doesn’t exist, the rest of the proof fails. For example, consider $f(t) = \delta_{t,0}$, where $\delta$ is the Kronecker function. In this case, there is no $t_\alpha$ that satisfies the equation for $0 < \alpha < 1$, because if $t_\alpha \le 0$ then $\int_{t_\alpha}^{\infty} f(t)\,dt = 1$, and if $t_\alpha > 0$ then $\int_{t_\alpha}^{\infty} f(t)\,dt = 0$. If $f$ is continuous, then a $t_\alpha$ exists for any $\alpha \in [0,1]$. Most test statistics based on event counts don’t follow a continuous PDF, due to event counts being discrete. Another possible reason for $t$ not to follow a continuous PDF is the imposition of conditions, as we will see in paragraph 2. So, there are specific values of $\alpha$ for which this theorem is exactly true; for other values of $\alpha$, the probability to wrongly exclude $H_0$ is not exactly $\alpha$. However, we will explain in paragraph 1.2.2 that this theorem’s condition is met if we set $\alpha$ equal to an observed $p$-value, which allows any observed $p$-value to be exactly interpreted as a Type-I error probability. The probability for $t_{obs}$ to be greater than $t_\alpha$ is $\int_{t_\alpha}^{\infty} g(t_{obs})\,dt_{obs}$, where $g$ is the PDF followed by $t_{obs}$. So, eq. 6 can be written
$$P(\text{Type-I error}) = \int_{t_\alpha}^{\infty} g(t_{obs})\,dt_{obs} \tag{7}$$
By looking back at eq. 6, we see that $t$ fluctuates according to how pseudodata fluctuate around $H_0$, as implied by the conditional in $P(t \ge t_{obs} \mid H_0)$. At the same time, $t_{obs}$ fluctuates according to how pseudodata fluctuate around $H_0$, as implied by the rightmost conditional in eq. 6. So, both $t$ and $t_{obs}$ are drawn from the same distribution, namely $g = f$. Therefore, eq. 7 becomes
$$P(\text{Type-I error}) = \int_{t_\alpha}^{\infty} f(t)\,dt = \alpha \tag{8}$$
This is an important result, and it is what makes $p$-values useful. We showed that, no matter how we define the test statistic $t$, if we use the resulting $p$-value in a discovery algorithm that declares discovery when $p\text{-value} \le \alpha$, the Type-I error probability of that algorithm will be equal to $\alpha$. The only requirement is for a $t_\alpha$ to exist that satisfies $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$.
Corollary:
If a test statistic follows a continuous PDF under $H_0$, then the condition of the above theorem is satisfied for any value $\alpha \in [0,1]$; therefore $P(p\text{-value} \le \alpha \mid H_0) = \alpha$ for every $\alpha$; therefore the $p$-value of any such hypothesis test is a random variable that follows a uniform distribution between 0 and 1, when $H_0$ is true.
Note that if $f(t)$ is discontinuous, then this corollary does not follow, i.e. the $p$-value does not follow a uniform distribution between 0 and 1, but the previous theorem is still valid for the values of $\alpha$ for which $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$ has a solution. This is important, because it is often wrongly thought that if a $p$-value doesn’t follow a uniform distribution under $H_0$, then it cannot be correctly interpreted as a Type-I error probability. That is not true. In paragraph 1.2.2 we will see why.
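The corollary is easy to verify numerically. In the following sketch the test statistic is taken, for illustration only, to follow a standard normal distribution under $H_0$ (a continuous PDF), and each draw is converted to its $p$-value using the known survival function:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Draw the test statistic under H0 many times, and convert each draw to its
# p-value, p = P(t >= t_obs | H0), using the known continuous PDF.
t_obs = rng.standard_normal(200_000)
pvals = norm.sf(t_obs)

# Under H0, with a continuous statistic, the p-values are uniform on [0, 1]:
# P(p-value <= alpha) = alpha for any alpha.
```

Replacing the normal distribution with any discrete statistic (e.g. a Poisson count) would break the uniformity, while the theorem would still hold at the attainable values of $\alpha$.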
1.2.2 Interpretation of the $p$-value of a test
So, how should we interpret the $p$-value of a hypothesis test checking the consistency of a dataset $D$ with a hypothesis $H_0$?
Exploiting the theorem of paragraph 1.2.1, if we observe some $p$-value, we know that there is a discovery algorithm which would have ruled out $H_0$ based on this $p$-value, with Type-I error probability equal to that $p$-value. That algorithm is the one with parameter $\alpha$ equal to the observed $p$-value. If we set $\alpha$ smaller than the observed $p$-value, then the algorithm wouldn’t declare discovery for the observed $D$. If we set $\alpha$ larger, a discovery would still be declared, but such an algorithm would have a larger Type-I error probability, so it would be less reliable. Therefore, if we observe some $p$-value, then the discovery algorithm with the smallest Type-I error probability that would still declare discovery would do so with probability equal to the observed $p$-value of being wrong. In this sense, the observed $p$-value is a false-discovery probability. It is the smallest false-discovery probability we can have, if we declare $H_0$ to be false.
What if the test statistic of the hypothesis test follows a discontinuous PDF $f(t)$? We saw in 1.2.1 that in that case there can be some values of $\alpha$ for which the proof cannot proceed, because there is no $t_\alpha$ satisfying $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$. That, however, does not interfere with the interpretation of an observed $p$-value as a Type-I error probability. The reason is that an observed $p$-value will always be such that $\int_{t_\alpha}^{\infty} f(t)\,dt = \alpha$ has a solution when $\alpha$ is set equal to it; so the theorem of paragraph 1.2.1 will always hold if we set $\alpha$ equal to the observed $p$-value. How do we know that any observed $p$-value will always be such that this equation has a solution? We know, because otherwise that $p$-value couldn’t have been observed. Let’s take, for example, the discontinuous PDF that was mentioned in paragraph 1.2.1: $f(t) = \delta_{t,0}$. As mentioned earlier, the equation has no solution for $0 < \alpha < 1$, but that is precisely the range where an observed $p$-value couldn’t fall in any circumstance. If $t_{obs} = 0$, then the $p$-value will be $P(t \ge 0) = 1$. If $t_{obs} > 0$, then $p\text{-value} = P(t \ge t_{obs}) = 0$.
We showed, therefore, that any observed $p$-value will always be interpretable, thanks to the theorem of paragraph 1.2.1, as the smallest possible Type-I error probability of a discovery algorithm which would have declared discovery on the basis of the observed $D$. This interpretation is correct even if the conditions are not met for the corollary of 1.2.1 to be true, i.e. even if the $p$-value is not distributed uniformly in $[0,1]$ under $H_0$.
To prevent a common misinterpretation: if we find a $p$-value = 0.7, it doesn’t mean that $H_0$ is right with probability 70%. In strictly frequentist terms, the $p$-value is not a statement about $H_0$ itself, but about the Type-I error probability of an algorithm that would exclude $H_0$, as explained above^{4}^{4}4Equivalently, the $p$-value corresponds to the CL of a specific CI. See footnote 1.
1.3 Interpretation of multiple tests
If we run the KS test and find $p$-value = 0.7, we know that even the most reliable decision which would rule out $H_0$ on the grounds of the KS test would still have 70% probability of being wrong. With such high odds of being wrong, we couldn’t support a discovery claim. But the fact that KS doesn’t identify a big discrepancy doesn’t mean no other test will. For example, the data may follow the PDF predicted by $H_0$, but have a different population. Since the KS test compares cumulative distributions, it is insensitive to an overall normalization difference, while the $\chi^2$ test would notice it. So, if the $\chi^2$ test returns a tiny $p$-value, we can say that the most reliable decision which would rule out $H_0$ on the grounds of the $\chi^2$ test would have only that tiny probability of being wrong. With such high confidence, a discovery claim could be supported. This statement from $\chi^2$ does not contradict the one from KS. Both are correct, simultaneously. One says that the distribution shape agrees with $H_0$; the other says that the normalization doesn’t.
The above scenario illustrates why one can benefit from more than one statistical test. Each test is sensitive to different features, and we may not know a priori how $D$ may differ from $H_0$. Unless one is willing to limit the scope of his search to only one kind of discrepancy (e.g. shape discrepancy or normalization discrepancy), he needs to compare $D$ to $H_0$ in more than one way. To do so correctly, he must carefully take into account the “trials factor”, which is the subject of the next paragraph.
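The scenario above is easy to reproduce numerically: data with exactly the shape of $H_0$ but twice its normalization give a vanishing KS statistic while the $\chi^2$ statistic is large. A sketch with made-up numbers:

```python
import numpy as np

b = np.array([50.0, 40.0, 30.0, 20.0, 10.0])  # expected counts under H0
d = 2.0 * b                                   # same shape, double the normalization

# KS compares normalized cumulative distributions, so the factor 2 cancels:
ks = np.max(np.abs(np.cumsum(d) / d.sum() - np.cumsum(b) / b.sum()))

# chi2 compares bin contents directly, so the excess is fully visible:
chi2 = np.sum((d - b) ** 2 / b)
```

Here `ks` is exactly zero while `chi2` equals the total expected count, so the two tests reach opposite conclusions about the same data, both correctly, about different features.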
1.3.1 Ad-hoc tests, and the trials factor
Reading paragraph 1.3, one may be tempted to “engineer” more hypothesis tests, until one of them gives a small $p$-value that would allow him to rule out $H_0$ with great confidence. For example, imagine that the data are binned in $N$ small bins. In so many bins, it is only natural for one bin to fluctuate significantly from the prediction, even if $H_0$ is true. If a hypothesis test is engineered to look just at that bin, then the observed statistic ($t_{obs}$) will be very large, and the $p$-value will be very small, because pseudo-experiments will very rarely have as big a discrepancy in the same bin.
Even for such an ad-hoc test, everything we proved still holds. It would be technically correct that, based on this a-posteriori decided test, we could rule out $H_0$ with a tiny chance of being wrong. And yet, any minimally skeptical scientist should refuse to rule out $H_0$ based on this result. All it says, in essence, is that there is one out of the $N$ bins that is very discrepant. If we had stated it like that, it wouldn’t have sounded so dramatic, but that’s really what it means, and the reason is that the bin had not been chosen a priori, but after seeing $D$. If a different bin had fluctuated far from $H_0$, then another a-posteriori test would have been quoted, which would again rule out $H_0$ with high confidence, even if $H_0$ were true. This is what physicists refer to as “the look elsewhere effect”, or “the trials factor”, implying that each bin counts as a trial with its own chance of triggering a discovery, and the fact that there are many such trials has to be taken into account somehow. It will become clear later that the “trials” are actually not due to the many bins, but due to the many possible hypothesis tests one would be interested in considering simultaneously. In other words, the “look elsewhere effect” might better be called the “look in different ways effect”.
1.3.2 How to account for the trials factor – Hypertests
Continuing the example of the previous paragraph, to see if there is a single-bin fluctuation that is too unlikely under $H_0$, without having any prior preference for some bin, we will come up with a statistical test that considers all possible bins on an equal footing. It will have, like every hypothesis test, a statistic $t$ and a $p$-value corresponding to the observed statistic $t_{obs}$. Its $p$-value will follow the theorem of paragraph 1.2.1; it can therefore be interpreted as the Type-I error probability based on this test.
The hypothesis test that looks at all bins can be viewed as a hypertest, which combines all the specialized tests which focus on individual bins. These many tests are the many ways in which a discovery could be claimed. These many tests are the “trials”. We will see how to construct such a hypertest.
In our example, where the data are partitioned in $N$ bins, one could define $N$ hypothesis tests, each using one bin to define its test statistic. Each of these $N$ hypothesis tests can use any test statistic; they don’t even have to be the same. For example, for hypothesis tests that examine odd bins we could define the test statistic
$$t = \frac{(d-b)^2}{b} \tag{9}$$
where $d$ and $b$ are the observed data and the expectation of $H_0$, respectively, in the bin where each hypothesis test focuses. For hypothesis tests that examine even bins, we could define the test statistic
$$t = \frac{|d-b|}{\sqrt{b}} \tag{10}$$
No matter how we define these $N$ hypothesis tests, and regardless how numerically different their statistics may be, for each one of these tests there is an observed statistic $t_{obs}$, and a corresponding $p$-value in the interval $[0,1]$. For each one of these $p$-values, the theorem of paragraph 1.2.1 holds: if $H_0$ is true, then each one of these tests has probability $\alpha$ to return $p\text{-value} \le \alpha$.
In this example the $N$ tests are independent, meaning that the $p$-value of each test is statistically independent of the $p$-values of all the others, so the probability of at least one such hypothesis test giving a $p\text{-value} \le \alpha$ is
$$P(\text{at least one } p\text{-value} \le \alpha) = 1 - (1-\alpha)^N \tag{11}$$
In this case we may use the phrase “the trials factor is $N$”, meaning that this set of hypothesis tests consists of $N$ statistically independent tests. If, on the contrary, all $N$ tests were totally correlated, meaning that all $N$ tests always return identical $p$-values, then we would have
$$P(\text{at least one } p\text{-value} \le \alpha) = \alpha \tag{12}$$
In this case, we may say “the trials factor is 1”, meaning that, although there are many ($N$) hypothesis tests in the set we are considering, they count as 1, because they behave identically. In any intermediate case of partial independence, we can define a real number $N_{\text{eff}}$ such that
$$P(\text{at least one } p\text{-value} \le \alpha) = 1 - (1-\alpha)^{N_{\text{eff}}} \tag{13}$$
We can refer to $N_{\text{eff}}$ as the effective trials factor, which can take values between 1 and $N$. The value of $N_{\text{eff}}$ depends on $N$, on the way the hypothesis tests are correlated, and on $\alpha$. It should be clear at this point that the trials factor has little to do with how many bins there are in the data, or how many final states we consider in a search for new physics^{5}^{5}5More bins and more final states allow one to devise more hypothesis tests, but one doesn’t have to. It is really a function of the number of hypothesis tests that we employ, and of how their answers correlate.
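Eq. 11 is easy to check by simulation. For $N$ independent tests under $H_0$, each $p$-value is uniform (corollary of 1.2.1), and the chance that the smallest one drops below $\alpha$ matches $1-(1-\alpha)^N$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

N, alpha, n_pe = 10, 0.05, 200_000

# Each pseudo-experiment yields N independent p-values, uniform under H0.
p = rng.random((n_pe, N))

# How often does at least one of the N tests give p-value <= alpha?
frac = np.mean(p.min(axis=1) <= alpha)
expected = 1.0 - (1.0 - alpha) ** N   # eq. 11
```

Introducing correlations between the columns of `p` would push `frac` below `expected`, which is exactly what eq. 13 absorbs into $N_{\text{eff}} < N$.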
We just showed that a discovery algorithm that says “declare discovery if any of the $N$ tests gives $p\text{-value} \le \alpha$” does not have Type-I error probability equal to $\alpha$, but equal to $1-(1-\alpha)^{N_{\text{eff}}}$, which is larger than $\alpha$ whenever $N_{\text{eff}} > 1$. That is why we cannot look at a set of hypothesis tests (e.g. $N$ tests, each looking at a different bin), pick the smallest $p$-value, and interpret that as a Type-I error probability.
There is a way to account for the trials factor: by defining a new hypothesis test that is sensitive to the union of the features that each of the $N$ tests is sensitive to, and has a $p$-value which can be interpreted as a Type-I error probability. This new test combines the $N$ hypothesis tests, and uses as statistic the following:
$$t = -\log\left( \min_{i} \{ p\text{-value}_i \} \right) \tag{14}$$
In words, this new hypothesis test uses as statistic the negative logarithm of the smallest $p$-value. The negative logarithm is used to make $t$ increase monotonically as the smallest $p$-value decreases, following the convention that wants $t$ to increase with increasing discrepancy. Obviously the $-\log$ function could be replaced by any other monotonically decreasing function of the smallest $p$-value.
We refer to the new test as a hypertest, i.e. a union of many tests, because its statistic is a function of the $p$-value of some other hypothesis test from a predetermined set of hypothesis tests.
Every hypertest has an observed statistic $t_{obs}$ and a corresponding $p$-value, found as described in paragraph 1.1. This $p$-value quantifies how often such a small (or smaller) $p$-value would be returned by at least one of the hypothesis tests included in the set, under $H_0$. The $p$-value of this hypertest, like any $p$-value, obeys the theorem of paragraph 1.2.1. The $p$-value of this hypertest can be interpreted as described in paragraph 1.2.2.
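A minimal sketch of such a hypertest, for the simple case of independent single-bin tests: the statistic is $-\log$ of the smallest $p$-value (eq. 14), and its own $p$-value is estimated from pseudo-experiments as in paragraph 1.1. The function name is ours, and the independence of the combined tests is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def hypertest_pvalue(p_observed, N, n_pe=50_000):
    """p-value of the hypertest whose statistic is t = -log(min p_i),
    assuming the N combined tests are independent under H0 (so each
    pseudo-experiment yields N independent uniform p-values)."""
    t_obs = -np.log(np.min(p_observed))
    p_pe = rng.random((n_pe, N))          # pseudo-experiments under H0
    t_pe = -np.log(p_pe.min(axis=1))
    return float(np.mean(t_pe >= t_obs))  # fraction at least as discrepant
```

For independent tests this simply reproduces eq. 11 with $\alpha$ set to the smallest observed $p$-value; the construction, however, works for correlated tests too, since any correlations are present in the pseudo-experiments.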
1.3.3 Final remarks on the definition of hypertests
In paragraph 1.3.2 we gave a prescription to correctly consider simultaneously a set of hypothesis tests, by defining a hypertest that takes into account the trials factor, and returns a $p$-value that can be correctly interpreted as a Type-I error probability. The obvious question is which hypothesis tests to include in the set used to define the hypertest.
There is no unique answer. By including more (independent) hypothesis tests in the set, the hypertest gains sensitivity to more features. That can be desirable, especially when we have no prior expectation of how $D$ may differ from $H_0$. The price one pays is that the effective trials factor ($N_{\text{eff}}$) increases, so the power of the test decreases, namely it would take more signal to obtain the same $p$-value from the hypertest.
If we knew somehow that $D$ would differ from $H_0$ in a specific bin, there would be no need to get distracted by looking in any other bin.
A reasonable strategy, which is adopted also by the BumpHunter, is to specify a set of hypothesis tests which cover a large family of similar features. For example, the BumpHunter, as we will see, is a hypertest based on the set of hypothesis tests that look for bumps of various widths at various locations of the spectrum. The interpretation of such a test is rather simple: if the $p$-value is not small enough, we conclude that there is no significant bump of any width, at any location.
One final remark is that a hypertest A may be included in the set of hypothesis tests used by a hypertest B. That doesn’t make B a hyper-hypertest or something similar. Both B and A are hypertests, because their $p$-values are the result of considering simultaneously the $p$-values of a set of hypothesis tests (or hypertests). It is also trivial to show that if a hypertest A contains in its set just one hypothesis test (or hypertest) B, the $p$-value of A is identical to the $p$-value of B, so the distinction between simple hypothesis test and hypertest is lost in the trivial case.
2 The BumpHunter
The BumpHunter scans the data ($D$) using a window of varying width, and keeps the window with the biggest excess of data compared to the background ($H_0$). This test is designed to be sensitive to local excesses of data. The same treatment is given to pseudodata sampled from $H_0$, and the $p$-value is estimated as described in paragraph 1.1.
In the language of paragraphs 1.3.2 and 1.3.3, the BumpHunter is a hypertest that combines hypothesis tests which focus on bumps of various widths at various positions of the spectrum, taking the trials factor into account.
It will become clear that some choices have been made in this implementation of the BumpHunter which could be different. For example, one may use different sideband definitions, or may search for bumps within some width range. As explained in paragraph 1.3.3, such choices are essentially arbitrary. They are made based on what we wish the interpretation of the result to be.
This version of the BumpHunter operates on data that are binned in some a-priori fixed set of bins. In the limit of infinitesimally narrow bins, the arbitrariness of the binning choice is removed. If the bins are not infinitesimally small, then their size limits the narrowest bump that one may be sensitive to. In most applications there is a natural limit to how narrow a bump can be. For example, in [1] the limit reflects the finite detector resolution. Practically, one can have very good performance using bins of finite width. In the case of the Banff Challenge, the information is given that the signal follows a Gaussian distribution with $\sigma = 0.03$, so we define 40 equal bins between 0 and 1, resulting in bin size 0.025.
Given some data $D$ and some background hypothesis $H_0$, the following steps are followed to obtain the observed test statistic ($t_{obs}$) of the BumpHunter:
1. Set the width of the central^{6}^{6}6The “central window” is the window where an excess of data is checked for. The word “central” is used to distinguish that window from its left and right sidebands. window, $W$. In this implementation, where the data are binned, $W$ is an integer which specifies how many consecutive bins to include in the central window. This width is allowed to vary between some values. In [1], where the potential signal is of unknown width, $W$ is allowed to range from 1 to $\lfloor N/2 \rfloor$, where $N$ is the total number of bins from the lowest observed mass to the highest. To address the Banff Challenge [3], where the signal is a Gaussian of known $\sigma$, we constrain $W$ between 3 and 5 bins, which contain roughly 68% to 95% of such a Gaussian signal.
2. Set the width of each sideband. Sidebands are used, optionally, if one wishes to impose quality criteria ensuring that the BumpHunter will focus on excesses surrounded by non-discrepant regions. In [1] such sidebands were used, and their size (in number of bins) was set to $\max(1, \lfloor W/2 \rfloor)$. To address the Banff Challenge, we do not use any sidebands, in the interest of speed, and because there is some risk associated with using sidebands when $W$ is constrained to small values; this risk is illustrated in paragraph 4.3. In the following steps we will describe how sidebands are used, because they constitute part of the BumpHunter algorithm, even though in the Banff Challenge they are not used.
3. Set the position of the central window, which will range from the lowest to the highest observed value^{7}^{7}7Dijet mass in the case of [1], or $x$ in the case of the Banff Challenge.
4. Count the data ($d$) and background ($b$) in the central window. Obviously $d$ is an integer, while $b$ is a real number, representing the expectation value, according to $H_0$, in the central window. Similarly, count the data ($d_l$, $d_r$) and the background ($b_l$, $b_r$) in the left and right sidebands (subscripts “$l$” and “$r$” respectively).
5. In this step, which is at the heart of the BumpHunter, we will make a connection to what was said in paragraph 1.3.2. We will define the test statistic of each one of the hypothesis tests that are combined in the BumpHunter hypertest. Each local hypothesis test examines the presence of a bump at the location where we are currently placing the central window as we scan the spectrum. Each such hypothesis test has its statistic $t$, which has an observed value $t_{obs}$ coming from comparing the data to $H_0$, resulting in a $p$-value. The smallest of these $p$-values will be used in step 8 to define the BumpHunter test statistic, according to paragraph 1.3.2.
Given the six numbers $d$, $b$, $d_l$, $b_l$, $d_r$ and $b_r$, we define the following test statistic for the hypothesis test which focuses on the current window and sidebands:
$$t = \begin{cases} 0, & \text{if } d \le b, \text{ or } p_l \le 10^{-3}, \text{ or } p_r \le 10^{-3} \\ G\!\left(\frac{1}{P(d,b)}\right), & \text{otherwise} \end{cases} \tag{15}$$

In this definition, $G$ can be any positive, monotonically increasing function, such as the logarithm or the identity, and $p_l$ and $p_r$ are the $p$-values of the left and right sideband, defined below. Also,

$$P(d,b) \equiv \sum_{n=d}^{\infty} \frac{b^n}{n!} e^{-b} \tag{16}$$

Ignoring the sidebands is equivalent to using 0 instead of $10^{-3}$ in eq. 15. The definition^{8}^{8}8As an aside, in footnote 3 it was mentioned that paragraph 2 would illustrate an example of a test statistic which doesn’t follow a continuous distribution. Indeed, the test statistic of eq. 15 is discontinuous at 0. Due to the conditions which may set $t$ to 0 in some cases, the PDF of $t$ contains a peak at 0, which could be formulated as a Kronecker $\delta$ multiplied by the probability for $t$ to be 0. of eq. 15 was carefully designed to have the following characteristics, which make it meaningful and practical:
- $t \ge 0$, since $G$ in eq. 15 is a positive function.
- $t = 0$, i.e. the discrepancy is characterized as maximally uninteresting, when the data, in the region where the particular hypothesis test focuses and $t$ is computed, do not meet the following criteria which a bump would be expected to meet: (a) have an excess of data in the central window, namely $d > b$; and (b) have both sidebands consistent with the background. That is where the two conditions $p_l > 10^{-3}$ and $p_r > 10^{-3}$ are employed. Each of $p_l$ and $p_r$ is the $p$-value of a hypothesis test that focuses on just the left or right sideband, and uses a test statistic that increases monotonically with the difference between data and background in that sideband. By requiring $p_l$ and $p_r$ to be greater than $10^{-3}$, we require that $H_0$ can not be excluded, based on event counts in the sidebands, with less than $10^{-3}$ probability of being wrong. The value $10^{-3}$ is arbitrary, and can be set higher or lower to tighten or relax, respectively, the good-sidebands requirement.
- The $p$-value of this hypothesis test is analytically calculable directly from $d_{obs}$ and $b$, without even having to compute $t_{obs}$ or specify $G$! We will soon explain how. This remarkable property allows the BumpHunter statistic to be computed quickly, without needing pseudo-experiments to estimate the $p$-value of each local hypothesis test that it incorporates.
The $p$-value is computed as follows. We have the observed events $d_{obs}$, $d_l$ and $d_r$. If $d_{obs} \le b$, we don’t have an excess, so we know that the observed statistic is $t_{obs} = 0$ according to eq. 15; therefore any pseudo-experiment would have $t \ge t_{obs}$, therefore $p\text{-value} = 1$. The same is true, for the same reason, if $p_l \le 10^{-3}$ or $p_r \le 10^{-3}$. When none of the above happens, $t$ is defined to increase as $d$ increases, since $G$, in eq. 15, is monotonically increasing and $b$ is fixed. So, when $t_{obs} > 0$, we know that the only way to have $t \ge t_{obs}$ is by having $d \ge d_{obs}$, while $d_l$ and $d_r$ remain consistent with $b_l$ and $b_r$. To find the $p$-value, which by definition is $P(t \ge t_{obs})$, we have to compute the probability of these three things happening simultaneously. The conditions on $d_l$ and $d_r$ were designed to be independent of each other and of $d$. This allows us to express the $p$-value as the product of 3 probabilities: $P(d \ge d_{obs})$, $P(p_l > 10^{-3})$, and $P(p_r > 10^{-3})$. The first probability is, by definition, $P(d_{obs}, b)$ of eq. 16. The second and third probabilities are equal^{9}^{9}9This equality is only approximate, due to $d_l$ and $d_r$ being integers. It is, however, a very good approximation. Because $d_l$ takes discrete values, so does $p_l$; the probability $P(p_l > 10^{-3})$ is therefore not exactly 0.999, but the closest value attainable given $b_l$. For large values of $b_l$ the approximation becomes better, because the discreteness of $d_l$ becomes negligible. to $1 - 10^{-3} = 0.999$, because of the theorem of paragraph 1.2.1, and because $p_l$ and $p_r$ are $p$-values. Putting it all together, we have:
$$p\text{-value} = P(d_{obs}, b) \cdot (1-10^{-3})^2 \tag{17}$$

The term $(1-10^{-3})^2$ is very close to 1, but even if it weren’t, it could be ignored, because it is constant for all local hypothesis tests; therefore it affects neither which $p$-value will be the smallest (see step 8), nor the BumpHunter $p$-value.
In summary, we have shown that the $p$-value of eq. 17 depends only on $d_{obs}$ and $b$, and is an analytically calculable quantity, using the well-known $\Gamma$ function and its normalized lower incomplete version, which is tabulated in standard code libraries, like the ROOT TMath class [6]. The useful relationship that allows this computation is:
$$\sum_{n=0}^{d-1} \frac{b^n}{n!} e^{-b} = \frac{\Gamma(d,b)}{\Gamma(d)} \tag{18}$$

from which it follows that:
$$P(d,b) = \sum_{n=d}^{\infty} \frac{b^n}{n!} e^{-b} = 1 - \frac{\Gamma(d,b)}{\Gamma(d)} = \frac{\gamma(d,b)}{\Gamma(d)} \tag{19}$$

where $\Gamma(d,b)$ and $\gamma(d,b)$ are the upper and lower incomplete $\Gamma$ functions, respectively.
6. Shift the central window, and its sidebands, by some number of bins, and repeat step 5, namely compute the $p$-value of the local hypothesis test that focuses on the new location. In principle, the bins could be infinitesimally narrow, and the translation could be in infinitesimally small steps, to include in the BumpHunter every possible bump candidate (or, equivalently, every possible hypothesis test focusing on a local mass range). However, in practice there are computational limitations. Hypothesis tests which focus on roughly the same mass range are highly correlated. By adding more highly correlated tests not much new information is gained, and the effective trials factor doesn’t increase much (see eq. 13), but it takes time to compute all these tests. For this reason, in the implementation of the BumpHunter used in [1] and in the Banff Challenge, we shift the window each time by $\max(1, \lfloor W/2 \rfloor)$ bins. In this way we still consider bump candidates which overlap significantly, but we avoid spending time on almost identical bump candidates.

In this last step, the BumpHunter test statistic is calculated, according to eq. 14:
(20)  t = −ln(p_min)
where p_min is the smallest of all the local p-values found in the previous steps.
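The scan of steps 5 to 8 under this tuning (window widths of 3 to 5 bins, each window shifted by half its width) can be sketched as follows. This is a simplified Python illustration, not the actual implementation, and it omits the sideband criteria; poisson_tail is the Poisson tail probability of eq. 19, summed directly.

```python
import math

def poisson_tail(d, b):
    """P(n >= d | b), summed directly from n = d upward."""
    if d <= 0:
        return 1.0
    term = math.exp(-b + d * math.log(b) - math.lgamma(d + 1))
    total, n = 0.0, d
    while term > 0.0 and (n <= b or term > total * 1e-15):
        total += term
        n += 1
        term *= b / n
    return min(total, 1.0)

def bumphunter_statistic(data, bkg, min_width=3, max_width=5):
    """Scan all windows, keep the smallest local p-value among windows with a
    data excess, and return -ln(p_min); 0 if no window has an excess."""
    p_min = 1.0
    for width in range(min_width, max_width + 1):
        step = max(1, width // 2)      # shift each window by half its width
        for lo in range(0, len(data) - width + 1, step):
            d = sum(data[lo:lo + width])
            b = sum(bkg[lo:lo + width])
            if d > b:                  # only excesses are bump candidates
                p_min = min(p_min, poisson_tail(d, b))
    return -math.log(p_min)
```

With a flat background and no excess anywhere, the statistic is 0; a localized excess drives it up through the negative logarithm of the smallest local p-value.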
2.1 The background and pseudodata
Like in all hypothesis tests (e.g. χ², KS etc.), in the BumpHunter the background is an input. The BumpHunter uses the background, and its p-value depends on it, but it doesn't define it. Depending on how the analyst defines the background, the p-value of the BumpHunter, or of any other hypothesis test, will have a different interpretation.
In particle physics, the background may come from Monte Carlo (MC) simulation, typically representing the Standard Model prediction. Then, everything we have discussed so far applies. The MC-based background is used

to compare to the observed data, thus obtaining the observed BumpHunter statistic,

to generate pseudodata according to the background multiple times,

to obtain the distribution of the BumpHunter statistic, by comparing each pseudodata spectrum to the background.
Then the BumpHunter p-value is estimated, according to paragraph 1.1.
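The three uses above can be sketched as follows (a simplified Python illustration; sample_poisson and estimate_pvalue are our names, and the statistic argument stands for the BumpHunter statistic of step 8, though any test statistic works):

```python
import math, random

def sample_poisson(mean, rng):
    """Knuth's Poisson sampler; adequate for the modest bin contents used here."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def estimate_pvalue(statistic, data, bkg, n_pseudo=1000, seed=1):
    """Fraction of pseudoexperiments drawn from the background whose statistic
    is at least as large as the observed one."""
    rng = random.Random(seed)
    t_obs = statistic(data, bkg)
    n_exceed = sum(
        statistic([sample_poisson(b, rng) for b in bkg], bkg) >= t_obs
        for _ in range(n_pseudo)
    )
    return n_exceed / n_pseudo
```

With a fixed seed the estimate is reproducible; the statistical uncertainty of the estimate is governed by n_pseudo, which motivates the Bayesian stopping rule of paragraph 1.1.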
In some cases, it is well-motivated to formulate the background as a function of the data, instead of using MC. Specifically, in [1] and in the Banff Challenge, the background is not independent of the data; it is obtained by fitting a function to them. In the case of the Banff Challenge we have the information that the background should follow an exponential spectrum:
(21)  b(x) = A e^(−Cx)
In the case of [1], studies showed that there is a more complicated functional form which can fit the Standard Model prediction, but cannot fit a spectrum with a resonance. One can define the background as the result of fitting this functional form to the data. This definition of the null hypothesis may be called the "smooth background hypothesis".
When the background depends on the data, it is necessary to compute it (i.e. by refitting) not only for the actual data, but also for every pseudoexperiment that will be used to estimate the p-value. Otherwise the background is not consistently defined, which means that in the theorem of paragraph 1.2.1 the two hypotheses are not identical, thus the p-value is not interpretable as a Type-I error probability.
2.1.1 Fitting by omitting anomalies
When the background is computed by fitting the data there is the concern that, if a bump actually exists, it will influence the fit. Naturally, the fitted background will try to accommodate part of the signal, even if it doesn't have the flexibility to fully do so. That can obscure the signal, and cause the fit to not describe the data even where they contain no signal. Fig. 1 shows such an example.
An alternative is to define the background as the spectrum obtained by fitting the data after omitting the window which improves the fit, in a predetermined, algorithmic way. The algorithm used in the Banff Challenge is to try the fit after omitting various windows, similar to the way the BumpHunter scans the spectrum (paragraph 2). The windows that are omitted have sizes between 3 and 5 bins, corresponding to the width of potential signal, and they are considered for exclusion only if they contain an excess of data. If, after the omission of some window, the p-value of the fit quality test becomes greater than 0.1, then we consider the fit good enough and we stop looking for other windows to possibly omit from the fit. If the fit is not made better than that by the omission of any window, then we keep the fit which gave the greatest p-value, even if it was less than 0.1. An example of this algorithm in action is shown in Fig. 1, where the window with the bump is automatically excluded, resulting in a much better fit of the rest of the spectrum. The same algorithm, obviously, is used each time we fit pseudodata.
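A minimal Python sketch of this window-omission procedure, with simplifications: the exponential is fit by least squares on the logarithm of the bin contents rather than the chi-square fit of the paper, the "best remaining agreement" is judged by chi-square per bin rather than by the 0.1 p-value threshold, and the requirement that the omitted window contain a data excess is not enforced. All function names are ours.

```python
import math

def fit_exponential(xs, ys):
    """Least-squares line through (x, log y); returns (A, C) for A*exp(-C*x).
    Bins with y == 0 are skipped."""
    pts = [(x, math.log(y)) for x, y in zip(xs, ys) if y > 0]
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.exp((sy - slope * sx) / n), -slope

def fit_omitting_windows(xs, ys, widths=(3, 4, 5)):
    """Refit after omitting each candidate window; keep the omission that
    leaves the best agreement in the rest of the spectrum."""
    def score(A, C, skip):
        terms = []
        for i, (x, y) in enumerate(zip(xs, ys)):
            if i in skip:
                continue
            mu = A * math.exp(-C * x)
            terms.append((y - mu) ** 2 / max(mu, 1e-9))
        return sum(terms) / len(terms)

    candidates = [frozenset()]  # also try omitting nothing
    for w in widths:
        for lo in range(len(xs) - w + 1):
            candidates.append(frozenset(range(lo, lo + w)))
    best = None
    for skip in candidates:
        kept = [i for i in range(len(xs)) if i not in skip]
        A, C = fit_exponential([xs[i] for i in kept], [ys[i] for i in kept])
        s = score(A, C, skip)
        if best is None or s < best[0]:
            best = (s, A, C, skip)
    return best  # (score, A, C, omitted_bins)
```

On a spectrum with an injected bump, the winning omission covers the bump, and the background parameters are then recovered without the bias the bump would otherwise cause.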
The advantage of omitting the most discrepant region is that it pronounces the bump, as one sees in Fig. 1. Also, if the goal of the fit is to estimate the background parameters, e.g. the value of A in eq. 21, then this allows the fit to find the right value without bias caused by the signal. (In the specific case of the Banff Challenge, however, this is not how we estimate the background parameters, because we have the information that the signal follows a Gaussian of known width, so it is better to fit the background of eq. 21 simultaneously with a Gaussian. The primary goal of the BumpHunter is not to estimate parameters, but to test the background hypothesis.)
Besides these advantages, nothing would be wrong about the results of the BumpHunter even if one didn't follow this fit procedure. If we define the background as the result of fitting the whole spectrum, then the BumpHunter (and any other test) returns the right p-value that reflects this definition. If the p-value indicates a significant discrepancy between the data and the background, it is clear what that means and how it is interpreted. In other words, the BumpHunter (like any test) operates with the data and the background it is given, not caring how well-motivated the background is; that is up to the analyst.
3 The Banff Challenge, problem 1
The Banff Challenge [3], Problem 1, offers an opportunity to demonstrate BumpHunter’s performance.
The background is defined as the spectrum obtained by fitting the data with eq. 21, following the algorithm of paragraph 2.1.1. The BumpHunter p-value is estimated using the procedure of sec. 1.1, generating pseudoexperiments until we are sure (in the bayesian sense described in 1.1) that the p-value is smaller or greater than 0.01 with probability 0.999. If the p-value is estimated to be smaller than 0.01 (with probability 0.999), we declare discovery; if the p-value is estimated to be greater than 0.01 (with probability 0.999), then we don't.
Then comes the challenge of estimating the parameter A of the background and the position of the signal (if discovery was declared). We go one step further, and estimate also the amount of signal (the parameter D below). We do all that by fitting to the data the function
(22)  f(x) = A e^(−Cx) + D e^(−(x−E)²/(2σ²)), with the signal width σ known.
This fit has free parameters (A, C, D, E). We use the result of the BumpHunter to aid it; the initial value of E is set to the position where the BumpHunter located the most significant bump.
All data are studied after binning them in 40 equal bins of x between 0 and 1 (bin size 0.025). If the actual A is 10^4, the fit will return roughly 10^4/40 = 250 (a consequence of not using the option 'I' when fitting in ROOT [6]).
We executed the BumpHunter and the subsequent 4-parameter fit on all 20000 distributions handed out with the Challenge. The results are tabulated in a separate, long text file, with the columns:

Dataset number (from 0 to 19999)

Decision: 0 means "the p-value was estimated to be most likely greater than 0.01, thus no discovery is claimed." 1 means "the p-value was estimated to be most likely smaller than 0.01, thus discovery is claimed."

p-value estimate. For example, the string
0.0666667 = 6/90 P(pval>0.01)= 0.999961
condenses the following information: 90 pseudoexperiments were generated; 6 of them had a BumpHunter statistic greater than the BumpHunter statistic observed in the actual data. That means that the most likely value of the p-value is 6/90 ≈ 0.067. According to the bayesian posterior described in paragraph 1.1, the p-value is greater than 0.01 with probability 0.999961 (the actual accuracy of this probability does not extend beyond the third or fourth significant digit). So, it is safely above 0.01, and in this case we don't declare discovery. Let's see another example:
0 = 0/690 P(pval<0.01)= 0.99904.
This string means that 690 pseudoexperiments were generated, and none of them was more discrepant than the actual data, which means that the most likely p-value is 0; the bayesian posterior ensures that the p-value is less than 0.01 with probability 0.99904. In this case we claim discovery.

The next three numbers: the fitted signal position E from eq. 22, followed by the boundaries of its 68% interval.

The next three numbers: same as the previous three, but for the signal amplitude D of eq. 22.

The last three numbers: same as the previous three, but for the background parameter A of eq. 22.
Appendix A includes the first 100 lines of the aforementioned text file.
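The quoted posterior probabilities can be reproduced: with a flat prior, observing n of N pseudoexperiments at or above the observed statistic gives a Beta(n+1, N−n+1) posterior for the p-value (the posterior of paragraph 1.1). A Python sketch (the function name is ours), integrating that density numerically:

```python
import math

def prob_pval_below(threshold, n_exceed, n_pseudo, grid=20000):
    """Posterior probability that the true p-value is below `threshold`,
    given a flat prior and n_exceed of n_pseudo pseudoexperiments at or above
    the observed statistic: integral of the Beta(n_exceed + 1,
    n_pseudo - n_exceed + 1) density over [0, threshold] (trapezoidal rule)."""
    a, b = n_exceed + 1, n_pseudo - n_exceed + 1
    lognorm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    h = threshold / grid
    total = 0.0
    for i in range(grid + 1):
        x = i * h
        if 0.0 < x < 1.0:
            f = math.exp(lognorm + (a - 1) * math.log(x) + (b - 1) * math.log(1.0 - x))
        elif x == 0.0 and a == 1:
            f = math.exp(lognorm)  # the Beta pdf is finite at 0 when a == 1
        else:
            f = 0.0
        total += f if 0 < i < grid else 0.5 * f
    return total * h
```

For the dataset-10 string above (0 = 0/690), the closed form for zero exceedances, 1 − (1 − 0.01)^691, gives about 0.99904, matching the probability quoted in the file.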
3.1 A discovery example
As an example where we claim discovery, we present dataset 10, the first dataset where discovery is claimed. Fig. 2 summarizes the information extracted from this dataset.
For this dataset, we estimate the most likely p-value to be 0, since none of the 690 pseudoexperiments generated had a statistic reaching the observed one. Assuming a flat prior in the p-value, we infer that the p-value is less than 0.01 with probability about 0.99904.
From the fit of eq. 22, the signal position is estimated at E = 0.664, with 68% interval [0.645, 0.682]. Recall that each bin had width 0.025. The fitted background parameter, A = 242, is comparable to what is known about A, namely that the fit should return a value scattered around 250 (paragraph 3). Similarly, the fitted amount of signal is comparable to the number of events one can identify as signal in Fig. 2.
3.2 A nondiscovery example
As an example where we do not claim discovery, we present dataset 0. Fig. 3 summarizes the information extracted from this dataset.
For this dataset, we estimate the most likely p-value to be 0.9 = 9/10. Of course, this number is not so useful, because it reflects only 10 pseudoexperiments. The useful inference from those 10 pseudoexperiments, though, is that the p-value is greater than 0.01 with probability indistinguishably close to 100%.
3.3 Summary of datasets
Of the 20000 datasets, we found 1819 where the most likely p-value was estimated to be smaller than 0.01. Of the 20000 datasets, there are 107 where it was decided to stop producing pseudoexperiments, because we ran out of time. Of those 107 datasets, 57 have an estimated p-value smaller than 0.01, and 50 have it greater. The reason it took so long to conclude is that in these datasets the p-value is very close to 0.01, so many trials are required to discern, with 0.999 credibility, on which side of 0.01 it lies. Of those 107 datasets where 0.999 credibility was not attained, 64 concluded with credibility less than 0.99, 38 with credibility less than 0.9, and just 2 with credibility less than 0.5. Indicatively, these 2 datasets estimated the most likely p-value to be almost exactly 0.01.
Fig. 6 summarizes the best-fit values of E, D and A in just those 1711 datasets where a discovery was claimed at the level of 0.01 Type-I error probability.
4 Sensitivity
4.1 The Banff Challenge sensitivity tests
The sensitivity of the BumpHunter is measured in three signal cases, as required by the Banff Challenge. "Sensitivity" here means the probability of obtaining a p-value smaller than 0.01 in the presence of a specific amount and kind of signal. In all signal cases, the signal is injected in the nominal background distribution, which comes from integrating the background function in each bin. In all cases, the signal is given by a Gaussian function of known width.
In the first test, the signal is a Gaussian of mean 0.1 with amplitude D = 1010. Integrating the signal function from 0 to 1, we have a total of 75.9 events. Out of 300 pseudoexperiments, generated from the distribution in Fig. 7, the BumpHunter p-value was less than 0.01 in 64 pseudoexperiments. That implies a discovery probability of about 21.3%.
The results of the second and third test are summarized in Table 1.
Fig. 7 summarizes the expected distributions in the three sensitivity tests, and shows an example of pseudodata from each expected spectrum.
4.2 Comparison to the case of known signal shape and position
For the sake of comparison, what would our sensitivity be if we knew the location of the signal and its exact shape, and only ignored its amount (which is proportional to the parameter D)? In that case, obviously, the BumpHunter would be unnecessary; why look in many places, and pay the penalty of the trials factor, when we know exactly where the signal is?
In that ideal case, we could compare the null hypothesis to the hypothesis which includes the specific signal and best fits the data. We could define as test statistic the “log likelihood ratio”:
(23)  t = −ln [ L(D = 0) / L(D = D̂) ]
where L(D) is the probability of observing the data, bin by bin, assuming the given signal shape with amplitude D, and D̂ is the value of D which maximizes this likelihood.
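The statistic of eq. 23 can be sketched in Python as follows (the function names are ours, and the maximization over D is done here with a coarse grid scan rather than a proper minimizer):

```python
import math

def loglike(data, bkg, sig, D):
    """Binned Poisson log-likelihood for the expectation bkg + D*sig."""
    ll = 0.0
    for d, b, s in zip(data, bkg, sig):
        mu = b + D * s
        ll += d * math.log(mu) - mu - math.lgamma(d + 1)
    return ll

def llr_statistic(data, bkg, sig, d_max=200.0, n_grid=2000):
    """-ln[ L(D=0) / L(D=D_hat) ], with D_hat >= 0 found by a grid scan."""
    ll0 = loglike(data, bkg, sig, 0.0)
    best = ll0
    for i in range(1, n_grid + 1):
        best = max(best, loglike(data, bkg, sig, i * d_max / n_grid))
    return best - ll0
```

When the data match the background the statistic is 0 (the likelihood is maximized at D = 0); an excess on top of the signal template makes it positive.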
Running this hypothesis test, we found that in the first test a p-value smaller than 0.01 was obtained in 175 of 300 pseudoexperiments (probability about 58%). In the second test the same success rate was 173/300, again about 58%. In the third test, the result was 112/300, about 37%. These numbers are added to Table 1 as an extra column.
Comparing these success rates to the ones mentioned in paragraph 4.1, one confirms that the BumpHunter is less sensitive than a test to which the location and shape of the signal have been disclosed. This lower sensitivity is a consequence of the greater trials factor in the BumpHunter, as expected from the discussion in paragraph 1.3.2. Nevertheless, in research one doesn't know in advance what one is going to discover, unless confirmation is sought instead of discovery. Between the less sensitive BumpHunter, which covers a large range of possibilities, and an arbitrary hypothesis test that is sensitive to just one arbitrary signal and insensitive to almost everything else, the BumpHunter seems to be the better choice.
Signal mean   Amplitude D   Total signal events   BumpHunter        Likelihood ratio test
0.1           1010          75.9                  64/300 ≈ 21.3%    175/300 ≈ 58%
0.5           137           10.3                  87/300 = 29.0%    173/300 ≈ 58%
0.9           18            1.35                  32/300 ≈ 10.7%    112/300 ≈ 37%
4.3 Sensitivity of different tunings, without refitting
In this section we will compare the sensitivity of the BumpHunter when it is tuned in the following ways:

Not using the sideband criteria, and trying all window sizes, as described in paragraph 2.

Not using the sideband criteria, and constraining the window size between 3 and 5 bins. This is the tuning used to address the Banff Challenge, as described in paragraph 2.

Using the sideband criteria, and constraining the window size between 3 and 5 bins.
In this paragraph, the background is not obtained by refitting eq. 21 to the data (or pseudodata); it is always the same, fixed spectrum, corresponding to the nominal parameters of eq. 21.
The sensitivity of the various BumpHunter tunings is compared to that of the targeted test of paragraph 4.2. The sensitivity of Pearson's traditional χ² test is also shown, where the test statistic is that of eq. 1.
Fig. 8 shows the probability of observing a p-value smaller than 0.01 in three cases of signal, as a function of the expected number of signal events. The three signal cases correspond to Gaussians with means 0.1, 0.5 and 0.9, according to the Banff sensitivity tests discussed in paragraphs 4.1 and 4.2.
In Fig. 8 we see, as expected, that the BumpHunter is always less sensitive than the targeted test. It is much more sensitive, though, than a simple χ² test, except when the signal is at 0.9.
In Fig. 8 it may be surprising that the BumpHunter sensitivity does not reach 100% asymptotically when the sideband criteria are taken into account and the width of the central window is constrained. This is the risk mentioned in paragraph 2, step 2. The explanation is simple. When the signal increases a lot, and the central window is not allowed to become wider, the sidebands start accumulating so many signal events that they become discrepant, so the bump candidate is often disqualified. We see that this doesn't happen when the sidebands are ignored, or when the size of the central window can vary freely.
One may compare the sensitivity of the BumpHunter without sidebands and with constrained width in Fig. 8 to Table 1. In Fig. 8, for the same amounts of injected signal shown in the table (i.e. 75.9, 10.3 and 1.35 events), the sensitivity appears higher. The difference is that in Fig. 8 the background is known and fixed, rather than obtained by fitting as in Table 1.
It is worth remembering that, for any hypothesis test, sensitivity depends on the kind of signal. The conclusions of this paragraph may not apply to different signal shapes.
4.4 Locating the right interval
Here it will be demonstrated how the BumpHunter locates the position of injected signal. We will refer to two of the BumpHunter tunings of paragraph 4.3: tuning 1 (no sidebands and unconstrained width) and tuning 2 (no sidebands and width constrained between 3 and 5 bins). The injected signal is a Gaussian with mean 0.5; the results are similar at means 0.1 and 0.9. Various amounts of signal will be tried, to show how the ability to locate the right interval progresses.
Let's first examine which intervals are located as most discrepant when there is no signal injected on top of the background of the Banff Challenge. Fig. 9 shows two examples, one with BumpHunter tuning 1 and one with tuning 2. Fig. 9 shows that higher values of x are less likely to be included in the most discrepant interval. The reason has to do with expecting too few events at large x (see Fig. 7). To demonstrate that, Fig. 10 shows the same as Fig. 9, but for a background function chosen so that over 100 events are expected even in the highest bin. Consequently, Fig. 10 shows more constant probabilities, indicating that the most interesting window is uniformly distributed in the range. In Fig. 10 one can still see a reduction of probability close to x = 0 and 1. These edge effects appear because bins that are not close to the edges have more possibilities to be included in the most discrepant interval; they may be in its middle, or near its end. Marginal bins, however, have fewer possibilities; for the very last bin, only one way exists: the most discrepant interval has to reach the edge of the range.
Fig. 11 shows the same as Fig. 9, except that one signal event is injected (on average) on top of the background. According to Fig. 8, the sensitivity to 1 signal event is very low. However, in Fig. 11 one sees that this signal is enough to give the right bins a much greater probability to be included in the most discrepant interval. Fig. 12 shows the same, but with 10 signal events injected on average, which makes the effects more prominent. In Fig. 12 one sees that the intervals tend to have approximately the width of the injected signal. Fig. 13 shows the same, but with 40 signal events injected on average, for which the BumpHunter has essentially 100% probability to return a p-value smaller than 0.01, according to Fig. 8. In this case all intervals are located at the right position, and have the right width, up to the finite bin size, which discretizes the width of the intervals returned by the BumpHunter.
5 Generalizing the BumpHunter concept
The BumpHunter is not the only hypertest one could use, as explained in paragraph 1.3.3. Understanding the logic behind the BumpHunter allows one to think of generalizations of this idea. One such generalization is the TailHunter (paragraph 5.1). Another is a hypertest that combines multiple distributions (paragraph 5.2). Another hypertest, very similar to the BumpHunter, was developed previously in the H1 experiment [7], where data deficits were also considered as potential signs of new physics, and no sideband criteria were used. The H1 hypertest (this is not the terminology used by H1, but, looking at it from the perspective of this work, it was indeed a hypertest, taking the trials factor into account correctly), which obviously predates this work, can be viewed a posteriori as a particular tuning of the BumpHunter.
5.1 TailHunter
A simple hypertest, analogous to the BumpHunter, is the TailHunter, which is used in [1], and is also similar to the Sleuth algorithm [8, 9] used in [10, 2]. (Besides small technical differences, the biggest difference is that Sleuth combined many final states, and didn't use fixed bins. Regarding the combination of many final states, see paragraph 5.2.)
One can think of the TailHunter as a BumpHunter without sidebands, where the right edge of every window is always at the last bin that contains data. The only requirement that remains in the definition of the test statistic (eq. 15) is to have an excess of data with respect to the background. All tails are examined by local hypothesis tests, the smallest local p-value is used to define the statistic of the TailHunter hypertest, and the p-value of the TailHunter is found as explained in paragraph 1.1.
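The TailHunter statistic can be sketched as follows (function names ours; the Poisson tail is summed directly):

```python
import math

def poisson_tail(d, b):
    """P(n >= d | b), summed directly from n = d upward."""
    if d <= 0:
        return 1.0
    term = math.exp(-b + d * math.log(b) - math.lgamma(d + 1))
    total, n = 0.0, d
    while term > 0.0 and (n <= b or term > total * 1e-15):
        total += term
        n += 1
        term *= b / n
    return min(total, 1.0)

def tailhunter_statistic(data, bkg):
    """Examine every upper tail ending at the last nonempty bin; return
    -ln of the smallest Poisson tail probability among tails with an excess."""
    last = max(i for i, d in enumerate(data) if d > 0)
    p_min = 1.0
    for lo in range(last + 1):
        d = sum(data[lo:last + 1])
        b = sum(bkg[lo:last + 1])
        if d > b:  # only excesses qualify
            p_min = min(p_min, poisson_tail(d, b))
    return -math.log(p_min)
```

The p-value of the TailHunter would then be estimated with pseudoexperiments, exactly as for the BumpHunter.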
Fig. 14 presents an example of a spectrum where the TailHunter finds a p-value less than 0.01 with credibility greater than 0.999. The spectrum is created by adding to dataset 0 of the Banff Challenge signal events that follow a uniform distribution between 0 and 1, with 40 signal events expected in the whole interval. The observed TailHunter statistic in this example is 17.8, far beyond the values obtained in pseudodata, also shown in Fig. 14.
5.2 Combining spectra
Another hypertest (let's refer to it as mBH, for "multi-BumpHunter") allows two or more spectra to be scanned simultaneously. In some particle physics analyses this is useful, because an exotic particle may decay in more than one way (e.g. to electron pairs and to muon pairs), so the signal may populate two or more statistically independent distributions. When we search for bumps in the mass spectra of the decay products, all spectra should indicate an excess at roughly the same mass, namely the mass of the new particle. The width of the signal, though, is not expected to be the same in all distributions, since different decay products may be measured with different experimental resolution.
One way to extend the BumpHunter into the mBH is the following: the BumpHunter statistic is first computed independently in each spectrum, and then the mBH statistic is defined as the sum of all BumpHunter statistics (remember that the BumpHunter statistic is the negative logarithm of a p-value, so the sum of many BumpHunter statistics is the negative logarithm of the product of the corresponding p-values; adding BumpHunter statistics is equivalent to multiplying p-values), with the extra requirement that all spectra must have their most interesting intervals within some distance from each other. The exact distance criterion can be adjusted. If bumps are found at different masses, then we characterize the mBH's finding as maximally uninteresting, by setting the mBH statistic to 0. Finally, the p-value of the mBH is estimated as explained in paragraph 1.1.
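A sketch of this combination (the function names, and the simple "distance between bump centers" implementation of the adjustable criterion, are ours):

```python
import math

def poisson_tail(d, b):
    """P(n >= d | b), summed directly from n = d upward."""
    if d <= 0:
        return 1.0
    term = math.exp(-b + d * math.log(b) - math.lgamma(d + 1))
    total, n = 0.0, d
    while term > 0.0 and (n <= b or term > total * 1e-15):
        total += term
        n += 1
        term *= b / n
    return min(total, 1.0)

def best_bump(data, bkg, widths=(3, 4, 5)):
    """Return (-ln(p_min), center) of the most significant excess, or (0, None)."""
    p_min, center = 1.0, None
    for w in widths:
        for lo in range(0, len(data) - w + 1, max(1, w // 2)):
            d, b = sum(data[lo:lo + w]), sum(bkg[lo:lo + w])
            if d > b:
                p = poisson_tail(d, b)
                if p < p_min:
                    p_min, center = p, lo + w / 2.0
    return -math.log(p_min), center

def mbh_statistic(spectra, max_distance):
    """Sum of per-spectrum BumpHunter statistics if all bump centers agree
    within max_distance bins; 0 (maximally uninteresting) otherwise."""
    results = [best_bump(d, b) for d, b in spectra]
    centers = [c for _, c in results]
    if None in centers or max(centers) - min(centers) > max_distance:
        return 0.0
    return sum(t for t, _ in results)
```

Summing the statistics multiplies the per-spectrum p-values, so aligned excesses reinforce each other, while misaligned ones are discarded.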
The mBH is highly sensitive to signals that appear simultaneously in two (or more) spectra, because all signal significances are combined at the step where the BumpHunter statistics are summed. Obviously, the mBH described so far makes a strong assumption: that the signal appears simultaneously in all examined spectra. If this is indeed a characteristic of the signal, then the mBH is more sensitive to it; otherwise it is not a well-motivated test. As explained in paragraph 1.3.3, there is no universally best hypertest.
If one relaxes the extra requirement which compares the interval locations in different spectra, and uses as mBH statistic the biggest BumpHunter statistic instead of their sum, then the mBH naturally reduces to the approach taken in [10, 2] and [7] to search in multiple spectra without making strong assumptions. There, each spectrum is examined independently, without checking for patterns across spectra, and without any attempt to combine the significance of the findings in different spectra. The smallest p-value from all spectra is noted (this corresponds to defining the mBH statistic as the maximum BumpHunter statistic found across the examined spectra), and the probability is estimated of seeing a p-value as small as that, or smaller, in pseudodata that follow the background in all distributions (and this corresponds to finding the p-value of the mBH).
6 Conclusion
After an introduction to hypothesis testing and the meaning of the p-value, the issue of the trials factor was illustrated, and a method to deal with it was proposed, through the introduction of hypothesis hypertests. One such hypertest is the BumpHunter, inspired by searches for exotic phenomena in high energy physics.
The BumpHunter algorithm was presented, and its performance was demonstrated on Problem 1 of the Banff Challenge [3].
Besides documenting the BumpHunter (and TailHunter) algorithm in detail, the author is open to collaborating with those who wish to use his code. Hopefully, it will soon be incorporated in a standard library, like RooStats [11].
I wish to thank Pekka Sinervo, Pierre Savard, Tom Junk, and Bruce Knuteson, for our fruitful discussions.
References
 [1] ATLAS Collaboration, "Search for New Particles in Two-Jet Final States in 7 TeV Proton-Proton Collisions with the ATLAS Detector at the LHC," Physical Review Letters 105 no. 16, (Oct., 2010) 161801, arXiv:1008.2461 [hep-ex].
 [2] CDF Collaboration, T. Aaltonen et al., "Global Search for New Physics with 2.0/fb at CDF," Phys. Rev. D79 (2009) 011101, arXiv:0809.3781 [hep-ex].
 [3] W. Fisher, T. Junk, J. Linnemann, R. Lockhart, and L. Lyons, "Banff Challenge 2a Problems – Statistical Issues Relevant to Significance of Discovery Claims." http://www-cdf.fnal.gov/trj/bc2probs.pdf.
 [4] M. R. Whalley and L. Lyons, eds., "Advanced statistical techniques in particle physics," Proceedings, Conference, Durham, UK, March 18-22 (2002).
 [5] E. Gross and O. Vitells, "Trial factors for the look elsewhere effect in high energy physics," European Physical Journal C 70 (Nov., 2010) 525-530, arXiv:1005.1891 [physics.data-an].
 [6] R. Brun and F. Rademakers, "ROOT: An object oriented data analysis framework," Nucl. Instrum. Meth. A389 (1997) 81-86.
 [7] H1 Collaboration, A. Aktas et al., "A general search for new phenomena in e p scattering at HERA," Phys. Lett. B602 (2004) 14-30, arXiv:hep-ex/0408044.
 [8] D0 Collaboration, B. Knuteson, "Sleuth: A quasi-model-independent search strategy for new physics," arXiv:hep-ex/0105027.
 [9] CDF Collaboration, G. Choudalakis, "Sleuth at CDF, a quasi-model-independent search for new electroweak scale physics," arXiv:0710.2378 [hep-ex].
 [10] CDF Collaboration, T. Aaltonen et al., "Model-Independent and Quasi-Model-Independent Search for New Physics at CDF," Phys. Rev. D78 (2008) 012002, arXiv:0712.1311 [hep-ex].
 [11] L. Moneta, K. Belasco, K. Cranmer, A. Lazzaro, D. Piparo, G. Schott, W. Verkerke, and M. Wolf, "The RooStats Project," arXiv:1009.1003 [physics.data-an].
Appendix A First 100 lines from the Banff Challenge, problem 1
0Ψ0Ψ0.9 = 9/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 1Ψ0Ψ0.9 = 9/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 2Ψ0Ψ0.1 = 3/30Ψ P(pval>0.01)= 0.999746Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 3Ψ0Ψ1 = 10/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 4Ψ0Ψ0.8 = 8/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 5Ψ0Ψ0.1 = 3/30Ψ P(pval>0.01)= 0.999746Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 6Ψ0Ψ0.0666667 = 4/60Ψ P(pval>0.01)= 0.999626Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 7Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 8Ψ0Ψ0.0666667 = 6/90Ψ P(pval>0.01)= 0.999961Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 9Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 10Ψ1Ψ0 = 0/690ΨP(pval<0.01)= 0.99904Ψ0.663528Ψ0.645274Ψ0.681782Ψ0.128468Ψ0.061079Ψ0.195858Ψ242.076Ψ230.556Ψ253.595 11Ψ0Ψ0.166667 = 5/30Ψ P(pval>0.01)= 0.999999Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 12Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 13Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 14Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 15Ψ0Ψ0.25 = 5/20Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 16Ψ0Ψ0.2 = 4/20Ψ P(pval>0.01)= 0.999998Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 17Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 18Ψ0Ψ1 = 10/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 19Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 20Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 21Ψ0Ψ0.8 = 8/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 22Ψ1Ψ0.00431373 = 11/2550ΨP(pval<0.01)= 0.999027Ψ0.0907464Ψ0.0800601Ψ0.101433Ψ2.5333Ψ1.78409Ψ3.28251Ψ236.094Ψ221.044Ψ251.144 23Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 24Ψ0Ψ0.1 = 3/30Ψ P(pval>0.01)= 0.999746Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 25Ψ1Ψ0.00357143 = 7/1960ΨP(pval<0.01)= 0.99901Ψ0.497455Ψ0.488462Ψ0.506448Ψ0.392582Ψ0.266279Ψ0.518885Ψ267.207Ψ255.062Ψ279.351 26Ψ0Ψ0.0136605 = 103/7540Ψ P(pval>0.01)= 0.999016Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 27Ψ0Ψ0.0165385 = 43/2600Ψ P(pval>0.01)= 0.999245Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 28Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 29Ψ0Ψ1 = 10/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 30Ψ0Ψ0.9 = 9/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 31Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 32Ψ0Ψ0.0428571 = 6/140Ψ P(pval>0.01)= 
0.999411Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 33Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 34Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 35Ψ1Ψ0 = 0/690ΨP(pval<0.01)= 0.99904Ψ0.391657Ψ0.385773Ψ0.397541Ψ1.0584Ψ0.834799Ψ1.282Ψ297.514Ψ284.591Ψ310.436 36Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 37Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 38Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 39Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 40Ψ0Ψ0.9 = 9/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 41Ψ1Ψ0.00855856 = 266/31080ΨP(pval<0.01)= 0.999893Ψ0.543414Ψ0.52964Ψ0.557188Ψ0.194656Ψ0.105582Ψ0.283731Ψ220.339Ψ209.713Ψ230.964 42Ψ1Ψ0 = 0/690ΨP(pval<0.01)= 0.99904Ψ0.143887Ψ0.134607Ψ0.153166Ψ2.21724Ψ1.70884Ψ2.72564Ψ287.405Ψ273.936Ψ300.875 43Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 44Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 45Ψ0Ψ0.1 = 3/30Ψ P(pval>0.01)= 0.999746Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 46Ψ0Ψ0.0177778 = 32/1800Ψ P(pval>0.01)= 0.999091Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 47Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 48Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 49Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 50Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 51Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 52Ψ0Ψ0.2 = 6/30Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 53Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 54Ψ0Ψ1 = 10/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 55Ψ0Ψ0.133333 = 4/30Ψ P(pval>0.01)= 0.999986Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 56Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 57Ψ0Ψ0.05 = 5/100Ψ P(pval>0.01)= 0.999437Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 58Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 59Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 60Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 61Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 62Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 63Ψ0Ψ0.15 = 3/20Ψ P(pval>0.01)= 0.999948Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 64Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 65Ψ0Ψ0.3 = 3/10Ψ 
P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 66Ψ0Ψ0.0368421 = 7/190Ψ P(pval>0.01)= 0.999247Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 67Ψ0Ψ0.25 = 5/20Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 68Ψ0Ψ1 = 10/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 69Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 70Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 71Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 72Ψ0Ψ0.0571429 = 4/70Ψ P(pval>0.01)= 0.999247Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 73Ψ1Ψ0.00431373 = 11/2550ΨP(pval<0.01)= 0.999027Ψ0.507966Ψ0.496324Ψ0.519609Ψ0.391159Ψ0.262968Ψ0.51935Ψ275.858Ψ263.193Ψ288.522 74Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 75Ψ0Ψ0.1 = 4/40Ψ P(pval>0.01)= 0.999944Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 76Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 77Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 78Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 79Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 80Ψ0Ψ0.4 = 4/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 81Ψ1Ψ0.00230769 = 3/1300ΨP(pval<0.01)= 0.99902Ψ0.508847Ψ0.495754Ψ0.521941Ψ0.24085Ψ0.135517Ψ0.346183Ψ251.84Ψ240.059Ψ263.621 82Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 83Ψ0Ψ0.8 = 8/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 84Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 85Ψ0Ψ0.2 = 4/20Ψ P(pval>0.01)= 0.999998Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 86Ψ0Ψ0.3 = 3/10Ψ P(pval>0.01)= 0.999997Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 87Ψ0Ψ0.2 = 2/10Ψ P(pval>0.01)= 0.999845Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 88Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 89Ψ1Ψ0 = 0/690ΨP(pval<0.01)= 0.99904Ψ0.961137Ψ0.885444Ψ1.03683Ψ6.06533Ψ670.853Ψ682.983Ψ245.924Ψ234.977Ψ256.871 90Ψ0Ψ0.166667 = 5/30Ψ P(pval>0.01)= 0.999999Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 91Ψ0Ψ0.8 = 8/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 92Ψ0Ψ0.075 = 3/40Ψ P(pval>0.01)= 0.999246Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 93Ψ0Ψ0.0473684 = 9/190Ψ P(pval>0.01)= 0.999973Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 94Ψ0Ψ0.7 = 7/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 95Ψ0Ψ0.5 = 5/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 96Ψ0Ψ0.6 = 6/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 97Ψ0Ψ0.0333333 = 9/270Ψ P(pval>0.01)= 
0.999529Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 98Ψ0Ψ0.0571429 = 4/70Ψ P(pval>0.01)= 0.999247Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0 99Ψ0Ψ0.9 = 9/10Ψ P(pval>0.01)= 1Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0Ψ0