Hypothesis testing can be done without the use of probability. First we sort the data by the influence statistic $\mathrm{inf}_i$ and repeatedly remove the most influential point from either the highest or lowest extremity of the data until the effect size $m = 0$; the minimum percentage of data removed is $q\%$.

$$\mathrm{inf}_i = m_U - m_{U-i} \qquad (1)$$

In equation (1), $\mathrm{inf}_i$ is the influence of point $i$ in calculating the effect size $m$ (for example, the slope of the regression line $y = mx + b$). $m_U$ is the effect size calculated for all points in the dataset, where $U$ denotes the entire dataset, and $m_{U-i}$ is the effect size calculated without the $i$-th data point.
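Equation (1) is a leave-one-out computation; a minimal sketch in Python, assuming the effect size $m$ is the slope of a least-squares line fit (the names `slope` and `influences` are illustrative, not from the source):

```python
import numpy as np

def slope(x, y):
    """Effect size m: slope of the least-squares line y = m*x + b."""
    return np.polyfit(x, y, 1)[0]

def influences(x, y):
    """inf_i = m_U - m_{U-i}: change in the effect size when point i is removed."""
    m_U = slope(x, y)
    n = len(x)
    inf = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # leave point i out
        inf[i] = m_U - slope(x[mask], y[mask])
    return inf
```

A point whose removal changes the slope the most has the largest $|\mathrm{inf}_i|$; for a dataset with a single outlier, that outlier dominates the influence vector.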

The minimum percent of data to exclude so that the effect size reaches or passes zero is the $q\%$ of data. Note that determining $q\%$ is rather cumbersome, because the most influential data point in a given direction has to be identified and deleted one at a time. Potentially we have to recalculate $\mathrm{inf}_i$ up to $n/2$ times (where $n$ is the sample size) to drive the effect size to the null hypothesis in the fastest way. This way we are sure to obtain the minimum percent of data to exclude to reach a null effect.

Because data points can contribute to the effect size unequally, it would be inaccurate to report only the minimum $q\%$ of data to be excluded so that the effect size is zero.
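The greedy one-at-a-time deletion described above can be sketched as follows, again assuming a slope effect size (`q_percent` is an illustrative name, not from the source):

```python
import numpy as np

def q_percent(x, y, effect=lambda x, y: np.polyfit(x, y, 1)[0]):
    """Greedily delete the single most influential point (recomputing the
    leave-one-out effect sizes after every deletion) until the effect size
    reaches or passes zero; return the minimum percent of data excluded, q%."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    removed = 0
    m = effect(x, y)
    sign = np.sign(m)
    while sign != 0 and np.sign(m) == sign and len(x) > 2:
        # effect size with each remaining point left out
        loo = np.array([effect(np.delete(x, i), np.delete(y, i))
                        for i in range(len(x))])
        # delete the point whose removal moves m furthest toward zero
        i = np.argmin(sign * loo)
        x, y = np.delete(x, i), np.delete(y, i)
        removed += 1
        m = effect(x, y)
    return 100.0 * removed / n
```

Each pass recomputes all remaining leave-one-out effects, which is what makes the procedure cumbersome: in the worst case of the source's estimate, up to $n/2$ recalculation rounds.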

Hence we have the following statistic, based on deriving an interval estimate by excluding effect. The influences are summed separately over the points that pull the effect size in each direction (for $i$ where $\mathrm{inf}_i > 0$, and for $i$ where $\mathrm{inf}_i < 0$). The upper and lower confidence intervals are then obtained by excluding 30% of the effect from each side. Therefore the Q-value, or percent of effect to be excluded, is derived from these sums, with the case chosen according to the sign of the overall effect (if $m_U > 0$, and if $m_U < 0$).

A large Q-value indicates a real result, not one due to chance, with a high signal-to-noise ratio. Q-values are real numbers ranging from zero to positive infinity.
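The Q-value derivation is only outlined above; one plausible reading, in which Q is the overall effect divided by the total counter-influence, could be sketched as below. The formula here is an assumption for illustration, chosen because it behaves as described (a signal-to-noise ratio ranging from zero to positive infinity), not a definition confirmed by the source:

```python
import numpy as np

def q_value(inf, m_U):
    """Hypothetical Q-value: signal-to-noise ratio of the effect.

    Assumes Q = |m_U| / sum of |inf_i| over points whose influence opposes
    the overall effect (an illustrative reading; the source gives the
    formula only in outline). Ranges from 0 to +infinity.
    """
    inf = np.asarray(inf, float)
    opposing = inf[np.sign(inf) == -np.sign(m_U)]   # points pulling m toward zero
    noise = np.abs(opposing).sum()
    return np.inf if noise == 0 else abs(m_U) / noise
```

Under this reading, a dataset with no opposing influences yields an infinite Q (no noise at all), while heavy counter-influence drives Q toward zero.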