**Statistics Without Probability – A New Statistical Paradigm**


**Abstract:**

Statistics Without Probability subscribes to the paradigm that each sample of patients is unique, forming its own population. Statistics Without Probability arises from the inductive argument of analogy: if a patient is sufficiently similar to the patients in the sample on whom the study was done, then that patient will also receive the benefit that the sample received.

Statistics Without Probability discards the need for probability distributions, standard errors, confidence intervals and P values, and does not assume the axioms of probability.

Point estimation is based on least squares. Two methods each are proposed for hypothesis testing and interval estimation, based on the influence statistic: the change in the effect size after excluding a data point.
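The influence statistic described above can be sketched in a few lines. This is a minimal illustration, assuming the effect size is a simple sample mean (the paper's methods may apply it to other estimators such as least-squares coefficients); the function name and data are hypothetical.

```python
def influence_statistics(data):
    """Leave-one-out influence: the change in the effect size (here, the
    sample mean) when each data point in turn is excluded."""
    n = len(data)
    full_mean = sum(data) / n
    influences = []
    for i in range(n):
        loo = data[:i] + data[i + 1:]          # exclude point i
        loo_mean = sum(loo) / (n - 1)          # effect size without point i
        influences.append(loo_mean - full_mean)
    return influences

# Illustrative sample with one clearly atypical observation.
sample = [4.1, 3.8, 4.5, 3.9, 9.0]
infl = influence_statistics(sample)
# The point with the largest |influence| shifts the effect size the most
# when removed.
most_influential = max(range(len(sample)), key=lambda i: abs(infl[i]))
```

For the mean, these influences sum to zero by construction, so large individual values directly flag observations that dominate the estimate.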

We show that interval estimates and hypothesis-testing values do not contract with increasing sample size, as they do in frequentist estimation, thereby overcoming the Jeffreys-Lindley paradox. We also show that interval estimates contract with a decreasing noise-to-signal ratio, as they should.

SWoP can also adjust for confounders via the Corrected Treatment Effect (CTE) and develop and assess the accuracy of prediction models via the Standardised Mean Residual (SMR).

**Keywords:** invariant estimators, new statistical paradigm

**Introduction:**

Recall from the philosophy of statistical science that, from the frequentist perspective, a sample is drawn from a population and a patient is a member of that population; this is the argument of *generalisability* and *statistical syllogism*. From a Statistics Without Probability (SWoP) perspective, a patient is sufficiently similar to the sample; this is the argument of *analogy*. Both principles are present in inductive philosophy. Neither is more powerful, neither is more concrete, and both speak of likelihoods and of increased or decreased gradients of effect sizes. Neither generalisability (with statistical syllogism) nor analogy is a deductive argument in which outcomes are fixed and certain.

The inductive argument from analogy is as follows. A sample of patients shares some inclusion and exclusion criteria, or some other set of properties that define the sample: for example, those over 55 years of age who do not smoke and do not have diabetes. These features are usually true of every member of the sample. A drug, say for heart disease, is tested on this sample via a randomised controlled trial and is found to be effective in preventing heart attacks. Then another individual with the same properties (over 55, not diabetic, not a smoker), alive shortly after that trial, would benefit from that drug to prevent heart attacks. This is what happens in clinical medicine: if an intervention works for patient group X and my patient is similar to patient group X, then the intervention should work for my patient.

One can argue by analogy that since patient "X" is sufficiently similar to the patients in the sample on whom the RCT was done, patient "X" will also receive the benefit that the sample received.

So why should we use SWoP theory over frequentist theory? With SWoP theory we discard the need for probability distributions, standard errors, confidence intervals and P values. We do not even need the axioms of probability for SWoP to work, nor do we need to assume that any event has an a priori probability. SWoP theory has its own methods of point estimation, interval estimation and hypothesis testing.

This paradigm removes the need to know probability distributions, standard deviations and standard errors in statistics.

This also means we do not have to worry about the independence of errors when calculating effects for longitudinal, correlated, multilevel and clustered data, since this paradigm uses no P values, confidence intervals or standard errors.

A sample size determination method is also discussed, based on assessing the fluctuations of the point estimate and its associated statistics as the sample size increases, using real data or simulations.
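The sample-size idea above can be sketched as follows: grow the sample in steps and watch how much the point estimate still fluctuates, stopping once recent fluctuations fall below a tolerance. This is a hypothetical illustration; the function name, step size and tolerance are assumptions, not values from the text.

```python
import random

def stable_sample_size(draw, start=10, step=10, max_n=10_000, tol=0.01):
    """Grow the sample via draw() and return the sample size at which the
    running point estimate (here, the mean) stops fluctuating beyond tol."""
    data = [draw() for _ in range(start)]
    prev = sum(data) / len(data)
    while len(data) < max_n:
        data.extend(draw() for _ in range(step))
        curr = sum(data) / len(data)
        if abs(curr - prev) < tol:   # estimate has settled
            return len(data)
        prev = curr
    return max_n

# Simulated data stream, e.g. a measurement with mean 100 and SD 15.
random.seed(1)
n_needed = stable_sample_size(lambda: random.gauss(100, 15))
```

In practice `draw()` could resample from pilot data rather than a simulation, matching the text's suggestion of using real data; the same stopping rule could also be applied to other associated statistics such as the influence values.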

Furthermore, novel techniques for adjusting for single and multiple confounders under the SWoP paradigm are discussed along with prediction techniques for SWoP.

This makes Statistics Without Probability a fully fledged paradigm in Statistics.

**References:**

Adrien-Marie Legendre. *Nouvelles méthodes pour la détermination des orbites des comètes*. F. Didot, 1805.

Glenn Shafer. Lindley's paradox. *Journal of the American Statistical Association*, 77(378): 325–334, 1982.

By Dr Mithilesh Dronavalli MBBS/BMedSc (Melb) MBios (Mon) MPhil(Epi) (Syd)

Born in India, Andhra Pradesh, Gudivada and a current Overseas Citizen of India. I am a proud Indian.

Statistical Consulting: www.dataclinic.org

Resume: www.kindmind.info

Poetry: www.sastram.com

LinkedIn: https://au.linkedin.com/in/mithilesh84

Email: dr.mit(at)me.com