When comparing two treatments, any differences in results may simply reflect the play of chance.
The way to avoid being misled by the play of chance in treatment comparisons is to base conclusions on sufficiently large numbers of patients who die, deteriorate, improve, or stay the same.
For example, in a study of 20 patients, 4 out of 10 patients (40%) died after receiving Treatment A compared with 6 out of 10 (60%) similar patients who received Treatment B.
The measure of difference (the risk ratio) in this example is 0.67, that is, the 40% risk of death with Treatment A divided by the 60% risk with Treatment B.
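The arithmetic behind that figure can be sketched as follows (a minimal illustration; the function name is ours, and the counts are taken from the example above):

```python
def risk_ratio(events_a, total_a, events_b, total_b):
    """Risk ratio: the risk of the event in group A divided by the risk in group B."""
    risk_a = events_a / total_a  # e.g. 4/10 = 40%
    risk_b = events_b / total_b  # e.g. 6/10 = 60%
    return risk_a / risk_b

# 4 of 10 patients died on Treatment A, 6 of 10 on Treatment B
print(round(risk_ratio(4, 10, 6, 10), 2))  # → 0.67
```

A risk ratio below 1 means the event was less common in the first group; here, 0.67 means the risk of death with Treatment A was two-thirds of that with Treatment B.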
Based on these small numbers, would it be reasonable to conclude that Treatment A was better than Treatment B?
Probably not. Chance might be the reason that more people died in one group than in the other.
If the comparison was repeated in other small groups of patients, the numbers who died in each group might be reversed (6 against 4), or come out the same (5 against 5), or in some other ratio – just by chance.
But what would you conclude if exactly the same proportion of patients in each treatment comparison group (40% and 60%) died after 100 patients had received each of the treatments? Although the risk ratio is exactly the same (0.67) as in the earlier comparison, 40 deaths compared with 60 deaths is a more impressive difference than 4 compared with 6, and less likely to reflect the play of chance.
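The point that the same proportional difference becomes harder to explain away as sample size grows can be illustrated with a simple simulation (a hypothetical sketch, not part of the original text: we assume, for the sake of argument, that the two treatments are actually identical, each with a 50% death risk, and ask how often a gap as large as 40% versus 60% would appear by chance alone):

```python
import random

random.seed(1)

def chance_of_gap(n_per_group, gap, trials=20_000, death_risk=0.5):
    """Fraction of simulated comparisons, with NO real treatment difference,
    in which the two groups' death rates differ by at least `gap` purely by chance."""
    hits = 0
    for _ in range(trials):
        deaths_a = sum(random.random() < death_risk for _ in range(n_per_group))
        deaths_b = sum(random.random() < death_risk for _ in range(n_per_group))
        if abs(deaths_a - deaths_b) / n_per_group >= gap:
            hits += 1
    return hits / trials

# A 40% vs 60% split is a gap of 20 percentage points (0.2).
print(chance_of_gap(10, 0.2))    # with 10 per group: happens often by chance
print(chance_of_gap(100, 0.2))   # with 100 per group: happens rarely by chance
```

In runs of this sketch, a 20-point gap arises by chance in roughly half of the small (10-per-group) comparisons but only a tiny fraction of the large (100-per-group) ones, which is why 40 deaths versus 60 is far more convincing than 4 versus 6.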