Probability and significance

Specification: Probability and significance: use of statistical tables and critical values in interpretation of significance; Type I and Type II errors.

Before analysing data, a clear hypothesis must be outlined, which can be either directional or non‐directional. It is important to recognise which is which: without knowing, the wrong statistical test (or critical value) might be selected for the data, which would misrepresent the findings.


A directional hypothesis states which direction the findings are expected to take. For example, in a test of difference, we might expect that one group performs better than the other; in an association, we might specify the type of relationship that we expect to see, for example a positive relationship (correlation). A directional hypothesis is selected by a researcher when previous research in that field of psychology suggests findings will go in that particular direction.


A non‐directional hypothesis is selected by a researcher when there is little, or conflicting, evidence in that field of psychology and a clear outcome for the research is not certain. For example, a non‐directional hypothesis investigating differences may state that a difference between conditions is expected, but not give further specific details regarding that difference. Equally with an association, a non‐directional hypothesis would predict that a relationship would be found, without stating the direction.

Use of statistical tables and critical values

After conducting a statistical test (e.g. the sign test), a number is generated called the calculated value. This number helps determine whether the results are significant, which in turn helps decide whether to reject the null hypothesis and accept the experimental/alternative hypothesis. To do this, the calculated value is compared with the critical value found in the relevant table of statistical values.
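This comparison can be sketched in code. The following is a minimal Python illustration of a two-tailed sign test, using hypothetical before/after scores; rather than reading a printed table, it derives the critical value directly from the binomial distribution, which is where the published sign-test tables come from:

```python
from math import comb

def sign_test(before, after, alpha=0.05):
    """Two-tailed sign test on paired scores (illustrative sketch).

    Returns (calculated value S, critical value, significant?)."""
    diffs = [b - a for b, a in zip(before, after) if b != a]  # ties are dropped
    n = len(diffs)
    # Calculated value S: the count of the less frequent sign
    s = min(sum(d > 0 for d in diffs), sum(d < 0 for d in diffs))

    # Two-tailed probability of observing k or fewer of one sign
    # under the null hypothesis (each sign equally likely)
    def p_two_tailed(k):
        return 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n

    # Critical value: the largest S whose probability is at most alpha
    critical = max((k for k in range(n + 1) if p_two_tailed(k) <= alpha),
                   default=None)

    # For the sign test, results are significant when S <= critical value
    significant = critical is not None and s <= critical
    return s, critical, significant

# Hypothetical before/after scores for 10 participants
before = [12, 15, 9, 14, 11, 16, 10, 13, 12, 15]
after  = [14, 17, 8, 16, 13, 18, 12, 15, 14, 17]
print(sign_test(before, after))  # S = 1, critical value = 1 -> significant
```

Note the direction of the comparison: for the sign test (and other tests such as Mann–Whitney), the calculated value must be *equal to or less than* the critical value for significance; for tests such as chi-squared it is the other way around.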


The critical value varies with the statistical test used, as each test has its own table of critical values. Selecting the correct critical value from the table depends on three factors: the level of significance chosen (usually 0.05), whether the hypothesis is directional (one-tailed) or non-directional (two-tailed), and the number of participants (N) or degrees of freedom (df) for the test.

Type I and type II errors

Because the chosen level of significance accepts some probability (most commonly 5%) that the findings are due to chance, there is always a possibility that the wrong hypothesis has been accepted in error.


A type I error occurs in situations where the null hypothesis is rejected, and the experimental/alternative hypothesis accepted, when it should have been the other way around. A researcher will have concluded that the results are statistically significant when in fact they are not. This can also be referred to as a false positive whereby the psychologist falsely claims their findings are significant when in fact there is no difference/relationship present.


A type II error occurs when the researcher has accepted the null hypothesis and rejected the experimental/alternative hypothesis, and it should have been the other way around. This can also be referred to as a false negative whereby the psychologist thinks their findings were not statistically significant, but they were.


The likelihood of making each of these errors depends partly on the level of significance chosen. If the significance level is too lenient, perhaps 0.1 instead of 0.05, then a researcher is more likely to make a type I error, claiming that results are statistically significant when they are not.


If the significance level is too strict, perhaps 0.01, then a type II error becomes more likely, with the researcher stating that findings are not statistically significant when they are. This is why psychologists conventionally use a significance level of 0.05: it gives reasonable confidence that significant results reflect the effect of the independent variable on the dependent variable, while balancing the risk of making a type I error against the risk of making a type II error.
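This trade-off can be made concrete with a simulation. The sketch below (in Python, with simulated rather than real data; a two-tailed binomial p-value stands in for any statistical test) runs many experiments in which the null hypothesis is genuinely true, so every "significant" result is a type I error. Lowering the significance level from 0.10 to 0.01 visibly reduces the false-positive rate:

```python
import random
from math import comb

def two_tailed_p(successes, n):
    """Two-tailed binomial p-value for n 50/50 trials (null hypothesis true)."""
    k = min(successes, n - successes)
    return min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)

random.seed(1)  # reproducible simulated data
n, trials = 30, 10_000

# Simulate many experiments in which the null hypothesis really is true,
# so any result declared "significant" is a false positive (type I error)
p_values = []
for _ in range(trials):
    successes = sum(random.random() < 0.5 for _ in range(n))
    p_values.append(two_tailed_p(successes, n))

for alpha in (0.10, 0.05, 0.01):
    rate = sum(p <= alpha for p in p_values) / trials
    print(f"significance level {alpha:.2f}: type I error rate ~ {rate:.3f}")
```

The converse (the type II error rate rising as the significance level is made stricter) could be shown the same way by simulating data in which a real difference exists and counting how often the test fails to detect it.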

Possible exam questions

Revision materials