If every single member of a reading club made a huge gain in reading age and no member of the control group made any such gain, then you can be reasonably confident that the reading club benefited the pupils' reading ability! In the real world, however, data is rarely that clear cut. There will usually be variation between individuals, and the degree of this may cast doubt on your conclusions. If an improvement is shown, you need to judge whether this improvement is significant.
Even if a significant effect is noticed, you need to consider the reliability of your conclusion – how certain can you be that what you did caused the effect, rather than any other factor? This guide takes you through the necessary steps to ensure that you get real value and meaning from your collected results. It deals with quantitative data, as it is often difficult to draw general conclusions from qualitative data.

IDENTIFYING TRENDS

You hope that attendance at study support in general, or at a particular activity, will have some measurable effect on the participants.
Trends are important when you want to study the effect of different amounts of study support. Examples might be: the effect of different levels of attendance at study support on SAT level achieved at the end of a Key Stage; the effect of different numbers of attendances at a Maths club on Maths results at the end of the year; progress in a measurable sports skill during the course of an extended programme. What you are looking for is some sort of relationship between what you provided and a measurable outcome.
This is called a correlation, and is best shown by plotting the data on a graph. The main types of correlation are shown in Box 1:

Strong positive correlation – increasing the amount of activity increases the measurable outcome considerably.
Weak positive correlation – increasing the amount of activity increases the measurable outcome to some extent.
No correlation – the activity has no measurable effect on the outcome.
Weak negative correlation – increasing the amount of activity decreases the measurable outcome to some extent.
Strong negative correlation – the activity significantly decreases the measured outcome (this can be desirable – e.g. if the activity was a 'health club' and the outcome was fast food items eaten in a week).

[Box 1 – Types of correlation: graphs of measurement of effect against amount of activity for each type above]

Very often, the points on the graph will not fall in an absolutely regular pattern. The lines drawn on the graphs in Box 1 simply indicate general trends in the data.
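The strength of a correlation like those in Box 1 can be given a number: the Pearson correlation coefficient, which runs from -1 (strong negative) through 0 (no correlation) to +1 (strong positive). A minimal sketch in Python, using invented attendance figures rather than any real data from this guide:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: club sessions attended vs. reading-age gain in months
sessions = [2, 4, 6, 8, 10, 12]
gain     = [1, 3, 2, 5, 6, 7]
print(round(pearson_r(sessions, gain), 2))  # → 0.95, a strong positive correlation
```

A value near +1 or -1 corresponds to the "strong" graphs in Box 1; values near 0 correspond to "weak" or "no" correlation.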
Other patterns are possible. One type of trend indicates that the activity has an initial effect, but that after a while the rate of improvement becomes much less. This could be useful in determining the optimum length for the activity. Another pattern often indicates that improvement takes a while to 'kick in'. This also gives useful information, this time on the minimum duration of the activity. It can be seen that analyzing such trends can provide a range of useful information, beyond the simple question of "Does this activity have an effect?"
ACCURACY AND RELIABILITY OF DATA

It is unwise to draw firm conclusions from inaccurate or unreliable data. Accurate data reflects reality – it is near to the 'true' value of what you are measuring. Often, there is little problem with the accuracy of data collected in connection with study support. Unless there is human error, for instance, it is impossible to be 'inaccurate' when recording attendance data – either a pupil is there or is not! Accuracy can sometimes become an issue with questionnaires and evaluation forms, however. Some common problems are:

1) The pupils do not understand the question.
2) The pupils misunderstand the question – this can often occur when asked to give a rating of 1–5 for a response, for instance. Some pupils may 'reverse' the scale (thinking 1 means 'good' when in fact it means 'bad').
3) The pupils tend to give the response that they feel is 'expected' rather than a genuine opinion.
4) The pupils answer in 'friendship groups', all of whom give the same response.
5) Self-reporting of progress can be subjective and inaccurate. This is particularly the case with pupils of low self-esteem, who can either play down their progress or be reluctant to admit to anything less than absolute success.
Careful planning of the questionnaire and its administration, and careful observation of the responses, can help to avoid such inaccuracies. If a particular inaccuracy is likely to skew the results, some data may need to be excluded. In the suggested scenario above, for instance, where a pupil may be following his or her friend, there may be a case for recording their two responses as a single response. This exclusion of results should only be done where there is a clear reason for doing so.

[Box 2 – Grade improvement or decrease over predicted grade, for pupils who attended and pupils who did not attend: two sets of results]

In the first set of results, the average improvement is higher for the group who attended, but the results are very variable and several pupils who attended did worse than some who did not. These results are rather unreliable. The second set of results produces the same averages, but the figures for the attendees are much less variable and can therefore be considered more reliable. The figures for those who did not attend show a little more variability than for those who did.
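The reliability contrast described for Box 2 can be shown numerically: two groups can share the same average improvement yet differ greatly in spread, and the standard deviation captures that spread. A sketch with invented figures (not the actual Box 2 data):

```python
from statistics import mean, stdev

# Invented grade improvements over predicted grade (illustrative only):
# both groups have the same average, but the second is far less variable.
variable_group = [3, -1, 4, -2, 5, 0]
steady_group   = [1, 2, 1, 2, 2, 1]

print(mean(variable_group), mean(steady_group))  # same average: 1.5 and 1.5
print(round(stdev(variable_group), 2))           # large spread -> less reliable
print(round(stdev(steady_group), 2))             # small spread -> more reliable
```

The group with the smaller standard deviation supports the same average with far more consistency, which is exactly the sense in which the second set of Box 2 results is "more reliable".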
Note that both sets of data in Box 2 show a relatively small improvement in those who did attend study support compared to those who did not. A judgment will still have to be made as to whether such an improvement can be judged significant (see below), but that judgment can be made much more easily with reliable results than with unreliable ones. If the difference between groups is very large, reliability of results, while still desirable, becomes less important. In order to maximize reliability, the following measures are suggested:

1) Ensure that results are as accurate as possible (see above).
Inaccurate methods can increase variation.
2) Ensure the sample size is big enough ('rogue' results can have a big effect in small samples).

If your results are not as reliable as you would like, it does not mean that they are useless and should be discarded. Unreliability can make any conclusions a little more tentative, but not necessarily invalid. A benefit may still be significant, even if small. A simple way to judge the significance of any difference is to analyze the results statistically (see below).
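One simple statistical check of significance (a sketch only – this guide does not prescribe a particular method, and the figures below are hypothetical) is a permutation test: repeatedly shuffle the pooled scores between 'attended' and 'control' labels and count how often a difference at least as large as the observed one arises purely by chance.

```python
import random

def permutation_p_value(a, b, n_iter=10000, seed=42):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(sum(left) / len(left) - sum(right) / len(right)) >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical grade improvements (illustrative only)
attended = [5, 6, 6, 7, 5, 6, 7, 6]
control  = [4, 5, 4, 5, 4, 5, 5, 4]
p = permutation_p_value(attended, control)
print(p < 0.05)  # a small p-value suggests the difference is unlikely to be chance
```

If fewer than 5% of the shuffled arrangements produce a gap as large as the real one, the improvement is conventionally judged significant; with small samples the 'rogue result' caveat above applies with full force.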
It is difficult to draw reliable conclusions unless some form of control group is used. A 'Reading Buddy' scheme may appear to produce a significant rise in the participants' reading age. However, unless you know that a group of similar pupils who did not attend the scheme showed much less advancement, the conclusion is unreliable. Setting up a control group is sometimes difficult, and this is covered in another guidance leaflet in this series, but it is usually essential in order to draw reliable and valid conclusions.
Sometimes, national data may be a substitute for a school-based control group. Ensure that any conclusions drawn do not go beyond what the data actually indicates. In particular, data will very often indicate an effect but not provide any evidence for the cause of that effect. For example, let us suppose that attendance at a summer transition course resulted in the pupils who attended reporting less anxiety in the first week of their new school year than those pupils who did not attend.
It is reasonable to conclude that attendance was linked with less anxiety, but the data is unlikely to give precise evidence as to why this occurred. A conclusion like "having the chance to meet their peers and teachers in the transition course caused them to feel less anxious when they started school" is not justified (unless that was specifically identified as a factor by the pupils concerned). It may be, for instance, that those pupils who were willing to attend such a course were inherently less anxious to begin with.