We collected data for each indicator (in some cases this involved a calculation). We then found the England average for each variable, and the standard deviation between local authorities within England. In most cases, the England average was available from the same source as the data for individual local authorities. Where this wasn’t the case, we calculated the average as a population-weighted average of all local authorities. This was the case for 16 indicators, including those sourced from the IMD.
We then calculated the z-score for each indicator for each LA, subtracting the England mean and dividing by the standard deviation between LAs:

$$z_{ij} = \frac{\mathrm{raw}_{ij} - \mathrm{mean}_i}{\mathrm{sd}_i}$$

where $\mathrm{raw}_{ij}$ is the original value of indicator $i$ for LA $j$, $\mathrm{mean}_i$ is the England mean for indicator $i$, and $\mathrm{sd}_i$ is the standard deviation of indicator $i$ between LAs. Where necessary, indicators were reversed so that positive numbers are better than average.
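As an illustration, a minimal sketch of this calculation in Python, assuming a DataFrame with one row per LA and one column per indicator (the function name and arguments are our own, not the Index’s code; where a published England average or a population-weighted mean is available, it should be substituted for the simple column mean):

```python
import pandas as pd

def z_scores(df: pd.DataFrame, reverse: tuple[str, ...] = ()) -> pd.DataFrame:
    """Standardise each indicator column across local authorities.

    df      -- one row per LA, one column per indicator (raw values)
    reverse -- indicators where lower raw values are better; their
               z-scores are flipped so positive always means better
    """
    mean = df.mean()       # simple England mean; swap in the published or
                           # population-weighted figure where applicable
    sd = df.std(ddof=0)    # standard deviation between LAs
    z = (df - mean) / sd
    if reverse:
        z[list(reverse)] *= -1   # reverse 'lower is better' indicators
    return z
```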
Calculating z-scores allows us to compare an LA’s performance on two indicators even if they are measured on different scales. So if an LA scores -1.0 on one indicator and -2.0 on another, it is 1 standard deviation below the English mean for the former but 2 standard deviations below the mean for the latter, indicating that the second indicator may be more of a priority for the LA.
Note that, in future years, to allow comparison over time, it will be possible to calculate ‘pseudo z-scores’ where the data for new years is benchmarked against the mean and standard deviation from this first Index. That means that while for this year, the average z-score for any indicator is by definition 0, in future years, the average could rise or fall.
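For later years, a hedged sketch of such benchmarking, assuming the baseline mean and standard deviation from this first Index have been stored (all names here are illustrative):

```python
import pandas as pd

def pseudo_z_scores(new_raw: pd.DataFrame,
                    baseline_mean: pd.Series,
                    baseline_sd: pd.Series) -> pd.DataFrame:
    """Benchmark a later year's raw indicator values against the mean and
    standard deviation saved from the first Index, making scores comparable
    over time; the average pseudo z-score need no longer be 0."""
    return (new_raw - baseline_mean) / baseline_sd
```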
We averaged all indicators within each subdomain first. In almost all cases, all indicators were given the same weighting. We then averaged all subdomains within each domain. Note that we had two measures of wellbeing inequality, so these were averaged together before being combined with the other two measures of inequality. Finally, we averaged across all the Local Conditions domains to create a Local Conditions score; a sketch of this aggregation follows.
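The sketch below assumes dictionaries mapping subdomain names to their indicator columns and domain names to their subdomains; this grouping structure is an illustrative placeholder, not the Index’s actual hierarchy:

```python
import pandas as pd

def local_conditions_score(z: pd.DataFrame,
                           subdomains: dict[str, list[str]],
                           domains: dict[str, list[str]]) -> pd.Series:
    """Equal-weight averaging: indicators -> subdomains -> domains -> score.

    Nested averages (such as the two wellbeing-inequality measures) can be
    pre-averaged into a single column before this step.
    """
    sub = pd.DataFrame({name: z[cols].mean(axis=1)
                        for name, cols in subdomains.items()})
    dom = pd.DataFrame({name: sub[cols].mean(axis=1)
                        for name, cols in domains.items()})
    return dom.mean(axis=1)  # one Local Conditions score per LA
```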
z-scores are hard for most people to interpret, so we converted them to a scale that runs from 0 to 10, with 5 indicating the average for England (for this year). A 10 on such a scale indicates an exceptionally good performance, and a 0 an exceptionally bad one. To do so, each z-score was multiplied by 5/3 and then 5 was added, as shown below:

$$\mathrm{score}_{ij} = \frac{5}{3} z_{ij} + 5$$
Scores above 10 were capped at 10, and those below 0 were capped at 0.
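In code, the conversion and capping together amount to a single clip (a sketch using numpy; to_score is our own name):

```python
import numpy as np

def to_score(z):
    """Convert a z-score to the 0-10 scale: 5 marks the England average,
    z = +3 maps to 10 and z = -3 maps to 0; anything beyond is capped."""
    return np.clip(z * 5 / 3 + 5, 0, 10)

# e.g. to_score(-1.5) == 2.5; to_score(3.1) and to_score(7.1) both return 10.0
```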
This may seem, and indeed is, somewhat arbitrary: the formula was designed purely to ensure a reasonable spread of scores between 0 and 10. With this formula, any variation beyond 3 standard deviations from the mean is ignored. So, for example, an LA with a z-score of 3.1 on a particular domain would get 10/10, as would one with a z-score of 7.1. The implication is that any variation beyond a certain range is fairly irrelevant. As it happens, of the 2700 subdomain scores for the 150 local authorities, only 8 z-scores fell beyond the ±3 range and were therefore capped.
As well as calculating 0-10 scores, we also devised a colour scheme for presenting scores; the colours are shown below.
The thresholds were chosen to ensure a reasonable spread across the colours: for example, 18% of subdomain scores fall in the bottom category, 21% in the second, 27% in the third, and so on.