The CQC has recently published further information on its new regulatory approach, most notably in relation to the frequency of assessment, how it will gather evidence and its new scoring system for reaching ratings.
Frequency of assessment
Assessment frequency will depend on the information the CQC receives and the evidence it collects about a service. This contrasts with previous assessment frequencies, which were based on the type of service and its previous rating. The CQC is currently carrying out responsive inspections mainly based on the presentation of risk, and this focus on risk will not change under its new regulatory approach. However, the collection of evidence will no longer be based primarily on physical inspections, and the CQC is moving to refer to ‘evidence collection activities’.
The new assessment framework is broken down into different evidence categories and the CQC is internally setting an initial schedule for ongoing assessment of different categories for different types of services. The schedule is intended to be flexible and can be varied in light of the CQC’s view of risk or in response to additional national priorities (e.g. the current focus on maternity services).
The CQC has confirmed that its current ambition is to update the information it holds on a service across all required evidence categories within a 2-year period.
The CQC has developed evidence categories, linked to the new quality statements, to help it decide how best to collect evidence. The categories are intended to bring structure and consistency to its assessments and demonstrate the types of evidence the CQC uses to understand the quality of care being delivered against a quality statement.
Six evidence categories have been developed:
- People’s experience of health and care services;
- Feedback from staff and leaders;
- Feedback from partners;
- Observation;
- Processes; and
- Outcomes.
On-site inspections fall under the ‘observation’ category and these will be carried out more frequently where:
- There is a greater risk of poor or closed cultures and observation is the only way of gathering people’s experiences of care;
- It is the only way to ensure the right people and activities will be available to assess quality;
- The CQC has concerns about transparency and the availability of evidence; and
- There is a statutory obligation to do so.
In general, the number of on-site inspections is likely to reduce as the CQC shifts towards more remote data collection practices. This presents an opportunity for providers to be proactive in sharing information, for example to update the CQC on real-time improvements within a service.
New rating system
The CQC has published further information on how it will use a scoring system to process its evidence and determine ratings of Inadequate, Requires Improvement, Good and Outstanding for the five key questions and at the overall service level. It has confirmed that it will only publish ratings to start with but it intends to publish scores in the future.
The scores are intended to bring clarity and consistency to ratings across providers, and to help the CQC track whether quality is moving up or down within a rating.
Scores will be attributed to each evidence category to determine an overall score for a quality statement. These scores will then be used to generate a rating at the five key question level and then these ratings will be aggregated for the overall rating.
Scores range from 1-4 with the following criteria:
4 = evidence shows an exceptional standard of care
3 = evidence shows a good standard of care
2 = evidence shows shortfalls in the standard of care
1 = evidence shows significant shortfalls in the standard of care
Scores for each quality statement are determined by adding together the evidence category scores for an individual quality statement and dividing that number by the maximum possible score. This gives a percentage score that is used to determine the overall score for the quality statement. Percentage thresholds have been set for each score from 1-4 and, similarly, for each rating level (please see below). Key question ratings are reached in the same way: the scores for each quality statement are added together and the percentage score for that area is calculated, which gives the rating for that key question.
The percentage thresholds are the same for the scores and the ratings, as follows:
25 to 38% = 1 / Inadequate
39 to 62% = 2 / Requires Improvement
63 to 87% = 3 / Good
Over 87% = 4 / Outstanding
Key question ratings will then be aggregated to determine the overall rating.
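To illustrate, the arithmetic described above can be sketched in a few lines of code. This is a hypothetical worked example, not the CQC's own tooling: the helper names and the sample evidence scores are illustrative assumptions, and only the 1-4 scale and the published percentage thresholds come from the guidance summarised here.

```python
def percentage(scores):
    """Sum the 1-4 evidence category scores and divide by the maximum possible score."""
    return 100 * sum(scores) / (4 * len(scores))

def banding(pct):
    """Map a percentage to the published score / rating bands."""
    if pct > 87:
        return 4, "Outstanding"
    if pct >= 63:
        return 3, "Good"
    if pct >= 39:
        return 2, "Requires Improvement"
    return 1, "Inadequate"

# Example: a quality statement assessed against three evidence categories,
# scoring 3, 3 and 2 respectively (illustrative figures).
evidence_scores = [3, 3, 2]
pct = percentage(evidence_scores)   # (3 + 3 + 2) / 12 = 66.7%
score, rating = banding(pct)
print(f"{pct:.1f}% -> score {score} ({rating})")  # 66.7% -> score 3 (Good)
```

The same banding function applies at the key question level, where the inputs are the quality statement scores rather than the evidence category scores.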
As the CQC moves away from assessing at a single point in time, the intention is for different areas of the framework to be assessed utilising the new scoring system on an ongoing basis. This means scores can be updated more frequently in relation to single evidence categories which could have a knock-on impact on ratings.
Providers should note that the CQC is still consulting on how it will develop its existing factual accuracy process to allow providers to challenge ratings attributed under the new system.