In June 2015 (Ridout Report: ‘Inherent weaknesses in CQC’s rating system – will you get caught out?’), we highlighted struggles experienced by Providers initially rated as ‘Requires Improvement’ or ‘Inadequate’ as they tried to increase their overall ratings through subsequent inspections. In this article, we reflect on a potential shift in CQC’s approach.
The issue encountered by Providers was that follow-up or ‘focused’ inspections would only re-assess those areas rated as ‘Requires Improvement’ or ‘Inadequate’ rather than assessing the service as a whole. Where discrete ratings were subsequently increased, CQC often refused to increase overall ratings despite significant improvements, arguing that there needed to be evidence of consistency of practice over time.
Obtained through a Freedom of Information Act request, CQC internal guidance for inspectors sets out the following at s11.4:
· “Where the last comprehensive inspection took place less than six months ago, you should use a fresh aggregation tool to see whether the overall location rating should be changed… We will not change an overall rating if we carry out a focused inspection more than six months after a site visit for a comprehensive inspection. This is because we will not be able to make judgements about all aspects of the service at a reasonably similar time, which we must be able to do in order to award an overall rating”.
· “The ratings characteristics are written in terms of there being a track record and consistency of practice, which would be unlikely to be achieved in the time between the comprehensive inspection and the follow up inspection. For example, it is doubtful that a service would be able to achieve the consistency characteristics required to be ‘Good’ within a few weeks or months of being judged ‘Requires improvement’”.
The guidance indicates that, if the focused inspection occurs within six months of the comprehensive inspection, the overall rating can be changed. However, evidence of consistency (without which the overall rating cannot be amended) is unlikely to be achievable “within a few weeks or months”. The guidance fails to set out a preferred time period (e.g. between four and six months) during which follow-up inspections could assess both improvements and consistency.
Without such clarity, overall ratings do not reflect Providers’ improvements and decision-making by inspectors becomes arbitrary.
CQC’s refusal to amend an overall service rating (despite acknowledging significant improvements) leads to irrational conclusions. One recent example is a Provider whose first inspection resulted in two ‘Good’ ratings and three ‘Requires Improvement’ ratings. The focused inspection took place six months and one week later. Inspectors found the remaining three areas to now be ‘Good’, but did not amend the overall rating to ‘Good’ as CQC could not confirm consistency of practice over time.
The refusal of CQC to amend the overall rating was successfully challenged by the Provider as an irrational decision based on internal guidance which unfairly fettered the inspectors’ discretion.
Had this rating remained, a service with five ‘Good’ ratings would have been rated ‘Requires Improvement’ overall. Plainly, this would not constitute an accurate reflection of the Provider’s service and may have had an adverse impact on the Provider’s business.
The CQC’s Provider handbook explains that CQC ratings are a tool “to help people choose care”. An inaccurate overall rating may dissuade potential service users and their families from choosing a service which would otherwise be of great benefit to them. Robert Francis QC emphasised in his Executive Summary to the Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry that the “provision of accurate information on which the public and others can rely to make decisions” (paragraph 1.73) is the basic task of CQC, together with protecting users of services from substandard care.
Commenting on the Government’s publication of its findings from The Cutting Red Tape Reviews (3 March 2016), CQC’s Chief Inspector of Adult Social Care, Andrea Sutcliffe, echoed these sentiments:
“The responsibility for delivering safe, effective, person-centred and high quality care clearly rests with providers, supported by their commissioners and funders. Regulators must not get in the way of that – we have to ensure that we add value by setting clear expectations, providing transparent information about our judgments, encouraging improvement and tackling poor care when we find it”.
CQC’s interpretation of its internal guidance creates a far from transparent system. It is not in the interests of either CQC or the public for inappropriate or misleading ratings to be published. Maintenance of public confidence in CQC dictates a high level of scrutiny of all findings and it is within this context that questions must be asked of CQC’s approach to the concept of consistency of practice over time.
It is hoped that the recent successes in overturning such decisions demonstrate a new, common-sense approach by CQC. To avoid the need for such challenges in future, however, it may be that CQC needs to scrutinise not only its ratings but also the guidance underlying them.