CQC’s Strategy for 2016 to 2021 sets out four priorities it will focus on in order to achieve a more targeted, responsive and collaborative approach to regulation, so more people get high-quality care.  One of those four priorities is to “deliver an intelligence-driven approach to regulation”.

CQC hopes that, by being more intelligence-driven, it will become a more effective and efficient regulator.  CQC currently uses information to determine risk and target resources, but it accepts that the quality and availability of information are currently limited in adult social care.

CQC wants to use information to focus on risk by “combining and mining” information collected from various sources, including the public, providers, inspections and other sources such as whistleblowing to “generate rapid insight and action”.

In the 2014 CQC paper called “An intelligence-driven CQC” (an apparent precursor to the current CQC Strategy) it was noted that “Our staff told us they needed more analysis and interpretation, rather than raw data.”  Three years ago, CQC staff were apparently struggling to interpret available data.  Whether this was due to time constraints or an inability to analyse and understand what the raw data might mean, we do not know.  The 2016-2021 Strategy appears to build on the 2014 paper, and it would appear that much further work is required by CQC to achieve what it laid out in 2014.

But how will the “raw” information be gathered and used, and who, or what, will analyse the information quickly and insightfully?  In order to achieve its aims, CQC will have to gather information efficiently.  One way to improve how information is gathered is through technology.  Technology is being used more and more in the health and social care sector, and it’s something we all must get to grips with in day-to-day life.  CQC wants to move all its interactions with providers online.  CQC recognises the cost pressures that all providers are under and the fact that many of the 25,000 adult care locations are run by small providers.  Cost is a barrier to providers taking up technology, but if CQC wants to generate rapid information and rapid insight then technology on all sides is going to have to be deployed.

Of course, in time, this may result in artificial intelligence (AI) gathering and assimilating information from sources previously not used – how about information on when call bells in homes are answered; when someone posts a negative review on Facebook; or when a phone call is made from a care home to the local social services team?

Rapidity also requires algorithms, and someone will have to determine what set of rules will be used to analyse the mined data.  Whether that’s people in the short term or AI in the long term, what should be determinative of risk?  Will CQC look at quantitative information and/or qualitative information?  For example, is it the number of statutory notifications a provider has submitted to CQC that will flag up a risky provider (and what is the magic trigger number?), or is it the type of issue within a single notification that highlights potential concern?  Are we looking at long-term or short-term trends?
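To make the question concrete, a risk-flagging rule of the kind speculated about above might look like the following minimal sketch.  Everything in it – the signal names, the 90-day window and the trigger thresholds – is hypothetical for illustration; CQC has published no such figures.

```python
from dataclasses import dataclass

@dataclass
class ProviderSignals:
    """Hypothetical signals mined for one provider location."""
    notifications_90d: int          # statutory notifications in the last 90 days (quantitative)
    serious_notifications_90d: int  # notifications of a serious type, e.g. alleged abuse (qualitative)

# Illustrative thresholds only -- the "magic trigger numbers" are unknown.
NOTIFICATION_TRIGGER = 5
SERIOUS_TRIGGER = 1

def flag_for_review(s: ProviderSignals) -> bool:
    """Flag a provider when either the sheer volume of notifications
    or the presence of a single serious notification crosses a threshold."""
    if s.serious_notifications_90d >= SERIOUS_TRIGGER:
        return True
    return s.notifications_90d >= NOTIFICATION_TRIGGER
```

Even this toy rule shows where the judgement lies: a single serious notification outweighs volume, and the thresholds themselves encode someone’s view of what “risky” means.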

Will CQC inspectors be replaced by computers?  Probably not.  But advancements in technology may free up CQC inspectors to spend more time on the ground inspecting and therefore, whilst the initial spend on technology would be high for CQC, it may save money in the long term.  However, as with any information that sparks an inspection, that information merely highlights a potential area of concern, which must be tested and corroborated by CQC inspectors.  Intelligence simply presents facts and perhaps, depending on the algorithms/rules used, skewed facts.  Analysed information must itself be continually analysed and tested and, if required, the baseline changed.  Think about computer-generated police profiling and the proportion of people from ethnic minorities who are stopped and searched by police versus white people.  Those systems should be re-checked to ensure limited resources are not being used ineffectively – that CQC inspectors are being sent to providers that may truly be a concern.  Computer systems can be flawed and, unfortunately, so can humans – we’re conditioned by multiple factors, and confirmation bias means we tend to look for evidence that supports our views rather than evidence which undermines them.  Therefore, whilst CQC inspectors’ views may be coloured by pre-inspection information, inspection findings must also be tested and challenged where necessary.
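The re-checking described above could start very simply: track what share of computer-flagged providers inspectors actually confirmed as a genuine concern.  A minimal sketch, assuming hypothetical parallel records of flags and inspection outcomes:

```python
def flag_precision(flags, inspection_confirmed):
    """Of the providers the system flagged, what fraction did inspectors
    confirm as a genuine concern?  Returns None if nothing was flagged.
    A falling value suggests the rules or baseline need revisiting."""
    # Keep only the inspection outcomes for providers that were flagged.
    confirmed = [c for f, c in zip(flags, inspection_confirmed) if f]
    return sum(confirmed) / len(confirmed) if confirmed else None
```

A low or falling figure would mean inspectors are being sent to providers the data wrongly painted as risky – exactly the ineffective use of limited resources the article warns about.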

In addition, no matter what algorithms are used, insight and action can only be achieved “after the event” – the data collated describes events that have already occurred.  CQC wants to move back to a model with a more hands-off approach to providers rated Good or Outstanding – a model once employed but amended following various sector scandals such as Winterbourne View.  If used appropriately, information and intelligence may allow CQC to act swiftly, but the reality is that the events which trigger inspection will already have happened.  Is this good enough?  Can and should intelligence be used to predict potential failures, predict potential breaches of regulations, and predict those providers who should be closed down now, with action taken ahead of time – or are we straying into a “PreCrime” department à la Minority Report?

Whatever an “intelligence-driven approach” for CQC will mean in reality, intelligence and technology can only do so much.  Remember the Star Ratings Programme, when a provider could not achieve a rating because the computer programme would not allow it to deviate from particular weightings – the computer simply said ‘No’.  Intelligence does not determine what should be done with that intelligence.  Intellect (and common sense) must still be used by all parties involved.  Elements of weighting, proportionality and fairness must be employed.  Until artificial intelligence can learn these, and the perhaps uniquely human elements of empathy and understanding, CQC inspectors cannot be replaced by technology.  However, some providers may say they wouldn’t be able to spot the difference anyway.