Thursday, 25 April 2013

Finding things to stop doing...


Early exponents of evidence-based medicine put forward an optimistic view of future healthcare, where the availability of robust information would allow clinicians to select the most effective treatments – and to stop doing things which were shown not to work.  But this last part has proved elusive.

A recent paper by Sarah Garner and colleagues from NICE tested the use of Cochrane reviews to identify low value practices which might inform local disinvestment decisions http://jhsrp.rsmjournals.com/content/18/1/6.full.  NICE had been criticised by the Health Select Committee in 2008 for lack of progress in supporting the NHS drive for efficiency savings.  Very few technologies had been identified as absolute candidates for disinvestment.  Indeed, the authors noted that only two topics featuring low value practices had been selected in the last six years for full NICE guidance development.  A really interesting contention was explored in this paper – greater certainty and higher levels of evidence were required for a `do not recommend’ decision by NICE than for leaving a treatment option to the discretion of the clinician.  This is because of the inevitable challenge (in the courts) if existing treatments are discontinued.

And the UK is not alone.  The same paradox was noted by Chris Henshall, speaking at the Health Technology Assessment International conference last year.  He remarked wryly that the search for low-hanging fruit (ineffective procedures) often ignored the fact that the fruit was still firmly attached to the tree.  He did, however, commend NICE for one of the few examples of a `managed exit’ for low value procedures, with its guidance on the use of prophylactic antibiotics to prevent endocarditis.

Other attempts to throw light on disinvestment in the NHS include an NIHR-funded study by William Hollingworth in Bristol http://www.netscc.ac.uk/hsdr/projdetails.php?ref=09-1006-25.  This study uses high levels of practice variation in routine data as a proxy for procedures of uncertain clinical value.  In this way, it draws on classic Wennberg notions of uncertainty and preference-sensitive care for procedures such as radical prostatectomy.  Hollingworth goes on to work up guidance for local commissioning groups and to explore practice and beliefs on de-commissioning.

It reminded me too of wider NIHR-funded work on commissioning practice and the use of evidence http://www.netscc.ac.uk/hsdr/projdetails.php?ref=08-1808-244.  Observational research on decision-making by commissioning organisations was striking for the disproportionate effort given to certain areas of activity.  For instance, individual funding requests (for exceptional treatments for individual patients) required high levels of scrutiny and evidence for quite marginal areas of spend and activity.
Muir Gray and others have long bemoaned the lack of information or informed decision-making about huge areas of clinical care, from epilepsy to pain management.  This kind of inverse evidence law helps to explain just some of the difficulties in trying to realise savings in the NHS.    

Some of the most encouraging developments to reduce waste and enhance value in recent years have come with programme budgeting and pathway redesign.  This might include shifting care, role substitution (such as nurse consultant-led clinics and triage centres) and reviewing thresholds and optimal levels for stepped care.  Examples such as the initiative in Oldham to re-shape services for rheumatology, orthopaedics and chronic pain suggest the power of using evidence, with engagement from clinicians, managers and patients, to re-think care processes http://www.rightcare.nhs.uk/downloads/Right_Care_Casebook_oldham_IPH_april2012.pdf

There has been a belief that, to identify ineffective practices or treatments, `the evidence will speak for itself’.  Progress to date suggests that it is difficult to make absolute assertions about interventions of low clinical value.  A more engaged process to eliminate waste across whole programmes of clinical activity is likely to yield more fruit – with the clinicians shaking the tree.

Tuesday, 5 March 2013

The Secret Life of Organisations – lessons from case study research


Social science research has an honourable tradition of de-familiarising activity, processes and culture which have become embedded as normal practice - `how we do things here’.  A key approach is the organisational case study.  But there is often little understanding of the art (or science) of research into organisations, or of what makes a good case study.  This was the subject of a recent seminar hosted by the Health Services Research Network at Manchester Business School.  Kieran Walshe chaired the event, with a formidable array of talent from different fields.  The unifying theme was the organisational case study, but the approaches ranged widely from historical archival research to contemporaneous sense-making of new organisations.  We heard from a range of disciplines, including management and organisational studies as well as health services research.  And the subjects of research ranged from code-breaking units to operating theatres to clinical genetic centres.

We opened with Chris Grey (Royal Holloway, University of London), talking about his fascinating work on wartime signals intelligence at Bletchley Park (http://taralamont.blogspot.co.uk/2012/10/from-bletchley-park-to-nice.html).  Contemporary sources and official historians described the chaotic nature of the organisation – so how did it achieve such astonishing results?  Chris Grey used a compelling range of evidence and analysis to argue that its success came because of, and not despite, its organisational hybridity.  He described it as a `twisting together’ of routine data processing and semi-mechanised work with esoteric, highly skilled cryptanalysis.  Its organisational porosity – sucking in expertise from other sources (such as indexing capacity from the retail sector) – gave it an adaptability which was used to `patch’ organisational fissures at a local level without recourse to more elaborate, long-winded structural solutions.  The provisional, adaptive nature of the enterprise was not a weakness, but its greatest strength.

We moved rapidly from signals intelligence to de-coding the work that surgeons do.  Justin Waring (University of Nottingham) explained the use of ethnography to `make strange’ the ritualistic responses to events and shared norms of professional and inter-professional groups – in this case, operating teams.  His work has helped us to understand for instance what kind of adverse events are seen as worth reporting by surgeons and why.  He also explained the strengths of case study research as a method – particularly, the ability to zoom out (to explain the context and inter-connectedness of forms) and zoom in (to provide depth and focus on particular processes) within a single study.

Ewan Ferlie (King's College London) described a broad arc of organisational case study research and its epistemic context, from the classic single case such as Lukacs’ account of five days of the Dunkirk crisis (refreshing to have a different example from Allison’s much-cited account of the Cuban missile crisis) to broader organisational research ranging from Mintzberg to Pettigrew.  He talked about his work on managed clinical networks, using tracer activity such as implementation of NICE guidelines on urology, and observational research to `look at what people do, not what they say’.  There was some discussion about good practice in case study design.  Where social scientists are often equivocal about the optimal number of study sites, Ewan Ferlie was robust – in his experience, the right number is always eight!

Graham Martin (University of Leicester) then picked up issues of method and design in describing his work on the sustainability of new genetic services.  He cited classic works from Yin to Gerring, but cautioned against over-reliance on deductive logic, as there will always be uncontrolled variance in the dynamic, complex world of healthcare.  Although his study had used a clear 2x2 sampling frame for genetic services, based on key variables of interest, the status of participating sites changed during the course of the study.  He also noted that the best organisational case studies need adaptive, highly skilled researchers in the field, with iterative cycles of data collection and analysis.  He had found practical suggestions from case study methodologists such as Eisenhardt helpful – for instance, her suggestion of creating paired comparisons to look for points of commonality and divergence in a structured way.

We ended with a presentation from Nick Emmel (University of Leeds) which was almost philosophical.  He noted that the hallmark of organisational case study research is that the question `what is a case?’ or `what do I have a case of?’ is constantly posed throughout the research.  This in itself is a key research tactic to interpret and explain activity and causal mechanisms.  The cases might change and evolve during the course of the study.  He emphasised that the selection of appropriate topics is crucial – the ideal cases should bundle together ideas, contexts and outcomes to develop and test theories of the middle range.  Overall, Nick Emmel’s contention was that we should move from the idea of a case as a passive noun to a more active verb, `casing’, where cases are created through the research activity.

If these stimulating thoughts were becoming a little abstract, the audience provided some grounding during questions.  One researcher questioned whether case study research was more or less accessible to managers than other forms of evidence.  On the plus side, this kind of research provides stories, which are a powerful way of transmitting learning (and familiar to senior leaders in the NHS who have been through management or business school).  But others challenged the timescale for carrying out long-term observational research and asked how it could deliver usable findings to managers who need immediate answers.  It was agreed that there was a place for 3-5 year in-depth studies, but not all knowledge gaps needed primary research.

There were interesting points about the different team composition needed for good case study research.  Participants noted that biomedical research was often predicated on a hands-off principal investigator and much work done by teams of junior researchers.  In case study work, senior researchers needed to engage in the fieldwork and respond to emerging data challenges and design.  The quality of analysis and write-up was particularly important for this kind of research.

Other participants noted the exceptional nature of some of these interesting, atypical cases.  Would this provide distorted findings?  On the contrary, some researchers argued that outliers might yield valuable learning, but it would always be important to contextualise the case against the population from which it was drawn.

So a rich and stimulating seminar, which reminded us of the strengths of organisational case study research for health.  It remains the best way to provide what Flyvbjerg calls `concrete, context-dependent knowledge’.  Participants agreed it was not appropriate to identify a single blueprint for case study research, given the diversity of methods, but greater attention could be paid to study design.  This included making explicit choices about sampling or selecting cases and actively looking for data which challenged emerging lines of enquiry.  There were practical tips which could be shared from more experienced research teams, especially given the challenges of ensuring consistency as well as flexibility in comparative case study work.  The best studies allowed for `thick description’ – one of the strengths of case study research – within a rigorous, analytical, theory-driven framework.  A key problem is how to generalise findings from descriptive, context-dependent case studies.  This is difficult, but possible through cross-case analysis and deliberate theory-building.  Although there were no easy answers, it may be helpful to identify common standards and tenets of good practice for those funding, delivering and using research of this kind.  At its best, case study research provides the shock of recognition – thinking or seeing afresh the organisations where we work and receive healthcare.

Monday, 3 December 2012

How numbers help – from weather to walk-in clinics


Off last week to flood-bound Exeter, for a stimulating two-day conference led by Martin Pitt at Peninsula Medical School (http://www.hsrlive.org/events/change-by-design-systems-modelling-and-simulation-in-health-care).  It was designed to bring together clinicians, managers and patients with researchers practising those strange sciences of systems modelling and simulation.  These techniques have been under-used in health, but there was a palpable sense of excitement over these two days that this was an approach whose time had come.   

This is not new – health planners in the 1950s were using primitive modelling methods for outpatient booking systems.  But the latest techniques embrace the complexities of health and social care, the uncertainties, and the multiple interests of commissioners, providers and patients.  It is no longer – and perhaps never has been – a two-dimensional numerical exercise.

We heard inspiring stories of how the particular techniques of operational research had been brought to bear on tricky NHS problems.  These included using queuing theory to allocate and share scarce specialist mental health assessment slots between teams; applying stochastic modelling techniques to predict ambulance response times and plan rosters; using scenario planning to allocate capacity between medical, surgical and cardiac beds on `service lines’ in paediatric intensive care; and using system dynamics to re-model the entire unscheduled and emergency care system in one locality.

There was a great presentation from Paul Harper, using software animations to illustrate the dangers of planning capacity on averages.  If you fail to build in variability – a given in most systems dependent on human behaviour – your estimated average wait of 30 minutes in a walk-in centre becomes two hours.  Check out his YouTube presentation (http://www.profpaulharper.com/home/research/research-materials).  This made me think of a brilliant book I read recently on the dangers of relying on `common sense’ by the US engineer turned sociologist, Duncan Watts (http://www.amazon.com/Everything-Obvious-Common-Sense-Fails/dp/0307951790).  A common sense planner would schedule outpatient clinics based on average times from reception through work-up with a nurse to seeing a doctor.  This would be wrong.  A quote by Watts – “the whole trick is to know what variables to look at and then know how to add” – could itself be an epigraph for operational research.
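To see why averages mislead, here is a minimal sketch in Python (the numbers are my own illustrative assumptions, not taken from Harper’s talk): a single clinician sees walk-in patients who arrive, on average, every 11 minutes for consultations lasting 10 minutes on average.  Planned on averages, nobody ever waits; add realistic variability and the queue balloons.

```python
import random

def simulate_clinic(n_patients=20000, mean_interarrival=11.0,
                    mean_service=10.0, variable=True, seed=1):
    """Average wait (minutes) in a single-server, first-come-first-served walk-in clinic."""
    rng = random.Random(seed)
    arrival = 0.0          # arrival time of the current patient
    server_free_at = 0.0   # when the clinician next becomes free
    total_wait = 0.0
    for _ in range(n_patients):
        if variable:
            # exponential inter-arrival and consultation times (realistic spread)
            arrival += rng.expovariate(1.0 / mean_interarrival)
            service = rng.expovariate(1.0 / mean_service)
        else:
            # the "plan on averages" world: everything happens exactly on time
            arrival += mean_interarrival
            service = mean_service
        start = max(arrival, server_free_at)   # wait only if the clinician is busy
        total_wait += start - arrival
        server_free_at = start + service
    return total_wait / n_patients

print(f"Planned on averages: {simulate_clinic(variable=False):6.1f} min average wait")
print(f"With variability:    {simulate_clinic(variable=True):6.1f} min average wait")
```

The deterministic plan reports a wait of zero; the variable version comes out at well over an hour for exactly the same average workload – the order-of-magnitude surprise Harper was illustrating.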

One of the best parts of the two-day event was a sandpit exercise where small groups of service leaders and operational researchers quickly worked up bids for new projects.  These were pitched to the room, Dragons’ Den style.  The outputs were impressive – from using location analysis to site diagnostic services across one region to modelling how best to implement NICE guidelines for DVT care.

I ended the day talking to a paediatrician who had stumbled on the event with no prior knowledge of systems modelling, and was inspired to get analytic help when making a business case for a new specialist epilepsy nurse and pathway redesign.  There is a tension, though, between very applied, local problem-driven analytics and a more lasting body of knowledge.  Sally Brailsford (mathematician turned nurse turned health modelling academic) had pointed to the paradox – we have a huge body of evidence, but few generalisable outputs.  She had identified 1008 individual papers on re-modelling emergency department flows.  Were all these necessary?  How can we learn from the best?  As well as embedded local analysts within health organisations focused on particular problems, we need high quality research studies to generate national learning, by testing and validating models and carrying out robust evaluations of impact.

And so a long return from Exeter, with rather trying transport arrangements given the flood damage.  During discussion, some had raised the old argument that healthcare is just too complex to lend itself to mathematical techniques.  The same, of course, used to be said of weather forecasting, where predictions more than three days ahead were notoriously inaccurate.  But today’s weather modelling techniques, using historic data from multiple sensors and an understanding of the interplay of solar activity, land masses, water temperatures and wind flow, are much better.  Applied to health, techniques such as system dynamics can build in uncertainties (such as patient preferences) and variability (patient and clinician behaviours), with a more sophisticated understanding of interactions (through network analysis and other methods), to predict more accurately how services might be used and where savings could be made.  Scenario planning can also present various `what-ifs’ to integrate strategic uncertainties – a given in the NHS – into the planning process.  Numbers themselves are not enough.  But at a time of ever tighter financial pressures, can we afford to ignore the weathermen?
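A footnote for the quantitatively curious: as a toy illustration of that `what-if’ style of planning (my own sketch, not from the conference, and using the textbook single-server M/M/1 queue result Wq = λ/(μ(μ−λ)) rather than a full system dynamics model), you can compare expected waits under different demand scenarios before committing to a service design.

```python
# Toy scenario comparison for a single-clinician walk-in service.
# All numbers are illustrative assumptions, not data from the conference.

def expected_wait_minutes(arrivals_per_hour: float, service_rate_per_hour: float) -> float:
    """Mean queueing time (minutes) for an M/M/1 queue; infinite if demand >= capacity."""
    lam, mu = arrivals_per_hour, service_rate_per_hour
    if lam >= mu:
        return float("inf")   # demand at or above capacity: the queue never clears
    return 60.0 * lam / (mu * (mu - lam))

service_rate = 6.0  # one clinician, 10-minute consultations on average
scenarios = [("baseline demand", 4.0), ("winter surge", 5.0), ("nearby GP practice closes", 5.5)]
for label, demand in scenarios:
    wait = expected_wait_minutes(demand, service_rate)
    print(f"{label:26s} {demand:.1f} arrivals/hr -> expected wait {wait:6.1f} min")
```

Even this crude arithmetic shows how sharply waits deteriorate as demand approaches capacity – exactly the kind of non-linearity that planning on headline averages misses.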

Wednesday, 24 October 2012

From Bletchley Park to NICE


What can Bletchley Park teach us about organisations and organisational life?  I have just read an excellent book by Christopher Grey, Decoding Organization (great title) http://tinyurl.com/9q3gyff, in which he brings his academic experience in organisational studies, together with a deep delve into the archives, to the iconic site of Bletchley Park.  Much has been written about Bletchley Park, but it is strangely under-studied as an organisation.  Grey provides rich insights while debunking myths.  Yes, there really were chess-playing, tweedy mathematical geniuses recruited from Cambridge colleges into Hut 6 – but at the same time it was a complex organisation of 10,000 staff, three quarters of whom were women.

In many ways the story of Bletchley Park challenges all modern precepts of successful organisations – clear leadership, open culture, shared objectives and feedback to staff.  At Bletchley Park, it was not clear who was in charge, with continuous friction between the different agencies at the helm, from the Admiralty to the Foreign Office.  The chain of command was obscure – a US navy liaison officer arriving in 1942 was amazed to find no organisational chart.  This despite the workplace having grown into a complex web of listening stations and interception, decoding and intelligence functions.  There were no shared work goals – beyond the overarching mission and unifying force of war, which Grey does not discount.  Textbooks on successful management would have chief executives get up at staff meetings and tell rousing stories of what the organisation has achieved.  At Bletchley Park, there was little shared information – indeed, many or most of those working there knew nothing of the breaking of the Enigma ciphers, a momentous achievement which experts reckon shortened the war by two years.  Instead, the organisation was characterised by secrecy and highly compartmentalised units.  It has been described as a “multiple series of concentric circles”, with 47 fairly autonomous sections.  One particularly telling story is of a couple who had both worked at Bletchley Park but never revealed the fact to each other until thirty years later.

Modern management stresses the need for a strong organisational culture with shared values and beliefs.  But Bletchley Park appears to have been “a multiplicity of different agencies with potentially competing interests.”  Grey argues that the organisation was formed from conflicts and negotiations between very different cultures – military and civilian, dons and clerks.

So how did it work?  Grey describes the dense web of friendships between these individuals, including the shared backgrounds of many and informal recruitment through the universities.  While antithetical to modern notions of equal opportunity, some of the core activities were supported by high levels of trust and interconnectedness on a personal basis, something which Grey calls informal `micro-networks’.  This enabled, for instance, the competing heads of naval and army `huts’, who had been friends at university, to work out solutions to competing demands for rare resources – in this case, use of the analytic devices known as bombes.  Strong pre-existing personal and social connections helped to avoid institutional conflicts.  They also provided a cultural structure that overcame the incoherent organisational structure of Bletchley Park.  In this way, it most closely resembles more recent kinds of knowledge-intensive organisations in Silicon Valley.

Many accounts celebrate the eccentricities and amateurism of Bletchley Park – one source notes that a high-ranking foreign visitor was appalled at the (effective) indexing system housed in shoe boxes.  But Grey favours the notion of `organised anarchy’ – Bletchley Park was characterised both by efficient, rule-based standardisation of work (associated with formal bureaucracy), much of it routine, and by reliance on personal initiative, networks and discretion.  Not either, but both.  Similar arguments have been made for the craft (or art?) of medicine, where evidence-based guidelines do not substitute for professional judgement.

It is interesting that Bletchley Park is recognised as more successful than its equivalents in Germany or the US.  This is partly because it brought together for the first time the separate functions of interception, cryptanalysis and intelligence, creating a new kind of organisation.  But there were also different ways of working – mobilising large numbers of staff on a temporary basis for particular projects – enabling a degree of flexibility and innovation which the more established military and administrative structures in other countries may have inhibited.  Something of the spirit of the Olympic Games Makers, perhaps.

Why does it matter?  This thought-provoking study brings academic rigour – with Grey’s broad hinterland of organisational theory – together with a narrative of a time and place which still fascinates us.  It makes me think about some high-performing hospitals, with tight networks of semi-autonomous specialist coteries and `light touch’ general management.  Grey’s insight is that Bletchley Park’s success may have come because of its organisational chaos and porosity and the tensions between diverse units – not despite them.

But it is also his methods and storytelling which excite.  Could we use this power of analysis to learn more about NHS organisations and our recent history?  Could we, for instance, start to decode the success of NICE as a unique British institution?  What particular confluences led to the creation of this new institution in 1999 – including the momentum of evidence-based medicine, the Child B case and other headlines on the postcode lottery – generating the need for political distance and a process to manage the demand for expensive new drugs and treatments?  How much did the continuity and traits of the `three at the top’ (Mike Rawlins, Andrew Dillon, Peter Littlejohns) contribute to its longevity?  What are the tensions between the rational enterprise of evidence-based decision-making and the competing interests of different parties (industry, patients, clinicians and politicians), and how were these played out in some of the big stories (Tamiflu, or drugs for kidney cancer or Alzheimer’s disease)?  There are rich seams of structure, agency, culture and political process to mine here.

An aside – I found out that Christopher Grey shares my enthusiasm for the under-rated and deeply unfashionable novels of C P Snow.  Snow was himself a key figure in recruitment at Bletchley Park, having moved from scientist to a war-time role directing scientific recruitment at the Ministry of Labour.  I can’t think of a writer who describes better the emotional intensity of working life – from political intrigue (Corridors of Power, The Masters) to scientific fraud (The Affair).  Our anorak passion is shared by Muir Gray, I found in a recent Twitter exchange, who insists his registrars read Snow’s novels to understand how policy and organisations work.  Time for a revival?

Tuesday, 21 August 2012

What does a good hospital look like?

The biggest question in health care at the moment is how to reduce costs without compromising quality.  But we still know very little about the relationship between inputs or costs and outcomes for patients.  There is some research – largely qualitative – on the characteristics of high-performing hospitals.  And it is mainly hospitals, rather than other kinds of provider organisations, which are studied.  But even this literature is characterised by a circularity of argument – good facilities attract good staff who treat patients well…  And is it true?
I was thinking about this when reading a paper by Veena Raleigh and colleagues looking at patient-reported experience across services http://tinyurl.com/dxqstho. They found that some hospitals consistently performed better than others.  Their analysis showed that foundation and teaching status and the proportion of white inpatients were positively associated with high patient ratings – deprivation, not so much.  In other words, some hospitals in tough places still managed to perform consistently well on patient measures such as dignity, respect and cleanliness.
What else do we know?  Jha and colleagues, in large quantitative analyses in the US, looked at the relationship between efficiency, structural characteristics such as nursing levels, and outcomes http://content.healthaffairs.org/content/28/3/897.abstract.  They found that low-cost hospitals performed worse on quality indicators and that their patients were less satisfied.  So much, so predictable perhaps – but worth stating in a climate where payers may look only at the bottom line and seek out low-cost alternatives.  A more recent study by Stukel and colleagues in Canada http://jama.jamanetwork.com/article.aspx?articleid=1105068, confirming these findings, provides further insight into why hospital costs may be related to patient outcomes.  Their careful work showed, for instance, that patients in high-spending hospitals received a higher intensity of nursing care and more visits from specialists.
There is also an interesting debate about performance by specialty versus whole institution – Dr Foster and all, take note.  This is a large and contested area, but research by Shwartz and colleagues in the US http://mcr.sagepub.com/content/68/3/290.abstract suggests that hospitals which performed well on a composite measure were often not in the top quintile for individual measures.  That is, there may be pockets of excellence from clinical teams or specialties in otherwise poor-performing trusts.

There does not appear to be a simple blueprint for successful organisations.  For instance, we know that larger is not always better – an evidence review by Rod Sheaff some time back found no consistent relationship between the size and performance of an organisation, over and above the relationship between volume and quality for specialist procedures http://www.netscc.ac.uk/hsdr/projdetails.php?ref=08-1318-055.  The review similarly found no consistent or strong relationship between performance and other factors such as leadership style or economic environment.
So where do we go from here?  Veena Raleigh’s work suggests system-level determinants of good patient experience – and that some trusts can deliver against the odds.   We know something – largely from North America – about the relationship between organisational input and performance.  But, given the importance of the question, the paucity of high quality research in this area is striking http://www.ncbi.nlm.nih.gov/pubmed/22871420.   It would be good to see ambitious studies in the UK which tackled these big questions.   What does good look like – and do we get what we pay for?

Wednesday, 4 July 2012

So where are the doctors? (...in patient safety research)

Bob Wachter, a leading US clinical researcher and leader (of `hospitalist’ fame), over here on sabbatical last year, mentioned in passing his personal roll-call of influential figures on patient safety research from this side of the water.  Jim Reason, Charles Vincent, Mary Dixon-Woods… all social scientists.  Where were the doctors?  In the US, the leading lights combine research and clinical leadership – Atul Gawande, Peter Pronovost, David Bates, Lucian Leape, Don Berwick.
A few exceptions come to mind – Liam Donaldson, responsible for setting the agenda at a national (and international) level; Tony Avery (GP) and his work on prescribing errors; Peter McCulloch (surgeon) and his crowd-pleasing studies on operating theatres using Formula One handover techniques.  And other professional groups have their research luminaries – particularly pharmacy, thinking of the work of Nick Barber and Bryony Dean Franklin, from medication safety in care homes to evaluation of electronic prescribing.  Nurse leaders have been prominent in safety campaigns and initiatives (for instance around infection control) – perhaps less so in research and in setting the framework for debate.
But the absence of prominent medics as patient safety researchers and thinkers is puzzling.  This may be part of a broader issue – few trust chief executives in this country have a clinical background.  In the US, Goodall’s work http://www.ncbi.nlm.nih.gov/pubmed/21802184 showed a positive association between high-performing healthcare facilities and leadership by a physician.  My quick googling of chief executives of high-performing trusts (QUEST) on quality/safety markers shows none with an obvious medical background, and only one from nursing.  There’s a whole other debate around medical leadership and the interesting hybrid medical/manager role, from the likes of Peter Spurgeon and Chris Ham.
Does it matter?  Flip it another way and you could cite patient safety research as an example of social scientists leading the way – from Jim Reason’s analysis of latent threats and system weaknesses, to Charles Vincent and others (Sari, Hogan) measuring the rate of harm, to robust evaluations of complex safety interventions (Dixon-Woods, Benning).  It has been exciting to see other, newer disciplines outside health come to the fore – human factors (Rhona Flin), design and ergonomics (Peter Buckle).  Plus the important contribution of researchers with an understanding of organisational culture and sense-making.  I particularly like the paper by Graham Currie and Justin Waring, whose observational study http://tinyurl.com/dy4weak of hospital incident reporting systems showed how doctors determined what counted as safety incidents – for instance, dismissing non-sterilisation of instruments as an issue.  In this way, we know that top-down safety initiatives which overlook issues of professional and institutional cultures and hierarchies (pace Mintzberg) are doomed to failure.
So there is a good foundation for patient safety research in this country, driven by social scientists http://tinyurl.com/d24xx38.  But Atul Gawande’s great insights http://gawande.com/complications into medical and surgical practice show so elegantly the dilemmas of doctors trained for a world that no longer exists.  Today’s clinicians need teamworking and communication skills (and checklists) to navigate complex healthcare systems – and an understanding of how those systems work.  This kind of insight comes from the inside out.  So where are the UK’s Atul Gawandes who will shape the patient safety debates of the future?