Every year since 2007, the NHS in England has asked patients what they think about their GP practice in a large national survey. The survey findings are intended to inform patients, healthcare professionals and planners about patients’ experience of the care provided by individual practices in England.
This Highlight shares insights obtained from research using this general practice survey data. We share findings about what patients really think about their care, how this varies for different patient groups and how practices can act on patient feedback.
800,000+ patients responded to the 2016 GP Patient Survey
7,674 GP practices in England take part in the GP Patient Survey
2.1 million patients are sent the GP Patient Survey each year
Evidence at a glance on the GP Patient Survey
The English GP Patient Survey is the largest and most reliable source of information for practices about how patients view their experience. The survey produces detailed data that help practices to identify and target areas which might result in improved patient experience. But practices vary in how, and how much, they use the survey data to address patient perceptions. There is scope for greater use of the data.
What did the research programme find?
- Patients found questionnaires to be limited tools for expressing concerns about consultations and were reluctant to criticise their GPs. There was a tendency for patients to choose positive options in questionnaires. This contrasts with the more critical judgements patients made of GPs’ skills when they reviewed filmed consultations. Patient scores in the survey may therefore present an over-optimistic view of GPs’ care. However, this optimism likely applies to all practices, so whether a practice scores above or below the national average – or other practices – is likely to be robust.
- Minority ethnic groups provide consistently low scores in the GP Patient Survey. When research focused on South Asian respondents, there was no evidence that these patients used the rating scores differently from White British patients. A video-based test of perceptions of consultations suggested that low scores among South Asian patients reflected care that is genuinely worse than that received by White British patients, rather than simply reflecting differences in prior expectations of care.
- GP practice staff expressed concerns about the validity and reliability of the patient survey data and about how representative the survey respondents were. Many practices are sceptical about the value of the survey for improving the services they provide to patients.
Questions for professionals and the public
For GPs and practice teams
- Do you share and discuss the GP Patient Survey data for your practice?
- Looking at the most recent GP patient survey data and given the pressures that you work under, what is the one thing you could change that might improve patient experience in your practice?
- The patient survey data suggests that minority ethnic patients receive poorer consultations than patients in the general population. How can you examine this in your team and how might you address this? What provisions do you need to make to improve the consultations of minority ethnic patients, especially older female patients?
For Clinical Commissioning Groups
- How do you support the awareness and use of the GPPS data in the practices within the CCG? Can you draw on expertise or support from your Commissioning Support Unit for this?
For Patient Participation Groups
- Do you share and discuss the GP patient survey data for your practice?
- Do you know how your practice compares with the national average and with other local practices?
- Are you aware of changes that your practice has made on the basis of the GP patient survey data?
- Do you feel that you help the practice maximise the value of all the data that is derived from the survey? If not, what needs to change?
- What do you think should happen in your practice as indicated by the survey data?
The GP Patient Survey has been run each year since 2007. It provides the English NHS with a consistent and comprehensive set of data about patients’ experience of using their GP services.
The survey is designed and managed in a collaboration between NHS England and Ipsos MORI, a leading market research agency, with academic input from the Universities of Cambridge and Exeter. The survey is rigorously sampled to ensure that patient views are gathered from among people registered in all English GP practices and carefully validated to ensure that the data are reliable and can be used as the basis for change in GP practices.
Each year the survey is sent to about 2.1 million patients. The survey is available in British Sign Language and 15 other languages. Around 800,000 patients respond each year, either by post or online. The survey has questions on: use of practice services, experience of appointments, access and waiting time, evaluation of the last interaction with a practice doctor and/or nurse, satisfaction with opening hours and overall experience satisfaction level. The survey gathers a large amount of individual patient data including health-related quality of life measures and information about the respondent’s current use of, and satisfaction with, other healthcare services, including out-of-hours and dentistry. Demographic data by which the results can be filtered include ethnicity, working status, proximity of home to work, parental and other caring responsibilities, disability, smoking history, sexual orientation and religion.
The survey data are reported at national and practice level and are freely available and searchable online. There is no standardised requirement for how GP practices use the survey data and each practice can plan to review the survey findings as they see fit. Survey findings form an integral part of the NHS Outcomes Framework. The survey data is a key element of the package of data that describes the practice when the Care Quality Commission is preparing to inspect a GP practice. The ratings of the practice no longer form part of the calculation of Quality and Outcomes Framework payments to the practice and no survey data is generated at an individual doctor or nurse level.
One element of the IMPROVE study compared patient responses on the questionnaire with their actual experience of a consultation with a GP.
The researchers wanted to know if the behaviours that patients comment on (in the section about their last GP consultation) can be accurately assessed by the questionnaire. Underpinning this was a recognition that while the section of the GP Patient Survey asks about the patient’s most recent consultation using six questions, the patient’s choice of answers can be influenced by many factors. These include the pre-existing relationship with the GP (if any), the relationship with the wider practice and the outcome of the consultation.
A study was conducted in two geographical areas of England in practices with lower than average GP patient communication scores in the most recent GP survey. A sample of 529 patients agreed to have their consultation with a GP video-recorded. Just after the consultation the patients completed a short questionnaire evaluating aspects of the GP’s communication, using wording and rating scales similar to the GP consultation section of the national GP survey questionnaire.
The researchers also asked four experienced, trained clinical raters to each review 56 of these video-recorded consultations using an internationally recognised rating instrument, which gives an overall score of between 0 and 10 for the communication quality of the consultation. The research team asked each rater to score each consultation and a mean score was calculated for all 56. Each of the 56 consultations was also rated by the GP who had carried out the consultation (a total of 37 GPs) and the patient.
There was weak evidence of an association between the patient questionnaire scores and the scores of the trained raters. When trained raters assessed communication in a consultation to be of a high standard, patients tended to do the same. But when the trained raters judged communication in a consultation to be of a poor standard, patients reported communication as anything from poor to very good. While trained raters and patients tended to agree on what good communication looks like in a consultation, clinical raters were more likely than patients to judge communication as poor.
The research team suggests that these differences may be due to a wide range of factors that inhibit some patients from assigning poor scores to consultations. They noted that earlier qualitative research suggested that patients struggle to criticise doctors’ performance in surveys, and found that the rating of videoed consultations supports the view that patients may be inhibited from criticising doctors’ performance. They believe that patient surveys as currently used may be limited in their usefulness for feeding back views about consultations. They caution that a high patient mean rating of communication with GPs should not necessarily be assumed to indicate that all is well.
When it comes to GPs learning from the GP Patient Survey, the data may get picked up by a few individuals – perhaps a Continuing Professional Development programme director who might base an in-house training session on it, or sometimes the Royal College of General Practitioners local faculty may put on a workshop. If you want to make things happen at the grassroots, you need an educational lead to put on an activity that is interesting and attractive enough for GPs to attend. With a survey like this and the feedback from it, the key thing is to make it possible for GPs to see that it is worthwhile to spend time on it and that it can help improve their provision of care somehow.
Dr Richard Weaver, Director of Primary Care & GP Education & Head of School, Health Education England, Wessex
The IMPROVE team carried out detailed analysis on 2009 GP Patient Survey data to examine the variations between patients in different ethnic groups. This showed that certain patient groups reported more negative experiences of care than respondents overall. These were minority ethnic patients (particularly those from Chinese and South Asian backgrounds), patients with poor self-reported health and younger patients. The research team saw variations in the survey scores for doctor-patient communication in these groups as a particular concern because this is such an important driver for overall satisfaction with care.
To explore this further, the researchers then did further analysis of the data from the 2012–13 and 2013–14 survey years to see where the largest gaps in patient experience lie. A measure of GP-patient communication was built up from the five items that patients scored in that section of the questionnaire. They found strong evidence that the effect of ethnicity on reported GP-patient communication varied by both age and gender. In particular, older, female, Bangladeshi respondents reported significantly poorer experiences of communication than White British patients of the same age and gender, within the same practice. Chinese patients responding to the survey also reported more negative experiences of care compared with White British counterparts, this time across all age groups. Finally, the researchers also identified that younger (aged under 55) ‘any other white’ patients experienced disparities in their reported consultation experience, compared with White British patients.
The researchers were careful to consider the part that language proficiency might play in these results. They realised that the experience of some patients with poor language skills might not be captured at all, as they just wouldn’t complete the survey – even though the questionnaire is available in numerous languages. So, if language proficiency plays a part in how likely people are to respond to the survey, the survey may underestimate the consultation difficulties experienced by certain minority ethnic groups. The researchers encourage GPs to consider patients’ language challenges as a step in improving consultation experience.
I’m not altogether surprised that some Asian patients reported dissatisfaction with their GP consultations. Some of the older Indian and Pakistani doctors can have a rather old-fashioned style of talking to patients. I think patients’ language skills must play some part in these results, too, as some of the first generation immigrants don’t have the language proficiency of their children and grandchildren. That is hard as you get older and need to use the NHS more. With the GPPS, we need to make sure we know what the data is saying and respond to the results.
In my Patient Participation Group, we tend to use much of our time supporting the practice to make practical changes: make the lift more reliable, increase the phone lines, improve the training of front desk staff. But we also discuss the consultation data with our practice manager and one of the GPs, and they take the development points forward.
Moinuddin Kolia, Community Pharmacist and Chair of the Patient Participation Group in a central Leicester practice
Further work followed up the disparities in the experience of South Asian survey respondents. To be sure that these data were giving a real account of the experience of such patients, the researchers carried out an experiment to test whether South Asian patients either report poorer care because they get lower quality care or receive similar care but rate it more negatively. A well-established way of doing this is to ask respondents to watch and rate standardised clinical scenarios or ‘vignettes’.
Videos of simulated GP-patient consultations were shown to two groups of people (Pakistani and White British) who were asked to rate the quality of the communication in each consultation on the five dimensions for consultations in the actual GP Patient Survey. Three key issues were built into the design of the experiment: the symptom the patient came to see the doctor about (there were four different ones); the quality of the communication (poor versus good); and the ethnic backgrounds of the ‘doctor’ and the ‘patient’ – either South Asian or White British.
The vignette exercise was conducted in English as it was not possible to produce equivalent vignettes in community languages. This approach was consistent with the fact that 99.8% of responses to the GP Patient Survey are in English.
Equal numbers of White British and Pakistani patients with slightly differing age and gender profiles took part in this experiment. Respondents from a Pakistani background rated communication in the simulated GP consultations significantly more positively than their White British counterparts. These differences were in the opposite direction to those seen in the national GP Patient Survey, where Pakistani respondents give significantly lower scores for communication than White British patients. What this suggests is that not only are there differences in the real-life GP consultation experiences of White British and South Asian patients but that these differences are even greater than previously reported via the GP Patient Survey. The researchers suggest that Pakistani patients experience genuinely worse GP-patient communication and that practices should be encouraged to take these factors into account when considering the issues involved in caring for a diverse patient population.
The GP Patient Survey data is published online each year and any reader of the website can see both the national data and the data derived from individual practices. While the GP Patient Survey data for any given practice is used as part of its CQC inspection record, there is no standard requirement for practices to review that data or act upon them. As a result there are wide variations in the use of the data and the value that can be derived from them.
The IMPROVE research team conducted a piece of qualitative research to look at the role that patient feedback is seen to play in both assessing and improving standards of care. They surveyed patients from 25 practices, asking them about their experience with the practice using questions based on those in the GP Patient Survey. The findings were fed back to the practice staff. Focus groups were then run in 14 of these practices, involving GPs, practice managers, nurses, receptionists, administrators/secretaries and other staff such as dispensers and healthcare assistants. The focus groups explored practice staff’s attitudes towards patient surveys (such as the local one conducted by the research team, and the national GP Patient Survey) and their potential to improve experience of the practice generally, and GP consultations in particular. During the group discussions, the research team noted that organisational response to patient experience surveys was dominated by GPs and practice managers with far less input from receptionists and administrative staff.
Overall, practice staff found it hard to trust many surveys to reflect ‘reality’, even though they expressed interest in, and engagement with, the findings. Practice teams mistrusted survey methods. They particularly criticised the design and administration of the GP Patient Survey and expressed concern about its representativeness, reliability, sample size, bias and political ends. Two major concerns about the GP Patient Survey were that, firstly, it samples all the patients registered with a practice rather than its most recent or most frequent service users and secondly, that the survey generates data only at practice, and not individual practitioner level.
However, practice teams noted that the GP Patient Survey process was well-established and could prove useful in comparing their practice with national average data, as well as identifying potential changes to be made in the practice. However, there were wide variations among practices in how patient survey data were shared, analysed and used.
If you collect data, you’ve got to use it. I review the GPPS data to see how we are positioned in the local health economy, to spot areas where we are deficient and to see if we can analyse why the patients are reporting problems in specific areas.
We try to identify the ‘hotspots’ to ensure we understand where the main problems are and either allocate more resources to that or try to encourage patient expectations to be more realistic – then the patient participation group plays a key role in communicating with our patients.
Joseph Todd, Practice Manager, Westlands Medical Centre, Portchester
Practice staff discussed the part that patients themselves might play in the smooth running of the practice and the scope for their involvement through patient participation groups.
Several managers remarked that it was very hard to tackle individual doctors’ performance on the basis of the survey data, even when individual doctor data are available. (Doctor-level data had been specially collected and made available confidentially for the focus group work; the GP Patient Survey does not collect data at this level.)
We see the GPPS data as being most informative in relation to practical points such as access times, prescription systems, online booking, that kind of thing.
Dr Craig Kyte, General Practitioner, Salisbury
Overall, staff in many practices felt that there was little external support for making changes in response to the patient feedback. The research team suggests that staff in English General Practice broadly view the role of patient feedback as one of quality assurance while its function in quality improvement seemed much less certain.
The research team concludes that the overall value of patient feedback from surveys is undermined by a combination of variable attitudes to the credibility of the data and challenges for practice staff in bringing about meaningful changes. They suggest that the expectation that survey feedback alone will stimulate major changes in care is unrealistic; most commonly, when change does happen, survey findings are only one of the spurs to action to address a problem that had already been acknowledged.
The perpetual problem from the GP point of view is that the survey samples the whole adult practice population (though not the children) rather than those patients who most depend on the practice, and for whom we design our services. And yet we know that the GPPS does produce useful comparable data, and I am aware that some practices make good use of it, especially where there are enthusiasts for change. On my patch, some patient groups have also discussed the data together and take important issues back to their practices for a plan of action.
Dr Guy Watkins, Chief Executive, Cambridgeshire Local Medical Committee
This Highlight draws on four journal publications derived from the findings of the following NIHR research study: IMPROVE - Improving patient experience in Primary Care: a multi-method programme of research on the measurement and improvement of patient experience.
Burt J, Abel G, Elmore N, Newbould J, Davey A, Llanwarne N, Maramba I, Paddison C, Benson J, Silverman J, Elliott M.N, Campbell J, Roland M. Rating communication in GP consultations: the association between ratings made by patients and trained clinical raters. Medical Care Research and Review. 2016
Burt, J., Lloyd, C., Campbell, J., Roland, M., & Abel, G. Variations in GP–patient communication by ethnicity, age, and gender: evidence from a national primary care patient survey. Br J Gen Pract. 2016; 66(642), e47-e52.
Burt, J., Abel, G., Elmore, N., Lloyd, C., Benson, J., Sarson, L., & Roland, M. Understanding negative feedback from South Asian patients: an experimental vignette study. BMJ Open. 2016; 6(9), e011256.
Boiko, O., Campbell, J. L., Elmore, N., Davey, A. F., Roland, M., & Burt, J. The role of patient experience surveys in quality assurance and improvement: a focus group study in English general practice. Health Expectations 2014; 18.
NHS England’s publication website sets out the survey methods and shows the data generated by the GP Patient Survey, with analysis tools for comparisons between GP practices https://gp-patient.co.uk/
NHS England’s statistics pages show more detail on the methodology of the survey and the ways in which the data are used https://gp-patient.co.uk/
- Extended hours in primary care linked to reductions in minor A&E attendances
- Continuity in primary care may be linked to reduced unscheduled hospital care
- Shared decision-making in primary care can reduce antibiotic prescribing
Research in progress
Other NIHR studies are under way on other aspects of measuring and improving patient experience:
An evaluation of a real-time survey for improving patients’ experiences of the relational aspects of care https://www.journalslibrary.nihr.ac.uk/programmes/hsdr/130739/#/
Several projects examining aspects of gathering and using patient experience data in hospitals available here
Ours is an inner city practice in a somewhat deprived area of Nottingham. There is a senior partner and four other part-time GPs.
The patients are very varied in their social and ethnic backgrounds and their needs. There are lots of people with lots of problems so I guess you could call it a difficult practice.
Like many Patient Participation Groups, ours couldn’t really be described as representative of the practice’s patient population. Most people who are involved are retired or not working so can meet during the day and we are lucky to have one young person who has been able to join us.
What issues do we tackle? There are some perennials: access time to make an appointment, length of time to wait for an appointment with the patient’s choice of GP and so on. And we have set up ways to act positively to support patients – for example for people with long term conditions the practice follows up their hospital appointments, reminding patients to keep their appointments, that kind of thing.
Every year we receive the GPPS data when it is published and it is routine that we examine and discuss the results. The practice manager comes to all of our meetings and the senior partner comes to the beginning of the meeting, genuinely involved in the discussion.
Last year our practice manager did a good comparison of our practice data with other local practices and the national average benchmarks. Mostly we seem to come out better than other practices. Usually the main use we make of the data is seeing if any change we have made in the practice has made patient perceptions better – for example the extra hour of phone cover in the morning has improved the ‘access to appointment’ score.
We also openly discuss the data evaluating the patients’ most recent doctor or nurse appointments. What usually shows up is that satisfaction with practice staff is very high, particularly the levels of satisfaction with the consultations. It’s just the waiting that is the issue.
We’ve only ever bothered to do one local opinion survey. We just gathered as many views as we could on a single day in the practice. In the end we got all the same indications as we got from the GPPS data so it doesn’t really seem necessary to repeat the exercise.
I can honestly say that the surgery is genuinely committed to meeting the communication needs of its patients and consciously only employs people who are empathetic and care about the patients. We do know that it’s not the same in every practice - each practice has its own ethos - but that’s how we are here.
Blog - Innovation in Dudley: Enabling practice to improve and change
Here at the Strategy Unit we have been working with GP practices in Dudley for the last three years to support service improvement in Primary Care. As part of this programme (EPIC), the team promotes the use of the GP Patient Survey data to practices to better understand patient experience, to support service improvements and to assess the impact of their improvements.
The GP Patient Survey data was identified as relevant to the practices because it reflects the experience of the practice’s own patients and covers a range of issues that matter to patients. It was felt that the GP Patient Survey data is the highest quality, most objective data source that practices have access to, and provides an ability to track service improvements with patient experience as data is updated each year.
EPIC general practices were already aware of the importance of incorporating patient experience to improve service quality but weren’t previously making active use of the GP Patient Survey data. The EPIC programme has supported them to do that as part of an approach to service improvement and change which addresses patient experience, staff experience, and productivity.
Involvement of Patient Participation Groups in EPIC discussions has led practice staff to be mindful that other sources of data are also available to the practice: these include processes for patient feedback after appointments, the Friends & Family Test and NHS Choices comments, all of which can support more specific or rapid assessments of patient experience.
In the EPIC programme, the GP Patient Survey data is combined with other practice data collected on avoidable appointments, prescription processes, document management and measures of staff experience, both pre- and post-service improvement. Together the data help the practice to identify and measure the changes that would contribute most to achieving improvements.
Further support allows practice teams to agree on, invest in, and follow through, changes in the practice to benefit patients and staff. All the practices we work with now recognise the benefits of using data to highlight ‘what are we doing well’ and ‘what are we not doing so well’ before designing improvements.
The practices examining their GP Patient Survey data felt the findings reflected what they also saw as their strengths and weaknesses and found it reassuring that these were the areas they were addressing.
For one practice, the GP Patient Survey data confirmed what they thought: that work was needed on improving waiting times. What they did not anticipate was how clearly the survey showed the satisfaction levels for time with clinicians. It also highlighted that 50% of respondents were not aware of the online options for accessing GP services. As a result the practice is now promoting the online service and encouraging uptake, to make access easier for patients. The practice manager is “keen to see next year’s results and see if there will be a noticeable change in patient experience and if more people are completing the survey.”
The current practices are eagerly awaiting the results of the 2017 GP Patient Survey to see if the improvements they are making have had a measurable impact.
Authors: Mahmoda Begum and Abeda Mulla. Strategy Unit, Midlands and Lancashire Commissioning Support Unit
At Westlands we use the GPPS data as part of our overall commitment to the quality of what we do. When the data is published we discuss it at our partners’ meetings, where we can talk with the senior nurses and the practice manager.
We have to look at it critically and decide how much of what it tells us is within our control. For example waiting times – we consider whether that is something we can influence by re-jigging the appointment system or opening up appointments at different times.
We also look carefully at the information about the patients’ satisfaction with nurse and GP consultations and work out what we can do to improve those scores. It’s always a careful balancing act really, trying to appease patients and make sure they get the best experience possible but also to try and run the service and make sure you’re not trying to improve things in one area to the detriment of another.
Last year the Clinical Commissioning Group gave us comparative data and encouraged us to see how we compared with other local surgeries. We were keen to make sure that we didn’t have any ‘outlying’ data that needed attention and at the same time looked for positives that we could feed back to particular staff groups – for example, the very high ratings that our practice nurses get. We know that when we are next inspected by the CQC they will expect us to be familiar with the GPPS data and to have acted upon it.
There are other sources of patient feedback, of course. We keep a close eye on the comments our patients leave on the ‘reviews and ratings’ page of the NHS Choices website and respond to them. The ‘friends and family test’ data is used for CQC inspection records. We also have a feedback book on reception – the comments that patients leave for us are constantly reviewed and acted upon. We just try to make sure that there is a clear loop of communication between the patients, the clinicians and the management team.
I know that some people are sceptical about the GPPS because the data is at the practice level and not about individual doctors and nurses. But we do separate questionnaires about our consultations as part of our revalidation every five years, so we just focus on using the GPPS information for sensible, practical quality improvements.
Author: Dr Chris Castle. General Practitioner, Westlands Medical Centre, Portchester
Blog - The team statistician: "I'm paid to be sceptical"
As the statistician on the IMPROVE study team, it was my job to approach data with scepticism. I had to think constantly about how and why the data we had generated, and more importantly the national GP Patient Survey data, might mislead our readers and how we could best examine the data to shed light on it. I had to be sure we were conveying the right ideas about ways to improve patient experience.
Whilst it has its problems, overall I’m reassured about the data at the heart of the IMPROVE study, the GPPS data generated by respondents to the annual survey. I really want people to realise just how robust the data is.
Ipsos MORI, who run the survey on behalf of NHS England, put a lot of effort into coming up with a good sample and making the data fair and representative. We know that many people who read the GPPS results for their practice are sceptical about them. They say “the survey isn’t accurate, it’s biased because it only gets answered by people who have got an axe to grind.”
In fact, the data suggest that as far as doctor and nurse consultations are concerned, people are more positive in the GPPS questionnaire than they should be because, as we established, they are reluctant to criticise their GPs. I’d like practices to study the data and to think: “well, OK, our consultation ratings are 85% and that looks great, but maybe that is overstated and so there is room for improvement.” And I’d really like practices that serve predominantly South Asian populations to consider what we found out about the experience of South Asian patients in comparison with their White British counterparts and to think about what could be done to address that in their own practice.
So, if you work in a practice, don’t expend a lot of effort doing your own local surveys because you doubt the value of the GPPS data. The GPPS data gives you good insight into how you are performing compared to your peers and it is rich – dig deeper into it!
Author: Dr Gary Abel. IMPROVE Research Project Statistician
Produced by the University of Southampton on behalf of NIHR through the NIHR Dissemination Centre