Yesterday afternoon, while visiting a friend to help with a PC display issue (and, of course, to have a cuppa and a good chat), we ended up discussing research, results, and the ‘blindness’ of some researchers. Often they do not really want to investigate an area: they have a preconceived idea of the outcome they want, and so they construct their research, surveys and so on to deliver that result.
She supervises post-graduate students in the areas of social studies and entrepreneurship (amongst other odd titles). I am a consultant. Yet we see the same behaviour manifest itself in both academia and business. It’s what I call “I know what answers I want, so I will construct the survey to give me those answers”.
This is not only visible in the social sciences and business arenas; it appears elsewhere too. Wherever money, position, or reputation depends on the outcome, there will always be a temptation towards, or a ‘blind’ bias in favour of, obtaining the answers one wants.
Not so long ago I told a company’s senior management (I was doing a review there) that if I were checking them out as a supplier and saw their online survey, I would immediately realise they were collecting evidence only to support their claims of ‘doing well’ for their customers. The subject of the discussion was a ‘quick’ survey to be completed after they had fixed your PC or sorted out some system-use issue. There were many questions to answer, and the default answer for every one was the one that put the company in the best light. I got bored after filling in the first few answers.
Later I asked for my completed survey to be sent to me. And there the evidence was: my own selections on the first few questions, followed by the (default) positive answers filled in by the automated answering part. Those should have been left blank if they were looking for a real opinion.
Another, similar, survey was sent out by a supplier to the managers of operating units in a client company. I was there doing some consulting at the time, and heard the outcome. Glowingly positive results were reported. Then the internal IT manager and the supplier’s account executive went round on visits, and received a very nasty reception.
I was sitting quietly reviewing something I had written for them when the IT manager returned from the first round of visits. He and I went off for a cup of coffee, as he wanted someone independent to chat with. He began the story and looked shocked when I said, “and of course they began telling you how the same failures occur over and over, never ending”. He asked how I knew that, and I told him I had seen the questionnaire that morning and was doing the write-up on it for them — that was what I had been reviewing when he interrupted me.
He said the supplier had told him they had had a consultant draw up the questionnaire, so he wondered why it had landed him in such trouble that day. He just sat and laughed when I pointed out that the supplier had been responsible for briefing the consultant — and the brief may well have been to produce a survey that would prove they were doing well.
Possibly the most difficult part of investigating anything is finding the right questions to ask: the ones that will reveal what is actually there, not the ones that confirm a previously held belief or deliver an answer desired for what else it will bring.
Then again, some questions are designed to incriminate the answerer no matter how they respond. The classic one is “Tell me Mr xxx, when did you stop beating your wife?”