AI will likely take your research to the next level. But you must do your homework first.
Three big questions to ask yourself before getting fully on board with Artificial Intelligence.
I have been in a fortunate position to observe the rise of Artificial Intelligence (AI) in consumer research from a front-row seat. As the founder of an intelligent survey start-up who also works closely with our partners delivering customer insights, I have gained an appreciation for both AI’s amazing promise and its simultaneous blind spots. Without going into too many details, let me share a few observations and questions I believe every insights professional should be asking about AI before fully committing.
Where can AI help?
I am convinced that AI-powered tools are tremendously useful and represent the most important technological innovation in consumer research since online surveys. But when it comes to doing consumer research, using AI does not automatically guarantee good results.
Self-learning AI excels at finding subtle patterns that may not be obvious to the naked human eye or even to traditional statistical research methods. Today, a machine can learn quickly from very subtle variations in behavior and data. However, a machine still needs human researchers and data scientists to tell it what to learn. Most machine training is supervised, which means that before AI can start learning new patterns, a human must define what a good outcome looks like. This in itself is not a trivial task, and it is not too dissimilar from building an online survey: by defining which check boxes are available to respondents, we define the universe of outcomes that can be discovered.
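To make the survey analogy concrete, here is a minimal, hypothetical sketch of supervised learning: a toy word-counting classifier whose label set is fixed by a human before training, so it can never discover an outcome outside that predefined universe. The labels, phrases, and helper names here are all illustrative, not from any real tool.

```python
# Hypothetical sketch: a supervised model can only ever output the
# labels a human chose up front, just as a survey can only capture
# the check boxes its author included.
from collections import Counter

# Human-defined "good outcomes": fixed before any learning happens.
LABELS = {"satisfied", "unsatisfied"}

training_data = [
    ("love the taste", "satisfied"),
    ("too expensive", "unsatisfied"),
    ("great aroma", "satisfied"),
]

def train(data):
    """Count word/label co-occurrences (a toy supervised classifier)."""
    counts = {}
    for text, label in data:
        assert label in LABELS, "a human must define valid outcomes first"
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(model, text):
    """Predict one of the predefined labels -- never anything outside them."""
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "satisfied"

model = train(training_data)
print(predict(model, "great taste"))  # prints "satisfied"
```

However sophisticated the model, every prediction falls inside `LABELS`; enlarging the universe of outcomes requires a human to redefine the labels and retrain.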
Before committing to AI in a project, researchers should be careful to consider the training data sets that were used to calibrate its models. This is particularly important if the subject of the research project is ambiguous and the universe of answers is largely unknown, for example, when researching the tough ‘why, what and how’ questions. AI works best where a large amount of labeled data is available to learn from and the domain of the training set and the research topic are closely related. The further the training data set is from the research subject matter, the more careful researchers should be in interpreting the results.
One aspect of consumer research where I don’t expect AI to help much is asking the right research questions. Good research always starts with asking good questions, and the often-repeated cliché — junk in, junk out — holds very much true with Artificial Intelligence. AI will most likely generate a wealth of data in response to any question we ask it to analyze. For AI, quantity is never a problem. But no amount of information can make up for a biased or misleading question that should not have been asked in the first place. There is a lot of good AI research out there, and increasingly robust models are available to data scientists. However, this progress doesn’t absolve us from making sure that the structure of our research projects is fundamentally sound — whether or not they include AI.
Where are the biases hiding?
Biases are an inevitable part of doing consumer research. Just the fact that we conduct our research online subjects it to a known bias. Minimizing bias in research is important, but so is understanding where it comes from, so we can account for it when we build insights.
For a tech company like ours, it is critical that we are transparent about how we make technology design decisions that can introduce bias and impact the insights our platform delivers. For example, during the design process, we have to make trade-offs between probing into our respondents’ answers horizontally (for the breadth of their opinions) or vertically (going deeper on a particular answer). Imagine that a respondent tells us they like a certain brand of coffee because of its balance. Should our algorithm dig deeper into that answer, or should it explore other topics such as the coffee’s aroma, which our machine has already learned is also an important factor? Given the limited amount of time we have with each respondent, it is not practical to go both fully deep and fully wide. Hence, we make sure our customers understand the design decisions we make and how they may impact the resulting data.
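The breadth-versus-depth trade-off can be sketched as a small budgeting problem. This is a hypothetical illustration, not GroupSolver’s actual algorithm: with a fixed budget of follow-up questions, a design parameter (here `depth_weight`) decides how many probes dig into the respondent’s answer and how many explore other topics the machine has learned matter.

```python
# Hypothetical sketch of the breadth-vs-depth trade-off: a fixed
# follow-up budget is split between digging into the given answer
# (depth) and exploring other known-important topics (breadth).
def plan_probes(answer_topic, known_topics, budget, depth_weight=0.5):
    """Split a limited question budget between depth and breadth.
    depth_weight is a design decision the tool's builders make for the
    researcher -- exactly the kind of choice that should be transparent."""
    deep = round(budget * depth_weight)          # follow-ups on the given answer
    wide_topics = [t for t in known_topics if t != answer_topic]
    wide = min(budget - deep, len(wide_topics))  # one probe per remaining topic
    return {"deep_on": answer_topic, "deep_probes": deep,
            "explore": wide_topics[:wide]}

# With 4 follow-ups and an even split, two probes go deep on "balance"
# and two explore other learned topics.
plan = plan_probes("balance", ["balance", "aroma", "price", "origin"], budget=4)
print(plan)  # prints {'deep_on': 'balance', 'deep_probes': 2, 'explore': ['aroma', 'price']}
```

Changing `depth_weight` shifts the resulting data toward depth on one theme or coverage across many, which is why the parameter choice itself can introduce bias into the insights.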
There are other biases. I already talked about the training sets, which are chosen by human data scientists for reasons that are not always self-evident or transparent to the end-user. The same goes for choices of models that run on those training sets. AI tools can thus become black boxes with seven locks and fiery dragons guarding them (we call that IP protection). As with any new technology, it will take time for the research community to learn about AI’s hidden nooks and crannies. When online surveys first emerged, it took time, lots of data, and a healthy level of skepticism from the community before they gained a foothold as a valid and well-understood research methodology. Artificial Intelligence is going through this same process now.
How does AI impact humans providing the responses?
One specific bias worth paying special attention to is how our human research participants perceive the machine with which they interact. Some AI tools only touch information after it has been generated by humans, but what about other tools such as chatbots or virtual discussion moderators? What do we know about the impact on respondents as they answer questions posed by a machine rather than a human? How do their answers compare to those provided in the context of a more ‘sterile’ online survey? We humans have a rather nascent and evolving relationship with machines as we get used to a reality in which AI is increasingly part of our daily routine. Think about the times when you let Google finish typing your search question or let Pandora suggest the next song on your music list. Employing predictive analytics and machine learning in our consumer research should give us pause to consider whether the context in which our respondents answer research questions is affected by who (or what) respondents think is on the other end of the chat.
Rasto Ivanic, CEO of GroupSolver, Inc.