AI is everywhere: in our workplaces, homes, schools, art galleries, concert halls, and even neighborhood coffee shops. We can’t seem to escape it. Some hope it will unlock our full potential and usher in an era of creativity, prosperity, and peace. Others worry it will eventually replace us. While both outcomes are extreme, if you’ve ever used AI to conduct research with synthetic users, the idea of being “replaced” isn’t so wild.
For the past month, I’ve been beta-testing Crowdwave, an AI research tool that lets you build a survey, specify respondent segments, field the survey to synthetic respondents (AI-generated personas), and get results within minutes.
Sound too good to be true?
Here are the results from my initial test:
- 150 respondents in 3 niche segments (50 respondents each)
- 51 questions, including ten open-ended questions requiring short prose responses
- 1 hour to field the survey and generate both an AI executive summary and a full data set of individual responses for further analysis
The Tool is Brilliant
It took just one hour to gather data that traditional survey methods would take a month or more to collect, clean, and synthesize. Think of how much time you’ve spent waiting for survey results, checking interim data, and cleaning up messy responses. I certainly have, and it made me cry.
The qualitative responses were on-topic, useful, and featured enough quirks to seem somewhat human. I’m pretty sure that has never happened in the history of surveys. Typically, respondents skip open-ended questions or use them to air unrelated opinions.
Every respondent completed the entire survey! There was no need to look for respondents who sped through, chose the same option repeatedly, or abandoned the effort altogether. You no longer need to spend hours cleaning data and weeding out partial responses, hoping you’re left with enough to generate statistically significant findings.
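For readers who haven’t lived through that cleaning pass, here is roughly what it looks like in practice. This is a minimal pandas sketch, not Crowdwave’s method: the file name, the `duration_seconds` column, the `q`-prefixed rating columns, and all the cutoffs are hypothetical stand-ins for whatever your survey platform exports.

```python
import pandas as pd

# Hypothetical export; column names and thresholds are assumptions.
df = pd.read_csv("survey_responses.csv")

# Rating (e.g., Likert-scale) questions, assumed to be named q1, q2, ...
likert_cols = [c for c in df.columns if c.startswith("q")]

# 1. Speeders: finished implausibly fast
#    (here, under a third of the median completion time).
speed_cutoff = df["duration_seconds"].median() / 3
speeders = df["duration_seconds"] < speed_cutoff

# 2. Straight-liners: picked the same option on every rating question.
straight_liners = df[likert_cols].nunique(axis=1) == 1

# 3. Abandoners: skipped too many questions (here, over 20% missing).
abandoners = df[likert_cols].isna().mean(axis=1) > 0.20

clean = df[~(speeders | straight_liners | abandoners)]
print(f"Kept {len(clean)} of {len(df)} responses")
```

With human panels, a pass like this routinely discards a meaningful share of responses; with synthetic respondents, every row comes back complete, which is exactly the drudgery being spared.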
The Results are Dangerous
When I presented the results to my client, complete with caveats about AI’s limitations and the tool’s early-stage development, they did what any reasonable person would do – they started making decisions based on the survey results.
STOP!
As humans, we want to solve problems. In business, we are rewarded for solving problems. So, when we see something that looks like a solution, we jump at it.
However, strategic or financially significant decisions should never rely on a single data source. They are too complex, risky, and costly. And they definitely shouldn’t be made based on fake people’s answers to survey questions!
They’re Also Useful
Although the synthetic respondents’ data may not be true, it is probably directionally correct because it is based on millions, maybe billions, of data points. So, while you shouldn’t set prices based on data showing that 40% of your target consumers are willing to pay a 30%+ premium for your product, it’s reasonable to believe they may be willing to pay more.
The ability to field an absurdly long survey was also valuable. My client is not unusual in their desire to ask everything they may ever need to know for fear that they won’t have another chance to gather quantitative data (and budgets being what they are, they’re usually right). They often ignore warnings that long surveys lead to abandonment and declining response quality. With AI, we could ask all the questions and then identify the most critical ones for follow-up surveys sent to actual humans.
We Aren’t Being Replaced, We’re Being Spared
AI consumer research won’t replace humans. But it will spare us the drudgery of long surveys filled with useless questions, months of waiting for results, and weeks of data cleaning and analysis. It may just free us up to be creative and spend time with other humans. And that is brilliant.
AI could also be used to survey real respondents – which seems like a better option to me. Also, triangulation of methods would be needed, as you rightly noted…
AI is proving to be very helpful in all aspects of research—creating the survey/interview guide, summarizing results, and responding! It’s wild and, as you rightly point out, requires caution and double- and triple-checking to ensure the data is solid. Thanks for chiming in, Tatiana, always great to get your expert perspective!
Which tool did you test?
An excellent (and common) question! Since the tool is still in Beta, I’ve asked the company if I can share their name. Once I hear back, I’ll post the answer here. Stay tuned!
Great experiment, and I agree with your caution. I would also add that AI surveys may be a starting point if you intend to interview the general population. I would worry about the results from a survey intended for a highly technical or specialized field. AI has been known to make up things that sound real but are not.
Great caution, Heike! I’m definitely curious how the tool would fare in a highly technical or specialized field. Maybe I’ll run an experiment on that – any suggestions? – and report back.
Loved that you tested this for us and gave us a perspective on how to use the AI version to improve our human version!
Use the AI version to improve our human version – exactly! That is so well phrased, Victoria. Even though we are very nearly perfect, a bit of help, perspective, and experimentation never hurts. Better is always possible, and AI is a great tool to help us get there. Thanks for chiming in!