Amidst all the hype surrounding generative AI, it can be easy to lose sight of how—and whether—consumers are actually using it in their daily lives.
Luckily, CR has an all-star team of researchers who design surveys so that we can keep tabs on Americans’ preferences. We recently conducted two nationally representative multi-mode surveys: one focused on how consumers use generative AI chatbots, and another focused on consumers’ attitudes about how chatbots use consumers’ health data.
How Americans use AI chatbots
Some of the findings didn’t come as a total surprise. For example, our August survey of 2,062 US adults found that around 70% of Americans hadn’t used a chatbot in the last three months. But some folks had, and ChatGPT proved to be the most widely used tool, with 19% of Americans reporting that they’d used it in the past three months, followed by Bing AI at 6%.
The most common activities among Americans who used chatbots were asking a question in lieu of a search engine and having the chatbot explain something. It’s worth noting that chatbots often produce incorrect information, so relying on them for research can backfire.
The reasons people gave for turning to chatbots revealed many consumers’ experimental spirit. Among Americans who had used a chatbot in the past three months, the most common reason offered (37%) was that they thought it would be fun. The next most common reasons were that they thought it would save time (36%) and that it would make the task easier or less stressful (35%).
Americans’ preferences on chatbots’ use of their health data
Consumer Reports also surveyed Americans about their preferences surrounding how companies with chatbots use their health data. CR does a lot of work to protect consumers’ privacy, from strengthening legislation to building apps that enable consumers to take control of their personal information, to investigating products that put consumers’ privacy at risk. When consumers interact with a chatbot and ask questions about health, those interactions become data for the company behind the bot. These interactions can be sensitive, revealing, for example, health concerns surrounding weight, or medical conditions that a consumer might not otherwise choose to share with a software company.
In a nationally representative survey of 2,070 US adults conducted in 2023, CR found that about one in five Americans had used a chatbot for some health-related activity or to discuss a health-related topic in the past six months. Those conversations included consumers looking up what a medical term means, looking up nutrition facts, looking up symptoms, learning about specific medical conditions, and more.
We also asked questions to get at consumers’ preferences surrounding what chatbot companies do with their health-related data. The most common response, from a little less than half of Americans, was that companies that operate chatbots should never store health information. About three in ten Americans said it was acceptable for companies to use consumers’ health information to train a program. Very few Americans (5%) said they thought it was acceptable for a company to sell consumers’ health-related information, or share it with organizations that would use it in a way that affects the consumer, such as targeted advertising.
These results suggest that consumers have strong preferences for privacy protections surrounding their health data, a preference that doesn’t always align with companies’ current practices.