
Championing Responsible AI for Kids and Families

Our new ratings and reviews of AI products will surface both opportunities and harms in the industry.


Artificial intelligence is predicted to be one of the most disruptive changes we will experience in our lifetimes. While these powerful technologies are not new, AI is evolving at an unprecedented pace, without sufficient guardrails to protect human rights or democracy.

At the same time, the public lacks understanding of the different types of AI, how they work, their benefits, and their pitfalls—particularly for kids and families. AI's "view" of the world can shape impressionable minds and their sense of what is "good" or "normal," with little accountability. What happens to our children when they are exposed to the worldview of a biased algorithm repeatedly and over time? What view of the world will they assume is correct, and how will this inform their interactions with real people and society?

One of the best ways for parents and educators to help is to increase their own understanding of how AI tools work. At Common Sense, we've been using our independent, research-backed expertise to help families and educators make informed choices about the media and technology in kids' lives for 20 years. That means we have a responsibility to carefully rate and review AI and other emerging technologies that will become integral in the digital lives of kids and families. And today we've launched the first 10 reviews using our new AI ratings and reviews system.

Our reviews are designed to distill extremely complex concepts and data into a straightforward "nutrition" label for each AI product, measuring each one against a set of AI principles that highlight opportunities and identify potential harms.

Conducted by a team of experts, these first 10 reviews cover a range of the most popular apps for learning, information, and creativity. There are lots of different types of AI products out there, so we bucket our product reviews into three categories:

  • Multi-Use: Designed for use in many different ways, this category covers products like generative AI (including chatbots and tools that create images from text prompts), translation tools, and computer vision models. Generative AI chatbots ChatGPT and Bard and text-to-image tools DALL-E and Stable Diffusion are in this category.

  • Applied Use: These products are built for a specific purpose, but they aren't specifically designed for kids or education. Examples include the automated recommendations in your favorite streaming app, or the way an app sorts the faces in a group of photos so you can find pictures of your niece at a wedding. Generative AI tutor Loora and Snapchat chatbot My AI are in this category.

  • Designed for Kids: This category is a subset of the applied use category, and includes learning tools for kids as well as teaching tools for educators that ultimately benefit students in some way. Khan Academy's chatbot Khanmigo, teaching tool Toddle AI, and educational tutors Kyron Learning and Ello are in this category.

While each of these products has its own benefits and risks, we've uncovered a few overall trends:

  • More data doesn't mean better AI: The more data that an AI tool scrapes from the internet, the riskier it can be for users. In fact, the most successful AI tools that we reviewed are powered by limited, thoughtfully curated data sets and are designed for specific audiences or contexts.

  • Responsible AI practices aren't always obvious: Just because a developer has a process for transparency reporting or risk mitigation doesn't mean that the product is safe. Often this information is hidden from the average user in deeply technical, opaque writing.

  • Generative AI is best for fiction, not fact: Consumers—and kids especially—must understand that generative AI tools are best used for creative exploration and are not designed to consistently give factual, truthful responses to questions.


Because it is trained on massive amounts of internet data, generative AI presents a unique set of risks of reinforcing harmful stereotypes, unfair biases, and false information in the content it generates. But no product is without risks and limitations—even the highest-rated products we reviewed, both of which use speech recognition, face the daunting challenge of accurately recognizing the wide variety of ways every language is spoken.

We'll be rolling out new reviews of both current and emerging tools over the coming months. Through our reviews, families and educators will have the opportunity to better familiarize themselves with the products their kids and students use. The ratings will also inform new legislative and regulatory efforts to keep kids safe online and to push for increased transparency and responsible development from AI creators—an opportunity that as a society we missed when social media emerged on the scene almost 20 years ago.

This work is all part of a larger initiative we are leading, with generous support from Craig Newmark Philanthropies, to ensure that AI has a positive influence on kids. That work includes a set of AI curricular resources for classrooms, research about the impact of AI on kids, and advocacy efforts to establish policies and guardrails around AI that keep kids front and center.

Tracy Pizzo Frey

Tracy Pizzo Frey leads the Common Sense AI ratings and reviews program. She has an extensive background in the applied use of advanced technologies in public and private sector organizations across the globe. Tracy was the Managing Director for Outbound Product Management and Responsible AI at Google Cloud, where, beginning in 2017, she founded and led all of Google Cloud's responsible AI work, which served as a model for how other business lines at Google could align with Google's AI Principles. Tracy is the founder and CEO of Restorative AI, a services-based company that helps organizations ensure that their creation, use, and adoption of AI tools, systems, and products contributes to the future we all deserve. She is also the co-founder and Managing Partner at Uncommon Impact Studio.