OpenAI has quietly formed a new team dedicated to child safety.

Facing scrutiny from activists and parents, OpenAI has established a new team to explore ways of preventing the misuse or abuse of its AI tools by children.

A recent job listing on OpenAI’s career page discloses the formation of a Child Safety team. The company states that this team collaborates with platform policy, legal, and investigation groups within OpenAI, along with external partners, to oversee “processes, incidents, and reviews” related to underage users.

The team is actively recruiting a child safety enforcement specialist. This role will involve applying OpenAI’s policies in the context of AI-generated content and participating in the review processes for “sensitive” content, presumably related to children.

Tech companies of a certain size often dedicate substantial resources to complying with laws like the U.S. Children's Online Privacy Protection Rule, which controls what children can access online and the kinds of data companies can collect from them. So OpenAI hiring child safety experts is not entirely surprising, particularly if the company expects a significant underage user base one day. (OpenAI's current terms of use require parental consent for users aged 13 to 18 and prohibit use by children under 13.)

However, the creation of the new team, which follows OpenAI's recent partnership with Common Sense Media on kid-friendly AI guidelines and the signing of its first education customer, also suggests that OpenAI is wary of running afoul of policies on minors' use of AI, and of the negative press that would follow.

Children and teenagers are increasingly relying on Generative AI (GenAI) tools not only for schoolwork but also for personal issues. According to a poll by the Center for Democracy and Technology, 29% of children have reported using ChatGPT to address anxiety or mental health issues, 22% for problems with friends, and 16% for family conflicts.

However, some see this trend as a growing risk. Last summer, schools and colleges rushed to ban ChatGPT over fears of plagiarism and misinformation. While some institutions have since reversed their bans, not everyone is convinced of GenAI's potential for good. Surveys such as one by the U.K. Safer Internet Centre found that over half of children (53%) have seen peers use GenAI in negative ways, such as creating believable false information or images designed to upset someone.

In September, OpenAI released documentation for ChatGPT in classrooms, including prompts and an FAQ to guide educators on using GenAI as a teaching tool. In one support article, OpenAI acknowledged that its tools, particularly ChatGPT, “may produce output that isn’t appropriate for all audiences or all ages” and advised exercising “caution” when exposing children, even those meeting the age requirements.

There is a growing call for guidelines regarding the use of GenAI by children. UNESCO urged governments to regulate the use of GenAI in education, including implementing age limits for users and establishing safeguards for data protection and user privacy. Audrey Azoulay, UNESCO’s director-general, emphasized, “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice. It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”
