Written by The Conversation
Google is rolling out its Gemini AI chatbot to kids under 13. It’s a risky move

Google has announced it will roll out its Gemini artificial intelligence (AI) chatbot to children under the age of 13.

The rollout begins within the next week in the United States and Canada, with an Australian launch to follow later this year. The chatbot will only be available through Google’s Family Link accounts.

But this development comes with major risks. It also highlights how, even if children are banned from social media, parents will still have to play a game of whack-a-mole with new technologies as they try to keep their children safe.

A good way to address this would be to urgently implement a digital duty of care for big tech companies such as Google.

How will the Gemini AI chatbot work?

Google’s Family Link accounts allow parents to control access to content and apps, such as YouTube.

To create a child’s account, parents provide personal details, including the child’s name and date of birth. This may raise privacy concerns for parents worried about data breaches, but Google says children’s data will not be used to train the AI system.

Chatbot access will be “on” by default, so parents will need to actively turn the feature off to restrict access. Young children will be able to prompt the chatbot for text responses or ask it to generate images.

Google acknowledges the system may “make mistakes”. So children will need help assessing the quality and trustworthiness of the content it produces. Chatbots can make up information (known as “hallucinating”), so if children use the chatbot for homework help, they need to check facts against reliable sources.

What kinds of information will the system provide?

Google and other search engines retrieve original materials for people to review. A student can read news articles, magazines and other sources when writing up an assignment.

Generative AI tools are not the same as search engines. AI tools look for patterns in source material and create new text responses (or images) based on the query – or “prompt” – a person provides. A child could ask the system to “draw a cat” and the system will scan its training data for patterns of what a cat looks like (such as whiskers, pointy ears and a long tail) and generate an image that includes those cat-like details.

Understanding the differences between materials retrieved in a Google search and content generated by an AI tool will be challenging for young children. Studies show even adults can be deceived by AI tools. And even highly skilled professionals – such as lawyers – have reportedly been fooled into using fake content generated by ChatGPT and other chatbots.

Will the content generated be age-appropriate?

Google says the system will include “built-in safeguards designed to prevent the generation of inappropriate or unsafe content”.

However, these safeguards could create new problems. For example, if particular words (such as “breasts”) are restricted to protect children from accessing inappropriate sexual content, this could mistakenly also exclude children from accessing age-appropriate content about bodily changes during puberty.

Many children are also very tech-savvy, often with well-developed skills for navigating apps and getting around system controls. Parents cannot rely exclusively on inbuilt safeguards. They need to review generated content and help their children understand how the system works, and assess whether content is accurate.

Google says there will be safeguards to minimise the risk of harm for children using Gemini, but these could create new problems. Dragos Asaeftei/Shutterstock

What risks do AI chatbots pose to children?

Australia’s eSafety Commissioner has issued an online safety advisory on the potential risks of AI chatbots, including those designed to simulate personal relationships, particularly for young children.

The eSafety advisory explains AI companions can “share harmful content, distort reality and give advice that is dangerous”. The advisory highlights the risks for young children, in particular, who “are still developing the critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs, and what to do about it”.

My research team has recently examined a range of AI chatbots, such as ChatGPT, Replika, and Tessa. We found these systems mirror people’s interactions based on the many unwritten rules that govern social behaviour – or, what are known as “feeling rules”. These rules are what lead us to say “thank you” when someone holds the door open for us, or “I’m sorry!” when we bump into someone on the street.

By mimicking these and other social niceties, these systems are designed to gain our trust.

These human-like interactions will be confusing, and potentially risky, for young children. They may believe content can be trusted, even when the chatbot is responding with fake information. And they may believe they are engaging with a real person rather than a machine.

AI chatbots such as Gemini are designed to mimic human behaviour and gain our trust. Ground Picture

How can we protect kids from harm when using AI chatbots?

This rollout is happening at a crucial time in Australia, as children under 16 will be banned from holding social media accounts in December this year.

While some parents may believe this will keep their children safe from harm, generative AI chatbots show the risks of online engagement extend far beyond social media. Children – and parents – must be educated in how all types of digital tools can be used appropriately and safely.

As Gemini is an AI chatbot rather than a social media tool, it will fall outside Australia’s ban.

This leaves Australian parents playing a game of whack-a-mole with new technologies as they try to keep their children safe. Parents must keep up with new tool developments and understand the potential risks their children face. They must also understand the limitations of the social media ban in protecting children from harm.

This highlights the urgent need to revisit Australia’s proposed digital duty of care legislation. While the European Union and United Kingdom launched digital duty of care legislation in 2023, Australia’s has been on hold since November 2024. This legislation would hold technology companies to account by requiring them to deal with harmful content at its source, to protect everyone.

Read more https://theconversation.com/google-is-rolling-out-its-gemini-ai-chatbot-to-kids-under-13-its-a-risky-move-256204
