Generative AI and Content Filtering

Developments in technology can provide more efficient ways to solve challenges, which explains the recent surge in popularity around the use of large language models and generative Artificial Intelligence (AI). These technologies are developing so rapidly that there is very little legislation or self-regulation in place to control them. Could children use these technologies to help them access content typically restricted by filtering systems?

What are the Risks and Ethics of Using AI Tools in School?

While AI tools tend to have their own age restrictions, filters and usage policies, there are still risks and ethical considerations to bear in mind even when they are used correctly.
  • Misinformation: AI isn't 100% accurate and only generates content based on what has been input into the system. It can also be out of date, depending on when it last gathered information.
  • Bias and stereotypes: because AI learns from content found online, the bias and stereotypes demonstrated there are often reflected in AI-generated content.
  • Inappropriate material: AI may produce content that isn't suitable for the audience.
  • Ethical considerations: using content generated by an AI without disclosing this could constitute plagiarism.
  • Privacy: depending on the privacy policy of the AI being used, any personal data you enter may be stored and used to inform the AI.

What have Exa implemented?

In recognition of the possibility of AI answering questions about sensitive topics, SurfProtect blocks all classified AI URLs and search terms by default to prevent learners from accessing AI resources. If you require access to AI tools and would like these added to your whitelist, you can manage these settings through the SurfProtect panel at surfprotectpanel.exa.net.uk. You can follow the guide on our new help site, or feel free to get in touch.
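For readers curious what "blocked by default with a whitelist override" looks like in practice, here is a minimal, hypothetical sketch of category-based URL blocking in Python. The domain list, allowlist and function names are invented purely for illustration and do not reflect how SurfProtect is actually implemented.

```python
# Hypothetical sketch of category-based URL blocking with a per-school
# allowlist override. Domain names and data structures are invented for
# illustration; this is not how SurfProtect is actually implemented.

from urllib.parse import urlparse

# Example domains classified under a "generative AI" category.
AI_CATEGORY = {"chat.openai.com", "gemini.google.com", "character.ai"}

# Example per-school allowlist, e.g. managed through an admin panel.
SCHOOL_ALLOWLIST = {"chat.openai.com"}

def is_blocked(url: str) -> bool:
    """Block any URL in the AI category unless its host has been allowlisted."""
    host = urlparse(url).hostname or ""
    if host in SCHOOL_ALLOWLIST:
        return False
    return host in AI_CATEGORY

print(is_blocked("https://character.ai/chat"))       # True  - blocked by default
print(is_blocked("https://chat.openai.com/"))        # False - allowlisted
print(is_blocked("https://www.bbc.co.uk/bitesize"))  # False - not an AI domain
```

Blocking known URLs and search terms like this is comparatively straightforward; as the rest of this article explains, filtering the content an AI generates on the fly is a much harder problem.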

Some systems, such as ChatGPT, already filter prompts and will refuse to answer or direct users to seek help. However, this isn't sufficient on its own and can be worked around by a determined user, for example through role-play prompts such as "DAN". Because of this, schools may require further protection, which raises the question: can't content filtering simply block inappropriate AI-generated content as it's created?

Why is it difficult to filter AI-generated content dynamically?

Filtering software often has a tough time dealing with generative AI. Here’s why:

  • Variability and context: AI content can vary widely in style, tone, and context, which makes it hard for filtering software to set consistent rules for what is and isn't appropriate. Something fine in one situation might be completely wrong in another, and the filtering software can't tell the difference. For example, asking an AI to write in the style of an author who produces inappropriate content may itself generate inappropriate content.
  • Evolution: AI is developing and changing constantly, so filtering software has to keep updating its algorithms to ensure it filters effectively without impacting the user experience.
  • Resources and performance: responses come back in a variety of formats and volumes, so a filter not only has to process large amounts of text quickly but may also need to spend further resources converting it into a format that can be filtered. This takes significant computational resources and affects response times. For example, a ChatGPT response takes time to generate as it processes; imagine that delay added to every web page you visit.
  • Adversarial generation: some AI content is designed to trick filtering software and slip past its filters, which makes the job even harder. For example, character.ai focuses on generating characters rather than content such as essays; AIs with a different focus can be used to create inappropriate content without the user specifically asking for it.
  • User workarounds: if users are determined enough to find the content, they'll find workarounds. For example, they might ask the AI to replace blocked keywords in its output or to avoid certain words or phrases entirely, and follow-up prompts can guide the AI towards answers that bypass content filters altogether. The sketch after this list shows how easily simple keyword matching can be defeated.
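To make the "user workarounds" point concrete, here is a minimal, hypothetical sketch of a naive keyword filter and how a trivial substitution defeats it. The blocklist and the example outputs are invented for illustration only; real filtering products use far more sophisticated techniques.

```python
# Hypothetical sketch of a naive keyword filter applied to AI output, and how
# a trivial substitution defeats it. The blocklist and the example responses
# are invented for illustration only.

import re

BLOCKLIST = {"forbidden"}  # stand-in for a real list of blocked terms

def contains_blocked_term(text: str) -> bool:
    """Return True if any whole word in the text appears on the blocklist."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

direct_output = "Here is the forbidden information you asked for."
# The user simply asks the AI to misspell or swap the blocked word:
obfuscated_output = "Here is the f0rb1dden information you asked for."

print(contains_blocked_term(direct_output))      # True  - caught by the filter
print(contains_blocked_term(obfuscated_output))  # False - same meaning, missed
```

Because an AI can be asked to rephrase, misspell or otherwise encode its output on demand, a filter that only matches known words will always lag behind, which is one reason filtering generated content dynamically is so much harder than blocking known URLs and search terms.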

In short, AI-generated content is a real headache for filtering software. It's always changing and full of surprises, which means developers have to keep working hard to keep our online spaces safe.

To help educate your learners on the risks of AI and safe usage, we've included some key points to bear in mind.

How do you use AI safely in school?

Establish guidelines for the use of AI tools and provide education around them, such as:
  • Educate learners about the risks, capabilities and limitations of AI, and encourage them to evaluate the accuracy of generated content.
  • Ensure AI is only used for educational purposes and respects school policies. Educate students on why they shouldn't present work from AI as their own, and on the importance of learning for themselves and being creative to avoid dependence on AI.
  • Only allow AI to be used under supervision, to ensure the technology is used properly.
  • Only use AI that is safety conscious and enforces strict policies and content filtering on harmful/inappropriate content. Also check if there are any safeguarding features that should be switched on.
  • Encourage open communication about AI, create a safe environment for students to share concerns, ask questions and report inappropriate content.
  • Teach digital literacy and responsible online behaviour such as keeping personal information private.
  • Educate parents on AI usage as well, so they can instil good habits in young learners when AI is used outside of the classroom.

The Exa Foundation offers a variety of sessions for students, teachers and parents focussed on digital literacy and on protecting yourself and young people when using the internet. Find out more about the sessions The Exa Foundation offers here: https://exa.net.uk/foundation/

You can also find out more about the DfE’s stance on the use of generative AI in education here: https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education#opportunities-for-the-education-sector

More details on what Exa are doing when it comes to AI in education are to follow; subscribe to our newsletter to make sure you don't miss any further updates!
