Digital child abuse: the danger of AI-based exploitation
• The British Government and the AI Security Institute released the first-ever International AI Safety Report 2025, highlighting the risk of child sexual abuse material (CSAM) generated by AI tools.
• The UK is making the first legislative attempt to target the threats posed by AI tools that can generate CSAM.
• The World Economic Forum and the Internet Watch Foundation have also highlighted the proliferation of CSAM on the open web.
• The UK’s legislation will make it illegal to possess, create, or distribute AI tools that can generate CSAM, and paedophile manuals that may guide individuals in using AI tools to generate CSAM.
• The proposed law outlaws even the possession and use of such AI tools, making it both a deterrent and comprehensive in scope.
• The proposed law can curb the ripple effect that the spread of CSAM has on children’s mental health, and it addresses the legislative gap concerning purely AI-generated CSAM imagery.
• Cybercrimes against children in India have increased substantially over the previous year, with 1.94 lakh incidents of child pornography recorded as of April 2024.
• The existing legislative framework lacks adequate safeguards to deal with AI-generated CSAM.
• The proposed way forward includes replacing the definition of ‘child pornography’ under the POCSO Act with ‘CSAM’, defining ‘sexually explicit’ under Section 67B of the IT Act, and imposing statutory liability on Virtual Private Networks, Virtual Private Servers, and Cloud Services to comply with CSAM-related provisions in Indian law.
• The Government of India must pursue the adoption of the UN Draft Convention on ‘Countering the Use of Information and Communications Technology for Criminal Purposes’ by the UN General Assembly.