{"id":5275,"date":"2021-12-07T08:40:54","date_gmt":"2021-12-07T08:40:54","guid":{"rendered":"https:\/\/ircai.org\/top100\/entry\/safeguarding-children-online-in-realtime\/"},"modified":"2021-12-07T08:40:54","modified_gmt":"2021-12-07T08:40:54","slug":"safeguarding-children-online-in-realtime","status":"publish","type":"gt_entry","link":"https:\/\/naixus.net\/index.php\/top100\/entry\/safeguarding-children-online-in-realtime\/","title":{"rendered":"Safeguarding children online in realtime"},"content":{"rendered":"<h2 class=\"section-title\">1. General<\/h2>\n<h2 class=\"label\">Category<\/h2>\n<div class=\" h-auto\">\n<p>                    <span class=\"type-checkbox mr-2\">SDG 3: Good Health and Well-being<\/span><\/p>\n<p>                    <span class=\"type-checkbox mr-2\">SDG 11: Sustainable Cities and Communities<\/span><\/p>\n<p>                    <span class=\"type-checkbox mr-2\">SDG 16: Peace and Justice Strong Institutions<\/span><\/p><\/div>\n<h2 class=\"type-select font-bold mt-4 mb-2\">Category<\/h2>\n<p class=\"type-select mb-6\">Other<\/p>\n<h2 class=\"type-text font-bold mt-4 mb-2\">Please describe Other<\/h2>\n<p class=\"type-text mb-6\">Online child safety &#8211; safetytech<\/p>\n<h2 class=\"section-title\">2. Project Details<\/h2>\n<h2 class=\"type-text font-bold mt-4 mb-2\">Company or Institution<\/h2>\n<p class=\"type-text mb-6\">SafeToNet Ltd<\/p>\n<h2 class=\"type-text font-bold mt-4 mb-2\">Project<\/h2>\n<p class=\"type-text mb-6\">Safeguarding children online in realtime<\/p>\n<h2 class=\"type-textarea font-bold mt-4 mb-2\">General description of the AI solution<\/h2>\n<p class=\"type-textarea mb-6\">SafeToNet is a UK SafetyTech company that focuses on the development and distribution of realtime solutions to online child safety. The internet and most social media services were not designed with children in mind. 
The Unseen Teen report from the Data &amp; Society Research Institute suggests that the service design practices of social media service providers, in some instances, deliberately ignore the needs of this vulnerable age group. Age gates are not robust enough to prevent under-age and even more vulnerable children from using these services.<\/p>\n<p>SafeToNet\u2019s on-device AI is designed to be platform agnostic and device independent. It provides a realtime safeguarding layer that can be used on iOS, Android, macOS and Windows devices. It allows children to go online and benefit from all that the internet and associated public online spaces offer, while being able to exercise their digital rights as outlined in UN General Comment 25.<\/p>\n<p>For legal and technical reasons, SafeToNet\u2019s AI operates entirely on the device and within the technical constraints these environments impose, such as memory usage, storage requirements and battery life. <\/p>\n<p>SafeToNet\u2019s AI is designed to address a number of online harms in real time: text-based conversations that lead to cyberbullying, sexting &amp; sextortion, and \u201cdark thoughts\u201d \u2013 self-harm and suicide ideation. It comprises an AI-based keyboard that intervenes in realtime, nudges the child\u2019s behaviour, and provides realtime advice and guidance, so that they make safer digital decisions. 
In addition, the AI (SafeToWatch) reacts in realtime to what the device\u2019s camera sees; if it detects child nudity, it redacts the image in realtime and renders it useless.<\/p>\n<p>SafeToNet\u2019s AI-based safetytech provides social media companies with tools to help them deliver on their Duty of Care as outlined in the UK\u2019s Online Safety Bill and similar legislation from around the world.<\/p>\n<h2 class=\"type-website font-bold mt-4 mb-2\">Website<\/h2>\n<p class=\"type-website mb-6\"><a href=\"https:\/\/www.safetonet.com\/\" rel=\"nofollow\">https:\/\/www.safetonet.com\/<\/a><\/p>\n<h2 class=\"type-text font-bold mt-4 mb-2\">Organisation<\/h2>\n<p class=\"type-text mb-6\">SafeToNet<\/p>\n<h2 class=\"section-title\">3. Aspects<\/h2>\n<h2 class=\"type-textarea font-bold mt-4 mb-2\">Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominees\u2019 team or yourself if you\u2019re applying for the award, and why they have been a success.<\/h2>\n<p class=\"type-textarea mb-6\">We use an on-device, privacy-preserving AI approach that utilizes a safe continual learning and knowledge transfer scheme that enables continuous learning. This improves the security of our models, leverages visual and acoustic knowledge to enable interpretability, improves accuracy, and reduces the risk of false positives.  <\/p>\n<p>Use of \u201cAdversarial learning\u201d during training helps to safeguard the model against perturbations in the data. Our evaluation process comprises data conditioning and model evaluation. <\/p>\n<p>In data conditioning, we evaluate the quality of the data being labelled according to a defined schema. We analyze inter-annotator agreement and test for data bias among our annotators. The inter-annotator agreement metrics include the kappa score, Cronbach&#x27;s alpha, and the level and category distribution per group of annotators. 
Bias and fairness metrics include disparity and parity constraints, i.e., statistical parity, equality of odds, and equality of opportunity. <br \/>For model evaluation, we use binary and multi-class (categorical) models. Measurements for categorical models are similar to those used for binary models, after averaging over all categories. <\/p>\n<p>For binary models, we use standard industry accuracy metrics: the Area Under the (Receiver Operating Characteristic) Curve (AUC) and the F1 score. AUC measures the predictive ability and accuracy of our model before setting the optimal threshold. F1 assists in evaluating the internal quality of the model. For categorical models, we use the macro-averaged F1 score, which is the unweighted average of F1 over all categories. <\/p>\n<p>For both model types, we measure precision (the fraction of predicted positives that are truly positive) and recall (the fraction of actual positives that are correctly predicted).<\/p>\n<p>We have implemented and validated the technology on iOS and Android. Our results are validated and tested on a range of in-house and public datasets. We will publish the results of our model on public datasets at relevant conferences.<\/p>\n<h2 class=\"type-textarea font-bold mt-4 mb-2\">Scaling of impact to SDGs: Please detail how many citizens\/communities and\/or researchers\/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.<\/h2>\n<p class=\"type-textarea mb-6\">For UN SDG 16.2 \u201cEnd abuse, exploitation, trafficking and all forms of violence against and torture of children\u201d to be fully met, children online MUST be included. To be effective and to deliver on the promise of the UN CRC Optional Protocols and General Comment 25, online \u201csafetytech\u201d must be privacy-preserving, proactive, in realtime and therefore on the child\u2019s device. 
Backhauling to a server for retrospective analysis is neither effective nor timely.<\/p>\n<p>The more children that go online (SDG 9.1), the more children are readily available for grooming by anyone, from anywhere, at any time. Grooming for CSE often results in a child taking and sharing intimate images, a growing phenomenon as reported by the UK\u2019s Internet Watch Foundation.<\/p>\n<p>SafeToNet uses the AI described above in its SafeToWatch product to analyse what the child\u2019s smartphone camera sees in realtime, to prevent the taking of an intimate image of the child for onward sharing. It also intercedes in realtime with an AI-powered keyboard to disrupt the sexualised text-based conversations that lead children to take these images.<\/p>\n<p>The global impact could be enormous. In the UK alone (NSPCC), the cost of child sexual abuse is up to \u00a33.2Bn per annum. Online CSE is a contributing factor to Adverse Childhood Experiences (ACE), which represent a UK cost of \u00a342Bn, or \u00a31,800 per household, per annum (BMJ). UNICEF says there are 750m children online globally.<\/p>\n<p>The UK\u2019s Online Safety Bill defines \u201charm\u201d as \u201ccontent that has an adverse impact on the physical or mental wellbeing of a child\u2026\u201d SafeToNet\u2019s AI also helps protect children from cyberbullying, self-harm and suicide ideation.<\/p>\n<p>Progress is measured in the number of downloads in the markets in which SafeToNet operates, currently the UK, US, Germany and 106 other countries, so based on the UK figures the global added value is immense.<\/p>\n<h2 class=\"type-textarea font-bold mt-4 mb-2\">Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact and how realistic that scaling is.<\/h2>\n<p class=\"type-textarea mb-6\">SafeToNet\u2019s AI is designed to eliminate in realtime the production of CSAM. 
The IWF found 126,000 URLs containing over 93,000 illegal images, mostly of girls aged 11-13, and over 33,000 of children between 7 and 10 years old. SafeToNet\u2019s AI contextualises conversations and activities online and prevents the self-production and streaming of intimate images of children.<\/p>\n<p>SafeToWatch and SafeToNet, two products based on SafeToNet\u2019s AI, are engineered to operate on the child\u2019s smartphone, within and despite all the technical constraints that apply. Because the AI runs entirely on the device, there are no scalability issues. SafeToNet and SafeToWatch can be pre-installed so that phones are \u201csafe out of the box\u201d, or can be installed by the child\u2019s parent.<\/p>\n<p>SafeToWatch and SafeToNet are on-device realtime safetytech solutions that demonstrate the art of the possible. SafeToWatch is being developed as an SDK so that third-party developers can incorporate it into their social apps and enhance in-app safety, for example by switching off the camera if it detects an intimate image of a child. We believe this deep-rooted safetytech presents growth opportunities for social media service providers the world over, as there is an increasing backlash against devices that are unsafe for children.<\/p>\n<p>The SafeToWatch SDK will encourage \u201cAI for Good\u201d. It is a ready-made solution for social media service providers and app developers. Cohort-level reports can be derived from the system\u2019s \u201cback end\u201d so that, for example, the number of risks filtered in realtime can be compared and contrasted across different regions of the world. In addition, SafeToNet\u2019s Safety-by-Design AI complements Privacy-by-Design.<\/p>\n<p>SafeToNet is fully compliant with GDPR and our obligations for Special Category Data (sexual, political and religious). ADM (Automated Decision Making) is a key component of GDPR, with which SafeToNet\u2019s AI complies. 
The Investigatory Powers Act, the Computer Misuse Act and the Defamation Act also all apply.<\/p>\n<h2 class=\"type-textarea font-bold mt-4 mb-2\">Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.<\/h2>\n<p class=\"type-textarea mb-6\">The business model of social media companies is to monetise algorithmically produced content, irrespective of what this content is. In an apparent attempt to maximise their revenues, they seemingly misuse legislation such as Section 230 of the CDA and avoid Age Verification technologies, so that children much younger than the age of 13 set in most of their own terms and conditions can use their services. They also claim a \u201clegitimate interest\u201d to sidestep \u201cconsent\u201d as defined in COPPA. In doing so, social media companies present themselves as unethical. Twitter, for example, is being sued by a 16-year-old boy for monetising intimate images taken of him when he was 13. <\/p>\n<p>We believe SafeToNet\u2019s safetytech AI provides an ethical solution to the seemingly unethical business model of most social media service providers around the world, where algorithmically driven content leads to child suicide through sextortion and cyberbullying.<\/p>\n<p>SafeToNet\u2019s AI has been developed to comply with all relevant laws, including but not limited to GDPR and our obligations for Special Category Data (sexual, political and religious), ADM (Automated Decision Making), the Investigatory Powers Act, the Computer Misuse Act and the Defamation Act. SafeToNet\u2019s safetytech AI resides entirely on the child\u2019s smartphone; it is robust and in one place. Backhauling content to a server for analysis is too slow, unpredictable and, with some content, illegal. 
<\/p>\n<p>The specific added value of SafeToNet\u2019s AI for social media operators is that it safeguards, in realtime, children using their services from content that has an adverse impact on their physical and psychological wellbeing. It provides them with tools to meet their Duty of Care as defined in the UK\u2019s Online Safety Bill. SafeToNet\u2019s AI works for all children equally, regardless of race or gender. Unlike most current social media services, it is intrinsically designed as Tech for Good.<\/p>\n","protected":false},"parent":0,"template":"","gt_category":[100,6,13,18,21,27],"class_list":["post-5275","gt_entry","type-gt_entry","status-publish","hentry","gt_category-online-child-safety-safety-tech","gt_category-promising","gt_category-sdg11","gt_category-sdg16","gt_category-sdg3","gt_category-uk"],"_links":{"self":[{"href":"https:\/\/naixus.net\/index.php\/wp-json\/wp\/v2\/gt_entry\/5275","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/naixus.net\/index.php\/wp-json\/wp\/v2\/gt_entry"}],"about":[{"href":"https:\/\/naixus.net\/index.php\/wp-json\/wp\/v2\/types\/gt_entry"}],"wp:attachment":[{"href":"https:\/\/naixus.net\/index.php\/wp-json\/wp\/v2\/media?parent=5275"}],"wp:term":[{"taxonomy":"gt_category","embeddable":true,"href":"https:\/\/naixus.net\/index.php\/wp-json\/wp\/v2\/gt_category?post=5275"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}