Child Safety Track at TrustCon

TrustCon registration closes TOMORROW, July 9. Register here: https://lnkd.in/g4HTqiWS. We are sponsoring the child safety track, featuring 25 events across all three days. Here is a look at the child safety track events for Tuesday, July 23:

11:00 AM - 12:30 PM
- Cross-Sector Collaboration for Tackling Online Child Sexual Exploitation and Abuse (Workshop)

11:10 AM - 12:00 PM
- The State of Child Safety Tech in 2024
- Understanding Facilitator-Perpetrator Engagement & the Role of Technology in Enabling OSAEC in the Philippines
- Leveling Up Our CSAM Detection Capabilities Through Ethical Experimentation (Presentation)

1:30 PM - 2:20 PM
- Enacting Generative AI Safety by Design Principles to Combat Child Sexual Abuse (Panel)

2:50 PM - 3:40 PM
- The CyberTipline Pipeline: Perspectives From Platforms, NCMEC & Law Enforcement (Presentation)
- An Engineer's Perspective: Practical Considerations for Automating the NCMEC Reporting Process (Presentation)
- Designing for Child Privacy and Safety: Legal Landscape, Best Practices & Tools (Presentation)

4:10 PM - 5:00 PM
- “Pay Or Your Life is Ruined”: A Multi-Stakeholder Discussion of the Latest Sextortion Research & Trends (Panel)

The full agenda can be found here: https://lnkd.in/gf7uMApg
Tech Coalition’s Post
More Relevant Posts
-
Check out our article on Child Safeguarding, where we delve into how Bikal is revolutionizing the landscape of child protection through collaboration, innovation, and AI technology.🤖 Join us in shaping a safer and more secure future for children everywhere. By the way, have you tried Dialog XR? Elevate conversations, streamline processes, and pave the way for a smarter tomorrow with DialogXR😎🦾📱 https://lnkd.in/eiaVT7Ut #child #safeguarding #uk #police #AI #Police #Artificial #Intelligence Raj Sandhu Satish Vasu Mustafa Rampurawala Jas Dahil Mark Bantilan Karina Zafarova Sirojiddin Dushayev Ahmed Elbanna Raneesh A Maykhel De Leon Anang Mistry
-
📢 Kate Butterby and Nancy Lombard from Glasgow Caledonian University have recently published their article, “Developing a chatbot to support victim-survivors who are subjected to domestic abuse”, in the Journal of Gender-Based Violence. Their piece delves into the ethical considerations of developing and implementing the ISEDA chatbot and explores how technology can support victims of domestic violence. It outlines both the benefits and challenges of technology interventions in this field and considers how to ensure the chatbot’s longevity to empower under-funded women’s services. 📚 Don’t forget to check this article out at the following link: https://lnkd.in/dmdrtyev #domesticviolence #domesticviolenceawareness #domesticabuse #domesticabuseawareness #domesticviolenceprevention #domesticabuseprevention #genderbasedviolence #genderbasedviolenceawareness #genderbasedviolenceprevention
Developing a chatbot to support victim-survivors who are subjected to domestic abuse: considerations and ethical dilemmas
bristoluniversitypressdigital.com
-
Law | UNICEF | Govt. of Maharashtra | Top Voice Personal Branding | I double Project Impact for govts/orgs | AI for Social Impact
I work in child protection, and the work is not easy. This technology can be a game changer, though.

I work with the Government and UNICEF to identify vulnerabilities in children. Because there are no clear indicators, this can be difficult, but technology can be an excellent lever. We are exploring geotagging of houses and artificial intelligence to better understand child protection.

💡Here are 2 ways that can change the game:

1. Neighborhood Analysis: Geotagging of houses, combined with AI algorithms, can help analyze neighborhood characteristics and identify areas that may pose risks to children's well-being. By considering factors such as crime rates, proximity to high-risk locations, availability of resources like schools and healthcare facilities, and socio-economic indicators, AI can assist in identifying vulnerable neighborhoods. This information can guide policymakers, social service agencies, and community organizations in targeting interventions, allocating resources, and implementing preventive measures to support at-risk children and families.

2. Predictive Modeling for Child Vulnerability: Geotagging of houses can contribute to predictive modeling for child vulnerability. By analyzing geolocation data and combining it with other relevant factors, such as socio-economic status, family dynamics, or educational indicators, AI algorithms can develop predictive models to identify children at higher risk of experiencing vulnerability or adverse outcomes. These models can aid in early intervention efforts, targeted support, and resource allocation to ensure the well-being of vulnerable children.

💡It is essential to approach these applications with strict adherence to privacy laws, ethical guidelines, and child protection protocols.

💡If leveraged well, we can reach and protect more children through this model! Share this with someone working in the social sector. Views are personal.
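To make the neighborhood-analysis idea above concrete, here is a minimal, purely illustrative sketch of combining geotag-derived neighborhood indicators into a prioritization score. Every feature name, weight, and threshold is a hypothetical assumption for illustration, not part of any real UNICEF or government system; a production model would learn its parameters from validated data under the privacy and ethics safeguards the post describes.

```python
# Illustrative only: toy neighborhood risk-scoring sketch.
# All features, weights, and thresholds are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Neighborhood:
    crime_rate: float         # incidents per 1,000 residents (hypothetical scale)
    school_access: float      # 0.0 (none nearby) to 1.0 (good access)
    healthcare_access: float  # 0.0 to 1.0
    poverty_rate: float       # fraction of households below the poverty line


def vulnerability_score(n: Neighborhood) -> float:
    """Combine geotag-derived indicators into a 0-1 score.

    Higher scores suggest areas to prioritize for preventive outreach.
    The weights are placeholders, not learned from real data.
    """
    risk = (
        0.35 * min(n.crime_rate / 50.0, 1.0)  # normalize crime rate, cap at 1
        + 0.25 * n.poverty_rate
        + 0.20 * (1.0 - n.school_access)      # poor school access raises risk
        + 0.20 * (1.0 - n.healthcare_access)  # poor healthcare access raises risk
    )
    return round(risk, 3)


def prioritize(neighborhoods: dict[str, Neighborhood],
               threshold: float = 0.5) -> list[str]:
    """Return neighborhood IDs scoring above the threshold, highest first."""
    scores = {nid: vulnerability_score(n) for nid, n in neighborhoods.items()}
    return sorted((nid for nid, s in scores.items() if s > threshold),
                  key=lambda nid: scores[nid], reverse=True)
```

In practice the weighted sum would be replaced by a model trained and audited on real outcomes, but the shape of the pipeline (features per geotagged area in, ranked intervention list out) stays the same.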
LinkedIn LinkedIn for Creators #ChildProtection #EthicalAI #letstalkgovt #PrivacyMatters #GeotaggingAnalysis #VulnerableChildren #NeighborhoodSafety #EnvironmentalHazards #DataSecurity #ResponsibleAI #EarlyIntervention #CommunitySupport #ChildWellbeing #artificialintelligence #technology #data
-
The Canadian Parliament currently has an online safety bill, Bill C-36, before it. I don't support online safety bills like Bill C-36 because they can empower police forces, increase surveillance, threaten privacy, and restrict access to (age-gate) information important for children. I'm skeptical of any legislation done "for the children"; Ton-That of Clearview AI deploys that argument all the time to bully regions into adopting his facial recognition technology. I'm also skeptical of any legislation that pairs the word safety with AI, because it won't keep people safe; it'll keep AI safe from public inquiry. I do appreciate that there was far more public consultation in the drafting of this bill than for Bill C-27, but these types of bills shut down consultation after implementation. If it does keep children safe, it'll be white hetero children and not children globally. https://lnkd.in/erd_JCQZ https://lnkd.in/euuiiTkW https://lnkd.in/eG_5RUu2 https://lnkd.in/eqWywvmV https://lnkd.in/eYKS8KFW
Child safety bills are reshaping the internet for everyone
theverge.com
-
🚨 An important new publication 🚨 📅 Today, the "Online risks to children: evidence review" has been published, shedding light on the online risks and harms faced by children in the UK. 🌐 The review covers findings from 2017-2023, preceding the Online Safety Act 2023. 🔍 The review: * sets out the different types of risks and harms children face online, focusing on sexual risks * examines how the design of online platforms and tools can play a part in increasing or decreasing those risks * offers research recommendations, along with NSPCC recommendations to tech companies and Ofcom. 🔗 Authored by Dr. Jo Bryce, Prof. Sonia Livingstone, Prof. Julia Davidson OBE, Beth Hall, and Jodie Smith, it not only identifies diverse risks but also evaluates how online platforms' design can impact these risks. 👥 To stay updated on more news, thought-provoking #research and academic collaborations, please click the '#Follow' button to join our LinkedIn community! Thank you! 💙👧👦 https://lnkd.in/egaB-Ubx #OnlineSafety #DigitalWellbeing #Research
Online risks to children: evidence review | NSPCC Learning
learning.nspcc.org.uk
-
In the past few weeks, irresponsible Big Tech has been called to account in the Senate Child Safety hearings, while Swifties were outraged over deepfake porn trending on Twitter. 🤯 We’re here to help with educational, Accountability-researched resources! 📚 Read. Share. 💥 https://lnkd.in/gsP-RVTB And, open those vital conversations about AI with your family or your accountability partner, because we all need help navigating this ever-evolving world of technology! #DeepFake #AIManipulation #DigitalManipulation #SyntheticMedia #DeepLearning #FakeVideos #AIgenerated #DeepFakeTechnology #MediaForgery #DeepFakeAwareness #PornAddictionRecovery #RecoveryJourney #HealthySexuality #PornFreeLife #BreakTheCycle #AddictionRecovery #MindfulLiving #SelfCareJourney #SupportCommunity #OvercomePorn #taylorswift
-
Safeguarding Children's Data Rights: A Multi-Stakeholder Responsibility In our digital age, protecting children's data rights has become a collective responsibility that demands collaboration across sectors. My latest article at https://lnkd.in/emR3rEUW explores the vital roles of governments, social media companies, civil society, educational institutions, parents, and children themselves in creating a safer digital environment. Only through a coordinated multi-stakeholder approach can we mitigate the risks posed by emerging technologies and safeguard the privacy and wellbeing of our youngest digital citizens. From enacting robust data protection laws to implementing ethical AI governance, fostering digital literacy, and empowering youth voices - we all have a part to play. As technology evolves rapidly, we must remain vigilant and proactive in addressing new challenges. I invite you to read the full article at https://lnkd.in/emR3rEUW and join the conversation on this crucial issue. Together, we can build an inclusive digital future where children can explore and thrive without compromising their rights. #ChildrensDataRights #DigitalSafety #DataProtection #EthicalTech #YouthEngagement
-
🟢Live Update: In the first workshop, “What is happening in the real world?”, we discussed the biggest challenges hotlines face, the latest developments and trends in the CSAM ecosystem, and how hotlines can be supported more effectively. Members agree that hotlines deal with two major issues: the rise of AI and legal burdens. It was made clear that we need to identify and address the blind spots. Insights from various group tables highlighted the need to focus on regulatory gaps by confronting the legal barriers hindering hotlines’ efficiency. Serious effort is required to navigate the complex landscape of tech and legal challenges surrounding child sexual abuse material (CSAM). #MembersMeeting #IdentityandPurpose #Workshop #ChildProtection #FightCSAM #DigitalProblems
-
It’s an incredible feeling when you are a force for good. Everything around you, direct or indirect, is positively impacted. In this case, I’m proud to be part of an organization that values the welfare of children in our communities nationwide. As a father of 2, this really hits home… The growing volume and variety of data are no match for Logikcull | a reveal technology. Last year, NCMEC handled 21,494 cases of missing children, and with that technology, Logikcull saved them over 4,000 hours. With the compressed timelines, they can focus on turning the data over to law enforcement to act faster and to prosecutors to get convictions. It doesn’t get much better than that! Great work and THANK YOU to everyone involved!!! #ncmec #law #legaltech #investigations #technology #innovation
This is how AI can help find missing children
fastcompany.com