AI Companies Join Forces to Ensure Child Safety Principles in Technologies
Related Articles
- Top AI companies commit to child safety principles as industry grapples with deepfake scandals
After a series of highly publicized scandals related to deepfakes and child sexual abuse material (CSAM) has plagued the artificial intelligence industry, top AI companies have come together and pledged to combat the spread of AI-generated CSAM. Thorn, a nonprofit that creates technology to fight child sexual abuse, announced Tuesday that Meta, Google, Microsoft, CivitAI, Stability AI, Amazon, OpenAI and several other companies have signed on to new standards created by the group in an attempt...
- Industry Experts Join Homeland Security's New AI Safety Board
The board will advise the public and private sectors on matters of responsible AI deployment.
- The world's leading AI companies pledge to protect the safety of children online
Leading artificial intelligence companies including OpenAI, Microsoft, Google, Meta and others have jointly pledged to prevent their AI tools from being used to exploit children and generate child sexual abuse material (CSAM). The initiative was led by child-safety group Thorn and All Tech Is Human, a non-profit focused on responsible tech. The pledges from AI companies, Thorn said, “set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children...