A few years ago, engineers Karine Mellata and Michael Lin teamed up at Apple’s fraud engineering and algorithmic risk team. Their work primarily focused on combating various forms of online abuse, including spam, bot automation, compromised accounts, and developer fraud, all to safeguard Apple’s expanding user base.
Despite their persistent efforts to build new models that kept pace with the changing landscape of online abuse, Mellata and Lin found themselves constantly revisiting and rebuilding the foundation of their trust and safety infrastructure — a Sisyphean task that kept them from truly staying ahead of the perpetrators.
With growing regulatory pressure to consolidate and streamline disparate trust and safety operations, Mellata envisioned the potential for impactful change. She imagined a dynamic system capable of adapting as rapidly as the abuse it was designed to counter and expressed this ambition in a conversation with TechCrunch.
To turn this vision into reality, Mellata and Lin established Intrinsic. Their startup provides vital tools that allow safety teams to effectively prevent abusive activity on their platforms. Securing $3.1 million in seed funding, Intrinsic has garnered support from notable investors like the Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.
Intrinsic’s multifunctional platform has been crafted to moderate content created by both users and AI. It offers a robust infrastructure that empowers clients, notably social media and e-commerce companies, to identify and act upon policy-violating content. Intrinsic automates various moderation tasks, such as user bans and content reviews.
Mellata highlights Intrinsic’s adaptability, noting the AI platform’s capability to address specific issues like preventing inadvertent legal advice in marketing content or identifying region-specific prohibited items on marketplaces. She stresses that Intrinsic’s customization goes beyond what generalized classifiers offer, and that even well-equipped teams would need significant development time to deploy comparable in-house solutions.
When asked about competitors such as Spectrum Labs, Azure, and Cinder, Mellata points out Intrinsic’s unique features, like its explainability in content moderation decisions and extensive manual review tools. These allow customers to interrogate the system about errors and refine their moderation models using their own data.
According to Mellata, traditional trust and safety systems lack the flexibility to evolve with the nature of online abuse. Consequently, teams with limited resources are increasingly seeking external solutions that can reduce costs while maintaining rigorous safety standards.
Without independent third-party audits, it’s challenging to ascertain the accuracy and unbiased nature of any vendor’s content moderation models. However, Intrinsic is reportedly making headway, securing major contracts with “large, established” customers.
Looking forward, Intrinsic plans to grow its team and broaden its technology to include oversight of not just text and images, but also video and audio content.
As the tech industry experiences a broader slowdown, interest in automation for trust and safety is growing. Mellata believes this trend puts Intrinsic in a prime position: by providing cost-effective, efficient, and thorough abuse detection, Intrinsic appeals to executives looking to trim budgets and mitigate risks.