Are you applying to the internship?
Job Description
About the job:
Trust & Safety team members at Google are dedicated to identifying and addressing the biggest challenges that threaten the safety and integrity of Google’s products. They leverage technical expertise, exceptional problem-solving skills, user insights, and proactive communication to shield users and partners from abuse across Google products such as Search, Maps, Gmail, and Google Ads. The team prioritizes a big-picture perspective, strategic teamwork, and a passion for doing what’s right. Its members collaborate globally and cross-functionally with Google engineers and product managers to swiftly identify and combat abuse and fraud, all while upholding Google’s commitment to user safety and promoting trust in the brand.
The Content Adversarial Red Team (CART):
The Content Adversarial Red Team (CART) within Trust and Safety Intelligence is a newly formed team that uses unstructured, persona-based adversarial testing techniques to identify “unknown unknowns” and uncover new or unexpected loss patterns in Google’s leading generative AI products. CART works closely with product, policy, and enforcement teams to proactively detect harmful patterns and to build the safest possible experiences for Google users.
Your Role:
As a Content Adversarial Red Team Analyst, you will be at the forefront of generative AI testing within Trust and Safety, playing a crucial role in supporting Google’s mission to launch bold and responsible AI products. Because this role involves working with sensitive content and situations, you will be exposed to a range of material that may be graphic, controversial, or upsetting.
This position is a unique opportunity to work in the fast-moving field of generative AI and to contribute directly to a safer, more trustworthy online experience for Google users.