Comparing outputs against rival models has become a routine part of AI development. Recently, it was reported that Google may be using rival Anthropic's Claude model to improve the safety of its own model.
According to TechCrunch, Google has hired contractors to compare Gemini's outputs with Claude's. Evaluating each set of responses can take up to 30 minutes, and the report quotes contractors as saying that some of the outputs awaiting review on Google's internal platform were explicitly labeled "I'm Claude from Anthropic."
According to the contractors, Claude outperformed Gemini on safety: Claude would refuse to respond to unsafe prompts, such as certain role-playing scenarios, while Gemini produced inappropriate content in response to some of the same prompts. Although Google is one of Anthropic's major investors, Anthropic's terms of service explicitly prohibit using Claude without permission to build competing products or train competing AI models.
Google DeepMind spokesperson Shira McNamara confirmed that the company does compare AI model outputs but denied using Claude to train Gemini; she did not say whether Google had obtained Anthropic's permission to use Claude for the comparisons.