Google CEO says over 25% of new Google code is generated by AI



On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company’s products, with human programmers overseeing the computer-generated contributions. The statement, made during Google’s Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

“We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency,” Pichai said during the call. “Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.”

Google developers aren’t the only programmers using AI to assist with coding tasks. Hard numbers are difficult to come by, but according to Stack Overflow’s 2024 Developer Survey, over 76 percent of all respondents “are using or are planning to use AI tools in their development process this year,” with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are “already using AI coding tools both in and outside of work.”

AI-assisted coding first emerged in a big way with GitHub Copilot in 2021, and the feature saw a wide release in June 2022. It used a specialized coding AI model from OpenAI called Codex, which was trained both to suggest continuations of existing code and to create new code from scratch from English instructions. Since then, AI-based coding has expanded dramatically, with ever-improving offerings from Anthropic, Meta, Google, OpenAI, and Replit.

GitHub Copilot has expanded in capability as well. Just yesterday, the Microsoft-owned subsidiary announced that developers will be able to use non-OpenAI models such as Anthropic’s Claude 3.5 and Google’s Gemini 1.5 Pro to generate code within the application for the first time.

While some tout the benefits of AI use in coding, the practice has also attracted criticism from those who worry that future software generated partially or largely by AI could become riddled with difficult-to-detect bugs and errors.

According to a 2023 study by Stanford University, developers using AI coding assistants tended to introduce more bugs while paradoxically believing their code to be more secure. This finding was highlighted by Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, who told Wired that “there are probably both benefits and risks involved” with AI-assisted coding, emphasizing that “more code isn’t better code.”
