Google CEO Sundar Pichai has promised upgrades to the company’s AI chatbot, Bard, in response to criticism. Speaking on The New York Times’ Hard Fork podcast, Pichai said that Google will soon be upgrading Bard to the more capable PaLM models, which will bring added capabilities in reasoning, coding, and answering math questions. He noted that Bard is currently running on a lightweight version of LaMDA, an AI language model that focuses on delivering dialog. Pichai compared Bard to a souped-up Civic racing against more powerful cars, suggesting that PaLM is a more powerful model that can handle tasks such as common-sense reasoning and coding problems.
Bard was released to the public on March 21st but failed to garner the same attention and acclaim as OpenAI’s ChatGPT and Microsoft’s Bing chatbot. Pichai suggested that one reason for Bard’s limited capabilities was a sense of caution within Google. He said it was important not to release a more capable model before ensuring it could be handled well.
Pichai also discussed concerns about the rapid pace of AI development and its potential impact on society. He acknowledged the merit of concerns raised in an open letter, signed by Elon Musk and top AI researchers, calling for a six-month pause on AI development. He also suggested that existing regulations in areas such as privacy and healthcare could be applied to AI rather than creating new laws specifically for it.
Some experts have raised concerns about chatbots’ tendency to spread misinformation, while others warn about more existential threats. Pichai acknowledged that AI systems are becoming increasingly capable, drawing closer to artificial general intelligence (AGI) – systems as capable as humans across a wide range of tasks. He stressed the importance of anticipating the impact of these systems and evolving to meet the moment.
As Google upgrades Bard and other AI models, the debate around AI’s impact on society will likely continue. Pichai’s comments suggest that caution and thoughtful regulation will be essential in ensuring AI’s benefits outweigh its potential risks.