The most revolutionary technology Google is developing right now is AI. AI helps people, businesses, and communities reach their full potential, from assisting in the earlier diagnosis of disease to letting people access information in their own language, and it opens up new possibilities that could significantly improve the lives of billions of people. That is why, six years ago, they reoriented the company around AI in pursuit of their mission to organise the world’s information and make it universally accessible and useful, and they consider AI the most important way they can deliver on that mission.
Since then, they have continued to invest in AI across the board, and teams like Google AI and DeepMind are advancing the state of the art. Today, the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law. At the same time, large language models and advanced generative AI are capturing imaginations around the world. In fact, many of the generative AI applications you’re starting to see today are built on the foundation of their Transformer research project and their field-defining paper published in 2017, as well as their important advances in diffusion models.
This is an incredibly exciting time for Google to be working on these technologies, as they translate deep research and breakthroughs into products that genuinely help people. That has been their journey with large language models. Two years ago they unveiled LaMDA (short for Language Model for Dialogue Applications), which powers their newest generation of language and conversation capabilities.
They have been developing Bard, an experimental conversational AI service powered by LaMDA. And while they prepare to make it more widely available to the public in the coming weeks, they are taking a step forward today by opening it up to trusted testers.
Bard aims to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of their large language models. It draws on information from the web to provide fresh, high-quality responses. Bard can be an outlet for creativity and a launchpad for curiosity, helping you explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best football strikers of the moment and then get drills to build your skills.
They are releasing Bard initially with a lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling them to scale to more users and gather more feedback. They will combine external feedback with their own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information, and they’re eager to use this testing period to continue learning and improving Bard.
Google has a long history of using AI to improve Search for billions of people. BERT, one of their first Transformer models, was ground-breaking in its ability to understand the nuances of human language. Two years ago they introduced MUM, which is 1,000 times more powerful than BERT and offers next-level, multilingual understanding of information; it can identify key moments in videos and provide critical information, including crisis support, in more languages.
One of the most exciting opportunities is how AI can deepen their understanding of information and turn it into useful knowledge more efficiently, making it easier for people to get to the heart of what they’re looking for and get things done. When people think of Google, they often think of turning to it for quick factual answers, like “how many keys does a piano have?” But increasingly, people are turning to Google for deeper insights and understanding, like “is the piano or guitar easier to learn, and how much practice does each need?” Learning about a topic like this can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives.
AI can be helpful in moments like these, synthesising insights for questions where there is no single right answer. Soon, AI-powered features in Search will distil complex information and multiple perspectives into easy-to-digest formats, so you can quickly grasp the big picture and learn more from the web, whether that’s seeking out additional viewpoints, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner. These new AI features will begin rolling out in Google Search soon.
Beyond their own products, they believe it’s important to make it easy, safe, and scalable for others to benefit from these advances by building on top of their best models. Next month, they’ll begin onboarding individual developers, creators, and enterprises so they can try their generative language API, initially powered by LaMDA with a range of models to follow. Over time, they intend to create a suite of tools and APIs that will make it easy for others to build more innovative applications with AI. Having the necessary compute power to build reliable and trustworthy AI systems is also important for startups, so they’re excited to help scale these efforts through their recently announced Google Cloud partnerships with Cohere, C3.ai, and Anthropic.
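To give a rough sense of what building on such an API could look like, here is a minimal sketch of a developer calling a generative text endpoint over HTTP. The endpoint URL, request fields, and response schema are hypothetical placeholders, since the API described here was only just being opened to testers; none of this reflects Google’s published interface.

```python
# Hypothetical sketch only: the endpoint URL, request fields, and response
# schema below are illustrative assumptions, not Google's published interface.
import os

import requests

# Placeholder endpoint and key; real access details were not yet public.
API_URL = "https://example.googleapis.com/v1/models/lamda-lite:generateText"
API_KEY = os.environ.get("GENERATIVE_API_KEY", "your-trial-key")


def generate_text(prompt: str, temperature: float = 0.7) -> str:
    """Send a prompt to the hypothetical generative language API and return the text."""
    response = requests.post(
        API_URL,
        params={"key": API_KEY},
        json={"prompt": prompt, "temperature": temperature},
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: {"candidates": [{"output": "..."}]}
    return response.json()["candidates"][0]["output"]


if __name__ == "__main__":
    print(generate_text("Explain the James Webb Space Telescope to a 9-year-old."))
```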
It’s critical that they bring experiences rooted in these models to the world boldly and responsibly. That’s why they’re committed to developing AI responsibly: Google was one of the first companies to publish a set of AI Principles, back in 2018. They continue to engage with communities and subject-matter experts, partner with governments and external organisations to develop standards and best practices, and provide education and resources for their researchers, all to make AI safe and useful.
Whether it’s applying AI to radically improve their own products or making these powerful capabilities available to others, they will continue to be bold with innovation and responsible in their approach. And this is just the beginning; there is more to come in all of these areas in the days, weeks, and months ahead.