Google just announced 20 new AI updates at its annual developer conference, Google I/O 2024. The company called it the Gemini era, a new chapter centered on its flagship AI model, Gemini.
For a large portion of the presentation, Google showed how it is integrating Gemini across its products, largely to help you find and organize information more easily. If you didn't catch the keynote live at Google I/O, no worries, we've got you covered.
1. Gemini in Google Search
Google announced improved multi-step reasoning in Google Search, powered by Gemini. Search breaks a complex question into the sub-problems it needs to solve, works out the order to tackle them, and then reasons over real-world information that updates in real time.
For example, you can find yoga and pilates studios near you with intro offers and specific details in just seconds.
And that's not all Google Search will do for you. There is also a new capability called "Planning in Search."
With Planning in Search, a single query can produce a meal plan, export the ingredients as a list, add your preferred items to a shopping cart, and more. You could even plan an entire trip with one prompt.
2. Gemini in Google Workspace
Google Workspace can now summarize and search your Gmail inbox in a whole new way. Gemini can summarize meetings in Google Meet and generate an entire spreadsheet that might otherwise have taken hours to build on your own.
And that's not all: Gemini can also analyze the data in the spreadsheet and even create a graph to visualize everything.
3. Gemini in Google Photos
Google Photos is getting a little AI update too. It's called Ask Photos, and with it Gemini will be able to answer questions about your photo library, from simple queries to requests like pulling up a photo of a specific person or of your passport.
Google Photos Magic Editor will use generative AI to help you make complex photo edits right from your phone.
4. Music and Videos
For music lovers, Google announced the Music AI Sandbox, which lets you create new instrumental sections from scratch or make entirely new songs. Beautiful, right? And there's something for video lovers too.
Google announced Veo, an all-new generative video model that rivals OpenAI's Sora. With Veo, you can produce high-quality 1080p videos from text prompts and then refine them with additional prompts.
With Google DeepMind's generative video model, you can storyboard, generate longer scenes, and maintain consistency from shot to shot, turning input text into output video. Google says it lets you visualize ideas on a timescale 10 or even 100 times faster than before.
5. Gemini Advanced
Additionally, Google announced Gemini Advanced. Give Gemini Advanced your travel details and preferences, and it will create a vacation itinerary customized to you in a matter of seconds.
By pulling from all kinds of information, like your flight and hotel bookings, Gemini will adjust your travel itinerary on the fly around your time constraints, and you'll be able to tweak details such as the start time yourself. Other notable Gemini Advanced announcements are:
- Upload multiple files, up to an hour-long video, or a PDF of up to 1,500 pages.
- Search for a specific part of that PDF (this only works if you're subscribed to Gemini Advanced).
- Gemini Advanced will also get a new feature called Gems: effectively personal, customized Gemini assistants. You can set one up pretty easily by writing a prompt describing the personality and response style you want.
6. New AI Assistant, Astra
Astra is one of the more exciting announcements from Google I/O 2024. For now it's just a concept, not something you'll be able to use right away.
Google only teased how it will work: it's essentially a new type of AI agent that combines audio with data from your camera, so you can point your phone at something and ask a question about the product or item in your environment.
Astra would be capable of real-time reasoning and quick responses for everyday tasks. You can simply focus your smartphone camera on your surroundings and ask questions about locations, objects, codes, or handwritten diagrams.
Astra would be able to identify whole neighborhoods from a few buildings, describe objects, or explain specific parts of an object, like the tweeter of a speaker. It could even locate misplaced items it saw during a session and tell you exactly where they were last seen, all in real time.
Final Words
Google I/O 2024 was Google going all in on AI, and it can feel a little overwhelming for consumers; there's just so much going on. That's why Google is rolling these features out gradually, giving people time to get used to the new workflows. Once they do, it's going to be a game changer.
Sadly, for many of these features, we'll have to wait weeks or even months before they roll out.