TechRadar
Latest Meta Quest 3 software beta teases a major design overhaul and VR screen sharing – and I need these updates now
- HorizonOS v76 has rolled out to the public test channel
- This beta release is teasing new features Meta is working on
- In future updates we can expect major UI changes and shareable windows
Meta has just launched the HorizonOS v76 update to its public test channel, and the beta software is already teasing some massive changes for how you can use your Meta Quest 3 headset to virtually socialise.
Firstly, Meta is putting your Horizon avatar front and center in video calls, finally unlocking the selfie camera – a feature it first teased back at Meta Connect 2022. You could previously take Zoom meetings from your virtual workspace, but with update v76, you’ll be able to use your Meta avatar in more casual video calls through WhatsApp and Messenger.
Avatar Selfie Cam UI in Meta Quest/Horizon OS v76 PTC. Doesn't seem to be enabled out of the gate though. pic.twitter.com/zOG0aya5Ni (March 22, 2025)
In Settings, you can see your Selfie cam options to adjust how narrow or wide the virtual camera is, and you can select a static background that will appear behind your character.
Then, when you join video calls while using your headset, other people will see your avatar moving as you move.
‘In-development’ is definitely the key description here, though: people who have tested the tool say the Selfie cam still feels very limited, so it might take a little while before it reaches the wider HorizonOS public release.
Further, when it does, Meta might move it to be an ‘experimental feature,’ which is a designation given to features that are available in the full HorizonOS release, but that might be a little buggy still.
Strings in Quest/Horizon OS v76 PTC suggest that Meta is working on the ability to share windows with other users in Horizon Home (and possibly Worlds). This will likely work similarly to SharePlay on visionOS. pic.twitter.com/ZudymM05XJ (March 22, 2025)
Update v76 in the PTC also contains hidden details about the ability to share your screen with other Meta Quest users.
The feature isn’t live yet, but code strings (discovered by Luna) suggest that 2D window panels will gain a ‘share’ and ‘unshare’ button so you can show other people in Horizon Home or Horizon Worlds (and maybe other multiplayer apps) what you’re looking at in your browser.
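String hunts like Luna's typically work by treating the app package as the ZIP archive it is and scanning its contents for telltale text. Here's a rough sketch of the idea in Python (the file names and keywords below are hypothetical, not from the actual Quest build):

```python
import re
import zipfile

def find_feature_strings(package_path, keywords):
    """Scan files inside an app package (a ZIP archive) for UI strings
    that hint at unreleased features. Keywords are plain substrings."""
    pattern = re.compile("|".join(re.escape(k) for k in keywords), re.IGNORECASE)
    hits = []
    with zipfile.ZipFile(package_path) as pkg:
        for name in pkg.namelist():
            data = pkg.read(name)
            # Crude pass: pull printable ASCII runs out of the raw bytes,
            # the same trick the classic `strings` utility uses.
            for run in re.findall(rb"[ -~]{4,}", data):
                text = run.decode("ascii")
                if pattern.search(text):
                    hits.append((name, text))
    return hits
```

Matches like `panel_share_button` in a resource file would be exactly the kind of evidence that fuels reports like this one.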
The Quest 3 already has the ability to screenshare YouTube content, and this release seems like a more general rollout of that bespoke feature so other 2D apps can be shared.
Given its current state in the PTC update, screen sharing might be an update or two away. However, when it does arrive, it might be joined by a massive UI overhaul.
Codenamed ‘Navigator’, the new layout is the redesign Meta demoed at Meta Connect 2024, and Luna shared a short, five-second clip of a tutorial for it.
Meta teased "the future of Horizon OS" at Connect today, showing a concept of a complete redesign. Details here: https://t.co/nYX2CfKeXt pic.twitter.com/Knavsn3p54 (September 26, 2024)
Luna added that it’s expected to drop in v77 or later, so it’s still a release or two away, but these first hints suggest the overhaul is approaching.
We’ll have to wait and see if this UI overhaul is what Quest 3 has been needing all along or one of those terrible changes that'll have us begging Meta to put everything back the way it was.
From what we’ve seen, it should be the former, but we won’t know until the Navigator UI is available for everyone to test (hopefully later this year).
ChatGPT is down for many – here's what's going on
ChatGPT is down for many, especially users in the US, where prompts are returning error messages across multiple models, including 4o and o3-mini.
We first noticed the issue on a number of our own ChatGPT queries and then confirmed that others are reporting similar issues with the artificial intelligence platform on Downdetector. The service, which tracks outages across the web, notes outage reports starting at roughly 9AM ET and growing since then.
ChatGPT and Sora most recently suffered a major outage late last year but have been fairly stable since then.
This story is developing...
No answers
This is what the outage looks like when you're trying to get a prompt response from ChatGPT 4o.
What's notable here is that the platform is not down but ChatGPT's ability to answer after ingesting a prompt appears compromised.
ChatGPT has yet to acknowledge any issues on its X (formerly Twitter) feed but we'll keep an eye on it for updates.
Up not down
Downdetector's report tracker for OpenAI services has been climbing steadily since roughly 9AM ET.
It's worth noting that the service is tracking all OpenAI services and not just ChatGPT. However, most reports we're seeing elsewhere only point to ChatGPT as the primary culprit.
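Outage tracking of the Downdetector sort boils down to spotting a spike in user reports against a recent baseline. Here's a toy sketch of that logic (the window size and thresholds are my own illustrative assumptions, not Downdetector's actual method):

```python
from statistics import mean

def is_outage(report_counts, window=6, factor=3.0, floor=20):
    """Flag an outage when the latest per-interval report count is far
    above the recent baseline. `report_counts` is ordered oldest to newest."""
    if len(report_counts) <= window:
        return False  # not enough history to establish a baseline
    baseline = mean(report_counts[-window - 1:-1])
    latest = report_counts[-1]
    # Require both a multiplicative spike and an absolute floor, so a
    # jump from 1 report to 4 reports doesn't raise a false alarm.
    return latest >= floor and latest >= factor * max(baseline, 1)
```

A quiet baseline of a handful of reports followed by a sudden surge into the hundreds is the pattern being described above.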
The outage may not be global since our counterparts in the UK report no issues with processing prompts.
We're seeing the errors on both the desktop site and the iOS app.
It looks like Microsoft might have thought better about banishing the Copilot AI shortcut from Windows 11
- Windows 11 could again be graced with the ‘Windows key + C’ shortcut to summon Copilot
- This keyboard shortcut was removed from Windows 11 last year for reasons that it’s difficult to fathom, frankly
- Microsoft now seems to have had second thoughts about banishing the shortcut – or so a rumor suggests
There’s some potentially good news for Windows 11 users who miss the old keyboard shortcut that invoked Copilot, as early hints have been dropped that this functionality could be reinstated by Microsoft.
The keyboard shortcut in question – which is ‘Windows key + C’ – essentially served as a substitute for those who don’t have a dedicated Copilot key (as seen on Copilot+ PCs like the new Dell XPS 13) to summon the AI assistant.
According to PhantomOfEarth on X, a regular source of gossip and leaks for Windows, Microsoft is “experimenting with bringing back” this keyboard combination.
Microsoft is experimenting with bringing back the Windows key + C keyboard shortcut. It will do the same action as the Copilot key, so can be customized in Settings. "Choose what happens when you press the Copilot key or Windows logo key + C" (March 23, 2025)
I assume that PhantomOfEarth has uncovered clues in a recent preview build of Windows 11 to indicate this process is underway, but they don’t make that clear.
As noted, just like the Copilot key, you will be able to customize the function of this shortcut in Settings. So, if you don’t use the AI assistant, you can get Windows + C to perform a different action more useful to your particular way of working.
There’s actually quite a backstory here that you may not be aware of. Many moons ago, Windows + C was used to fire up Cortana, but when that AI assistant was ditched (back in 2023), Microsoft transferred the shortcut to Copilot.
Then, in what was a puzzling twist at the time, in the middle of 2024 the keyboard shortcut was decoupled from Copilot – the reasons for that being best known to Microsoft. As I observed at the time, the more cynical might suggest that it did lend extra value to the convenience of that dedicated Copilot key, effectively making that more of a (slight) selling point for Copilot+ laptops, at least in theory.
(I should also note that Windows Latest, which spotted the above post, points out that back at the time, Microsoft did argue that using the Windows key in conjunction with the “number position for Copilot pinned to your taskbar” is a “great way to open Copilot,” rather than Windows + C. Still, what’s the harm in having another – more convenient in my book – way of doing that? It wasn’t like Windows + C got another more important use allocated to it when Copilot was removed from the shortcut – it just didn’t do anything).
At any rate, whatever the reasoning was back then, Microsoft is seemingly going to reverse course now – and it’s about time. Well, I say that in the full knowledge that no AI chickens should be counted just yet – not until we actually see this keyboard shortcut reemerge officially in testing as a way to bring up the Copilot app. For now, this remains just a hint that Microsoft is busy reintroducing this small but potentially useful feature.
Of course, even if you don’t use Copilot in Windows 11, you’ll likely appreciate having the ability to redefine the shortcut to something else, rather than the combo lying dormant. Although I’m guessing this ability could come laden with the same limitations as remapping the dedicated Copilot key to another function, namely that only certain apps can be linked to it – but who knows, maybe that won’t be the case.
You may also like...Samsung's rumored smart specs may be launching before the end of 2025
- A pair of Samsung smart specs are on the way
- They could launch before the end of 2025
- Samsung is also developing a bigger XR headset
We know that Samsung is busy working with Google on an Android XR (extended reality) headset known as Project Moohan, but it seems that some AR (augmented reality) smart specs are also in the pipeline – and could be launching before the end of the year.
A new report from South Korean outlet ET News (via @Jukanlosreve) suggests that these smart glasses are being developed under the codename Haean, and that features and specs are currently being finalized.
One of Samsung's priorities, according to the report, is producing a design that fits every face shape and structure. Meanwhile, gesture support is said to be included with the specs, to reduce the number of buttons needed on the device itself.
There aren't any more details in this particular report, but it does say that the Samsung smart glasses could well be unveiled alongside the Android XR headset – which Samsung has told us much more about so far.
Specs and pricing
Report: Samsung Developing Smart Glasses Aimed for Year-End Reveal. According to reports from South Korean media, Samsung is currently developing smart glasses with a target of unveiling them by the end of the year. The company has launched a project codenamed “HAEAN” and is… (March 23, 2025)
After a few false starts – Google Glass, anyone? – it feels as though there's now some momentum behind the idea of smart glasses as a product, with the Ray-Ban Meta Smart Glasses currently leading the way.
It would seem Samsung wants a part of this smart specs action with a product of its own. Rumors around such a device have been floating around for years at this point, with the name Samsung Glasses mentioned in a trademark filed in 2023.
These upcoming smart glasses are most likely going to be powered by a Qualcomm chip, and come with an integrated camera. We've seen rumors suggesting Samsung is aiming for an affordable price point, which would of course be welcome.
There had been suggestions that the specs would make an appearance alongside the Samsung Galaxy S25 at the Unpacked event in January. That obviously didn't happen, but it seems we will see them sometime in the next nine months.
Windows 11 should soon be faster at extracting files from compressed ZIPs – and it’s about time, frankly
- Windows 11 has a new preview build that improves performance with ZIPs
- Unzipping now happens faster in File Explorer, particularly with ZIPs crammed with a ton of small files
- Complaints about sluggish performance with unzipping have been around for quite some time, though, and this fix has been a long time coming
Windows 11 has a new preview out and it does some useful – albeit long-awaited – work in terms of accelerating the rate at which files are pulled out of ZIPs within File Explorer, plus there are some handy bug fixes here – and a minor feature that’s been ditched.
All this is happening in Windows 11 preview build 27818 (which is in the Canary channel, the earliest external test build).
As mentioned, one of the more notable changes means you’ll be able to extract files from ZIPs, particularly large ZIP archives, at a quicker pace in File Explorer.
A ZIP is a collection of files that have been lumped together and compressed so they take up less space on your drive, and unzipping such a file is the process whereby you copy those files out of the ZIP.
File Explorer – which is the name for the app in Windows 11 that allows you to view your folders and files (check here for a more in-depth explanation) – has a built-in ability to deal with such ZIP files, and Microsoft has made this work faster.
Microsoft explains in the blog post for this preview build: “Did some more work to improve the performance of extracting zipped files in File Explorer, particularly where you’re unzipping a large number of small files.”
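For a sense of what File Explorer is doing under the hood, here's a minimal sketch of the same operation using Python's standard zipfile module; the many-small-files case is slow precisely because every tiny entry incurs its own file-creation and write overhead:

```python
import zipfile
from pathlib import Path

def extract_zip(archive_path, dest_dir):
    """Extract every entry of a ZIP archive into dest_dir and return
    the number of entries written."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        # Each small file is a separate create-decompress-write cycle,
        # which is where per-file overhead dominates for big archives.
        zf.extractall(dest)
        return len(zf.namelist())
```

This is only a sketch of the general operation, not Microsoft's implementation; the build 27818 change is internal to File Explorer.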
It’s worth noting that this is a performance boost that only applies to File Explorer’s integrated unzipping powers, and not other file compression tools such as WinRAR or 7-Zip (which, in case you missed it, are now natively supported in Windows 11).
Elsewhere in build 27818, Microsoft has fixed some glitches with the interface – including one in File Explorer, where the home page fails to load and just shows some floating text that says ‘Name’ (nasty) – and a problem where the remote desktop could freeze up.
There’s also a cure for a bug that could cause some games to fail to launch after they’ve been updated (due to a DirectX error), and some other smoothing over of general wonkiness like this.
Finally, Microsoft informs us that it has deprecated a minor feature here. The suggested actions that popped up when you copied a phone number (or a future date) in Windows 11 have been disabled, so these suggestions are now on borrowed time.
Windows Latest spotted the change to improve ZIP performance in File Explorer in this preview and tested the build, observing that speeds did indeed seem to be up to 10% faster with larger, file-packed ZIPs.
Clearly, that’s good news – and it’s great to see Microsoft’s assertion backed up by the tech site – but at the same time, this is more about fixing poor performance levels, rather than providing super-snappy unzipping.
Complaints about File Explorer’s unzipping capabilities being woefully slow in Windows 11 date back some time, particularly in scenarios where loads of small files are involved – so really, this is work Microsoft needs to carry out rather than any kind of bonus. If Windows Latest’s testing is on the money, a 10% speed boost (at best) may not be enough to placate these complainers, either, but I guess Microsoft is going to continue to fine-tune this aspect of File Explorer.
There are plenty of other issues to iron out with File Explorer too, as I’ve discussed recently – there are a fair few complaints about its overall performance being lackluster in Windows 11, so this is a much broader problem than mere ZIP files.
Furthermore, Microsoft breaking File Explorer for some folks with last month’s February update doubtless didn’t help any negative perceptions around this central element of the Windows 11 interface.
The Bella Ramsey Apple Intelligence ad that disappeared, and why Apple is now facing a false advertising lawsuit
- Apple is facing a lawsuit due to false advertising
- Clarkson Law Firm claims Apple 'misled customers' with Apple Intelligence-powered Siri
- It comes after Apple delayed the AI-powered voice assistant
Apple Intelligence continues to dominate headlines for everything but its AI capabilities, as Apple now faces a lawsuit for false advertising over its AI-powered Siri.
The lawsuit, which Axios originally reported, claims Apple has falsely advertised its Apple Intelligence software that launched alongside the iPhone 16 lineup of smartphones.
The lawsuit claims that Apple has misinformed customers by creating "a clear and reasonable consumer expectation that these transformative features would be available upon the iPhone's release".
Now, six months after the launch of the iPhone 16 and iPhone 16 Pro, some of the Apple Intelligence features showcased in promotional campaigns have been delayed, with no expected release schedule.
Most notably, the lawsuit highlights an ad starring The Last of Us actor, Bella Ramsey, where Ramsey showcased Siri's AI capabilities including personal context and on-screen awareness to help them schedule appointments. That ad, which was available from September, has now been removed from YouTube following the announcement of Siri's delay.
Filed in San Jose, California, by Clarkson Law Firm, which has previously sued Google and OpenAI, the lawsuit targets Apple's iPhone features that haven't shipped yet and not the capabilities of Apple Intelligence features like Genmoji that have.
You can read the full lawsuit online, but the key argument reads, "Contrary to Defendant's claims of advanced AI capabilities, the Products offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance. Worse yet, Defendant promoted its Products based on these overstated AI capabilities, leading consumers to believe they were purchasing a device with features that did not exist or were materially misrepresented."
We'll have to wait and see if anything comes of this legal battle, but considering Apple has only delayed Siri's upgrade, we could see the AI improvements launch before anything comes to pass.
Apple Intelligence's redemption arc
Just yesterday, reports of a Siri leadership shakeup started to surface. And, with exec Mike Rockwell expected to be named as the person to oversee the launch of Siri's AI upgrade, there's reason to be optimistic.
Rockwell is known for his impact in bringing Apple Vision Pro to market, and it shows a real effort from the company to overhaul the current Siri approach so that consumers finally get the capabilities promised.
If Rockwell's direction can get Siri back on track, then Apple Intelligence as a whole could still be a success. After all, once the dust settles, if Apple has a capable AI offering in its smartphones, we'll all quickly forget about the lawsuits and the bad press.
That's not to say we shouldn't hold Apple accountable for advertising features that are still not available on a device six months after launch, but if any company deserves a chance at redemption it's the Cupertino-based firm.
Google’s NotebookLM adds Mind Maps to its string of research tools to help you learn faster than ever
- Mind Maps are rolling out to NotebookLM
- The new feature works in both the paid-for and free versions
- NotebookLM is shaping up to be one of our favorite learning tools
Hot on the heels of its announcement that NotebookLM's Audio Overviews are now available in Gemini, Google has revealed that a new feature, Mind Maps, will now be available as an option in NotebookLM.
Mind maps are great at helping you understand the big picture of a subject in an easy-to-understand visual way. They consist of a series of nodes, usually representing ideas, with lines that represent connections between them.
The beauty of mind maps is that they show you the connections between ideas in a way that helps make those connections more obvious.
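Conceptually, a mind map is just a tree-shaped graph: nodes for ideas, edges for the connections between them. Here's a minimal sketch of that structure (the class and method names are illustrative, not NotebookLM's implementation):

```python
class MindMap:
    """A tiny mind map: each node is an idea, edges link related ideas."""

    def __init__(self, root):
        self.root = root
        self.children = {root: []}

    def add(self, parent, idea):
        """Attach a new idea beneath an existing one."""
        self.children.setdefault(parent, []).append(idea)
        self.children.setdefault(idea, [])

    def branch(self, node):
        """Return a node and everything beneath it, i.e. expand a branch."""
        out = [node]
        for child in self.children.get(node, []):
            out.extend(self.branch(child))
        return out
```

Expanding and collapsing branches in NotebookLM's viewer corresponds to walking or hiding exactly this kind of subtree.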
Another string to its bow
NotebookLM is Google’s AI research helper. You feed it articles, documents, even YouTube videos, and it produces a notebook summarizing the main points of the subject; you can then chat to it and ask questions, as you would a normal AI chatbot.
Its best feature is that you can also generate an Audio Overview in NotebookLM, which is an AI-generated podcast between two AI hosts that discusses the subject, so you can listen to it and absorb the key points while doing something else at the same time. The Audio Overview can sound so natural it’s hard to believe you’re not listening to two humans talking!
Now Mind Maps have been added as another string to NotebookLM’s bow for helping you absorb information. They work in either the standard free version of NotebookLM or the paid-for Plus version.
To generate a Mind Map you simply open one of your notebooks in NotebookLM, or create a new one, then click on the new Mind Map chip in the Chat window (the central panel).
Once you’re viewing your Mind Map (it appears in the Studio panel once it has been generated) you can zoom in or out, expand and collapse branches, and click on nodes to ask questions about specific topics.
NotebookLM is shaping up to be an essential tool for students who have a lot of information to digest, and don’t necessarily read very quickly. You can get the AI to do a lot of the legwork for you and present you with the key bits of information, and Mind Maps are just another way for NotebookLM to help you on your path to better understanding.
Would you pay for better sound on YouTube? The video-sharing platform could soon let you control audio quality, but it'll cost you
- YouTube might be adding another feature behind its Premium paywall
- A new report says an audio quality control with three levels is arriving
- It could bolster the Premium feature set but continue the trend of putting more features behind a membership
YouTube is seemingly pulling out all the stops to remain at the top of the streaming game for both video content and YouTube Music, and while it answered our requests for adjustable video quality a few years ago, the platform has yet to offer the same for audio.
However, this could be on the horizon for YouTube, as new hints point to a forthcoming feature that would allow you to control audio quality when watching videos.
Thanks to Android Authority, which has spotted new strings in the YouTube beta app, there’s fresh evidence that hints at YouTube’s next big upgrade. It would essentially give you the liberty to adjust the audio quality of whatever video you're watching.
These could come in three options: Normal, YouTube’s standard audio; High, an improved bitrate option; and Auto, which could simply pick a setting automatically depending on your internet speed. It seems too good to be true, doesn’t it? Well, with YouTube, there’s always a catch.
According to Android Authority’s findings, YouTube’s audio quality feature will only be available for those who are subscribed to YouTube Premium, and even then, there’s a possibility that this feature may only be applicable to certain videos in its endless library of content.
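If the Auto tier works like adaptive video quality, it would pick the best bitrate the connection and subscription allow. Here's a toy sketch of that selection logic (the tier names come from the report; the bitrates and thresholds are my own assumptions):

```python
def pick_audio_quality(setting, bandwidth_kbps, is_premium):
    """Resolve a user's audio setting to a bitrate tier in kbps.
    'high' is gated behind Premium, per Android Authority's findings."""
    tiers = {"normal": 128, "high": 256}  # illustrative bitrates
    if setting == "high" and is_premium:
        return tiers["high"]
    if setting == "auto":
        # Leave generous headroom so audio never starves the video stream.
        if is_premium and bandwidth_kbps >= 4 * tiers["high"]:
            return tiers["high"]
        return tiers["normal"]
    # 'normal', or 'high' requested without Premium, falls back here.
    return tiers["normal"]
```

The interesting design question is the fallback: non-Premium users asking for High would presumably just get Normal, exactly the kind of soft paywall described above.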
It’s hard to pinpoint when YouTube will launch this feature since it only exists as a few lines of code at the moment, but if YouTube decides to proceed with it, it could be one of the platform’s most notable upgrades of the past few years.
It seems as though YouTube will do almost anything to get more people signed up for its YouTube Premium service, and these attempts to lure you in have been cropping up quite frequently. A few weeks ago, YouTube launched its cheaper YouTube Premium Lite tier in the US, packing ad-free content on ‘most videos’ but excluding offline or background video playback.
For as long as I can remember, adjustable video quality settings have been part of YouTube’s array of video enhancements, but they have had no effect on audio playback. The audio quality of YouTube videos has always depended on the uploader, so if the audio control rumors are true, it could do wonders to get more audiophiles to jump on the YouTube Premium bandwagon.
This AI app claims it can see what I'm looking at – which it mostly can
- Hugging Face has launched HuggingSnap, an iOS app that can analyze and describe whatever your iPhone's camera sees.
- The app works offline, never sending data to the cloud.
- HuggingSnap is imperfect but demonstrates what can be done entirely on-device.
Giving eyesight to AI is becoming increasingly common, as tools like ChatGPT, Microsoft Copilot, and Google Gemini gain the ability to analyze what a camera sees. Hugging Face has just dropped its own spin on the idea with a new iOS app called HuggingSnap that offers to look at the world through your iPhone’s camera and describe what it sees without ever connecting to the cloud.
Think of it like having a personal tour guide who knows how to keep their mouth shut. HuggingSnap runs entirely offline using Hugging Face’s in-house vision model, SmolVLM2, to enable instant object recognition, scene descriptions, text reading, and general observations about your surroundings without any of your data being sent off into the internet void.
That offline capability makes HuggingSnap particularly useful in situations where connectivity is spotty. If you’re hiking in the wilderness, traveling abroad without reliable internet, or simply in one of those grocery store aisles where cell service mysteriously disappears, then having the capacity on your phone is a real boon. Plus, the app claims to be super efficient, meaning it won’t drain your battery the way cloud-based AI models do.
HuggingSnap looks at my world
I decided to give the app a whirl. First, I pointed it at my laptop screen while my browser was open to my TechRadar biography. At first, the app did a solid job transcribing the text and explaining what it saw. However, it drifted from reality when it read the headlines and other details around my bio. HuggingSnap thought the references to new computer chips in a headline were an indicator of what's powering my laptop, and seemed to think some of the names in headlines indicated other people who use my laptop.
I then pointed my camera at my son's playpen full of toys I hadn't cleaned up yet. Again, the AI did a great job with the broad strokes in describing the play area and the toys inside. It got the colors and even the textures right when identifying stuffed toys versus blocks. It also fell down in some of the details. For instance, it called a bear a dog and seemed to think a stacking ring was a ball. Overall, I'd call HuggingSnap's AI great for describing a scene to a friend but not quite good enough for a police report.
HuggingSnap’s on-device approach stands out from your iPhone's built-in abilities. While the device can identify plants, copy text from images, and tell you whether that spider on your wall is the kind that should make you relocate, it almost always has to send some information to the cloud.
HuggingSnap is notable in a world where most apps want to track everything short of your blood type. That said, Apple is heavily investing in on-device AI for its future iPhones. But for now, if you want privacy with your AI vision, HuggingSnap might be perfect for you.
Siri's chances to beat ChatGPT just got a whole lot better
- Apple is shaking up its Siri development team
- Mike Rockwell, the man behind Vision Pro, is taking over from John Giannandrea
- Could this be the pivotal change Apple needed to bring an upgraded Siri to market?
There's a major shake-up at the top of the Apple food chain as Tim Cook opts for a new leader to help the company bring Apple Intelligence-powered Siri to life.
Following reports from earlier this week that Apple Intelligence would be a focal point of Apple's off-site Top 100 exec meeting, Bloomberg's Mark Gurman is now reporting that AI head John Giannandrea is no longer going to be overseeing Siri's overhaul.
According to Gurman, "[Apple CEO] Tim Cook has lost confidence in the ability of AI head John Giannandrea to execute on product development, so he’s moving over another top executive to help: Vision Pro creator Mike Rockwell."
Rockwell will now be in charge of Siri, according to Gurman's sources, who have asked not to be named, and he'll report directly to software chief Craig Federighi.
With Giannandrea no longer in charge of Siri, he'll now be working on other AI projects. The decision was made during the Apple Top 100 meeting and is expected to be confirmed to employees later this week.
Assuming Gurman is right, this major exec shift comes at a pivotal time for Apple – once a pioneer in the voice assistant game but now just a mere passenger – as it tries to devise a solution for Siri.
Apple had advertised its flagship iPhone 16 Pro as the best device for Apple Intelligence, yet nearly seven months after its launch, customers have yet to see the real benefits of AI.
With Apple's Siri delay a public disaster, this shift in leadership could be the catalyst for success that's needed to make the personal context-capable Siri a reality.
Siri's redemption arc?
At WWDC 2024 in June, we got to hear directly from Giannandrea about his vision for the then-newly-announced Siri. He said Siri with Apple Intelligence has a "rich understanding of what’s on your device," and that the voice assistant's knowledge base would "only get richer over time."
Nearly a year later, Siri still can't tell you what month of the year it is and definitely doesn't understand what's on your device.
Rockwell, who can take credit for Apple's Vision Pro development, has the potential to get Siri's development back on track. While the mixed reality headset hasn't necessarily been a commercial success, it does achieve some incredible feats.
Tim Cook is likely counting on Rockwell's ability to innovate, as evidenced in the Vision Pro, to make Siri as good as the company advertised back at WWDC. And, if he does, then all the naysayers will have to accept that Apple truly is back on track.
Boston Dynamics is using Nvidia tech to make Atlas a better robot than C-3PO ever was – and it's about time
- Boston Dynamics' Atlas robot can do cartwheels now
- The robot's more fluid, dynamic moves are possible thanks to Boston Dynamics' robotics expertise and Nvidia's models
- Atlas looks a lot more like C-3PO now and moves a lot more like a human
I get it, Blue, the adorable robot collaboration between Nvidia, Google, and Disney, captivated hearts, but I've seen something better and more practical from Boston Dynamics that's based on many of the same Nvidia foundational models. Further, it's a better indicator of the next big step – or cartwheel – in humanoid robotics.
Boston Dynamics was an early adopter of Nvidia's Project GR00T, and now it has deepened the partnership by tapping into multiple Nvidia platforms, including the Jetson Thor computing platform and Isaac Lab, which uses Nvidia's Isaac Sim and Omniverse technologies, to help drive its stunning, all-electric Atlas humanoid robot.
Jetson Thor is paired with Atlas's body and manipulation controllers to tap into multimodal AIs, and the Isaac Lab framework is used to help the robot learn in virtual environments.
All of this helps with motion and adapting to unforeseen or at least unexpected environments, which can also improve the safety of a humanoid robot that might one day work alongside you.
It would be hard to conceptualize the benefits of all that deep technology if it weren't for this video.
In the latest Atlas demonstration, the 6-foot-tall, 330-pound all-electric humanoid robot crawls, runs, rolls, performs a can opener move (ask your break-dancing parents), and cartwheels.
The series of moves was so shocking that I had to ask if the video had been sped up to make everything look smoother. Representatives for Boston Dynamics confirmed the video is running at normal speed.
As I watched the video and imagined all the virtual training necessary to pull off the live moves, it occurred to me that we've reached a tipping point.
Step aside, C-3PO
Sure, the hydraulic Atlas could do parkour and backflips, but it didn't look much like us. The electric Atlas is a different story. Its physiology is decidedly human. The head lacks a true face, but it's clearly a head, and the body proportions are all normal, if a bit beefed up to bodybuilder size. Remember, it's 330 pounds.
In other words, Atlas is finally looking a lot more like C-3PO. Now, there are a lot of new humanoid robots from Tesla (Optimus), Figure AI (Figure 01), 1X (Neo Gamma), and Unitree (Unitree G1).
With the exception of G1, these robots are mobile disappointments. None of them move in truly fluid and convincing ways. Their steps are halting, their motions stutter, and sometimes there are significant pauses between actions that humans usually string together like so many shiny pearls.
Most, in fact, move like C-3PO. To be fair, that Star Wars protocol droid was actor Anthony Daniels in a stiff plastic suit, gamely trying not to succumb to the African desert heat. Even so, the robot became an icon and the template for nearly five decades of humanoid robot dreams. Perhaps that's why people are so excited about all those other robots, even if they shouldn't be.
Atlas is different, and I think it's the combination of Boston Dynamics' decades in robotics engineering (the company's robots were competing in robotics challenges years before most of these other companies entered the space) and Nvidia's powerful silicon and foundational models that are making the difference.
It's not enough to build a robot that can move and perform basic tasks. Most of the other robot competitors know this and have partnered with Google and OpenAI to gain access to their AI multi-modal models, but I think they're playing catchup.
If humanoid robotic development were a horse race, I'd put my money on Boston Dynamics and Nvidia. Together, they'll likely bring us a legion of factory and, eventually, home robots that all do literal cartwheels around us and make us wonder what we saw in C-3PO in the first place.
Microsoft is adding a powerful new feature for using Xbox controllers with Windows 11
- A new Gamepad keyboard is available in the latest Windows 11 preview build
- It features "button accelerators" optimized for handheld gaming PCs
- The update should be arriving for everyone in the coming weeks
Microsoft is implementing a new virtual keyboard for use with the Xbox Wireless Controller, which will make Windows 11 easier to navigate, especially on handheld devices.
The new gamepad keyboard layout for Windows 11 is now available in the Windows 11 preview build (26100.3613), with a promised "gradual rollout" that will see the feature reach every user over the coming weeks.
In Microsoft's own words: "This change introduces the ability to use your Xbox controller to navigate and type." It includes the use of "button accelerators" (with some buttons used for inputs such as backspace and spacebar) and the "keyboard keys have been vertically aligned" for "better controller navigation patterns".
It's the "button accelerators" that appear to be the biggest shortcut, as well as the compact layout aimed at handheld players. LT (the left trigger) is mapped to a secondary symbols key (&123), with the capitalization key mapped to L3 (clicking the left stick), and the start button serving as the enter key.
The layout should sound familiar to those PC gamers who are used to SteamOS, which is available on the Steam Deck and Steam Deck OLED gaming handhelds. Valve's software is optimized for handheld use straight out of the box in a way that Windows 11 just hasn't been when implemented on some of the best gaming handhelds like the Asus ROG Ally X and the Lenovo Legion Go.
With the new Gamepad keyboard still yet to be fully released, some quality-of-life features are yet to be implemented. You're currently unable to log in to Windows 11 with an Xbox Wireless Controller, and the new keyboard doesn't appear to automatically pop up when entering text fields yet (via The Verge). However, it's a step forward in making Windows 11 a more palatable experience for gaming handhelds.
The biggest complaint about using Windows 11-based gaming handhelds has been the fact that the operating system is not designed for the hardware. We've seen this with launchers (such as Steam, Epic Games, GOG Galaxy, and Ubisoft Connect) being less-than-stellar with touchscreen controls, a keyboard that's sluggish to use, and text that can be too small to read, among other issues.
It was recently announced that SteamOS support would begin rolling out to non-Steam Deck handhelds, giving them an alternative to Windows 11. SteamOS 3.7.0 promised a "beginning" to that implementation, and we've seen promising things from the Lenovo Legion Go S, which forgoes Windows 11 for Valve's software instead. This handheld has the option for both operating systems, as well as the Ryzen Z2 Go chip, which outpaces the older custom RDNA 2 architecture in Valve's current handhelds.
As such, Microsoft will need to continue optimizing its latest operating system for the handheld market if it wants to keep competitive in this particular PC arms race. A better keyboard for controllers in Windows 11 is just the start, but a welcome one, and here's hoping future updates can continue to keep up.
AI is taking over your favorite fast food restaurants as Taco Bell, Pizza Hut, and KFC team up with Nvidia - 500 locations by the end of 2025
- Yum! Brands announces official partnership with Nvidia
- 500 new Taco Bell, Pizza Hut, KFC, and Habit Burger locations will get AI drive-thrus by the end of 2025
- AI is expected to make fast food restaurants more efficient, but is that a good thing?
Last year, Yum! Brands, the company behind Taco Bell, Pizza Hut, and KFC, introduced AI to some drive-thru locations. Now the company plans to roll out AI ordering at 500 different restaurants later this year.
If you've been to one of the more than 100 AI-powered restaurants across 13 US states, you may already have ordered a Crunchwrap Supreme by speaking to AI. For the rest of us who've yet to experience an AI server, your closest Taco Bell, Pizza Hut, KFC, or Habit Burger might offer the service soon.
In a press release, Yum! Brands announced the new AI partnership with Nvidia that will see artificial intelligence implemented in some of the company's 61,000 restaurants across the globe. In fact, the company is "Nvidia's first AI restaurant partner."
"Yum! and Nvidia are planning to transform the future of dining by unlocking scalable AI applications quickly, reliably and affordably." But what does that mean exactly?
Well, restaurants chosen for this AI rollout will receive voice-automated order-taking AI agents which the company says will "advance drive-thru and call center operations with conversational AI."
Elsewhere, the operations side of the restaurants will be improved by AI, with Yum! stating AI will help with analytics to ensure better-performing locations.
AI drive-thrus, yes please (under certain circumstances)
Now, I love fast food just as much as the next person. And yet I hate drive-thrus, because I often get a sense of dread as I roll down my window to speak to someone over an intercom.
Will replacing drive-thru employees with AI make the fast food experience better for me? I guess that depends on whether you think human comprehension is better than artificial intelligence's.
Personally, I'm all for more efficient fast food restaurants, as long as the use of AI doesn't replace human workers. If this partnership with Nvidia allows Yum! to lay off thousands of employees, then I fear we'll see a mass exodus of restaurant workers over the next few years.
As always, use AI to complement and facilitate your employees' jobs and you're onto a winner. Use AI to replace humans, and that's the dangerous territory that gives reason to the AI skeptics out there.
One thing's for sure: if you hate AI, you might have just unlocked an epic fitness hack, because now there's a chance you'll never want to order at your local KFC again.
Embarrassing Windows 11 bug that deleted Copilot app is now fixed – but will anyone outside of Microsoft care?
- A bug in recent updates for Windows 11 and 10 accidentally deleted Copilot
- Microsoft has swiftly fixed this and reinstated the Copilot app
- The company will doubtless be looking to forget this odd episode in the AI assistant’s history just as swiftly
Microsoft has rushed out a patch to put Copilot back into Windows 11 (and Windows 10), after the latest round of updates for its operating systems deleted the app for the AI assistant (for some users, anyway).
In what’s one of the more head-scratching bugs we’ve seen from Microsoft in the recent past – and it has some competition there, make no mistake – the key introduction for Windows 11 as far as AI is concerned was accidentally removed from some PCs.
Windows Latest noticed the fix has now landed, and observed that Microsoft updated its support documents for affected versions of Windows, which are Windows 11 24H2 and 23H2, and also Windows 10.
Microsoft tells us: “This issue has been fixed, and the affected devices are being returned to their original state.”
So, if the Copilot app has gone missing from your desktop, it will soon be returned to its rightful place, although it may take a little time for the cure to be pushed to all affected systems.
As Microsoft also notes, if you can’t wait, you can manually reinstall the Copilot application yourself. You’ll find it in the Microsoft Store (and once it’s installed again, you can also manually pin it back on the taskbar).
Windows Latest also observed that Microsoft kept this one rather under the radar, keeping its known issue updates (including the resolution) just to the respective support documents for Windows versions, rather than flagging this in the overarching Windows health dashboard.
That’s not surprising, though, and indeed the speedy fix is no surprise either. Let’s face it, this was a red-alert-level of embarrassment here – Microsoft is pushing hard to drive Copilot adoption, so ditching the AI app mistakenly from some Windows 11 devices was rather shooting itself in the foot, to say the least.
Clearly enough, it wasn’t a difficult fix, and at any rate, as Microsoft has pointed out, it wasn’t difficult to rectify the problem yourself simply by reinstalling the Copilot app manually.
As a final thought, here's an interesting question to ponder: how many people with affected PCs even noticed the AI assistant was missing? If you never summon the Copilot app, you might not have noticed the icon going missing from the taskbar. I'm betting a fair few people fall into that category…
That said, it should be noted that as far as I’m aware, only a relatively small set of Windows 11 (and 10) users were hit by the vanishing Copilot bug in the first place, so the overall impact was likely to be limited, anyway. As mentioned, this is more of a PR embarrassment for Microsoft than anything else, but it’s certainly a weird error to have occurred.
Stability AI’s new virtual camera turns any image into a cool 3D video and I’m blown away by how good it is
Stability AI's video tools have infused text and images with movement and life for a few years, but now the company is literally adding a new dimension by turning two-dimensional images into three-dimensional videos.
The company's new Stable Virtual Camera tool is designed to turn even a single image into a moving, multi-perspective video, meaning you could rotate around the scene and view it from any angle.
It's not entirely a new concept, as virtual cameras have long been a staple of filmmaking and animation, letting creators navigate and manipulate digital scenes. But Stability AI has taken that concept and thrown in a heavy dose of generative AI. The result means that instead of requiring detailed 3D scene reconstructions or painstakingly calibrated camera settings, Stable Virtual Camera lets users generate smooth, depth-accurate 3D motion from even a single image, all with minimal effort.
What makes this different from other AI-generated video tools is that it doesn’t just guess its way through animation and rely on huge datasets or frame-by-frame reconstructions. Stable Virtual Camera uses a multi-view diffusion process to generate new angles based on the provided image so that the result looks like a model that could actually exist in the real world.
The tool lets users control camera trajectories with cinematic precision, choosing from movements like zoom, rotating orbit, or even a spiral. The resulting video can be in vertical form for mobile devices or widescreen if you have a cinema. The virtual camera can work with just one image but will handle up to 32.
Stability AI has made the model available under a Non-Commercial License for research purposes. That means you can play with it, if you have some technical ability, by grabbing the code from GitHub. Going open source, as Stability AI usually does, also means the AI developer community can refine and expand the virtual camera's capabilities without the company footing the bill.
3D AI
Of course, no AI model is perfect, and Stability AI is upfront about the kinks still being worked out. If you were hoping to generate realistic people, animals, or particularly chaotic textures (like water), you might end up with something that belongs in a low-budget horror film.
Don't be surprised if you see videos made with it featuring perspectives that awkwardly travel through objects or have perspective shifts leading to flickering, ghostly artifacts. Whether this will be a widely adopted tool or just another AI gimmick ignored by dedicated filmmakers remains to be seen.
Not to mention how much competition it faces among AI video tools like OpenAI's Sora, Pika, Runway, Pollo, and Luma Labs' Dream Machine. Stable Virtual Camera will have to show it performs well in the real world of filmmaking to go beyond being just another fun demo.
Windows 11 could eventually help you understand how fast your PC is - as well as offer tips for making your PC or laptop faster for free
- Windows 11 could get a feature to better inform you on how fast your PC is
- It’s still hidden in testing, but a new FAQ to help the less tech-savvy has just been discovered
- As this is still in the very early stages – and not an official feature at all yet – we should temper our excitement somewhat
Microsoft is developing a feature in Windows 11 that provides some easy-to-understand information on the spec of your PC, and how powerful the hardware inside the device is.
Neowin noticed that a regular contributor to the Windows rumor scene on X, PhantomOfEarth, uncovered some new work on this capability which remains hidden under the bonnet of Windows 11.
New Frequently Asked Questions list in Settings > System > About, hidden in builds 26120.3576 and 22635.5090. Has some questions related to the Windows version and device specs. (vivetool /enable /id:55305888) – March 17, 2025
PhantomOfEarth found the new FAQ section in preview builds 26120.3576 and 22635.5090, and they enabled the functionality using a Windows configuration utility (ViVeTool).
You may recall that this feature was first discovered in the background of Windows 11 back at the start of 2025, when the same leaker aired images of some ‘cards’ in the Settings app, which are compact info panels that display the specs of the PC so they’re easy to see at a glance.
These panels (in System > About, within Settings) display core specs such as the CPU, graphics card, and amount of memory and storage. On top of that, as we noted at the time, Windows 10 users already had this feature live, in testing, and it came with a FAQ section tacked on.
Now that FAQ has arrived in Windows 11, as mentioned, and it provides a range of questions and answers on elements of the spec of the host PC.
The nifty bit is that the FAQ is tailored based on the PC that’s running Windows 11. So for example, if you haven’t got a discrete GPU, and you just use the integrated graphics provided by your processor, Microsoft will provide info on exactly what that means for your prospects of running certain software or games.
Or if you’ve got a low amount of system RAM, you’ll be given details on how that leaner allocation might affect the running of apps on your PC.
It’s good to see this FAQ section arriving in Windows 11, although it was expected to do so, given that it was present in Windows 10 (testing) already. (However, I’m not quite sure why Microsoft is developing this for Windows 10 at all, given that the OS is shuffling off its coil before too long, something Microsoft is now regularly reminding us about in, erm, creative ways, shall we say).
We still must remember that at least for Windows 11, this is a hidden feature and not yet enabled in testing, so there’s no guarantee it’ll ever arrive in the finished version of the operating system (the same’s true for Windows 10, for that matter).
I think it’s quite likely that it will be pushed through to Windows 11, though, given that this will be a helpful feature for computing novices who aren’t sure about the capabilities of their PC. The tailored nature of the new FAQ is particularly useful, so the info provided is guaranteed to be relevant to the user.
The answers to the questions posed do remain a little generic, but I can see them being fleshed out by AI in the future. This could be a good use of Copilot, getting the assistant to be of more use to the less tech-savvy out there.
As I’ve discussed in the past, this new approach looks far superior to the Windows Experience Index, which computing veterans may recall from back in the day. The WEI, as it was known, was introduced with Windows Vista, and rated your PC’s performance in a bunch of categories – but it was convoluted and confusing, rather than helpful.
It looks like Microsoft is going to do much better with this fresh take on the concept, but the proof, as ever, will be in tasting the pudding – and this feature is still very much at the mixing ingredients stage right now.
Microsoft gets into the spam game by again emailing Windows 10 users to prod them to upgrade to Windows 11 – is the nagging going too far now?
- Microsoft is sending out emails to push people to upgrade from Windows 10 to Windows 11
- While on the face of it, that seems a useful move to help some users, Microsoft’s angling of the email is far from ideal
- It also runs the risk of making Windows 10 users feel spammed, particularly as they’re still getting nudged numerous times within the OS itself
Microsoft is once again trying to persuade Windows 10 users that they need to upgrade to Windows 11, ahead of the impending cessation of support for the older operating system later this year.
This time, though, the nudge to upgrade isn’t being delivered within Windows 10 itself, but via email – although it isn’t the first time Microsoft has tried this approach.
I received an email from Microsoft (sent to the address linked to my Microsoft account) about my Windows 10 PC needing an upgrade back at the end of November 2024, but now the software giant is sending out fresh upgrade messages this month.
I didn’t get this latest mail (not yet, anyway), but Windows Latest did, and although it carries the same title, a warning that ‘End of support for Windows 10 is approaching,’ the email itself is somewhat different.
The overall thrust of the content is similar though. There’s a prominent reminder of the exact date that Microsoft halts support for Windows 10 – which happens on October 14, 2025 – and some suggestions of what to do with your old PC (trade it in, or recycle the machine). You can also click a link to check your upgrade eligibility for Windows 11.
Microsoft also clarifies that your PC will continue to work, it’s just that there will be no more support – as in software updates – piped through. There’s also a link to some blurb on how Windows 11 is more secure (which is certainly true), and a nudge to use OneDrive to back up your files if you plan to use Windows 10 after the deadline has passed, heading into 2026.
There are a couple of things that strike me as odd here. Firstly, the plug for OneDrive feels very gratuitous, and hardly a solution to counter the prospect of having your PC compromised by running an out-of-date OS. Where on earth is the stern warning that it really isn’t a good idea to run Windows 10 on your PC when support for the operating system expires?
As you may be aware, without security updates, your computer will be left vulnerable to exploits, as when holes appear in Windows 10, they will no longer be patched up – a recipe for disaster, potentially.
Of course, if you really want to stick with Windows 10, then for the first time ever, consumers can pay to extend support, and I’d recommend you do so (for other options, explore my article on how to prepare for Windows 10 End of Life). Oddly enough, Microsoft doesn’t mention this extension of support in its email.
I say it’s odd, but then, Microsoft would really prefer you upgrade to Windows 11 anyway, either on your current PC – if it’s eligible – or by purchasing a new Windows 11 computer. And to that end, there’s a link in the email to ‘explore new computers’ which is something Microsoft has been urging us to do for a while now. As I’ve discussed before, there’s arguably merit to the suggestion in some ways, but a whole lot of other concerns outweighing that around the environmental toll that a ton of Windows 10 PCs ending up on the scrapheap might usher in.
These are serious worries, and likely why Microsoft is sending the other message in this email advising on recycling (or trading in) your old Windows 10 PC if you do upgrade.
The other point here is do you want to be getting emails direct from Microsoft about Windows 10 upgrades? Well, in some ways, I guess it’s better (or at least slightly less annoying) than being pushed to upgrade within the operating system itself, but the problem is, Microsoft is doing that as well – so Windows 10 users are getting both barrels, as it were. Sigh…
We can likely expect several further barrages of these kinds of emails as 2025 progresses and the October support deadline draws nearer – messages that folks may well want their spam filter to deal with, frankly.
Don’t get me wrong here: I’m not saying it isn’t important to warn consumers about the dangers of an out-of-date operating system – it definitely is – but Microsoft is rather overstepping with its broad approach here, and worse still, this particular email actually undersells those dangers (while overselling other Microsoft products).
Nvidia, Google, and Disney's AI-powered Star Wars robot is absolutely the droid I've been looking for
- Nvidia is collaborating with Google and Disney to create a physics engine for robotics
- The open-source engine is titled Newton and is expected to launch later this year
- Nvidia CEO Jensen Huang revealed Blue, a Star Wars-inspired robot using Newton for complete real-time simulation
Nvidia CEO Jensen Huang has announced a new collaboration between the company, Google DeepMind, and Disney Research which is bringing AI-powered Star Wars robots to life.
Taking to the stage at Nvidia's GTC 2025 keynote on Tuesday, Huang revealed Blue, a Star Wars-inspired research robot capable of incredible movement akin to those seen in your favorite sci-fi movies.
The companies have teamed up to create Newton, the physics engine behind the robot's movement, which is expected to be released as open source later this year.
Huang said, "Can you believe you're looking at complete real-time simulation? This is how we're going to train robots in the future. Blue has two Nvidia computers inside." Nvidia's CEO went on to interact with Blue on stage before telling the robot to go home.
Nvidia's press release reads, "Newton is open source, empowering the entire robotics community. This enables roboticists to use and distribute the framework freely and contribute cutting-edge research to its development."
Now, this is all very proof of concept for the consumer, so what does Nvidia, Google, and Disney's collaboration mean for you and me? Well, we might not reap the rewards any time soon but after seeing Blue in action on stage, I'm convinced Disney's dream of having droids in Disney World is now going to become a reality.
In fact, just last week a report from Axios at SXSW stated that Disney is planning to showcase the robots in its entertainment parks at some point this year.
I've always wanted a robot, please make this a reality
You know, robots are pretty scary, I get it. But as someone who constantly struggles to deal with the stress of life in the 21st century, I'd absolutely jump at the opportunity to have a Star Wars droid in my home.
While I love my French Bulldog, Kermit, he can't do the dishes, he can't do the washing up, and he sure as heck can't understand what I'm saying (although I think he chooses to ignore me).
Now, this concept of a small, cute robot doing all my chores is not going to arrive anytime soon, but the Newton physics engine makes it a real possibility in the near future, and I'm sold on that idea.
Give me a robot that can make my life easier through the power of AI and I'll take out a loan to get one. My productivity would soar, my mental health would improve, and best of all, I'd hopefully never have to wash the dishes again.
Google's AI Overviews will now include crowd-sourced medical advice, and that sounds like an accident waiting to happen
- Google has shared six health AI updates at its annual event, The Check Up
- It is improving AI Search results for health queries, and helping researchers parse large volumes of literature
- It has also created a model that could improve AI-powered drug discovery
At its annual The Check Up event, Google has shared six ways it says it's using AI to improve health care and advances in medicine and science.
The company claims "AI can lead to scientific progress and cutting-edge products that help improve health outcomes for people all around the world." While some of the benefits of tools like Gemini are much more obvious when it comes to sifting through emails or doing research, the medical application can appear less obvious, although no less exciting.
Here are the six developments the company shared, including one that I think might cause some concern.
1. AI Overviews Search improvements
The first development is a change to Google's AI Overviews in Search, which I believe will have the biggest day-to-day impact on Google users and should be treated with the most caution.
Google says people use Search and AI Overviews "to find credible and relevant information about health, from common illnesses to rare conditions," and that it's improving AI Overview results on health topics "so they’re more relevant, comprehensive and continue to meet a high bar for clinical factuality."
The change is a new 'What people suggest' section. "While people come to Search to find reliable medical information from experts, they also value hearing from others who have similar experiences," Google says.
To that end, AI will organize different perspectives from online discussions to help you sift through helpful experiences from people in similar situations. The example Google uses is a person dealing with arthritis who might want to know how other people with the condition exercise.
Obviously, there's the potential for misinformation to surface here. Google's image includes a disclaimer that the results are "for informational purposes only" and suggests consulting a medical professional for advice or diagnosis.
As with everything you read on Google, a level of caution and discernment is required. All of this information already exists on the internet; Google is just trying to make the helpful stuff easier to find. Real-world results will determine whether or not it's successful.
2. Medical Records changes
Google has also launched a new Medical Records API globally in Health Connect, which lets apps read and write medical record information like allergies and medications in a standard format that you can share with your doctor's office.
3. Pixel Watch 3 Loss of Pulse Detection
Announced last month, the Pixel Watch 3, one of the best Android smartwatches, is getting Loss of Pulse Detection in the US at the end of March. The tool can automatically call emergency services and notify people close by if your heart stops beating.
4. AI co-scientist
Google's recently launched AI co-scientist can help researchers "parse large volumes of scientific literature and generate high-quality, novel hypotheses." Google says the tool won't automate the scientific process but is designed "to help experts uncover new ideas and accelerate their work." The company says it's already being used at Imperial College London and Stanford.
5. TxGemma
Google has launched a new collection of Gemma-based models it hopes "will help improve the efficiency of AI-powered drug discovery." The AI can "understand regular text and the structures of different therapeutic entities, like small molecules, chemicals, and proteins," which means researchers can use it to predict how safe or effective new therapies and drugs might be.
6. Cancer treatment
Finally, Google highlighted how it's helping a hospital in the Netherlands develop an AI tool that can "accelerate the identification of personalized cancer treatments by combining vast public medical data and de-identified patient data." It can reportedly generate "summaries of treatment options and relevant medical publications," giving doctors more time to focus on patient care.
The efficacy and reach of all of these initiatives remain to be seen, but Google's update is a clear sign that as AI continues to permeate the world around us, its advance into every facet of life including medicine appears inevitable. With lives at stake and patient well-being on the line, getting it right is more important than ever, but the rewards for success are also infinitely greater.
Perplexity AI drops new Squid Game-inspired ad that pokes fun at Google starring Lee Jung-jae
- Perplexity AI has launched an ad campaign starring Squid Game actor Lee Jung-jae
- The ad mocks Google, renamed Poogle, as useless compared to Perplexity
- The ad showcases how important AI assistant reliability is to consumers
AI conversational search engine Perplexity is getting as ruthless in its marketing as the judges in Squid Game's eponymous game show. The company has introduced a major celebrity-driven ad campaign featuring Squid Game star Lee Jung-jae and some jabs at Google.
The 90-second spot above portrays Jung-jae playing a game very similar to those in Squid Game. First, he must figure out how to get coffee stains out of a white shirt. When he opens the very obvious Google parody Poogle, it responds in typical search engine fashion with a list of blue links.
Realizing that sifting through articles for answers isn’t going to cut it, he panics and asks Perplexity instead. Using its voice model, the AI chatbot provides him with clear, step-by-step guidance.
He next has to ask Perplexity how to make cheese stick to pizza. Perplexity provides the right answer before quipping, “Don’t use glue,” a direct nod to the infamous mistake Google’s AI made in suggesting Elmer’s glue as a potential pizza ingredient.
The ad concludes with Jung-jae being asked to name the first Korean actor to win an Emmy award, which he needs no help with, since he is that very actor.
Perplexity pokes at Google
The timing of the ad campaign is no accident, as Netflix's third and final season of Squid Game is expected to drop in June. The campaign is also part of a yearlong partnership between Perplexity and Artist United, which Jung-jae co-owns. Artist United has also integrated Perplexity into its daily research and content creation operations.
The ad begins running today in the U.S. before rolling out to Korea, Japan, and Europe over the next ten days. Each region will get a localized version of the text and voiceovers, though Jung-jae’s dialogue will remain in Korean.
Perplexity’s strategy with the ad is notable beyond just having a world-famous actor poke fun at the search industry’s biggest player. It suggests that Perplexity grasps that accuracy and reliability are what people care most about when it comes to AI assistants.
If Perplexity can supplant Google's reputation as the go-to online information authority, it would be a major coup. Even seizing on small cracks in Google's place at the top could propel Perplexity above many of its rivals.
Whether Google has the glue to fill in those cracks after making pizza is an open question.