New judge’s ruling makes OpenAI keeping a record of all your ChatGPT chats one step closer to reality
- A federal judge rejected a ChatGPT user's petition against her order that OpenAI preserve all ChatGPT chats
- The order followed a request by The New York Times as part of its lawsuit against OpenAI and Microsoft
- OpenAI plans to continue arguing against the ruling
OpenAI will be holding onto all of your conversations with ChatGPT and possibly sharing them with a lot of lawyers, even the ones you thought you deleted. That's the upshot of an order from the federal judge overseeing a lawsuit brought against OpenAI by The New York Times over copyright infringement. Judge Ona Wang upheld her earlier order to preserve all ChatGPT conversations for evidence after rejecting a motion by ChatGPT user Aidan Hunt, one of several users who have asked her to rescind the order over privacy and other concerns.
Judge Wang told OpenAI to “indefinitely” preserve ChatGPT’s outputs since the Times pointed out that would be a way to tell if the chatbot has illegally recreated articles without paying the original publishers. But finding those examples means hanging onto every intimate, awkward, or just private communication anyone's had with the chatbot. Though what users write isn't part of the order, it's not hard to imagine working out who was conversing with ChatGPT about what personal topic based on what the AI wrote. In fact, the more personal the discussion, the easier it would probably be to identify the user.
Hunt pointed out that he had no warning that this might happen until he saw a report about the order in an online forum, and is now concerned that his conversations with ChatGPT might be disseminated, including “highly sensitive personal and commercial information.” He asked the judge to vacate the order or modify it to leave out especially private content, such as conversations conducted in private mode or those discussing medical or legal matters.
According to Hunt, the judge was overstepping her bounds with the order because “this case involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate [judge] to institute a nationwide mass surveillance program by means of a discovery order in a civil case.”
Judge Wang rejected his request because his concerns aren't related to the copyright issue at hand. She emphasized that the order is about preservation, not disclosure, and that it's hardly unique or uncommon for the courts to tell a private company to hold onto certain records for litigation. That’s technically correct, but, understandably, an everyday person using ChatGPT might not feel that way.
She also seemed to particularly dislike the mass surveillance accusation, quoting that section of Hunt's petition and slamming it with the legal language equivalent of a diss track. Judge Wang added a "[sic]" to the quote from Hunt's filing and a footnote pointing out that the petition "does not explain how a court’s document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is, or could be, a “nationwide mass surveillance program.” It is not. The judiciary is not a law enforcement agency."
That 'sic burn' aside, there's still a chance the order will be rescinded or modified after OpenAI goes to court this week to push back against it as part of the larger paperwork battle around the lawsuit.
Deleted but not gone
Hunt's other concern is that, regardless of how this case goes, OpenAI will now have the ability to retain chats that users believed were deleted and could use them in the future. There are concerns over whether OpenAI will lean into protecting user privacy over legal expedience. OpenAI has so far argued in favor of that privacy and has asked the court for oral arguments, taking place this week, to challenge the retention order. The company has said it wants to push back hard on behalf of its users. But in the meantime, your chat logs are in limbo.
Many may have felt that writing into ChatGPT is like talking to a friend who can keep a secret. Perhaps more will now understand that it still acts like a computer program, and the equivalent of your browser history and Google search terms are still in there. At the very least, hopefully, there will be more transparency. Even if it's the courts demanding that AI companies retain sensitive data, users should be notified by the companies. We shouldn't discover it by chance on a web forum.
And if OpenAI really wants to protect its users, it could start offering more granular controls: clear toggles for anonymous mode, stronger deletion guarantees, and alerts when conversations are being preserved for legal reasons. Until then, it might be wise to treat ChatGPT a bit less like a therapist and a bit more like a coworker who might be wearing a wire.
Windows 10 users who don’t want to upgrade to Windows 11 get new lifeline from Microsoft
- Microsoft has launched a wizard to help Windows 10 devices stay secure
- It’s only intended as a temporary solution, though
- Windows 10 support ends later this year
Windows 10 has been around for almost a decade now, but official support is due to end on October 14 this year. Yet that doesn’t have to be the end of the road, as Microsoft has just announced a new process for anyone who needs a little more time to switch to Windows 11.
The updates are part of Microsoft’s Extended Security Updates (ESU) program, which brings monthly critical and important security patches to Windows 10 users for one year after official support ends. Microsoft says this is only meant to be a short-term solution, as it doesn’t include non-security updates or new features.
With today’s change, there are now a few new ways to get started. For individuals, there’s a new enrollment wizard that will give you three options: use Windows Backup to sync all your settings to the cloud; redeem 1,000 Microsoft Rewards points to get started; or pay a one-off fee of $30.
After you’ve picked an option and followed the instructions, your Windows 10 PC will be enrolled. ESU coverage for personal computers lasts from October 15, 2025 until October 13, 2026. The enrollment wizard is currently available in the Windows Insider Program, will be made available to regular Windows 10 users in July, and will roll out on a wider basis in mid-August.
Time to upgrade
The ESU changes aren’t just coming to individual Windows 10 users. Commercial organizations can pay $61 per device to subscribe to the ESU program for a year. This can be renewed annually for up to three years, although Microsoft warns that the cost will increase each year. Businesses can sign up today via the Microsoft Volume Licensing Program, while Cloud Service Providers will begin offering enrollment starting September 1.
As for Windows 10 devices that are accessing Windows 11 Cloud PCs via Windows 365 and virtual machines, these will be granted access to ESU free of charge and will receive security updates automatically, with no extra actions required.
In a way, Microsoft’s announcement highlights the struggles the company has had with getting people to upgrade to Windows 11. Microsoft first announced that it would kill off Windows 10 way back in June 2021, and yet there are still people and organizations that have not made the switch, despite many years of prompts and warnings.
For some people – especially those with mission-critical devices or large fleets of computers – upgrading to Windows 11 might be a herculean task. But if you’re able to make the switch, you really should do so to ensure you keep getting all the latest updates. We’ve even got a guide on upgrading to Windows 11 to help you through the process.
Now it’s dogs doing Olympic diving in the next AI video craze to sweep the Internet, courtesy of the Veo 3 rival Hailuo 02
- New diving dogs AI-generated video follows the cat Olympics craze
- You can try Hailuo 02 yourself for free
- Are dogs better at 'diving' than cats? You decide
Hot on the heels (or should that be paws?) of the AI-generated ‘cats doing Olympic diving’ video that broke the Internet a few days ago comes the natural follow-up.
Yes, it’s dogs doing Olympic diving, which opens up the possibility of a debate on who does it better - cats or dogs?
Created by TikTok and Hailuo 02 user Stanislav Laurier, the video features the same impressive physics and realistic depictions of dogs that made the cat video so successful in the first place. The way the dogs bounce on the diving board before launching themselves into a spin makes this a truly impressive piece of AI work.
And of course, the dogs look just as realistic as the cats as they walk along the diving board. It’s only when you see them doing impossible spins that you realize that this must be AI.
Like the cat video, this was created in a new Veo 3 and Sora rival called Hailuo 02, and effortlessly demonstrates how far AI video has come.
On the podium
After a few impressive dives, the video ends with a winners' podium showing off which dogs got third, second, and first place. Here, AI lets itself down slightly, as it says "1nd" and "2st" on the podium. It's amazing that it can get all the complicated physics of spinning dogs correct, but can't get some simple text right.
The video was posted on TikTok and received quite a few comments, including one from Laura Smith, who perhaps hadn’t quite caught on that the video was made with AI: “Wowww!!!! This is so amazing that these clever dogs can do this!”
Other users seem to have worked it out, though, like Kaia : 3, who said, “I’m crying, I thought this was real until the Pomeranian started spinning.”
Try it yourself
Hailuo 02 was created by Chinese AI video developer MiniMax, and it debuted earlier this summer.
You’ll need to create an account to use Hailuo 02 (it let me log in with my Google account), but after that, you can give Hailuo a go yourself for free. I asked it to create “A cat throwing a shot put in the Kitty Olympics 2026”.
As a “non-member” (subscriptions are available, starting at $95.99 – about £70/AU$147 – a year) I got 500 free points that were valid for the next 3 days. I had to wait in a four-minute queue, which was more than acceptable, after which it started to generate the video. After a couple of minutes, my video was ready, and it had only used up 25 of my points.
I’ll admit that it doesn’t look great, but that was my first attempt. More time invested in refining the very simple prompt I used would produce much better results.
So, who do you think takes the prize for best Olympic diving? Dogs or cats? Comment with your opinion below, and let’s not pretend that this isn’t exactly what AI was created for.
Microsoft's 'if you can't beat them, join them' approach to the threat of Steam in the new Xbox PC app is a great idea
- Microsoft's improved Windows 11 Xbox PC app will be available for Xbox Insiders
- Its Aggregated Game Library will allow users to access games on multiple storefronts in one app
- It's going up against SteamOS and its game library setup
Microsoft's ROG Xbox Ally handheld gaming PCs are set for release later this summer, alongside a significant Xbox app upgrade – and it appears that our first taste of the handheld-friendly app is closer than ever.
Announced on Xbox Wire, Microsoft's new Aggregated Game Library will be available for Xbox Insiders to preview, leading up to its full launch alongside the ROG Xbox Ally handhelds. It will let users launch games from Steam, Battle.net, and other storefronts such as Epic Games, all within the Xbox app, essentially emulating Valve's SteamOS.
It's set to act as a direct competitor to Valve's efforts at creating a handheld-friendly gaming experience; first with the Steam Deck, and now with the Legion Go S and other handhelds without an official SteamOS license. It's been a while since fans and I have pleaded with Microsoft for a portable Windows 11 mode, and I couldn't be happier to see it doing just that.
However, it's evident that Microsoft has a lot of work ahead in improving Windows 11 to go up against SteamOS. We already know that gaming performance on SteamOS is better than on Windows 11 – and while we still need to see the new Xbox app in action first, it may have some catching up to do.
While Windows 11 has the advantage of running most multiplayer games that use anti-cheat, there's a strong chance of this compatibility improving on Linux – and that's because SteamOS is making its way to handhelds beyond the Steam Deck. Not to mention, Splitgate 2's developers tweaked its anti-cheat to make the title playable on SteamOS, so others may follow suit.
Analysis: I may not turn my back on SteamOS, but Microsoft's move is a welcome one
Let's get one thing straight: I'm absolutely all-in for the new Xbox app, and I'll more than likely be using it on my dual-booted Asus ROG Ally. However, I'm keeping my expectations low, and I don't think the new upgrade will convince me to move away from SteamOS completely.
Now, you could say that's an unfair judgment, as the upgrades aren't available yet – but fans have been asking Microsoft to consider a portable handheld mode for a long while now, so the onus isn't on the fans, but rather on Microsoft itself.
Valve's SteamOS has multiple years of work under its belt, with optimizations pushing for a smoother and more customizable handheld experience. Tools like Decky Loader (which isn't affiliated with Valve) are a massive part of that – and I hope that Microsoft can replicate a smooth and customizable experience within the Xbox app.
The preview should arrive later this week, and you can be certain that I'll be testing it on my Asus ROG Ally...
Google Earth is now an even better time-travel machine thanks to this Street View upgrade – and I might get hooked
- Google Earth is celebrating its 20th birthday this month
- It's just added a new historical Street View feature for time-traveling
- Pro users will also get AI-powered upgrades to help with urban planning
Google Earth has just turned 20 years old and the digital globe has picked up a feature that could prove to be an addictive time-sink – historical Street View.
Yes, we've been able to time-travel around our cities and previous homes for years now on Google Maps, but Google Earth feels like a natural home for the feature, given its more immersive 3D views and satellite imagery. And from today, Google Earth now offers Street View with that historical menu bar.
That means you can visit famous buildings and landmarks (like the Vessel building in New York City above) and effectively watch their construction unfold. To do that, find a location in Google Earth, drag the pegman icon (bottom right) onto the street, click 'see more dates', and use the film strip menu to choose the year.
Around major cities and landmarks, Street View images are updated so regularly now that their snapshots are often only months apart, but in most areas they're renewed every one to two years. That opens up some major nostalgia potential, particularly if the shots happen to have frozen someone you know in time.
Bringing history to life
To celebrate Earth's birthday, Google has also made timelapses of its favorite historical aerial views, which stitch together satellite photos over several decades. This feature became available in the web and mobile versions of Earth last year – to find it, go to the layers icon and turn on the 'historical imagery' toggle.
One fascinating example is the aerial view of the Notre-Dame de Paris cathedral (above), which Google made exclusively for us. It shows the gothic icon from 1943 through to its unfortunate fire in 2019, followed by its recent reconstruction.
But other examples that Google has picked out include a view of Berlin, from its post-war devastation to the Berlin Wall and its modern incarnation, plus the stunning growth of Las Vegas and San Francisco over the decades.
There's a high chance that Google Earth will, once again, send me down an hours-long rabbit hole with these Street View and historical imagery tricks. But it's also giving Pro users some new AI-driven tools in "the coming weeks", with features like 'tree canopy coverage' and heatmaps showing land surface temperatures underlining Earth's potential for urban planning.
That perhaps hints at the Gemini-powered treats to come for us non-professional users in the future. But for now, I have more than enough Earth-related treasure hunts to keep me occupied.
Forget about SEO - Adobe already has an LLM Optimizer to help businesses rank on ChatGPT, Gemini, and Claude
- Adobe wants to help decide how your brand shows up inside ChatGPT and other AI bots
- LLM Optimizer promises SEO-like results in an internet where search engines no longer rule
- Your FAQ page could now influence what AI chatbots say about your brand to customers
Popular AI tools such as ChatGPT, Gemini, and Claude are increasingly replacing traditional search engines in how people discover content and make purchasing decisions.
Adobe is attempting to stay ahead of the curve by launching LLM Optimizer, which it claims can help businesses improve visibility across generative AI interfaces by monitoring how brand content is used and providing actionable recommendations.
The tool even claims to assign a monetary value to potential traffic gains, allowing users to prioritize optimizations.
Shift from search engines to AI interfaces
With a reported 3,500% increase in generative AI-driven traffic to U.S. retail sites and a 3,200% spike to travel sites between July 2024 and May 2025, Adobe argues that conversational interfaces are no longer a trend but a transformation in consumer behavior.
“Generative AI interfaces are becoming go-to tools for how customers discover, engage and make purchase decisions, across every stage of their journey,” said Loni Stark, vice president of strategy and product at Adobe Experience Cloud.
The core of Adobe LLM Optimizer lies in its monitoring and benchmarking capabilities, as it claims to give businesses a “real-time pulse on how their brand is showing up across browsers and chat services.”
The tool can help teams identify the most relevant queries for their sector, understand how their offerings are presented, and compare themselves with competitors on high-value keywords, using this data to refine content strategies.
A recommendation engine detects gaps in brand visibility across websites, FAQs, and even external platforms like Wikipedia.
It suggests both technical fixes and content improvements based on attributes that LLMs prioritize, such as accuracy, authority, and informativeness.
These changes can be implemented “with a single click,” including code or content updates, which suggests an effort to reduce dependency on lengthy development cycles.
It is clear that even the best SEO tools and tactics may need to adapt, especially as AI chat interfaces do not operate with the same crawling and ranking logic as traditional search engines.
For users who already rely on the best browser for private browsing or privacy tools to avoid data profiling, the idea that businesses are now optimizing to appear inside chatbots could raise questions about how content is sourced and attributed.
Adobe insists that the tool supports “enterprise-ready frameworks” and has integration pathways for agencies and third-party systems, though the wider implications for transparency and digital content ethics remain to be seen.
I tried Google’s new Search Live feature and ended up debating an AI about books
- Google’s new Search Live feature lets users hold real-time voice conversations with an AI-powered version of Search
- The Gemini-powered AI attempts to simulate a friendly and knowledgeable human.
- Google is keen to have all roads lead to Gemini, and Search Live could help entice people to try the AI companion without realizing it
Google's quest to incorporate its Gemini into everything has a new outlet linked to its most central product. The new Google Search Live essentially gives Google Search's AI Mode a Gemini-powered voice.
It’s currently available to users in the U.S. via the Google app on iOS and Android, and it invites you to literally talk to your search bar. You speak, and it speaks back; unlike the half-hearted AI assistants of yesteryear, this one doesn’t stop listening after just one question. It’s a full dialogue partner, unlike the non-verbal AI Mode.
It also works in the background, which means I could leave the app during the chat to do something else on my phone, and the audio didn’t pause or glitch. It just kept going, as if I were on the phone with someone.
Google refers to this system as “query fan-out,” which means that instead of just answering your question, it also quietly considers related queries, drawing in more diverse sources and perspectives. You feel it, too. The answers don’t feel boxed into a single form of response, even on relatively straightforward queries like the one about linen dresses in Google's demo.
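To make that "fan-out" idea a little more concrete, here's a minimal, purely illustrative Python sketch of the concept. This is not Google's code, and the helper names (fan_out, search_web, answer) are hypothetical stand-ins; a real system would use an LLM to generate the sub-queries and to synthesize the gathered results into a spoken reply.

```python
# Conceptual sketch of "query fan-out": one question is expanded into several
# related sub-queries, each is searched, and the results are combined.

def fan_out(question: str) -> list[str]:
    """Expand one question into a handful of related sub-queries."""
    return [
        question,
        f"background information on {question}",
        f"different perspectives on {question}",
    ]

def search_web(query: str) -> str:
    """Stub standing in for a real search backend."""
    return f"[results for: {query}]"

def answer(question: str) -> str:
    """Run every sub-query, then combine the results into one response."""
    results = [search_web(q) for q in fan_out(question)]
    return "\n".join(results)

if __name__ == "__main__":
    print(answer("good speculative fiction books for summer"))
```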
AI Search Live
To test Search Live out, I tapped the “Live” icon and asked for speculative fiction books I should read this summer. The genial voice offered a few classics and a few more recent options. I then opened Pandora's box by asking it about its own favorites. Surprisingly, it had a few. I then decided to push it a bit and tell it that it was wrong about the best fantasy books, listing a few of my own. Suddenly, I found myself in a debate not only about the best examples of the genre, but also about how to define it.
We segued from there to philosophical and historical opinions about elvish empathy and whether AI should be compared to genies or the mythical brownies that do housework in exchange for cream. Were it not for the smooth, synthetic voice and its relentless good cheer, I might have thought I was actually having an idle argument with an acquaintance over nothing important.
It's obviously very different from the classic Google Search and its wall of links. If you look at the screen, you still see the links, but the focus is on the talk. Google isn't unique with a vocal version of its AI, as ChatGPT and others proffer similar features. Google Search Live does come off as smoother, and I didn't have to rephrase my questions or repeat myself once in 10 minutes. Being integrated with Google’s actual search systems might help keep things grounded. It’s like talking to someone who always has a stack of citations in their back pocket.
I don't think Search Live is what people will use to replace their usual online search methods, but here’s a real accessibility benefit to it. For people who can’t comfortably type or see, voice-first tools like this open new doors. Same goes for kids asking homework questions, or for someone cooking dinner who has a random question but doesn't want to pause to wipe flour off their screen.
There’s a tradeoff, of course, in terms of how people browse the web. If this kind of conversational AI becomes the dominant interface for search on Google, what happens to web traffic? Publishers already feel like they’re shouting into the void when their content is skimmed by AI, and some are hiring lawyers to fight it. What will the AI search if its sources shrink or vanish? It's a complicated question, worthy of debate. I'll have to see how Search Live lays out the arguments.
Forget virtual pets – the next AI video craze is cats doing Olympic diving, and it’s all thanks to this new Google Veo 3 rival
- MiniMax’s new Hailuo 02 AI video model has sparked a viral trend of cats performing Olympic dives
- The videos blend advanced physics-based animation with internet absurdity
- Though not matching the quality of Google Veo 3, Hailuo 02 is rapidly gaining in popularity among casual AI users
Watching the cat walk onto the diving board, I could imagine calls to the fire department or a huge crowd rushing to save it, causing a catastrophe, while the feline simply blinked at the tragedy. Instead, the cat executed an Olympic-caliber triple somersault into the pool. If it weren't for the impossible feat and my awareness that it was an AI-generated video, I'd be checking to see if there was a Freaky Friday situation with the U.S. swim team.
Instead, it's a hugely viral video produced using Chinese AI video developer MiniMax's Hailuo 02 model. The diving cats may not be real, but the millions of people watching the video are, and that's enough for Hailuo 02 to elbow its way into the competition for AI video dominance, alongside Google Veo 3 and OpenAI's Sora, among many others.
MiniMax debuted Hailuo 02 earlier this summer, but the virality of the faux Olympics video suggests it's going to become a very popular tool for turning still images or text prompts into videos. The model only makes five- to ten-second clips for now, but its motion customization, camera effects, and impressive imitation of real-world physics, like the movement of fur or splashing of water, make it all the more intriguing.
Testing Hailuo 02 on cats diving came about seemingly organically when X user R.B Keeper (presumably not their real name) tried a prompt they'd seen tested on Veo 3. The idea spread from there to a version that garnered millions of views in a matter of hours and appeared on TikTok, Reddit, and Instagram, with numerous variations.
AI video battles
Hailuo 02 uses frame-by-frame physics simulation, attention-mapped motion prompts, and multimodal input parsing. In other words, if you type a strange idea, the model will do its best to make it look and behave like it would in an approximation of the real world.
Notably, Hailuo 02 is reportedly far cheaper and faster than Veo 3, though perhaps without quite the high-end gloss. Still, it's more accessible, not being limited to enterprise services and beta programs like Veo 3.
The cat diving videos are the apex of a very specific Venn diagram of internet trends, accessible tools, and low-stakes fun. You don’t need to be a professional editor or own a supercomputer to try it. And more upgrades are on the horizon. MiniMax has outlined plans to integrate synchronized audio, lighting, and texture control, as well as longer clips.
As for Google Veo 3 and other major players, they have their professional niche for now. But if they want to widen their appeal to the masses, they might look to what MiniMax and smaller developers like Midjourney, with its V1 video model, are doing. Hailuo 02 is the kind of tool that will get people, like the cats, to dive in.
I adore my Meta Ray-Bans, but these new Oakley smart glasses are making me jealous
- Meta and Oakley are officially making smart glasses
- They're based on Oakley's HSTN glasses design
- Launching later this summer, they'll start at $399 / £399
It’s official. Following a teaser earlier this week, Oakley and Meta have made smart glasses, and as an owner of the almost two-year-old Ray-Ban Meta smart specs, I’m green with envy.
Later this summer, six pairs of Oakley smart specs will be available in the US, UK, Australia, Canada, Ireland, and several European countries, starting at $399 / £399 (we’re still waiting for Australian pricing details).
Limited-Edition Oakley Meta HSTN (featuring gold accents and 24K PRIZM polarized lenses) will be available for preorder sooner – from July 11 – for $499 / £499 (again, we’re waiting for Australian pricing).
Why am I jealous? Well, for a start, these smart glasses are set to boast a few important hardware and software upgrades over my Ray-Bans.
First is an upgrade to the camera. The Ray-Bans have a built-in 12MP snapper which can capture full-HD (1440x1920 resolution) video at 30fps. Meta is promising these Oakley specs will record Ultra HD (3K) video, perhaps making them possible alternatives to the best action cameras for people who want to record their sporting stunts and look good doing it.
Secondly, they’ll be able to record for longer thanks to a boosted battery life. My Meta Ray-Bans boast a four-hour battery life for ‘standard use’ (playing music and having Meta AI answer the odd question), but as soon as you start capturing videos, their battery drains much faster.
With the case recharging them, the Ray-Bans can get up to 36 hours of total use.
Meta is doubling the glasses’ built-in battery with its Oakleys, promising they’ll last for eight hours with standard use, and 19 hours if they’re on standby. Meta adds that you can recharge them to 50% in just 20 minutes with their case, and says the charging case holds up to 48 hours of charge.
Finally, Meta’s AI will still be able to answer various questions for you and use the camera to add context to your queries, as we’ve seen from the Ray-Ban Meta smart glasses, but it will also gain some new sports-related knowledge.
Golfers can ask about wind speed, while surfers can check the surf conditions, and you can also ask the glasses for possible ways to improve your sporting technique, too.
As with all these promises, we’ll want to test the Oakley Meta HSTNs for ourselves to see if they live up to the hype, but one way we can already see they’re excelling is on the design side.
Damn, are these things gorgeous.
Interestingly, the Oakley specs’ design is one major detail the leaks got wrong. Instead of Oakley’s Sphaera visor-style shades, Meta has gone with Oakley’s HSTN glasses (I’m told it’s pronounced how-stuhn).
These glasses look like more angular Ray-Ban Wayfarers – you know, one of Meta’s existing smart glasses designs – but they do boast a serious design upgrade for athletes that you won’t find on Meta’s non-Oakley specs: Oakley’s PRIZM lenses.
Without getting too technical, PRIZM lenses are designed to provide increased contrast to what you can see. There are different models for snow sports, cycling, and other sports (as well as everyday usage), but each is designed to highlight key details that might be relevant to the wearer, such as the contours in different snow terrains, or transitions in trail types and possible road hazards.
If PRIZM lenses sound like overkill, you can also get a pair with transition lenses or with completely clear lenses.
I swapped my always-shaded Ray-Bans for a pair with transition lenses, and the difference is stark. Because they’re clear in darker environments and shaded in brighter weather, I’ve found it so much easier to use the transition lens pair as everyday smart glasses. Previously, I could only use my shaded pair in the sun, and that doesn’t come out all too often here in the UK.
The complete list of six Oakley smart glasses options is:
- Oakley Meta HSTN Warm Grey with PRIZM Ruby Lenses
- Oakley Meta HSTN Black with PRIZM Polar Black Lenses
- Oakley Meta HSTN Brown Smoke with PRIZM Polar Deep Water Lenses
- Oakley Meta HSTN Black with Transitions Amethyst Lenses
- Oakley Meta HSTN Clear with Transitions Grey Lenses
- Oakley Meta HSTN Black with Clear Lenses
Beyond the style and lenses, one striking factor is that despite some serious battery upgrades, the frames don’t seem massively chunky.
Like their Ray-Ban predecessors, they’re clearly thicker than normal specs, but they don’t look too much unlike normal shades.
All in all, these Oakley glasses look and sound really impressive. I’m chomping at the bit to try a pair, and if you’ve been on the fence about picking up the Ray-Ban Meta glasses, these enhanced options could be what convinces you to finally get some AI-powered eyewear.
New research says using AI reduces brain activity – but does that mean it's making us dumber?
Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.
A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.
Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.
Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.
EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.
As you might expect, the group relying on their brains showed the most engagement, the best memory, and the greatest sense of ownership over their work, as evidenced by how much they could quote from their own essays.
The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.
The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focused on essay writing, not any other cognitive activity. And the EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people would take away is that using AI might make you dumber.
But I would reframe that: maybe AI isn’t dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn’t the tool, but how we’re using it.
AI brains
If you use AI, think about how you use it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There’s a huge difference between outsourcing an essay and using an AI to help organize a messy idea.
Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.
The LLM-using group was encouraged to use the AI as they saw fit, which probably didn't mean thoughtful and judicious use, just copying without reading, and that’s why context matters.
Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.
Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.
Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.
I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?
We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.
That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.
Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, recite Latin declensions, or work a slide rule.
Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.
The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).
With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.
That means knowing when to trust a model and when to double-check. It means turning a tool that’s capable of doing the work into an asset that helps you do it better.
But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.
In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.
The study is important, though. It reminds us that tools shape thinking. It nudges us if we are using AI tools to expand our brains or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn’t to avoid AI but to be thoughtful about using it.
The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.
Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried
- Midjourney has launched its first AI video model, V1.
- The model lets users animate images into five-second motion clips.
- The tool is relatively affordable and a possible rival for Google Veo or OpenAI’s Sora.
Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.
This image-to-video tool is now available to Midjourney's 20 million-strong community, letting them turn their images into five-second clips that can be extended in five-second increments up to a total of 20 seconds.
Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.
For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.
To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.
The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.
You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.
The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.
AI video battles
Midjourney isn’t trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They’re trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.
Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.
And it really is pretty cheap. According to Midjourney, one video job costs about the same as upscaling, or one image’s worth of cost per second of video.
That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.
That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and several other studios over claims it trained its models on copyrighted content.
For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.
Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.
‘My kids will never be smarter than AI’: Sam Altman’s advice on how to use ChatGPT as a parent leaves me shaking my head
Sam Altman has appeared in the first episode of OpenAI’s brand new podcast, called simply the OpenAI Podcast, which is available to watch now on Spotify, Apple Podcasts, and YouTube.
The podcast is hosted by Andrew Mayne and in the first episode, OpenAI CEO Sam Altman joins the host to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.
While Altman's thoughts on AGI are always worth paying attention to, it was his advice on AI-powered parenting that caught my ear this time.
You have to wonder if Altman’s PR advisors have taken the day off, because after being asked the softball question, “You’ve recently become a new parent, how is ChatGPT helping you with that?”, Altman somehow draws us into a nightmare scenario of a generation of AI-reared kids who have lost the ability to communicate with regular humans in favor of their parasocial relationships with ChatGPT.
“My kids will never be smarter than AI,” says Altman in a matter-of-fact way. “But also they will grow up vastly more capable than we were when we grew up. They will be able to do things that we cannot imagine and they’ll be really good at using AI. And obviously, I think about that a lot, but I think much more about what they will have that we didn’t… I don’t think my kids will ever be bothered by the fact that they’re not smarter than AI.”
That all sounds great, but then later in the conversation he says: “Again, I suspect this is not all going to be good, there will be problems and people will develop these problematic, or somewhat problematic, parasocial relationships.”
In case you’re wondering what "parasocial relationships" are, they develop when we start to consider media personalities or famous people as friends, despite having no real interactions with them; the way we all think we know George Clooney because he’s that friendly doctor from ER, or from his movies or the Nespresso advert, when, in fact, we have never met him, and most likely never will.
Mitigating the downsides
Altman is characterizing a child’s interactions with ChatGPT in the same way, but interestingly he doesn’t offer any solutions for a generation weaned on ChatGPT Advanced Voice mode rather than human interaction. Instead, he sees it as a problem for society to figure out.
“The upsides will be tremendous and society in general is good at figuring out how to mitigate the downsides”, Altman assures the viewer.
Now I’ll admit to being of a more cynical bent, but this does seem awfully like he’s washing his hands of a problem that OpenAI is creating. Any potential problems that a generation of kids brought up interacting with ChatGPT are going to experience are, apparently, not OpenAI’s concern.
In fact, earlier in the podcast, when the host brought up the example of a parent using ChatGPT’s Advanced Voice Mode to talk to their child about Thomas the Tank Engine, instead of doing it themselves because they were bored of talking about it endlessly, Altman simply nods and says, “Kids love Voice Mode in ChatGPT.”
Indeed they do, Sam, but is it wise to let your child loose on ChatGPT’s Advanced Voice Mode without supervision? As a parent myself (although of much older children now), I’m uncomfortable hearing of young kids being given what sounds like unsupervised access to ChatGPT.
AI comes with all sorts of warnings for a reason. It can make mistakes, it can give bad advice, and it can hallucinate things that aren’t true. Not to mention that “ChatGPT is not meant for children under 13” according to OpenAI’s own guidelines, and I can’t imagine there are many kids older than 13 who are interested in talking about Thomas the Tank Engine!
I have no problem using ChatGPT with my kids, but by the time ChatGPT was available, they were both older than 13. If I were using it with younger children, I’d always make sure that they weren’t using it on their own.
I'm not suggesting that Altman is in any way a bad parent, and I appreciate his enthusiasm for AI, but I think he should leave the parenting advice to the experts for now.