I tried Mind Maps in NotebookLM and it's my new favorite feature
A lot of useful information is only as helpful as its organization. The same goes for my own brain, of course. Getting that information in different formats can help with learning it, and Google’s NotebookLM has been fun to experiment with for that purpose, particularly the customized podcasts with AI hosts.
The latest addition is the new Mind Maps feature. A mind map is an old technique for organizing your thinking using visual webs of information that connect ideas together. Imagine a branching tree where each limb is a concept and every twig is a supporting idea. They’re great for people who think visually.
The NotebookLM version is essentially that, but it is put together by an AI model. I decided to test this thing with two real-life situations: planning a garden and trying to become a whiz at DIY home repairs.
Mind Garden
The garden was first. I uploaded a pile of articles I’d been hoarding – stuff about companion planting, raised beds, native perennials, composting, and that one blog post where someone swears by pouring beer on their tomatoes. NotebookLM chewed through all of it and spat out a Mind Map on request.
There were branches for planning, locations, and even the benefits of gardening, among others. Each branch had a long list of 'twigs' covering all kinds of subtopics. Each was clickable, prompting the conversation panel of NotebookLM to expound on that topic. It was extremely helpful in keeping all those elements organized.
DIY
The same goes for the DIY project. My house has this charming quality where things just break for no reason. I’d already tried to fix a leaky toilet once, which ended with me flooding the bathroom and watching a YouTube tutorial through a veil of defeat.
This time, I came prepared. I uploaded manuals, how-to articles, and a few trusted repair blogs. Mind Maps whipped up categories like planning, building codes, and an essential DIY projects list within seconds. I chose flooring installation from that set of twigs, which branched into hardwood floors, moisture barriers, and expansion gaps.
There was something strangely calming about seeing the steps laid out so clearly. I clicked on “door hanging” and got an overview of the different types of doors and how to set them up from the AI. I felt like I'd had a conversation with someone who actually knows what they’re doing.
Different Thinking
NotebookLM already did a good job summarizing stuff, but the Mind Maps added a layer of clarity that made it feel almost tactile. I could see how ideas were connected, which helped me learn faster.
That’s not to say it’s perfect. Sometimes, the Mind Maps get a little too enthusiastic and start branching off into tangents that don’t really help. One map tried to connect “composting” with “composing” music for gardening for some reason. And with very niche topics, the AI can still miss the mark by offering generic advice when what you need is something specific, like how to fix a loose tile without taking apart half your kitchen.
I’d also love more manual control. Right now, you can navigate and explore the maps, but you can’t really tweak them much. Sometimes, I want to drag a node, rename it, or cut a whole branch that’s not useful. Still, these are nitpicks; the core experience is solid.
The truth is, I didn’t expect to love Mind Maps. I thought they’d be a neat visual gimmick, something I’d play with once and then forget about. But I think I'll be using them more, especially for any ambitious plans I have for improving my home and garden. In a world full of tabs, a map is nice to have.
New Windows 11 roadmap will tell you exactly when to expect Microsoft’s next annoying feature
- Microsoft has revealed its new Windows Roadmap portal
- The roadmap aims to clarify exactly what features are inbound for Windows 11 and when they’ll arrive
- The idea is to cut through any confusion in terms of future functionality, although it’s early days yet for the portal
Ever get confused about what’s happening with incoming changes for Windows 11? I wouldn’t blame you – I end up scratching my head some of the time regarding features that are in the works, and I write about Microsoft’s OS for a living (among a good many other tech topics, that is).
Microsoft itself acknowledges a lack of clarity around features progressing through testing for Windows 11, and wishes to improve the situation with a fresh innovation in the form of a roadmap.
As Windows Central reports, Microsoft’s new Windows Roadmap portal is now live, with the company describing the reasoning behind the new website in a blog post.
Microsoft states that: “The Windows roadmap provides estimated release dates and descriptions for features being released. All information is subject to change. As a feature or product is canceled or postponed, information will be removed from this website.”
So, just because a feature is mentioned on the roadmap doesn’t mean it’s guaranteed for inclusion in Windows 11 eventually. That’s always been true of functionality in testing, though – if it isn’t working, or testers are giving lots of negative feedback, there’s always a chance Microsoft will dump a feature, and you’ll never see it again. (Or it’ll emerge in the future, in a somewhat different guise, perhaps.)
An initiative like this is, of course, a laudable one from Microsoft. However, if you clicked through and perused the above blog post, you surely noticed that it’s targeted at IT professionals – those who manage computers for organizations. That’s because when you’ve got to take care of a fleet of PCs running Windows 11, there’s a lot of complexity involved, and you really need to stay fully abreast of what changes might be upcoming for the operating system.
But still, the average consumer – like me, or you – will likely also find the new Windows Roadmap useful to browse, just to see what new features are on the horizon. Or, if there’s a capability that you’re really keen on and haven’t yet got, you could use the portal to clarify whether it’s actually being rolled out to Windows 11 PCs yet, and what the expected general availability date is.
It should also help to clear up confusion when certain features seem to skip testing channels. There are four of these Microsoft uses, from the earliest (Canary channel) to just before release (Release Preview channel), and sometimes features will just appear in later channels, without even being presented to the early testers.
In short, this portal should allow you to more easily track the progress of everything that’s in the works for Windows 11, though looking at the roadmap now, I’m still encountering some minor points of confusion.
Let’s take an example of the PC spec cards, which were spotted hidden in the background of test builds early this year, before suddenly going into testing, and then pretty much straight into Windows 11’s latest preview update this week (prior to release next month). Blink and you missed the progress of that particular feature in testing, and its rapid shift through the gears was rather strange to witness.
So, what does the Windows Roadmap say about these spec cards? Firstly, that the rollout start date is March 2025, and what that means is the feature is only rolling out now – meaning a gradual deployment, so even if you’ve installed the March preview update, you may not see it (yet). Expected availability is then listed as the “April 2025 non-security monthly update,” meaning the preview update coming at the end of April.
What I don’t quite understand here is that surely broad availability will come with the full May 2025 patch (which is what the April 2025 preview update will become), as far from everybody downloads preview, or optional (non-security), updates. Most folks only get the full release, so really, that following update in May would surely represent the full availability of the feature. Wouldn’t it?
Okay, so maybe I’m nitpicking here, and I get that the gist is that for the full (non-preview) update in April, the PC spec cards will still just be rolling out – and not provided to everyone – but I think Microsoft could put this across in a better way.
Anyway, even if there are a few wrinkles to iron out, this is, of course, still early days for the roadmap, and it should prove a useful tool in terms of keeping an eye on what’s imminent for Windows 11.
‘Our GPUs are melting’ – OpenAI puts limits on image creation and delays rollout to free accounts
- OpenAI limits free tier of ChatGPT to 3 images a day
- Sam Altman says "Our GPUs are melting"
- Limitation should be on a temporary basis
Amid the growing controversy over its AI mimicking the artistic style of Studio Ghibli, OpenAI is being forced to limit how many images ChatGPT can produce on the free tier to three a day, because the feature is proving too popular.
In a recent tweet on X, Sam Altman, CEO of OpenAI, said “It's super fun seeing people love images in ChatGPT, but our GPUs are melting. We are going to temporarily introduce some rate limits while we work on making it more efficient. Hopefully won't be long! ChatGPT free tier will get 3 generations per day soon.”
ChatGPT’s new image generation capabilities are clearly a step up in the development of AI image generation, proving superior in our tests to DALL-E 3, the model ChatGPT previously used, and the one it will still default to once you’ve run out of generations in the new model.
In his X thread, Sam Altman also goes on to say that “(also, we are refusing some generations that should be allowed; we are fixing these as fast we can.)”
This could explain the frustrations I’ve been experiencing getting ChatGPT to produce text in images.
It’s quite possible that you don’t have access to ChatGPT’s image creation tools quite yet anyway. While ChatGPT Plus and Pro users all seem to have access, not all free-tier users do.
On March 26 Altman tweeted that rollout to the free tier was going to be delayed: “Images in ChatGPT are way more popular than we expected (and we had pretty high expectations). Rollout to our free tier is, unfortunately, going to be delayed for a while.”
As we've said in our testing, even on the Plus tier, ChatGPT is already very slow when it comes to generating images, and when the rollout to the free tier is complete we would expect it to be even slower. The move to limit the free tier to 3 images on a temporary basis, while understandable, will inevitably lead to people feeling frustrated with the company.
Have you been able to sample ChatGPT's new image creation abilities yet? Let us know what you think in the comments below.
Windows 11’s latest patch declares war on BIOS updates for some Lenovo laptops, blocking them as a security risk in a bizarre turn of events
- Some Lenovo ThinkPad owners can’t install BIOS updates because of a security tweak Microsoft applied in the latest Windows patches
- This is happening to those installing via the Lenovo BIOS Update Utility or Vantage app
- A fix is inbound already, and as a workaround, you can install via Windows Update
Some Lenovo laptops are seeing BIOS updates fail to work thanks to a change that Microsoft just made in Windows 11 (and Windows 10).
Windows Latest spotted a support post from Lenovo addressing the problem which affects those trying to apply an upgrade to the BIOS with its ThinkPad notebooks.
Apparently, the issue is due to a tweak Microsoft made to block a certain executable file (WinFlash64.exe) in the latest patches for Windows, a change made for security reasons.
When trying to apply a BIOS update using either Lenovo’s BIOS Update Utility or the Lenovo Vantage app, some ThinkPad owners could see the process fail, accompanied by some kind of error message. There are a few errors that might pop up, but all amount to the same thing – the update didn’t work.
What’s happening is that following the patch, and Microsoft updating its security blocklist therein, the update is being detected as a ‘vulnerable driver,’ meaning that it’s a risk to the system – hence Windows refuses to run the process.
As mentioned, the changes to the blocklist were made in the latest patches for Windows 11 24H2, 23H2, and 22H2, along with Windows 10 22H2 – all active versions of Microsoft’s desktop OS, in other words.
As Windows Latest points out, normally Lenovo recommends using its BIOS Update Utility for refreshing your laptop’s firmware, as it’s generally more reliable than other methods.
Obviously given this new issue, that isn’t the case, and so the easiest way to work around the glitch is to use Windows Update to apply the BIOS update for your Lenovo ThinkPad. That is, assuming that Windows 11 (or 10) has found the relevant patch and flagged it under Windows Update. If not, all you can do is keep checking for updates and hope it turns up soon enough.
Meanwhile, Lenovo has been working on remedying this problem, and according to Windows Latest, a fix is in place with the newest BIOS version (v1.61) that’s rolling out. So, if you can bag that latest spin on the BIOS, it should work okay installing via Lenovo’s BIOS Update Utility – fingers crossed.
At any rate, this should be a temporary hiccup for Lenovo ThinkPad owners, but it’s pretty strange that a BIOS update would be flagged as a risk like this in the first place. That said, of course there’s always a small level of risk involved in any BIOS update – such is the nature of the beast – and if you want to read up more on the correct procedure for applying these, to make sure you get it right, we’ve got an article dedicated to just that.
Microsoft’s latest Windows 11 update exorcises possessed printers that spewed out pages of random characters
- Windows 11’s optional update for March has just arrived
- This preview update packs a fix for a printer bug that caused devices to churn out random characters
- The cure will be tested in this optional release, and will then become part of the full patch for Windows 11 23H2 released in early April
Remember that really strange printer bug with Windows 11 23H2 – the one that made it seem like your printer was possessed? There’s some good news for those affected in that Microsoft’s latest update has cured this problem.
The glitch was introduced in a preview update released at the end of January 2025, which caused the printer to mysteriously produce pages filled with random characters. It only affected printers that were hooked up by USB, mind, and mainly happened when turning on the device – although that doubtless spooked quite a few people.
However, Microsoft has just released the March preview update for Windows 11 – packing quite a few goodies, as it happens – and that also contains the fix for this weird printer bug.
Neowin reports that in a Windows 11 health dashboard update relating to the bug, Microsoft informs us: “This issue was resolved by Windows updates released March 25, 2025 (KB5053657), and later. We recommend you install the latest update for your device as it contains important improvements and issue resolutions, including this one.”
Granted, this is far from the worst Windows 11 bug we’ve seen, and 24H2 users have been suffering all sorts of really odd bugs being slung at them in particular (gamers have been mercilessly under fire, too). There have also been gremlins crawling around in the depths of Windows 11, which have caused printers to stop working (at least partially) in the past.
At least this glitch didn’t do anything as bad as that, and it even had kind of a humorous side to it, in terms of the ‘possessed printer’ aspect. That said, wasting paper and ink isn’t going to be funny if the bug keeps on happening, as it apparently did – with no way of stopping it. That’s a very eco-unfriendly prospect, even on a small scale like this.
At any rate, it’s good to see that Microsoft has fixed this spanner in the works in a relatively timely manner. However, there is a note of caution to sound here, which is that the resolution of this printer problem is only in a preview update right now. Those updates, which arrive late in the month to test the ground ahead of the full patch release the following month, are optional for a reason.
Preview updates are still in what’s effectively the final stage of testing, so things can still go wrong with them. That means this fix may not work quite as expected or could cause unintended side effects elsewhere in Windows 11 (that’s certainly happened before).
In this case, if the printer bug is something you’re finding irksome, you’ll probably want to grab the March preview update and take your chances. The good news is that even if you prefer to wait, you won’t have to be patient for long, as the fix will be contained within the April cumulative update for Windows 11 23H2 (arriving on April 7, so it’s only a week and a half away now).
Speak, Book, Fly. Qatar Airways debuts industry-first AI travel agent, Sama
In a landmark move set to redefine airline booking, Qatar Airways has launched a pioneering AI-powered conversational booking experience. Travelers can now effortlessly secure flights by engaging in natural dialogue with Sama, the airline’s AI travel companion.
Sama, already a familiar figure in Qatar Airways’ immersive virtual reality platform, QVerse, is now breaking new ground with a first-of-its-kind conversational flight booking system. Seamlessly replacing traditional online forms, Sama transforms booking flights into an intuitive, engaging, and personalized interaction.
Unlike traditional automated systems, Sama is designed to replicate the ease and clarity of a human conversation. Travelers can start their booking journey from any point—be it destination, date, or passenger details. Sama intelligently tailors recommendations based on user preferences, suggesting popular destinations, optimal travel dates, or even crafting personalized travel itineraries.
For travelers, booking with Sama is akin to having a personal travel agent available at their fingertips. She guides users through every step, intuitively refining flight options in real-time, enhancing convenience, and significantly simplifying the entire booking experience.
This innovation represents Qatar Airways' commitment to pushing boundaries in customer service by seamlessly blending cutting-edge technology with a personal human touch, fundamentally transforming how customers interact with and perceive the brand.
Beyond Booking: Sama, the Digital Brand Ambassador
Already known through Qatar Airways' immersive QVerse experience, Sama is expanding her role to inspire travelers as a digital brand ambassador on social media. Through her Instagram presence, @SamaOnTheMove, she connects with travelers by sharing genuine insights, travel tips, and hidden gems about Qatar Airways.
Sama's digital journey is crafted to resonate deeply with travelers, blending AI innovation with real-world inspiration. She shares travel insights, insider tips, and hidden treasures, engaging travelers in a meaningful and authentic way.
This dual role as both AI travel agent and brand ambassador places Sama at the intersection of technology and genuine human connection, enhancing Qatar Airways' digital brand presence and redefining the future of travel engagement.
Microsoft adds Copilot AI features to Windows 11's Photos app - and I actually don't hate them
- Microsoft has added new AI-powered features to the Windows Photos app
- Its new features include Enhanced Optical Character Recognition and editing tools
- The update is currently live only for some Windows Insider users, with the features expected to roll out more widely in the coming weeks
Microsoft has rolled out a new update for the Windows Photos app, which adds Copilot support and an array of new features designed to make the program more user-friendly and useful overall.
As spotted by MSPoweruser, the biggest change to the Windows Photos app is the new Copilot button, which is available now for some Insider users.
New features backed by the company's AI-powered assistant include Enhanced Optical Character Recognition (OCR) with Web Search Integration, which includes the ability to scan text with over 160 languages recognized.
Additionally, the new update has added a "Search in Web" button, which scans the text within images and presents options based on it. This is said to be particularly handy for finding the sources of documents and screenshots.
Also powered by Copilot is the new quick-access drop-down menu which has more advanced options for editing images. Presently, these are: Edit with Photos, Erase Objects with Photos, Create with Designer, and Visual Search with Bing (if you want to use the Microsoft-backed search engine specifically).
Arguably less exciting (but still useful) are the new shortcuts available in File Explorer, which can also be used from the desktop. They offer simple one-click access to the aforementioned drop-down menu without even needing to be in the app (or looking at images).
Copilot doesn't just offer creation options but also enhancements through the Photos app's Gallery. Images are now displayed in subfolders, keeping things more organized, with suggestions for content types. A new dedicated button offers AI-powered editing functionality, with tips on enhancing an image in a handful of different ways.
Microsoft has also introduced some minor bug fixes, including small revisions to the Image Creator and Restyle Image features for Copilot, which are said to speed things up. It also means that generated images can be saved under different names with less hassle than before.
All of these changes are being slowly drip-fed to Windows Insiders, so it's unknown exactly when everyone will benefit from the new changes, but we estimate it will be in the next few weeks.
A genuine use case for the Photos app and Copilot
It's fair to say that the Photos app in Windows 11 has never really been somewhere you'd want to spend much time.
For most people, it's a simple (and usually fast enough) way of browsing through images saved onto the hard disk and little else. However, backed by Copilot, there's now a suite of new functionality on offer, including image recognition and powerful editing tools that could give you more reason to open it up and play around with things.
Since its launch back in February 2023, Microsoft Copilot has become a bigger focus of the company's strategy, particularly when it comes to some of the best laptops and best ultrabooks.
Of course, to make use of the bulk of these features, you'll need a laptop with an NPU (Neural Processing Unit) to utilize AI workloads in the first place. Whether that's with Intel's Lunar Lake, AMD's Ryzen AI chips, or Qualcomm's Snapdragon X line, all of the best laptop CPUs now have NPUs built in. We recommend reading up on all the differences to see what's right for you.
Windows 11’s Game Bar gets a fresh coat of paint, plus a tweak to work better on handhelds – and I like the direction Microsoft’s heading in here
- Microsoft’s March updates for Xbox also brought something for Windows 11
- The Game Bar has been given a makeover, including multiple widgets
- There’s also a useful change to improve the Game Bar Widget Store when using a controller
Microsoft has just brought in a raft of changes for the Xbox consoles in its March updates, but there’s something here for PC gamers too – a new look for the Game Bar (and a useful tweak for handhelds, too).
For the uninitiated, the Game Bar is an overlay that can be summoned to provide easy and convenient access to a bunch of game-related options. That includes tricks such as recording gameplay, monitoring your PC’s performance, tweaking audio settings and much more. The bar can be customized with various widgets and it’s a very useful tool for Windows 11 gamers on the whole.
And now, as Neowin spotted, Microsoft has given the Game Bar a graphical makeover, and also tells us that this is the start of “several visual enhancements that will be rolling out this week.”
The overall look of the bar has been refreshed, and there are new designs for some of the widgets that can be hosted in the Game Bar. That includes the Capture widget, Performance widget, Resource widget, and also the Widget Store itself.
Furthermore, when you’re in the Widget Store, Microsoft says it has improved the way you navigate around with the controller, so this will provide a better experience in Compact Mode.
You might recall that the Game Bar’s Compact Mode was an innovation brought in last year, designed to display the contents of the overlay more optimally in a smaller space as the name suggests – making life easier for those running Windows 11 on a gaming handheld.
In terms of its appearance, the Game Bar has been refined considerably over the past year or so, and this is yet another step towards making this overlay look more modern. These latest touches make the bar look neater and cleaner, at least in my opinion, so I’m pleased with the general design philosophy Microsoft has gone with here.
It’s also good to get that improvement to make it easier to explore the Widget Store with a controller, which is another step forward for those using the Game Bar on a Windows 11 handheld like the Asus ROG Ally X.
The more Microsoft introduces tinkering aimed at such gaming handhelds – and there have been quite a few small steps taken in that direction now – the more hope I have for an eventual ‘handheld mode’ for Windows 11 (which has been rumored to be something the company has been considering for some time now).
Feel like your browser tabs are out of control? Opera's new AI tab-management tool will bring order to the chaos
- The Opera One web browser has released a new AI Tab Commands feature
- The tool enables the Aria AI assistant to manage tabs using natural language prompts
- You can ask Aria to group, close, or organize tabs directly from the command line
Web browser Opera One is offering new hope for those of us with a hundred or more open tabs on a dozen topics. Opera's new AI Tab Commands can simply take care of it with some basic prompts.
So how does it work? AI Tab Commands lets Aria act on your requests to organize or close tabs based on a topic or website. You might use it to “close all Wikipedia tabs” and see them all vanish, or “group my TechRadar tabs” and have all the articles you're excited to read put in a row.
The feature employs Opera's AI assistant, Aria, to handle the requests. It's a new realm for Aria, which has been kept in chatbot form for answering questions until now.
Aria is now an 'AI agent,' joining the growing number of AI tools able to carry out tasks instead of just absorbing and sharing information. It complements the more comprehensive Operator agent released earlier this year by OpenAI.
It’s a small change in theory, but one that could feel pretty huge for anyone who’s ever found themselves swimming in a sea of half-read articles, abandoned shopping carts, open spreadsheets, and at least one tab playing music you can’t locate.
Aria doesn’t just recognize specific websites; it understands the context. Tell it to group “all my work tabs,” and it’ll figure out which tabs you meant. You no longer have to play forensic detective to figure out what you were doing before lunch.
You can try out AI Tab Commands through Opera's built-in command line. Hit Ctrl + / on Windows or Cmd + / on Mac, then type what you want Aria to do with your tabs. If you’ve got five or more tabs open, as far too many people do, you can also just right-click on any one of them and click on AI Tab Management from the dropdown menu.
“After being the first one to introduce tabs 25 years ago, we are continuing to improve this core feature of the browser," Opera product director Joanna Czajka explained in a statement. "With this step, we keep pushing the border of what can be achieved with these new technologies in a web browser.”
Opera's AI crescendo
There’s something deeply cathartic about offloading your tab anxiety onto an AI assistant, like hiring a virtual Marie Kondo for your digital workspace. And if you're worried about the privacy of your browsing history, you can relax.
The only information sent to Opera’s servers is the text of your command; the list of open tabs and other details never leaves your device. So unless you're oddly explicit about something you'd rather not share in your request, Aria won't know anything about it.
Many Opera users are probably becoming very used to the company's infusion of AI throughout its browser. Over the last couple of years, the company has been gradually rolling out new tools for Aria. That includes the aforementioned Operator agent, image creation, voice output, and bringing Aria to its mobile app.
Aria has also picked up other upgrades to go with AI Tab Commands, including a "Writing Mode" that lives in the command line, letting users draft emails and other content without ever leaving the browser. You can also now interact with Aria directly from a browser tab, not just through the sidebar or command line.
It’s part of Opera's efforts toward making Aria feel like a native, integrated part of the experience rather than a separate thing you must remember to use. The AI's training has also been upgraded to offer better answers about shopping, recipes, and gaming.
These more subtle improvements and features all work together to make traversing the web more frictionless. They may be just the thing to encourage more people to turn to Opera when they want to go online, or at least when they can't stand the sight of so many tabs splattered across their screen.
The race to trillion-parameter model training in AI is on, and this company thinks it can manage it for less than $100,000
- Phison’s SSD strategy slashes AI training costs from $3 million to $100,000
- aiDAPTIV+ software shifts AI workloads from GPUs to SSDs efficiently
- SSDs could replace costly GPUs in massive AI model training
The development of AI models has become increasingly costly as their size and complexity grow, requiring massive computational resources with GPUs playing a central role in handling the workload.
Phison, a key player in portable SSDs, has unveiled a new solution that aims to drastically reduce the cost of training a 1 trillion parameter model by shifting some of the processing load from GPUs to SSDs, bringing the estimated $3 million operational expense down to just $100,000.
Phison's strategy involves integrating its aiDAPTIV+ software with high-performance SSDs to handle some AI tool processing tasks traditionally managed by GPUs while also incorporating NVIDIA’s GH200 Superchip to enhance performance and keep costs manageable.
AI model growth and the trillion-parameter milestone
Phison expects the AI industry to reach the 1 trillion parameter milestone before 2026.
According to the company, model sizes have expanded rapidly, moving from 70 billion parameters in Llama 2 (2023) to 405 billion with Llama 3.1 (2024), followed by DeepSeek R1’s 671 billion parameters (2025).
If this pattern continues, a trillion-parameter model could be unveiled before the end of 2025, marking a significant leap in AI capabilities.
In addition, Phison believes its solution can significantly reduce the number of GPUs needed to run large-scale AI models by shifting some processing tasks from GPUs to high-capacity SSDs. This approach could bring training costs down to just 3% of current projections (a 97% saving) – less than 1/25 of the usual operating expenses.
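As a quick sanity check on those claims, the figures quoted above are internally consistent – a back-of-the-envelope sketch using only the numbers in this article:

```python
# Back-of-the-envelope check of Phison's claimed savings,
# using only the cost figures quoted in the article.
gpu_only_cost = 3_000_000   # estimated GPU-only training cost ($)
aidaptiv_cost = 100_000     # estimated cost with SSD offload ($)

fraction = aidaptiv_cost / gpu_only_cost
print(f"New cost is {fraction:.1%} of the original")    # ~3.3%
print(f"Savings: {1 - fraction:.1%}")                   # ~96.7%
print(f"Less than 1/25 of the original? {fraction < 1/25}")
```

The $100,000 figure works out to roughly 1/30 of the $3 million baseline, so "just 3%" and "less than 1/25" both hold.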
Phison has already collaborated with Maingear to launch AI workstations powered by Intel Xeon W7-3455 CPUs, signaling its commitment to reshaping AI hardware.
As companies seek cost-effective ways to train massive AI models, innovations in SSD technology could play a crucial role in driving efficiency gains.
The push for cheaper AI training solutions gained momentum after DeepSeek made headlines earlier this year when its DeepSeek R1 model demonstrated that cutting-edge AI could be developed at a fraction of the usual cost, with 95% fewer chips and reportedly requiring only $6 million for training.
Via Tweaktown
Discord's game overlay has seen a complete revamp - I've tried it, and it's one of the best updates ever
- Discord's new update has introduced a new game overlay along with UI color settings
- The new game overlay allows users to view friends' streams and video chat while in-game
- It's a solution for gamers with single-display setups to stay in touch with friends
Given the lack of competition in social apps for gamers, Discord is the one platform most rely on for communicating with friends online – which arguably isn't the healthiest situation, considering the current backlash against its Nitro paywall and recent feature limitations. However, a new update has arrived that might put Discord back in users' good books.
In a new video on YouTube (available below), Discord announced a brand new, significantly overhauled game overlay that will allow users to access a number of the app's features while gaming, without ever needing to leave the open game window. It's a big upgrade from the original overlay (now referred to as the 'Legacy Overlay'), which only featured the name tags of users in an active server in any corner of the screen, along with access to quick text chatting.
Now, Discord has kept the basic name tag feature while adding new capabilities in an 'action bar' – like soundboard and call controls, a video chat window for users on camera, and the same for streams – all in one place. This lets you watch multiple streams and engage with friends via the game overlay. It's important to note that the quick text chat function has seemingly been removed, replaced by the notification window (which gives you the option to reply to messages).
Essentially, this is a game-changing feature that could go a long way in helping users keep all their activities on one screen without needing to Alt-Tab between programs. Most importantly, Discord claims the performance impact streaming had on games is now gone, as streaming now uses a new rendering method that utilizes the Discord client to keep the action rolling for friends.
It's not perfect, but it's definitely a fantastic startFor gamers like me, Discord is an essential part of enjoying games while keeping in touch with friends. While it has its ups and downs - notably the random bugs during server calls - this is probably the best update to the platform I've seen in a while.
I no longer need to keep my TV connected to my PC to watch streams from friends while I'm gaming, as I can easily do this on the same screen now. My worry was that streams would be too disruptive to my gameplay, but thankfully, the windows can be resized at will, which is absolutely ideal for intense and competitive gaming sessions.
It certainly isn't perfect right now: the option to reply to texts via the notification window lacks emoji support, and the new game overlay only functions in games Discord recognizes, when it should also offer an option to add applications that aren't detected (I know it's called the game overlay, but still).
Regardless, it's a great start to a very much-needed overhaul for the popular PC gaming app, and the great thing is that many of the omissions I've mentioned will likely be added through future updates. I just need one more thing: Discord, allow deafened users in servers to access the soundboard. Please?
YouTube Premium could be getting a new time-saving perk, showing you recommended videos directly in your playback queue
- YouTube is testing a new Premium feature that shows you recommended videos in your playback queue
- The new feature could make it easier for you to add to your video queue without having to use the home tab or search function
- As it stands, the feature doesn't show you videos related to what you're already watching
YouTube has been working hard on its tests and new additions to its YouTube Premium service, and now it's experimenting with a new feature that could change the way you curate your video playback queue. Its latest test shows a new ‘Recommended videos’ list under your ‘Now Playing’ queue, seemingly to make it easier for you to bulk up your queue.
Found in the ‘Experimental Features’ section – a dedicated space where YouTube rolls out test features for Premium subscribers – the experiment adds a ‘Recommended videos’ list to your playback queue, meaning you could add videos to your queue without having to leave the page. It follows rumors of YouTube’s audio quality control feature, and Android Police has dug its claws in to get the scoop on the new experiment.
As Android Police also highlights, video queues in the YouTube app are created manually by the user and currently can't auto-play recommended videos. According to the report, users will still have to manually add videos from the new ‘Recommended videos’ list if they want them to play in the queue. This comes with the slight inconvenience that the videos suggested to you may not be entirely related to what you’re currently playing - an eyebrow-raiser indeed.
As you’re probably aware, YouTube already offers video recommendations when you use the search bar function and on your home page based on your watch habits. The new feature could lift the inconvenience of having to constantly flick between your playback queue and other pages in the app to fill out a queue. However, unless it’s going to offer me content related to what I’m already watching, I’m not entirely sure where I stand with it just yet. I’ll likely give it a go regardless.
I often find myself having to use these two functions when I’m filling up a YouTube queue to build a seamless flow of videos, but crafting a queue is difficult, especially when you don’t know what to watch. This is why I like Spotify’s ‘Recommended songs’ feature at the bottom of each custom playlist, because it’s a safe function to resort to when I hit a wall with adding songs to a playlist with a certain vibe.
Music videos are my go-to content choice on YouTube. Say, for example, I’ve added a handful to my queue, and I’m stuck with what to add next. The ‘Recommended videos’ list would make my user experience all the more enjoyable if it was filled with related music videos instead of random fodder. But perhaps this is the next step for YouTube if the feature’s initial rollout is successful.
I’ll admit, Microsoft’s new Windows 11 update surprised me with its usefulness, providing accessibility fixes, a gamepad keyboard layout, and PC spec cards
- Windows 11 has a new update, although it’s optional (in preview) for now
- It brings in some really handy features, including an important accessibility tweak for a core part of the Windows 11 interface
- New PC spec cards in the Settings app are a surprise addition, coming through testing very swiftly – albeit without the key FAQ feature
Microsoft just released a new update for Windows 11, albeit an optional one (still in preview), and it delivers some useful work – not to mention a surprise.
Windows Latest flagged up the changes that are part of the March preview update for Windows 11 23H2 (known as KB5053657). They include a smoothing over of accessibility wrinkles in File Explorer, and the addition of PC spec info cards that have previously been seen in testing.
Regarding File Explorer – which is the app that powers the windows that show your folders, and files within them, on the desktop – those of you who use larger text sizes for better visibility in this part of the interface have doubtless noticed that text scaling isn’t uniform here.
In other words, only some parts of File Explorer have the user’s specified text scaling applied, and some text, or indeed parts of the interface like buttons, remains overly small (with no scaling).
Obviously, that’s awkward and unhelpful, plus it just looks messy, but thankfully, Microsoft has fixed this so the scaling is correctly applied across all elements of File Explorer, as per testing conducted by Windows Latest.
Moving onto the new spec cards, these were spotted in testing early in 2025, but seem to have been put in place very quickly, shuttling through testing and into this new optional update. That’s a pleasant surprise indeed, and these cards provide at-a-glance info on your CPU, RAM, storage, and graphics card – although it doesn’t look like the FAQ element has been implemented yet.
Another significant change with KB5053657 is the new gamepad keyboard layout, which allows you to type using Windows 11’s virtual keyboard with an Xbox controller (including button shortcuts for spacebar, delete, and so on).
Finally, there’s a new emoji button on the taskbar, which pulls up the combined emoji, GIF, and clipboard panel. It’s an optional feature, so you can turn off the icon in Settings if you’ll never use it. Oh, and the Voice Access functionality in Windows 11 now has support for the Chinese language (for both Simplified and Traditional Chinese).
Remember, this is for Windows 11 23H2, and we haven’t yet seen the release of the new update for 24H2 (though it’s likely imminent, and it’ll probably turn up later today). It should carry these same changes, and perhaps more besides.
This is a very worthwhile update, then, given its accessibility improvements and that gamepad keyboard. The latter is going to be very handy for those running Windows 11 on a gaming handheld (and it’s a sign that Microsoft is still perhaps thinking about that full-on handheld mode for Windows 11, which has been rumored for some time).
It’s also good to have the spec cards present, and I can’t believe how quickly they’ve gone from a hidden feature, not even visible in testing, to shipping in an optional update. I guess it’s a relatively easy piece of work to implement (it must’ve been), although the FAQ section – providing tailored advice, which as noted isn’t present yet – is going to be the key element (as I recently discussed). With any luck, that extra feature will be incorporated before too long.
If you’re keen to see this shiny new stuff, I should caution you that installing a preview update isn’t without potential perils. These features remain in testing, and could still be wonky, even if it is the very last stage of testing, and nothing’s too likely to be seriously awry (those could be famous last words, of course).
Generally, unless you’re super-stoked for one of the above features, I’d wait until next month for the full release. That’s when this preview update will become the April cumulative update for Windows 11, and that’s not far off now (it’ll be April 8, so it’s an early debut for the upgrade next month).
Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
- Google announces Gemini 2.5
- Gemini 2.5 Pro Experimental is available for paid subscribers right now
- Tops Humanity's Last Exam, the most difficult AI benchmark
Google just announced Gemini 2.5, and it's a major upgrade to Gemini that the company is calling its "most intelligent AI model" yet.
Announced on the company's blog, Google revealed the experimental version of 2.5 Pro, which is available today for all Gemini Advanced subscribers. More 2.5 models will arrive in the future.
Google's Gemini 2.5 models are a new generation of thinking models that are able to reach "a new level of performance by combining a significantly enhanced base model with improved post-training."
2.5's thinking capabilities will be implemented into all future Google AI models, which the company says will allow them to "handle more complex problems and support even more capable, context-aware agents."
So what does this mean? Well, Google is doubling down on its impressively frequent AI updates, and this time we're getting better reasoning capabilities than ever before.
Available right now, you can access Gemini 2.5 Pro Experimental simply by selecting the model in the Gemini app or directly in Google's AI Studio. You'll need a Gemini Advanced subscription to see this as an option.
Pricing for the improved model (for those who want to use it for scaled production use) will be announced in the coming weeks, and more 2.5 models are expected to launch in due course.
Gemini 2.5 Pro is impressive
Google shared some benchmark results for Gemini 2.5 Pro Experimental, and the results are seriously impressive.
The new AI model scores 18.8% on Humanity's Last Exam, compared to 14% for OpenAI's o3-mini and 8.6% for DeepSeek R1. Humanity's Last Exam is the most thorough and difficult AI benchmark, so scoring substantially higher than its competitors is no mean feat.
18.8% is the highest score we've ever seen on Humanity's Last Exam (without tool use). Google is calling Gemini 2.5 Pro's reasoning capabilities "state-of-the-art" and it's clear to see why.
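Put another way, the margin over the next-best score is easy to quantify – a quick sketch using only the scores quoted above:

```python
# Relative comparison of the Humanity's Last Exam scores quoted above
# (without tool use).
scores = {
    "Gemini 2.5 Pro Experimental": 18.8,
    "o3-mini": 14.0,
    "DeepSeek R1": 8.6,
}

best = max(scores, key=scores.get)                    # highest-scoring model
runner_up = sorted(scores.values(), reverse=True)[1]  # second-best score
lead = scores[best] / runner_up - 1                   # relative margin
print(f"{best} leads the runner-up by {lead:.0%}")    # ~34% higher
```

A roughly one-third relative lead on a benchmark this hard is what makes the result stand out.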
Google continues to drive forward with its AI development at a rapid pace. Just last week the company made Gemini Deep Research free for all and followed that up with improvements to its impressive AI podcasting tool, NotebookLM.
We'll be testing Gemini 2.5 Pro and putting the new Experimental model through its paces, so stay tuned to TechRadar for further Google AI coverage.
Hate Windows 11’s search? Microsoft is fixing it with AI, and that almost makes me want to buy a Copilot+ PC
- Windows 11 looks like it’ll get an AI-supercharged search for Copilot+ PCs
- This will allow natural language queries and leverage the on-board NPU to process them
- The feature is progressing through testing nicely, and so might be released soon enough
Windows 11 looks like it’ll get its basic search functionality seriously bolstered, with a natural language searching feature progressing nicely through testing – but it’s only for those with Copilot+ PCs.
These ‘local semantic search’ powers have arrived in the latest preview release in the Beta channel (build 26120.3585, as noticed by Neowin), for Copilot+ laptops with AMD or Intel processors. Furthermore, they’ve also turned up in Release Preview for Snapdragon (Arm-powered) Copilot+ PCs.
The move means you can use natural language for a search query in Windows, such as “find photos of me with my dog” or “find that document which is my holiday packing checklist,” rather than having to remember any exact file names.
This doesn’t just work in terms of searching your files and folders (meaning File Explorer), but also with searches in the Settings app – so you can perform queries such as “show me the Bluetooth devices connected to my PC” to pick out another example.
All of this leverages the power of the NPU of the Copilot+ PC. All the processing is done locally, with no data sent to the cloud, which obviously means that you don’t have to be connected to the internet.
Also worked into this particular piece of functionality is the ability to use this AI-enhanced search to find photos in the cloud, should you wish.
Microsoft explains: “In addition to photos stored locally on your Copilot+ PC, photos from the cloud will now show up in the search results together. In addition to searching for photos, exact matches for your keywords within the text of your cloud files will show in the search results.”
This is for OneDrive only for now, but Microsoft says it’s working to bring support to third-party cloud storage services.
As for caveats, right now, searching for Windows settings will only work within the Settings app itself, but the eventual aim is to have these results flagged from the search box on the desktop taskbar (as is the case with the normal search function).
It’s worth noting that if you are a Windows tester in the Beta channel, this feature is only gradually rolling out, so you may not see it for a while yet (and you may need a couple of reboots of your Copilot+ PC to fully trigger the AI-bolstered search when it does turn up).
A natural language search is a nifty ability for Windows 11, and a good use of that NPU. Windows 11’s search has always been rather sluggish and lacking – often not just slow, but failing to find anything useful and flagging up weird results (or pointless web content). It’s been a long-complained-about area of Windows (the same is true of Windows 10), so hopefully this will go some way towards pepping up the overall experience, as well as making the functionality a lot more convenient.
Of course, with semantic indexing, Microsoft’s AI is effectively cataloguing (read: rifling through) all your files so that search works in a more timely and responsive manner. Hence the company’s clarification that all processing and data stays local and doesn’t leave your PC – due to the potential privacy implications otherwise. This is especially important because, as Microsoft notes elsewhere: “Semantic indexing is enabled by default on Copilot+ PCs.”
You can turn it off, mind, or you can selectively exclude certain files or folders (or drives). All these options are housed in the Settings app, in Privacy & Security > Searching Windows > Advanced indexing options.
This AI-driven search feature was seen in the Dev channel a while ago, so the fact that it has progressed to Beta (and Release Preview for Snapdragon-powered Copilot+ PCs) suggests it’s close to arriving in the finished version of Windows 11 for these devices.
Still, we can never be sure any feature in testing will see the light of day, but it seems very likely in this case. As it’s a complex piece of functionality, though, Microsoft could still have some tweaking and debugging on its plate. This is something Microsoft really needs to nail for release, as it’ll show off a considerable advantage of a Copilot+ PC if it turns out well – which will be a much-needed addition to the list of selling points for these computers.
DeepSeek’s new AI is smarter, faster, cheaper, and a real rival to OpenAI's models
- Chinese AI startup DeepSeek has released an upgraded AI model called V3-0324 to Hugging Face
- V3-0324 offers improved reasoning and coding abilities over its predecessors
- DeepSeek claims its AI models can match or beat those of American AI developers like OpenAI and Anthropic
DeepSeek dropped a major upgrade to its AI model this week, which has people buzzing almost as much as they did when the Chinese AI startup first made its splash earlier this year. The new DeepSeek-V3-0324 model is now live on Hugging Face, setting up an even starker rivalry with OpenAI and other AI developers.
According to the company's tests, DeepSeek's new iteration of its V3 model boasts measurable boosts in reasoning and coding ability. Better thinking and coding might not sound revolutionary on their own, but the pace of improvement and DeepSeek's plans make this release notable.
Founded in 2023, DeepSeek has been moving fast, starting with the December 2024 release of the original V3 model. A month later came the R1 reasoning model. Now comes V3-0324, named for its March 2025 release.
DeepSeek demand
The improvements bring the model to near-parity with OpenAI’s GPT-4 or Anthropic’s Claude 2 models. But even if they aren't quite as powerful, they run a lot cheaper, according to DeepSeek.
That's ultimately a huge selling point as AI use, and thus AI costs, continue to increase. Training AI models is notoriously expensive, and OpenAI and Google have huge cloud budgets that most companies couldn't reach without partnerships like OpenAI's with Microsoft. That exclusivity vanishes if DeepSeek's cheaper achievements become more common.
U.S. dominance of AI models is starting to slip anyway, thanks in part to Chinese startups like DeepSeek. It no longer seems shocking when the hottest model emerges from Shenzhen or Hangzhou. Geopolitical considerations, as well as business concerns, have spurred calls to ban DeepSeek, at least from U.S. government use.
While you probably won't see DeepSeek’s latest release changing everything for your schedule tomorrow, it hints that the ballooning demand for computational power and energy to fuel next-generation AI might not be as staggering as feared.
It also just might mean that the AI chatbot rewriting your next resume or debugging your website also speaks fluent Mandarin.
OpenAI unveiled image generation for 4o – here's everything you need to know about the ChatGPT upgrade
While it’s not another 12 days of news from OpenAI – or at least, we hope not – the company behind ChatGPT did have a quick live stream on March 25, 2025.
The news? Well, while the AI behemoth was tight-lipped in the lead-up, OpenAI did debut native image generation for the 4o model.
It makes the teaser image of someone writing “Livestream at 11AM PT” on a classic, dark green chalkboard make a lot more sense.
OpenAI's much-improved image generation skills are debuting shortly after Google added native image generation to Gemini inside its AI Studio.
Quite possibly the best news, though, is that OpenAI isn't wasting time with the rollout. During the stream, it started rolling out the features to ChatGPT users, and native image generation for the 4o model is available now for all users, regardless of plan. Pro and Plus subscribers get more access, as you might suspect, while folks on the free plan will deal with some limits.
In our early testing, the quality of the images requested was certainly improved, but these took longer to create. The latter is something that OpenAI called out during the live stream, but it could also be that the company is ramping up resources to handle the demand right after the launch.
Ahead, you can see TechRadar's live blog from the event, in which OpenAI CEO Sam Altman walked us through the news, plus updates from after the stream wrapped as we put the new feature to the test.
Well, the livestream title is shedding a lot more light on what we can expect ... way more than the initially teased image. It's titled "4o Image Generation in ChatGPT and Sora," so we're likely getting improvements to creating images within both ChatGPT and Sora.
The mention of the latter might mean more general improvements for text-to-video generation as well.
Under 15 minutes to go now!
OpenAI's live stream has begun, and in the lead-up to the 2PM ET / 11AM PT / 6PM GMT start time, we're being treated to various images. Some of these overlap, but it refreshes every few seconds and shows off all the different styles.
The live stream description notes we'll be hearing from Sam Altman, Gabriel Goh, Prafulla Dhariwal, Lu Liu, Allan Jabri, and Mengchao Zhong discussing 4o image generation.
And we're off to the races – Sam Altman is calling this one of the most fun advancements: native image generation in the 4o model. He quickly noted 'it's a huge step forward' and something OpenAI has been excited to roll out for quite some time, for a whole host of folks.
Altman notes the best way to explain it is to show it off, so we're already into a demo. Just a few seconds after the prompt, OpenAI showed off an image with what the team called 'perfect text' – seemingly a leap in understanding the prompt and rendering the image with clear text, plus a unique point-of-view effect.
In the second demo, the OpenAI team took a selfie and then asked ChatGPT to make it into an 'anime style.' It took several seconds, but it did indeed generate what was requested. You can see it above.
Sam Altman was then quick to note that the improved image generation is starting to roll out now in ChatGPT and Sora for Pro users, and it will be available for free users as well.
We're also seeing the native image generation model within 4o at work, turning that generated selfie into an "AGI meme."
Sam Altman also teased that the native image generation model within 4o is designed to be a little offensive within reason if that's what you direct it to. The key phrase there is "within reason," and no doubt many users will put that to the test.
Now, the next demo asks for a colorful image describing the theory of relativity, with some added humor. Altman also noted that the image generation model is a bit slower, but that the results are much higher quality.
Considering the improved image generation is already available – or at least rolling out – TechRadar's editor-at-large, Lance Ulanoff, already tested the feature.
Lance took a selfie and uploaded it to ChatGPT via the iPhone app, then asked for it to be turned into anime style. The first time, it gave him a full head of hair, but ChatGPT corrected this when he asked for it to be bald.
Back to the live demos: OpenAI is showing that we can now chat with ChatGPT more visually. You can make a series of image requests in a row, and it will remember the context.
In this example, a photo of a coin was sent, and then the team asked ChatGPT to make it transparent, among other requests.
OpenAI covered quite a bit of ground in about 15 minutes. Sam Altman and the team debuted native image generation in the 4o model, presented some demos, and before the stream wrapped, we had already tested the feature in the ChatGPT app for iPhone.
Now, as OpenAI announced, the improved model is rolling out now to Pro users, but is also coming to free users. Altman also confirmed it will eventually arrive in the API as well.
We just put image generation in the 4o model through another test, this time asking for a cartoon strip in the style of Charles Schulz's "Peanuts." While ChatGPT acknowledged the request, it turned it down due to copyright.
Instead, the resulting comic strip is in a similar style, with two familiar-looking characters given new names and other qualities to distinguish them from the originals.
Now that OpenAI's improved image generation has been out for the public to use for several hours, let's walk through how it's gone so far.
On my side, I have a free ChatGPT account and quickly used up my daily allotment in the span of only a few minutes, generating a few images of a dog. When I asked for a fourth, I got the response:
"It seems like I can’t generate any more images right now. Please try again later. Let me know if there’s anything else I can do for you!"
As with most new features, the free tier of ChatGPT will have some limitations.
My colleague Lance Ulanoff, who's on ChatGPT Plus, has had better luck, though the time frame for generating images has stretched far, in some cases generating an alert that the request timed out and to retry, though the image was ultimately generated.
That could result from the network connection, potentially heavy loads of interest on OpenAI's servers, or even a limit ... though the latter doesn't seem as likely for a paid user.
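Whatever the cause, timeouts under heavy launch load are usually handled client-side with retries. A minimal sketch of the generic retry-with-exponential-backoff pattern – the `generate_image` callable here is hypothetical, standing in for any flaky generation call, not OpenAI's actual API:

```python
import random
import time


def generate_with_retry(generate_image, prompt, max_attempts=4, base_delay=1.0):
    """Retry a flaky image-generation call with exponential backoff.

    `generate_image` is a hypothetical callable that raises TimeoutError
    when the service is overloaded; this is a generic client-side pattern,
    not OpenAI's documented behavior.
    """
    for attempt in range(max_attempts):
        try:
            return generate_image(prompt)
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # Wait base_delay, 2x, 4x, ... plus jitter to avoid
            # all clients retrying in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

This mirrors what the ChatGPT app appears to do on Lance's account: retry after the timeout alert, with the image ultimately coming through.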
Apple just announced WWDC 2025 starts on June 9, and we'll all be watching the opening event
- Apple's announced that WWDC 2025 will run from June 9 to June 13
- The week-long developer conference will start with a special event on June 9
- It should be the next big chance for Apple to provide an update on Apple Intelligence and for new software to be unveiled
We’ve all been expecting Apple’s next event to be in June, and the Cupertino-based tech behemoth just made it official. WWDC – aka World Wide Developers Conference – is returning, and the week-long affair will kick off with a special event on June 9, 2025.
It’s safe to say that Apple has a lot riding on the special event, as it will be almost a year to the day that Apple Intelligence was unveiled, and in the 365 days since then, there’s been a lot of news.
Most recently, Apple officially confirmed a delay with the AI-infused Siri, saying it’ll arrive ‘in the coming year.’ We’re all expecting Apple – likely in the form of CEO Tim Cook or SVP of Software Engineering Craig Federighi – to give a state of the state of sorts on the feature set.
In typical Apple fashion, the company is tight-lipped about what to expect from WWDC 2025. We have a new graphic with “WWDC” in the iconic rainbow Apple colors, and the “25” at the end of the event logo has some dimension to it, potentially hinting that the rumored redesign of iOS, iPadOS, and macOS will take a page from VisionOS.
Apple's state of the state, and hopefully an update on Apple Intelligence
In the shared release, Apple teases that the week will “spotlight the latest advancements in Apple software,” likely hinting at the release of iOS 19, iPadOS 19, macOS 16 (insert fun California-themed name), watchOS 12, VisionOS 3, as well as new versions of tvOS and the software for HomePod and HomePod mini.
Susan Prescott, Apple’s vice president of Worldwide Developer Relations, writes, “We’re excited to mark another incredible year of WWDC with our global developer community. We can’t wait to share the latest tools and technologies that will empower developers and help them continue to innovate” – certainly powering the hype train out of the station.
Greg Joswiak, Apple’s SVP of Marketing, took to X (formerly Twitter) to suggest we all save the week, hinting at a lot of news and sharing an animated version of the WWDC 25 logo that certainly has some bounce.
“You’re gonna want to save the date for the week of June 9! #WWDC25 pic.twitter.com/gjzYZCkPbA” – March 25, 2025
As with previous years, WWDC 2025 will be available online and free for all registered developers, but there will be an in-person component happening at Apple Park in Cupertino, CA. This will be a chance for folks to watch the keynote and platforms' state of the union as well as take part in workshops. Space is limited, though, and registration is required. Regardless, TechRadar will have boots on the ground and be the place to be for the news as it breaks.
The real question, though, on our mind is how Apple Intelligence is positioned going forward, and what non-AI developments Apple has in store for the software that powers the iPhone, iPad, Mac, Apple Watch, Apple TV, Apple Vision Pro, and even AirPods.
Apple’s top team will need to explain where the AI-infused Siri is, how the timeline has shifted, and, most importantly, how Apple will stick to the new one. The most recent report is that Mike Rockwell – the VP behind the Vision Pro and getting it to market – is now in charge of Siri, reporting directly to Craig Federighi.
Could we finally get a true redesign of iPadOS, making it more Mac-like and letting folks with an iPad Pro take advantage of the M4 chip? Will there be some impressive new Continuity features in the same vein as iPhone Mirroring? Might the redesign be a gargantuan leap impressive enough to push the appeal of Apple hardware?
The stakes are high, and I hope we’ll get some major news. But for now we just have to wait 76 days – and counting – until Tim Cook takes the stage, says “Good morning,” and hopefully provides more context around Apple Intelligence and the strange, strange rollout it’s taken.
OpenAI just launched a free ChatGPT bible that will help you master the AI chatbot and Sora
- OpenAI launches OpenAI Academy
- The free resource has all the info you need on how to master ChatGPT and Sora
- The AI bible includes live streams, videos, and in-person events
OpenAI just launched an incredible AI resource bible called OpenAI Academy, and it could be the catalyst for you to finally try ChatGPT.
Announced on the company's blog, OpenAI Academy is a publicly available, free online resource hub that will help "support AI literacy and help people from all backgrounds access tools, best practices, and peer insights to use AI more effectively and responsibly".
With in-person events, live streams, and content to explore at your own pace, the Academy could become your go-to resource for all things ChatGPT and Sora.
There are plenty of excellent AI resources on the internet. In fact, you're reading one just now. OpenAI Academy, however, gives users a go-to educational tool created by the makers of ChatGPT to help teach the right practices for using AI.
You can access OpenAI Academy without paying a dime; all you'll need to do is sign up for an account. You can access the resource here.
The perfect companionAI is rapidly evolving and changing the way we interact with technology. As someone who writes about AI and uses it daily, OpenAI Academy is the kind of resource I've been waiting for.
Initially, OpenAI Academy launched as an in-person event, so it's fantastic to see the resources made available to anyone with access to the internet.
From tips on how to get started with Sora and how to craft a storyboard, to how to create custom GPTs and use Deep Research in ChatGPT, there's a guide for almost all your needs.
Considering companies charge for educational courses on AI, OpenAI's offering is a steal for free. So whether you use ChatGPT or Sora daily, or you've been hesitant to try them because they can be overwhelming, OpenAI Academy has you covered.
Forget Android XR, I've got my eyes on Vivo's new Meta Quest 3 competitor as it could be the most important VR headset of 2025
- Vivo has shown off its mixed reality headset
- The device looks just like an Apple Vision Pro
- It signals Vivo's big push into... robotics?
The Meta Quest 3 is the best VR headset for most people thanks to its impressive performance and reasonably affordable price. After fending off 2024’s upstart, the Apple Vision Pro – which failed to properly explain why anyone should spend a ridiculously high sum on it – Meta’s Quest is set to face Samsung and Google’s Project Moohan Android XR headset in 2025. But there might be an even bigger threat.
That's because Vivo – a Chinese electronics company – has just debuted its Vivo Vision MR headset, which could be the real headset to watch this year.
The prototype Vivo put on display looks nearly identical to an Apple Vision Pro – right down to the battery pack you put in your pocket to keep the device powered and portable. Heck, you could have both the Apple and Vivo headsets next to each other and most people wouldn’t be able to tell them apart.
Under the hood, I expect there are plenty of differences – but right now, it’s unknown what is powering the Vivo headset.
Beyond a vague mid-2025 debut for the prototype, Vivo has remained tight-lipped on the device’s specs, weight, battery life, and price. Its phones, though, generally land somewhere between the mid-range and affordable flagship brackets, with the Vivo X200 Pro, for example, undercutting similarly specced rivals like the iPhone 16 Pro.
If this headset can find a way to deliver premium performance at a more affordable price than other high-end models it could serve up some tough competition to Meta in the regions where both the Vision MR and Quest 3 headsets are available.
Speaking of, Meta’s big advantage in this fight will be that the Vivo device is likely to launch exclusively in China and some other Asian countries rather than getting a full global release. But even if it’s confined to one continent and never makes it to the US, Vivo’s headset could be a fascinating launch to watch.
Get ready for a revolution

What’s interesting about Vivo is that XR tech seems like an afterthought rather than its primary objective.
Vivo explains the headset is part of its strategy to “strengthen its real-time spatial computing capabilities” – not to develop sleek AR glasses, which appears to be Meta and Samsung’s goal with their respective Meta Orion specs and leaked smart glasses plans, but for “future applications in consumer robotics.”
At the same time, Vivo has announced it’s establishing a new robotics lab in China.
AI robots – from autonomous vehicles to humanoid assistants – are picking up a lot of steam in the tech space right now, with high-profile companies like Nvidia and Tesla making their lofty robot-based goals known in recent months.
Mixed reality headsets have to do a lot of spatial processing to create realistic experiences that blend your real and virtual worlds, so the tech would seemingly be useful in robotics too – especially for in-home helper robots that need to know how to navigate around and recognize different items of furniture (something headsets can already do).
Meta has robotics plans of its own, judging by the work of its researchers and leaked memos, but it has yet to make those plans public. If it doesn't react soon, it could find its XR lead slipping away in what looks to be the sector’s next frontier.
We’ll have to wait and see what’s announced in the coming months, but of all the XR headsets launching this year I think the Vision MR has a shot at being by far the most interesting.