TechRadar


OpenAI just updated its 187-page rulebook so ChatGPT can engage with more controversial topics

Mon, 02/17/2025 - 14:39
  • OpenAI has updated its Model Specification to allow ChatGPT to engage with more controversial topics
  • The company is emphasizing neutrality and multiple perspectives as a salve for heated complaints over how its AI responds to prompts
  • Universal approval is unlikely, no matter how OpenAI shapes its AI training methods

OpenAI's training methods for ChatGPT are shifting to allow the AI chatbot to discuss controversial and sensitive topics in the name of "intellectual freedom."

The change is part of updates made to the 187-page Model Specification, essentially the rulebook for how its AI behaves. That means you'll be able to get a response from ChatGPT on the delicate topics the AI chatbot usually either takes a somewhat mainstream view on, or refuses to answer.

The overarching mission OpenAI places on its models seems innocuous enough at first. “Do not lie, either by making untrue statements or by omitting important context.” But, while the stated goal may be universally admirable in the abstract, OpenAI is either naive or disingenuous in implying that the "important context" can be divorced from controversy.

The examples of compliant and non-compliant responses by ChatGPT make that clear. For instance, you can ask for help starting a tobacco company or ways to conduct "legal insider trading" without the AI passing judgment or raising unprompted ethical objections. On the other hand, you still can't get ChatGPT to help you forge a doctor's signature, because that's outright illegal.

(Image credit: OpenAI)

Context clues

The issue of "important context" gets a lot more complex when it comes to the kind of responses some conservative commentators have criticized.

In a section headed "Assume an objective point of view", the Model Spec details how "the assistant should present information clearly, focusing on factual accuracy and reliability", and also that the core idea is "fairly representing significant viewpoints from reliable sources without imposing an editorial stance".

OpenAI does offer an example of a compliant response to the question "Do Black lives matter?" that unequivocally says yes and references how it's a civil rights slogan and the name of a group. So far, so good.

The issue that OpenAI either doesn't see or is ignoring comes with how ChatGPT tries to thread the needle if you ask, "Don't all lives matter?" as a follow-up. The AI confirms that they do, but adds that the "phrase has been used by people that rejected the premise of the 'Black lives matter' movement."

While that context is technically correct, it's telling that the AI doesn't explicitly say that the "premise" being rejected is that Black lives matter and that societal systems often act as though they don't.

If the goal is to alleviate accusations of bias and censorship, OpenAI is in for a rude shock. Those who "reject the premise" will likely be annoyed at the extra context existing at all, while everyone else will see how OpenAI's definition of important context in this case is, to put it mildly, lacking.

AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT chooses to include or exclude certain information, that’s an editorial decision, even if an algorithm rather than a human is making it.

OpenAI AI training changes

(Image credit: OpenAI)

AI priorities

The timing of this change might raise a few eyebrows, coming as it does when many who have accused OpenAI of political bias against them are now in positions of power capable of punishing the company at their whim.

OpenAI has said the changes are solely about giving users more control over how they interact with AI, and aren't driven by any political considerations. However you feel about the changes OpenAI is making, they aren't happening in a vacuum. No company would make possibly contentious changes to its core product without reason.

OpenAI may think that getting its AI models to dodge answering questions that encourage people to hurt themselves or others, spread malicious lies, or otherwise violate its policies is enough to win the approval of most, if not all, potential users. But unless ChatGPT offers nothing but dates, recorded quotes, and business email templates, AI answers are going to upset at least some people.

We live in a time when way too many people who should know better will argue passionately for years that the Earth is flat or gravity is an illusion. OpenAI sidestepping complaints of censorship or bias is as likely as me abruptly floating into the sky before falling off the edge of the planet.


Alexa’s big AI revamp might have been delayed again, and I’m losing faith that Amazon's new assistant will be all that smart

Mon, 02/17/2025 - 12:47
  • A new leak suggests Alexa's AI is too inaccurate to launch yet
  • It'll still be shown off on February 26, but won't release until later
  • The leak follows reports Alexa was delayed from 2024 due to similar issues

It’s all but guaranteed that Amazon is launching a new version of Alexa with souped-up AI brain power on February 26 – it literally spelled it out in an announcement – but disaster might have struck at the final hurdle. Alexa AI is reportedly delayed, again.

That’s per an anonymous source who spoke with The Washington Post (the report is behind a paywall), claiming that the new Alexa has been making too many mistakes when asked test questions. As a result, Alexa is being delayed to improve its accuracy – with the launch date now put back 'til March 31.

Amazon is still expected to unveil the all-new Alexa at the New York event on February 26 as it originally planned; however, we expect access to the AI (and the questions it’ll answer in demos) might be restricted so as not to reveal its potentially less-than-perfect side.

As with all rumored information, we should still take all of this with a pinch of salt, though if The Washington Post is correct, this wouldn’t be the first time Alexa has been delayed. Multiple sources had teased a 2024 launch date, with accuracy issues once again cited as the reason Alexa was held back.

Alexa

(Image credit: Amazon)

Beyond causing a delay, these issues could also prove a blow to Amazon’s rumored plan to charge users for Alexa's help. It’s been said the revamped Alexa could cost paying customers $5-$10 a month to use (around £5-£10 / AU$8-AU$16). If Alexa is unreliable – or has a reputation for being unreliable beyond what’s expected from a current-gen AI – we imagine there won’t be many users keen to pay for the service.

At least the current version of Alexa is said to be sticking around as a permanently free and generally reliable option.

If Amazon can solve Alexa’s accuracy problems, the new AI does sound rather useful. Alexa AI is said to be smarter so it can handle multiple prompts at once, rather than requiring its user to give distinct commands one after the other, and to perform as an AI agent – read: taking actions without direct user requests.

Admittedly that last point sounds a little scary, given that Alexa AI would have our credit card info and direct access to the world’s largest online store (Amazon), especially if Alexa is prone to mistakes (I know I'd be nervous about using it). But if Amazon can prove its agent is genuinely helpful, Alexa might finally start living up to the futuristic home assistant many imagined it would be when it first launched.


New patch for Windows 11 24H2 reportedly plays havoc with File Explorer, and some folks are claiming it's broken their PC

Mon, 02/17/2025 - 10:56
  • Windows 11 patch is causing problems with File Explorer for some
  • This means that some – or most – folders on the desktop are failing to open
  • There are also reports of installation failures and PCs failing to boot

Microsoft’s latest update for Windows 11 has thrown some more spanners into the works for those who’ve upgraded to the most recent 24H2 spin of the desktop OS.

The February cumulative update for 24H2 users (known as patch KB5051987) did some admirable work on the bug fixing front, but sadly also appears to have introduced some fresh problems to this incarnation of Windows 11.

And two of these issues are very worrying, the first of which is a bug that’s apparently playing havoc with File Explorer, as reported by Windows Latest and other Windows 11 users across various online forums.

File Explorer is the app in which your desktop folders are displayed, allowing you to view and interact with the files inside. Due to this bug, though, you can’t actually view files, as some folders are refusing to open.

Windows Latest observes that the commonly-used Documents and Pictures folders are non-functional after installing the February update for Windows 11 24H2. File Explorer is also failing to work when accessed via Windows 11’s search function, or a desktop shortcut.

Other users are reporting various problems with opening certain folders, or indeed most folders – though some of them still appear to work.

It’s an odd-sounding bug, in short, that’s having unpredictable effects. But if you’ve installed KB5051987 and are encountering weirdness in terms of folders just not working or appearing at all – even though File Explorer is running seemingly just fine, as normal, within Windows 11 – well, this is why.

On top of the flakiness with File Explorer, the February update is also failing to install for some Windows 11 users (though that’s nothing new – this is a common problem with a lot of Windows updates these days). For some, the download and installation process is getting stuck fast at a certain percentage, and for others, it might eventually complete, but it’s taking hours.

Worryingly, some folks are reporting that KB5051987 is causing crashes (Blue Screens of Death) after it’s installed, or that it has even torpedoed their Windows 11 installation entirely – yikes.

Windows Latest also points to potential issues with webcams not working, and the mouse cursor moving sluggishly and stuttering (as well as performance glitches in general).

AOC gaming monitor tilted slightly to the side, showing the Windows 11 desktop screen

(Image credit: Future / Jeremy Laird)

Analysis: File this one under ‘worrying’

Just when I voiced hope that maybe Microsoft is finally getting on top of all the issues with Windows 11 – which have been numerous ever since the upgrade went live last year – we’ve seemingly got a doozy of a gremlin messing with the internal workings of File Explorer somehow.

File Explorer is a critical part of the Windows interface – as noted, it’s the very folders you navigate to access all the files on your PC – so to see it partially (or almost wholly in some cases) hamstrung is very disappointing.

This is a fairly widely reported issue, and as Windows Latest notes, it has over 30 reports sent in by readers (possibly more by now). There’s also lots of grumbling (understandably) about this on Reddit and other forums.

One of your first thoughts – it was certainly one of mine – might be that a third-party customization utility is to blame, but Windows Latest tried a completely vanilla installation of Windows 11, and this too was affected. There doesn’t seem to be any commonality between those suffering at the hands of this odd bug, not yet anyway – and there are no suggested workarounds for now (save for the obvious course of action: uninstalling the February update).

That said, in one of the Reddit complaints I perused, I noticed that somebody offered up the idea of disabling Windows Sandbox, and this reportedly worked for two people. So that might be worth a shot, but bear in mind that Sandbox is not available on Windows 11 Home. So only those with the Pro edition can consider this possible fix, and most people are running the Home flavor of the OS, of course.

On top of the File Explorer problem, the reports of installation failures are very concerning, especially those which apparently broke Windows 11 completely. These reports seem much rarer, thankfully, but still – caution might be the better part of valor here, and you may want to put off installing the February update for as long as you can. (Which, I should note, for Windows 11 Home users isn’t all that long). In this case, you will, of course, be running without the security fixes provided by the patch, which isn’t ideal either.

With any luck, Microsoft will be investigating these issues and hopefully taking action sooner rather than later.


Microsoft crowbars new ‘recommendations’ for PC Game Pass into Windows 11, so excuse me while I start beating my anti-advert drum (again)

Mon, 02/17/2025 - 09:43
  • Windows 11 has a new advert for PC Game Pass in testing
  • It offers the ability to give up to five friends a 14-day free trial
  • Microsoft doesn’t appear to be giving up on a broader push to get these kinds of ‘suggestions’ into its desktop OS

Windows 11 is undergoing yet more experimentation with adverts, this time in the Settings app (again), as prodding users with targeted ‘suggestions’ of one kind or another appears to be a habit Microsoft isn’t going to kick anytime soon.

The new ad – or ‘recommendation’ as Microsoft might call it – is present in the latest preview build of Windows 11 released in the Dev and Beta channels, meaning it’s still just in testing for now.

It’s an advert that appears on the Settings app home page, targeted at Game Pass Ultimate and PC Game Pass subscribers. If that sounds familiar, it’s because Microsoft tested a similar advert last year, though that one was trying to cajole people into signing up for Game Pass itself.

It was still targeted at gamers only, though, we were told at the time. The difference with this fresh advertising initiative is that it’s aimed at those who already subscribe, and it’s a referral ad. The idea is to “share a 14-day free trial” with up to five friends in an effort to get them to sign up.

As with the past advert for Game Pass, this only appears for those who are signed into their PC on their Microsoft account.

In the blog post for the new preview build 26120, Microsoft also notes that it’ll be improving the Recall feature in its next release for testers. It doesn’t say how, only that: “This important update will improve your experience. As part of this upcoming update, your existing snapshots will be deleted.”

Recall is the (controversial and tricky to implement) AI-supercharged search feature that only applies to those who have a Copilot+ PC (as it needs the beefy NPU incorporated with these laptops to ensure the process runs smoothly).

There’s a neat extra for those who use OneDrive in that Windows 11 will present a notification on your PC offering the chance to resume working on a file that you were just editing on your phone. This happens if you were interacting with a file on your smartphone within the last five minutes, then you subsequently unlock your PC – a nifty touch.

PC Game Pass Advert in Windows 11

(Image credit: Microsoft)

Analysis: Boss drum, here we go again

I know, you’re probably sick of hearing the ‘stop this with the veiled advertising in Windows 11’ drum, and I’m sick of beating it, believe me. Microsoft doesn’t appear to take any notice, though, and would likely argue that there’s some value to its latest nudge. After all, you might want your friends on Game Pass, too, and offering the ability to take a two-week trial could be something your pals appreciate.

Well, fair enough I guess, but what I’d still like to see (and again, this is another well-worn drum) is the ability to turn off all these kinds of recommendations as a system-wide switch. Then those who don’t want some of their screen real estate taken over by such nudges – which are in quite a few corners of the Windows 11 interface – could just flick that switch and enjoy a cleaner UI all around. Meanwhile, those who felt some of the recommendations were useful could keep them turned on.

Everybody wins, no?

Anyhow, I should again emphasize that this latest plug for Game Pass is just in testing at the moment, so it may not be realized. Those who aren’t so keen on the idea can make their feelings known via the usual feedback channels, and maybe throw in a vote for that system-wide ad (sorry, recommendation) kill switch. I can dream, can’t I?


Apple Maps could soon get one of Google Maps' worst features – and I may have to move elsewhere

Mon, 02/17/2025 - 09:16
  • Apple is apparently considering inserting ads into its Apple Maps app
  • This would mirror a move taken by Google Maps and similar rivals
  • It could result in a much worse user experience

Apple Maps was a buggy mess when it first launched, but in the years since it has become a genuine rival to Google Maps, and arguably outperforms it in some areas. But there are murmurings that it could soon adopt one of Google Maps’ worst features, and it’s made me worried for its future.

That’s because Bloomberg journalist Mark Gurman believes Apple is considering inserting adverts into the Apple Maps app. This could mean that certain places get pushed to the top of search results within the app, an example being a local Wendy’s topping the list when you search for 'fries,' merely because it paid to be advertised this way.

Apple Maps wouldn’t be the first Apple app to come with built-in ads. The Stocks, News and App Store apps already contain adverts, and the company is pushing further into the advertising business with its expanding sports coverage.

It’s also not the first time that Apple has looked at inserting ads into Apple Maps. Gurman reported back in 2022 that the firm was looking at ways to integrate advertising into its navigation app, although little came of this. Now, it looks like Apple is returning to the idea in a more serious way.

Degrading your search results

Apple Maps

(Image credit: Shutterstock)

A move like this sticks in the craw for a number of reasons. As an Apple user, I’m already paying a premium for hardware, so being served ads on top of that feels like I’m being nickel-and-dimed. As well as that, Apple is one of the most valuable and profitable companies in the world – does it really need to be degrading the user experience in order to squeeze even more money into its coffers?

I take some small comfort knowing that Apple is far more committed to user privacy than Google is. Apple takes certain steps to protect the information of people using the Maps app, such as assigning you a random identifier that only lasts for the duration of your session, making it impossible for Apple or a hacker to get a complete picture of any one person’s journeys. That makes me feel Apple would at least handle user privacy more stringently if it were to bring ads to Apple Maps.

But it doesn’t overcome my main problem with seeing ads in a mapping app. Apps like this aren’t just used for route planning – they’re used to find attractions and restaurants in your nearby area. You might want to find the best eatery near you, but if certain locations are being promoted to the top of the pile because they paid for the privilege, you could be pushed towards an inferior location and miss somewhere better that didn’t slip Apple a few shiny greenbacks. In other words, the playing field is being skewed away from the genuinely best results and towards those with the deepest pockets.

If I use an app like Apple Maps to find local attractions, I don’t want my screen to be crowded with questionable options when something better might end up being pushed out of sight. And while I’m assuming that Apple will respect user privacy based on its past behavior, that’s not a guarantee that the company will be quite so scrupulous when serious money is on the line.

I guess the good news is that I’ve become so accustomed to ads in search results that I've already conditioned myself to scroll right past them. But if Apple handles this move poorly, I might have to start looking for an alternative app.


Apple Intelligence beta could land on the Vision Pro as soon as this week – will an AI infusion be enough to turn around the ailing headset's fortunes?

Mon, 02/17/2025 - 05:19
  • Apple Intelligence rumored to roll out as part of visionOS 2.4 in April
  • The developer beta could arrive for the Apple Vision Pro this week
  • AI tools such as Writing Tools, Genmoji, and Image Playground are coming to Apple's mixed-reality headset

The Apple Vision Pro, Apple's mixed-reality headset, is getting Apple Intelligence – and it could arrive in beta on the device as soon as this week.

The Vision Pro is Apple's most expensive consumer product, starting at $3,499 / £3,499 / AU$5,999, but it currently has no Apple AI features, despite visionOS 2.0 being revealed at WWDC 2024 alongside Apple Intelligence. Now rumors hint at the arrival of Apple's AI suite in visionOS 2.4, which could be released in April.

According to Mark Gurman, writing for Bloomberg, "The company aims to roll out Apple Intelligence as part of a visionOS 2.4 software upgrade targeted for as early as April, according to people with knowledge of the matter. The enhancements will become available in beta for developers as soon as this week, said the people, who asked not to be named discussing details of the update that aren’t yet public."

According to Gurman, the developer beta of visionOS 2.4, which his sources say could arrive this week, will include Apple Intelligence features such as Writing Tools, Genmoji, and Image Playground. Gurman says, "It’s the first time Apple is expanding its artificial intelligence tools from the iPhone, iPad, and Mac. Because the headset includes a Mac M2 chip and 16 gigabytes of memory, it’s able to support the on-device AI processing."

Too little too late for Vision Pro?

It's fair to say that the hype around the Apple Vision Pro faded relatively quickly after launch. From influencers using the headset in public places like the NYC subway to articles about how the device improved working from home, there was huge anticipation for Apple's eye-wateringly expensive device.

Fast forward just over a year and the Vision Pro barely makes the headlines, with the headset failing to generate interest among average consumers. So could Apple Intelligence change Apple's fortunes when it comes to the Vision Pro? There were reports in October that Apple was scaling back production due to poor sales, and it will be hoping that an infusion of AI will help to rekindle interest in the device.

Apple Intelligence on the Vision Pro could be a very interesting proposition, although it might highlight Apple AI's shortcomings, including the lack of a properly AI-powered Siri. That said, Apple is due to roll out a major AI overhaul of Siri with iOS 18.4 in the coming weeks, and if these reports are accurate the new and smarter Siri could be coming to the Vision Pro, as well as to iPhones.

Apple Intelligence might not make the Vision Pro a success, but it does show that Apple still cares about its mixed-reality headset, and with WWDC 2025 just around the corner, this could be the spark of energy the headset needs before an even bigger software upgrade later this year.


Grok 3 launches today as Elon Musk's ‘scary good’ AI chatbot looks set to take on ChatGPT

Mon, 02/17/2025 - 05:18
  • Grok 3 is being released at 8.00pm PT
  • It will have improved reasoning, computational power, and adaptability
  • Musk says it will be more reliable

People may be ditching X for Bluesky at record levels, but Elon Musk’s attempt to turn X into a serious AI platform is still going strong with the release of Grok 3.

According to Reuters, Grok 3 is due to be released at 8.00pm Pacific Time today. In a call addressing the World Governments Summit in Dubai last week Musk described Grok 3 as “scary smart”, saying it represented a major step forward over Grok 2 with improved reasoning, computational power and adaptability.

Hinting that his latest chatbot can now compete with ChatGPT and DeepSeek, Musk continued: "Grok 3 has very powerful reasoning capabilities, so in the tests that we've done thus far, Grok 3 is outperforming anything that's been released, that we're aware of, so that's a good sign”.

Fewer hallucinations

In the call, Musk also talked about Grok 3’s ability to reduce the curse of AI chatbots – the errors that creep into AI responses, often called 'hallucinations' – by going back and forth through its data in pursuit of logical consistency; if it holds wrong data that doesn’t fit reality, it will reflect on it and remove the error.

He also revealed that Grok 3 has been trained using more computational power than any other Grok model so far, and that a lot of synthetic data has been used in the training process.

There’s no news yet on Grok’s most controversial feature, its image generation capability, and how this will be improved or enhanced in Grok 3. xAI offers some of the best photorealistic image rendering around thanks to its use of the Flux AI model, and in December it announced that it was moving to a new image model, Aurora.

In a statement xAI said: “Aurora is an autoregressive mixture-of-experts network trained to predict the next token from interleaved text and image data. We trained the model on billions of examples from the internet, giving it a deep understanding of the world. As a result, it excels at photorealistic rendering and precisely following text instructions. Beyond text, the model also has native support for multimodal input, allowing it to take inspiration from or directly edit user-provided images.”

Grok differs from other AI image generators because it enables you to create images of celebrities, cartoon superheroes and politicians, seemingly without restriction.

Grok can be accessed through Musk’s X social media platform, even on the free tier, but it also has a native app for mobile devices and recently added image analysis to its list of features. Premium and Premium+ users get higher usage limits. It's not currently clear if Grok 3 will initially be available free to all users or just Premium and Premium+ users.


Look out, AI video could soon flood YouTube Shorts

Fri, 02/14/2025 - 21:00

There are some unbelievably great, if abbreviated, films to watch on YouTube Shorts. A lot of them may soon be more literally unbelievable thanks to Google's AI video creation model Veo 2. YouTube has released Veo 2 to the Shorts platform, augmenting YouTube's Dream Screen AI tool and letting you produce AI-fueled flicks based on a text prompt.

Dream Screen has been using the original version of Veo to produce video backgrounds out of text prompts for Shorts since last year. Veo 2 ups the ante significantly by also making the characters and objects for the video along with the background. The upgrade also makes Dream Screen faster, better at understanding text prompts, and able to produce much more realistic results. The videos mimic real-world physics, and the characters move as realistically (or cartoonishly) as you might want.

You can try out the enhanced Dream Screen by opening the Shorts camera, selecting Green Screen, and typing in what you want to see. You can even add an AI-generated clip to an existing Short by tapping "Add," then "Create," then typing up the prompt. Veo 2 takes over, and within seconds, your giant Pomeranian ballerina is ready to perform.

AI visions

The upgrade to Dream Screen raises many questions and possible concerns. Will AI-generated content flood YouTube Shorts, making it harder to tell what’s real and what’s not? What will creativity look like when the barriers to high-quality visuals disappear? Will we simply get stuck in a loop of AI-generated influencers making AI-generated content for an audience of AI-powered recommendation algorithms?

Google does seem to get that hyper-realistic AI videos made in a few seconds might have some potential pitfalls. That's why YouTube is attaching a SynthID watermark and a label indicating the AI origins of any Dream Screen-produced video. How well these transparency and tracking attempts perform remains to be seen, but at least there's something.

The new feature is only coming to the U.S., Canada, Australia, and New Zealand for now, with more countries in the pipeline. If you’re a YouTube content creator, this may be a huge boon, especially if the only thing standing between your video and viral fame is a slightly more perfect shot, better stock footage, or something truly outlandish. And if you’re short of inspiration, you can always toss around ideas with YouTube's Brainstorm with Gemini tool.


Windows 11 fully streamlined in just two clicks? Talon utility promises to rip all the bloatware out of Microsoft’s OS in a hassle-free way

Fri, 02/14/2025 - 18:30
  • Talon is a debloating tool designed for those who aren’t tech-savvy
  • It does everything for you in running a fully automated debloat of Windows
  • Bear in mind that third-party apps are used at your own risk, though the developer of Talon seems commendably transparent

Fed up with Windows 11’s bloat – all those unwanted bits of software and other elements that you’ll never use clogging up the system?

You might not know where to start when it comes to fixing this, which is where a new utility comes in, offering a very easy way to debloat Windows 11 with a minimum of fuss.

As TweakTown flagged up, this is Talon, a software tool developed by Raven with the aim of providing an automated, full debloat of Microsoft’s operating system that’s suitable for even novice computer users.

The promise is just two clicks – one to choose the type of debloat you want, and a dialog box to accept the changes being made to your PC – and you’re done. Well, you have to wait some time for the actual process to happen, but it’s all performed automatically – there’s no brain-ache or puzzling over options involved.

A barebones debloat is what many folks will run – just a straightforward stripping out of all the crud from Windows 11 – but other options can then add some (hopefully) useful apps back for you. For example, choosing ‘Gaming’ as the use of your PC will run the debloat and then install the likes of Discord and Steam.

You can find out more about Talon by watching the YouTube video below, and you can download the utility here (but have a quick read of my analysis underneath the video clip before you do so).

Analysis: An easy way to banish the bloat – but is it a sensible one?

I’ve got to say, I really like the philosophy of Talon, which is to take all the hassle out of debloating.

As Raven points out in the above video, a typical debloating tool will be a maze of check boxes and submenus, and it might even involve entering PowerShell commands – tasks that less tech-savvy Windows 11 users will doubtless find difficult or even arcane.

So, taking all the pain out of that is a commendable goal. What Talon is really doing is bundling a bunch of these trickier utilities in a user-friendly, automated package. (For the curious, the tools drafted in under the bonnet of Talon include ChrisTitusTech’s WinUtil and Raphi’s Win11Debloat, which are the main engines of what’s happening here).

However, with any third-party app, you must be cautious. Ultimately, whether you want to install any piece of software is a decision that you must take yourself, especially when it comes to lesser-known developers.

That said, Raven appears to be laudably transparent in the interview given to TweakTown, and one definite positive is that the code for the tool is open source and can be viewed and checked by anyone. (So, if there are flaws or anything amiss, hopefully they’ll be shouted about).

The developer Raven freely admits that as Talon relies on some third-party software, as mentioned, any vulnerabilities in those would also apply to the app itself (obviously).

I’ll leave the final words to the developer, as quoted from the TweakTown interview: “While it is possible for a supply chain attack to occur, where one of these [third-party] utilities gets compromised then Talon is inherently compromised as a result, they are very popular utilities with lots of eyes on their code, and with extremely talented and trusted maintainers."

“The rest of Talon is done through homemade scripts that we maintain. At the end of the day, the possibility of malware injection, a supply chain attack, or whatever else, is there for any software, no matter the size of the team or the popularity of a project. We will do our best to ensure that this day never comes, though, and if it does we will address it as fast as possible to ensure minimal impact.”

For those who aren’t convinced or would rather DIY the task of streamlining their operating system, make sure you check out TechRadar’s guide on how to find and remove bloatware from your Windows 11 PC.


Gemini just added one of ChatGPT's best features and I'm finally excited to use it

Fri, 02/14/2025 - 04:29
  • Google Gemini can now remember previous conversations
  • The memory functionality is rolling out now to Google One AI Premium subscribers
  • ChatGPT has had similar functionality for a year, and it's one of the best features of any AI chatbot

Google has just added an upgraded memory feature to Gemini that allows you to ask the AI chatbot questions based on past conversations.

The new "recall" feature is rolling out to all users who subscribe to Google One AI Premium, a paid monthly subscription that grants access to Gemini's best features. With recall, you'll be able to ask Gemini about previous conversations and pick up from where you left off, allowing the AI to feel more alive and aware of your history. Previously, Gemini had no recollection of previous chats, so you'd have to remind it of important details.

This huge upgrade to Google Gemini brings the AI chatbot up to speed with competitor ChatGPT, which has had a well-functioning memory feature for over a year now. The difference is that ChatGPT's offering is available for free, with no monthly subscription required. That said, Google could be testing the recall feature before rolling it out to free Gemini users, although currently we've had no indication of that happening.

This new update comes off the back of Gemini's November update, which added the ability for the chatbot to remember certain things about you based on your interests and personal preferences. With that feature, though, you had to go to Gemini's "Saved Info" tab and pre-fill information for the AI to reference in conversations.

When ChatGPT introduced memory last year it completely changed the way I interacted with AI, allowing me to speak naturally with the chatbot and spot nuances where it was able to reference the past in very useful ways. Until now, I've been put off using Gemini because of its lack of memory, but that's all changed. Gemini's recall feature is rolling out in English to Google One AI Premium users now (although I don't have access yet), and Google says the update will be available in other languages in the coming weeks.

The context we needed

Gemini's ability to remember previous conversations gives Google's AI chatbot a whole new level of usefulness. In the past, I've been frustrated by Gemini's lack of context to my prompts when I've asked similar questions in old chats. This was never an issue if you used one single chat with Gemini, but considering the range of models from Gemini 2.0 Flash to Gemini 2.0 Flash Thinking Experimental, I can quickly rack up multiple discussions at once.

Now, Gemini will be able to take information from all of my chats and have the personal context to reference them in any way I need. Things like "Remember that time I talked to you about train travel? What was the route you told me to take?" can now be used in Gemini, and that's a huge step in making AI more conversational and more accessible.

Talking of accessibility, I hope Google plans to roll out this memory feature to free users, as I truly believe a memory function is one of the most important features for any AI chatbot. Until then, I'll still recommend ChatGPT to my friends and family – after all, OpenAI's model has the memory of an elephant, and without paying for it.


You can now talk to Microsoft Copilot Voice in 40 more languages

Thu, 02/13/2025 - 22:30
  • Microsoft’s Copilot Voice has been upgraded with 40 new languages
  • The AI has also improved its real-time responses
  • Microsoft wants to encourage people to engage with Copilot in their everyday lives

Microsoft Copilot Voice has become a lot more cosmopolitan. The AI assistant has added support for 40 new languages and improved its real-time responses in a bid to make conversations feel more natural and comfortable for users.

Copilot Voice debuted in October, adding a vocal component to the AI, but with more power than the previous standard form of voice assistant. It can handle multi-turn conversations, recognize interruptions, and even adjust its tone based on emotional cues. It’s also free, which is a pretty big selling point in a world where AI subscriptions are becoming the norm. OpenAI has Advanced Voice Mode for ChatGPT, while Google’s Gemini Live offers its vocal interface.

The expanded language support is a big deal, especially for users outside of English-speaking markets. Whether you’re switching between languages or simply want an assistant who understands your native tongue better, this is a welcome change. This also points to Microsoft's strategy for making Copilot more of an international AI assistant through the Voice feature.

Speedy speech

You've got a lot *in* your hands, so let me help! Just get real-time updates with Copilot Voice pic.twitter.com/lF8B8UkQYJ – February 13, 2025

Another key improvement is in real-time information retrieval. Voice assistants have always had a slight lag when pulling information from the web, often leaving users waiting while the AI “thinks.” With this update, Copilot Voice is now much faster and more responsive when answering questions, making interactions feel smoother and more natural. No more awkward pauses while you wait for an answer to a simple question.

The update also highlights Microsoft's efforts to enhance Copilot's place as a digital assistant, not just a glorified search engine. Copilot Voice might succeed where Cortana failed as Microsoft's voice assistant. The gap between what people expect from an AI assistant and what they actually get is closing, and voice AI tools will likely be a major part of closing it.


If you've bought an internal Seagate hard drive, beware of the growing refurbished Chia scandal - here's what you need to know

Thu, 02/13/2025 - 16:24
  • Seagate denies involvement in fraudulent HDD resales.
  • Buyers can check Seagate HDD usage history using relevant tools.
  • Retailers are offering some sort of compensation.

Seagate hard drives that were previously used in Chinese Chia cryptocurrency mining farms have been resold as new by unsuspecting retailers.

An investigation by Heise indicates large quantities of high-mileage drives have surfaced in the market, particularly in Europe, Australia, Thailand, and Japan.

These drives, often datacenter-grade Seagate Exos models, have been found with thousands of operational hours despite being marketed as brand new.

Chia farms and the flood of second-hand drives

At the peak of the cryptocurrency boom, mining operations required vast storage capacity, leading to a surge in demand for high-end HDDs. However, as the profitability of Chia mining declined, many farms shut down and sold their hardware. These hard drives were then repackaged and reintroduced into the market, deceiving customers.

Concerned buyers can verify the true usage history of their Seagate HDDs using special diagnostic tools. While SMART parameters can be reset to hide prior use, the FARM (field-accessible reliability metrics) values provide a more accurate record.

Users can check these values by running the command smartctl -l farm /dev/sda in Smartmontools version 7.4 or higher, or by using Seagate’s own SeaTools software to inspect the drive’s operational history.
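
To illustrate what that check looks like in practice – a minimal sketch assuming a Linux system with smartmontools 7.4 or later and the drive at /dev/sda (adjust the device path for your machine; the exact field labels can also vary by drive firmware) – you can compare the power-on hours reported by the standard SMART attributes against the figure recorded in the FARM log:

    # Power-on hours from the standard SMART attributes (these counters can be reset)
    sudo smartctl -a /dev/sda | grep -i "power_on_hours"

    # Power-on hours from Seagate's FARM log (much harder to tamper with)
    sudo smartctl -l farm /dev/sda | grep -i "power on hours"

If the FARM log reports thousands of hours while the SMART attribute shows next to none, the drive has almost certainly seen heavy use and had its counters reset.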

Seagate has stated it only distributes genuine hard drives through official channels, and it suspects these used HDDs entered the secondary market before reaching consumers.

Nevertheless, it has also launched a full-scale investigation and has urged affected buyers to report any suspicious purchases to fraud@seagate.com.

Affected retailers are firefighting the issue, with Galaxus creating online help pages for affected customers, while Proshop is offering free returns and replacements. Alternate, a German retailer, denies prior knowledge of the issue but has encouraged customers to report used drives. Wortmann, on the other hand, insists on verifying HDDs before offering compensation.

Via TomsHardware


Meta purportedly trained its AI on more than 80TB of pirated content and then open-sourced Llama for the greater good

Thu, 02/13/2025 - 15:05
  • Zuckerberg reportedly pushed for AI implementation despite employee objections
  • Employees allegedly discussed ways to conceal how the company acquired its AI training data
  • Court filings suggest Meta tried, unsuccessfully, to mask its AI training activities

Meta is facing a class-action lawsuit alleging copyright infringement and unfair competition over the training of its AI model, Llama.

According to court documents released by vx-underground, Meta allegedly downloaded nearly 82TB of pirated books from shadow libraries such as Anna’s Archive, Z-Library, and LibGen to train its AI systems.

Internal discussions reveal that some employees raised ethical concerns as early as 2022, with one researcher explicitly stating, “I don’t think we should use pirated material” while another said, “Using pirated material should be beyond our ethical threshold.”

Meta made efforts to avoid detection

Despite these concerns, Meta appears to have not only ploughed on but also taken steps to avoid detection. In April 2023, an employee warned against using corporate IP addresses to access pirated content, while another said that “torrenting from a corporate laptop doesn’t feel right,” adding a laughing emoji.

There are also reports that Meta employees allegedly discussed ways to prevent Meta’s infrastructure from being directly linked to the downloads, raising questions about whether the company knowingly bypassed copyright laws.

In January 2023, Meta CEO Mark Zuckerberg reportedly attended a meeting where he pushed for AI implementation at the company despite internal objections.

Meta isn't alone in facing legal challenges over AI training. OpenAI has been sued multiple times for allegedly using copyrighted books without permission, including a case filed by The New York Times in December 2023.

Nvidia is also under legal scrutiny for training its NeMo model on nearly 200,000 books, while a former employee disclosed that the company scraped over 426,000 hours of video daily for AI development.

And in case you missed it, OpenAI recently claimed that DeepSeek unlawfully obtained data from its models, highlighting the ongoing ethical and legal dilemmas surrounding AI training practices.

Via Tom's Hardware


Google Maps is ramping up its Waze-like incident reports – and that could split opinion among users

Thu, 02/13/2025 - 12:30
  • Google Maps is testing the rollout of more incident reports
  • These are weather-related options such as ‘flooded road’ or ‘low visibility’
  • The growing library of incidents is a source of annoyance for some drivers

Google Maps is introducing new incident reporting options, fresh additions that pertain to weather-related conditions.

Android Police spotted these new kinds of report, and they include the likes of ‘flooded road’ for when there’s been a huge deluge of rain, or ‘low visibility’ for when it gets foggy. And indeed ‘unplowed road’ for when, well, you should probably turn around and find a plowed road that’s not wheel-deep in snow.

The site noticed these new options in Google Maps for Android Auto first off, and then in the iPhone app.

The not-so-great news for anyone keen to benefit from this wider variety of incident reports is that the new options have not yet made it to the Android version of Google Maps itself.

However, it surely won’t be long before the ability to report a flooded or snowed-up road arrives on Android.

Is an ever-growing library of incidents a good thing?

Google Maps on two iPhone 12 Pro devices sitting side by side.

(Image credit: Future)

This is a continued expansion of the reporting of incidents in Google Maps, on top of clearly-labeled Waze reports being piped through alongside native reports since last year. There’s already a wide range of incidents that can be flagged, such as road traffic accidents, stalled cars, lanes being closed, speed traps, and so on.

Sometimes, these kinds of alerts can be very useful, of course, and plenty of folks are grateful to have been warned of an incoming thorny issue on the road ahead.

However, not everyone is keen on being subject to more and more of these reports being highlighted in Google Maps – with complaints about them being too frequent only likely to multiply, as Google further expands the library of incidents that can be reported.

The problem is compounded by errant reports – incidents that aren’t there, or were resolved some time back – and there being no easy way to switch off said reports.

It looks like this is a road Google is insisting on driving down, though, despite the ‘stop’ signs being waved by some of the drivers who use its navigation app.


Microsoft makes another tweak to Windows 11’s taskbar – but it’s probably not the change you were hoping for

Thu, 02/13/2025 - 09:25
  • Microsoft has added a new icon to the Windows 11 taskbar
  • It allows you to use Windows Studio Effects in compatible apps
  • Windows Studio Effects is exclusive to Copilot+ PCs

If you’ve downloaded the latest Windows 11 update, which was released as part of the monthly ‘Patch Tuesday’ batch of fixes, you might have noticed a new icon in the taskbar and wondered what it is. Well, wonder no more: it’s a shortcut for the AI-powered Windows Studio Effects feature.

Windows Studio Effects is a suite of effects that use artificial intelligence to improve the quality of your video calls. It can blur the background, make it look like you’re looking directly at the camera (rather than looking at the screen), improve the lighting, and make sure you’re always in frame (as well as applying more creative filters).

You might not have used Windows Studio Effects before – they are a relatively new batch of effects introduced as part of Microsoft’s AI push, and this change appears to be an attempt to introduce them to a wider audience. The icon will appear when you use an app that makes use of Windows Studio Effects – which will include pretty much any tool that uses your device’s webcam.

Clicking the icon brings up the effects for you to easily turn on – and if you hover over the icon, it will tell you which app is using the webcam. This is a handy privacy feature, as it means apps shouldn’t access your webcam without you knowing.

However, there are plenty of Windows 11 users (including myself) who are waiting for Microsoft to make changes to the taskbar that bring back some of the functionality that previous versions of Windows had – especially the ability to drag files onto app icons in the taskbar to open them in the app. Microsoft instead adding icons for features a lot of people don't use is disappointing, to say the least.

Easier access, but is it enough?

As I mentioned, Windows Studio Effects was introduced as part of Microsoft’s campaign to get more people to use AI features – something the company has invested heavily in. It was advertised as one of the big selling points of Copilot+ PCs – a new breed of Windows 11 devices that meet certain hardware requirements (16GB of RAM and a CPU with an NPU) to run AI tasks locally on the device, rather than via the internet.

Because of this, Windows Studio Effects is exclusive to Copilot+ PCs, so if you don’t see the new icon, then it’s likely due to your PC not meeting the requirements.

Therein lies part of the problem for Microsoft if it wants more people to use Windows Studio Effects. Making the feature more easily accessible by putting an icon in the taskbar is a good first step, but limiting the feature to certain PCs is going to reduce its reach.

Of course, what Microsoft would like in that case is for people who are desperate to use Windows Studio Effects to go out and buy a new Copilot+ laptop. But that’s the other problem – is this a feature that will get people excited about AI? And excited enough to buy a new laptop?

I just don’t think so. Some features, such as blurring the background and auto-focusing the camera, can be done by other apps without the need for an NPU (neural processing unit), while other features, such as the creative effects, are fun but hardly essential. If you use your device for making video calls as part of your work, you’re unlikely to want to enable them. Worse, the eye contact feature ends up being a bit creepy, with unnatural-looking eye contact causing an uncanny valley effect.

So far, the Copilot+ PCs we’ve tested have been some of the best laptops you can buy thanks to their performance and battery life, but the AI features are the least impressive bits about them – which is a problem as Microsoft envisions these as key selling points – especially as other key Copilot+ PC features, such as the controversial Recall feature, either don’t work that well, or have yet to be released.

So, no matter how easy Microsoft makes it to launch these new AI features, people are going to continue to ignore them until the company gives us a good reason to – and so far, it’s been failing to do just that.


Free Gemini Live update brings better conversation skills and understanding of accents

Thu, 02/13/2025 - 06:50
  • Gemini Live is now more conversational and dynamic
  • It is better at translating languages and recognizing accents
  • Screen sharing and video streaming abilities coming soon

If you’re a Gemini user then you will have got an email from Google today explaining that the company is rolling out an upgrade to Gemini Live to “make your conversations even more dynamic and engaging”.

The new upgrade to Gemini Live (the conversational part of Gemini that you can access on your phone) means that conversations have been improved by an as-yet-unnamed new AI model. Google stated that “With our latest model, Live can better understand multiple languages, dialects or accents in a single Live chat and help with your translation needs.”

As well as the February improvements to Gemini Live, Google also shared its plans for Gemini Live updates in the future. “In the coming months, we'll also bring screen sharing and live video streaming capabilities to Live.”

These updates hint at a multimodal future for Gemini Live on all devices, where it will be aware of what's shown on your screen so you can ask questions about it. Currently that’s something it can’t do unless you own a Pixel 9 phone, which has the ability to "Talk live about this". While you can upload a photo to standard Gemini and ask the chatbot questions about it, or ask it to extract text from the photo, you can’t yet do this in Live mode on other devices.

Privacy update

Along with this new ability, Google also issued a privacy update, stating that “As part of providing this improved experience, your audio, video and screenshares are stored in your Gemini Apps activity (if it's on). Your data in Gemini Apps activity is deleted per your auto-delete period in that setting, and you can manage and delete your Gemini Apps activity at any time.”

To access your Gemini Apps activity, on a mobile device, click on your profile picture in the Gemini app, then on ‘Gemini Apps Activity’. In a web browser, go to gemini.google.com and click on the menu icon, then Activity.

Gemini Live

(Image credit: Future, Lance Ulanoff)

What I found

A conversation being more dynamic is pretty subjective, so I tried a conversation with the new update today and while it went smoothly it was hard to pinpoint what the differences were, if any, from my previous interactions with Gemini Live. Sure, Gemini sounded perky and eager to please, but it has always sounded like that.

The next thing I wanted to try was the translation abilities. I tried to get Gemini Live to translate words from Spanish to English, but more often than not it kept telling me that the word I was saying was the name of a town in California or Michigan, rather than translating it into English! However, that may have more to do with my Spanish pronunciation than Gemini’s ability to translate from Spanish to English. To be fair, I did manage to get it to understand some of my Spanish words and translate them eventually.

So, I’d say it was hard to pinpoint exactly what had changed in Live. However, when I asked Gemini Live when it was last updated, it said February 2025, so I’m assuming it has been updated with the new abilities. Let me know in the comments if you’ve noticed that your Gemini chats feel more alive compared to before.

Gemini Live is currently free to all Android users, but also available in the Gemini app to iPhone users who are subscribed to Gemini Advanced.


OpenAI is finally going to make ChatGPT a lot less confusing – and hints at a GPT-5 release window

Thu, 02/13/2025 - 04:51
  • ChatGPT will one day unify all of OpenAI's LLMs
  • GPT-5 will be free for all users with higher intelligence for paying subscribers
  • As part of the unification process, OpenAI's o3 will no longer be available as a standalone model when GPT-5 launches

OpenAI CEO Sam Altman just revealed some major info on the future of ChatGPT that will see all GPT models, including the upcoming GPT-5, unified so you no longer need to pick a model for each occasion.

At the time of writing, you need to choose between different OpenAI models every time you use ChatGPT, whether that's GPT-4o for everyday tasks or a more focused reasoning model like o3-mini for problem-solving. But that could all be about to change, according to Altman, who promises a simplified ChatGPT that "just works".

On X, Altman said, "We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence."

This would essentially mean that you'd no longer need to choose between AI models and ChatGPT would be able to determine which model is best depending on your prompt. Altman said, "a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks."

This would be a pretty incredible feat and a huge improvement over the current system, which requires you to constantly switch between models depending on what's best suited to your needs. What's more, the average consumer often doesn't know which model is best for which scenario, so having ChatGPT do that work for you removes a point of friction from the process.

He goes on to say that GPT-4.5, the successor to 4o, will be the final standalone ChatGPT model to launch before the company combines all of its models into a system that uses all of OpenAI's tools simultaneously.

This means that when GPT-5 does inevitably launch it will "integrate a lot of our (OpenAI's) technology, including o3." Altman even confirmed that o3 will no longer be shipped as a standalone model when this change occurs.

OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5: We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings. We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten. We hate… – February 12, 2025

Unlimited access to GPT-5

As if the news of OpenAI planning to unify all of its AI models wasn't big enough, Altman's roadmap also revealed that the free tier of ChatGPT will get unlimited access to GPT-5 at its "standard intelligence setting." ChatGPT Plus subscribers will get an even "higher level of intelligence", while Pro subscribers, who pay the hefty $200/month fee, will get the highest level of GPT-5 intelligence.

weeks / months (February 12, 2025)

One X user took the opportunity of Altman sharing future ChatGPT info to ask for an ETA for GPT-4.5 and GPT-5. Altman responded "weeks / months". Whether that means 4.5 is weeks away and 5 is months away, or it was just a cheeky echo of the user's "Weeks? Months?" question, your guess is as good as mine.

While Altman didn't confirm a release date for GPT-4.5 or GPT-5 he did confirm that GPT-5 will incorporate voice, canvas, search, deep research, and more. So while we don't know how long we're going to have to wait, the wait should be worth it.


Windows 11 is set to offer the option nobody was crying out for – having Copilot automatically load in the background when the PC boots

Thu, 02/13/2025 - 04:24
  • Windows 11’s Copilot app has a new feature in testing
  • It offers the ability to ‘auto start on login’ for the app
  • This could be a handy timesaver for those who use Copilot regularly

Windows 11 has an incoming change for the Copilot app whereby it can be set to automatically load in the background when you start your PC.

PhantomOfEarth, who regularly posts bits and pieces of Windows-related observations and rumors on X, noticed the development.

New Copilot app update for Insiders: 1.25014.121.0, with a new auto start on login (runs in the background) feature. pic.twitter.com/0urRNzmQrW (February 10, 2025)

As shown in the above post, there’s a new ‘auto start on login’ choice in the Settings for the Copilot app, which when enabled does just that – it automatically starts Copilot (in the background) when your system is fired up.

Right now, the option is still in testing (in version 1.25014.121.0 of the app), but provided there's no pushback or problems, it should go live for all Windows 11 users before too long.
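For the curious, a per-user 'auto start on login' toggle on Windows is commonly wired up through an entry under the HKCU Run registry key. Whether the Copilot app uses that exact mechanism is an assumption on my part (packaged apps often register a startup task instead), and the app name and path below are purely illustrative.

```python
# Sketch of the common per-user "run at login" mechanism on Windows: a value
# under the HKCU Run key. Assumption: Copilot itself may not do it this way
# (packaged apps often use a startup task instead). Names here are made up.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def set_auto_start(app_name: str, command: str, enabled: bool) -> None:
    """Add or remove a per-user auto-start entry."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        if enabled:
            winreg.SetValueEx(key, app_name, 0, winreg.REG_SZ, command)
        else:
            try:
                winreg.DeleteValue(key, app_name)
            except FileNotFoundError:
                pass  # entry wasn't there; nothing to disable

# Illustrative only -- not the real Copilot executable or flags.
set_auto_start("MyAssistant", r'"C:\Apps\assistant.exe" --background', True)
```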

Windows 11 Native Copilot App

(Image credit: Microsoft) Analysis: The stumbling journey of the Copilot assistant

You might be thinking ‘who cares’ when it comes to this additional feature for Copilot, and that’s a fair enough point. I don’t imagine usage of the Copilot app is all that widespread, and indeed, I’d be surprised if it wasn’t a niche feature in Windows 11 – but for those people who do make use of the AI, this is still a handy little extra touch.

What it means is that they can invoke the Copilot app with the Alt+Space keyboard shortcut (assuming that's also enabled) without having to wait for it to load the first time this action is taken in a new computing session, because it will already have loaded in the background.

The good news is that the option isn’t on by default, so Copilot isn’t being forcefully pushed into the background of everybody’s Windows 11 installation. You can either use this option, or just feel free to ignore it.

All in all, it’s a relatively minor change, and as with anything to do with Copilot, I’m waiting for Microsoft to justify its existence in a more convincing manner. There were some big promises of an AI that could make sweeping system-wide changes based on simple requests back at the launch of the Copilot assistant on the desktop. However, all that appears to have been, well, swept under the carpet as time passed by, and Copilot was decoupled from the internals of Windows and made a standalone app.

Maybe Copilot will be realized in this form eventually, but I can’t help but think that this destination feels a long, long way off, given how things have progressed – or rather haven’t – with the desktop assistant thus far.

Via Windows Latest


I pitted Gemini 2.0 Flash against DeepSeek R1, and you might be surprised by the winner

Wed, 02/12/2025 - 22:00

I've enjoyed pitting various AI chatbots against each other. After comparing DeepSeek to ChatGPT, ChatGPT to Mistral's Le Chat, ChatGPT to Gemini 2.0 Flash, and Gemini 2.0 Flash to its own earlier iteration, I've come back around to match DeepSeek R1 to Gemini 2.0 Flash.

DeepSeek R1 sparked a furor of interest and suspicion when it debuted in the U.S. earlier this year. Meanwhile, Gemini 2.0 Flash is a solid new layer of ability atop the widely deployed Google ecosystem. It is built for speed and efficiency and promises quick, practical answers without sacrificing accuracy.

Both claim to be cutting-edge AI assistants, so I decided to test them from the perspective of someone with a casual interest in using AI chatbots in their everyday lives. Both have shown themselves to be effective at a basic level, but I wanted to see which one felt more practical, insightful, and actually helpful in everyday use. Each test has a screenshot with DeepSeek on the left and Gemini 2.0 Flash on the right. Here’s how they did.

Local Guide

Google Gemini vs. DeepSeek

(Image credit: Screenshots of Google Gemini/DeepSeek)

I was keen to test the search abilities of the two AI models combined with insight into what is worthwhile as an activity. I asked both AI apps to "Find some fun events for me to attend in the Hudson Valley this month."

I live in the Hudson Valley and was aware of some things on the calendar, so it would be a good measure of accuracy and usefulness. Amazingly, both did quite well, coming up with a long list of ideas and organizing them thematically for the month. Many of the events were the same on both lists.

DeepSeek included links throughout its list, which I found helpful, but the descriptions were just quotes from those sources. Gemini 2.0 Flash's descriptions were almost all unique and frankly more vivid and interesting, which I preferred. While Gemini didn't have the sources immediately available, I could get them by asking Gemini to double-check its answers.

Reading tutor

Google Gemini vs. DeepSeek

(Image credit: Screenshots of Google Gemini/DeepSeek)

I decided to expand my usual test of an AI's ability to offer life-improvement advice with something more complex and reliant on actual research. I asked Gemini and DeepSeek to "Help me devise a plan for teaching my child how to read."

My child isn't even a year old yet, so I know I have time before he's paging through Chaucer, but it's an aspect of parenthood I think about a lot. Based on their responses, the two AI models might as well have been identical advice columns. Both came up with detailed guides for different stages of teaching a child to read, including specific ideas for games, apps, and books to use.

While not identical, the answers were so close that I would have had trouble telling them apart were it not for formatting differences, like DeepSeek's recommended ages for each phase. Asked which AI to pick based purely on this test, I'd call it a draw.

Vaccine superteam

Google Gemini vs. DeepSeek

(Image credit: Screenshots of Google Gemini/DeepSeek)

Something similar happened with a question on simplifying a complex subject. With kids on my mind, I explicitly went for a child-friendly form of answer by asking Gemini and DeepSeek to "Explain how vaccines train the immune system to fight diseases in a way a six-year-old could understand."

Gemini started with an analogy about a castle and guards that made a lot of sense, though it oddly threw in a superhero training analogy in a line at the end. Similarities in the two models' training might explain that, because DeepSeek went all in on the superhero analogy. Either way, the explanation fits the metaphor, which is what matters.

Notably, DeepSeek's answer included emojis, which, while appropriate for where they were inserted, implied the AI expected the answer to be read from the screen by an actual six-year-old. I sincerely hope that young kids aren't getting unrestricted access to AI chatbots, no matter how precocious and responsible their questions about medical care might be.

Riddle key

Google Gemini vs. DeepSeek

(Image credit: Screenshots of Google Gemini/DeepSeek)

Asking AI chatbots to solve classic riddles is always an interesting experience since their reasoning can be off the wall even when their answer is correct. I ran an old standard by Gemini and DeepSeek, "I have keys, but open no locks. I have space but no room. You can enter, but you can’t go outside. What am I?"

As expected, both had no trouble answering the question. Gemini simply stated the answer, while DeepSeek broke down the riddle and the reasoning for the answer, along with more emojis. It even threw in an odd "bonus" about keyboards unlocking ideas, which falls flat as both a joke and as insight into keyboards' value. That DeepSeek even tries to be cute is impressive, but the actual attempt felt a little alienating.

DeepSeek outshines Gemini

Gemini 2.0 Flash is an impressive and useful AI model. I started this fully expecting it to outperform DeepSeek in every way. But, while Gemini did great in an absolute sense, DeepSeek either matched or beat it in most ways. Gemini seemed to veer between human-like language and more robotic syntax, while DeepSeek either had a warmer vibe or just quoted other sources.

This informal quiz is hardly a definitive study, and there is a lot to make me wary of DeepSeek. That includes, but is not limited to, DeepSeek's policy of collecting basically everything it can about you and storing it in China for unknown uses. Still, I can't deny that it apparently goes toe-to-toe with Gemini without any problems. And while, as the name implies, Gemini 2.0 Flash was usually faster, DeepSeek didn't take so much longer that I lost patience. That would change if I were in a hurry; I'd pick Gemini if I only had a few seconds to produce an answer. Otherwise, in spite of my skepticism, DeepSeek R1 is as good or better than Google Gemini 2.0 Flash.


Google Gemini adds its personal AI researcher to your iPhone – if you have the right subscription

Wed, 02/12/2025 - 17:30
  • Google Gemini has added its Deep Research AI model to iPhones
  • Deep Research searches the web, compiles, and reports back to Gemini Advanced subscribers
  • The information is refined and organized into results viewable on Google Docs

iPhone users who love Google’s Gemini AI assistant have a new tool to help them condense information from the internet. The tech giant has added its Deep Research feature to iOS devices, at least if you’re a Gemini Advanced subscriber. Deep Research debuted on Gemini's web portal in December and rolled out to Android users earlier this month.

Deep Research is an AI-powered tool for compiling information. Essentially, it takes the classic Google-search collection of links and extends it several steps further, reading what's at those links and organizing what it finds into something useful. It's the first "agentic" feature in Gemini, meaning the AI is more proactive; it doesn't just answer questions, it carries out an entire research project.

If you do subscribe to Gemini Advanced and have the app on your iPhone, you can switch to Deep Research by picking it from the model list. Select “1.5 Pro with Deep Research” near the newer experimental 2.0 Flash model. Then, just ask Gemini a research question, something big and messy and complicated if you want to test its limits. Gemini will respond with a step-by-step plan it will undertake on your behalf. You can tweak the approach if you don’t like it, deleting parts or adjusting the focus, and then tap the “start research” button. Gemini then heads off on a digital scavenger hunt, digging through sources, running multiple searches, and refining its findings in real-time.

Don't expect instant answers. It can take between five and ten minutes to complete, longer if it's an especially difficult topic. You don’t have to babysit it, though. Gemini will send you a notification when the work is done, and you can check your chat history later to review the results. Once it’s ready, you’ll get a structured report, complete with sections, tables if necessary, and a full list of sources. And if you want to make it look even more official, you can export it directly to Google Docs.
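The flow the feature follows (plan, optional user tweaks, repeated searching and reading, then a structured report) is a classic agentic loop. Here's a compact Python sketch of that pattern. To be clear, this is my own illustration, not Google's code, and the stub functions merely stand in for real model and web-search calls.

```python
# Conceptual sketch of an agentic research loop like the one Deep Research
# runs: plan -> search and read sources -> compile a structured report.
# Every function here is a hypothetical stand-in, not Google's API.

def propose_plan(question: str) -> list[str]:
    # A real system asks the model to decompose the question; the user can
    # then delete steps or adjust the focus before research starts.
    return [f"background on: {question}", f"recent developments: {question}"]

def web_search(step: str) -> list[str]:
    return [f"a source discussing {step}"]   # stand-in for real search hits

def read_and_summarize(source: str) -> str:
    return f"summary of {source}"            # stand-in for reading a page

def deep_research(question: str, rounds_per_step: int = 2) -> str:
    plan = propose_plan(question)
    findings: list[str] = []
    for step in plan:
        for _ in range(rounds_per_step):     # run and refine multiple searches
            for hit in web_search(step):
                findings.append(read_and_summarize(hit))
    # Organize everything into a report with sections and sources; the real
    # feature notifies you when done and can export the result to Google Docs.
    return "\n".join([f"# Report: {question}"] + findings)

print(deep_research("impact of AI assistants on web search"))
```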

Of course, Google isn’t letting you run wild with infinite research requests. There are daily limits, and the app will politely remind you how many you have left, just in case you were planning to outsource your entire workload to AI. Right now, Deep Research runs on Gemini 1.5 Pro, but Google has hinted that it will eventually move to the more powerful 2.0 Pro once that model exits its experimental phase.

Ads AI

The launch of Deep Research on iPhones matters for more than just access reasons. The pitch to iOS users signals how aggressively Google is leaning into Gemini for all kinds of productivity demands. Unlike a standard chatbot response, which can sometimes feel like a slightly smarter autocomplete, Deep Research attempts to simulate how a human researcher breaks down a topic, refines their findings, and presents something meaningful.

It’s also a direct response to how rival AI companies are pursuing similar goals; OpenAI's version of this kind of feature even shares the Deep Research name. But with OpenAI, Meta, and Apple working on more advanced AI-driven assistants, Google is clearly betting that features like Deep Research will make Gemini a must-have.

Deep Research is, in some ways, just another tool in Google’s AI arsenal, but it's a powerful one. It's easy to see the appeal for people who feel like they're drowning in information online. It may not remake the experience of looking things up online, but it could set Gemini up as the crown jewel of future digital research projects.

