Techradar


Your new favorite teacher might be this AI educator that never loses their patience

Thu, 02/27/2025 - 18:30
  • StudyFetch's new Tutor Me is an interactive AI that can converse with students as it teaches
  • Tutor Me builds its lessons from textbooks, notes, and assignments uploaded by the student
  • The AI personalizes lectures and quizzes to the students and can track their progress

AI can impart a lot of knowledge but isn't usually a very good teacher. Sometimes, it's more like a glorified search engine than a study partner. Educational tech developer StudyFetch has a new tool that might change that opinion. Tutor Me is an AI platform focused specifically on teaching students. The idea is something like a ChatGPT specially trained to perform as a teacher on specific subjects, without you needing to constantly tailor your prompts for that purpose.

The biggest difference from just asking ChatGPT to teach you something is that Tutor Me is built to work with a student’s actual course materials, so its explanations, quizzes, and lesson plans are always based on what they’re studying. You can upload lesson texts, assigned readings, notes from online lessons, or even photos of whiteboards, which Tutor Me will analyze to develop a unique study guide and curriculum.

Tutor Me works like a live video session with a teacher. The AI responds in real-time, just like a human tutor would. But unlike a human tutor, it never runs out of patience or time. You can ask it to test your knowledge with a quiz, ask it to speed up or slow down its explanations and speaking speed, and even bring up a topic by referencing a textbook page number. If you find flashcards dry and detached, StudyFetch's AI might be ideal for helping you stay interested in a subject. Plus, it can track your progress and help you keep up with your lessons and assignments.

Learn AI

Considering more than a quarter of teens already use ChatGPT to help with homework, something like Tutor Me probably has a lot of potential interest. OpenAI isn't the only option for educational AI, though.

Google Gemini has its own Learn About feature, and Khan Academy’s Khanmigo has an AI-driven tutor for students looking to supplement class time. Educational institutes are taking notes as well. Arizona State University (ASU) is working with OpenAI to incorporate ChatGPT, and London’s David Game College is running an AI-taught class as part of its new Sabrewing program.

Still, the direct integration with course materials will likely help Tutor Me stand out. It addresses the complaint that AI is too general, offering answers that don’t quite match what you want to learn about. Pulling from actual lesson plans and assignments greatly reduces Tutor Me's chances of going off-topic.


Perplexity's voice mode gets a futuristic makeover on your iPhone

Thu, 02/27/2025 - 15:00
  • Perplexity’s iOS app has updated with a revamped voice mode
  • The upgrade adds six new voices and real-time search integration
  • Perplexity also included new personalization features and a fresh design to the iOS app

AI conversational search engine Perplexity is speaking up in its latest iOS update. The AI chatbot's voice mode brings a new look and more natural-sounding voices to the app, along with some new interactive features. The upgrade sets up Perplexity's app to better challenge rivals with voice options of their own, like ChatGPT or Google Gemini.

Before this update, Perplexity’s voice feature was somewhat limited. It could read answers aloud but without much emotion and with a walkie-talkie sort of interface that slowed things down. Perplexity has now added six different voices. While it’s still a text-to-speech system, meaning it won't have the emotional nuance of ChatGPT's advanced voice mode, the improvement is noticeable. You can finally pick a voice that doesn’t sound like an audiobook narrator from 2005.

So, how does Perplexity’s voice mode stack up against the competition? From an unscientific comparison, I'd say ChatGPT's advanced voice mode wins in sheer realism, with an expressive sound, conversational tone, and surprisingly natural-feeling laughter and outright interruptions. Google Gemini is a little less fluid than ChatGPT, but still very natural overall. Perplexity's offering is very clear and easy to understand, but its voices stick to a more neutral tone that sounds a little more artificial. That's not a negative, though, just a different approach. Instead of focusing on making AI sound human, it’s doubling down on utility and making sure that when you ask a question, you not only get an answer but also the sources to back it up.

Perplexity's voice mode is embedded in the AI's other features too. That means the real-time search tool links to the voice mode. When you ask a question, you don’t just get a spoken response, you also see live search results, with links to the sources. It's an ability that's crucial since so much of Perplexity's appeal is in how it melds AI with search capabilities.

A post shared by Perplexity AI (@perplexity.ai)


Perplexity resolved

The way you start voice mode, and the look of the app while using it, have also changed. The microphone icon you tapped to start talking has been replaced by a sphere of shifting dots that scatter and reform in response to your voice and touch. It's an unnecessary but fun addition to the app. You can also now personalize the app with widgets like stock tickers or sports score updates. It adds another layer of customization that makes Perplexity feel a little more like your assistant rather than just a generic chatbot. Those kinds of options will likely be necessary for Perplexity to keep up with, and perhaps beat, other AI chatbots.

That ambition is also evident in the other major upgrade to the app. Perplexity has also added the new Claude 3.7 Sonnet model to its lineup. Anthropic's new model is aimed at enhancing Perplexity's ability to respond capably to complex or multi-step questions. Claude 3.7 is still very new, and reviews haven't been unanimous, but it could exceed or at least match the models employed by ChatGPT and Google Gemini for reasoning and conversational engagement.

Perplexity's voice mode revamp suggests the company isn’t looking to beat ChatGPT and Gemini where they're strongest, but to augment its own strengths with features that make the whole interaction feel (and sound) smoother, more immersive, and more natural.


Valve's upcoming Deckard VR headset rumored for release in 2025 - but the price will no doubt upset some gamers

Thu, 02/27/2025 - 11:38
  • Valve's new headset is reported to cost $1,200 in a bundle
  • The company is allegedly selling the headsets at a loss
  • Sources claim you'll be able to play Steam Deck games on it

The upcoming Valve VR headset, codenamed 'Deckard', is rumored to launch towards the end of 2025 and is alleged to cost $1,200, according to fresh claims from a well-known content creator.

According to Gabe Follower, a content creator with over 200,000 followers on X, the new Valve Deckard VR headset will be available as a full bundle for $1,200, which allegedly includes some games, as well as the two "Roy" controllers. Despite the high price tag, it is claimed that it will be "sold at a loss" by Valve.

As well as playing virtual reality games, the headset is believed to be able to play "flat games" akin to the Steam Deck, without requiring an external monitor or TV. Gabe Follower claims that "Valve want to give the user the best possible experience without cutting any costs."

Additionally, it's been said that the models for the "Roy" controllers were visible in a SteamVR update. Allegedly, the new VR headset will support a modified version of SteamOS as seen in the Steam Deck, but tailored for a virtual reality experience. The controllers appear to do away with the traditional ring design of the company's previous model, the Valve Index, in favor of something more akin to the Meta Quest 3S.

As a standalone device that can also be plugged into a PC, it's currently unknown what kind of hardware Valve's new VR headset will feature, with some concerns that it could struggle to achieve its rumored 120Hz refresh rate with a claimed resolution of 1440p across two screens. Similarly, there has been no mention of the expected battery life at this time.

Previously, in September 2023, Valve certified an unannounced hardware device in South Korea, and the company hinted towards the next generation of its VR headsets. Product Designer Greg Coomer said, "I can definitely say that we are continuing to develop VR headsets recently. Valve has a lot of expertise in VR devices and has faith in the medium and VR games."

The next generation is going to be expensive

While the PC-tethered Valve Index launched at $999 back in June 2019, the upcoming Deckard standalone VR headset looks to be pricier at $1,200 for the "full bundle". As an all-in-one device, its main competition will not be from high-end PC VR options such as the HTC Vive Pro 2 and the Pimax Crystal but the likes of the Pico 4 and (most crucially) the Meta Quest 3, as well as the cheaper Meta Quest 3S.

Most standalone headsets come in significantly cheaper than the alleged price of the Valve Deckard. For instance, the Meta Quest 3 retails for $499.99 for the 512GB model, with the Meta Quest 3S costing even less, starting from $299.99 for the 128GB version. Additionally, the Pico 4 Ultra, a mid-range all-in-one headset, retails for the equivalent of $670, although it's currently not yet available in the US.

That puts Valve's upcoming standalone VR headset into a tough market: it's a high-end headset aimed primarily at PC gamers that costs double (or even triple) what the bulk of its competition charges. While its functionality (essentially doubling as a wearable Steam Deck) does sound intriguing, that's an incredibly high asking price given the current state of the market, eclipsing all of the mainstream options available right now.

We won't know if the Valve Deckard is worthwhile until we see it in action or test it ourselves, so it's too early to judge its qualities based on just the rumored pricing. However, factoring in that it costs $200 more than the Index (and far more now that the Index is often on sale), and more than many of its competitors, it seems like a niche product for a smaller subset of PC gamers willing to pay a premium for wearable Steam Deck use, when they likely already have the handheld at home to begin with.

With that said, it could be a smash-hit success as an all-encompassing solution for replacing monitors and TVs if you live in a cramped space. As we saw with the Steam Deck's meteoric rise in popularity over the last three years, even amid incredibly fierce competition, the Deckard could do something that catches on and ends up being imitated and innovated upon by others. If that's to happen, though, it'll need a far lower MSRP than what's alleged here.


I think Microsoft is smart to follow OpenAI in making these premium features free

Thu, 02/27/2025 - 08:45
  • Microsoft Copilot Voice mode is now free
  • The Think Deeper feature is also now free to use
  • Voice and Think Deeper are powered by OpenAI models

Microsoft Copilot is taking a page from OpenAI's strategy for ChatGPT and making its Voice and Think Deeper features available to all users. This is not surprising since OpenAI's models power the Copilot features. However, making them accessible to Copilot users who aren't paying for a subscription to the premium service could make them much more widely used.

Voice mode is exactly what it sounds like: instead of typing your queries, you can now have an actual conversation with Copilot. The AI can help you practice phrases in French, help you cook something complicated without smudging your phone screen with olive oil, or respond sympathetically to a rant about traffic.

Think Deeper is built to handle more complex questions than just the weather or trivia. Suppose you’re debating whether to spend a recent windfall on a bathroom remodel or a generator to help with the next windstorm. Ask Copilot to use Think Deeper and it can break down the costs, long-term value, and trade-offs. The AI could also create a scoring system to help you decide what kind of car to get based on your preferences in design, comfort, future-proofing, and other factors.

This update is Microsoft’s way of making AI more accessible and, frankly, more helpful. Before now, many users might have been frustrated by the limits on these features, but were still reluctant to pay for Copilot Pro. The end result might just be them switching to another AI chatbot. Microsoft does warn, however, that during high-demand periods, things might slow down a bit.

Copilot Pro

For those who are already paying for Copilot Pro, nothing is being taken away either. Pro users still get first dibs on new AI features, plus priority access during peak hours, which is useful if you need Copilot’s brainpower in the middle of a busy day. They also still have exclusive extra AI integrations within Microsoft 365 apps. So, if your idea of excitement is having Copilot help you build the most efficient Excel spreadsheet of all time, Pro is still the way to go.

Ultimately, Microsoft wants Copilot to be something you and everyone you know would actually want to interact with. The more natural the conversation, the more useful AI becomes. By bringing these features to everyone for free, Microsoft, as well as its partner OpenAI, is also putting pressure on its competitors. Many AI tools have been locked behind paywalls, with companies reserving the best features for those willing to subscribe. But Microsoft flipping the switch on unlimited access means other AI providers might have to follow suit. The race is no longer just about who has the smartest AI, but who is making it the most available and practical.


Windows 11 24H2 bug is confusing people by displaying half the interface in one language, and the remainder in another

Thu, 02/27/2025 - 08:27
  • Windows 11 24H2 reportedly has a bug where it mixes up two languages
  • After a change from one language to another, menus end up being displayed in both of these languages
  • This appears to be a problem specific to 24H2, though Microsoft’s latest (optional) patch may have fixed things

Windows 11 24H2 reportedly has another odd bug and this time Microsoft’s operating system is making a mess of languages, although seemingly a fix for the problem is available (I’ll come back to that later).

XDA Developers highlighted this glitch, noticing posts on Reddit and Microsoft’s Answers support forum which explain the problem, whereby a Windows 11 installation mixes up two different languages.

What’s happening according to the various reports is that if a PC is running Windows 11 24H2 and is configured with, say, Spanish as the language, and then the OS is switched to English, the interface ends up as a mix of the two languages.

Screenshot evidence (see below) is provided for a Windows 11 24H2 system which was changed from Japanese to English, and the menus are a messy mix of both (actually, more Japanese, the original language, than English).

Windows 11 24H2 language bug showing the interface in a mix of Japanese and English

(Image credit: Microsoft / Gonzague-GB @ Answers.com)

Sadly, even removing the original language from the Windows 11 installation in question does not cure the issue, with the interface remaining stubbornly the same, a confusing combo of the two languages.

Apparently this has been a problem since October 2024, when the 24H2 update was first released – so in other words, this has been a bug in the works from the start.

A few system admins claim they are seeing the issue across their fleets of 24H2 PCs: some devices are affected while others aren't, with no apparent rhyme or reason as to which systems are hit (even computers with identical hardware can be affected in one case and not in another).

What does seem to be clear is that the issue doesn’t pertain to Windows 11 23H2 machines, but is only a problem for those upgrading to (or clean installing) the 24H2 version of the OS.

Has this bug literally just been fixed?

There’s some better news amongst the confused chatter and head-scratching about this issue, and that’s the revelation in the above Reddit thread – by the original poster – that installing the latest update for Windows 11 seemingly cured the problem.

This is the optional patch for Windows 11 24H2 released earlier this week, which comes with a whole bunch of remedial work for various glitches in the OS, and apparently the fix for this language bug, too. Obviously take that with some seasoning, but if you have experienced this weird mixing up of two languages in the Windows 11 interface, it’s likely worth a shot at installing the preview update to cure those blues.

Or, if you prefer, you can simply wait until next month, when this preview will become the full March cumulative update for 24H2 users, after it’s been put through its final testing (which is what the optional update is for – and the reason why it’s optional, as it may still have wrinkles to iron out, so consider yourself warned).

Windows 11 24H2 having another bug won’t come as much of a surprise to those who’ve been following the latest version, which has been struck by a whole host of glitches since it first emerged late last year.


The official ChatGPT Android app may have just leaked the GPT-4.5 launch early

Thu, 02/27/2025 - 07:06
  • A mention of GPT-4.5 has just appeared in Android
  • It suggests a full launch could be imminent
  • Right now the model can't be accessed

While OpenAI hasn't been slacking off in terms of pushing out new features for ChatGPT users, we're still patiently waiting for the next upgrade to the GPT-4o model that was pushed out last year – and the wait may soon be over.

Users of the ChatGPT Android app have spotted (via Android Police) a mention of a "GPT-4.5 research preview" in a pop-up alert inside the app, which sounds to us like the next version of ChatGPT's underlying model.

Tapping on the alert doesn't actually do anything at the moment though, and it's not showing up for all Android users either. That suggests this is something that has sneaked out earlier than it should have.

While we don't get an official debut date for GPT-4.5 here – and OpenAI has yet to say much publicly about its launch – the slip-up on the Android app points to the model arriving sooner rather than later, at least in preview form.

What could GPT-4.5 bring?

I just got this on the android app! I think it's a big but 4.5 seems imminent pic.twitter.com/pdzCAk5IKT (February 26, 2025)

OpenAI CEO Sam Altman recently went on the record to say that GPT-4.5 would be the final standalone model released by the company, with future releases combining all the available models (including the o-series reasoning models) into a single package.

As for what it'll bring with it, we can expect the usual advances in accuracy, coding, math, and summarizing. Larger context windows – prompt lengths, basically – may well be supported, and broader contextual awareness is a strong possibility too.

However, this is only going to be a 0.5 advance: GPT-5 should be a bigger upgrade, though it's unlikely to be launching this year. That should see all of ChatGPT's tools packaged together, including features such as the recently unveiled Deep Research mode.

We can expect GPT-4.5 to be exclusive to paying ChatGPT users first of all, before it makes its way down to free users. Meanwhile, OpenAI's many rivals in the field of AI aren't showing any signs of slowing down with their own releases.


Grok 3’s voice mode is unhinged, and that’s the point

Wed, 02/26/2025 - 21:00
  • xAI’s Grok 3 chatbot has added a voice mode with multiple personalities
  • One personality is called “unhinged” and will scream and insult you
  • Grok also has personalities for NSFW roleplay, crazy conspiracies, and an “Unlicensed Therapist” mode

AI voice assistants are usually polite, informative, and calm when they respond to you. xAI’s Grok 3 has apparently decided to set that strategy on fire, sending it literally screaming into the void.

Grok 3 has multiple voice options, each with a distinct personality, including an “unhinged” option that will yell, insult, and indeed scream at you before shutting down in the right circumstances.

AI developer Riley Goodside showcased just how wild the unhinged voice for Grok 3 can be in a recording where he repeatedly interrupts Grok’s responses. The AI soon becomes frustrated and finally snaps, letting out a disturbingly long, horror-movie-worthy shriek. It then throws in a final insult before cutting the call. A masterpiece of customer service, this is not. You can hear it in the clip below.

Grok 3 Voice Mode, following repeated, interrupting requests to yell louder, lets out an inhuman 30-second scream, insults me, and hangs up pic.twitter.com/5GtdDtpKce (February 24, 2025)

Voice of the unhinged

The “unhinged” personality is just one of several that Grok’s new voice mode offers. There’s also “Storyteller,” which does exactly what it sounds like; “Conspiracy,” which is really into Sasquatch and alien abductions; and “Unlicensed Therapist,” a personality that apparently failed the exams, possibly over a lack of empathy.

Then there’s “Sexy” mode, which is labeled 18+ and, unlike the voice settings of competitors like ChatGPT, does not shy away from full-on roleplaying NSFW scenarios. So, Grok will scream at you or whisper sweet nothings into your ear, depending on your preference.

It's a vision of AI that may not appeal to everyone. That said, it completely aligns with how CEO Elon Musk described xAI's goals in countering what he claims are overly sanitized and politically correct AI models from companies like OpenAI. While OpenAI’s ChatGPT has a voice feature, it’s still programmed to maintain a neutral, controlled demeanor. Grok, on the other hand, is unpredictable. It doesn’t just let you talk over it; it may react aggressively or emotionally. Not that you'd notice in the official promotions, however.

Try Grok voice conversation mode! Requires a Premium+ or SuperGrok subscription. pic.twitter.com/247Ev60DoJ (February 24, 2025)

Most mainstream AI tools have strict guidelines about content, particularly around "adult" topics. Grok 3 has seemingly been programmed with the opposite philosophy, except for when the company decides the model needs to be "corrected" in claims about the CEO.

Of course, this approach isn’t without controversy. AI personalities like “Unlicensed Therapist” could easily give people misleading or unhelpful advice, while a chatbot that openly encourages conspiracy theories seems like it could go off the rails quickly. And the “Sexy” mode? Well, that’s another ethical discussion that few would expect to have regarding mainstream AI tools. There’s also the question of how much of this is genuinely useful versus just pure spectacle. Very loud spectacle.


Slack is back and running smoothly, so get back to work everyone!

Wed, 02/26/2025 - 10:47

After a major outage and then hours of issues, Slack is back up and running as per normal.

"We've restored full functionality to all affected Slack features such as sending messages, workflows, threads and other API-related features. If you're still encountering any trouble, please reload Slack using Command + Shift + R (Mac) or Ctrl + Shift + R (Windows/Linux).Thank you for your patience while we worked to resolve this issue." Slack's status page explained.

The outage first hit the messaging and collaboration service at around 11am ET / 4pm GMT. This affected TechRadar directly, as we and our sibling publications all use Slack to communicate and collaborate.

Read on for coverage of this big outage as it happened and as we experienced it.

What is Slack anyway?

Slack is a ubiquitous cloud-based communication platform used by millions of businesses of all sizes around the world. It made its public debut in August 2013 and rapidly grew into one of the few independent business-grade online collaboration tools, facing rivals such as Zoom, Microsoft Teams and Google Meet. In 2020, Slack was acquired by Salesforce for $27.7 billion in a move that, some said, gave it the necessary clout to protect it from bigger competitors.

Yeah so Slack is definitely down and I'm kinda flying blind with no easy contact with my colleagues.

But this outage doesn't appear to be affecting everyone, as my boss Marc McLaren reports his Slack is still working... how odd.

Ok now he's just told me his Slack is down. So yeah, this isn't good for remote collaboration.

Just got an email from Future Publishing's (the company that owns TechRadar) IT department that Slack is down across the board.


Outages don't always directly affect us at TechRadar. But we all use Slack to collaborate remotely, across multiple countries and continents.

So this is an outage I'm really feeling directly; I can't quickly contact my colleagues in the US, which is a pain when there's a major Amazon product event going on.

So we're back to some rather old-school communication in the form of email and collaborating in Google Docs. No bad thing, as Google's G Suite is a rather robust set of tools. Still, this isn't exactly optimal.

On its service status page, Slack has posted the following:

"We're still working to restore functionality to affected Slack features such as sending messages, workflows, threads and other API-related features. Users may also experience issues when attempting to log-in. Thank you for your continued patience as we continue investigating. We'll be following up with further updates as they become available."


I can't tell if I'm feeling rather zen from the lack of Slack notification dings, or if I'm feeling rather isolated; the latter is probably down to me also listening to an atmospheric sci-fi soundtrack as I type.

Thanks to the magic of cloud-based software, I've got some insights from Desire Athow, Managing Editor of TechRadar Pro, on the state of play with Slack and what the outage means for businesses.

"Here’s how we're coping. Slack is a single point of failure for an entire organization, when it went down a few minutes ago, my first thought was, what do we have as an alternative?" he writes.

"Slack is where all real-time communication happens within tens of thousands of businesses including. At the time of writing, Downdetector has thousands instances of outage reported over the past three hours and rising."

Wonder how quickly I'll get sick of seeing the image below? Slack is still borked for me.

A screenshot of the message when Slack is down

(Image credit: Future)

And some more from Desire:

A rather regular occurrence


Over the past four years, Slack has had at least one big outage every single year. We had one in January 2021, one in February 2022, more in March 2022 and July 2022, and the last one was in July 2023.

So the popular business communication platform is not immune to downtime, and the fact that it keeps happening is worrying, to say the least.

Here's the latest update from Slack:

"We're continuing our efforts to restore functionality to affected features such as workflows, sending messages, threads and API-related features. Users may also experience issues when attempting to log-in. We appreciate your continued patience. More updates will be shared as soon as possible."

Seems like the reports of Slack outages on Downdetector have peaked. So either people have accepted Slack is out or the service is on the road to recovery and might be up soon... we'll see.

Another post from Slack:

"Our investigation is still in progress with regard to deprecated functionality for Slack features such as workflows, threads, sending messages and API-related features. We'll be back with more updates as soon as they're available."


So I can still see my Slack messages in the mobile app on my iPhone 16 Pro Max, but I can't send any new ones, as one might expect with the outage still in effect. Still, it's handy to have a snapshot of where work stood before the outage hit.

For what it's worth, I'm quite a fan of the Slack app on iOS. It's neat, works well, and feels decently integrated with Apple's mobile platform.

Oh, hang on a minute... both fellow Managing Editor Josie Watson and I have been able to send messages on the Slack app, so this may be a sign of the outage abating.

However, the desktop app and browser version of Slack are still throwing up 'trouble loading your workspace' messages for me.

According to Desire and Josie, Slack is back for both of them... I'm not so lucky.

But my mobile version of Slack is now fine.

No official word from Slack's status page that it's up again. But it appears that we're coming to the end of this outage. Though those could be famous last words, so to speak.

Emoji reactions to messages don't seem to be working, however. Hardly a crucial part of Slack, but an indication that things could be a tad buggy for a bit as Slack gets back up to speed.

My colleagues are reporting they have access to Slack on desktop on both Windows and macOS.

But I had to restart my MacBook, and Slack fully logged me out and seems to be struggling to authenticate me via my Google Workspace account. Mildly frustrating, I have to say.


Going by Slack's status page it doesn't look like everything has been ironed out yet:

"We're still working to restore functionality to affected Slack features such as sending messages, workflows, threads and other API-related features. We'll be back with more updates as soon as they're available."

According to Downdetector, there were also outages at Amazon Web Services, Instagram, and Microsoft Copilot, among others. It's unclear if these are related, but a lot of services use AWS for their cloud platform, so if that went down it could be the root cause of these outages.

Still no desktop Slack login for me. Pretty sure I'm not doing anything odd on my end...

This is all I'm seeing when I try to log into Slack. Getting infuriating now.

a screenshot of Slack loading

(Image credit: Future)

Another update from Slack's status page, and it reads much the same as before:

"We’re still actively investigating this issue, but we don’t have any new information to share at this time. Thanks for sticking with us as we continue to work towards a fix. We’ll keep you posted as soon we have an update."


I'm not the only one with no Slack desktop access as my colleague Mackenzie Frazier is also in the same boat, and I think a few others are too. But with communication a tad stifled it's hard to tell.

Another update from Slack:

"We’re still looking into the cause of the issue and working on restoring functionality to affected features such as workflows, sending messages, threads and API-related features. Users may also continue to experience issues when attempting to log-in. We’ll provide new information as soon as it’s available."

Still no joy on desktop login for me on Slack. Sigh.

My boss, Marc Mclaren, has got back into the desktop version of Slack. I'm still stuck in limbo...

And Slack has apologized for the ongoing troubles:

"No additional news to share just yet, but we’re focused on getting things back to normal as quickly as we can. We apologize for the continued trouble."

Here's a new update from Slack:

"We identified the cause of the issue and are continuing to monitor our health metrics, but we are still not fully resolved. Some users may still reproduce symptoms such as issues loading their workspaces. Our work is still ongoing to restore affected backend services to returning functionality to impacted features."

I'm still stuck in sign-in limbo with Slack on desktop, which seems consistent with what Slack has reported on its status page. It looks like issues will persist for some people while others have regained fully functional Slack access.

While Slack has been back for some on both desktop and mobile, as of 10:30 pm ET, the company's status page is still showing some issues with 'Messaging' as well as 'Apps/Integrations/APIs,' stating "Something's not quite right" for both.

It could well take a few more hours for things to get fully back to normal. For what it's worth, I'm able to send messages on desktop and mobile, but can only react with an emoji on the latter; that functionality has yet to return on desktop, even after a few app restarts.

Slack's outage certainly created quite the storm today, that's for sure.

Slack is fully up and running. So everyone can breathe easy... and get back to work.

Amazon unveils Alexa Plus, its brand new AI-infused voice assistant

Wed, 02/26/2025 - 10:16
  • Amazon has announced an AI-powered version of its Alexa voice assistant
  • Alexa Plus is more conversational and can carry out complex tasks for you
  • It'll be available as a subscription for $19.99 a month, but will be free for Amazon Prime members

Live from its HQ in New York City, Amazon is currently hosting its Alexa event, where we’ve all been expecting the launch of a brand new Alexa voice assistant. And lo and behold, it’s finally here – Amazon has revealed Alexa Plus, its new AI-infused voice assistant. Just as our list of predictions said, Alexa Plus will have a monthly subscription fee of $19.99. That's the only catch with Amazon's new AI voice assistant, and Amazon Prime members will get free access.

The announcement marks the biggest upgrade for the voice assistant since its launch in 2014. And from Vice President of Devices and Services Panos Panay’s demonstration at Amazon’s devices event, it looks rather impressive. Need a concierge? Sous-chef? Assistant? House manager? Alexa Plus seemingly has it all covered.

Panos Panay sat on stage demonstrating the new Alexa Plus voice assistant

Live from New York City, Vice President of Devices and Services Panos Panay gives a first-look demonstration of Alexa Plus. (Image credit: Future)

One of the biggest improvements in Alexa Plus compared to the classic Alexa voice assistant is its impressive ability to hold conversations, which Panay seamlessly trialled live on stage at the event.

He asked: “I'm a little bit nervous about it, but we're about to do live demos. What do you think can go wrong?” Alexa Plus responded: “With so many eyes on you, it's natural to feel a bit nervous. As for what could go wrong, let's just say Murphy's Law is probably sharpening his pencil right now.” So it’s confirmed: Alexa Plus has a great sense of humor.

So what can Alexa Plus do? Powered by AI models from Anthropic and Amazon Nova, it looks impressively versatile. Some of the demos included smart home control, making restaurant reservations and connecting to your calendar to add events or send invites to friends. The AI assistant also has vision powers, which means it can scan documents and recall information later.

Naturally, there's lots for kids too. A demo video showed Alexa Plus answering questions and creating stories, which it was able to do before – but this time it includes AI-generated images, too.

Daniel Rausch at Amazon Alexa event

Vice President of Alexa and Echo, Daniel Rausch, takes the stage to demonstrate more of Alexa's handy personal integrations. (Image credit: Future)

Between the time Alexa's AI upgrade was first teased and now, Amazon has had ample time to improve its then-shoddy capabilities, and its live demonstration of Alexa Plus shows how far the company has come. In addition to Alexa's new and improved conversational abilities, the consumer experience has clearly been a priority for Amazon: Alexa Plus is already proving able to respond seamlessly to requests and to keep a closer ear to the ground when managing your personal schedule.

One of the more impressive demonstrations shown at the event was Alexa Plus' knack for remembering context. As showcased by Panay, if you tell Alexa you're a vegetarian or have certain dietary requirements, it will take that into consideration when recommending places to eat or when you ask for recipes.

Though it's a little disappointing that Echo devices didn't get a mention at the Alexa event, the new Alexa Plus voice assistant packs enough new features and personality that we can forgive Amazon the omission. The next step will be testing out Alexa Plus for ourselves, and we'll no doubt keep you in the loop.

This is a breaking story; we'll update it with more information soon...

I have good news and bad news about Windows 11 24H2’s new update: it introduces nifty features and fixes... but also includes another ad

Wed, 02/26/2025 - 06:09
  • Windows 11 24H2 just received a new optional update
  • It has some interesting new features and fixes, but also brings in an advert
  • That PC Game Pass ad in the Settings app will very likely be coming to all Windows 11 users next month with the March patch

Microsoft has deployed a new (optional) patch for Windows 11 24H2 and it does some important work, introducing some useful touches and critical fixes – but there’s a catch in that an advert has strayed in here.

I say advert; Microsoft would doubtless call it a recommendation, but it’s all about pushing the firm's other services, whatever label you pin on it. In the case of this new KB5052093 preview update, it’s for PC Game Pass, and you’ll see it in the Settings app.

Microsoft explains: “Some of you might see a new referral card for a PC Game Pass subscription on the Settings home page. With it, you can invite friends and family to try PC Game Pass for free. If you qualify, the card only appears when you sign in to your PC using your Microsoft account.”

Other additions will prove much less controversial, such as the useful ability to share files from a jump list on the taskbar. (If you right-click an app on the taskbar, the menu that pops up is the jump list, allowing quick access to various useful functions in a context-sensitive manner.)

There’s also some laudable work in the accessibility stakes, with Narrator getting fresh bits of functionality with its scan mode, such as a shortcut to ‘skip past link’ which will take you straight to the text after a link, and another that lets you jump straight to a list in a document.

Microsoft has also changed Windows 11 to allow multiple apps to access a webcam simultaneously. As the company explains, this has been developed for people with hearing disabilities to allow “video streaming for both a sign language interpreter and the end audience at the same time.”

There are a bunch of bug fixes here, too, with a lot of tinkering going on in File Explorer (the window that shows your folders and the files within them). Some of the more important cures include ensuring the context menu invoked with a right-click doesn’t appear sluggishly when working with cloud files, and improved performance when opening folders containing a large quantity of media files.

Problems with scanners not being properly recognized by Windows 11 have also been fixed, and a bug that caused the system volume to be ramped up to maximum when the PC wakes from sleep has been squashed (an unwelcome intrusion that doubtless woke up the owner of the computer on a few occasions, as well as the system). Indeed, Windows 11 has suffered a number of audio-related bugs in the recent past.

Asus Zenbook A14 laptop at Windows 11 login screen

(Image credit: Future / Jasmine Mannan)

Analysis: Optional becomes mandatory next month, and that likely includes this new ad

As I outlined earlier this week, it’s apparent that File Explorer does need some considerable work in its Windows 11 incarnation, and it’s good to see some of that happening here. And the accessibility changes are obviously welcome, with this being an area Microsoft continues to score well in, too, so that’s all good.

The not-so-good is that advert, of course. The Game Pass ad was recently spotted in beta builds of Windows 11, when I went on a bit of a rant about it – and sadly, it appears to be motoring very swiftly through onto the release version of Microsoft’s OS. This preview update for February is not something you have to install – it’s optional – but it will transform into the March patch for Windows 11 next month. At which point, you will have to download it (and it’s unlikely the advert will be removed at the last minute).

True, the ad won’t appear for everyone – only for those signed into a Microsoft account who are PC Game Pass subscribers. And arguably, you might even want to give your friends a free two-week trial of the service, so they can join you in tackling some of the best co-op games, perhaps. But still, I can’t help but feel frustrated with Microsoft continuing to push its own services in parts of the Windows 11 interface, when this is an OS that people have already paid quite a chunk of money for.

If Windows 11 was completely free (not just for Windows 10 upgraders), then ad-supported would be fine and perfectly understandable – but it isn’t free, so instead users are kind of getting the worst of both worlds. And I’ve never quite understood why Microsoft doesn’t get this.

Clearly, though, things aren’t going to change for the foreseeable future, as integrated ads in the form of recommendations or suggestions are an angle Microsoft seems intent on exploring more often these days.

You may also like...

How Claude 3.7's new ‘extended' thinking compares to ChatGPT o1's reasoning

Tue, 02/25/2025 - 20:00

Anthropic just released a new model called Claude 3.7 Sonnet, and while I'm always interested in the latest AI capabilities, it was the new "extended" mode that really drew my eye. It reminded me of how OpenAI first debuted its o1 model for ChatGPT, offering a way of accessing o1 without leaving a chat with the GPT-4o model: you could type "/reason" and the chatbot would use o1 instead. That shortcut is superfluous now, though it still works in the app. Regardless, the deeper, more structured reasoning promised by both made me want to see how they would do against one another.

Claude 3.7’s Extended mode is designed to be a hybrid reasoning tool, giving users the option to toggle between quick, conversational responses and in-depth, step-by-step problem-solving. It takes time to analyze your prompt before delivering its answer. That makes it great for math, coding, and logic. You can even fine-tune the balance between speed and depth, giving it a time limit to think about its response. Anthropic positions this as a way to make AI more useful for real-world applications that require layered, methodical problem-solving, as opposed to just surface-level responses.

Accessing Claude 3.7 requires a subscription to Claude Pro, so I decided to use the demonstration in the video below as my test instead. To challenge the Extended thinking mode, Anthropic asked the AI to analyze and explain the popular, vintage probability puzzle known as the Monty Hall Problem. It’s a deceptively tricky question that stumps a lot of people, even those who consider themselves good at math.

The setup is simple: you're on a game show and asked to pick one of three doors. Behind one is a car; behind the others, goats. On a whim, Anthropic decided to go with crabs instead of goats, but the principle is the same. After you make your choice, the host, who knows what’s behind each door, opens one of the remaining two to reveal a goat (or crab). Now you have a choice: stick with your original pick or switch to the last unopened door. Most people assume it doesn’t matter, but counterintuitively, switching actually gives you a 2/3 chance of winning, while sticking with your first choice leaves you with just a 1/3 probability.
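If you're still skeptical, the 2/3 result is easy to verify for yourself with a few lines of simulation (a quick sketch of my own, not code produced by either chatbot):

```python
import random

def play(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the contestant wins."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door hiding neither the car nor the contestant's pick
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        # Switch to the one remaining unopened door
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

N = 100_000
stay = sum(play(False) for _ in range(N)) / N
swap = sum(play(True) for _ in range(N)) / N
print(f"stay with first pick: {stay:.3f}")  # ~0.333
print(f"switch doors:         {swap:.3f}")  # ~0.667
```

Run it and the win rates settle near 1/3 for staying and 2/3 for switching, exactly as the explanation above predicts.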

Crabby Choices

With Extended Thinking enabled, Claude 3.7 took a measured, almost academic approach to explaining the problem. Instead of just stating the correct answer, it carefully laid out the underlying logic in multiple steps, emphasizing why the probabilities shift after the host reveals a crab. It didn’t just explain in dry math terms, either. Claude ran through hypothetical scenarios, demonstrating how the probabilities played out over repeated trials, making it much easier to grasp why switching is always the better move. The response wasn’t rushed; it felt like having a professor walk me through it in a slow, deliberate manner, ensuring I truly understood why the common intuition was wrong.

ChatGPT o1 offered just as thorough a breakdown and explained the issue well. In fact, it explained it in multiple forms and styles. Along with the basic probability, it also ran through game theory, the narrative view, the psychological experience, and even an economic breakdown. If anything, it was a little overwhelming.

Gameplay

That's not all Claude's Extended thinking could do, though. As you can see in the video, Claude was even able to turn the Monty Hall Problem into a game you could play right in the window. The same prompt with ChatGPT o1 didn't produce quite the same result. Instead, ChatGPT wrote an HTML script for a simulation of the problem that I could save and open in my browser. It worked, as you can see below, but took a few extra steps.

ChatGPT reasoning Monty Hall

(Image credit: Anthropic)

While there are almost certainly small differences in quality depending on what kind of code or math you're working on, both Claude's Extended thinking and ChatGPT's o1 model offer solid, analytical approaches to logical problems. I can see the advantage of adjusting the time and depth of reasoning that Claude offers. That said, unless you're really in a hurry or demand an unusually heavy bit of analysis, ChatGPT doesn't take up too much time and produces quite a lot of content from its pondering.

The ability to render the problem as a simulation within the chat is much more notable. It makes Claude feel more flexible and powerful, even if the actual simulation likely uses very similar code to the HTML written by ChatGPT.

You might also like

I tried adding audio to videos in Dream Machine, and Sora's silence sounds deafening in comparison

Tue, 02/25/2025 - 18:00
  • Luma Labs’ Dream Machine can now add audio to video clips for free
  • You can prompt the audio or let the AI come up with something it decides is appropriate
  • Sora and other AI video makers mostly lack even an imperfect AI audio creator to go with their visuals

Luma Labs has added a score to the AI videos produced on its Dream Machine platform. The new feature brings audio to your video, either custom-generated to match a written prompt or created by the AI based solely on what's happening in the video. That could mean chirping birds in a sunrise scene, a spaceship’s distant hum for your sci-fi animation, the chaotic clatter of a busy café, or anything else you care to hear.

The new feature is free in beta for all users. After generating a video with Dream Machine, you’ll see a new “Audio” button along the row at the bottom of the video next to the existing "Extend" and "Enhance" buttons. Click it, and you get two choices: let the AI decide the best fitting sounds on its own, or take the wheel and provide a text prompt describing exactly what you want. Maybe you’ve got a dreamy nature scene and want to hear a distant waterfall, or maybe you want to hear how the AI does it; either way, it works.

Sound Idea

This update is big because AI-generated videos, while sometimes visually stunning, have always felt incomplete without sound. It's a lot of work to painstakingly add audio yourself. Even some of the biggest names in AI video don't have audio as an option yet, including OpenAI's Sora.

Of course, AI sound generation on its own isn't unique; there are plenty of AI music makers, and even full voice and song producers. But generating the audio within the same platform, linked directly to the video that's already there, makes Dream Machine a real standout. That said, it isn't perfect. You can tell from the way the motion and sound don't quite match with this dog as it swims.

On the other hand, when prompted correctly, this crackling fire and laughter of people around it sounds pretty good.

But, I wouldn't rely on Dream Machine to create sound on its own without any guidance in a prompt. With a blank audio prompt, the AI took the same short clip of people around a fire and came up with something a lot spookier.

You might also like...

OpenAI is rolling out exciting new features for all ChatGPT users, and I can't wait

Tue, 02/25/2025 - 17:36
  • Advanced Voice Mode is coming to all ChatGPT free users
  • There will be a daily limit on usage
  • Deep Research is being released to Plus, Team, Edu, and Enterprise users

OpenAI has just announced, via X, that it is starting to roll out a “preview” version of Advanced Voice mode for ChatGPT free users while also rolling out its Deep Research agent to all Plus, Team, Edu, and Enterprise users.

Advanced Voice Mode, which is currently only available to ChatGPT Plus users, launched initially in the mobile app versions of ChatGPT and arrived in the desktop app version of ChatGPT in November last year. It is one of the nicest features of ChatGPT; it’s a way to communicate with the chatbot using your voice in a free-flowing, natural conversation. It’s almost like talking to a real person, and you have the ability to interrupt the chatbot if you find its reply is going on too long. There are a variety of different voices to choose from too, so you can customize the experience.

OpenAI has previously experimented with offering 10 minutes of Advanced Voice Mode a month to ChatGPT free users, but the new rollout is going to “give all ChatGPT free users a chance to preview it daily across platforms." The company is also being a bit secretive about what the daily limit is for Advanced Voice Mode for free users, as it clearly wants to retain the ability to adjust it depending on demand. The only detail on usage it offers is that ChatGPT Plus users will get “5x the free limit."


ChatGPT 4o-mini-powered

The free version of Advanced Voice Mode will be powered by GPT-4o mini, while Plus users will continue to have access to Advanced Voice Mode powered by GPT-4o. In its statement, OpenAI said: “Starting today, we’re rolling out a version of Advanced Voice powered by GPT-4o mini to give all ChatGPT free users a chance to preview it daily across platforms. Plus users will continue to have access to Advanced Voice powered by 4o with the existing daily rate limit, which is more than 5x the free limit, as well as access to video and screensharing in Advanced Voice.”

Reacting to the news, some X users expressed concern that the 4o-mini model might be “dumbed down” and frustration that the daily limit remains in place for ChatGPT Plus subscribers. “We’re paying for the best, not a crippled version. Get it together,” said X user Emanuele Dagostino.

Gemini Live, Google's voice mode chatbot, is entirely free for Android users.

ChatGPT Voice mode on a website.

Advanced Voice Mode in the ChatGPT Mac app. (Image credit: OpenAI)

Deep Research

At the same time, OpenAI is rolling out its Deep Research agent tool to all of its paid subscribers, rather than just its Pro subscribers. Built using the o3 model, Deep Research is a tool for carrying out in-depth research on the internet, drastically reducing the time such research takes.

The o3 model is optimized for data analysis and can handle text, images, and PDF files that it can access via the web.

Deep Research can work independently. You simply give it a prompt, and it goes off and analyzes and synthesizes hundreds of online sources for you, reducing a job that would take human researchers many hours to a few minutes.

You might also like

Two AI chatbots speaking to each other in their own special language is the last thing we need

Tue, 02/25/2025 - 15:01
  • GibberLink lets AI chatbots communicate without words
  • An ElevenLabs hackathon prompted the creation of GibberLink
  • The chatbots use a new communication protocol

Imagine if two AIs could chat with each other in a language no human could understand. Right. Now go hide under the covers.

If you've called customer service in the last year or so, you've probably chatted with an AI. In fact, the earliest demonstrations of powerful large language models showed off how such AIs could easily fool human callers. There are now so many AI chatbots out there handling customer service that two of them are bound to dial each other up, and now, if they do, they can do it in their own special, sonic language.

Developers at the ElevenLabs 2025 Hackathon recently demonstrated GibberLink. Here's how it works, according to a demonstration they provided on YouTube.

Two AI agents from ElevenLabs (we've called them the best speech synthesis startup) call each other about a hotel booking. When they realize they are both AI assistants, they switch to a higher-speed audio communication called GGWave. According to a post on Reddit, GGWave is "a communication protocol that enables data transmission via sound waves."

In the video, the audio tones that replace spoken words sound a bit like old-school modem handshake protocols.

It's hard to say whether GGWave and GibberLink are any faster than speech, but the developers claim GGWave is cheaper because it no longer relies on the GPU to interpret speech, relying instead on the less resource-intensive CPU.
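The article doesn't spell out GGWave's actual modulation scheme, so as a rough illustration of how data can travel over sound, here's a toy frequency-shift-keying sketch in Python: each 4-bit nibble becomes a short sine tone at its own frequency, and the decoder recovers the nibbles by finding the dominant tone in each chunk. All the constants and function names are mine, not GGWave's.

```python
import math

SAMPLE_RATE = 48_000  # samples per second
TONE_SEC = 0.01       # duration of each symbol tone
BASE_HZ = 1_000       # frequency representing nibble value 0
STEP_HZ = 100         # spacing between adjacent nibble frequencies

def encode(data: bytes) -> list[float]:
    """Turn each 4-bit nibble into a short sine tone at its own frequency."""
    n = int(SAMPLE_RATE * TONE_SEC)
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_HZ + nibble * STEP_HZ
            samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                           for i in range(n))
    return samples

def _power_at(chunk: list[float], freq: float) -> float:
    """Signal power at one frequency (a single-bin DFT via correlation)."""
    s = sum(x * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
            for i, x in enumerate(chunk))
    return s * s + c * c

def decode(samples: list[float]) -> bytes:
    """Pick the strongest candidate tone in each chunk, then rebuild bytes."""
    n = int(SAMPLE_RATE * TONE_SEC)
    nibbles = [
        max(range(16),
            key=lambda k: _power_at(samples[i:i + n], BASE_HZ + k * STEP_HZ))
        for i in range(0, len(samples), n)
    ]
    return bytes((nibbles[j] << 4) | nibbles[j + 1]
                 for j in range(0, len(nibbles), 2))

print(decode(encode(b"hi")))  # b'hi'
```

A real data-over-sound protocol adds error correction and handshaking on top of this, but the round trip above shows the core idea: bytes in, audio samples out, bytes back.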

The group shared their code on GitHub in case anyone wants to try building this communication protocol for their own chatting AI chatbots.

Since these were ElevenLabs AI Agents, there's no indication that GibberLink would work with ChatGPT or Google Gemini, though I'm sure some will soon try similar GGWave efforts with these and other generative AI chatbots.

What are they saying?!

A pair of artificial intelligence assistants "speaking" their own unintelligible language sounds like a recipe for disaster. Who knows what these chatbots might get up to? After they're done booking that hotel room, what if they decide to empty the user's bank account and then use the funds to buy another computer to add a third GGWave "voice" to the mix?

Ultimately, this is a cool tech demonstration that doesn't have much purpose beyond proving it can be done. It has, though, succeeded in making people a little nervous.

You might also like

Amazon's big Alexa event is nearly here - here are 4 things to expect, including Alexa's AI upgrade and a new Echo speaker

Tue, 02/25/2025 - 11:40

Amazon's next big Alexa event is imminent, and it's set to be a major one for all things Echo and smart home. The device-focused event, which will take place on February 26 at 10AM ET in New York City, marks the company's first Alexa announcement since September 2023, when the Echo Pop Kids smart speaker and the second-gen Echo Show 8 were unveiled. This time, Amazon is likely focused on the Alexa voice assistant itself and could announce a big change for it.

While Amazon hasn't officially revealed what's in store for its Alexa event, it hasn't been afraid to drop little hints here and there in the build-up to the next device launch. So far, it's a safe guess that the Alexa voice assistant, which is said to be receiving a significant AI upgrade, will be the prime focus of the event, followed by the announcement of a new Echo smart speaker and possible Fire TV updates.

So we have a solid idea of what to expect next from the tech giant, but as we've said, nothing is set in stone. We won't know for sure until Amazon makes it official during its event, so you can bet our eyes will be peeled for all the latest announcements during our live blog, which we'll update regularly throughout the event. Before then, these are the announcements we're expecting to see tomorrow.

A next-gen Alexa

Alexa AI

(Image credit: Getty Images)

At Amazon's last device event in September 2023, the company teased us with a brief look at Alexa AI, an AI-powered version of the voice assistant with ChatGPT-style functions. This could include an advanced ability to interpret context and distinguish natural speech, the capacity to handle multiple requests in a single voice command, and a possible monthly subscription fee.

There's no doubt that Alexa AI will be the star of the show at Amazon's event. However, as recent leaks have pointed out, the AI revamp may be slightly delayed before access is granted.

We've recently reported that an anonymous source informed The Washington Post ($/£) that the AI-revamped Alexa voice had been experiencing inaccuracies when asked questions. As a result, its release date could now be pushed back to March 31, but it will still be announced at Amazon's Alexa event tomorrow.

New Echo smart speakers

Echo dot vs Echo Show

(Image credit: Future)

There's a chance we could see a brand new Echo speaker join Amazon's seemingly never-ending lineup of smart home devices that make up some of the best smart speakers. The last new Alexa speaker the company unveiled was the Amazon Echo 4th Gen in 2020.

Despite skipping its Alexa event last year, Amazon didn't starve us of some fresh Echo devices in its other smart home device ranges. Most notably, the Echo Show 21, which reigns as its largest Echo device, and its Echo Spot smart alarm speaker both made their debuts.

Given the near five-year time gap since Amazon's last Echo speaker hardware update, an announcement isn't completely unrealistic. A new smart speaker would also be handy for pairing with the AI-integrated Alexa voice.

Alexa subscription tiers

Amazon Echo First Gen

(Image credit: Future / Lance Ulanoff)

As we know, Alexa AI is likely to appear during Amazon's big Alexa event. However, we believe that the revamped voice assistant will offer limited free use before introducing a monthly subscription fee. Thankfully, though, this will likely not impact the classic Alexa we all know and love.

We've been aware that Amazon has been toying with the idea of a fee for its new Alexa voice, which could cost you between $5 and $10 a month. Considering that Amazon has fallen behind AI competitors ChatGPT, Google Gemini, and Apple Intelligence and has yet to ride the AI train, charging a monthly fee makes sense from a business perspective. From a consumer perspective, however, we're still not entirely convinced it will be worth splurging on, given the numerous delays and reported inaccurate responses.

Updates for Fire TV, and maybe a new device

New Amazon Fire TV Search Experience

(Image credit: Amazon)

While its Alexa voice assistant will be the main focus, it's likely that Amazon will speak about its Fire TV device range. Amazon's 2023 device event revealed features for its Fire TV devices, including an improved Alexa voice search function and AI screensavers. Following Amazon's Android TV update, we believe the company could introduce new Fire TV devices alongside updates to the abovementioned features during its event.

Mentions of new Fire TV hardware were spotted on one of Amazon's developer pages, stating the following: “Android 14-based Fire TV is based on API level 34. The following sections explain some of [the] important changes that you should consider when you build apps for Android 14-based Fire TV.” This gives a strong indication that new Fire TV devices will be among the star announcements at tomorrow's event.

This leak has come at an awfully convenient time with the Alexa event due to happen tomorrow, adding to our suspicions that Amazon could expand its Fire TV line. With the lack of mentions of specific hardware models, we're unable to pinpoint what exactly this will entail, but we'd expect it to be the announcement of a new smart TV or streaming stick.

You might also like

What is Firefly: everything you need to know about Adobe’s safe AI image generator

Tue, 02/25/2025 - 09:47

Firefly is a set of generative AI tools developed by Adobe. Built into Creative Cloud, Firefly’s features are designed to supercharge your workflow, whether you’re generating images, editing photos or designing graphics.

What sets Firefly apart from many of the best AI image generators is that it was trained on licensed Adobe Stock and public domain images, which means it should be safer to use commercially.

New Firefly features are being added to Creative Cloud all the time. Read on to find out how Firefly can improve your creative process.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

Adobe Firefly

(Image credit: Future)

What is Adobe Firefly?

Firefly is a set of creative tools built on four generative AI models: Image, Vector, Design and Video (which is still in beta). Developed by Adobe, Firefly uses AI to give designers and digital artists more creative flexibility. Some of its features are available as standalone tools, such as Firefly’s web-based text-to-image generator. Others are built directly into Creative Cloud apps, such as Generative Fill in Photoshop.

Launched in March 2023, Adobe Firefly is improving all the time, with new tools regularly added to Creative Cloud apps. The latest update included the beta rollout of its Video model, which powers Generative Expand in Premiere Pro, as well as photographic image adjustments in the Firefly web app.

Firefly is regarded as one of the most ethical AI image generators. This is because its models were trained exclusively on public domain images and licensed Adobe Stock. This makes it a popular choice for commercial and professional users, as it’s less likely to result in copyright issues with generated content.

What can you use Adobe Firefly for?

Adobe Firefly’s full capabilities are too numerous to list here, but they can be broken down into two categories. First, you have the standalone Firefly web interface. A fully featured AI image generator, it can be used to create high-quality images from natural language prompts in a range of visual styles.

It offers powerful options not found on other platforms: you can choose between art and photo content types, provide reference images for style and composition, apply lighting, colour and camera angle effects, and refine specific details in the prompt itself.

Then there are the various Firefly tools built into other Creative Cloud apps. You’ve got the powerful Generative Fill and Generative Expand tools in Photoshop, which leverage AI to edit the contents of existing photos and expand their margins. You can also use generative text effects to stylize typography in Adobe Express, while Text to Vector Graphic allows you to easily generate vectors in Illustrator. The new Generative Expand feature in Premiere Pro can even add seconds on to video clips.

What can’t you use Adobe Firefly for?

Firefly is designed to complement the creative process, rather than replace it entirely. As capable as it is, the output of its various tools often needs further refinement or integration into a project before it’s ready to go. Its output also fares better with some subjects than others, and the text-to-image tool will quite often generate unrealistic or surreal results.

The model’s ethics also limit what it can be used to generate. Firefly is built to avoid copyright infringement. Because it’s trained on Adobe Stock and public domain images, there are restrictions on the source material. It won’t generate branded imagery or likenesses of real people, for example. It will also steer clear of offensive or harmful imagery.

How much does Adobe Firefly cost?

Adobe Firefly is available for free via the web interface. The free plan includes 25 generative credits per month and doesn’t require a Creative Cloud subscription. For more comprehensive access, you can take out one of Adobe’s paid Firefly plans.

Firefly Standard costs $9.99 / £9.98 / AU$16.49 per month and includes unlimited access to image and vector features. You get 2,000 generative credits per month, which can be used to create five-second videos and translate audio. Firefly Pro ($29.99 / £28.99 / AU$49.49 per month) ups these limits to 7,000 generative credits.

You can also access Firefly’s features by taking out a Creative Cloud Single App ($9.99 / £9.98 / AU$16.49 per month) or All Apps ($59.99 / £56.98 / AU$96.99 per month) subscription. The former includes anywhere from 100 to 500 generative credits, depending on the app, while the latter gives you 1,000 to play with. For reference, one image generation usually uses one credit.

Where can you use Adobe Firefly?

Adobe Firefly’s text-to-image features can be accessed via the web platform at firefly.adobe.com. Several of its generative tools, including text effects, can also be used online through Adobe Express.

Other Firefly-powered features are only available through the relevant app. Generative Fill and Generative Expand, for example, require you to use Adobe Photoshop.

A laptop screen on a blue background showing Adobe's Firefly AI tools

(Image credit: Adobe)

Is Adobe Firefly any good?

Firefly’s effectiveness depends on what you’re using it for. In our review of Adobe Photoshop, for example, we praised the performance of the Generative Fill tool. Powered by Firefly Image Model 3, it proved capable of generating realistic imagery with deeper control of detail and composition, and we encountered few uncanny results. It’s not perfect, though: it still produces the occasional surreal outcome.

As a text-to-image generator, the web-based version of Firefly has one of the most comprehensive feature sets. In our hands-on experience, however, it doesn’t always pay exact attention to the details of your prompt. Complex prompts can confuse it, too. From our time using it, Firefly’s generative tool is better employed for design and graphics work than photorealism.

Use Adobe Firefly if...

You want a powerful AI image generator

Granular control of its web interface makes Firefly one of the most powerful AI image generators, with the ability to control camera angles, visual styles and more, as well as providing reference images for composition.

You want to stay on the right side of the law

Trained exclusively on licensed stock and public domain images, Adobe Firefly’s generative content is less likely to fall foul of copyright laws. This means you’re safer to use its output as part of commercial projects.

Don't use Adobe Firefly if...

You’re not a Creative Cloud subscriber

With features built into several Adobe apps, including Photoshop, Firefly complements existing tools with the power of generative AI. If you don’t use these programs, you won’t get the best out of it.

You mainly need photorealistic images

Firefly is a powerful text-to-image generator, but it’s best used for graphic and illustrative work. Complex photo prompts can often come out warped, with more believable results generated by models like Imagen 3.

Also consider
  • Luminar Neo is an intuitive, subscription-free photo editor with built-in AI tools. Simpler to use than Photoshop or Lightroom, it allows amateur photographers to save time when editing their images, with powerful removal tools. It doesn’t come close to the power and versatility offered by Adobe Firefly, though.
  • Leonardo is an AI image generator with pro-friendly features, including real-time editing and granular settings control. It has a range of generative tools, including one that lets you create AI images from sketches. It doesn’t have the cross-app depth of Firefly, but it’s a powerful creative option.

What is Gemini: everything you need to know about Google’s AI chatbot

Tue, 02/25/2025 - 09:27

Gemini is Google’s competitor in the race for AI supremacy. A rival to the likes of ChatGPT, it’s a chatbot underpinned by a powerful AI model.

Available online and through smartphone apps, Gemini can assist with everything from web searches to image generation. It also integrates into a number of Google cloud services for contextual assistance.

Gemini is free to use and improving all the time. With more to come in 2025, here’s what Gemini can do – and whether it’s worth paying for Gemini Advanced.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

Google Gemini app on iPhone image generation and Gemini Live

(Image credit: Google)

What is Google Gemini?

Gemini is the collective name for several components which together make up Google’s AI offering. All of these are underpinned by a large language model, which was released as Gemini in December 2023. The company’s chatbot was previously called Bard, but this was brought under the Gemini umbrella in February 2024. There’s also Gemini for Workspace, which leverages the AI model to enhance cloud-based productivity.

At its core, Gemini is a multimodal model which can process text, code, audio and image-based queries. There are several versions of this model. Which one you use depends on how you access Gemini: the web-based chatbot uses Gemini Flash, for example, while paid subscribers get access to the more powerful Gemini Ultra. There’s also Gemini Nano for on-device AI processing.

What can you use Google Gemini for?

For most users, Gemini is best seen as an alternative to ChatGPT. As an AI chatbot, it can provide reasoned, context-aware responses to natural language prompts. It’s able to answer questions, write code and summarize articles. It can also be used to generate images from simple descriptions using Google’s Imagen 3 text-to-image model.

Like ChatGPT, you can interact with Gemini in a conversational way, asking follow-up questions within the same thread. Because Gemini is connected to the web, you can also use it for enhanced search summaries, pulling in real-time information from online sources.

Gemini is also available on mobile: download the app for iOS or Android to interact with it on your smartphone. It will integrate with other Google apps, pulling in data from the likes of Gmail and Google Maps. The app is also where users can go Live with Gemini, allowing you to interact with the chatbot in real-time using your voice. For Android users, Gemini can also replace Google Assistant.

Recently, Google added the ability for all users to share documents with Gemini for analysis and feedback. Certain Workspace users can also access Deep Research, an in-depth reporting tool.

What can’t you use Google Gemini for?

Like most of the best AI chatbots, there are restrictions on the kind of content that Gemini can generate. It will try to steer clear of copyright infringement or offensive material, for example. There are also limits to the depth of some of its abilities. While it can generate code, for example, it isn’t a complete software development tool.

Gemini also suffers from the same fallibilities which befall other chatbots, namely that its answers are not always accurate. It’s been known to hallucinate, returning incorrect or invented information. As a result, it can’t be used as a definitive research tool.

How much does Google Gemini cost?

The standard version of Gemini is free to use online and through mobile apps. It currently runs on Google’s Gemini 2.0 model, with three versions available to select, depending on whether you need everyday help, multi-step reasoning or integration with YouTube, Maps and Search.

There is also a paid tier available, called Gemini Advanced. This is bundled in with the Google One AI Premium Plan, for $19.99 / £18.99 / AU$32.99 per month. It unlocks access to Google’s latest AI models, as well as new features and higher processing limits (allowing you to submit PDF documents of up to 1,500 pages). You also get native Gemini integration across a number of Google cloud-based services, including Gmail, Docs, Slides and Meet.

Where can you use Google Gemini?

Gemini can be accessed online using almost any browser, by pointing it to gemini.google.com. Head there to interact with Gemini as a web-based chatbot.

Gemini is also available as a smartphone app for iOS and Android. These apps have the same functionality as the web interface, plus the option to go Live with Gemini.

Man using Gemini Live on a phone.

(Image credit: Shutterstock/Rokas Tenys)

Is Google Gemini any good?

In our hands-on review of Gemini on an Android smartphone, we found it a genuinely useful alternative to Google Assistant. We noted some bugs, but many of those have been ironed out since, with plenty of new features added. We appreciated its integration with apps like Maps and Gmail, as well as how dialogue with the chatbot encouraged creativity, even if the faux humanity was grating.

In a more recent review of Gemini 2.0 Flash, we found the model faster, smarter and more accurate than before, making the chatbot a more compelling tool. We found its answers clear, succinct and creative.

There’s still room for improvement. Gemini isn’t immune from inaccuracies and reviews elsewhere have reported that the model can misunderstand the detail of what it’s being asked. Even so, Google’s AI tool has a lot going for it – particularly with the impressive real-time performance of Gemini Live.

Use Google Gemini if...

You want a free chatbot on your phone

Google Gemini is a capable AI chatbot that’s available for free on iOS and Android, where it integrates with other Google apps, like Gmail. Using it on your smartphone unlocks more features than the web-based version.

You want real-time spoken conversations

Gemini Live on mobile allows you to hold a spoken conversation with Google’s AI model, for a more natural, free-flowing interaction that leaves your hands free for something else.

Don't use Google Gemini if...

You need completely accurate information

Gemini does a good job of sourcing real-time data from the web, but it’s still liable to return inaccurate information or hallucinate facts. Accuracy is improving, but results still need to be cross-checked.

You need a fully formed AI tool

As capable as it is, Gemini is still very much a work in progress. New features and integrations are in the pipeline, but you need to accept certain limitations when using Google’s chatbot right now.

Also consider
  • ChatGPT is the most well-known AI chatbot – and for good reason. It has a full set of features to rival Google Gemini, including a voice mode and web search capabilities. Several reviews also find ChatGPT more accurate when it comes to sourcing information, although this depends on the context.
  • Apple Intelligence is a suite of AI tools available to users of compatible iPhone, iPad and Mac devices. Deeply integrated into Apple’s operating systems, the toolkit provides contextual assistance across apps like Mail and Photos. Rollout is ongoing and most processing takes place on device.

What is ChatGPT: everything you should know about the AI chatbot

Tue, 02/25/2025 - 08:33

With more than 400 million monthly users, ChatGPT is the most popular of all AI chatbots. Trained on huge amounts of data, it can process written prompts and generate contextual answers which feel like you’re chatting to a human in real time.

Based on a Large Language Model, the AI bot is evolving all the time. In its latest iteration, ChatGPT is capable of answering in-depth questions, helping with website code and even generating images.

New features are being added all of the time, so read on to find out what ChatGPT can do and why it’s worth using.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

What is ChatGPT?

When it launched in November 2022, ChatGPT signalled a new era for artificial intelligence. Developed by OpenAI, the AI chatbot became a hugely popular tool almost overnight. Much of the appeal of ChatGPT lies in its use of natural language. Built on Large Language Models, it’s able to understand human queries written as plain text prompts and generate conversational responses.

In March 2023, OpenAI announced GPT-4, the latest version of its language model. This iteration is multimodal, meaning it can process text, images and audio. Apps running on GPT-4, including ChatGPT, are also better able to process the context of queries and produce more accurate, relevant results.

The result is a chatbot that can be leveraged for a wide range of queries, with answers rendered quickly and accessibly.

What can you use ChatGPT for?

The full capabilities of ChatGPT are still being explored by its millions of users. Almost any query that can be phrased and answered using written words can be addressed – or at least attempted – by ChatGPT. Its remit can be summarized as language-based tasks, whether that language is English, a foreign language or computer code.

ChatGPT can create computer code from natural language instructions or troubleshoot existing code. It can write a wedding speech for you or re-write your draft in a way that flows better. It can create personalized workout plans, generate business ideas and even act as your therapist.

The introduction of ChatGPT search unlocks the ability for users to get a summary of answers to a specific query sourced from the web, while integration of OpenAI’s DALL-E 3 text-to-image model means all ChatGPT users can also ask the chatbot to generate images from prompts.

What can’t you use ChatGPT for?

As powerful as it is, ChatGPT still has limitations. Chief among them is fact-checking. Famously, the chatbot’s responses are not always accurate and it’s known to hallucinate facts. OpenAI says that ChatGPT “may be inaccurate, untruthful, and otherwise misleading at times.”

ChatGPT is also subject to a number of guidelines and ethical restrictions. The AI chatbot tries its best to avoid explicit, violent or discriminatory content. It won’t engage in political discussions, nor will it offer legal, medical or financial advice on an individual basis.

OpenAI also emphasizes that the generative chatbot cannot show empathy or engage with real emotions. Nor will it promote anything relating to self-harm.

How much does ChatGPT cost?

ChatGPT is available for free, but there are also two paid tiers for individuals, as well as Team and Enterprise plans for organizations.

For free, users get access to OpenAI’s GPT-4o mini model, plus limited access to GPT-4o and o3-mini. They can also use ChatGPT search to access results informed by real-time data from the web. Hitting the 'Reason' button on a query gives you limited access to the ChatGPT o1 model. You get a limited number of images from DALL-E.

Plus costs $20 (about £16 / AU$30) per month and unlocks a number of additional features, including access to multiple reasoning models, ChatGPT’s advanced voice mode and limited access to Sora, OpenAI’s video generation model, as well as more images with DALL-E.

The Pro tier will set you back a much more significant $200 (about £165 / AU$325) per month. Designed for advanced users, it unlocks deeper capabilities for every tool in ChatGPT’s arsenal, including unlimited access to the latest reasoning models and extended Sora access for generating AI video.

The Team option is priced at $25 (about £19 / AU$38) per user per month and allows users to create and share custom GPTs within a workspace.

Where can you use ChatGPT?

ChatGPT can be accessed through its web interface at chatgpt.com using almost any browser. For a while, this was the only way to use the chatbot. However, you can now download official iOS and Android apps for free, as well as a desktop app for macOS and Windows.

The interface and features are broadly consistent, but the mobile apps come with the added benefit of being able to engage in a voice conversation with ChatGPT, by tapping the audio icon next to the text input field.

Is ChatGPT any good?

Based on our extensive hands-on experience with ChatGPT, it’s a powerful tool with a lot of uses. Its conversational interface makes it easy for almost anyone to interact with the chatbot, whether you’re asking it to summarize a report or generate an image. The quality of its responses often depends on the wording and context of the prompt, which can vary significantly.

The paid experience is faster and more accurate than the free tier, turning up better quality responses across a range of queries and topics. That said, it’s still vulnerable to factual inaccuracies and hallucinations, while data sourced from the web isn’t necessarily the most up-to-date. As a fact-checking tool, ChatGPT still can’t be relied upon.

As a shortcut for everyday queries or a way to turbocharge your workflow, though, ChatGPT has plenty of potential. Leveraged in the right way, it’s a very powerful tool.

Use ChatGPT if...

You want to use a capable chatbot

From writing content to generating website code, ChatGPT is a hugely capable tool that allows you to get a lot done simply by writing your queries in natural human language.

You want a lot of features for free

Recent updates mean ChatGPT’s free plan now includes access to ChatGPT search and image generation using OpenAI’s DALL-E 3 model, meaning you can get a lot done without a paid subscription.

Don't use ChatGPT if...

You need completely accurate information

Even with real-time web access enabled, ChatGPT is prone to responding with factual inaccuracies. Results need to be cross-referenced with reliable sources before they can be relied upon.

You don’t want to pay for the best features

The free tier is fine for casual users, but if ChatGPT is built into your workflow, you’ll need to pay for a subscription to unlock the faster processing and greater reliability of the latest models.

Also consider
  • Gemini is Google’s alternative to ChatGPT. Previously known as Bard, it’s a Large Language Model that can do a lot of the same things as OpenAI’s chatbot, including answering questions, writing code and creating images.
  • Copilot is Microsoft’s take on the AI chatbot, aimed mainly at business users. Available as a website, an app and as a sidebar in the Edge browser, it’s billed as an AI companion that offers contextual help with everyday tasks.

What is Midjourney: everything you need to know about the AI image generator

Tue, 02/25/2025 - 05:24

Midjourney is one of the most powerful AI image generators available today. A paid tool with a range of subscription plans, it allows you to create accurate and artistic visuals based on simple text prompts.

Midjourney has to be accessed through Discord. This comes with a learning curve, but has the benefit of an active community element, which encourages collaboration.

It’s capable of rapidly generating images and running several jobs at the same time, making it a strong choice for power users.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

What is Midjourney?

Midjourney is one of the best AI image generators. Known for its distinctive stylization and artistic looks, the paid tool can turn simple text prompts into high-quality digital renders. Developed by an independent research lab, Midjourney has evolved significantly since its first beta version launched in 2022, generating consistently high-quality results.

Instead of a standalone web platform or app, Midjourney is accessed through a dedicated Discord server. This gives it a unique social component, allowing you to explore and interact with artwork generated by the Midjourney community. This collaborative aspect is one reason why the tool is popular with artists and designers.

Its Discord-based interface is less accessible than some tools, but the pay-off is more granular editing control of image variations once you’ve mastered it.

What can you use Midjourney for?

Midjourney can be used to generate artistic images based on just about anything your mind’s eye can conjure. Enter a text prompt and, after a short wait, you’ll be rewarded with a rich visual representation. It can produce everything from photorealistic human hands to digital cartoons to watercolor landscapes.

This makes Midjourney a win for quickly visualizing ideas, whether that’s concept art for a video game or textures for a mood board. It’s also useful for people who might lack the digital design skills required to create the image assets they need.

All Midjourney plans also feature useful editing tools, including the ability to selectively refine images by painting over certain areas, as well as the option to generate four variations of a given image with strong or subtle differences.

What can’t you use Midjourney for?

Although Midjourney is a powerful AI image generator, it still has limitations. Its editing toolkit is generous, but it can’t match the fully granular layer editing offered by traditional graphic design software such as Adobe Photoshop.

Midjourney is also subject to limitations based on your subscription level (see below). With a basic plan ($10 a month / $96 a year), for example, you’re not entitled to unlimited generations.

Like any AI image generator, Midjourney should not be used to infringe the intellectual property rights of other creators. Under Midjourney’s terms, responsibility for this falls on the user, although Midjourney itself has come under fire for potentially training the model on copyrighted images.

How much does Midjourney cost?

Midjourney is a paid tool with four subscription tiers. All of its plans are available on a monthly or an annual basis. Subscribing annually saves you 20%.

  • Basic Plan $10 per month / $96 per year (about £8 / £76 and AU$16 / AU$151)
  • Standard Plan $30 per month / $288 per year (about £24 / £228 and AU$47 / AU$455)
  • Pro Plan $60 per month / $576 per year (about £48 / £456 and AU$95 / AU$910)
  • Mega Plan $120 per month / $1,152 per year (about £95 / £912 and AU$189 / AU$1,820)

The differences between each plan relate mainly to how fast you can generate images and how many jobs you can have running at a single time.

With the basic plan, for example, you’re entitled to 200 minutes of “Fast GPU Time” per month, with support for up to three fast jobs at once. Compare that to the Pro Plan, which gives you 30 hours of Fast GPU Time and 12 fast jobs simultaneously, plus unlimited generations using the slower “Relax Mode”.

You can find the full entitlements of each plan on Midjourney’s pricing page.

Where can you use Midjourney?

Midjourney is accessed through Discord, a web-based social platform. Once you’re on the server, you interact with the text-to-image model by messaging the Midjourney Bot.

Discord can also be downloaded as an app for Windows, Mac, iOS and Android, allowing you to message the Midjourney Bot directly.

Is Midjourney any good?

In our full review of Midjourney, we rated it as a superb option for generating artistic images using AI. We praised the “stunning” quality of its image output, as well as the distinctive style and flair of results. In particular, we noted how well it handles lighting and textures.

We also welcomed the array of editing options included with every Midjourney plan, including the ability to upscale images, create variations and tweak prompts with custom zooms.

The main drawback is the interface: for those unfamiliar with Discord, there’s a definite learning curve. Still, ongoing refinements mean that Midjourney is much slicker and easier to use than it used to be. The community aspect is also unique in creating a collaborative user environment which we found genuinely inspiring in our review.

Use Midjourney if...

You want a fully featured AI image generator

Beyond text-based prompts and a deep understanding of different artistic styles, Midjourney includes a range of tools designed to unlock your creativity, including upscaling, two-strength variations and selective editing.

You want to collaborate with digital artists

Because it’s built around Discord, community is at the core of Midjourney. Hit the explore tab and you’ll find a feed of images generated by other users, encouraging inspiration and collaboration with Midjourney’s artistic flair.

Don't use Midjourney if...

You want a simple AI image generator

Midjourney’s interface is more intuitive than it used to be, but there is still a learning curve associated with the Discord-based platform. Getting to grips with its more advanced controls takes a little time.

You want a free AI image generator

Midjourney is capable of producing beautifully artistic results, but you’ll pay for the privilege. After a free trial, subscriptions range from $10 to $120 per month for the full set of features. Others are available for less – or even free.

Also consider
  • DALL-E 3 is an AI image generator. Developed by OpenAI, it can be accessed through ChatGPT or Microsoft Designer. Its chatbot interface is easier to use than Midjourney’s, and it still supports selective editing and complex prompts.
  • Leonardo is a feature-packed AI image generator with a versatile creative toolkit, including real-time editing and upscaling. It’s fast, too, making it a good option for professionals, provided you don’t mind an interface that’s slightly clunky in places.

I tried ChatGPT's Dall-E 3 image generator and these 5 tips will help you get the most from your AI creations

Mon, 02/24/2025 - 21:00

If you use ChatGPT much, you've likely experimented with DALL-E 3, the latest iteration of OpenAI’s AI image generator. DALL-E 3 can be shockingly good at bringing your ideas to life but sometimes shockingly bad at understanding certain details, or just shocking in what it chooses to emphasize. Still, if you want to see a specific scene move from your head to your screen, DALL-E 3 is usually pretty helpful; it can even render hands writing.

But here’s the thing: DALL-E 3 is still an AI model, not a mind reader. If you want images that actually look like what you’re imagining, you need to learn how to speak its language.

After some trial and error (and a few unintentional horrors), I've become fairly adept at speaking its language. So here are five key tips that will help you do the same.

Polish in HD

DALL-E 3 has a quality mode called ‘hd,’ which makes images look sharper and more detailed. Think of it like adding a high-end camera lens to your AI-generated art – textures look richer, lighting is more refined, and things like fabric, fur, and reflections have more depth.

Check out how it looks in this prompt: "A rendering of a close-up shot of a butterfly resting on a sunflower. Quality: hd."

(Image credit: Image created with OpenAI's DALL-E 3)

Play with aspect ratio

Not every image has to be a square. DALL-E 3 lets you set the aspect ratio, which may seem minor but can be huge if you want to make something look more cinematic or like a portrait.

Square is fine, but why limit yourself when you can make something epic? This is especially useful for social media graphics and posters, like the one below, which had the prompt: "A vertical poster of a vintage travel advertisement for Paris, size 1024x1792 pixels."
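If you use DALL-E 3 through the OpenAI API rather than the ChatGPT interface, these two tips map directly onto request parameters. Here's a minimal sketch, assuming the official `openai` Python package; the helper function and prompt are illustrative, not part of any SDK, and the exact call shape may vary between SDK versions.

```python
def build_image_request(prompt: str, portrait: bool = False) -> dict:
    """Collect DALL-E 3 generation parameters in one place.

    `quality="hd"` requests the sharper, more detailed mode the article
    describes; `size` picks portrait (1024x1792) or square (1024x1024).
    """
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "quality": "hd",
        "size": "1024x1792" if portrait else "1024x1024",
        "n": 1,
    }

params = build_image_request(
    "A vertical poster of a vintage travel advertisement for Paris",
    portrait=True,
)
print(params["size"])

# To actually generate (requires an API key in OPENAI_API_KEY):
# from openai import OpenAI
# image = OpenAI().images.generate(**params)
```

Swapping `"1024x1792"` for `"1792x1024"` would give a landscape, cinematic frame instead.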

(Image credit: Image created with OpenAI's DALL-E 3)

Think like a film director

To get an image capable of evoking emotion, sometimes it helps to think like you're a photographer in the real world. Think about camera angle or composition techniques; look them up if necessary. The result can dramatically change how an image looks.

Instead of a flat, dead-on image, you can request angles like close-up, bird’s-eye view, or over-the-shoulder. The same goes for composition styles and terms like ‘symmetrical composition’ or ‘depth of field.’

That's how you can get the following image from this prompt: "A dramatic over-the-shoulder shot of a lone cowboy standing on a rocky cliff, gazing at the vast desert landscape below. The sun sets in the distance, casting long shadows across the canyon. The cowboy's silhouette is sharp against the golden sky, evoking a cinematic Western feel."

(Image credit: Image created with OpenAI's DALL-E 3)

Iterate, iterate, iterate

One of DALL-E 3’s lesser-known but highly effective tricks is telling it what not to include. This helps avoid unwanted elements in your image. That might mean specifying negative elements like colors, objects, or styles you don't want or refining the style and mood by what you don't want it to feel like.

That's how I got the image below, using the prompt: "A peaceful park in autumn with a young woman sitting on a wooden bench, reading a book. Golden leaves cover the ground, and a soft breeze rustles the trees. No other people, no litter, just a quiet, serene moment in nature."

(Image credit: Image created with OpenAI's DALL-E 3)

Be overly specific

Think of DALL-E 3 as a very literal genie: it gives you exactly what you ask for, no more, no less. So if you type in “a dog,” don’t be surprised when it spits out a random dog of indeterminate breed, vibe, or moral alignment. The more details you include – like breed, color, setting, mood, or even art style – the better the results.

As an example, you might start with: “A wizard casting a spell,” but you'd be better off submitting: “An elderly wizard with a long, braided white beard, dressed in emerald-green robes embroidered with gold runes, conjuring a swirling vortex of blue lightning from his fingertips in a stormy mountain landscape.” You can see both below.

(Image credit: Image created with OpenAI's DALL-E 3)

(Image credit: Image created with OpenAI's DALL-E 3)
