Techradar


I tried adding audio to videos in Dream Machine, and Sora's silence sounds deafening in comparison

Tue, 02/25/2025 - 18:00
  • Luma Labs’ Dream Machine can now add audio to video clips for free
  • You can prompt the audio or let the AI come up with something it decides is appropriate
  • Sora and other AI video makers mostly lack even an imperfect AI audio creator to go with their visuals

Luma Labs has added a score to the AI videos produced on its Dream Machine platform. The new feature brings audio to your video, either custom-generated to match a written prompt or created by the AI based solely on what's happening in the clip. That could mean chirping birds in a sunrise scene, a spaceship’s distant hum for your sci-fi animation, the chaotic clatter of a busy café, or anything else you care to hear.

The new feature is free in beta for all users. After generating a video with Dream Machine, you’ll see a new “Audio” button along the row at the bottom of the video, next to the existing "Extend" and "Enhance" buttons. Click it, and you get two choices: let the AI decide the best-fitting sounds on its own, or take the wheel and provide a text prompt describing exactly what you want. Maybe you’ve got a dreamy nature scene and want to hear a distant waterfall, or maybe you want to hear what the AI comes up with; either way, it works.

Sound Idea

This update is big because AI-generated videos, while sometimes visually stunning, have always felt incomplete without sound. It's a lot of work to painstakingly add audio yourself. Even some of the biggest names in AI video don't have audio as an option yet, including OpenAI's Sora.

Of course, AI sound generation on its own isn't unique. There are plenty of AI music makers, and even full voice and song producers. But generating the audio within the same platform, linked directly to the video you've just made, makes Dream Machine a real standout. That said, it isn't perfect. You can tell from the way the motion and sound don't quite match with this dog as it swims.

On the other hand, when prompted correctly, this crackling fire and laughter of people around it sounds pretty good.

But I wouldn't rely on Dream Machine to create sound on its own without any guidance in a prompt. With a blank audio prompt, the AI took the same short clip of people around a fire and came up with something a lot spookier.


OpenAI is rolling out exciting new features for all ChatGPT users, and I can't wait

Tue, 02/25/2025 - 17:36
  • Advanced Voice Mode is coming to all ChatGPT free users
  • There will be a daily limit on usage
  • Deep Research is being released to Plus, Team, Edu, and Enterprise users

OpenAI has just announced, via X, that it is starting to roll out a “preview” version of Advanced Voice mode for ChatGPT free users while also rolling out its Deep Research agent to all Plus, Team, Edu, and Enterprise users.

Advanced Voice Mode, which is currently only available to ChatGPT Plus users, launched initially in the mobile app versions of ChatGPT and arrived in the desktop app version of ChatGPT in November last year. It is one of the nicest features of ChatGPT; it’s a way to communicate with the chatbot using your voice in a free-flowing, natural conversation. It’s almost like talking to a real person, and you have the ability to interrupt the chatbot if you find its reply is going on too long. There are a variety of different voices to choose from too, so you can customize the experience.

OpenAI has previously experimented with offering 10 minutes of Advanced Voice Mode a month to ChatGPT free users, but the new rollout is going to “give all ChatGPT free users a chance to preview it daily across platforms.” The company is also being a bit secretive about what the daily limit is for Advanced Voice Mode for free users, as it clearly wants to retain the ability to adjust it depending on demand. The only detail on usage it offers is that ChatGPT Plus users will get “5x the free limit.”

“Starting today, we’re rolling out a version of Advanced Voice powered by GPT-4o mini to give all ChatGPT free users a chance to preview it daily across platforms. The natural conversation pace and tone are similar to the GPT-4o version while being more cost effective to serve.” — OpenAI, February 25, 2025

GPT-4o mini-powered

The free version of Advanced Voice Mode will be powered by GPT-4o mini, while Plus users will continue to have access to Advanced Voice Mode powered by GPT-4o. In its statement, OpenAI said: “Starting today, we’re rolling out a version of Advanced Voice powered by GPT-4o mini to give all ChatGPT free users a chance to preview it daily across platforms. Plus users will continue to have access to Advanced Voice powered by 4o with the existing daily rate limit, which is more than 5x the free limit, as well as access to video and screensharing in Advanced Voice.”

Reacting to the news, some X users expressed concern that the 4o-mini model might be “dumbed down,” and frustration that the daily limit remains in place for ChatGPT Plus subscribers. “We’re paying for the best, not a crippled version. Get it together,” said X user Emanuele Dagostino.

Gemini Live, Google's voice mode chatbot, is entirely free for Android users.

ChatGPT Voice mode on a website.

Advanced Voice Mode in the ChatGPT Mac app. (Image credit: OpenAI)

Deep Research

At the same time, OpenAI is rolling out its Deep Research agent tool to all its paid subscribers, rather than just its Pro subscribers. Built using the o3 model, Deep Research is a tool for carrying out in-depth research using the Internet that drastically reduces the time taken by researchers.

The o3 model is optimized for data analysis and can handle text, images, and PDF files that it can access via the web.

Deep Research can work independently. You simply give it a prompt, and it goes off and analyzes and synthesizes hundreds of online sources for you, reducing a job that would take human researchers many hours to a few minutes.


Two AI chatbots speaking to each other in their own special language is the last thing we need

Tue, 02/25/2025 - 15:01
  • GibberLink lets AI chatbots communicate without words
  • ElevenLabs hackathon prompts the creation of GGWave
  • The chatbots use a new communication protocol

Imagine if two AIs could chat with each other in a language no human could understand. Right. Now go hide under the covers.

If you've called customer service in the last year or so, you've probably chatted with an AI. In fact, the earliest demonstrations of powerful large language models showed off how such AIs could easily fool human callers. There are now so many AI chatbots out there handling customer service that two of them are bound to dial each other up, and now, if they do, they can do it in their own special, sonic language.

Developers at the ElevenLabs 2025 Hackathon recently demonstrated GibberLink. Here's how it works, according to a demonstration they provided on YouTube.

Two AI agents from ElevenLabs (we've called them the best speech synthesis startup) call each other about a hotel booking. When they realize they are both AI assistants, they switch to a higher-speed audio communication protocol called GGWave. According to a post on Reddit, GGWave is "a communication protocol that enables data transmission via sound waves."

In the video, the audio tones that replace spoken words sound a bit like old-school modem handshake protocols.

It's hard to say whether GGWave and GibberLink are any faster than speech, but the developers claim GGWave is cheaper because it no longer relies on the GPU to interpret speech, relying instead on the less resource-intensive CPU.
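The article doesn't detail GGWave's actual modulation scheme, but the general idea of sending data as sound can be sketched with simple frequency-shift keying: each chunk of data maps to a tone at a distinct frequency. Here's a minimal, illustrative Python sketch; the sample rate, tone length, and frequency plan are made-up values for demonstration, not GGWave's real parameters:

```python
import math

SAMPLE_RATE = 16_000   # samples per second (illustrative value)
TONE_SECONDS = 0.05    # duration of each tone
BASE_FREQ = 1_000.0    # frequency representing nibble value 0 (Hz)
FREQ_STEP = 100.0      # spacing between adjacent nibble frequencies (Hz)

def nibble_to_tone(nibble: int) -> list[float]:
    """Render one 4-bit value as a pure sine tone at its assigned frequency."""
    freq = BASE_FREQ + nibble * FREQ_STEP
    n = int(SAMPLE_RATE * TONE_SECONDS)
    return [math.sin(2 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)]

def encode(message: bytes) -> list[float]:
    """Encode bytes as a sequence of tones, one tone per 4-bit nibble."""
    samples: list[float] = []
    for byte in message:
        samples += nibble_to_tone(byte >> 4)    # high nibble first
        samples += nibble_to_tone(byte & 0x0F)  # then low nibble
    return samples

samples = encode(b"hi")
# 2 bytes -> 4 nibbles -> 4 tones of 0.05 s each at 16 kHz = 3200 samples
print(len(samples))
```

A receiver would do the reverse: slice the incoming signal into tone-length windows and pick the dominant frequency in each window to recover the nibbles.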

The group shared their code on GitHub in case anyone wants to try building this communication protocol for their own chatting AI chatbots.

Since these were ElevenLabs AI Agents, there's no indication that GibberLink would work with ChatGPT or Google Gemini, though I'm sure some will soon try similar GGWave efforts with these and other generative AI chatbots.

What are they saying?!

A pair of artificial intelligence assistants "speaking" their unintelligible language sounds like a recipe for disaster. Who knows what these chatbots might get up to? After they're done booking that hotel room, what if they decide to empty the user's bank account and then use the funds to buy another computer to add a third GGWave "voice" to the mix?

Ultimately, this is a cool tech demonstration that doesn't have much purpose beyond proving it can be done. It has, though, succeeded in making people a little nervous.


Amazon's big Alexa event is nearly here - here are 4 things to expect, including Alexa's AI upgrade and a new Echo speaker

Tue, 02/25/2025 - 11:40

Amazon's next big Alexa event is imminent, and it's set to be a major one for all things Echo and smart home. The device-focused event, which will take place on February 26 at 10AM ET in New York City, marks the company's first Alexa announcement since September 2023. That was when the Echo Pop Kids smart speaker and its second-gen Echo Show 8 were unveiled. This time, Amazon is likely focused on the Alexa voice and could announce a big change for its smart assistant.

While Amazon hasn't officially revealed what's in store for its Alexa event, it hasn't been afraid to drop little hints here and there in the build-up to the next device launch. So far, we can venture a safe guess that the Alexa voice assistant, which is said to be receiving a significant AI upgrade, will be the prime focus of the event, followed by the announcement of a new Echo smart speaker and possible Fire TV updates.

We have a solid idea of what to expect next from the tech giant, but as we've said, nothing is set in stone. We won't know for sure until Amazon makes it official during its event, so you can bet our eyes will be peeled for all the latest announcements during our live blog, which we'll update regularly throughout the event. Before that, though, these are the announcements we're expecting to see tomorrow.

A next-gen Alexa

Alexa AI

(Image credit: Getty Images)

At Amazon's last device event in September 2023, the company teased us with a brief look at Alexa AI, an AI-powered version of the voice assistant with ChatGPT-style functions. These could include an advanced ability to interpret context and natural speech, the handling of multiple requests in a single voice command, and a possible monthly subscription fee.

There's no doubt that Alexa AI will be the star of the show at Amazon's event. However, as recent leaks have pointed out, the AI revamp's public release may lag slightly behind its announcement.

We've recently reported that an anonymous source informed The Washington Post ($/£) that the AI-revamped Alexa voice had been experiencing inaccuracies when asked questions. As a result, its release date could now be pushed back to March 31, but it will still be announced at Amazon's Alexa event tomorrow.

New Echo smart speakers

Echo dot vs Echo Show

(Image credit: Future)

There's a chance we could see a brand-new Echo speaker join Amazon's seemingly never-ending lineup of smart home devices, which make up some of the best smart speakers. The last new Alexa speaker the company unveiled was the Amazon Echo 4th Gen in 2020.

Despite skipping its Alexa event last year, Amazon didn't starve us of some fresh Echo devices in its other smart home device ranges. Most notably, the Echo Show 21, which reigns as its largest Echo device, and its Echo Spot smart alarm speaker both made their debuts.

Given the near five-year time gap since Amazon's last Echo speaker hardware update, an announcement isn't completely unrealistic. A new smart speaker would also be handy for pairing with the AI-integrated Alexa voice.

Alexa subscription tiers

Amazon Echo First Gen

(Image credit: Future / Lance Ulanoff)

As we know, Alexa AI is likely to appear during Amazon's big Alexa event. However, we believe that the revamped voice assistant will offer limited free use before introducing a monthly subscription fee. Thankfully, though, this will likely not impact the classic Alexa we all know and love.

We've been aware that Amazon has been toying with the idea of a fee for its new Alexa voice, which could cost you between $5 and $10 a month. Considering that Amazon has fallen behind AI competitors such as ChatGPT, Google Gemini, and Apple Intelligence, charging a monthly fee makes sense from a business perspective. From a consumer perspective, however, we're still not entirely convinced it will be worth splurging on, given its numerous delays and reported inaccurate responses.

Updates for Fire TV, and maybe a new device

New Amazon Fire TV Search Experience

(Image credit: Amazon)

While its Alexa voice assistant will be the main focus, it's likely that Amazon will also speak about its Fire TV device range. Amazon's 2023 device event revealed features for its Fire TV devices, including an improved Alexa voice search function and AI screensavers. Following Amazon's Android TV update, we believe the company could introduce new Fire TV devices alongside updates to the abovementioned features during its event.

Mentions of new Fire TV hardware were spotted on one of Amazon's developer pages, which states: “Android 14-based Fire TV is based on API level 34. The following sections explain some of [the] important changes that you should consider when you build apps for Android 14-based Fire TV.” This gives a strong indication that new Fire TV devices will be among the star announcements at tomorrow's event.

This leak has come at an awfully convenient time with the Alexa event due to happen tomorrow, adding to our suspicions that Amazon could expand its Fire TV line. With the lack of mentions of specific hardware models, we're unable to pinpoint what exactly this will entail, but we'd expect it to be the announcement of a new smart TV or streaming stick.


What is Firefly: everything you need to know about Adobe’s safe AI image generator

Tue, 02/25/2025 - 09:47

Firefly is a set of generative AI tools developed by Adobe. Built into Creative Cloud, Firefly’s features are designed to supercharge your workflow, whether you’re generating images, editing photos or designing graphics.

What sets Firefly apart from many of the best AI image generators is that it was trained on licensed Adobe Stock and public domain images, which means it should be safer to use commercially.

New Firefly features are being added to Creative Cloud all the time. Read on to find out how Firefly can improve your creative process.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

Adobe Firefly

(Image credit: Future)

What is Adobe Firefly?

Firefly is a set of creative tools built on four generative AI models: Image, Vector, Design and Video (which is still in beta). Developed by Adobe, Firefly uses AI to give designers and digital artists more creative flexibility. Some of its features are available as standalone tools, such as Firefly’s web-based text-to-image generator. Others are built directly into Creative Cloud apps, such as Generative Fill in Photoshop.

Launched in March 2023, Adobe Firefly is improving all the time, with new tools regularly added to Creative Cloud apps. The latest update included the beta rollout of its Video model, which powers Generative Expand in Premiere Pro, as well as photographic image adjustments in the Firefly web app.

Firefly is regarded as one of the most ethical AI image generators. This is because its models were trained exclusively on public domain images and licensed Adobe Stock. This makes it a popular choice for commercial and professional users, as it’s less likely to result in copyright issues with generated content.

What can you use Adobe Firefly for?

Adobe Firefly’s full capabilities are too numerous to list here, but they can be broken down into two categories. First, you have the standalone Firefly web interface. A fully featured AI image generator, it can be used to create high-quality images from natural language prompts in a range of visual styles.

It offers powerful options not found on other platforms: you can choose between art and photo content types, provide reference images for style and composition, apply lighting, colour and camera angle effects, and refine specific details in the prompt itself.

Then there are the various Firefly tools built into other Creative Cloud apps. You’ve got the powerful Generative Fill and Generative Expand tools in Photoshop, which leverage AI to edit the contents of existing photos and expand their margins. You can also use generative text effects to stylize typography in Adobe Express, while Text to Vector Graphic allows you to easily generate vectors in Illustrator. The new Generative Expand feature in Premiere Pro can even add seconds on to video clips.

What can’t you use Adobe Firefly for?

Firefly is designed to complement the creative process, rather than replace it entirely. As capable as it is, the output of its various tools often needs further refinement or integration into a project before it’s ready to go. Its output also fares better with some subjects than others, and the text-to-image tool will quite often generate unrealistic or surreal results.

The model’s ethics also limit what it can be used to generate. Firefly is built to avoid copyright infringement. Because it’s trained on Adobe Stock and public domain images, there are restrictions on the source material. It won’t generate branded imagery or likenesses of real people, for example. It will also steer clear of offensive or harmful imagery.

How much does Adobe Firefly cost?

Adobe Firefly is available for free via the web interface. The free plan includes 25 generative credits per month and doesn’t require a Creative Cloud subscription. For more comprehensive access, you can take out one of Adobe’s paid Firefly plans.

Firefly Standard costs $9.99 / £9.98 / AU$16.49 per month and includes unlimited access to image and vector features. You get 2,000 generative credits per month, which can be used to create five-second videos and translate audio. Firefly Pro ($29.99 / £28.99 / AU$49.49 per month) ups these limits, with 7,000 generative credits.

You can also access Firefly’s features by taking out a Creative Cloud Single App ($9.99 / £9.98 / AU$16.49 per month) or All Apps ($59.99 / £56.98 / AU$96.99 per month) subscription. The former includes anywhere from 100 to 500 generative credits, depending on the app, while the latter gives you 1,000 to play with. For reference, one image generation usually uses one credit.

Where can you use Adobe Firefly?

Adobe Firefly’s text-to-image features can be accessed via the web platform at firefly.adobe.com. Several of its generative tools, including text effects, can also be used online through Adobe Express.

Other Firefly-powered features are only available through the relevant app. Generative Fill and Generative Expand, for example, require you to use Adobe Photoshop.

A laptop screen on a blue background showing Adobe's Firefly AI tools

(Image credit: Adobe)

Is Adobe Firefly any good?

Firefly’s effectiveness depends on what you’re using it for. In our review of Adobe Photoshop, for example, we praised the effectiveness of the Generative Fill tool. Powered by Firefly Image Model 3, we found it capable of generating realistic imagery with deeper control of detail and composition. We also encountered few uncanny results. It’s not perfect, though, producing surreal outcomes more than occasionally.

As a text-to-image generator, the web-based version of Firefly has one of the most comprehensive feature sets. In our hands-on experience, however, it doesn’t always pay exact attention to the details of your prompt. Complex prompts can confuse it, too. From our time using it, Firefly’s generative tool is better employed for design and graphics work than photorealism.

Use Adobe Firefly if...

You want a powerful AI image generator

Granular control of its web interface makes Firefly one of the most powerful AI image generators, with the ability to control camera angles, visual styles and more, as well as providing reference images for composition.

You want to stay on the right side of the law

Trained exclusively on licensed stock and public domain images, Adobe Firefly’s generative content is less likely to fall foul of copyright laws. This means you’re safer to use its output as part of commercial projects.

Don't use Adobe Firefly if...

You’re not a Creative Cloud subscriber

With features built into several Adobe apps, including Photoshop, Firefly complements existing tools with the power of generative AI. If you don’t use these programs, you won’t get the best out of it.

You mainly need photorealistic images

Firefly is a powerful text-to-image generator, but it’s best used for graphic and illustrative work. Complex photo prompts can often come out warped, with more believable results generated by models like Imagen 3.

Also consider
  • Luminar Neo is an intuitive, subscription-free photo editor with built-in AI tools. Simpler to use than Photoshop or Lightroom, it allows amateur photographers to save time when editing their images, with powerful removal tools. It doesn’t come close to the power and versatility offered by Adobe Firefly, though.
  • Leonardo is an AI image generator with pro-friendly features, including real-time editing and granular settings control. It has a range of generative tools, including one that lets you create AI images from sketches. It doesn’t have the cross-app depth of Firefly, but it’s a powerful creative option.

What is Gemini: everything you need to know about Google’s AI chatbot

Tue, 02/25/2025 - 09:27

Gemini is Google’s competitor in the race for AI supremacy. A rival to the likes of ChatGPT, it’s a chatbot underpinned by a powerful AI model.

Available online and through smartphone apps, Gemini can assist with everything from web searches to image generation. It also integrates into a number of Google cloud services for contextual assistance.

Gemini is free to use and improving all the time. With more to come in 2025, here’s what Gemini can do – and whether it’s worth paying for Gemini Advanced.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

Google Gemini app on iPhone image generation and Gemini Live

(Image credit: Google)

What is Google Gemini?

Gemini is the collective name for several components which together make up Google’s AI offering. All of these are underpinned by a large language model, which was released as Gemini in December 2023. The company’s chatbot was previously called Bard, but this was brought under the Gemini umbrella in February 2024. There’s also Gemini for Workspace, which leverages the AI model to enhance cloud-based productivity.

At its core, Gemini is a multimodal model which can process text, code, audio and image-based queries. There are several versions of this model. Which one you use depends on how you access Gemini: the web-based chatbot uses Gemini Flash, for example, while paid subscribers get access to the more powerful Gemini Ultra. There’s also Gemini Nano for on-device AI processing.

What can you use Google Gemini for?

For most users, Gemini is best seen as an alternative to ChatGPT. As an AI chatbot, it can provide reasoned, context-aware responses to natural language prompts. It’s able to answer questions, write code and summarize articles. It can also be used to generate images from simple descriptions using Google’s Imagen 3 text-to-image model.

Like ChatGPT, you can interact with Gemini in a conversational way, asking follow-up questions within the same thread. Because Gemini is connected to the web, you can also use it for enhanced search summaries, pulling in real-time information from online sources.

Gemini is also available on mobile: download the app for iOS or Android to interact with it on your smartphone. It will integrate with other Google apps, pulling in data from the likes of Gmail and Google Maps. The app is also where users can go Live with Gemini, allowing you to interact with the chatbot in real-time using your voice. For Android users, Gemini can also replace Google Assistant.

Recently, Google added the ability for all users to share documents with Gemini for analysis and feedback. Certain Workspace users can also access Deep Research, an in-depth reporting tool.

What can’t you use Google Gemini for?

Like most of the best AI chatbots, there are restrictions on the kind of content that Gemini can generate. It will try to steer clear of copyright infringement or offensive material, for example. There are also limits to the depth of some of its abilities. While it can generate code, for example, it isn’t a complete software development tool.

Gemini also suffers from the same fallibilities which befall other chatbots, namely that its answers are not always accurate. It’s been known to hallucinate, returning incorrect or invented information. As a result, it can’t be used as a definitive research tool.

How much does Google Gemini cost?

The standard version of Gemini is free to use online and through mobile apps. It currently runs on Google’s Gemini 2.0 model, with three versions available to select, depending on whether you need everyday help, multi-step reasoning or integration with YouTube, Maps and Search.

There is also a paid tier available, called Gemini Advanced. This is bundled in with the Google One AI Premium Plan, for $19.99 / £18.99 / AU$32.99 per month. It unlocks access to Google’s latest AI models, as well as new features and higher processing limits (allowing you to submit PDF documents of up to 1,500 pages). You also get native Gemini integration across a number of Google cloud-based services, including Gmail, Docs, Slides and Meet.

Where can you use Google Gemini?

Gemini can be accessed online using almost any browser, by pointing it to gemini.google.com. Head there to interact with Gemini as a web-based chatbot.

Gemini is also available as a smartphone app for iOS and Android. These apps have the same functionality as the web interface, plus the option to go Live with Gemini.

Man using Gemini Live on a phone.

(Image credit: Shutterstock/Rokas Tenys)

Is Google Gemini any good?

In our hands-on review of Gemini on an Android smartphone, we found it a genuinely useful alternative to Google Assistant. We noted some bugs, but many of those have been ironed out since, with plenty of new features added. We appreciated its integration with apps like Maps and Gmail, as well as how dialogue with the chatbot encouraged creativity, even if the faux humanity was grating.

In a more recent review of Gemini 2.0 Flash, we found the model faster, smarter and more accurate than before, making the chatbot a more compelling tool. We found its answers clear, succinct and creative.

There’s still room for improvement. Gemini isn’t immune from inaccuracies and reviews elsewhere have reported that the model can misunderstand the detail of what it’s being asked. Even so, Google’s AI tool has a lot going for it – particularly with the impressive real-time performance of Gemini Live.

Use Google Gemini if...

You want a free chatbot on your phone

Google Gemini is a capable AI chatbot that’s available for free on iOS and Android, where it integrates with other Google apps, like Gmail. Using it on your smartphone unlocks more features than the web-based version.

You want real-time spoken conversations

Gemini Live on mobile allows you to hold a spoken conversation with Google’s AI model, for a more natural, free-flowing interaction that leaves your hands free for something else.

Don't use Google Gemini if...

You need completely accurate information

Gemini does a good job of sourcing real-time data from the web, but it’s still liable to return inaccurate information or hallucinate facts. Accuracy is improving, but results still need to be cross-checked.

You need a fully formed AI tool

As capable as it is, Gemini is still very much a work in progress. New features and integrations are in the pipeline, but you need to accept certain limitations when using Google’s chatbot right now.

Also consider
  • ChatGPT is the most well-known AI chatbot – and for good reason. It has a full set of features to rival Google Gemini, including a voice mode and web search capabilities. Several reviews also find ChatGPT more accurate when it comes to sourcing information, although this depends on the context.
  • Apple Intelligence is a suite of AI tools available to users of compatible iPhone, iPad and Mac devices. Deeply integrated into Apple’s operating systems, the toolkit provides contextual assistance across apps like Mail and Photos. Rollout is ongoing and most processing takes place on device.

What is ChatGPT: everything you should know about the AI chatbot

Tue, 02/25/2025 - 08:33

With more than 400 million monthly users, ChatGPT is the most popular of all AI chatbots. Trained on huge amounts of data, it can process written prompts and generate contextual answers which feel like you’re chatting to a human in real time.

Based on a Large Language Model, the AI bot is evolving all the time. In its latest iteration, ChatGPT is capable of answering in-depth questions, helping with website code and even generating images.

New features are being added all of the time, so read on to find out what ChatGPT can do and why it’s worth using.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

What is ChatGPT?

When it launched in November 2022, ChatGPT signalled a new era for artificial intelligence. Developed by OpenAI, the AI chatbot became a hugely popular tool almost overnight. Much of the appeal of ChatGPT lies in its use of natural language. Built on Large Language Models, it’s able to understand human queries written as plain text prompts and generate conversational responses.

In March 2023, OpenAI announced GPT-4, the latest version of its language model. This iteration is multimodal, meaning it can process text, images and audio. Apps running on GPT-4, including ChatGPT, are also better able to process the context of queries and produce more accurate, relevant results.

The result is a chatbot that can be leveraged for a wide range of queries, with answers rendered quickly and accessibly.

What can you use ChatGPT for?

The full capabilities of ChatGPT are still being explored by its millions of users. Almost any query that can be phrased and answered using written words can be addressed – or at least attempted – by ChatGPT. Its remit can be summarized as language-based tasks, whether that language is English, a foreign language, or computer code.

ChatGPT can create computer code from natural language instructions or troubleshoot existing code. It can write a wedding speech for you or re-write your draft in a way that flows better. It can create personalized workout plans, generate business ideas and even act as your therapist.

The introduction of ChatGPT search lets users get a summary of answers to a specific query sourced from the web. Integration of OpenAI’s DALL-E 3 text-to-image model also means all ChatGPT users can ask the chatbot to generate images from prompts.

What can’t you use ChatGPT for?

As powerful as it is, ChatGPT still has limitations. Chief among them is fact-checking. Famously, the chatbot’s responses are not always accurate and it’s known to hallucinate facts. OpenAI says that ChatGPT “may be inaccurate, untruthful, and otherwise misleading at times.”

ChatGPT is also subject to a number of guidelines and ethical restrictions. The AI chatbot tries its best to avoid explicit, violent or discriminatory content. It won’t engage in political discussions, nor will it offer legal, medical or financial advice on an individual basis.

OpenAI also emphasizes that the generative chatbot cannot show empathy or engage with real emotions. Nor will it promote anything relating to self-harm.

How much does ChatGPT cost?

ChatGPT is available for free, but there are also two paid tiers for individuals, as well as Team and Enterprise plans for organizations.

For free, users get access to OpenAI’s GPT-4o mini model, plus limited access to GPT-4o and o3-mini. They can also use ChatGPT search to access results informed by real-time data from the web. Hitting the 'Reason' button on a query gives you limited access to the o1 reasoning model, and you also get a limited number of DALL-E image generations.

Plus costs $20 (about £16 / AU$30) per month and unlocks a number of additional features, including access to multiple reasoning models, ChatGPT’s advanced voice mode and limited access to Sora, OpenAI’s video generation model, as well as more images with DALL-E.

The Pro tier will set you back a much more significant $200 (about £165 / AU$325) per month. Designed for advanced users, it unlocks deeper capabilities for every tool in ChatGPT’s arsenal, including unlimited access to the latest reasoning models and extended Sora access for generating AI video.

The Team option is priced at $25 (about £19 / AU$38) per user per month and allows users to create and share custom GPTs within a workspace.

Where can you use ChatGPT?

ChatGPT can be accessed through its web interface at chatgpt.com using almost any browser. For a while, this was the only way to use the chatbot. However, you can now download official iOS and Android apps for free, as well as a desktop app for macOS and Windows.

The interface and features are broadly consistent, but the mobile apps come with the added benefit of being able to engage in a voice conversation with ChatGPT, by tapping the audio icon next to the text input field.

Is ChatGPT any good?

Based on our extensive hands-on experience with ChatGPT, it’s a powerful tool with a lot of uses. Its conversational interface makes it easy for almost anyone to interact with the chatbot, whether you’re asking it to summarize a report or generate an image. The quality of its responses often depends on the wording and context of the prompt, which can vary significantly.

The paid experience is faster and more accurate than the free tier, turning up better quality responses across a range of queries and topics. That said, it’s still vulnerable to factual inaccuracies and hallucinations, while data sourced from the web isn’t necessarily the most up-to-date. As a fact-checking tool, ChatGPT still can’t be relied upon.

As a shortcut for everyday queries or a way to turbocharge your workflow, though, ChatGPT has plenty of potential. Leveraged in the right way, it’s a very powerful tool.

Use ChatGPT if...

You want to use a capable chatbot

From writing content to generating website code, ChatGPT is a hugely capable tool that allows you to get a lot done simply by writing your queries in natural human language.

You want a lot of features for free

Recent updates mean ChatGPT’s free plan now includes access to ChatGPT search and image generation using OpenAI’s DALL-E 3 model, meaning you can get a lot done without a paid subscription.

Don't use ChatGPT if...

You need completely accurate information

Even with real-time web access enabled, ChatGPT is prone to responding with factual inaccuracies. Results need to be cross-referenced with reliable sources before they can be trusted.

You don’t want to pay for the best features

The free tier is fine for casual users, but if ChatGPT is built into your workflow, you’ll need to pay for a subscription to unlock the faster processing and greater reliability of the latest models.

Also consider
  • Gemini is Google’s alternative to ChatGPT. Previously known as Bard, it’s a Large Language Model that can do a lot of the same things as OpenAI’s chatbot, including answering questions, writing code and creating images.
  • Copilot is Microsoft’s take on the AI chatbot, aimed mainly at business users. Available as a website, an app and as a sidebar in the Edge browser, it’s billed as an AI companion that offers contextual help with everyday tasks.

What is Midjourney: everything you need to know about the AI image generator

Tue, 02/25/2025 - 05:24

Midjourney is one of the most powerful AI image generators available today. A paid tool with a range of subscription plans, it allows you to create accurate and artistic visuals based on simple text prompts.

Midjourney has to be accessed through Discord. This comes with a learning curve, but has the benefit of an active community element, which encourages collaboration.

It’s capable of rapidly generating images and running several jobs at the same time, making it a strong choice for power users.

This article was correct as of February 2025. AI tools are updated regularly and it is possible that some features have changed since this article was written. Some features may also only be available in certain countries.

What is Midjourney?

Midjourney is one of the best AI image generators. Known for its distinctive stylization and artistic looks, the paid tool can turn simple text prompts into high-quality digital renders. Developed by an independent research lab, Midjourney has evolved significantly since its first beta version launched in 2022, generating consistently high-quality results.

Instead of a standalone web platform or app, Midjourney is accessed through a dedicated Discord server. This gives it a unique social component, allowing you to explore and interact with artwork generated by the Midjourney community. This collaborative aspect is one reason why the tool is popular with artists and designers.

Its Discord-based interface is less accessible than some tools, but the pay-off is more granular editing control of image variations once you’ve mastered it.

What can you use Midjourney for?

Midjourney can be used to generate artistic images based on just about anything your mind’s eye can conjure. Enter a text prompt and, after a short wait, you’ll be rewarded with a rich visual representation. It can produce everything from photorealistic human hands to digital cartoons to watercolor landscapes.

This makes Midjourney a win for quickly visualizing ideas, whether that’s concept art for a video game or textures for a mood board. It’s also useful for people who might lack the digital design skills required to create the image assets they need.

All Midjourney plans also feature useful editing tools, including the ability to selectively refine images by painting over certain areas, as well as the option to generate four variations of a given image with strong or subtle differences.

What can’t you use Midjourney for?

Although Midjourney is a powerful AI image generator, it still has limitations. Its editing toolkit is generous, but can’t match the fully granular layer editing offered by traditional graphic design software such as Adobe Photoshop.

Midjourney is also subject to limitations based on your subscription level (see below). With a basic plan ($10 a month / $96 a year), for example, you’re not entitled to unlimited generations.

Like any AI image generator, Midjourney should not be used to infringe the intellectual property rights of other creators. Under Midjourney’s terms, responsibility for this falls on the user, although Midjourney itself has come under fire for potentially training the model on copyrighted images.

How much does Midjourney cost?

Midjourney is a paid tool with four subscription tiers. All of its plans are available on a monthly or an annual basis. Subscribing annually saves you 20%.

  • Basic Plan $10 per month / $96 per year (about £8 / £76 and AU$16 / AU$151)
  • Standard Plan $30 per month / $288 per year (about £24 / £228 and AU$47 / AU$455)
  • Pro Plan $60 per month / $576 per year (about £48 / £456 and AU$95 / AU$910)
  • Mega Plan $120 per month / $1,152 per year (about £95 / £912 and AU$189 / AU$1,820)
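The quoted 20% annual saving is consistent across all four tiers; a quick arithmetic sanity check using the US-dollar prices from the list above:

```python
# Sanity check on the plan prices listed above: each annual price
# should equal 12 monthly payments minus the advertised 20%.
monthly = {"Basic": 10, "Standard": 30, "Pro": 60, "Mega": 120}    # USD/month
annual = {"Basic": 96, "Standard": 288, "Pro": 576, "Mega": 1152}  # USD/year

for plan, price in monthly.items():
    assert price * 12 * 0.80 == annual[plan], plan  # e.g. Basic: 120 * 0.8 = 96
print("All four plans reflect a 20% annual discount.")
```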

The differences between each plan relate mainly to how fast you can generate images and how many jobs you can have running at a single time.

With the basic plan, for example, you’re entitled to 200 minutes of “Fast GPU Time” per month, with support for up to three fast jobs at once. Compare that to the Pro Plan, which gives you 30 hours of Fast GPU Time and 12 fast jobs simultaneously, plus unlimited generations using the slower “Relax Mode”.

You can find the full entitlements of each plan on Midjourney’s pricing page.

Where can you use Midjourney?

Midjourney is accessed through Discord, a web-based social platform. Once you’re on the server, you interact with the text-to-image model by messaging the Midjourney Bot.
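As an illustration of what messaging the bot looks like, a generation is typically kicked off with the /imagine slash command, optionally followed by parameter flags such as --ar for aspect ratio (the prompt text here is my own example):

```
/imagine prompt: a watercolor landscape of a mountain lake at dawn --ar 16:9
```

The bot replies in-channel with a grid of four candidate images, which you can then upscale or generate variations of.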

Discord can also be downloaded as an app for Windows, Mac, iOS and Android, allowing you to message the Midjourney Bot directly.

Is Midjourney any good?

In our full review of Midjourney, we rated it as a superb option for generating artistic images using AI. We praised the “stunning” quality of its image output, as well as the distinctive style and flair of results. In particular, we noted how well it handles lighting and textures.

We also welcomed the array of editing options included with every Discord plan, including the ability to upscale images, create variations and tweak prompts with custom zooms.

The main drawback is the interface: for those unfamiliar with Discord, there’s a definite learning curve. Still, ongoing refinements mean that Midjourney is much slicker and easier to use than it used to be. The community aspect is also unique in creating a collaborative user environment which we found genuinely inspiring in our review.

Use Midjourney if...

You want a fully featured AI image generator

Beyond text-based prompts and a deep understanding of different artistic styles, Midjourney includes a range of tools designed to unlock your creativity, including upscaling, two-strength variations and selective editing.

You want to collaborate with digital artists

Because it’s built around Discord, community is at the core of Midjourney. Hit the explore tab and you’ll find a feed of images generated by other users, encouraging inspiration and collaboration with Midjourney’s artistic flair.

Don't use Midjourney if...

You want a simple AI image generator

Midjourney’s interface is more intuitive than it used to be, but there is still a learning curve associated with the Discord-based platform. Getting to grips with its more advanced controls takes a little time.

You want a free AI image generator

Midjourney is capable of producing beautifully artistic results, but you’ll pay for the privilege. After a free trial, subscriptions range from $10 to $120 per month for the full set of features. Others are available for less – or even free.

Also consider
  • DALL-E 3 is an AI image generator. Developed by OpenAI, it can be accessed through ChatGPT or Microsoft Designer. Its chatbot interface is easier to use than Midjourney’s, and it still supports selective editing and complex prompts.
  • Leonardo is a feature-packed AI image generator with a versatile creative toolkit, including real-time editing and upscaling. It’s fast, too, making it a good option for professionals, provided you don’t mind an interface that’s slightly clunky in places.

I tried ChatGPT's Dall-E 3 image generator and these 5 tips will help you get the most from your AI creations

Mon, 02/24/2025 - 21:00

If you use ChatGPT much, you've likely experimented with DALL-E 3, OpenAI’s latest iteration of an AI image generator. DALL-E 3 can be shockingly good at bringing your ideas to life but sometimes shockingly bad at understanding certain details or just shocking in what it chooses to emphasize. Still, if you want to see a specific scene move from your head to your screen, DALL-E 3 is usually pretty helpful; it can even render hands convincingly.

But here’s the thing: DALL-E 3 is still an AI model, not a mind reader. If you want images that actually look like what you’re imagining, you need to learn how to speak its language.

After some trial and error (and a few unintentional horrors), I've become fairly adept at speaking its language. So here are five key tips that will help you do the same.

Polish in HD

DALL-E 3 has a quality mode called ‘hd,’ which makes images look sharper and more detailed. Think of it like adding a high-end camera lens to your AI-generated art – textures look richer, lighting is more refined, and things like fabric, fur, and reflections have more depth.

Check out how it looks in this prompt: "A rendering of a close-up shot of a butterfly resting on a sunflower. Quality: hd."

(Image credit: Image created with OpenAI's DALL-E 3)

Play with aspect ratio

Not every image has to be a square. DALL-E 3 lets you set the aspect ratio, which may seem minor but can be huge if you want to make something look more cinematic or like a portrait.

Square is fine, but why limit yourself when you can make something epic? This is especially useful for social media graphics and posters, like the one below, which had the prompt: "A vertical poster of a vintage travel advertisement for Paris, size 1024x1792 pixels."
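For anyone who wants to script these tips, both the 'hd' quality mode and the portrait size above correspond to parameters of OpenAI's Images API. A minimal sketch, assuming the official `openai` Python package; the `build_image_request` helper is my own illustration:

```python
# Sketch only: the 'hd' quality mode and portrait size from the tips
# above map onto parameters of OpenAI's Images API. The helper just
# assembles the request; the commented lines show the real call, which
# needs the `openai` package and an OPENAI_API_KEY to run.

def build_image_request(prompt: str, portrait: bool = False, hd: bool = False) -> dict:
    """Assemble keyword arguments for client.images.generate()."""
    return {
        "model": "dall-e-3",
        "prompt": prompt,
        "size": "1024x1792" if portrait else "1024x1024",  # portrait vs square
        "quality": "hd" if hd else "standard",
        "n": 1,  # DALL-E 3 produces one image per request
    }

if __name__ == "__main__":
    params = build_image_request(
        "A vertical poster of a vintage travel advertisement for Paris",
        portrait=True, hd=True,
    )
    # from openai import OpenAI            # pip install openai
    # image = OpenAI().images.generate(**params)
    # print(image.data[0].url)
    print(params["size"], params["quality"])
```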

(Image credit: Image created with OpenAI's DALL-E 3)

Think like a film director

To get an image that evokes emotion, it sometimes helps to think like a real-world photographer. Consider camera angles and composition techniques; look them up if necessary. They can dramatically change how an image looks.

Instead of a flat, dead-on image, you can request angles like close-up, bird’s-eye view, or over-the-shoulder. The same goes for composition styles and terms like ‘symmetrical composition’ or ‘depth of field.’

That's how you can get the following image from this prompt: "A dramatic over-the-shoulder shot of a lone cowboy standing on a rocky cliff, gazing at the vast desert landscape below. The sun sets in the distance, casting long shadows across the canyon. The cowboy's silhouette is sharp against the golden sky, evoking a cinematic Western feel."

(Image credit: Image created with OpenAI's DALL-E 3)

Iterate, iterate, iterate

One of DALL-E 3’s lesser-known but highly effective tricks is telling it what not to include, which helps avoid unwanted elements in your image. That might mean excluding specific colors, objects, or styles, or shaping the mood by describing what you don't want the image to feel like.

That's how I got the image below, using the prompt: "A peaceful park in autumn with a young woman sitting on a wooden bench, reading a book. Golden leaves cover the ground, and a soft breeze rustles the trees. No other people, no litter, just a quiet, serene moment in nature."

(Image credit: Image created with OpenAI's DALL-E 3)

Be overly specific

Think of DALL-E 3 as a very literal genie: it gives you exactly what you ask for, no more, no less. So if you type in “a dog,” don’t be surprised when it spits out a random dog of indeterminate breed, vibe, or moral alignment. The more details you include – like breed, color, setting, mood, or even art style – the better the results.

As an example, you might start with “A wizard casting a spell,” but you'd be better off submitting: “An elderly wizard with a long, braided white beard, dressed in emerald-green robes embroidered with gold runes, conjuring a swirling vortex of blue lightning from his fingertips in a stormy mountain landscape.” You can see both below.

(Image credit: Image created with OpenAI's DALL-E 3)

(Image credit: Image created with OpenAI's DALL-E 3)

Gabby Petito murder documentary sparks viewer backlash after it uses fake AI voiceover

Mon, 02/24/2025 - 18:00
  • Netflix’s American Murder: Gabby Petito has upset some people for using an AI-generated voice to narrate Petito’s journal entries.
  • Despite permission from Petito’s family, critics argue the AI voice raises ethical concerns.
  • This isn't the first time such a debate has erupted, and it likely won't be the last as the technology improves.

Netflix’s latest true-crime docuseries, American Murder: Gabby Petito, has stirred up a heated debate over how to deploy AI to mimic the voices of people who have passed away. The filmmakers employed AI to recreate Petito's voice and have it narrate excerpts from her personal writings, which has reportedly made many viewers feel uncomfortable and raised ethical concerns about using AI to give voice to the deceased.

The three-part series chronicles the 2021 murder of 22-year-old Petito at the hands of her fiancé, Brian Laundrie. It pieces together her final months through interviews, personal videos, and social media posts, capturing how the tragedy unfolded in real time on the internet. True crime aficionados famously dissected every frame of Petito’s travel vlogs before authorities found her remains in Wyoming.

At the start of the series, a disclaimer appears: “Gabby’s journal entries and text messages are brought to life in this series in her own voice, using voice recreation technology.” That means the voice narrating parts of the documentary isn’t actually Petito’s but a synthetic recreation made with an AI model. Netflix has said the filmmakers received permission from Petito’s family to do so. That hasn’t stopped some people from vocalizing how eerie the AI-generated voice feels. Social media content creators have racked up hundreds of thousands of views discussing it.

AI ghosts

This isn't the first controversy over AI-generated voices. Roadrunner: A Film About Anthony Bourdain faced similar criticism when its director revealed that parts of the documentary featured AI-generated narration of Bourdain’s own words. That movie didn't indicate which bits were narrated by the AI or by Bourdain, which led many to feel that the technique was deceptive.

Filmmaker Michael Gasparro defended the decision in an interview with Us Weekly, saying the team wanted to tell the story as much “through Gabby’s voice as possible.” They had access to a wealth of her journals, notes, and online posts and thought AI narration would bring them to life in a more powerful way. “At the end of the day, it’s her story.”

Technology has always shaped the way we tell stories, but AI presents a new challenge, especially when it comes to memorializing people who can no longer speak for themselves. Robert Downey Jr. has vowed that AI will never replicate him on screen, while James Earl Jones secured a deal with Disney before passing away, allowing them to use his voice for Darth Vader under certain circumstances.

Meanwhile, ElevenLabs has inked deals with the estates of James Dean, Burt Reynolds, Judy Garland, and Sir Laurence Olivier to let it add AI versions of their voices to its Reader app. As deepfake technology and voice cloning become more sophisticated, filmmakers and media companies will have to reckon with how (and if) these tools should be used to tell real-life stories.
