Feed aggregator
Google Proposes Fix to Solve Search Monopoly
The search giant’s proposals included allowing flexibility for companies and consumers in choosing a search engine.
Google Antitrust Case: Why Chrome May Be Sold and What Happens Next
The federal judge who ruled Google was a monopolist in search is weighing his options to fix the monopoly. Here’s what happens now.
12 Days of OpenAI ends with a new model for the new year
- OpenAI announced upcoming o3 and o3-mini AI models.
- The new models are enhanced "reasoning" AI models that build on the o1 and o1-mini models released this year.
- Both models handily outperform existing AI models and will roll out in the next few months.
The final day of the 12 Days of OpenAI brought back OpenAI CEO Sam Altman to show off a brand-new set of AI models coming in the new year. The o3 and o3-mini models are enhanced versions of the relatively new o1 and o1-mini models. They're designed to think before they speak, reasoning out their answers. The mini version is smaller and aimed more at carrying out a limited set of specific tasks, but with the same approach.
OpenAI is calling it a big step toward artificial general intelligence (AGI), which is a pretty bold claim for what is, in some ways, a mild improvement to an already powerful model. You might have noticed there's a number missing between the current o1 and the upcoming o3 model. According to Altman, that's because OpenAI wants to avoid any confusion with British telecom company O2.
So, what makes o3 special? Unlike regular AI models that spit out answers quickly, o3 takes a beat to reason things out. This “private chain of thought” lets the model fact-check itself before responding, which helps it avoid some of the classic AI pitfalls, like confidently spewing out wrong answers. This extra thinking time can make o3 slower, even if only a little bit, but the payoff is better accuracy, especially in areas like math, science, and coding.
One great aspect of the new models is that you can adjust that extra thinking time manually. If you’re in a hurry, you can set it to “low compute” for quick responses. But if you want top-notch reasoning, crank it up to “high compute” and give it a little more time to mull things over. In tests, o3 has easily outstripped its predecessor.
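To make that adjustable thinking time concrete, here's a minimal sketch of how it might look from a developer's side, assuming OpenAI exposes it through its existing Python client with a reasoning-effort setting and an "o3-mini" model name; neither was publicly available at the time of writing, so treat the names as placeholders rather than confirmed API details.

```python
# Hypothetical sketch: trading speed for accuracy with a reasoning model.
# ASSUMPTIONS: the OpenAI Python client, an "o3-mini" model name, and a
# reasoning_effort parameter; none of these are confirmed by the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, effort: str = "low") -> str:
    """Ask the model a question, dialing its thinking time up or down."""
    response = client.chat.completions.create(
        model="o3-mini",           # placeholder model name
        reasoning_effort=effort,   # "low" for quick replies, "high" for harder problems
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# Quick answer for an easy query, more deliberation for a hard one.
print(ask("What is 17 * 24?", effort="low"))
print(ask("Prove that the square root of 2 is irrational.", effort="high"))
```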
This is not quite AGI; o3 can't take over for humans in every way. It also doesn't reach OpenAI's definition of AGI, which describes models that outperform humans at most economically valuable work. Still, should OpenAI reach that goal, things get interesting for its partnership with Microsoft, since that would end OpenAI's obligation to give Microsoft exclusive access to its most advanced AI models.
New year, new models
Right now, o3 and its mini counterpart aren’t available to everyone. OpenAI is giving safety researchers a sneak peek via Copilot Labs, and the rest of us can expect the o3-mini model to drop in late January, with the full o3 following soon after. It’s a careful, measured rollout, which makes sense given the kind of power and complexity we’re talking about here.
Still, o3 gives us a glimpse of where things are headed: AI that doesn’t just generate content but actually thinks through problems. Whether it gets us to AGI or not, it’s clear that smarter, reasoning-driven AI is the next frontier. For now, we’ll just have to wait and see if o3 lives up to the hype or if this last gift from OpenAI is just a disguised lump of coal.
OpenAI Unveils o3 System That Reasons Through Math, Science Problems
The artificial intelligence start-up said the new system, OpenAI o3, outperformed leading A.I. technologies on tests that rate skills in math, science, coding and logic.
Windows 11 24H2 strikes a sour note as audio bug hits the update, leaving some PCs silent
- Windows 11 24H2 has a new bug that breaks audio output
- It's caused by Dirac Audio software, and a fix is being worked on
- Microsoft has blocked the update for PCs running Dirac
The latest big update for Windows 11, version 24H2, has run into yet another problem, namely an issue with audio output - or lack of it.
This is a bug that breaks the sound output from affected PCs, so you’ll hear nothing through built-in speakers, or Bluetooth speakers, or headsets - which is a pretty nasty development.
Microsoft has confirmed the glitch under its ‘known issues’ list in the release health dashboard for Windows 11 24H2, along with another recently identified problem with the Auto HDR feature which is causing colors to be displayed incorrectly in games.
The software giant explains that the sound bug is related to the Dirac Audio software (and its cridspapo.dll file), which is designed to make your audio clearer. Microsoft informs us that the problem has hit a “limited set of devices from one manufacturer,” but doesn’t tell us which vendor that is, sadly.
Whatever the case, to deal with the bug, Microsoft has put a temporary update block in place, preventing the installation of the 24H2 update. This policy is what Microsoft calls a “compatibility safeguard hold,” which is a way to ensure that the update isn’t delivered to devices that are going to run into trouble.
The current state of play with this audio glitch
Right now, there’s no fix for this issue, so if you’ve already upgraded to 24H2 and are suffering from a silent PC all of a sudden, there’s not much you can do.
The good news is that Microsoft is working directly with Dirac to release a new version of its audio software to resolve the problem. When Dirac makes the new driver available, Microsoft will pipe it to PCs via Windows Update, and with the issue resolved, the upgrade block will be lifted - and those with Dirac Audio installed will be able to grab the 24H2 update.
The bug only affects version 24H2, so if you’re using an earlier release like Windows 11 23H2, you should be okay. You can read more about this audio glitch in Microsoft’s official documentation.
Interestingly, this isn’t the only audio-related issue we’ve seen with Windows 11 24H2. Another bug that cropped up causes some PCs to play sound at maximum volume without warning - so it’s the polar opposite of this new glitch - and Microsoft is still trying to implement a solution for that, as well.
I think that Windows 11 is definitely having a moment with the 24H2 update, running into a whole host of bugs, but it’s not like we haven’t seen this before - Windows 10 has suffered a multitude of issues at various points in its existence.
However, it won’t be long before Windows 10 reaches its End of Life - that happens next October, in fact - and Microsoft will have to continue to improve and hone Windows 11 to convince more people to switch over. With any luck, this bad run of bugs for Windows 11 will end sooner rather than later.
YOU MIGHT ALSO LIKE...
- A Chrome-killing browser will be OpenAI's next big shot at Google, according to a new report
- Microsoft lays out reasons Windows 10 gamers should upgrade to Windows 11, but I can pick a few holes in these arguments
- Windows 11’s Start menu recommendations are being improved – but I’m still not impressed, Microsoft
Amazon Has Overhauled Its Drone Delivery. Will the Public Welcome It?
A recent visit to Amazon’s overhauled drone delivery program in Arizona left me impressed by the drones, but skeptical that the public will welcome them.
Is Amazon’s Drone Delivery Finally Ready for Prime Time?
We flew to Arizona to test the recently debuted service ourselves.
How Your Car Might Be Making Roads Safer
Researchers say data from long-haul trucks, and cars made by General Motors, is critical for addressing traffic congestion and road safety. Data privacy experts have their concerns.
This sleek new AI device will transcribe and analyze your conversations for way less than its rivals
- The Pocket is the latest standalone AI-powered gadget to be revealed.
- For $79, it promises to record, transcribe, and organize conversations.
The hype around AI-powered hardware at the beginning of the year has mostly faded, as customers proved reluctant to pay for the Humane AI Pin, Plaud.AI NotePin, or Rabbit R1, regardless of their many AI abilities. A new device called Pocket is approaching the market from a different angle, though, with a compact design and a far lower price point.
Created by Open Vision Engineering, Pocket promises to record, transcribe, and organize conversations as an affordable companion for professionals or anyone who wants to document their day. The $79 (about £79) device can be ordered now, with shipments expected in early 2025, and it links with a companion app for Android and iOS.
The device itself can magnetically attach to the back of smartphones and is activated with a button to capture both live conversations and phone calls, encrypting the recordings. Once recorded, Pocket transcribes the conversations and distinguishes between multiple speakers in the document.
The AI also analyzes the interactions with its Conversation Map feature. This tool breaks down the flow of discussion, helping you see how ideas developed, who contributed, and where the conversation went off on that inevitable tangent. Pair this with the thousands of customizable templates and you have a flexible way of organizing your thoughts.
Pocket price plan
Pocket comes with 200 free minutes of recording per month and then requires users to purchase credits. Even so, it comes off as far more budget-friendly than its competitors. The Plaud NotePin, which clips to your clothes, is $169 and provides only 100 more minutes a month compared to Pocket, though there's a yearly $79 Pro Plan with 1,200 minutes per month and other features.
Then there’s the Rabbit R1, whose bright orange box comes at $199 and is also designed for web searches and app controls. Last, the $699 Humane AI Pin comes with voice commands and projects information onto your hand. These devices all bring different flavors of AI assistance and a lot of extra power, but that may not be what people want from AI hardware.
Pocket keeps things simple by comparison. Instead of trying to be a wearable wonder or a flashy lifestyle gadget, Pocket focuses purely on recording, transcribing, and organizing conversations. Whether this simplicity will help Pocket carve out a niche or get lost in the shuffle remains to be seen, but for those who just need a no-fuss way to keep track of conversations, Pocket might be the perfect fit.
ChatGPT's Mac app gets a glowup with new coding and notetaking features
- OpenAI has introduced new features for the ChatGPT macOS app.
- Advanced Voice Mode enables users to engage with ChatGPT through voice commands.
- ChatGPT's "Working with Apps" feature allows it to interact directly with coding and notetaking apps.
For Day 11 of the 12 Days of OpenAI, the company moved from the landline phone to your computer, specifically macOS. OpenAI CPO Kevin Weil demonstrated how Mac owners using the ChatGPT desktop app will see it floating on their screen like a friendly digital sidekick.
There's no need to even type requests to it, thanks to the inclusion of Advanced Voice Mode on the desktop app now. You can just speak aloud, and ChatGPT will fulfill your requests even when you're doing other things. Instead of switching between tabs or opening a browser, you can call ChatGPT whenever needed to draft an email, brainstorm ideas, or fix some code.
Speaking of coding, the new “Working with Apps” feature lets ChatGPT work across many of the apps on your computer, with coding tools as the headline use case. You can give it access to apps on your Mac, and it’ll peek inside, understand what’s going on, and lend a helping hand.
Say you’re using a code editor like Warp and staring at a long, confusing list of code. Instead of scrolling endlessly, you can ask ChatGPT to analyze what’s on the screen, and it will offer suggestions, explanations, or even write new code snippets. OpenAI showcased this by asking ChatGPT to write some code directly into Xcode, Apple’s app development tool.
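The underlying pattern will be familiar from OpenAI's public API: grab the relevant text (here, whatever is in your editor) and hand it to a model with an instruction. The rough sketch below expresses that idea through the standard Python client with a made-up code snippet; it is not how the desktop app itself is implemented, just an illustration of the same request.

```python
# Rough illustration of "send my code to the model and ask for help".
# Uses the public OpenAI Python client; this is NOT the desktop app's
# internal mechanism, only the same idea expressed via the API.
from openai import OpenAI

client = OpenAI()

code_from_editor = """
def average(values):
    return sum(values) / len(values)   # crashes on an empty list
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works for this sketch
    messages=[
        {"role": "system", "content": "You are a concise code reviewer."},
        {"role": "user", "content": f"Explain this code and suggest a fix:\n{code_from_editor}"},
    ],
)
print(response.choices[0].message.content)
```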
Notes for the holidays
The apps ChatGPT works with aren't limited to coding services either. The AI assistant will write in English and programming languages right into apps like Apple Notes, Quip, and Notion. If you’re planning a trip and using Notion to jot down ideas, you can ask ChatGPT to help flesh them out, including citing sources from the internet.
If you're a Mac user, the new features are all available. For Windows users, an update to the existing app is coming soon, though OpenAI hasn’t given a specific timeframe.
These aren't the dramatic announcements of some of the previous days, but they showcase OpenAI's vision of ChatGPT spreading beyond the chat window. The AI could become a much more active collaborator, capable of blending multiple apps' powers into one convenient package. This is what OpenAI calls the “agentic” approach to AI, where ChatGPT performs tasks on your behalf, taking some of the workload off your shoulders.
One day to go; let's see if OpenAI can close out with as much excitement as it began. It has to be better than the last verse of the song. Nobody wants to take 12 drummers drumming home, but a brand new AI model might be a nice stocking stuffer.
Elon Musk Flexes His Political Strength as Government Shutdown Looms
The world’s richest man led the charge to kill a bipartisan spending deal, in part by promoting false and misleading claims about it.
Nvidia’s Global Chips Sales Could Collide With US-China Tensions
The chipmaker expects more than $10 billion in foreign sales this year, but the Biden administration is advancing rules that could curb that growth.
Is the Tech Industry Nearing an A.I. Slowdown?
Companies like OpenAI and Google are running out of the data used to train artificial intelligence systems. Can new methods continue years of rapid progress?
Gamers beware: Windows 11 24H2 update could wreck your colors and crash your games
- Windows 11 24H2 Auto HDR bug causes colors to be displayed wrongly
- There are also problems with games crashing
- Microsoft has promised a fix is coming soon
Windows 11’s 24H2 update has another bug that’s affecting PC gamers, and others besides, with the glitch causing colors to be displayed incorrectly.
Microsoft has confirmed the problem in its release health status dashboard, informing us that the bug is happening to those who’ve enabled the Auto HDR feature.
Windows Latest reports that issues with the 24H2 update aren’t just affecting games but also colors in general on the desktop, which may be rendered wrongly until you go to the Settings app in Windows 11 and switch off the ‘Automatically manage color for apps’ option.
Microsoft doesn’t mention bugs pertaining to anything outside of PC games when using Auto HDR, though.
Auto HDR is a Windows 11 feature that, ironically, is designed to enhance your gaming visuals automatically. If you turn on Auto HDR, it’ll apply HDR effects to an SDR game, meaning that game will appear more vibrant and immersive on an HDR monitor.
Games that support HDR natively will deliver a better visual experience, of course, but Auto HDR is much better than playing in SDR - unless, as is the case with this bug, it completely messes up your colors.
This isn’t just about wonky colors - but also games crashing
In the same support document, Microsoft explains that not only does the Auto HDR bug cause colors to appear incorrectly in games, but it could cause some games to crash.
Windows Latest describes its own experience of the problem and references a Reddit thread where user Rachidramone describes multiple games freezing or crashing entirely (including popular titles like Call of Duty, Assassin’s Creed, and Far Cry).
Microsoft’s recommendation to remedy the issue is to either turn off Auto HDR in Settings, or to avoid using Windows 11 24H2, and stick with 23H2 instead.
Furthermore, Microsoft has enacted a ‘compatibility hold’ for PCs that have Auto HDR enabled, which means that these devices won’t be offered the 24H2 update. When the issue is fixed, 24H2 will then be rolled out to those PCs again.
Microsoft also warned against bypassing its upgrade block, and manually installing 24H2 (using, for example, the media creation tool), if you use Auto HDR at all.
For those who have already installed the 24H2 update, Microsoft recommends turning off Auto HDR to get things back to normal. You can do this by heading to Settings > System > Display, and then selecting Graphics. Under the ‘Default Settings’ panel you’ll see the toggle for Auto HDR and you just need to turn this off to disable it for all games, which is what I’d recommend. (However, this can also be done on a per-game basis via the ‘Custom settings for applications’ panel, should you wish).
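If you'd rather check from a script whether any Auto HDR preferences have been set, rather than clicking through Settings, a quick sketch like the one below can help. It assumes, and this is an assumption rather than documented behavior, that Windows stores these per-app and global graphics preferences as string values under the UserGpuPreferences registry key; treat it as read-only inspection, not a supported interface.

```python
# Sketch: list GPU preference strings that mention Auto HDR.
# ASSUMPTION: Windows keeps these as string values under the
# UserGpuPreferences key; the exact format is not officially documented,
# so this only reads and prints what it finds.
import winreg

KEY_PATH = r"Software\Microsoft\DirectX\UserGpuPreferences"

try:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values to enumerate
            if "AutoHDR" in str(value):
                print(f"{name}: {value}")
            index += 1
except FileNotFoundError:
    print("No GPU preference key found; these settings may never have been changed.")
```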
Microsoft has said it’s working to fix the bug now, and will provide more information when it’s available. According to Windows Latest, the cure should be rolled out in the near future.
Windows 11 24H2 is clearly still finding its feet, especially when it comes to gaming. A bunch of bugs in 24H2 have hit PC gamers, unfortunately, including issues with anti-cheat systems, random crashes of one sort or another, and some serious glitches with Ubisoft games in particular.
If you’re not looking to deal with unexpected bugs and blips, it’s probably best to be a little cautious when it comes to updating to 24H2, and maybe hold off on upgrading until it becomes more stable and predictable. Hopefully, it won’t be too long before that happens.
YOU MIGHT ALSO LIKE...
- Microsoft lays out reasons Windows 10 gamers should upgrade to Windows 11, but I can pick a few holes in these arguments
- Windows 11’s new webcam settings will make adjusting resolution a breeze - no extra software needed
- Microsoft continues to mess up Windows 11 Recall, failing to provide fix for weird bug that breaks the feature
Microsoft lays out reasons Windows 10 gamers should upgrade to Windows 11, but I can pick a few holes in these arguments
- Microsoft blog post outlines the strengths of Windows 11 for gamers
- They include the Auto HDR mode, DirectStorage tech, and more
- Whether you benefit from some features will depend on your PC setup
Microsoft is keen to push Windows 10 users to upgrade to Windows 11, and the latest part of that drive is persuading PC gamers that they need to make the leap to its newest OS.
Neowin spotted a blog post from Microsoft on how to ‘Elevate your PC gaming experience with Windows 11 this holiday season’ outlining all the perks gamers get with the operating system. (Assuming you can upgrade from Windows 10 – not every PC will meet Windows 11’s more stringent system requirements).
The biggest carrots to make the move to Windows 11, as Microsoft sees it, include the Auto HDR feature, which gives SDR games an HDR makeover (for those with a supporting monitor).
Then there’s also DirectStorage, which turbocharges loading times considerably (and in-game frame rates, too) for those with NVMe SSDs (the caveat being that the game must be coded to support this tech).
Microsoft’s third highlight is Compact Mode for Game Bar, which allows the bar to be downsized to make it more usable on small displays, such as the screens on gaming handhelds like the Asus ROG Ally X.
Other major features that have the spotlight shone on them are Dynamic Lighting, which gives you a central hub for controlling all devices with RGB lights, and Windows 11’s optimizations for running games in windowed mode. The latter smooths over lag and screen tearing issues that you might otherwise suffer when playing a game in a window rather than full-screen.
Further, more minor benefits are listed too, including the HDR Calibration app (does what it says on the colorful tin), color filters for colorblind players, and Automatic Super Resolution for upscaling tricks – but that last one is for Arm-based Copilot+ PCs only as it uses the beefy NPU on those machines.
Analysis: A better gaming life on Windows 11?
What’s the reality of these Windows 11 features – do they really make life for PC gamers better? Well, yes, they do, but there are considerations Microsoft doesn’t mention here.
Fair play on Auto HDR, which is a great feature for those with HDR monitors, as it really does elevate the visual quality of SDR games (those that don’t offer native HDR support). You must have an HDR display, though, of course.
DirectStorage is also an excellent feature, but again, there’s a hardware requirement, which is an NVMe SSD. The bigger catch with this game-speeding tech, though, is that games must be coded to support it, and not many titles do so (approaching 20 games or so). I should also note that DirectStorage works in Windows 10; it simply works better and speeds things up more in Windows 11.
Compact Mode for Game Bar is only useful for gaming handhelds, as already observed, and the rest of the tricks Microsoft brings our attention to in the blog post are helpful but more bonus trimmings than anything meaty.
Anecdotally, Windows 11 doesn’t run games appreciably faster than Windows 10 in the main, but there may be outliers, and of course, DirectStorage will help very much in supported titles. Auto HDR is definitely nice for those with an appropriate monitor, but is all this a compelling argument for a Windows 11 upgrade?
Not hugely, in my opinion, though it depends very much on your PC setup and whether you fall into any of the above categories. That said, upgrading to Windows 11 doesn’t really have any downsides, either – although you might want to stay off the 24H2 update until Microsoft fixes some of the fair few problems that have hit PC gamers who’ve upgraded to the latest version.
One final reason to upgrade that Microsoft doesn’t mention: if you have an AMD Ryzen 7000 or 9000 processor, there’s tuning work in Windows 11 (23H2 and 24H2) that boosts the CPU’s gaming performance by a hefty amount (something like 10% faster). For those running that silicon in their gaming PC, this could be a compelling reason to make the jump.
At any rate, those on Windows 10 must consider their next move soon, as the operating system will hit End of Life in October 2025.
Expect Microsoft to not shut up about this as next year progresses, so if you’ve been mulling a Windows 11 upgrade, you may as well get on with it soon enough. It’s either that or stumping up the cash to get an extra year’s support for Windows 10 – or switching to an entirely different desktop OS. Linux gaming, however, is a bit of a minefield, albeit somewhat less so these days…
Improved Meta Ray-Ban smart glasses could land in 2025 – with a much-requested upgrade
- Meta reportedly making Ray-Ban glasses with a display
- Tipped to land in 2025
- Won't be Orion, but Meta's next step to consumer AR specs
While the consumer version of its impressive Meta Orion AR glasses prototype isn’t due to land any time soon, we might get a taste of the AR revolution next year, as reports suggest new Ray-Ban Meta smart glasses are coming in 2025 that incorporate a “single small in-lens screen.”
That’s per the Wall Street Journal, which says “a person familiar with the project” suggested that the Ray-Ban update is due in 2025, and that it could launch alongside the sensor wristband that early testers have used to control Orion.
The report added that while Meta wouldn’t confirm if this heads-up-display feature would be coming, a viewfinder that could show basic information like notifications was a much-requested feature according to Alex Himel, Meta’s vice president of wearables.
We should take rumors with a pinch of salt, but the display-equipped Meta Ray-Bans were previously teased back in 2023 by a leak from The Verge which revealed Meta’s internal roadmap – a roadmap which pegged 2025 as the release year for these glasses.
This is also a good sign for people wanting to get their hands on Orion, as that same leaked roadmap said full-on AR smart specs would drop in 2027. Assuming Meta is on track with Orion, that means we could be wearing them in only two years, which feels almost too soon for the AR revolution.
As for when we might see the display-packing Ray-Bans, sometime in September or October is most likely given Meta’s typical release schedule; it usually likes to showcase new hardware at Meta Connect, which lands in that slot near the end of the year. It could always mix things up in 2025, but expect to be waiting a few more months for these improved smart glasses if they are indeed on the way.
Ray-Ban Meta smart glasses come with a variety of frames, lenses and colors.
A display upgrade, or battery downgrade?
Meta’s Ray-Ban smart glasses have been one of the best gadgets – if not the best gadget – of the year for me. Technically they landed in 2023, but their Meta AI upgrade, which only arrived this year, took them from interesting novelty to easily the best AI wearable out there, and an easy option to include in our best smart glasses guide.
AR functionality, even something basic such as a single HUD, would vastly improve their usefulness. And if the upgrade can come without a price increase – admittedly unlikely – or at least not a massive one, and maintain the stylish design of the current model, then I could see these upgraded specs easily supplanting everything else out there.
My only concern would be battery life. The current glasses only have a four-ish hour maximum, depending on usage, and a display (even a simple small one) could eat into that. Meta will be aware of this, though, and may have designed new and improved batteries to ensure your glasses don’t switch off after two minutes; at least if they do run out of charge, you can always rely on them as a pair of stylish specs.
Now AI can keep you alive after you’re gone, and it’s as creepy as it sounds
- A company called Life's Echo is using AI and in-depth interviews to create interactive simulations of people when they pass away.
- They clone your voice to make effective 'digital ghosts' full of information about your life.
- Post-death AI mimics are slowly becoming more prevalent, but Life's Echo is more comprehensive than most.
Imagine going to a family reunion and reminiscing about a loved one who has passed away, only for someone to open an app to reveal an AI-fueled replica of the departed you can have a conversation with.
You ask about their childhood, first job, or their emotions on their wedding day, and they answer correctly, in their own voice and words. That's the vision of a new company called Life’s Echo, which offers a suite of AI tools to enable you to produce a digital ghost of yourself capable of conversing with your loved ones after you’ve died.
Life’s Echo is designed to capture the essence of who you are before you shuffle off this mortal coil. The idea is that your stories, voice, and personality don’t have to vanish. Instead, they can be preserved in a digital format with which your friends and family can interact, even when you’re long gone. It’s a way to keep a version of you alive – in the most uncanny valley way possible.
Here’s how it works: you sit down with an AI interviewer named Sarah, who conducts five 45-minute interviews. Sarah asks about your childhood, family, career, love life – all the big stuff. She digs deep with over 1,000 questions in her database, encouraging you to share your most personal stories and details. These interviews are casual and conversational, almost like therapy, but with a digital afterlife twist.
Once the sessions are complete, the conversations are transcribed, and the AI builds a unique model of you. It’s not just a recording; it’s a digital clone of your voice, stories, and personality. This is your “AI Echo.” Your family members can then ask this AI version of you questions, and it will respond with answers drawn from the life stories you provided. Imagine your daughter, decades from now, asking, “How did you feel when I was born?” and your AI Echo delivering a heartfelt answer as if you were right there.
AI tools like Character.AI have enticed users by offering to simulate the personalities of current and historical celebrities. Then, there are AI voice cloning tools like ElevenLabs and Respeecher that have demonstrated that AI can mimic people's voices incredibly well. At the same time, MyHeritage turns old still photos into moving videos. But Life's Echo is going for something deeper.
"Like most people, I am familiar with the lives of my parents and grandparents but I know nothing about my great grandparents. After three generations, knowledge of our existence almost completely vanishes," CEO Ruth Endacott said. “Life’s Echo will help to preserve a lasting record that allows future generations to engage with and learn intimate and very important details about our lives, key experiences, and perspectives."
AI Eternity
Ruth co-founded Life's Echo with her husband, Steve Endacott. Appropriately, Steve Endacott is already known for his efforts to bring AI into the public sphere, having created "AI Steve," the UK’s first AI candidate for Parliament.
The sentiment behind Life's Echo is touching and could be very heartwarming for the right people. But, it's undeniably an eerie concept too. Picture your virtual self relying on those interviews to convey who you were and what you were like to people who won't be born for a long time. It's uncomfortable to envision your voice, your memories, and your personality all distilled into an algorithm available for a posthumous chat at any time.
But, if you're really into the idea, you can use the same AI tools and interviews to produce a personalized autobiography for your funeral, record your own eulogy to be delivered by the AI version of yourself, and even generate a whole script for the person running the funeral based on your stories and preferences. It’s like having a ghostwriter who knows precisely what you’d want said at your send-off.
Of course, this isn’t the first time tech has tried to offer a digital afterlife. Other services, like Eternos and Project Lazarus, have explored similar ideas, where AI models of deceased loved ones can answer questions and share memories. But Life’s Echo goes beyond them with the voice mimicry and depth of its interviews.
There are other questions, of course. Even if you like the idea, will talking to a digital version of a loved one help people grieve, or will it keep them stuck in the past? How do you explain it to kids? And if your AI Echo exists in the cloud, who controls it after you’re gone? Regardless of whether you're curious or queasy imagining it, you may be having conversations with deceased loved ones before you know it.
Google Street View Captures a Man Loading a Bag Into a Trunk. Arrests Follow.
The image, from northern Spain, showed a man with a white bag in the trunk of a car. The National Police said it helped them solve a missing-person case.
Supreme Court Fast-Tracks TikTok Case in Face of Jan. 19 Deadline
The company and its Chinese parent invoked the First Amendment in urging the justices to step in before a deadline to sell or be shut down.
You can now message ChatGPT on WhatsApp or call it on your landline (if you still have one)
Day 10 of the 12 Days of OpenAI went a little retro to make ChatGPT far more accessible than before. OpenAI has introduced new ways to interact with ChatGPT using a much older form of communication technology: a phone number. Specifically, you can text ChatGPT through WhatsApp or talk to it by calling a toll-free phone number. AI by landline has arrived. Naturally, the number to call or message is 1-800-CHATGPT.
You can start a conversation with ChatGPT on WhatsApp by texting 1-800-242-8478 on the app. You can message ChatGPT like any other WhatsApp chatbot but get responses matching those from the free tier of ChatGPT on the mobile app or website. Not every ChatGPT feature is available on WhatsApp either. You can’t ask the AI to search for things online or analyze images, at least for now.
If you’d rather have your AI answers by audio, you can pick up your phone to dial 1-800-CHATGPT (that’s 1-800-242-8478), and a very friendly, very human-like female voice will answer all the same queries you might type out to send to ChatGPT. The experience is pretty much like ChatGPT’s Advanced Voice Mode, where you ask questions, and the AI responds in real-time. It can help you translate a sentence, give recommendations, or chat about whatever’s on your mind.
Even if you still have a phone like this, you can call ChatGPT.
Search AI
There are obvious accessibility benefits for OpenAI in making ChatGPT far more globally available, even with all of the limits and caveats. It’s the same reason Google set up a phone number for Google Assistant that people could call to interact with the voice assistant. But it also points to how OpenAI and its rivals want to see AI integrated into more communication channels. That’s why both OpenAI and Apple were keen to add ChatGPT capabilities to Siri, augmenting the iOS assistant with the AI model.
There are also limits to ChatGPT on WhatsApp and by phone. You can only send ChatGPT a limited number of WhatsApp messages a day, though OpenAI is vague about what that limit actually is. You’ll get a warning when you approach the limit, so you’re not surprised by the cutoff. Similarly, ChatGPT phone conversations aren't unlimited. Instead of a message cap, you get 15 minutes a month of verbal interaction with the AI. And the phone number only works in the U.S. for now. An automated phone number was certainly a surprise for OpenAI’s latest ‘present,’ akin to finding an old wooden train under the wrapping paper. I'd expect OpenAI to take a more future-facing approach to the final two gifts before the event ends.