IBM CEO Arvind Krishna says there is no AI bubble after all

By News Room | December 1, 2025

Today, I’m talking with Arvind Krishna, the CEO of IBM. IBM is a fascinating company. It’s still a household name and among the oldest tech firms in the US. Without IBM, we simply wouldn’t have the modern era of computing — it was instrumental to the development of a whole stack of foundational technologies in the 20th century, and it still has a lot of patents to show for it.

But it’s a lot harder for most of us to see what IBM has been up to in this century. Watson, the company’s famous AI supercomputer, won Jeopardy! back in 2011. Yet since then, as far as most consumers are concerned, it’s been mostly ads during football games and not a lot else.

IBM has been busy, though, just not in a way most of us can see. It’s fully an enterprise company now, as Arvind explains, and that business is booming. But there’s a huge change coming to that business as well. The AI technology that Watson pioneered, all that natural language processing and the beginning of what we now call deep learning? Well, that’s given way to generative AI, and with it, a new way of thinking about how all the systems that run a company should be built and interact with each other.


So I really wanted to ask Arvind how he felt about IBM investing in all of that Watson technology and showing it off a decade before everyone else, only to have maybe made the wrong technology bet and potentially miss out on the modern AI boom.

You’ll hear Arvind be pretty candid that the way IBM was approaching AI back then was off the mark — he says outright that pushing Watson so early into the healthcare field was “inappropriate.” But his take, as you’ll hear him discuss, is that the infrastructure and research from that era weren’t wasted because developers and companies can still build on top of that foundation. So sure, Arvind says IBM got there a little too early. But he doesn’t seem too concerned that IBM will be stuck on the sidelines.

Of course, I did have to bring up how the AI industry has all the hallmarks of a bubble, and it’s one that I and a lot of other folks, even OpenAI’s Sam Altman, are pretty sure is going to pop. Arvind’s more optimistic — or maybe less cynical — than I am, though, and he’s pretty confident this isn’t a bubble. But you’ll hear us compare the current moment to the dotcom boom and bust of the late 1990s and early 2000s — before the smartphone came along to realize the promise of ubiquitous computing — and how ultimately disruptive all that was in a lot of really negative ways for a lot of people, even though all of the bets from the early dotcom era did eventually prove to be correct.

One other thing I had to ask him was: if this isn’t a bubble, then who’s going to win? Because it feels like Apple and Google managed to keep all the profit from the transition to a digital economy, thanks to their hugely successful ecosystems and app stores that effectively collect rent from the labor and transactions of almost every other player that has an app. If the AI economy goes that way, will there be room for IBM or anyone else to get big from it?

Arvind’s answer seems to be to play a different long-term game, which is where the company’s big bet on quantum computing comes in. That bet still isn’t making useful products for most people, but you’ll hear Arvind explain why he still has some faith. This is a good one; we went a lot of places, and Arvind is remarkably candid.

Okay: Arvind Krishna, CEO of IBM. Here we go.

This interview has been lightly edited for length and clarity.

Arvind Krishna, you’re the CEO of IBM. Welcome to Decoder.

Nilay, great to be here with you.

I’m excited to talk to you. IBM is one of the most famous companies in the world, but candidly, I think most consumers don’t know why anymore. It’s very much an enterprise company. It has a lot of businesses. You have been there for 35 years. What has IBM been, and what are you trying to make it today?

You’re right, IBM is an enterprise. It’s a B2B company, to use a more common parlance, as opposed to a B2C. Historically, IBM did create a lot of consumer products. We did that iconic typewriter that people kind of knew about. We did the IBM PC — even though it hasn’t been here for more than 20 years — and a few other consumer things along the way.

I would say candidly that for the last 30 years, we’ve really had no consumer products. So, what does IBM do? Our role is to help our clients deploy technology that makes their business better. Whether they’re on multiple public clouds, want to take advantage of their data, or want to get to their customers faster, that’s what we are really about today.

A lot of people know the Watson brand, which IBM has talked about for years. Famously, Watson competed on Jeopardy!. Now I think the brand has turned into Watsonx. There’s a lot of what I would call “airport” and “football advertising” around Watson that’s aimed directly at CIOs of companies and not at consumers, but we still all experience that advertising. How does Watson fit into the IBM brand? I think that’s what people really hook onto.

If you don’t mind, I’m going to give a slightly longer answer. It’ll be a few minutes, but stop me and ask questions.

So, if we think about the Watson brand, it did really well initially with putting AI on the map. The Watson computer won Jeopardy! and that shocked people. It was really the first time that a computer could understand human language, think about open-ended questions, and was more right than wrong. I wouldn’t say perfectly right, but more right than wrong. I think that woke people up to the possibilities of AI. I will take credit and say that it got us going on the current AI journey.

It fell off because we did things that were a little bit wrong for the market at the time. We were trying to be too monolithic, and we picked healthcare, maybe one of the toughest areas to go into, which I think was inappropriate. The world is ready to take these things as building blocks. Engineers want to open them up. They want to see what’s inside. They want to build their own applications. “I want to use it for this, but not that.”

So when LLMs came along, we had a chance to say, “Let’s rebrand things. Let’s really rebuild the stack, and let’s give people both the pieces and much easier-to-use capabilities.” That’s what Watsonx is. It builds on the fact that Watson is associated with artificial intelligence. I’m convinced that AI is a really big unlock for people. I call it the eighth technology, but that’s a later conversation. So, that’s what the Watsonx brand is all about.

Let me push on that a little bit. You described Watson as a computer, and it was a single computer that could go play Jeopardy!. Then, you described the introduction of LLM technology, and this ecosystem of building blocks.

What was the AI technology bet with the initial Watson computer? Do you think that that was the wrong bet as a technology? Because I have a lot of questions about LLMs as a technology and the bet we’re making, but I’m curious now that you’ve had that experience, what was the technology in the initial Watson computer, and was it the right bet or the wrong bet?

It’s literally the same technologies. So, LLMs were not known at that time, but various other neural network models were. Neural network models span from what we call machine learning to what was beginning to be called deep learning. What was inside Watson at that time was a mixture of machine learning and a lot of statistical learning, which was the core of what became deep learning.

Let me just note, the first big deep learning algorithm was a year after Watson won Jeopardy!. Watson won Jeopardy! in 2011, and 2012 was when the term came to be. But the early incarnations of those things were in there. Unfortunately, they were not there in a way that you could tune them, take one out, make it modular, and take another one. We were trying to give it to you as a monolith — that’s what I meant by monolith — and that was the wrong approach, just to be straightforward. Right technology, wrong go-to-market approach.

Can you draw the connection between that set of technologies and LLMs today? The counterargument that I would give to you is… I’ll just pick on Google. Google has made a number of bets across machine learning, deep learning, and LLMs for a long time. It showed off LLMs really early. I remember [CEO Sundar Pichai] demoing it and saying something like, “I can talk to Pluto,” and no one knew what he was talking about. Then three years later, ChatGPT happened, and Google was like, “Wait, we invented all of that.” That was its technology bet, that was its paper: “Attention Is All You Need.”

You’re saying you had it, too, but it feels to me like there was actually an inflection point where the industry picked a different technology, they picked LLMs. So can you just draw the connection for me?

For sure. From 2010 to 2022, around 12 years, deep learning made incredible progress. No question about it. Here was the catch. Deep learning, to me, was incredibly bespoke. You could take a lot of data and employ a lot of people to label that data. It could do one task incredibly well, it really could, but tasks don’t stay static. The data changes. The tasks change. If I have to redo all that human labeling, relearning, and retraining, I’m calling that bespoke and fragile. So, the return was always a little bit out there. The math changes if you have a massive, singular B2C task, maybe suggesting which photograph or ad you may love. There it’s worth it, because in the month or two I use that model, I can get a lot of return. That’s a lot harder in an enterprise context because it takes a lot more time to make up for all the costs.

To go back to the original work you referred to: with massive amounts of data, the labeling goes away. Wow, that drops the cost by half. You do a brute-force approach using a lot more compute and a lot fewer people. Wow, the cost comes down even more because tech always gets cheaper over time.

So now, half a dozen people and a ton of compute could do what previously may have taken 30 or 40 PhDs and 40 or 50 engineers over six months. You can now do the task in that much less time. That’s a huge unlock. In short, it looked like a 2x or 4x advantage, but if I compare from the beginning to the end, this is a 100x advantage in terms of speed, tuning, and deployability. That’s industrial scale. Plus, these models can be tuned for many tasks, not just one. I’m not saying all tasks, but many, which means that the applicability is massive.

Also, when I want to ingest new data, I don’t have to restart at the beginning. I can add some. At some stage it makes sense to restart, but I can do a bit more there. All of these are massive unlocks, which is why I think it’s the right technology to help massively scale AI. By the way, I don’t think it’s the end all. We’ll come back to that, but it is a hundred times better than the prior.

That’s the turn that I’m really interested in. There were all these shots at AI before, deep learning being one of them. There were machine learning algorithms deployed broadly across the industry. Apple was talking about neural accelerators in the iPhone years ago, but they didn’t add up to what LLMs have since added up to in the industry.

I’m curious though. You mentioned cost and that the cost can come down, but you and I are talking at the end of an earnings cycle, and everyone’s costs are skyrocketing. Their CapEx is skyrocketing. There are some layoffs associated with the increased CapEx that I do want to ask you about.

But just purely on cost, it doesn’t seem like it’s that much cheaper, right? It seems like to win, you have to spend vastly more money, and that money does not, at the moment, have a defined ROI. There are a lot of bets. Can you reconcile the idea that there are lower costs in the industrial scale versus the actual expenditures we’re seeing?

I can, but if you’ll allow me to say this, there’s a difference in the B2C world versus the B2B world. First, let’s just talk about the cost. Are there huge amounts of not just capital but operating expenses being spent on populating data centers with GPUs and building out those infrastructures, and are those amounts being committed now up in the trillions? It’s absolutely true, and that’s what you just mentioned: “Hey, that doesn’t sound cheap. That doesn’t sound a lot cheaper than before.”

It doesn’t even sound safe, just to be clear. I don’t even think that sounds safe based on the potential returns.

Maybe we’ll come back to that. What I meant when I said it’s going to get a lot cheaper is that if I take a five-year arc, what has the semiconductor industry shown time over time? Go back to the beginning of the PC. You have half a dozen competing technologies, and some begin to win. That was the beginning of Moore’s Law really, right?

Every two years you get a 2x advantage in what you can do. I look at the semiconductor side, and I say, “Over five years, we’ll probably get a 10x advantage in pure semiconductor capability, or the amount of compute for a dollar you can spend.” Got it. That’s one. Second, nobody has said that a GPU is the only architecture that is great for deploying these large language models. It’s certainly one. There are other companies coming up. We have a partnership with Groq, they have a different kind. You have Cerebras, they have a different kind–

That’s Groq the processor company, not Grok, Elon [Musk’s] AI company.

Correct. Groq, the processor company. Yes, the word comes from computer science. A lot of people use the word. But yes, Groq, the inferencing chip company. At least in these first steps, Groq looks like it’ll be 10x cheaper. But that, again, is not going to be the only design possible. I think you’ll get a 10x advantage on the pure silicon side. You’re going to get a 10x from the design side. Then there’s the third piece. I think there’s a lot of work to be done around memory caching and how you deploy these models. Do I quantize them? Do I compress them? Do I always need the biggest?

So, there’s a 10x advantage from the software side. You put those three 10s together, and that’s a thousand times cheaper. I’m simply saying, “Hey, maybe we won’t get all of it in the next five years, but even if you get the square root of that, that’s 30 times cheaper for the same dollar spent.” That’s why I believe that this is going to play out. It is going to get a lot cheaper, but it’ll take five years to play through.
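For readers who want to check the math, the compounding Krishna describes is just his three round 10x estimates multiplied together, with his "square root" hedge applied on top. Here's a minimal sketch of that arithmetic; the individual 10x figures are his estimates, not measurements.

```python
# A minimal sketch of the compounding Krishna describes: three roughly 10x cost
# reductions (silicon, chip design, software/serving) multiplied together, with
# his "even if you only get the square root of that" hedge applied at the end.
import math

silicon_gain = 10    # ~10x more compute per dollar from semiconductor progress (his estimate)
design_gain = 10     # ~10x from alternative inference silicon, e.g. non-GPU designs (his estimate)
software_gain = 10   # ~10x from quantization, compression, memory caching, etc. (his estimate)

full_claim = silicon_gain * design_gain * software_gain  # 1,000x cheaper in the optimistic case
hedged_claim = math.sqrt(full_claim)                     # ~31.6x, i.e. "about 30 times cheaper"

print(f"Optimistic compounding: {full_claim}x cheaper")
print(f"Square-root hedge:      ~{hedged_claim:.0f}x cheaper over roughly five years")
```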

Five years right now feels like forever to most people living through this disruption. It feels like forever when you can see the hundreds of billions of dollars being deployed today in data centers that are running mostly Nvidia GPUs. You talked about Moore’s Law. I look at all of that and I actually see a massive disincentive for Nvidia to come out with the next generation of its GPUs. There’s a lot of equity tied up in the H100 being the literal unit of currency that these deals are taking place upon.

That’s a weird dynamic, right? It sounds like you say there’s going to be competitors that upend that dynamic.

Not necessarily upend but provide a lot more competition, and that’s the nature of it.

You kind of nodded in agreement when I said there was a disincentive for Nvidia to release the next generation of GPUs. Do you think that’s true?

I think that when you have an incredibly valuable company that’s making its profit stream from a few products, there’s always an inherent or organic disincentive to try to modify that. That said, I would never bet against Jensen [Huang]‘s ability to disrupt himself and go towards the next plateau, if there is one. So, you have both. I think certain companies are able to disrupt themselves, others hesitate to do it, and that is actually what causes the up and down of companies in the tech world.

I’m obviously leading towards the big question, which is that this feels like a bubble. A lot of people think it’s a bubble. You have a markedly different view of how this industry will play out. You’re investing, and I want to talk about the fact that you’re hiring while some of your competitors are doing layoffs at a huge scale. But let me just ask the question directly, and then we can go into everything else. Do you think we’re in an AI bubble right now?

No. Do I believe that there will be some displacement and that some of the capital being spent, especially the debt capital, will not get its payback? Yes, but let’s just look at it. There is the B2C world, and then there is the B2B world. There is a lot of common tech in both, but let’s just look at B2C. If you build a set of models that are very attractive in B2C, and half a billion people become consumers of them (which is roughly the current number), it makes economic sense to build a slightly better model by spending another $50 billion that can attract another 200 million users.

So, this is a race towards who can get more and more of the world’s 7.5 billion people to become subscribers of a given model because the next bet becomes that network scale and those economies of scale that will allow you to go succeed. You’ve seen that movie play out. That was social media in the last generation. So, I react with, “It makes sense for them.”

Now, if 10 of them are going to go compete, we know that maybe two or three of them will be the eventual winners, not all 10. To me, it makes economic sense that they’re chasing that. My point is that not all of that will see a return. By the way, if I look at fiber optics in the ground back in the year 2000, not all of those people got a return.

However, this is the beauty of capitalism, and I’m calling it a beauty. We spend the money, it gets corrected back to 30 cents on the dollar. At that point, it makes an incredible amount of sense for somebody else to get that asset and turn it into a profit stream, but not all of it will get lost. As I said, two or three are going to make a ton of money, and the others won’t. So, I think the equity being put in will actually get a return. Some of the debt will not.
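To put the consumer-side math above in rough numbers, here's a sketch of the $50 billion-for-200-million-users reasoning. The spend and user counts are Krishna's round figures; the $20-a-month subscription price is an assumption added purely for illustration, and the sketch ignores free tiers, churn, and inference costs.

```python
# Rough arithmetic behind "it makes economic sense": spend another $50B to attract
# another 200M users on top of roughly half a billion existing ones. The monthly
# price below is an assumption for illustration, not a figure from the interview.
extra_spend_usd = 50e9         # incremental spend on a better model (Krishna's figure)
extra_users = 200e6            # incremental users attracted (Krishna's figure)
assumed_monthly_price = 20.0   # ASSUMPTION: a ~$20/month consumer subscription

cost_per_new_user = extra_spend_usd / extra_users             # $250 per incremental user
months_to_recoup = cost_per_new_user / assumed_monthly_price  # ~12.5 months, if everyone paid

print(f"Implied acquisition cost per incremental user: ${cost_per_new_user:,.0f}")
print(f"Months of a ${assumed_monthly_price:.0f}/month subscription to recoup it: ~{months_to_recoup:.1f}")
# In practice only a fraction of users pay, which is exactly why the race is for
# scale: the bet is that network effects, not per-user margins, decide the winners.
```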

I love the fiber comparison, and if you’ll indulge me, I want to sit in it for just a minute. I was very young when the fiber rollouts were happening. I was very excited to get faster internet access, and I remember that bubble well. Part of that bubble was wanting to build infrastructure for the internet, and the thing that really drove the bubble was wanting to move the entire economy onto the internet, and that didn’t work.

There was the Pets.com IPO, and that was the sign that we hadn’t quite moved the economy, but we built the infrastructure. The important thing and the important difference is the fiber in the ground didn’t go bad.

Earlier this year, I interviewed Gary Smith, who’s the CEO of Ciena, which does fiber multiplexing. To this day, Ciena can get seemingly infinite returns on fiber that was deployed 30 or 40 years ago, and its technology now helps build data centers. That was really why he was on the show, because he really wanted to tell everyone that his technology could build data centers. The GPUs go bad. They’re already failing at a rate between 3 and 9 percent in the data centers. There also might be an H200, or the chip you’re investing in with Groq might displace the H100.

So, all of this CapEx is not going to be here 30 years from now for the next generation of entrepreneurs, like Gary, to build upon and create more capacity with. We’re just going to throw it away.

No, no, let’s decompose it. So, you’re building a physical data center that’s a lot larger. I think concrete and steel survive. Next to it is a power plant. We need the electricity. Actually, I believe those power plants will even get hooked up to the grid over time, which is even better for national infrastructure. That’s useful.

Now, the fiber coming out of them — the networking, storage, and CPUs inside these places — are all useful. I’ll acknowledge right now there is a very high failure rate, but being a bit of a semiconductor geek, though I’m not anywhere near as deep as some of my friends and competitors in those spaces, if you can run something at 3GHz and you try to run it at 4GHz, it will actually run but has a higher failure rate.

Maybe it’s great if you try to run it at 300W. If you run it at 400W, it has a higher failure rate. So, if today you just need the performance for training a model that much faster, it actually is worth it to tune it and say, “I’m okay to have that failure rate. I got software that worries about moving stuff around.” But you can de-tune it slightly for higher resilience.

I think that is actually a design point. That’s not really a bug, so to speak. Do I acknowledge that these will move up over time? I began by saying, “I think in five years, our semiconductors will be 100 times better.” So you’re right, there’s a five-year depreciation to the GPU or some of the compute infrastructure, but the other half is useful. But in five years, you don’t throw away all the CapEx. You throw away a little piece, and you replace that with something that is better at that point.

I want to make the specific comparison to fiber — and maybe it’s too pedantic — but the fiber was in the ground and then it was there. It did not incur a recurring cost to the people who wanted to use it, outside of wanting to create more capacity by multiplexing the fiber.

You’re right, the fiber in the ground is durable. Maybe not forever, but at least for 100 years. At some point, even glass begins to occlude and do all kinds of weird things, but it’s good for 100 years. But people also built a lot of end stuff on top, all of which had to be thrown away.

You’re now forgetting all the failures. People were building Asynchronous Transfer Mode (ATM). People thought that they could build really intelligent video streaming and put the guts of that inside. People were talking about doing Wavelength Division Multiplexing (WDM), since you talked about Ciena. Then, it became simpler. Here’s dark fiber, it’s a dumb pipe. Go throw your bits in it at a terabit; the intelligence belongs at the cloud end. That took 10 years to unfold. So there was actually a change in how it transpired. I’m sorry to be that geeky.

No, this is why we’re here, that’s why I asked the questions. I would actually argue that was one of the most exciting periods in tech, when no one knew how it would work, and there were many, many more shots being taken. It all did pop in a catastrophic bubble. But it was very exciting.

It did go down, and then today you could turn around and say, “But all the companies that got built on the back of that clearly proved that that investment was worthwhile.” If I look at it at a national or an aggregate investor level, while some people did lose a lot of money, some people made a lot of money.

I want to take the other part of that bubble comparison, which is that we were going to move the entire economy to the internet. You brought up social media. As someone who covered it very deeply from the beginning of the iPhone to now, I would characterize it as wanting to move the entire economy onto your phone.

First, we were going to put it all online. Maybe it didn’t have the distribution because we’re not all going to look at CRT monitors on our desktop, so that didn’t happen. But then we all got phones, and the idea that we could move an enormous amount of at least the consumer economy onto our phones happened. That occurred. We’re all living with the results of that today.

Do you feel like the argument, at least in the consumer space as you’ve described it, is that we’re going to move that app economy to AI? Because how I see it is that the same class of investors who got rich moving the economy onto smartphones now think they can run the playbook again with AI. Maybe we’ll re-architect the applications with [Model Context Protocol] (MCP) and maybe there’ll be agents using the websites instead of people, but the argument from the same set of characters feels broadly the same to me.

If you don’t mind, I’ll go a little bit deeper on your first part.

You’re absolutely correct that the front end of the economy moved on to the phone. It was definitely a massive unlock the moment the phone gave you access so that it could be with you everywhere and you were not just anchored to a desk with a laptop or a desktop. Let’s acknowledge that. But there is still a physical economy.

I always talk about how 60 percent of the workers in the United States are still frontline: people who do construction, people who have warehouses. If you’re buying a tangible good, it’s still coming from a warehouse. It’s maybe not from a retail store near you because they had a front end, but in the back, there’s a warehouse, a truck driver, and maybe multiple routes of distribution. We still go to restaurants, there’s still food, there’s still groceries, there’s physical healthcare, there’s all of that. It becomes more efficient, easier, and more convenient.

But now, if I say, “I don’t have to spend that much time, I’m going to have an agent or a front-end AI that helps to unlock even more and puts together four or five things that I have in my head,” I completely agree with you. Why wouldn’t we want that to happen? That is going to happen. You can see the early instances of it already. It’s so appealing now because it gives a chance (without me naming any names) to reshuffle who the biggest players are, and it gives a chance for some disruption. On the other hand, I think it goes beyond the consumer and into the enterprise. I actually believe there’s going to be a billion new applications written.

Now, if you think about the smartphone ecosystem we talked about, people talked about half a million apps, then a few million. I think this could be a billion. There may be a few million that sit on the consumer side, but if there are, let’s say, 1,000 per enterprise and you multiply that across the number of enterprises, then that unlocks a lot more.

Let me ask you one question there, and then I do want to ask you the Decoder questions and about IBM specifically. The biggest winners of that move to put all economic activity onto the smartphone were in many ways Apple and Google because they collected an enormous amount of rent on the back of that transition with app store taxes and the fees.

Maybe that’s going to get unwound now with whatever antitrust litigation is happening in Europe, but it happened. They collected a huge amount of fees. They are some of the richest companies in the world on the back of that. Apple just reported its quarterly earnings, and its services revenue is higher than ever on the back of App Store fees. That’s what that line really is. I think it runs the TV business just to pretend that reality is not the reality.

Do you see that playing out in AI? Because I look at OpenAI announcing what looks like an app store. I look at Google announcing that Google Search will have inbuilt custom developed applications as you search. It’s very cool, but I see these points of centralization emerging again that don’t look like Apple and Google, and maybe there’s competition for that. There might be competition for that in the enterprise. Do you see those same points of centralization?

I wouldn’t say that we know who the winners are today because we are only in the first innings of the game. There will be some winners. How about I agree with you on that.

But do you think those winners look like the central points of control that we saw in the smartphone era?

There will be a few different winners. If you go back to the smartphone analogy, you had one who built a vertically integrated stack. It was an easier, more convenient device, and then to get access to that device, people had to come into the App Store. That was that model. The other model said, “We are completely open,” with the Android operating system. However, to get access to everything else, you had to go into the Play Store or into Google Search. That was the second model. It wasn’t identical, but it was similar. So, those became the two entry points to get access to the end individual. That’s why they could charge the appropriate… you’re calling it rent, which is an economics term. Let’s say they could charge an appropriate margin from a business standpoint.

I think Tim Cook would call it a margin, but the developers I know feel very differently about that margin.

But there is also a massive amount of cost for those who build out that massive infrastructure. It’s not like they can maintain it forever. As the Chinese have shown, you can build competing products. If you can keep running ahead, then people will prefer these devices. But at the end of the day, the value is in the apps, as you were saying. If that app is available on something else or if the friction and innovation on the main platform slows down, people will switch.

It’ll take maybe three or five years. It’s not like there will be guaranteed returns forever. It will switch. As many other companies have seen, that switch takes a few years. It doesn’t take decades. When it happens though, it’s disastrous to the original company. Some manage to recover because they wake up and say, “Hey, wait a moment, I got to change.” Some don’t.

I think this brings me to IBM. This is the process you and IBM have been in for many years now. You took over as CEO in 2020, and you had been at the company for almost 30 years when that happened.

I ask everybody these questions. You have a unique perspective here. You’d been at the company for a very long time when you took over as CEO. How was IBM structured when you took over, and how have you changed that structure?

It’s much more about culture, focus, what we do, and how we do it than the formal organization structure. If you say that you’ve got to be focused on innovation, you’ve then got to be focused on where you can provide a unique value back to your clients. That’s the first question. I want to be clear that our sweet spot is helping our B2B clients succeed. You might say, “Okay, well, that’s a very big remit. What then?”

I hold two points of view that are somewhat unique. One, I don’t believe that the majority of our customers are going to go to a singular public cloud. Some will, but the majority will not. People outside the US tend to want to be somewhat split between an American cloud and something more sovereign. Then, there are people who use plenty of SaaS properties. There’s a huge amount of economic value in what they’ve already written in their preexisting applications. I’ll use the word hybrid to describe that.

Is there a place for a vendor to have leading-edge tech to help our clients in that journey? That’s the hybrid approach we took, and that has shown to be of incredible value over time. About 60 percent of the total spend is outside the US. Even inside the US, anyone in a regulated industry is going to be hybrid in some sense. So that’s the first.

The second is focusing on where AI can be deployed in the enterprise. Let’s not go try to compete. I will not try to compete with Google on building a chatbot that… what’s the current number? It’s 650 million active subscribers. That’s not where we have brand permission and credibility. But I can walk into a health insurance company and say, “I’ll make sure that your clients’, your patients’, health data is protected, but let’s unlock AI to make those people feel even happier and get quicker, easier answers.” Those people tend to trust us because in 114 years, we have never misused that data, not even once. You get that, and then you can give them the tech and get it deployed.

So we picked those two. Then, I asked, “What are we really good at?” We’re really good at building systems. I decided early on that the third bet was on quantum. Let’s see whether we can change it from being a science challenge to an engineering challenge. Once it’s an engineering challenge, how do we scale it to really get deployed? That was really the big inflection point as opposed to trying to do lots of things. I used the word innovation. That meant commodity services had to leave the company because you can’t do both. It meant that if we are going to be hybrid, I had to partner with everybody else that I talked about.

So, you begin with the clear view of what should be done, and then you say, “It doesn’t matter, I’ll make all the hard decisions of changing the way the sales teams are paid by changing the incentives of all the executives to align with what’s needed to make those things succeed.” Sorry for a really long answer.

No, that’s great. A trope on this show is that if you tell me your company structure, I can predict 80 percent of your problems. You might say culture and structure are divorced, but I see the connection, and they feed off each other.

So, you were at IBM for a long time. Vanishingly few people will ever interview to be the CEO of IBM. What was that process like? Did you come in saying, “This company is focused all wrong. We got to let go of the commodity stuff. I’m going to make these changes?” Then, once you had decided to do that, how did you actually change the structure of the company to focus on those things?

I probably didn’t spend 30 years aspiring to this job, just to be upfront. I think it was more of a process of discovery, even for myself, in the couple of years before that. I made the hybrid observation deeply in 2017. As I was making that, I said, “Okay, how do I test this? ” I actually had a partnership with Red Hat, and I said–

Is this why you have a red hat? I noticed you have the red hat behind you.

I have a red hat there because when we announced the decision in 2018, it took a year to get through regulators and close it. It was 30 percent of our market cap. Very few companies spent 30 percent of their market cap on a conviction and a belief. So, I keep the red hat there because to me it was clear: if that conviction turned out to be wrong, I should be fired. People hesitate to say those things, but I say, “If I’m that wrong, I should not be working here.” That is why I keep the red hat as a reminder to myself that not only must you have the conviction but you must then do the really hard action.

So, that’s the culture part of making conviction succeed. Otherwise, people will just fall back into the lanes they were in. There’s comfort in doing things the way they’ve always done them —

Put me in the room. It’s 2020, you’re going through the interview process with the board. Did you have a deck that said, “We’re doing too much commodity stuff. I’m going to cut it down, and we’re going to focus on these areas and the big bet with the quantum stack change?”

My deck was three pages of prose. It was not like 100 pages of analysis. I believe that you should talk about what you want. I said, “We have to grow, and my view is very simple: you’ve got to grow well above GDP growth, otherwise you’re not going to be relevant in the future.” “Okay. If you’re going to grow, where are you going to grow?”

If you look at us, our brand permission is fundamentally being a technology company. That was code for “high innovation.” Now, this is where I think many companies fall short. If you’re clear about that, then things that don’t belong should not be in the company. So, that is why the spinouts took a couple of years to get done.

Then, I said, “We have to grow in software because that is where our clients perceive value.” You talk about structure. Well, if you’re going to grow in software that becomes a big fundamental change. That’s where capital allocation and resource allocation go. That’s where you’ve got to put way more investment than you historically had. Then, how do you fundamentally line up with partners? That is organizational change because you got to say, “How do the sales teams get paid? How do you have the right incentives?” So, those were maybe the three first really big decisions I made in the first two years.

As you do that, you also realize people tend to be very risk-averse. How do you unlock them so they take that risk? To me, there’s no risk-free path to success. If you want to be risk-free, you’re going to almost always be slammed against the bottom end of performance. How do you unlock risk-taking in people so that they feel motivated to do it more often than not?

This leads me into the second go-to question I ask everybody. I have a sense of it, but I’m curious how you will describe it. How do you make decisions? What’s your framework for making decisions?

You always start with whether there is value. If it’s a decision that’s going to impact what we do and how we do it, does a client benefit from this new way of doing it? If you’re pretty convinced of that — and I’ll come back to where you get your conviction — I always believe that you should triangulate. I will always talk to a number of people on the inside and outside. Maybe not with a full description, because sometimes you don’t want to give that, but with enough to validate my assumptions or what the possible victory would be.

So, you arrive at a conviction, you triangulate it with a few people, and then you ask yourself, “What needs to change inside if we really want this to go all the way?” Once you arrive at conviction and all those, you are then able to go execute it.

I build on my own strengths. I think I’m a reasonably deep technologist. I think I generally understand where the tech can go, but I may not always fully understand what a client can do with the tech. That’s why the first piece is really important. Then, I triangulate. I don’t mind reaching 10 levels down in the organization to talk to somebody who I think has an opinion on that topic or knows about it. Talk to possible clients about it. Talk to partners about those things. It just informs your opinion. In any case, when you’re out talking to them, keep your ears open for what they say. That could actually inform some things later.

Let’s put that into practice on the farthest bet you’re making, which is quantum. All the big tech companies have quantum divisions. I’ve had Jerry Chow, who runs part of your quantum team, on the show before. That was a great conversation. I’ve looked at a lot of rooms where someone tells me that this is the coldest place on Earth, built to run their quantum computer or whatever qubit they’re trying to generate that day.

None of that has paid off yet. We’re not close to what they call “utility-scale computing” in quantum. That’s not something your customers are asking for yet. That’s a decision outside the purview of structure and culture. That’s a big bet that there will be a massive step change in how we build computers that unlocks vastly more value for everybody. You have to keep that investment going through all the turmoil, all the data center investment everyone else is doing, and Amazon saying, “We’re laying off 14,000 people because of AI” while you’re saying, “We’re going to hire more college graduates than anybody else.”

What is the decision to stay focused on quantum in that way? How do you maintain that decision?

You are right that you can’t go check with a customer because they don’t know what to do with it today. But that’s not fully true. So, over the first five years, 2015-2020, you’ve got to have a belief in what it could do. Maybe because of my graduate school math background, I thought, “Wow, if we can do that, I can immediately see what kind of problems could get unlocked.” But trying to explain that to anybody but the people excited in the field is impossible. I completely acknowledge that those five years were about an internal bet on a set of people and a possibility.

But from 2020 onwards, we began to say, “These are not utility scale. Let me acknowledge it. They’re full of errors. They are small. Could clients still get excited by it?” I did perform a full check. We have 300 non-commercial clients. There are 300 people working with us in… let’s call it a research mode. There are 100 that are purely commercial companies, 100 in the world of materials or medicine, and 100 that are pure academics. Those are the rough buckets.

That’s why HSBC proved to itself we could do bond trading pricing on it. Vanguard proved to itself that if it got big enough, it could build a portfolio that better appeals to your needs. You have Daimler working on EV batteries. You have Boeing looking at corrosion on materials. So, there is a proof point. They’re not saying they’ll buy it the way it is today. All they’re saying is, “Hey, if you get to that point, this is really interesting to us.”

There is validation, even from clients. Then I said, “How do I know there’s enough interest?” So, I asked the team to put the software out open source. Now, I’ll say for many people, including some currently in AI, that’s not common to do early on. Why open source? How will developers and universities use this stuff and get any excitement if you put a price on it? So, we put out all our software open source. The fact that there are 650,000 people globally who use it tells me that there is excitement, there is a movement, and that people are hungry for a new approach to solve other kinds of problems.

Those were the two validations on my framework that were useful. If that 650,000 had been 100,000, I might still be okay. The fact that it’s 650,000 tells me there is real, real traction. But if 650,000 had been 1,000, I would have told my people, “Guys, these are your physics friends. This is not a market.”

I’m curious about that. That is the kind of long-term bet, and the early interest from people who think, “This type of computing will let us do many more things.” It’s funny on the consumer side. I hear about it in terms of, “Well, when there’s quantum computing, we’ll need quantum proof encryption.” It’s like there’s a secondary market now based on whether or not you will succeed in quantum computing that has almost nothing to do with quantum computing succeeding. It’s a bet. It’s a strange hedge against your success, Microsoft’s success, or whoever else is doing quantum.

What does actual success look like? Is it a step change in computing that is as big as the re-architecture of all computers around AI that we’re experiencing today? Is it bigger than that? What does that feel like to you?

I actually think it’s an add. So, there are CPUs. GPUs did not replace CPUs, it was an add. Now, GPUs are priced much higher than CPUs, so the market is bigger for GPUs than CPUs, but it was a complete add. It didn’t displace what AMD, Intel, and ARM do.

I feel like Intel feels differently about that right now. Sure, I agree with you.

Some companies have many other issues. The number of x86 chips being sold per year is as high as it has ever been. How about if I phrase it like that?

Okay. So, it’s an add, but if the next one has more immediate value, you can price it at a different price point. Does that make sense?

Let’s just use the term QPU just to keep it simple with quantum. QPUs are going to have an incredible value when they come because they can solve problems that you actually cannot solve on GPUs and CPUs in any economic terms in the near term. Look, everything you can do on a GPU, you could do on a CPU, but it’s going to be a thousand times slower and not be as economically feasible. So, GPUs opened up a whole class of new problems.

QPUs are similarly going to open those up. It’s an add, not a displacement. But given there’s finite dollars in the world, if there’s an add and we have a first mover advantage, like what one of the companies you named had with GPUs, that opens up a possibility that the market is that big.

So we did work. We asked a couple of our friends in the consulting world, like Boston Consulting Group and McKinsey: “Tell us what you think the value is if we can arrive at some utility point.” They both came back and gave us a pretty consistent answer. It was very sparkly, but think of it as, “We think there’s $400 billion to $700 billion of value early on per year.” Great! “How much do you think the tech world could get out of that?” “Probably 20 to 30 percent. Seems reasonable.” I said, “Okay, that’s the size of the prize we’re going to chase.” How much of that share we will get versus others is a separate question, and that’s the journey we are on for the next five years.
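For what it's worth, the sizing Krishna describes reduces to a simple multiplication. Here's that arithmetic using only the ranges he quotes; this is his framing restated, not a forecast.

```python
# The market-sizing arithmetic behind the quantum bet, using the ranges Krishna
# attributes to the consulting studies.
value_low, value_high = 400e9, 700e9           # estimated value unlocked per year at utility scale
tech_share_low, tech_share_high = 0.20, 0.30   # share he expects technology vendors to capture

addressable_low = value_low * tech_share_low      # $80B per year
addressable_high = value_high * tech_share_high   # $210B per year

print(f"Annual revenue pool for tech vendors: ${addressable_low/1e9:.0f}B to ${addressable_high/1e9:.0f}B")
```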

You think you’ll be able to pay off the quantum investment in five years?

It’s really hard for engineering to put a dot on it. This is not like building the next mainframe. There, I really know what I’m doing. I know exactly how many months it’ll take, and I could put a dot on it.

Here, I gave it a spectrum. Will we get to something remarkable in maybe three and a half years? I’m going to give it low odds. It’s possible, but the odds are maybe 20 or 30 percent. Can we get there in four years? My odds go way up. Can I get that in five years? My odds go really high. So that’s why I say five. That’s not to say it’s really five years. I think it’ll be a bit of a spectrum. I’m hoping you’ll see some really early adopters in around years three to four. There’ll be more at the end of year four, and then the risk decreases for people after that.

That’s a lot of action in 24 months. That will be a very exciting two-year period if you hit it.

This is really interesting to talk about in comparison to AI. You’re talking about how you estimated the market size for a nascent technology that you have to develop actual capabilities for. You estimated how much of that market share you could take, and you’re making some investments based on the potential return.

So, that last part, why us? I assume others can do all of this. Why would we succeed? Because I think it’s much more. There’s so much talk. You mentioned the various qubit technologies, cold rooms, and alternate technologies. I actually love the fact that there is that much, but that’s not building a computer. I always tell people, “You absolutely need a great QPU and a great qubit. You also need a way for them all to talk to each other. You also need a way to control all of them. You also need a way for it to function by itself without six quantum physicists standing in the room tuning it.”

This is a great employment plan for quantum physicists. Come on.

[Laughs] So, you need all those things, and we are one of the unique players who have a lot of those skills in house. It unlocks people to go do that, and it really motivates and excites them. I think that is an advantage we have today in terms of underlying skills.

I would call that a very sober, very thoughtful, almost conservative approach to deploying billions of dollars in CapEx against a technology that has not yet proven itself in the market.

You’ve made some estimates. You have an idea of what your company can do to add value. You’re going to do the hard research, and then you’re going to get there. I would just compare that to OpenAI and the AI market that we see today. Just this week, OpenAI converted to a for-profit company. There’s reportedly a trillion-dollar IPO coming. There’s everything we’ve talked about in the enterprise space, where you can see how AI and enterprise can help accelerate data use and all this unstructured data that companies have. Fine.

But the bet is in the consumer space. We’re just going to build a full-fledged agent that’s going to run around and do stuff for you, and that will replace your smartphone. None of that seems sober, conservative, based on a real market estimate, or even grounded in whether consumers want the product. It’s just a pipe dream.

How do you reconcile those two things? The bet is that there will be AGI. At the end of the day, the whole market is based on the idea that someone’s going to figure out AGI, and then all of this will have been worth it. The press release from Microsoft announcing the restructured deal with OpenAI mentions several times in bullet points that the terms expire when OpenAI declares AGI.

I read that and I thought that this is the most remarkable press release I’ve ever read in my entire life. No one can even define this term, and now two of the richest companies in the world are issuing press releases saying their deal will restructure itself when that happens. That’s very different from your bet on quantum. How do you read that discrepancy?

Of the ones you mentioned, one has a huge amount of cash flow and ability to invest.

But it’s something that could be incredibly profitable. The other one is a classic Silicon Valley startup. Some will succeed, some will not. I’ll offer you an opinion. First, I don’t think deeply about the consumer side and how much money they’ll spend. It’s interesting to observe, but I’m not going to pretend that I deeply–

Well, let me ask you this question. Do you think there’s an enterprise ROI that would justify the spend we have today? Because I look at it and I say, “Absent AGI, this spend might not be worth it.”

I’ll actually put it this way. You said I’m a little numerical, I’m a little geeky.

I’m having the time of my life in this conversation, by the way. I love it.

So, let’s ground this in today’s costs because anything in the future is speculative. It takes about $80 billion to fill up a one-gigawatt data center. That’s today’s number. If one company is going to commit 20-30 gigawatts, that’s $1.5 trillion of CapEx. To the point we just made, you’ve got to use it all in five years because at that point, you’ve got to throw it away and refill it. Then, if I look at the total commits in the world in this space, in chasing AGI, it seems to be like 100 gigawatts with these announcements. That’s $8 trillion of CapEx. It’s my view that there’s no way you’re going to get a return on that because $8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest.
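Krishna's back-of-the-envelope is easy to reproduce. In the sketch below, the cost per gigawatt and the gigawatt totals are his figures; the roughly 10 percent annual cost of capital is an assumption inferred from his $800 billion interest number, not something he states.

```python
# Reproducing the CapEx arithmetic: ~$80B to fill a 1 GW data center, 20-30 GW for
# one company, ~100 GW of announced commitments industry-wide. The implied ~10%
# annual cost of capital is an inference from his numbers, not his stated figure.
cost_per_gw = 80e9          # ~$80B per gigawatt of filled data center (his figure)

single_company_gw = 20      # low end of the "20-30 gigawatts" one company might commit
industry_gw = 100           # ~100 GW of total announced commitments

single_company_capex = single_company_gw * cost_per_gw  # ~$1.6T, in the ballpark of his $1.5T
industry_capex = industry_gw * cost_per_gw              # $8T

implied_annual_rate = 800e9 / industry_capex            # ~0.10, i.e. roughly 10% per year

print(f"One company at {single_company_gw} GW: ~${single_company_capex/1e12:.1f}T of CapEx")
print(f"Industry at {industry_gw} GW: ${industry_capex/1e12:.0f}T of CapEx")
print(f"Implied cost of capital behind the $800B/year figure: ~{implied_annual_rate:.0%}")
```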

Have you told Sam [Altman]? Because he seems to think he can get both the CapEx and the return.

But that’s a belief. It’s a belief that one company is going to be the only company that gets the entire market. I got it, that’s a belief. That’s what some people like to chase. I understand that from their perspective, but that’s different from agreeing with them. “Understand” is different from “agree.” I think it’s fine. I mean, they’re chasing it. Some people will make money, some people will lose money. All the [infrastructure] being built will be useful even if the company goes away, but if they make it, then they are the sole surviving company.

Nilay, I will be clear. I am not convinced, or rather I give it really low odds — we’re talking like 0 to 1 percent — that the current set of known technologies gets us to AGI. That’s my bigger gap. I think that this current set is great. I think it’s incredibly useful for enterprise. I think it’s going to unlock trillions of dollars of productivity in the enterprise, just to be absolutely clear.

That said, I think AGI will require more technologies than the current LLM path. I think it’ll require fusing knowledge with LLMs. We have words, and I’m not sure that’s the only way to create knowledge.

People talk about neuro-symbolic AI, but if I just said “knowledge” in a broader sense, I mean hard knowledge that people have spent thousands of years discovering. If we can figure out a way to fuse knowledge with LLMs, maybe. Even then I’m a maybe, I’m not like 100 percent, but that’s from a geeky technical view.

Actually, that was my question, and you started answering before I asked it.

I’m on the same path as you. I look at what LLMs can do today. I look at how people talk about the scaling laws they might hit, the need for more data that doesn’t necessarily exist at the scale it might be needed, and I say, “I don’t think LLMs can do it.” I don’t see a here-to-there path for this technology to get to what everybody says it can do.

It sounds like you don’t think that’s true either. I would just connect that to what we started with. IBM developed Watson, and it was very good at its tasks, but it wasn’t the right set of bets at that moment and you had to pivot. Do you see the next technology that LLMs or the AI industry would have to pivot to?

Let’s look at three examples. Machine learning was not actually replaced. Machine learning is incredibly useful for lots of simple tasks. Your little thermostat in your house uses machine learning, not LLMs.

We did the first profile of the Nest, and I remember meeting their machine learning person to talk about the Nest thermostat in 2011.

That’s incredibly useful. People look at it like golf ball, baseball, tennis ball trajectories. That’s all machine learning, it’s not being replaced. It’s really useful, but it’s not going to answer questions.
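To make the distinction concrete, here's a hypothetical sketch of the kind of small machine learning Krishna is pointing at: a thermostat that learns your preferred temperature per hour from manual adjustments. The class name and numbers are invented for illustration; the point is only that a tiny statistical model, not an LLM, is the right tool for a task this narrow.

```python
# A hypothetical illustration of "small" machine learning: learn a per-hour
# temperature preference from the user's manual adjustments. No LLM involved;
# a running average per hour of day is enough for a task this narrow.
from collections import defaultdict

class LearningThermostat:
    def __init__(self, default_temp=20.0):
        self.default_temp = default_temp
        self.sums = defaultdict(float)    # sum of observed setpoints per hour
        self.counts = defaultdict(int)    # number of observations per hour

    def record_adjustment(self, hour, temp):
        """The user manually set `temp` during `hour`; remember it."""
        self.sums[hour] += temp
        self.counts[hour] += 1

    def setpoint(self, hour):
        """Predict the preferred temperature for `hour`, falling back to the default."""
        if self.counts[hour] == 0:
            return self.default_temp
        return self.sums[hour] / self.counts[hour]

thermostat = LearningThermostat()
for temp in (22.5, 23.0, 22.0):            # a few evenings of manual tweaks at 7 p.m.
    thermostat.record_adjustment(19, temp)

print(thermostat.setpoint(19))  # ~22.5, learned from behavior
print(thermostat.setpoint(3))   # 20.0, no data yet, so the default
```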

Deep learning will be replaced with LLMs. I actually think LLMs are here to stay, I don’t think they’ll go away. But that’s not the end technology for AI. There is a next one and the next one will be an add, too. There’s machine learning, which is robust. There are LLMs, which I think are robust but are statistical in nature. So, where’s the deterministic piece? Where’s the knowledge piece? Is there something beyond LLMs?

Look, this stuff is eight years old at this point. The first paper I think was in 2017, when attention and this approach with transformers came together. Is there another one? I don’t know. I suspect there is, but we don’t know. It’s the same as in 2016 when you couldn’t predict the current LLM technology.

A comparison I would make is there’s now a core technology that everyone feels very invested in. I live in New York, and when I go to San Francisco, I joke that it’s just a different planet. Everyone is maybe much happier and more optimistic about AI than I am. I look at the companies springing up with the people who have left OpenAI to start super intelligence companies or AGI labs. They are all still betting on LLMs. The core of their work is still LLMs.

The idea that you can “feel the AGI” comes from a lot of people using Claude to write code and saying they can feel it. Are you worried that there’s not enough investment in the stuff around the edges that might supplant or augment LLMs?

No, because I think when it is so unknown, it should not be companies that change it. I think that academia should change it. I do think there are enough AI researchers in academia who are going to be working around these topics, but when you don’t make enough progress, there isn’t going to be any media coverage or any other coverage. But from me talking to my friends — whether at MIT, at Illinois, or Chicago — there is work going on. It’s just not occupying attention because the airwaves are completely LLMs only.

That’s why I’m asking. Do you think that there’s enough work happening? It sounds like you do, even in America in 2025, where there is pressure on universities not to bring in foreign graduate students, along with other kinds of pressure on academia. It seems tenuous at best.

Do you think that investment is still happening there?

I’m more optimistic than pessimistic. Is there some of what you described happening? Absolutely. But when I look at the number of top faculty in the top 20 engineering schools, it’s not really decreasing. Are there some funding cuts? We’re talking like under 10 percent. It’s not like it’s massive. Yes, there are much larger numbers in some areas than in others that are not hard sciences — by hard sciences I mean physics, chemistry, math, and engineering — but that’s not where I spend my energy. If I think about physics and hard engineering, I’d say there are some cuts, but it’s not that extreme.

I also look at the national labs. No cuts. So it looks pretty good.

I’m happy the frontier is good.

Let me end by talking about the near term. We spent a lot of time talking about how things might go, how the core technology bets you’re making might play out over time, and whether or not GPUs are dark fiber, which is one of my favorite arguments to have. I don’t know if you could tell.

In the short to medium term, what we are seeing is a bunch of companies saying, “Okay, we have AI. We can just do it. We’re going to make the job cuts.” Accenture had a bunch of job cuts. Amazon had a bunch of job cuts. UPS had a bunch of job cuts just in the week that we’re talking.

If I were to be as harsh as possible about the work of your average big consulting firm, I would look at it and say, “Boy, a lot of that can go.” You can just let the AI make those decks all day long, because the point of the contract is to let the CEO restructure the company. We just need the gloss of external validation to make the changes and the layoffs we were already going to make.

That is McKinsey’s function in the world: “Boy, it’s a lot cheaper and faster to just let the AI make the deck that no one ever reads in the end.” I feel like I see that playing out. How do you think people should react to that within the timeframes you were talking about, when the real change comes?

Could there be up to 10 percent job displacement? I believe that’ll be likely over the next couple of years. It’s not 30 or 40 percent, but it is up to 10 percent of the total US employment pool. It is very concentrated in certain areas.

Now, as you get more productive, companies are going to hire more people, but in different places. That was the point I was making. We are hiring more. People say, “I don’t need the entry-level person because an AI agent can do that task,” and I’m looking at them like, “Really?” Think strategically for a moment. Wouldn’t you rather have an entry-level person whom AI makes more like a 10-year expert? Isn’t that more useful to me than the other way around? Otherwise, where is the talent who’s going to come up with the next great product? Where is the person who’s going to be able to convince a client to deploy technology the way it should be deployed? That’s why I think some are being shortsighted.

But I also think that some of this is happening right now because if you look at the total employment numbers, I think people gorged on employment. I used that phrase during the pandemic and the year after. Some of the displacement is just people saying, “I don’t need so many people because I went up 30, 40, 50, 100 percent from 2020 to 2023.” There is going to be some natural correction. Business is never completely optimized. I think in engineering terms, it’s an underdamped system. When there’s a need, it goes above. Now, it has to correct. It’s probably going to go below what’s needed, and then it’ll hit the correct equilibrium, depending on market demand and growth.

Do you feel like the broader market is stable or predictable enough at this moment for that natural business correction cycle to play out in a healthy way?

People talk about, “With all the wars, with all of the cyber attacks, with interest rates, doom is coming. GDP is going to fall.” I kind of hold the view that, if I look at demand, global GDP growth near 3 percent looks likely. But that ignores inflation, so in nominal terms, we are at something like 5 percent. I think those two together are probably going to hold for quite some time.

I’m curious because I hear from our readers, some who are consumers and some who work at tech companies and build the products. The split between how they feel about AI, what AI is actually doing to the economy, and what people claim AI will do to the economy is as vast as any split I’ve ever experienced in my time covering technology.

I think people who got trained on a certain set of technologies and became experts don’t acknowledge it, but that expertise is deeply tied to their identity. Now suddenly, the person who’s been coding the product for 10 years finds that a kid coming in from college using generative AI tools is three times faster than them. The kid doesn’t know the code, but the AI knows the code, and the veteran doesn’t know how to use the AI.

You’re the CEO of IBM. Is that your experience at IBM? Because what I hear from our readers is that it would be great, but it’s not true. It’s not happening.

We took a tool we built ourselves, not one of the industry coding tools, to help our people do software development. Within four months, the 6,000-person team that embraced it — so not a tiny number — was 45 percent more productive. For comparison, we have 30,000 others who don’t yet use that tool.

So, those are real numbers. We are going to grow those teams. We’re not trying to cut any of them because if we can be that much more productive at software development, that means we can build a lot more products, which means we can go get more market share. It doesn’t mean that it’s a fixed amount of work. I think the amount of work is infinite. So, we can be more productive.

The calculus is always: if it’s that expensive to build, is there enough margin for it to be a viable business? If it’s cheaper to build, I can sell it cheaper and still have a great margin. Does that make sense?

That is our lived experience, which is why I’m leaning into hiring more programmers and more tech people.

Arvind, this is great. Tell people what’s next for IBM. What should they be looking for?

Watch what we are going to do on quantum. I think that in about two or three years, you’ll see some surprising results.

Well, we’re going to have to have you back on Decoder very soon as this market shakes out, and then when the quantum bet pays off. That’s an exciting 24 months that I want to make sure you’re back for. Thank you so much for being on Decoder.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.
