Ah, that explains this patchset that was submitted to the Linux kernel today
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on
s390 architecture, we aim to expand the platform's software ecosystem. This
initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU
virtualization on s390....."
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Android devices for a few years, and some of those might still be in user bases; MIPS got taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
This is a serious question. What does IBM, in fact, do? I'm surprised they are still around and apparently relevant. Are they more or less a services and consulting company now?
Putting consumer grade (aka "commodity") hardware in a datacenter and running your infra on it is a bit of a meme, in the sense that it's not the only way of doing things. It was probably pioneered/popularized by Google, but that's because writing great software was their "hammer", i.e. they framed every computing problem as a software problem. It was probably easier for them (= Jeff Dean) to take mediocre hardware and write a robust distributed system on top than the other way around.
There is, however, a completely different vision for how web infrastructure should be, and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple, easy-to-maintain, single-process backend program, run it on a mainframe, and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without rebooting the OS. Credit card transactions and banking software run on this model, for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server on it. If your server crashes, Kubernetes will restart it, so it's resilient. The crash will leave records in some metrics, which will light up some alerts, and eventually people will know about it and fix it.
Another example: your simple program makes a REST GET request. The request fails for some reason. But it was intercepted by a middleware proxy, and that proxy determines that the HTTP response was a 5xx, so it can retry. It retries a few times with properly calibrated backoff and eventually gets a response, which it propagates back to the simple program. The simple program had no idea about all the stuff happening to make it work; it just sent an HTTP request and got a response.
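The retrying proxy described above can be sketched in a few lines. This is a hypothetical sketch, not any real proxy's implementation: `call` stands in for whatever actually issues the HTTP request, and real sidecar proxies (Envoy, service meshes, etc.) do this transparently at the network layer rather than in application code.

```python
import random
import time


def retry_with_backoff(call, retries=3, base_delay=0.1,
                       retryable=lambda status: 500 <= status < 600):
    """Call `call()` (which returns (status, body)); retry 5xx responses
    with exponential backoff plus jitter, up to `retries` extra attempts."""
    for attempt in range(retries + 1):
        status, body = call()
        if not retryable(status):
            # Success (or a non-retryable error like a 4xx): pass it through.
            return status, body
        if attempt < retries:
            # Exponential backoff with jitter: base, 2x base, 4x base, ...
            # randomized so many clients don't retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    # Out of retries: propagate the last failure.
    return status, body
```

The caller never sees the intermediate 5xx responses, which is exactly the "simple program on top of complicated machinery" split: the program makes one request, the machinery absorbs the transient failures.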
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
> There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software.
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
> Credit card transactions and banking software run on this model for example
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
I work in banking. We provide modern solutions for small local banks in the US. That's how our core runs. It's just Java apps (Spring Boot, Jakarta EE) running in the cloud.
> Credit card transactions and banking software run on this model for example
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
A better question would probably be what they don't do; just going off the wiki page (https://en.wikipedia.org/wiki/IBM) for recent history, they're in health care (imaging), weather, video streaming, cloud services, Red Hat, managed infrastructure (which branched off into a company called Kyndryl, which has 90,000 employees in 115 countries), warfare ("In June 2025, IBM was named by a UN expert report as one of several companies 'central to Israel's surveillance apparatus and the ongoing Gaza destruction.'"), etc etc etc.
Basically they do a lot, but they're not showy about it.
IBM has more revenue than Oracle, even if we hear way less about it. Five times smaller than Apple, though. It also has more employees than Microsoft or Alphabet. But it has tighter profit margins than other tech companies.
IBM is not in consumer products or services, so we do not hear about it.
It’s a very different company post the PwC purchase. They have around 1/3 of the revenue from consulting which tends to push the valuation down due to its relative low margin when compared to software. This also inflates the number of employees.
Oracle/TSMC/SpaceX isn’t in consumer products/services, but they are heard about.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
You don't read much about IBM here, but this is the wrong site to look for them. A big chunk of IBM's business comes from other businesses outside the IT industry. You're more likely to read about IBM in the Wall Street Journal; Google finds "IBM" at wsj.com about 48000 times (it finds "oracle" there about 30000 times).
Early in my career I spent some years working at the biggest bank in Canada, they were (and still are) an enormous IBM customer. Hardware, software, consulting, and probably lots of other things I had no visibility into.
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
I don't know how exaggerated this story is, but one of my buddies did his internship at TD. One of his skip managers told him that if you know COBOL, there are departments that will give you a blank cheque during salary negotiation.
Yeah it's hard to say but I believe there's at least some truth to that. I took COBOL off my resume over a decade ago just to combat the volume of recruiters trying to drag me away from the cloud back to on-prem land.
A good friend of mine who worked on a CICS based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks) and then by quitting that job and coming back to the bank to takeover the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
I don't think "know COBOL" is enough. I'm pretty sure I can learn COBOL in a week. It's more about "know COBOL and know all this old stuff like CLIs, etc, and know all these old approaches".
Typically it's not just about knowing COBOL as a language, the bottleneck is having real expertise wrt. highly specific, fiddly proprietary frameworks that are implemented on top of COBOL.
Tandem! Now there's a name I haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic; we were both floored by the capabilities.
(We were in our early 20s and the 'net was just taking off, so there was "magic" everywhere.)
Man ... this question hits me really hard. I was absolutely miserable by the end of my years at the bank, and the part that really fucked me up was that (at the time) I could not understand why all my colleagues weren't.
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
They own Red Hat, and thus make major contributions to Wayland, GNOME, GCC and Java, at the very least.
They have their own Java implementation, with capabilities like AOT before OpenJDK got started on Leyden or Graal even existed; for years it had extensions for value types (since dropped), and, alongside Azul, a cluster-based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside AIX and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, and design processes; they are one of the companies that files huge numbers of patents per year across various fields.
And yes, a services company, one that is actually a consortium of IBM-owned companies, many of them under a different brand (which is followed by "an IBM company").
I work for a big international corp. We pay IBM a blanket sum annually because it's that hard to quantify just how much we rely on their services and licensing.
The licensing is of course just typical rent-seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which happens very rarely).
Everything. They have for decades, and will for decades. And what IBM focuses on is probably worth looking into.
IBM (imho) is on the absolute front line in quantum computers. One could argue whether the number of startups in QC means there is an actual market or not; those are companies that live on VC or the valuation of their stock.
But IBM is not showy, not on the front pages, and does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but make tons of money. Banks, financial institutions, energy, logistics, health care, etc etc. If IBM thinks these companies will benefit from using QC from IBM (and will pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM have run the numbers and decided that the money spent on the engineering and research required is outweighed by the money they can earn on QC services: QCs powerful enough to run the QC-supported algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
Anyone who can't get any better AI accelerators elsewhere? Last I heard, these things were sold out for years on end. And anyone who can make one, can sell them.
Sort of, in the form of PowerPC, which was an Apple-IBM-Motorola (“AIM”) collaboration. It’s closely related to IBM’s Power line, but more like a predecessor than a sibling.
Yes. Apple used PowerPC, and PowerPC was also in the Xbox 360, PS3, Wii, and Wii U. It was also widespread in embedded sectors like networking, automotive, and aerospace.
IBM eventually stepped away from the embedded market and lost their foothold in consoles as well. While Raptor did offer POWER9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
No, IBM has Unisys contractors, not employees. All the techs I’ve worked with from IBM have been a nightmare. One dropped an entire drive array on the ground, and tried to install it despite it being bent and no longer fitting on the rack. I have been acquired by IBM twice. They are a nightmare, horrible company.
IBM has plenty of hardware techs. They're called system services representatives (SSRs) and if you got a Unisys contractor, that just means you're not spending enough money for IBM to send an SSR.
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
They exist to swallow up profitable companies, extract any “unnecessary” overhead (like benefits, PTO, pay that isn’t rock bottom), and package into large enterprise licensing agreements.
I was shocked when IBM acquired Red Hat a few years ago. I had silently assumed at the time that Red Hat was far bigger than IBM nowadays, so the reverse would have made more sense to me.
honestly I think it's a net positive (for me at least) because it ensures Fedora has great POWER support (I'll never be able to afford a POWER machine at this rate, but the architecture is an absolute pleasure to work with whenever I have to)
They sell (managed) database appliances (on z and Power) and associated software (think the platform/HANA parts of SAP) - all state-of-the-art in the late 1990s but since then put on maintenance mode and it shows (a bit like oracle...).
Their hardware is still cool custom built silicon and imo state of the art, but since k8s, high-speed-network and multi-TB-machines (for <100k$) are here and run Linux no new venture buys into that anymore (except for gulf states...).
Before, when the competition was a cluster of Itanium/VMS or SPARC/Solaris and the associated contract, no one bought into that either at scale, but also no one using IBM had a very compelling reason to switch everything around.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since when a PC had 128MB of memory, and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data at a cost, in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also, remember the alternative JVM OpenJ9, anyone?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood they are not paying IBM anymore: commercial banks, governments and insurance started digitizing in the 60s (with custom software), and if those companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (manufacturing and retail chains, which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s and electronic CAD started in the 80s), and while they likely used IBM in the past (SAP was big on DB2) they might not use it anymore (it also helps that they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who even successfully fought to ditch Oracle) or just built their whole platform (Google). If those companies predate Unix they usually fought hard to get rid of IBM (Microsoft, Amadeus).
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware, support for their products and ostensibly has some people left to develop their products... The days when that was a big thing and IBM produced all the stuff they sell support for now, have been long gone. A fun link to see how their "product development" operates nowadays is this discussion to bring gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr "hey you opensource company, we are IBM and managed to pay someone to port a go compiler to z/OS. Now we have a customer who wants to use gitlab with z/OS. Would you like to make your software part of our product offering?".
A fun fact is that - even within IBM - access to the real mainframe seems to be very limited which shows a bit in the discussion linked above and also with an ex-Kyndryl-person saying: "oh, I once had a contract where we replaced the mainframe and we ran that on Linux-boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine"
> dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
> So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers, nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when if you had an Apple computer with a 6502 CPU you could also buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and 6502.
Thus with this ARM accelerator, you will be able to run on IBM mainframes, in VMs, also Linux-on-ARM instances or Windows-on-ARM instances. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
I'm thinking maybe as a complement to x86 offerings, and eventual displacement as a primary offering; I do not see them ditching POWER.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close; maybe Fujitsu's A64FX.
I do not think that I have seen any public benchmark for more than a decade that can compare ARM-based CPUs with IBM POWER CPUs.
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
I thought PPC was supposed to be highly performant, but not very efficient. I didn’t think ARM (at least non-Apple ARM) was hitting that level of performance yet. I thought ARM was by far more efficient, but not quite there in terms of raw performance.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
Are you guys sure you're not confusing product lines? PPC is a Power ISA architecture, but hasn't been pushing desktop/server-level performance for, what, almost 20 years? It's an embedded chip now, and AFAIK IBM doesn't even make them any more. Power (currently "10th gen"-ish) is the performant architecture, used in the computers formerly known as i-Series, formerly known as RS/6000. It's pretty fast, but not price competitive. They aren't really the same thing.
"PowerPC" was a modification of the original IBM POWER ISA, which was made in cooperation by IBM, Motorola and Apple.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA instead of the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but then they returned to the original name.
Thanks for the lecture. My point is that people often confuse PPC in the embedded space (still in production) with Power in the enterprise space, where no one I know refers to it as 'PPC' other than in historical artifacts like 'ppc64le' (we run mostly AIX), and hasn't since the G5 days. Same/similar ISA, very very different performance expectations. YMMV.
There isn't really an arm64 processor available that runs as fast as a Power10 processor, and there isn't really a Power10 processor that runs as efficiently as an arm64 processor, so 'competitive' is probably the wrong word.
IBM has two architectures which are de-facto only used by them, s390x and ppc64le. They have poured a lot of resources into having open source software support those targets, and this announcement might mean they find it easier/cheaper going forward to virtualize ARM instead and maybe even migrate slowly to ARM.
I think they see customers wanting to have the flexibility to move to ARM and this is the fastest way to say they support ARM workloads. Maybe this is a path for IBM to eventually use ARM chips down the road, but I see this as being more about meeting customers where they think the demand is today rather than an explicit guess for tomorrow.
ppc64le has other machines. Raptor off the top of my head, but there's also that weird notebook project that seems to be talked about once every few years and probably won't ever happen and some pretty cool stuff in the amiga space (I don't know if that's strictly le but power is supposed to be bi-endian)
Once you parse the marketing speak, looks like there may be ARM ISA silicon in future System Z.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
I think the #1 use case here is allowing AI/cloud workloads the ability to execute against the mainframe's data without ever leaving the secure bubble. I.e., bring the applications to the data rather than the data to the applications.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
It is wild how ARM - which was kind of a niche company and ISA - has taken the world by storm since the modern smartphone was born. Now their designs make their way upwards to big iron and AI datacenters.
Maybe I don't know enough technical details about these CPU architectures or IP agreements, but I don't see why IBM couldn't have done what Arm did but with PowerPC.
I wonder if we end up with z series running on arm long term.
The value in z series is in the system design and ecosystem, IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but likely able to be able to reduce investment in silicon and supporting software.
You can run 1960s System/360 binaries unmodified on modern z/OS. The system also uses a lot of "high level assembler" and "system provided assembly macros" making a complete architecture switch extremely painful and complicated.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
I don't think that would change if the underlying architecture changes; IBM has been committed to backward compatibility for a long time. Some hypothetical future mainframe-class IBM ARM would undoubtedly be able to virtualize a 360/370/390 without breaking a sweat. And ARM will undoubtedly enable IBM to add custom emulation hardware to their spin on ARM if they need it.
IBM is desperate to keep the mainframe relevant. The typical transactional workloads are going to stay on the mainframe, and by bolting on ARM "for AI" they're giving their customer CIOs a reason to defend their decision to stick with the mainframe.
This certainly has been in the making for longer than the "everything we do must be for AI" bubble. In fact s390 has its own on-die inference engines and they have access to the same caching mechanisms as the main processor (which are quite insane).
That, weirdly, should be fine; ARM is bi-endian in the sense of being perfectly happy to run either way. In fact, the easiest way I know of to test software on a big endian system is to run a perfectly ordinary Raspberry Pi with NetBSD's big endian port for it:)
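For anyone wondering why you'd bother testing on a big-endian host at all: any code that reads raw bytes without pinning a byte order silently flips multi-byte values when it moves between endiannesses. A minimal illustration using Python's standard `struct` module:

```python
import struct

value = 0xDEADBEEF

# The same 32-bit integer serializes to different byte sequences
# depending on the byte order you ask for:
big = struct.pack(">I", value)     # b'\xde\xad\xbe\xef' on every host
little = struct.pack("<I", value)  # b'\xef\xbe\xad\xde' on every host

# Bug pattern: bytes written little-endian but read back as big-endian
# come out as a different number entirely.
misread = struct.unpack(">I", little)[0]
assert misread == 0xEFBEADDE
assert misread != value
```

Code that always uses explicit `>` / `<` format prefixes (or network byte order) is portable; code that uses native order (`=` or plain memory casts) is what a big-endian test box like that NetBSD Pi will catch.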
My gut feeling leans more toward the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."
https://patchwork.kernel.org/project/linux-arm-kernel/cover/...
things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
Is there really SW that's limited to (Linux) ARM and not x86?
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include multiple support for multiple architectures; the app developer will pick what they think is relevant for their users, but mips and x86 are options. There were production x86 androids for a few years, some of those might still be in user bases; mips got taken out of the Native Development Kit in 2018 so probably not very relevant anymore.
There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server on it. If your server crashes, Kubernetes will restart it, so it's resilient. Metrics will record the crash, alerts will light up, and eventually people will learn about it and fix it.
Another example: your simple program makes some REST GET query, and that query fails for some reason. But the query was intercepted by a middleware proxy, which sees that the HTTP response was a 5xx, so it can retry. It retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the stuff happening to make it work; it just sent an HTTP request and got a response.
There's a lot of complicated machinery to enable simple programs to be part of a resilient architecture. That's the goal, anyway.
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
Basically they do a lot, but they're not showy about it.
IBM is not in consumer products nor services so we do not hear about it.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
https://en.wikipedia.org/wiki/Tandem_Computers
https://en.wikipedia.org/wiki/BASE24
https://en.wikipedia.org/wiki/Z/OS
A good friend of mine who worked on a CICS based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks) and then by quitting that job and coming back to the bank to takeover the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
Tandem! Now there's a name i haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic, we were both floored with the capabilities.
/we were in our early 20s and the inet was just taking off so there were lots of "magic" everywhere
https://www.youtube.com/watch?v=SSSB7ZTSXH4
The Remarkable Computers Built Not to Fail by Asianometry
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
They have their own Java implementation (OpenJ9), with capabilities like AOT compilation before OpenJDK got started on Leyden or Graal even existed; for years it had extensions for value types (since dropped), and, alongside Azul, a cluster-based JIT compiler that shares compiled code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside AIX and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, design processes, one of the companies that does huge amounts of patents per year across various fields.
And yes, a services company, which is actually a consortium of IBM-owned companies, many of them under a different brand (which is followed by "an IBM company").
Licensing of course is just typical rent-seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which happens very rarely).
IBM (imho) is at the absolute frontline in quantum computers. One could argue whether the number of startups in QC means there is an actual market or not; those are companies that live on VC money or the valuation of their stock.
But IBM is not showy, not on the front pages, and does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but make tons of money: banks, financial institutions, energy, logistics, health care, etc. If IBM thinks these companies will benefit from using QC from IBM (and will pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM has run the numbers and decided that the money spent on the required engineering and research is outweighed by the money to be earned on QC services: QCs powerful enough to run the QC-supported algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
What I don't get however is who'd use their custom accelerators for AI inference.
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
IBM eventually stepped away from the embedded market and later lost their foothold in consoles as well. While Raptor did offer POWER9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
But yes they’re mostly enterprise/services/mainframes not anything overly consumer
You can see their roadmap here:
https://www.ibm.com/roadmaps/
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since back when a PC had 128MB of memory, and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data, at a cost/in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (you all remember the alternative JVM OpenJ9, don't you?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood it is no longer paying IBM. Commercial banks, governments and insurers started digitizing in the 60s (with custom software), and if those companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (manufacturing and retail chains, which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s, and electronic CAD started in the 80s), and while they likely used IBM in the past (SAP was big on DB2), they might not use it anymore (it also helps that they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who even successfully fought to ditch Oracle) or just built their whole platform themselves (Google). If those companies predate Unix, they usually fought hard to get rid of IBM (Microsoft, Amadeus).
Consulting/outsourcing services have been spun out into Kyndryl, so nowadays IBM only sells hardware and support for their products, and ostensibly has some people left to develop those products... The days when that was a big thing and IBM produced all the stuff they now sell support for are long gone. A fun link to see how their "product development" operates nowadays is this discussion about bringing gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr: "hey you open source company, we are IBM and managed to pay someone to port a Go compiler to z/OS. Now we have a customer who wants to use GitLab with z/OS. Would you like to make your software part of our product offering?". A fun fact is that, even within IBM, access to a real mainframe seems to be very limited, which shows a bit in the discussion linked above and also in an ex-Kyndryl person saying: "oh, I once had a contract where we replaced the mainframe, and we ran that on Linux boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine."
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers, nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when, if you had an Apple computer with a 6502 CPU, you could buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and the 6502.
Thus with this ARM accelerator, you will be able to run on IBM mainframes, in VMs, also Linux-on-ARM instances or Windows-on-ARM instances. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close, except maybe Fujitsu's A64FX.
Sun had the same problem after 2001 dotcom when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building using a custom ARM platform. Then you have no easy comparison with standard servers.
They will probably market the ARM inclusion similarly - as something that the package provides.
As far as POWER goes, I think only Raptor[1] does direct marketing of the power (hehe) and capabilities.
[1]https://www.raptorcs.com/
https://www.ibm.com/products/power
The i systems are just POWER machines with different firmware.
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
And the single-thread side isn't that good either, but SMT8 is quite a nice software-licensing trick.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA instead of the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but then they returned to the original name.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
The value in Z series is in the system design and ecosystem, so IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
edit: s/390 is big endian.
https://en.wikipedia.org/wiki/Linaro
https://www.qualcomm.com/news/releases/2025/09/qualcomm-achi...