By the end of Day 3, it seemed quite clear that Qualcomm's legal team and its position were far ahead of ARM's. I feel the following snippet sums up the whole week:
"Qualcomm’s counsel turned Arm’s Piano analogy on its head. Arm compared its ISA to a Piano Keyboard design during the opening statement and used it throughout the trial. It claimed that no matter how big or small the Piano is, the keyboard design remains the same and is covered by its license. Qualcomm’s counsel extended that analogy to show how ridiculous it would be to say that because you designed the keyboard, you own all the pianos in the world. Suggesting that is what Arm is trying to do."
This is really confusing me. Is Arm seriously claiming that all design work that makes use of their ISA is derivative work? I feel like I have to be misunderstanding something.
Wouldn't that be similar to the Google v Oracle Java API case except the claim would be even stronger - that all programs making use of the Java API were derivative works of the Java API and thus subject to licensing arrangements with Oracle?
Or similarly, a hypothetical claim by Intel that a compiler such as LLVM is derivative work of the x86 ISA.
That can't possibly be right. What have I misunderstood about this situation?
Yeah, I did a double-take when I read that too - but that does seem to be the case. From a different article [^1]:
> "Throughout expert testimony, Arm has been asserting that all Arm-compliant CPUs are derivatives of the Arm instruction set architecture (ISA)."
> "Arm countered with an examination of the similarities in the register-transfer language (RTL) code, which is used in the design of integrated circuits, of the latest Qualcomm Snapdragon Elite processors, the pre-acquisition Nuvia Phoenix processor, and the Arm ISA (commonly referred to as the Arm Arm)."
Were they trying to argue that the RTL is too similar to the pseudocode in the ARM ARM or something?? That is absolutely crazy. (Of course, [when we have a license agreement and] you publish a public specification for the interface, I am going to use it to implement the interface. What do you expect me to do, implement the ARM ISA without looking at the spec?)
edit: Wow, I guess this really is what they were arguing?? Look at the points from Gerard's testimony [^2]. That is absolutely crazy.
I would assume (but don't actually know) that compiler authors make extensive use of the (publicly available) ARM ARM as well. But claiming that any associated LLVM backends are a derivative work seems absurd to me.
I really feel like I must have misunderstood something here.
Bloomberg article indicates the jury is still out on the question of whether or not Nuvia breached the license. They only agreed that Qualcomm's own ALA covers use of the tech in the event that they happen to possess it.
In other words, Nuvia failing to destroy the designs might or might not have been a breach of contract. At least if I understand all of this correctly. But I feel like I must be missing some key details.
Is Arm seriously claiming that all design work that makes use of their ISA is derivative work?
I assume Arm has some patents on the ISA [1] and the only way to get a license to them is to sign something that effectively says all your work exists at Arm's sufferance. After that we're just negotiating the price.
[1] You and I hate this but it's probably valid in the US.
ARM has unilaterally cancelled both the Nuvia architecture license agreement (ALA) and the Qualcomm ALA.
Because all Arm ALAs are secret, we do not know if Arm has any right to do such a unilateral cancellation.
It is likely that the ALAs cannot be cancelled without good cause, such as breach of contract, so the cancellation of the Qualcomm ALA would presumably be invalid now, after the trial.
The conflict between Arm and Qualcomm started because the Qualcomm ALA, which Qualcomm says is applicable to any Qualcomm product, specifies much lower royalties than the Nuvia ALA.
This is absolutely normal, because Qualcomm sells a huge number of CPUs for which it has to pay royalties, while Nuvia would have sold a negligible number of CPUs, if any.
Arm receives much greater revenue under the Qualcomm ALA than it would have received from Nuvia.
Therefore the real reason for the conflict is that Qualcomm has stopped using CPU cores designed by Arm, so Arm no longer receives any core-licensing royalties from Qualcomm, and those royalties would have been higher than the royalties the ALA specifies for Qualcomm-designed cores.
When Arm granted an architectural license to Nuvia, it did not expect that the cores designed by Nuvia could compete with Arm-designed cores. Nuvia being bought by Qualcomm changed that, and Arm is now attempting to crush any competition for its own cores.
> Or similarly, a hypothetical claim by Intel that a compiler such as LLVM is derivative work of the x86 ISA.
Intel has been lenient toward compiler implementers, but their stance is that emulation of x86 instructions still under patent (e.g., later SSE, AVX512) is infringing if not done under a license agreement. This has had negative implications for, for example, Microsoft's x86 emulation on ARM Windows devices.
(I'm guessing Apple probably did the right thing and ponied up the license fees.)
Wholeheartedly agree. I understand where ARM is coming from, but my god, the legal teams from the two parties were night and day apart. And from the evidence, ARM isn't even asking for a lot more money. They are likely fighting this on principle, but their explanations were weak, very weak. (They were even worse than Apple during the Apple vs Qualcomm case.)
Throughout the whole thing, I thought Qualcomm was way more professional. ARM's case was that what they think was written in the contract, what they "should" have written in the contract, and what Qualcomm showed clearly contradict one another.
It is more of a lesson for ARM to learn. And now the damage has been done. This also makes me wonder who was pushing this lawsuit. SoftBank?
I also gained more respect for Qualcomm, after what they showed in the Apple vs Qualcomm case and here.
Side note: ARM's designs have caught up. The Cortex-X5 is close to Apple's design. We should have news about the X6 soon.
I thought the entire point of this was that Arm was trying to prevent Qualcomm from switching away from products that fall under the TLA. Isn't revenue from TLA fees hugely different from that of ALA fees?
"I don't think either side had a clear victory or would have had a clear victory if this case is tried again," Noreika told the parties.
After more than nine hours of deliberations over two days, the eight-person jury could not reach a unanimous verdict on the question of whether startup Nuvia breached the terms of its license with Arm.
> Arm’s opening statement.. presented with a soft, almost victim-like demeanor. Qualcomm’s statement was more assertive and included many strong facts (e.g., Arm internal communications saying Qualcomm has “Bombproof” ALA). Testimonials were quite informative and revealed many interesting facts, some rumored and others unknown (e.g. Arm considered a fully vertically integrated approach).
> The most important discussion was whether processor design and RTL are a derivative of Arm’s technology.. This assertion of derivative seems an overreach and should put a chill down the spine of every Arm customer, especially the ones that have ALA, which include NXP, Infineon, TI, ST Micro, Microchip, Broadcom, Nvidia, MediaTek, Qualcomm, Apple, and Marvell. No matter how much they innovate in processor design and architecture, it can all be deemed Arm’s derivative and, hence, its technology.
Wow, this has been settled already? I mean, I am sure ARM will appeal.
ARM did massive damage to their ecosystem for nothing. There will for sure be consequences of suing your largest customer.
Lots of people that would have defaulted to licensing designs from ARM for whatever chips they have planned will now be considering RISC-V instead. ARM just accelerated the timeline for their biggest future competitor. Genius.
Your otherwise on-point piece contains the common misconception that ARM began in embedded systems. When they started, they had a full computer system with very competitive CPU performance for the time:
https://en.m.wikipedia.org/wiki/Acorn_Archimedes
They pivoted to embedded shortly after spinning off into a separate company.
Acorn Computers started off much earlier (I owned an Acorn Atom when it was released), which begat the BBC Micro, then the Electron, and then the Archimedes.
At that time ARM was just an architecture owned by Acorn. They created it with VLSI Technology (Acorn's silicon partner) and used the first RISC chip as a second processor for the BBC Micro before then pivoting it to the Archimedes.
Acorn itself was initially purchased by Olivetti, who eventually sold what remained years later to Morgan Stanley.
The ARM division was spun off as “Advanced RISC Machines” in a deal with both Apple and VLSI Technology after Olivetti came onto the scene.
It is this company that we now know as Arm Holdings.
So it’s not entirely accurate to claim “they had a full computer system” as that was Acorn Computers, PLC.
It would be more accurate to say that there haven't been any RISC-V designs for Qualcomm's market segment yet.
As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM. The people designing their own cores just haven't bothered to do so yet.
RISC-V isn't competitive in 2024, but that doesn't mean that it still won't be competitive in 2030 or 2035. If you were starting a project today at a company like Amazon or Google to develop a fully custom core, would you really stick with ARM - knowing what they tried to do with Qualcomm?
Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.
We still have problems with software not being optimised for Arm these days, which is just astounding given its prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present on x86, and Google has their own Arm-based chips.
Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.
For all its success it's still kind of a niche language (and even with the number of Google compiler developers, they are spread thin between V8, Go, Dart, etc.).
I think the keys to RISC-V in terms of software will be:
LLVM (gives us C, C++, Rust, Zig, etc.); this is probably already happening?
JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest)
JVM (does Oracle have any interest at all? Could be a major roadblock unless Google funds it; again depends on Android interest).
So Android on RISC-V could really be a game-changer, but Google just backed down a bit recently.
.NET (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious clear market share/funding.
It'll remain a niche, but I really do think Android devices (or something else equally popular, a Chinese home PC?) would be the game-changer to push demand over the top.
Considering how often ARM processors are used to run an application on top of a framework over an interpreted language inside a VM, all to display what amounts to kilobytes of text and megabytes of images, using hundreds of megabytes of RAM and billions of operations per second, I'm surprised anyone even bothers optimizing anything, anymore.
Not an issue, because except for a few Windows or Apple machines, everything on ARM is compiled and odds are they have the source. Give our EEs a good RISC-V and a couple of years later we will have our stuff rebuilt for that CPU.
The whole reason ARM transition worked is that you had millions of developers with MacBooks who because of Rosetta were able to seamlessly run both x86 and ARM code at the same time.
This meant that you had (a) strong demand for ARM apps/libraries, (b) large pool of testers, (c) developers able to port their code without needing additional hardware, (d) developers able to seamlessly test their x86/ARM code side by side.
Apple is the only company that has managed a single CPU transition successfully. That they actually did it three times is incredible.
I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of bugs due to the architecture? You can't convince me that choosing an esoteric, lightly used arch like Big Endian PowerPC won't come with bugs related to that you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.
It happened to me: a small project I put on my ARM-based AWS server was not working, even though it was compiled for the architecture.
We've seen compatibility layers between x86 and ARM. Am I correct in thinking that a compatibility layer between RISC-V and ARM would be easier/more performant since they're both RISC architectures?
> RISC-V isn't competitive in 2024, but that doesn't mean that it still won't be competitive in 2030 or 2035.
We can't know, and won't until 2030 or 2035. Humans are just not very good when it comes to projecting the future (if the predictions of the 1950s-60s had been correct, I would be typing this up from my cozy cosmic dwelling on a Jovian or Saturnian moon, after all).
History has numerous examples of better ISA and CPU designs losing out to a combination of mysterious and other compounding factors that are usually attributed to «market forces» (whatever that means to whomever). The 1980s-90s were the heyday of some of the most brilliant ISA designs, and nearly everyone was confident that design X or Y would become dominant, or the next best thing, or anywhere in between. Yet we were left with an x86 monopoly for several decades that has only recently turned into a duopoly because of the arrival of ARM into the mainstream, and through a completely unexpected vector: the advent of smartphones. It was not the turn that anyone expected.
And since innovations tend to be product oriented, it is not possible to even design, let alone build, a product with something that does not exist yet. Breaking new ground in CPU design requires the involvement of a large number of driving and very un-mysterious (so to speak) forces, and exorbitant investment (from the product design and manufacturing perspectives) that is available only to the largest incumbents. And even that is not guaranteed, as we have seen with the Itanium architecture.
So unless the incumbents commit and follow through, it is not likely (at least not obvious) that RISC-V will enter the mainstream and will rather remain a niche (albeit a viable one). Within the realms of possibility it can be assessed as «maybe» at this very moment.
A lot of the arguments I’m seeing ignore the factor that China sees ARM as a potential threat to its economic security and is leaning hard into RISC-V. It’s silly to ignore the largest manufacturing base for computing devices when talking about the future of computing devices.
I would bet on China making RISC-V the default solution for entry-level and cost-sensitive commodity devices within the next couple of years. It’s already happening in the embedded space.
The row with Qualcomm only validates the rationale for fast-iterating companies to lean into RISC-V if they want to meaningfully own any of their processor IP.
The fact that the best ARM cores aren’t actually designed by ARM, but ARM claims them as its IP, is really enough to understand that migrating to RISC-V is eventually going to be on the table as a way to maximize shareholder value.
> As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM
Lack of a reg+shifted-reg addressing mode, and of things like BFI/UBFX/TBZ.
The perpetual promise of magic fusion inside the cores has not played out. No core exists, to my knowledge, that fuses more than two instructions at a time. Most of those ARM operations take more than two RISC-V instructions to replicate; thus no core exists that could fuse them.
The Zba extension (mandatory in the RVA23 profile) provides `sh{1,2,3}add rd, rs1, rs2`, i.e. `rd = (rs1 << {1,2,3}) + rs2`, so a fusion with a subsequent load from `rd` would only require fusing two instructions.
And which cores currently support it? Unless the answer is “all”, it will not be used. Feature detection works for well-isolated high-performance kernels using things like AVX. No one's going to do feature detection for load/store instructions. Which means that all your binaries will be compiled to the lowest common denominator.
As I said, it's mandatory in RVA23 profile. In fact it has been mandatory since RVA22 profile. A bunch of cores appear to support RVA22.
Whether prebuilt distribution binaries support it or not, I can't tell. A quick glance at the Debian and Fedora wiki pages doesn't reveal what profile they target, and I CBA to boot an image in qemu to check. In the worst case they target only GC, so they won't have Zba. Source distributions like Gentoo would not have a problem.
In any case, talking about the current level of extension support is moving the goalposts. You countered "there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM" with "Lack of reg+shifted reg addressing mode", which is an argument about ISA, not implementation.
Ubuntu announced they want to support RVA23 in their next LTS, 25.04 IIRC. That doesn't really make sense unless we get new hardware with RVA23 support by then.
The Zba extension is in both the RVA22 and RVA23 profiles, meaning application cores (targeting consumer devices like smartphones) designed in the past few years almost certainly have the shXadd instructions, in order to be compatible with the mainstream software ecosystem.
You are aware that hardware takes time to build, tapeout and productise?
On the open-source front, I can right now download an RVA23-supporting RISC-V implementation, simulate the RTL, and have it outperform my current Zen1 desktop per cycle in scalar code: https://news.ycombinator.com/item?id=41331786 (see the XiangShanV3 numbers)
RISC-V has existed for over a decade, and in that time no one has come close to building a competitive non-microcontroller-level CPU with it.
How long is this supposed to take?
How long until it is accepted that RISC-V looks like a semiconductor nerd snipe of epic proportions, designed to divert energy away from anything that might actually work? Even if it was not designed for this, that is definitely what it has achieved.
The name RVA23 might give you a hint about roughly when the extensions required for high-performance implementations to be viable were standardized.
The absolutely essential bitmanip and vector extensions were just ratified at the end of 2021 and the also quite important vector crypto just in 2023.
So it took 10 years to ratify absolutely essential extensions?
Somehow I suspect in 10 years there will be a new set of extensions promising to solve all your woes.
Seriously, the way to do this is for someone to just go off and do what it takes to build a good CPU and for the ISA it uses to become the standard. Trying to do this the other way around is asking for trouble, unless sitting in committees for twenty years is actually the whole idea.
I think you’re missing a point here. The fact that this was not part of the initial design speaks volumes, since it is entirely obvious to anybody who has ever designed an ISA or looked at the assembly of modern binaries. Look at AArch64 for an example of an ISA designed for actual use.
Your statement that "RISC-V in 2024 is slow" is followed by the non sequitur that this will continue to be the case for a long time.
Ventana announced their second-gen Veyron 2 core at the beginning of this year, and they are releasing a 192-core 4nm chip using it in 2025. They claim Veyron 2 has an 8-wide decoder with a uop cache allowing up to 15-wide issue, plus a 512-bit vector unit. In raw numbers, they claim SPECint per chip is significantly higher than an EPYC 9754 (Zen 4) with the same TDP.
We can argue about what things will look like after it launches, but it certainly crushes the idea that RISC-V isn't going to be competing with ARM any time soon.
How is Geekbench any good at comparing RISC-V to ARM? Geekbench isn't a native RISC-V application, let alone has the wherewithal to correctly report any basic information like frequency or core count. You haven't even prefaced these either, and drew conclusions from them.
Also, actually finding the chip in question via search is impossible.
Neutral party here (well, not so neutral; I upvoted you). It's a reasonable question at first blush. However, here you're doubling down with another comment that very clearly indicates you didn't read the link provided.
I highly recommend it, most incisive RISC-V article I've read.
Your reply does not address the great-great-great-grandparent.
RISC-V does indeed not have pipeline reordering.
The Geekbench score thing is a strawman invented to distract from that, no one has mentioned Geekbench except the people arguing it doesn't matter. Everyone agrees, it doesn't matter. So why pound the table about it?
If you're talking physical chips you can buy off the shelf, sure.
But if you're talking IP, which would be what matters for the argument being made (core IP to use in a new design), here's where we're at (thanks to camel-cdr- on reddit[0]):
What a disaster for ARM. Qualcomm building out new chips targeting the PC market should have been a victory lap for ARM, not the source of a legal battle with their largest customer. Now potential customers might be a little more wary of ARM's licensing practices compared to the free RISC-V ISA.
> Now potential customers might be a little more wary of ARMs licensing practices compared to the free RISC-V ISA.
This is unbelievably understated. If I were Qualcomm, I would put parts of the Nuvia team's expertise to work designing RISC-V application cores for their various SoC markets.
It's a bit chicken-and-egg. People won't port their software if there's no popular targets available. Even if there are targets, if the popular targets don't perform well, people will assume the ISA is not worth porting to.
No, it's not just around-the-corner but Qualcomm has a role to play here. Not like they should just sit on the sidelines and say "call me when we are RISC-V"
As I said on another forum yesterday, Qualcomm almost always wins its legal battles; when they lose, it's not because they are wrong but usually only because their lawyers screwed up (the Broadcom lawsuit of ~2012). It's kind of a Boy Scout company in a legal sense, and they are very careful. They retrained some of their best engineers as lawyers to help them succeed in court battles ...
ARM should be able to re-file the lawsuit and get financial damages out of Nuvia, which Qualcomm will need to pay. But I doubt the damages will be high enough to bother Qualcomm. I don't think ARM will even bother.
As far as I could tell, this was never about money for ARM. It was about control over their licensees and the products they developed. Control which they could turn into money later.
It always seemed like [from ARM's point of view]: "oh, you're going to sell way more parts doing laptop SoCs with the license instead of servers... if we'd known that before, we would've negotiated a different license where we get a bigger cut"
The Nuvia ALA (architecture license agreement) specified much higher royalties, i.e. a much bigger cut for Arm, than the Qualcomm ALA.
The official reason for the conflict is that Qualcomm says the Qualcomm ALA is applicable to anything made by Qualcomm, while Arm says that the Nuvia ALA must apply to any Qualcomm product that includes a trace of something designed at the former Nuvia company.
The real reason for the conflict is that Qualcomm is replacing the CPU cores designed by Arm with CPU cores designed by the former Nuvia team in all their products. Had this not happened, Arm would have received much greater revenue as a result of Nuvia being bought by Qualcomm, even when applying the Qualcomm ALA.
The whole ALA essentially boils down to "you pay us because other companies made our ISA popular".
This is why companies are pushing toward RISC-V so hard. If ARM's ISA were open, then ARM would have to compete with lots of other companies to create the best core implementations possible or disappear.
The server market for Arm-based computers remains negligible.
The number of servers with Arm-based CPUs is growing fast, but they are not sold on the free market, they are produced internally by the big cloud operators.
Only Ampere Computing sells a few Arm-based CPUs, but they have fewer and fewer customers (i.e. mainly Oracle), after almost every big cloud operator has launched their own server CPUs.
So for anyone hoping to enter the market of Arm-based server CPUs the chances of success are extremely small, no matter how good their designs may be.
In the context of ARM machines, it's [historically] been the case that most of the devices are not servers (although that's slowly changing nowadays, which is nice to see!!)
Seemed like ARM was desperate to let Apple and Apple alone make decent CPUs in something besides servers. Having an ok core on mobile or desktop was unacceptable.
This lawsuit will cast a cloud of darkness on any startup company that wants to build a new arm chip design! What a terribly stupid thing for ARM to do - they have basically pointed a gun at their own head and pulled the trigger!
This lawsuit scared away any future startups from using ARM technology if they want the option to be acquired. This in turn means that all the awesome new chip IP is going to be designed for some other ISA and that's almost certainly RISC-V.
Meanwhile, Qualcomm's ALA expires in 2033. They will almost certainly have launched RISC-V chips by then specifically because they know their royalties will be going WAY up if they don't make the switch.
ARM has been a shambles for years since the acquisition by SoftBank in 2016. The China subsidiary went rogue and had to be dealt with at a time when they were considering an IPO. This has Oracle-Java-like lessons learned waiting to be written, sprinkled with an Intel-like squandering of market position. Or even the methane-producing pig farmer in Underworld in Mad Max Beyond Thunderdome.
My understanding was that the central issue was whether the Nuvia “modify” license “transferred” to Qualcomm. If so, wouldn’t that be a legal issue to be resolved by a motion for summary judgment (MSJ)? Why was this tried to a jury?
Qualcomm already had a "modify" licence, back from when they were doing their own custom ARM cores.
So the actual central issue was if Qualcomm had the right to transfer the technology developed under the Nuvia architecture license to the Qualcomm architecture license.
My memory is that before the lawsuit, Qualcomm were using both arguments, "we have the right to transfer the licence, and even if we didn't, we have our own license".
The first argument was always the weaker one: the license explicitly banned licence transfers between companies without explicit prior permission. Qualcomm were arguing that the fact they already had a licence counted as prior permission.
But ARM bypassed that argument by simply terminating the Nuvia licence, and by the time of the lawsuit Qualcomm was down to the second argument. Which was fine; it was the much stronger argument.
I was wondering about that too. If I understand correctly, the Arm license includes a clause stipulating that it is invalidated if the company is acquired unless Arm grants permission.
It makes sense. You wouldn't want to license something to a startup only for your competitor to buy them out from under you to get an advantage / bypassing talking to you directly.
Qualcomm has for years sworn they didn't transfer the existing Nuvia tech: it's the Nuvia team, and a from-scratch implementation. Qualcomm was saying pretty strongly that they didn't allow any direct Nuvia IP to be imported and pollute the new design.
They saw this argument coming from day 1 & worked to avoid it.
But every time this case comes up, folks immediately revert to the ARM position that it's Nuvia IP being transferred. That alone is taking ARM's side, and seems not to resemble what Qualcomm tried to do.
Acqui-hires are very common. You can negotiate who is coming with you (or force getting the entire team and not just the A players), agree on pay raises during the transfer, stock or other important roles, etc. It's less unsettling for the employees too, not having to go through the interview pipeline like a walk-in would.
Sometimes it's the entire goal: leave a company to fill a need it can't solve itself, with the plan to get bought back in later.
A lot of companies with great IPR have very terrible management, now ARM has joined that club! Pay their license fees religiously for almost 30 years and they turn around and sue you, GEEZ!
Source for the piano analogy quote: https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-3...
[^1]: https://www.forbes.com/sites/tiriasresearch/2024/12/19/arm-s...
[^2]: https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-2...
I really feel like I must have misunderstood something here.
In other words, Nuvia failing to destroy the designs might or might not have been a breach of contract. At least if I understand all of this correctly. But I feel like I must be missing some key details.
I assume Arm has some patents on the ISA [1] and the only way to get a license to them is to sign something that effectively says all your work exists at Arm's sufferance. After that we're just negotiating the price.
[1] You and I hate this but it's probably valid in the US.
Because all Arm ALAs are secret, we do not know if Arm has any right to do such a unilateral cancellation.
It is likely that the ALAs cannot be cancelled without good cause, such as a breach of contract, so the cancellation of the Qualcomm ALA must now, after the trial, be invalid.
The conflict between Arm and Qualcomm started because the Qualcomm ALA, which Qualcomm says is applicable to any Qualcomm product, specifies much lower royalties than the Nuvia ALA.
This is absolutely normal, because Qualcomm sells a huge number of CPUs for which it has to pay royalties, while Nuvia would have sold a negligible number of CPUs, if any.
Arm receives much greater revenue under the Qualcomm ALA than it would have received from Nuvia.
Therefore the real reason for the conflict is that Qualcomm has stopped using CPU cores designed by Arm, so Arm no longer receives any core-licensing royalties from Qualcomm; those royalties would have been higher than the royalties the ALA specifies for Qualcomm-designed cores.
When Arm granted Nuvia an architectural license, it did not expect that cores designed by Nuvia could compete with Arm-designed cores. Nuvia being bought by Qualcomm changed that, and Arm is now attempting to crush any competition for its own cores.
Intel has been lenient toward compiler implementers, but their stance is that emulation of x86 instructions still under patent (e.g., later SSE, AVX512) is infringing if not done under a license agreement. This has had negative implications for, for example, Microsoft's x86 emulation on ARM Windows devices.
(I'm guessing Apple probably did the right thing and ponied up the license fees.)
I thought Qualcomm handled the whole thing far more professionally. ARM's case rested on what they think was written in the contract, what they "should" have written in the contract, and what Qualcomm actually showed, which clearly contradict one another.
It is more of a lesson for ARM to learn, and now the damage has been done. It also makes me wonder who was pushing this lawsuit. SoftBank?
I also gained more respect for Qualcomm, after what they showed in the Apple vs Qualcomm case and here.
Side note: ARM's designs have caught up. The Cortex X5 is close to Apple's design. We should have news about the X6 soon.
I thought the entire point of this was that Arm was trying to prevent Qualcomm from switching away from products that fall under the TLA. Isn't revenue from TLA fees a huge difference from that of ALA fees?
"I don't think either side had a clear victory or would have had a clear victory if this case is tried again," Noreika told the parties.
After more than nine hours of deliberations over two days, the eight-person jury could not reach a unanimous verdict on the question of whether startup Nuvia breached the terms of its license with Arm.
[1] https://www.reuters.com/legal/us-jury-deadlocked-arm-trial-a...
When your contracts are airtight, you usually want a bench trial. Then the defendant demands a jury.
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-1...
> Arm’s opening statement.. presented with a soft, almost victim-like demeanor. Qualcomm’s statement was more assertive and included many strong facts (e.g., Arm internal communications saying Qualcomm has “Bombproof” ALA). Testimonials were quite informative and revealed many interesting facts, some rumored and others unknown (e.g. Arm considered a fully vertically integrated approach).
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-2...
> The most important discussion was whether processor design and RTL are a derivative of Arm’s technology.. This assertion of derivative seems an overreach and should put a chill down the spine of every Arm customer, especially the ones that have ALA, which include NXP, Infineon, TI, ST Micro, Microchip, Broadcom, Nvidia, MediaTek, Qualcomm, Apple, and Marvell. No matter how much they innovate in processor design and architecture, it can all be deemed Arm’s derivative and, hence, its technology.
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-3...
https://www.tantraanalyst.com/ta/qualcomm-vs-arm-trial-day-4...
ARM did massive damage to their ecosystem for nothing. There will for sure be consequences of suing your largest customer.
Lots of people that would have defaulted to licensing designed off ARM for whatever chips they have planned will now be considering RISC-V instead. ARM just accelerated the timeline for their biggest future competitor. Genius.
I’ve written about that here: https://benhouston3d.com/blog/risc-v-in-2024-is-slow
They pivoted to embedded shortly after spinning off into a separate company.
Acorn Computers started off much earlier (I owned an Acorn Atom when it was released) which begat the Electron, then the BBC Micro and then the Archimedes.
At that time ARM was just an architecture owned by Acorn. They created it with VLSI Technology (Acorn's silicon partner) and used the first RISC chip in the BBC Micro before then pivoting to the Archimedes.
Acorn itself was initially purchased by Olivetti, who eventually sold what remained years later to Morgan Stanley.
The ARM division was spun off as "Advanced RISC Machines" in a deal with both Apple and VLSI Technology after Olivetti came onto the scene.
It is this company that we now know as Arm Holdings.
So it’s not entirely accurate to claim “they had a full computer system” as that was Acorn Computers, PLC.
Some of the other details you have are wrong too, to the point your comment is really quite misleading.
Anyone wanting an accurate version should check wikipedia: https://en.m.wikipedia.org/wiki/Acorn_Computers
(To be blunt the above comment is like a very bad LLM summary of the Acorn article).
As far as I am aware, there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM. The people designing their own cores just haven't bothered to do so yet.
RISC-V isn't competitive in 2024, but that doesn't mean that it still won't be competitive in 2030 or 2035. If you were starting a project today at a company like Amazon or Google to develop a fully custom core, would you really stick with ARM - knowing what they tried to do with Qualcomm?
Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.
We still have problems with software not being optimised for Arm these days, which is just astounding given the prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present in x86, and Google has their own Arm based chips.
Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.
I think the keys to RISC-V in terms of software will be:
- LLVM (gives us C, C++, Rust, Zig, etc.); this is probably already happening?
- JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest)
- JVM (does Oracle have any interest at all? Could be a major roadblock unless Google funds it, again depending on Android interest)
So Android on RISC-V could really be a game-changer, but Google just backed down a bit recently.
.NET (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious marketshare/funding.
It'll remain a niche but I do really think Android devices (or something else equally popular, a chinese home-PC?) would be the gamechanger to push demand over the top.
Golang's compiler is weak compared to the competition. It's probably not a good demonstration of most ISAs really.
This meant that you had (a) strong demand for ARM apps/libraries, (b) large pool of testers, (c) developers able to port their code without needing additional hardware, (d) developers able to seamlessly test their x86/ARM code side by side.
RISC-V will have none of this.
I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of bugs due to the architecture? You can't convince me that choosing an esoteric, lightly used arch like Big Endian PowerPC won't come with bugs related to that you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.
It happened to me; small project I put on my ARM-based AWS server, and it was not working even though it was compiled for the architecture.
Having a clear software stack that you control plays a key role in this success, right?
Wanting to have the general solution with millions of random off label hardware combinations to support is the challenge.
Edit, link: https://box86.org/2024/08/box64-and-risc-v-in-2024/
The new (but tier1, like x86-64) Debian port is doing alright[0]. It'll soon pass ppc64 and close in on arm64.
0. https://buildd.debian.org/stats/graph-week-big.png
We can't know, and won't until 2030 or 2035. Humans are just not very good when it comes to projecting the future (if the predictions of the 1950s-60s had been correct, I would be typing this from my cozy cosmic dwelling on a Jovian or Saturnian moon, after all).
History has numerous examples of better ISA and CPU designs losing out to a combination of mysteries and other compounding factors usually attributed to «market forces» (whatever that means to whomever). The 1980s-90s were the heyday of some of the most brilliant ISA designs, and nearly everyone was confident that design X or Y would become dominant, or the next best thing, or anywhere in between. Yet we were left with an x86 monopoly for several decades, one that has only recently turned into a duopoly because of the arrival of ARM into the mainstream, and through a completely unexpected vector: the advent of smartphones. It was not the turn that anyone expected.
And since innovations tend to be product oriented, it is not possible to even design, let alone build, a product with something that does not exist yet. Breaking new ground in CPU design requires the involvement of a large number of driving and very un-mysterious (so to speak) forces, and exorbitant investment (from the product design and manufacturing perspectives) that is available only to the largest incumbents. And even that is not guaranteed, as we have seen with the Itanium architecture.
So unless the incumbents commit and follow through, it is not likely (at least not obvious) that RISC-V will enter the mainstream and will rather remain a niche (albeit a viable one). Within the realms of possibility it can be assessed as «maybe» at this very moment.
I would bet on China making RISC-V the default solution for entry-level and cost-sensitive commodity devices within the next couple of years. It's already happening in the embedded space.
The row with Qualcomm only validates the rationale for fast-iterating companies to lean into RISC-V if they want to meaningfully own any of their processor IP.
The fact that the best ARM cores aren't actually designed by ARM, yet Arm claims them as its IP, is really enough to understand why migrating to RISC-V is eventually going to be on the table as a way to maximize shareholder value.
Lack of a reg+shifted-reg addressing mode, and of things like BFI/UBFX/TBZ.
The perpetual promise of magic fusion inside the cores has not played out. No core exists, to my knowledge, that fuses more than two instructions at a time, and most of those patterns take more than two instructions to express. Thus no core exists that could fuse them.
Whether prebuilt distribution binaries support it or not, I can't tell. Simple glance at Debian and Fedora wiki pages doesn't reveal what profile they target, and I CBA to boot an image in qemu to check. In the worst case they target only GC so they won't have Zba. Source distributions like Gentoo would not have a problem.
In any case, talking about the current level of extension support is moving the goalposts. You countered "there is nothing about the RISC-V architecture which inherently prevents it from ever being competitive with ARM" with "Lack of reg+shifted reg addressing mode", which is an argument about ISA, not implementation.
25.10 -> RVA23
26.04 (LTS) -> RVA23
from: https://www.youtube.com/watch?v=oBmNRr1fdak
where are they?
On the open-source front, I can right now download a RVA23 supporting RISC-V implementation, simulate the RTL and have it out perform my current Zen1 desktop per cycle in scalar code: https://news.ycombinator.com/item?id=41331786 (see the XiangShanV3 numbers)
How long is this supposed to take?
How long until it is accepted that RISC-V looks like a semiconductor nerd snipe of epic proportions, designed to divert energy away from anything that might actually work? If it was not designed for this, it is definitely what it has achieved.
The absolutely essential bitmanip and vector extensions were just ratified at the end of 2021 and the also quite important vector crypto just in 2023.
Somehow I suspect in 10 years there will be a new set of extensions promising to solve all your woes.
Seriously, the way to do this is for someone to just go off and do what it takes to build a good CPU and for the ISA it uses to become the standard. Trying to do this the other way around is asking for trouble, unless sitting in committees for twenty years is actually the whole idea.
Ventana announced their second-gen Veyron 2 core at the beginning of this year and they are releasing a 192-core 4nm chip using it in 2025. They claim Veyron 2 is an 8-wide decoder with a uop cache allowing up to 15-wide issue and a 512-bit vector unit too. In raw numbers, they claim SpecInt per chip is significantly higher than an EPYC 9754 (Zen4) with the same TDP.
We can argue about what things will look like after it launches, but it certainly crushes the idea that RISC-V isn't going to be competing with ARM any time soon.
If Qualcomm switches its instruction decoding over, you'll likely see a dramatic difference.
Also correct: RISC-V is not anywhere near competitive to ARM at the level that Qualcomm operates.
Also, actually searching for the chip in question is impossible.
I highly recommend it, most incisive RISC-V article I've read.
Geekbench does indeed not support the Vector extension, and thus yields very poor results on RISC-V.
RISC-V does indeed not have pipeline reordering.
The Geekbench score thing is a strawman invented to distract from that, no one has mentioned Geekbench except the people arguing it doesn't matter. Everyone agrees, it doesn't matter. So why pound the table about it?
But if you're talking IP, which would be what matters for the argument being made (core IP to use on new design), here's where we at (thanks to camel-cdr- on reddit[0]):
(rule of thumb: SPEC2006 ≈ 10 × SPEC2017)
- SiFive P870-D: >18 SpecINT2006/GHz, >2 SpecINT2017/GHz
- Akeana 5300: 25 SpecINT2006/GHz @ 3 GHz
- Tenstorrent Ascalon: >18 SpecINT2006/GHz (IIRC they mentioned targeting 18-20 at a high frequency)

Some references for comparing:

- Apple M1: 21.7 SpecINT2006/GHz, 2.33 SpecINT2017/GHz
- Apple M4: 2.6 SpecINT2017/GHz
- Zen5 9950X: 1.8 SpecINT2017/GHz
Current license-able RISC-V IP is certainly not slow.
0. https://www.reddit.com/r/hardware/comments/1gpssxy/x8664_pat...
This is unbelievably understated. If I were Qualcomm, I would put parts of the Nuvia team's expertise to work designing RISC-V applications cores for their various SoC markets.
Software ecosystem either takes lots of time (see ARM) or you need to be in a position to force it (Apple & M chips).
RISC-V is still a long way off from consumer (or server) prime time
No, it's not just around-the-corner but Qualcomm has a role to play here. Not like they should just sit on the sidelines and say "call me when we are RISC-V"
Or is that question irrelevant in light of the other findings, and the legal fight is actually over, with Qualcomm as the clear winner?
ARM should be able to re-file the lawsuit and get financial damages out of Nuvia, which Qualcomm will need to pay. But I doubt the damages will be high enough to bother Qualcomm. I don't think ARM will even bother.
As far as I could tell, this was never about money for ARM. It was about control over their licensees and the products they developed. Control which they could turn into money later.
The Nuvia ALA (architecture license agreement) specified much higher royalties, i.e. a much bigger cut for Arm, than the Qualcomm ALA.
The official reason for the conflict is that Qualcomm says that the Qualcomm ALA is applicable for anything made by Qualcomm, while Arm says that for any Qualcomm product that includes a trace of something designed at the former Nuvia company the Nuvia ALA must be applied.
The real reason of the conflict is that Qualcomm is replacing the CPU cores designed by Arm with CPU cores designed by the former Nuvia team in all their products. Had this not happened, Arm would have received a much greater revenue as a result of Nuvia being bought by Qualcomm, even when applying the Qualcomm ALA.
This is what you use to fund the next generation of said IP. There is no magic.
This is why companies are pushing toward RISC-V so hard. If ARM's ISA were open, then ARM would have to compete with lots of other companies to create the best core implementations possible or disappear.
The number of servers with Arm-based CPUs is growing fast, but they are not sold on the free market, they are produced internally by the big cloud operators.
Only Ampere Computing sells a few Arm-based CPUs, but they have fewer and fewer customers (i.e. mainly Oracle), after almost every big cloud operator has launched their own server CPUs.
So for anyone hoping to enter the market of Arm-based server CPUs the chances of success are extremely small, no matter how good their designs may be.
Meanwhile, Qualcomm's ALA expires in 2033. They will almost certainly have launched RISC-V chips by then specifically because they know their royalties will be going WAY up if they don't make the switch.
So the actual central issue was if Qualcomm had the right to transfer the technology developed under the Nuvia architecture license to the Qualcomm architecture license.
It strikes me as a surprising diversion to this, and I wonder how prepared for this outcome the respective teams were.
The first argument was always the weaker argument, the license explicitly banned licence transfers between companies without explicit prior permission. Qualcomm were arguing that the fact they already had a licence counted as prior permission.
But ARM bypassed that argument by just terminating the Nuvia licence, and by the time of the lawsuit qualcomm was down to the second argument. Which was good, it was the much stronger argument.
Qualcomm has sworn for years that they didn't transfer the existing Nuvia tech. It's the Nuvia team doing a from-scratch implementation. Qualcomm said pretty strongly that they didn't allow any direct Nuvia IP to be imported and pollute the new design.
They saw this argument coming from day 1 & worked to avoid it.
But every time this case comes up, folks immediately revert to the ARM position that it's Nuvia IP being transferred. That alone is taking ARM's side, and doesn't seem to resemble what Qualcomm actually tried to do.
Edit: I guess it wouldn't be that simple to do
Sometimes its the entire goal. Leave a company to fill a need it can't solve itself with the plan to get bought back in later.
Times have changed tho so who knows.
ARM is (edit) figuratively blowing up their ecosystem for no reason; now everyone will be racing to develop RISC-V just to cut out ARM...
With explosives?