ASM210001: Contrasting Complex and Reduced Instruction Sets
Author: Internet
CISC vs. RISC: The Definition
The term RISC was first coined in the early 1980s. RISC architectures were a reaction to the ever-increasing complexity of architecture design (epitomized by the DEC VAX-11 architecture). RISC rapidly became the darling architecture of academia, and almost every popular computer architecture textbook since that period has trumpeted its design philosophy. Those textbooks (and numerous scholarly and professional papers and articles) claimed that RISC would quickly supplant the “CISC” architectures of that era, offering faster and lower-cost computer systems. A funny thing happened, though: the x86 architecture rose to the top of the performance pile and (until recently) refused to give up the performance throne. Could those academic researchers have been wrong?
Before addressing this issue, it is appropriate to first define what the acronyms “RISC” and “CISC” really mean. Most (technical) people know that these acronyms stand for “Reduced Instruction Set Computer” and “Complex Instruction Set Computer” (respectively). However, these terms are slightly ambiguous and this confuses many people.
Back when RISC designs first started appearing, RISC architectures were relatively immature and the designs (especially those coming out of academia) were rather simplistic. The early RISC instruction sets, therefore, were rather small (it was rare to find integer division instructions, much less floating-point instructions, in these early RISC machines). As a result, people began to interpret RISC to mean that the CPU had a small instruction set; that is, the instruction set was reduced. I denote this interpretation as “Reduced (Instruction Set) Computer,” with the parentheses eliminating the ambiguity in the phrase. The reality, however, is that RISC actually means “(Reduced Instruction) Set Computer” — that is, it is the individual instructions that are simplified, not the whole instruction set. In a similar vein, CISC actually means “(Complex Instruction) Set Computer,” not “Complex (Instruction Set) Computer” (although the latter is often true as well).
The core concept behind RISC is that each instruction doesn’t do very much. This makes each instruction far easier to implement in hardware. A direct result of this is that the hardware runs much faster, because fewer gates are needed to decode an instruction and act on its semantics. Here are the core tenets of a RISC architecture (a short assembly sketch contrasting the RISC and CISC styles follows the list):
A load/store architecture (software only accesses data memory using load and store instructions)
A large register bank (with most computations taking place in registers)
Fixed-length instructions/opcodes (typically 32 bits)
One-instruction-per-cycle (or better) execution times (that is, no instruction may be so complex that it requires multiple clock cycles to execute)
Reliance on compilers to handle optimization tasks, so there is no worry about difficult-to-write machine code
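To make the load/store and large-register-bank tenets concrete, here is a minimal sketch (mine, not the original article’s) of how x = x + y might be expressed on a classic CISC machine versus a load/store RISC machine. The mnemonics are standard x86 and RISC-V; the particular registers, and the assumption that a0 and a1 hold the addresses of x and y, are illustrative:

    ; x86 (CISC, Intel syntax): an ALU instruction may read and
    ; write memory directly, so x = x + y needs no explicit store.
    mov  eax, [y]        ; load y into a register
    add  [x], eax        ; read-modify-write x in memory

    # RISC-V (load/store): only lw/sw touch memory; add works
    # purely on registers (RV32I provides 32 of them).
    lw   t0, 0(a0)       # load x (a0 holds &x -- an assumption)
    lw   t1, 0(a1)       # load y (a1 holds &y)
    add  t0, t0, t1      # compute in registers
    sw   t0, 0(a0)       # store the result back to x

The x86 add both reads and writes memory within a single instruction — exactly the sort of multi-cycle complexity the tenets above rule out.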
Early RISC designs (and, for the most part, the new RISC-V design) stuck to these rules quite well. The problem with RISC designs, just as happened with CISC before them, is that as time passed the designers found new instructions they wanted to add to the instruction set. The fixed-size (and very limited) RISC instruction encodings worked against them. Today’s most popular RISC CPU (the ARM) has suffered greatly from the kludges needed to handle modern software (this was especially apparent in the transition from 32 bits to 64). Just as the relatively well-designed PDP-11 architecture begat the VAX-11, and just as the relatively straightforward 8086 begat the 80386 (and then the x86-64), kludges to the ARM instruction set architecture have produced some very non-RISC-like changes. Sometimes I wonder whether those 1980s researchers would view today’s ARM architecture with the same disdain they held for the CISC architectures of yesterday. This is, perhaps, the main reason the RISC-V architecture has given up on the fixed-instruction-length tenet: a fixed-length encoding makes it impossible to cleanly “future-proof” the CPU.
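To illustrate that last point (my example, not the article’s): RISC-V reserves the low-order bits of every instruction to encode its length, and the optional “C” extension uses the 16-bit encodings for compressed forms of common 32-bit instructions. The same register add can thus be encoded two ways (hex values computed from the published RV32I/RVC formats, so treat them as illustrative):

    add   a0, a0, a1     # 32-bit RV32I encoding 0x00B50533 (low bits 11 = 32-bit)
    c.add a0, a1         # 16-bit RVC encoding 0x952E (low bits 10 = 16-bit)

A decoder can tell the lengths apart from the bottom two bits alone, which leaves room for future extensions to add longer instructions without breaking existing code.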
The original premise of RISC is that, when time passes and you need something better than the 30-year-old design you’ve been using (i.e., the ARM), you design a new, clean architecture (like the RISC-V). Of course, the big problem with starting over (which is why the x86 has been king for so long) is that all that old, wonderful software won’t run on the new CPUs. For all its advantages, it’s unlikely you’ll see many designs flocking to the RISC-V CPU anytime soon; there’s just no software for it. Today, RISC-V mainly finds use in embedded projects where the engineers write all the software for their device; they don’t depend on thousands or millions of “apps” for the success of their product.
When RISC CPUs first became popular, they didn’t actually outperform the higher-end CISC machines of the day; it was always about the promise of what RISC CPUs could do as the technology matured. Those VAX-11 machines (and the Motorola 680x0 and National Semiconductor 32000 series machines) still outperformed the early RISC machines. FWIW, the 80x86 family *was* slower at the time; it wasn’t until the late 1980s and early 1990s that Intel captured the performance crown, and in particular the Alpha and SPARC CPUs were tough competitors for a while. However, once the x86 started outrunning the RISC machines (a position it held until some of the very latest Apple Silicon SoCs came along), there was no looking back. RISCs, of course, made their mark in two areas where Intel’s CISC technology just couldn’t compete very well: power and price. The explosion in mobile computing gave RISC the inroads to succeed where the x86 was a non-starter (all that extra complexity costs money and watts, the poison pill for mobile systems). Today, of course, RISC owns the mobile market.
In the 1980s and 1990s, there was a big war in the technical press between believers in CISC and believers in RISC. All the way through the 2000s (and even the 2010s), Intel’s prowess kept the RISC adherents at bay. They could claim that the x86 was a giant kludge and that its architecture was a mess; Intel, however, kept eating their lunch, producing faster (if frighteningly expensive) CPUs.
Unfortunately, Intel seems to have lost its magic touch in the late 2010s and early 2020s. For whatever reason, it has been unable to build devices using the latest processes (3 to 5 nm, as I write this), and other semiconductor manufacturers (who build RISC machines, specifically ARM) have taken the opportunity to zoom past the x86 in performance. Intel’s inability to improve its die manufacturing processes likely has nothing to do with the RISC vs. CISC debate, but this hiccough on its part may be the final nail in the coffin for the x86’s performance title, and it is likely to settle the debate once and for all.
Tags: ASM210001, x86, instruction, RISC, architecture, instruction set, reduced instructions, CPU, CISC. Source: https://www.cnblogs.com/0924/p/14366313.html