Computer architecture is a common term in the computing world, mostly encountered when installing an operating system. It is the architecture that determines how the hardware and software “talk” to each other, allowing the software to function as programmed.
The x86 and x64 architectures from Intel and AMD have long been the most popular, but Apple is now popularizing ARM-based processors with its latest desktops and laptops. Below, get a better understanding of computer architecture.
What is computer architecture?
In general, the architecture of a computer determines the set of instructions that the computer is capable of executing. In short, each user action (from turning on the computer to opening a file) generates a series of data that the processor needs to constantly analyze. To regulate this flow of information and make processing happen as smoothly as possible, instructions must operate within their own system, called an architecture.
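To make the idea of an instruction set concrete, here is a toy sketch in Python of a "machine" that understands only three instructions. The instruction names (LOAD, ADD, STORE) and the interpreter are invented for illustration; they do not correspond to any real architecture:

```python
# A toy "architecture": the fixed set of instructions this machine understands.
# Everything the machine does must be expressed with these three operations.
def run(program):
    registers = {}   # small, fast storage inside the "processor"
    memory = {}      # main memory, addressed by name for simplicity
    for op, *args in program:
        if op == "LOAD":        # LOAD reg, value -> put a constant in a register
            reg, value = args
            registers[reg] = value
        elif op == "ADD":       # ADD dst, a, b -> add two registers
            dst, a, b = args
            registers[dst] = registers[a] + registers[b]
        elif op == "STORE":     # STORE addr, reg -> write a register to memory
            addr, reg = args
            memory[addr] = registers[reg]
        else:
            raise ValueError(f"unknown instruction: {op}")  # not in this architecture
    return memory

# Even a simple action like "add 2 and 3" becomes a sequence of instructions:
result = run([
    ("LOAD", "r1", 2),
    ("LOAD", "r2", 3),
    ("ADD", "r3", "r1", "r2"),
    ("STORE", "sum", "r3"),
])
print(result["sum"])  # 5
```

Software written against this three-instruction "contract" would run on any machine that implements it, which is the compatibility guarantee a real architecture provides.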
With this, programmers can predict how software will run on devices that share the same architecture and avoid potential conflicts or bugs. It also allows engineers to constantly improve hardware without causing compatibility issues.
The design refinements in the new chips released each year are called a microarchitecture, since they are changes made within the main architecture. Intel's Raptor Lake processors are a recent example.
What architectures exist for processors?
Currently, it is possible to find three types of architectures for personal computers. The oldest is x86, which was released in 1978 by Intel. The name comes from the Intel 8086, the first processor built on this architecture, and from its successors whose model numbers also ended in "86".
Initially, x86 could only handle 16-bit data, but from 1985 onward it became the industry standard for 32-bit chips. Although it has largely fallen out of use because a 32-bit processor can address only 4 GB of RAM, the x86 architecture can still be found on some older computers.
The most popular architecture at the moment is x64, announced by AMD in 1999. x64 processes data in 64-bit chunks, twice the width of x86, which means the platform can handle more information per instruction and can, in theory, address up to 16 exabytes of RAM. Another advantage is that it is backward compatible with x86 software.
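The RAM limits above follow directly from the width of a memory address: an n-bit address can point at 2^n distinct bytes. A quick back-of-the-envelope check in Python:

```python
# A 32-bit address can reach 2**32 distinct bytes; a 64-bit address, 2**64.
bytes_32 = 2**32
bytes_64 = 2**64

GIB = 1024**3          # one gibibyte (what the article calls a GB)
EIB = 1024**6          # one exbibyte (what the article calls an exabyte)

print(bytes_32 // GIB)  # 4  -> the famous 4 GB limit of 32-bit x86
print(bytes_64 // EIB)  # 16 -> the theoretical 16-exabyte ceiling of x64
```

In practice, operating systems and memory controllers impose much lower limits than the full 64-bit address space.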
Finally, the ARM architecture is present in smartphones, tablets and the latest Apple computers. Created in 1985 by Acorn Computers, ARM remained a niche architecture for years until the release of the iPhone in 2007. Because ARM chips draw less power and run cooler, the architecture is a natural fit for smartphones. Starting in 2020, Apple began using it in its personal computers with the M1 and M2 chips, as a way to integrate its devices.
What is the difference in the architecture of video cards?
Processors and graphics cards work together to create the images we see on phone and computer screens, but they work in different ways. A CPU has a few cores capable of complex, elaborate calculations. A GPU, on the other hand, has thousands of cores that perform only basic, repetitive operations, but on a large scale.
Therefore, video cards use a different architecture, called Single Instruction, Multiple Data (SIMD). Only one instruction is issued at a time, but the cores apply it to many pieces of data in parallel. Almost all modern video cards use this model.
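The SIMD idea can be sketched in Python as a conceptual illustration (this is not real GPU code; the lane width and helper function are invented for the example). One instruction is issued per batch of data elements, instead of one instruction per element:

```python
LANES = 4  # pretend the hardware has 4 parallel lanes per vector instruction

def simd_apply(op, values):
    """Apply one instruction (op) to LANES data elements per step."""
    out = []
    for i in range(0, len(values), LANES):
        chunk = values[i:i + LANES]        # one vector register's worth of data
        out.extend(op(v) for v in chunk)   # real hardware runs these lanes simultaneously
    return out

def double(v):
    return v * 2

# 8 data elements, but only 2 vector instructions issued instead of 8 scalar ones:
print(simd_apply(double, [1, 2, 3, 4, 5, 6, 7, 8]))  # [2, 4, 6, 8, 10, 12, 14, 16]
```

A GPU pushes this to the extreme: thousands of lanes executing the same simple operation over huge batches of pixels or vertices at once.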
As in the case of processors, video card manufacturers also name their microarchitectures to distinguish the different generations of products. For Nvidia, the latest is Lovelace, while for AMD it's RDNA 3. As with CPUs, these changes represent improved functionality and new instructions, such as the addition of dedicated ray tracing and AI cores.