Von Neumann Computer Architecture

In modern computers, multiple functions are combined and implemented in custom integrated circuits, so the computer's working principles tend to be hidden from the user. This page describes the essential underlying functions involved, how they interact with each other and how the overall computer system works.

The basic building blocks of a stored program digital computer were defined around 1945 by Eckert and Mauchly based on their experience with their ENIAC computer, started in 1943 and completed in 1946, and used in the construction of their 1952 EDVAC stored program computer. A description of how these blocks were interconnected and how they functioned together was published in a progress report in 1945 by mathematician John von Neumann who worked on the project. This arrangement subsequently became known as the von Neumann architecture and is the basis of most general purpose digital computers today.

The Central Processing Unit (CPU) performs the computer's arithmetic and logical operations. It includes:

• The Arithmetic and Logic Unit (ALU), based on a binary adder, typically contains functional units which carry out basic mathematical operations such as add, subtract, multiply, divide, increment, compare (=, > or < etc.) as well as logical AND, NOT and OR. More complex functions such as trigonometry, square roots and vector operations can be provided by means of additional Boolean logic circuits in combination with binary arithmetic subroutines which enhance the capability of the ALU.
• Processor Registers provide temporary storage for operands and intermediate results in complex calculations.

Operands A and B are loaded from the memory and stored in ALU registers. Instructions from the Control Unit initiate the process to be performed and the results are loaded into the ALU results register, often called the Accumulator. A separate ALU output register indicates the status of the operation (zero, polarity, overflow etc.) to the Control Unit.
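The register-and-status arrangement described above can be sketched in software. This is an illustrative model only, assuming 8-bit words and a small set of op-codes; it is not the circuitry of any particular ALU.

```python
# Toy model of one ALU step: two operand registers, an op-code from the
# Control Unit, a result in the accumulator, and a status word reporting
# zero / polarity / overflow back to the Control Unit.
# Assumes 8-bit unsigned words; op-code names are invented for this sketch.

WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1          # 0xFF for an 8-bit word

def alu(op, a, b):
    """Return (accumulator, status) for one ALU operation."""
    if op == "ADD":
        raw = a + b
    elif op == "SUB":
        raw = a - b
    elif op == "AND":
        raw = a & b
    elif op == "OR":
        raw = a | b
    else:
        raise ValueError(f"unknown op-code: {op}")

    acc = raw & MASK                 # result truncated to the word length
    status = {
        "zero": acc == 0,
        "polarity": bool(acc & (1 << (WORD_BITS - 1))),   # top (sign) bit
        "overflow": raw != acc,      # result did not fit in the word
    }
    return acc, status
```

For example, adding 200 and 100 in an 8-bit word wraps round to 44 and raises the overflow flag, which is exactly the kind of condition the status register reports to the Control Unit.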

The Control Unit manages the flow and timing of data and instructions through the computer as well as the operations performed by the CPU. It interprets the program's instructions and initiates their execution by the ALU. It contains:

• The Program Counter (PC), which contains the address of the next instruction to be executed.
• The Instruction Register (IR), which reads and stores the current instruction from memory.

The Clock

The execution of instructions is driven and synchronised by periodic pulses from a reference clock signal, ensuring that all parts of the system work in unison. Some instructions execute in one clock cycle; others may take several cycles. Typically, the faster the clock rate, the faster the computer will run.

The Memory contains both instructions and data.

• The Memory Address Register (MAR) contains the address of the memory location currently in use.
• The Memory Data Register (MDR) stores the data currently being transferred to and from memory.

A "load" instruction reads a value from the memory location addressed by the MAR and stores it in the MDR, which acts as a buffer and makes it accessible to the ALU.

A "store" instruction writes the value contained in the MDR to the memory location addressed by the MAR.

Input and Output (I/O) Devices

I/O devices such as keyboards, pointers, card readers, displays, and printers are allocated specific memory locations for moving data in and out of the computer memory.

Data Word

Data is organised into words which may consist of 4, 8, 16, 32, 64 or more bits which are treated as a single unit by the computer hardware. The word length is usually long enough to contain both instructions and data as well as addresses. Memory and storage registers are usually designed to store complete words.
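A quick numerical illustration of word length: an n-bit word can hold 2**n distinct values, and any result wider than the word is truncated (wraps around) to fit. The helper below is an assumption for illustration, treating words as unsigned.

```python
# Truncate an integer to an unsigned word of the given width, as a
# fixed-length hardware register would.

def to_word(value, bits):
    """Keep only the lowest `bits` bits of `value`."""
    return value & ((1 << bits) - 1)
```

So an 8-bit register holding 255 wraps to 0 when incremented, and a 16-bit register truncates 70000 to 4464 (70000 - 65536).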

The Fetch-Execution Cycle

This is the basic computer operating cycle.

The control unit fetches an instruction from the memory and decodes it, producing an operation code (op-code).

The op-code is passed to the ALU which receives the data from the memory, executes the instruction and stores the result in memory. The following describes the sequence in more detail.

Fetch Sequence

1. The program counter holds the address of the next instruction to be executed.
2. The address held in the program counter is first transferred to the memory address register (MAR), which acts as a buffer, holding the address until it is used to access the main memory via the address bus.
3. The content of the program counter is incremented by 1 and placed back into the program counter to indicate where the next instruction is located in memory.
4. At the same time, the instruction (op-code and operand) contained in the main memory location specified by the MAR is transferred along the data bus to the memory data register (MDR), which also acts as a buffer.
5. The contents of the MDR are then transferred to the current instruction register.
6. The current instruction register separates the instruction into its op-code (add, load, store etc.) and its operand (the data on which it operates).

Execute Sequence

7. The instruction register sends its op-code through an instruction decoder to generate its digital instruction code.
8. The digital instruction, together with its operand, is transferred to the ALU, which executes the instruction.
9. The result is stored in a temporary accumulator.

Next Instruction

10. Storing the contents of the accumulator in the main memory is a separate task, requiring a new instruction and a new fetch and execute sequence like the one above to implement it.
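The steps above can be condensed into a short simulation. This is an illustrative sketch, not a real instruction set: the op-code numbering and the 4-bit op-code / 4-bit operand packing are assumptions made for this example.

```python
# Simulation of the fetch-decode-execute cycle. Each memory word packs a
# 4-bit op-code and a 4-bit operand address; the local variables mirror
# the registers named in the sequences above (PC, MAR, MDR, IR, accumulator).

LOAD, ADD, STORE, HALT = 1, 2, 3, 0   # op-codes invented for this sketch

def run(memory):
    pc = acc = 0
    while True:
        # --- fetch sequence ---
        mar = pc                              # 2. PC -> MAR
        pc += 1                               # 3. PC incremented
        mdr = memory[mar]                     # 4. memory -> MDR via data bus
        ir = mdr                              # 5. MDR -> instruction register
        opcode, operand = ir >> 4, ir & 0x0F  # 6. split op-code / operand
        # --- execute sequence ---
        if opcode == LOAD:                    # load mem[operand] -> accumulator
            acc = memory[operand]
        elif opcode == ADD:                   # accumulator += mem[operand]
            acc += memory[operand]
        elif opcode == STORE:                 # accumulator -> mem[operand]
            memory[operand] = acc
        elif opcode == HALT:
            return acc

# Program: load mem[5], add mem[6], store the result in mem[7], halt.
program = [LOAD << 4 | 5, ADD << 4 | 6, STORE << 4 | 7, HALT << 4,
           0, 7, 35, 0]
```

Note how the store of the accumulator really is a separate instruction (`STORE`) needing its own pass through the cycle, as step 10 describes.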

Data Transmission

• Parallel Processing
• The von Neumann scheme, as in most computers, uses parallel processing for internal data processing. It operates on the data words as a block, transmitting and processing all the bits in the word simultaneously. This speeds up the process, but it uses more components to accommodate the parallel data transmission and processing channels, adding to the weight and the complexity of the computer.

• Serial Processing
• Serial processing by contrast uses a single data channel. Words are still stored as a block in registers but the bits can only be transmitted through the communications channel sequentially and processed one bit at a time. This saves on component costs but it severely restricts the processing speed.

Serial bit transfer is however used for external network connections since, when used over long distances, it is less prone to errors and less costly than parallel transmission.
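The contrast between the two transfer styles can be sketched in a few lines. The bit ordering (least significant bit first) and the generator-based "one bit per clock" framing are assumptions for illustration.

```python
# A parallel bus moves all the bits of a word in one step; a serial link
# shifts them out one per clock cycle. These helpers model the serial case.

def serialize(word, bits):
    """Yield the bits of `word` LSB-first, one per 'clock cycle'."""
    for i in range(bits):
        yield (word >> i) & 1

def deserialize(stream):
    """Reassemble an LSB-first bit stream into a word."""
    word = 0
    for i, bit in enumerate(stream):
        word |= bit << i
    return word
```

An 8-bit word thus costs eight serial clock cycles to move but only one on an 8-wire parallel bus, which is the speed trade-off described above.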

The Communications Bus

A computer bus is a set of parallel electrical tracks interconnecting the components within the computer. The von Neumann architecture combines signals from three separate buses, the control bus, the address bus, and the data bus which carries both data and instructions, into a single systems bus. All data traffic with the CPU thus takes place across this single internal communications bus.

• The data bus carries data and instructions to and from the main memory. Its width, that is its number of wires, determines the possible word length.
• The width of the address bus determines how many addresses the computer can access. It is unidirectional from CPU to the memory. The CPU generates addresses for storing data in the main memory which it loads onto the address bus as required.
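The relationship between bus width and capacity is simple powers of two. The widths below are common examples, not figures taken from the text.

```python
# n address lines can select 2**n distinct memory locations.

def addressable_locations(bus_width):
    """Number of memory locations reachable with `bus_width` address lines."""
    return 2 ** bus_width
```

A 16-bit address bus can therefore reach 65,536 locations, and a 32-bit bus a little over four billion.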

The von Neumann architecture has only one bus which is used for both data transfers and instruction fetches, and therefore data transfers and instruction fetches must be scheduled; they cannot be performed simultaneously. This is often known as the von Neumann bottleneck. See the Harvard architecture (below) which has a different bus system.

The Operating System (OS) - Early computers didn't have what we would recognise today as an operating system. They were single user machines, with programs from different users being processed sequentially in batches. In response to the need for more efficient use of expensive computing resources, operating systems were developed from 1959 to allow multiple users to access the computer simultaneously. The object of the initial systems was time sharing, but this was soon expanded to extend the computer's overall resources and capability with file management, multiple programs and subroutines, and a choice of input and output devices.

Operating systems are now essential systems software that manage the computer's hardware and software resources providing common processing, communications, interfacing and security services for computer programs. They are specific to a particular machine type and are not usually accessible to the user.

Note: The Harvard Computer Architecture is similar to the von Neumann scheme but it has separate data and instruction buses, which allow transfers to be performed simultaneously on both buses allowing faster execution. It is also possible to have separate memories for programs and data. However the benefits come at a price since the system hardware and software are more complex and difficult to implement.

Programming Language and Instruction Set

The following definitions and comments refer to programming in general and are not specific to the von Neumann architecture.

• Instruction
• A description of an operation to be performed including, the operands, or where to find them, and where to put the result.

• Instruction Set
• A list of all the possible operation codes available on a particular machine type, together with their associated addressing schemes.

• Operation Code (Op-Code)
• The part of the instruction which specifies the operation to be performed.

• Machine Code
• The Op-codes relating to a specific machine type.

• Assembly Language - Consists of machine Op-codes written in alphabetic form with mnemonic significance. Designed to have a one for one correspondence between instructions and machine operations, they enable compact and efficient code. The programming effort required is however considerable, even for the simplest of machine operations. The assembly code is machine dependent and not easy to port to alternative machine architectures.
• High Level Language - Easier to read, write, and maintain than assembly language and machine code. It is processor independent and can be run on different machine types but must first be translated into machine language by a compiler or interpreter, which are both processor specific.
• Compiler - A software program which converts source code written in a high level language to machine code which runs on a specific machine. Once the source code has been translated to machine code the compiler is no longer required. It does not necessarily produce efficient code, and the execution and memory requirements may be too large for resource limited applications.
• Interpreter - A basic interpreter also converts high level language instructions to machine code but enables the source code to run directly on the machine. High level instructions must be translated to machine code every time the instruction is encountered in the program, and code snippets cannot be saved and re-used within the program. The process is slow and inefficient.
• Pseudo Code (P-code) - is an intermediate, interpretive language which enables a more efficient implementation of an interpreter. The interpreter first converts the source code of the high level language into p-code instructions which are analogous to the machine's assembly instructions, not its machine code. The interpreter also includes software mimicking a virtual machine which then reads and executes the p-code.
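The p-code idea can be illustrated with a toy stack-machine interpreter. The instruction names and tuple format below are invented for this sketch and do not correspond to any specific real p-code format.

```python
# The high level source is first compiled to compact p-code instructions,
# which a small software "virtual machine" then reads and executes one by
# one. Here the virtual machine is a simple stack machine.

def run_pcode(program):
    """Execute a list of (op, arg) p-code instructions on a stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown p-code instruction: {op}")
    return stack.pop()

# p-code a compiler front end might emit for the expression (2 + 3) * 4
pcode = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
```

Because the p-code is closer to the machine than the source text, the virtual machine's inner loop stays small and fast, which is why this scheme gives a more efficient interpreter than re-translating the source on every pass.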

See more about Boolean Logic and Digital Circuits.

