What is Data Bus Multiplexing?

Data bus multiplexing is a technique used in computer system design where the same physical lines of a bus are used to transmit different kinds of information, such as memory addresses and the actual data, at different times.

Multiplexing refers to a type of bus structure in which the number of signal lines comprising the bus is less than the number of bits of data, address, and/or control information being transferred between elements of the system. Essentially, it allows designers to reduce the number of pins on components and the number of traces on circuit boards by sharing the bus lines.

Why Use Multiplexing?

The primary reasons for employing data bus multiplexing are:

  • Reduced Pin Count: Microprocessors, memory chips, and peripherals all need pins to connect to the bus. Multiplexing reduces the total number of pins needed, making chips smaller and cheaper to manufacture.
  • Lower Cost: Fewer pins mean smaller packages and less complex wiring on the circuit board, leading to lower overall system costs.
  • Smaller Board Size: Fewer bus lines mean less space is required on the printed circuit board (PCB), allowing for more compact designs.

How Does It Work?

In a non-multiplexed bus system, there are dedicated lines for address information and dedicated lines for data information. For example, a system with a 16-bit address bus and a 16-bit data bus would require at least 32 signal lines for these two functions alone (plus control lines).

With multiplexing, these functions share the same lines. For instance, a 16-bit multiplexed address/data bus might use only 16 physical lines.
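To put numbers on it, here is a minimal sketch in C (using the hypothetical 16-bit widths from the example above) that works out the line counts for both approaches:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical bus widths from the example above. */
        const int address_bits = 16;
        const int data_bits    = 16;

        /* Dedicated bus: one line per address bit plus one per data bit. */
        int dedicated_lines = address_bits + data_bits;

        /* Multiplexed bus: address and data share the same lines, so only
           the wider of the two widths is needed. */
        int multiplexed_lines = (address_bits > data_bits) ? address_bits
                                                           : data_bits;

        printf("Dedicated lines:   %d\n", dedicated_lines);   /* 32 */
        printf("Multiplexed lines: %d\n", multiplexed_lines); /* 16 */
        printf("Lines saved:       %d\n", dedicated_lines - multiplexed_lines);
        return 0;
    }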

Here's a simplified breakdown of a typical operation (like reading from memory):

  1. Address Phase: The CPU places the memory address it wants to access onto the shared bus lines. A control signal (such as Address Latch Enable, or ALE) is briefly activated.
  2. Address Latching: External logic (often a latch circuit) connected to the bus reads and stores this address when the control signal is active.
  3. Data Phase: After the address is latched and the control signal deactivates, the shared bus lines are then used to transfer the data from the memory location (read operation) or to the memory location (write operation).
  4. Control Signals: Other control signals indicate whether the current transaction is a read or a write, and manage the timing of the data transfer.

This time-sharing of the bus lines is the core of multiplexing.
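The cycle described above can also be sketched in software. The C program below is only a toy simulation of the idea, not a description of any real chipset: the bus, ale, address_latch, and memory names are all hypothetical, and in real hardware these steps are carried out by latches and timing signals rather than function calls.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of a 16-bit multiplexed address/data bus. */
    static uint16_t bus;              /* the shared address/data lines  */
    static int      ale;              /* Address Latch Enable control   */
    static uint16_t latched_address;  /* output of the external latch   */
    static uint16_t memory[65536];    /* toy memory array               */

    /* The external latch: captures the bus while ALE is active. */
    static void address_latch(void) {
        if (ale)
            latched_address = bus;
    }

    /* One multiplexed read cycle, following the numbered phases above. */
    static uint16_t bus_read(uint16_t address) {
        /* 1. Address phase: drive the address onto the shared lines
              and activate ALE. */
        bus = address;
        ale = 1;

        /* 2. Address latching: external logic stores the address. */
        address_latch();
        ale = 0;

        /* 3. Data phase: the same lines now carry the data selected by
              the latched address. */
        bus = memory[latched_address];
        return bus;
    }

    int main(void) {
        memory[0x1234] = 0xBEEF;   /* toy contents for the demonstration */
        printf("Read 0x%04X -> 0x%04X\n", 0x1234, bus_read(0x1234));
        return 0;
    }

Note that the same bus variable holds an address in step 1 and data in step 3; that reuse is exactly the time-sharing described above.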

Practical Examples

Multiplexed buses were common in early microprocessor designs and are still used in various interfaces today, particularly where pin count is a significant constraint.

  • Early Microprocessors: CPUs such as the Intel 8085 and 8088 multiplexed their low-order address lines with the data lines (AD0-AD7) to keep pin counts down, in contrast to processors such as the Zilog Z80, which used separate, dedicated address and data buses.
  • Memory Interfaces: Dynamic RAM (DRAM) is the classic example: the row address and the column address share the same physical address pins, with the RAS and CAS strobe signals indicating which half is currently being presented (see the sketch after this list).
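As a rough illustration of the row/column case, the sketch below splits a full address into the two halves that would be driven onto the shared address pins on successive strobes. The 8-pin width and the helper names are assumptions for the example, not taken from any particular memory part.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical DRAM with 8 multiplexed address pins: a 16-bit address
       is presented as an 8-bit row, then an 8-bit column. */
    #define ADDR_PIN_COUNT 8
    #define ADDR_PIN_MASK  ((1u << ADDR_PIN_COUNT) - 1u)

    /* Half driven onto the pins while the row strobe (RAS) is asserted. */
    static uint8_t row_address(uint16_t full_address) {
        return (uint8_t)((full_address >> ADDR_PIN_COUNT) & ADDR_PIN_MASK);
    }

    /* Half driven onto the same pins while the column strobe (CAS) is asserted. */
    static uint8_t column_address(uint16_t full_address) {
        return (uint8_t)(full_address & ADDR_PIN_MASK);
    }

    int main(void) {
        uint16_t addr = 0xA5C3;
        printf("Full address: 0x%04X\n", addr);
        printf("Row    half:  0x%02X\n", row_address(addr));
        printf("Column half:  0x%02X\n", column_address(addr));
        return 0;
    }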

Advantages and Disadvantages

As with most design choices, multiplexing comes with trade-offs:

  • Advantages:
    • Reduced pin count on components.
    • Lower manufacturing costs.
    • More compact system designs.
  • Disadvantages:
    • Slower Performance: Because address and data cannot be transmitted simultaneously, the overall bus cycle takes longer compared to a non-multiplexed bus with dedicated lines.
    • Requires External Logic: External latches are often needed to hold the address during the data phase, adding complexity to the circuit board design.

In modern, high-performance systems, dedicated, non-multiplexed buses are often preferred for their speed, even though they require more pins and board space. However, multiplexing remains a valuable technique in cost-sensitive or space-constrained applications.
