Design of VLSI Systems

Chapter 6 


6.1 Introduction

Computation speeds have increased dramatically during the past three decades resulting from the development of various technologies. The execution speed of an arithmetic operation is a function of two factors. One is the circuit technology and the other is the algorithm used. It can be rather confusing to discuss both factors simultaneously; for instance, a ripple-carry adder implemented in GaAs technology may be faster than a carry-look-ahead adder implemented in CMOS. Further, in any technology, logic path delay depends upon many different factors: the number of gates through which a signal has to pass before a decision is made, the logic capability of each gate, cumulative distance among all such serial gates, the electrical signal propagation time of the medium per unit distance, etc. Because the logic path delay is attributable to the delay internal and external to logic gates, a comprehensive model of performance would have to include technology, distance, placement, layout, electrical and logical capabilities of the gates. It is not feasible to make a general model of arithmetic performance and include all these variables.

The purpose of this chapter is to give an overview of the different components used in the design of arithmetic operators. The following parts do not exhaustively cover all of these components; however, the algorithms used, some mathematical concepts, the architectures, and the implementations at the block, transistor or even mask level will be presented. This chapter starts with a presentation of the various notation systems, which are important because they influence the architecture, the size and the performance of the arithmetic components. The well-known principle of generation and propagation is then explained, and basic implementations at transistor level are given as examples. The basic full adder cell (FA) is shown as a brick used in the construction of various systems. After that, the problem of building large adders leads to the presentation of enhancement techniques. Multioperand adders are of particular interest when building special CPUs and especially multipliers; that is why certain algorithms are introduced to give a better idea of how multipliers are built. After the classical approaches, a logarithmic multiplier and multiplication and addition in Galois fields are briefly introduced. Muller [Mull92] and Cavanagh [Cava83] constitute two reference books on the matter.


6.2 Notation Systems

6.2.1 Integer Unsigned

The binary number system is the most conventional and easily implemented system for internal use in digital computers. It is also a positional number system. In this mode a number is encoded as a vector of n bits (digits), each of which is weighted according to its position in the vector. Associated with each vector is a base (or radix) r. Each bit has an integer value in the range 0 to r-1. In the binary system, where r=2, each bit has the value 0 or 1. Consider an n-bit vector of the form:

A = an-1 an-2 ... a1 a0

where ai = 0 or 1 for i in [0, n-1]. This vector can represent positive integer values V = A in the range 0 to 2^n - 1, where:

V = sum of ai . 2^i, for i = 0 to n-1


The above representation can be extended to include fractions. For example, the string of binary digits 1101.11 can be interpreted to represent the quantity:

2^3 . 1 + 2^2 . 1 + 2^1 . 0 + 2^0 . 1 + 2^-1 . 1 + 2^-2 . 1 = 13.75 (3)
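This positional evaluation can be checked with a short script (a sketch; the function name is ours):

```python
# Evaluate a binary string with an optional fractional part positionally.
# Each digit is weighted by a power of the radix r = 2.
def binary_value(s: str) -> float:
    int_part, _, frac_part = s.partition('.')
    value = 0.0
    for i, bit in enumerate(reversed(int_part)):
        value += int(bit) * 2 ** i       # weights 2^0, 2^1, ...
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i      # weights 2^-1, 2^-2, ...
    return value

print(binary_value("1101.11"))  # 13.75
```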

The following Table 6.1 shows each 3-bit vector together with the decimal value it represents.

Table-6.1: Binary representation unsigned system with 3 digits

6.2.2 Integer Signed

If only positive integers were to be represented in fixed-point notation, then an n-bit word would permit a range from 0 to 2^n - 1. However, both positive and negative integers are used in computations, and an encoding scheme must be devised in which positive and negative numbers are distributed as evenly as possible. There must also be an easy way to distinguish between positive and negative numbers. The leftmost digit is usually reserved for the sign. Consider the following number A with radix r,

A = an-1 an-2 ... a1 a0

where the sign digit an-1 has the following value:

an-1 = 0 if A >= 0, and an-1 = r - 1 if A < 0

For binary numbers, where r=2, the previous equation becomes:

an-1 = 0 if A >= 0, and an-1 = 1 if A < 0

The remaining digits in A indicate either the true value or the magnitude of A in a complemented form.

Absolute value
Table-6.2: binary representation signed absolute value

In this representation, the high-order bit indicates the sign of the integer (0 for positive, 1 for negative). A positive number has a range of 0 to 2^(n-1) - 1, and a negative number has a range of 0 to -(2^(n-1) - 1). A positive number is represented as:

A = 0 an-2 ... a1 a0, with value V = sum of ai . 2^i, for i = 0 to n-2

Negative numbers have the following representation:

A = 1 an-2 ... a1 a0, with value V = -(sum of ai . 2^i, for i = 0 to n-2)

One problem with this kind of notation is the dual representation of the number 0 (+0 and -0). Another problem arises when adding two numbers with opposite signs: the magnitudes have to be compared to determine the sign of the result.

1's complement
Table-6.3: Binary representation, signed 1's complement

In this representation, the high-order bit also indicates the sign of the integer (0 for positive, 1 for negative). A positive number has a range of 0 to 2^(n-1) - 1, and a negative number has a range of 0 to -(2^(n-1) - 1). A positive number is represented as in the unsigned system:

V = sum of ai . 2^i, for i = 0 to n-2

A negative number is obtained by complementing every bit of the corresponding positive number, so that -A is represented by (2^n - 1) - A.

One problem with this kind of notation is again the dual representation of the number 0 (all bits 0 and all bits 1). And when adding two numbers with opposite signs, the magnitudes still have to be compared to determine the sign of the result.

2's complement
Table-6.4: binary representation signed in 2's complement

In this notation system (radix 2), the value of A is represented such as:

V = -an-1 . 2^(n-1) + sum of ai . 2^i, for i = 0 to n-2

Testing the sign remains a simple examination of the high-order bit. There is a unique representation of 0. Addition and subtraction are easier because the result always comes out in a unique 2's complement form.
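As a sketch, the 2's complement encoding can be reproduced with Python's bit operations (function names are ours, not part of the text):

```python
def to_twos_complement(value: int, n: int) -> int:
    # Encode a signed integer into an n-bit two's-complement pattern.
    assert -(1 << (n - 1)) <= value < (1 << (n - 1)), "out of range"
    return value & ((1 << n) - 1)

def from_twos_complement(bits: int, n: int) -> int:
    # The sign bit a_{n-1} carries weight -2^(n-1); the others are positive.
    sign = -((bits >> (n - 1)) << (n - 1))
    return sign + (bits & ((1 << (n - 1)) - 1))

print(format(to_twos_complement(-5, 4), '04b'))  # 1011
print(from_twos_complement(0b1011, 4))           # -5
```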

6.2.3 Carry Save

In some particular operations requiring large additions, such as multiplication or filtering operations, the carry save notation is used. It can be combined with 1's complement, 2's complement or any other representation. It simply means that the result of an addition is coded on two digits: the carry digit and the sum digit. When coming to the multioperand adders and multipliers, this notion will become self-evident.

6.2.4 Redundant Notation

It has been stated that each digit in a number system has an integer value in the range 0 to r-1. This produces a digit set S:

S = {0, 1, ..., r-1}

in which all the digits of the set are positively weighted. It is also possible to have a digit set in which both positive- and negative-weighted digits are allowed [Aviz61] [Taka87], such as:

T = {-l, ..., -1, 0, 1, ..., l}

where l is a positive integer representing the upper limit of the set. This is considered a redundant number system, because there may be more than one way to represent a given number. Each digit of a redundant number system can assume the 2l+1 values of the set T. The range of l is:

ceiling(r/2) <= l <= r-1

where, for any number x, the ceiling of x is the smallest integer not less than x, and the floor of x is the largest integer not greater than x. Since l >= 1 and r >= 2, the maximum magnitude of l is r-1.


Thus for r=2, the digit set is:

T = {-1, 0, 1}

For r=4, the digit set is:

T = {-3, -2, -1, 0, 1, 2, 3}
For example, for n=4 and r=2, the number A=-5 has several representations; four of them are shown below in Table 6.5.

     2^3  2^2  2^1  2^0
A =   0   -1    0   -1
A =   0   -1   -1    1
A =  -1    0    1    1
A =  -1    1    0   -1
Table-6.5: Redundant representations of A=-5 when r=2

This multirepresentation makes redundant number systems difficult to use for certain arithmetic operations. Also, since each signed digit may require more than one bit to represent the digit, this may increase both the storage and the width of the storage bus.

However, redundant number systems have an advantage for addition: it is possible to eliminate the problem of the propagation of the carry bit, so the operation can be done in a constant time independent of the length of the data word. The conversion from binary to binary redundant is usually a duplication or juxtaposition of bits and costs nothing. On the contrary, the opposite conversion implies an addition, and there the propagation of the carry bit cannot be removed.

Let us consider the example where r=2 and l=1. In this system the three digits used are -1, 0 and +1, each coded on a couple of bits.

The representation of 1 is 10, because 1-0=1.

The representation of -1 is 01, because 0-1=-1.

One representation of 0 is 00, because 0-0=0.

One representation of 0 is 11, because 1-1=0.

The addition of 7 and 5 gives 12 in decimal. In a binary non-redundant system this is equivalent to 111 + 101:

We note that a carry bit has to be added to the next digits when making the operation "by hand". In the redundant system the same operation absorbs the carry bit which is never propagated to the next order digits:

The result 1001100 now has to be converted to the binary non-redundant system. To achieve that, each couple of bits has to be added together, and any resulting carry has to be propagated to the next-order bits:
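The signed-digit system of this example can be sketched in a few lines; the function names and the most-significant-first digit order are our choices, not the text's notation:

```python
from itertools import product

def sd_value(digits):
    # digits are most-significant first, each in {-1, 0, 1} (r = 2, l = 1)
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

def representations(x, n):
    # Enumerate every n-digit signed-digit representation of x:
    # redundancy means there is usually more than one.
    return [d for d in product((-1, 0, 1), repeat=n) if sd_value(d) == x]

for rep in representations(-5, 4):
    print(rep)   # includes the rows of Table 6.5

# The bit-pair code used above: a digit coded as (p, m) stands for p - m,
# so 10 = +1, 01 = -1, and both 00 and 11 represent 0.
def decode_pairs(pairs):
    return [p - m for (p, m) in pairs]

print(sd_value(decode_pairs([(1, 0), (0, 1)])))  # digits +1, -1 -> 1
```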


6.3 Principle of Generation and Propagation

6.3.1 The Concept

The principle of Generation and Propagation seems to have been discussed for the first time by Burks, Goldstine and Von Neumann [BGNe46]. It is based on a simple remark: when adding two numbers A and B in 2's complement or in the simplest binary representation (A=an-1...a1a0, B=bn-1...b1b0), if ai = bi it is not necessary to know the carry ci. It is therefore not necessary to wait for its calculation in order to determine ci+1 and the sum si+1:
If ai=bi=0, then necessarily ci+1=0
If ai=bi=1, then necessarily ci+1=1

This means that when ai=bi, it is possible to add the bits of weight greater than the ith before the carry information ci+1 has arrived. The time required to perform the addition will be proportional to the length of the longest chain i, i+1, i+2, ..., i+p such that ak is not equal to bk for k in [i, i+p].

It has been shown [BGNe46] that the average value of this longest chain is proportional to the logarithm of the number of bits used to represent A and B. By using this principle of generation and propagation it is possible to design an adder with an average delay O(log n). However, this type of adder is usable only in asynchronous systems [Mull82]. Today the complexity of systems is so high that asynchronous timing of the operations is rarely implemented. That is why the problem is to minimize the maximum delay rather than the average delay.

These remarks constitute the principle of generation and propagation used to speed up the addition of two numbers.

All adders which use this principle calculate, in a first stage:

pi = ai XOR bi (10)
gi = ai bi   (11)
The previous equations determine the ability of the ith bit to propagate carry information or to generate carry information.
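A small behavioural model of these signals and of the resulting carry chain (bit order LSB first and function names are our choices):

```python
def propagate_generate(a_bits, b_bits):
    # p_i = a_i XOR b_i, g_i = a_i AND b_i   (equations 10 and 11)
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    g = [a & b for a, b in zip(a_bits, b_bits)]
    return p, g

def carry_chain(p, g, c0=0):
    # c_{i+1} = g_i + p_i . c_i : the carry ripples only along positions
    # where p_i = 1 and is (re)generated wherever g_i = 1.
    carries = [c0]
    for pi, gi in zip(p, g):
        carries.append(gi | (pi & carries[-1]))
    return carries

p, g = propagate_generate([1, 1, 1], [1, 0, 1])  # 7 + 5, LSB first
print(carry_chain(p, g))  # [0, 1, 1, 1]
```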

6.3.2 Transistor Formulation

[Click to enlarge image]Figure-6.1: A 1-bit adder with propagation signal controlling the pass-gate

This implementation can be very efficient (20 transistors) depending on the way the XOR function is built. The propagation of the carry is controlled by the output of the XOR gate. The generation of the carry is directly made by the function at the bottom. When both input signals are 1, the inverse output carry is 0.

In the schematic of Figure 6.1, the carry passes through a complete transmission gate. If the carry path is precharged to VDD, the transmission gate is reduced to a simple NMOS transistor. In the same way the PMOS transistors of the carry generation are removed. One gets a Manchester cell.

[Click to enlarge image]Figure-6.2: The Manchester cell

The Manchester cell is very fast, but a large set of such cascaded cells would be slow. This is due to the distributed RC effect and the body effect making the propagation time grow with the square of the number of cells. Practically, an inverter is added every four cells, like in Figure 6.3.

[Click to enlarge image]Figure-6.3: The Manchester carry cell


6.4 The 1-bit Full Adder

It is the generic cell used not only to perform addition but also arithmetic multiplication, division and filtering operations. In this part we will analyse the equations and give some implementations with layout examples.

The adder cell receives two operands ai and bi, and an incoming carry ci. It computes the sum and the outgoing carry ci+1:

ci+1 = ai . bi + ai . ci + ci . bi = ai . bi + (ai + bi) . ci

ci+1 = pi . ci + gi

[Click to enlarge image]Figure-6.4: The full adder (FA) and half adder (HA) cells


pi = bi XOR ai is the PROPAGATION signal  (12)
gi = ai . bi is the GENERATION signal  (13)
si = ai XOR bi XOR ci   (14)
si = not(ci+1) . (ai + bi + ci) + ai . bi . ci   (15)

These equations can be directly translated into two N and P nets of transistors, leading to the following schematics. The main disadvantage of this implementation is that there is no regularity in the nets.

[Click to enlarge image]Figure-6.5: Direct transcription of the previous equations
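Behaviourally, the cell of equations (12) to (14) can be modelled and checked exhaustively (a sketch, not a transistor-level description):

```python
def full_adder(a, b, c):
    # 1-bit full adder built from the propagation/generation signals.
    p = a ^ b              # propagation signal, equation (12)
    g = a & b              # generation signal, equation (13)
    s = p ^ c              # s_i = a_i XOR b_i XOR c_i, equation (14)
    c_out = g | (p & c)    # c_{i+1} = g_i + p_i . c_i
    return s, c_out

# Exhaustive check against the parity/majority truth table.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, co = full_adder(a, b, c)
            assert s == (a + b + c) % 2 and co == (a + b + c) // 2
print("all 8 input combinations check out")
```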

The dual form of each equation described previously can be written in the same manner as the normal form:

dual of (16)  (17)

In the same way :

dual of (19) (20)
The schematic becomes symmetrical (Figure 6.6) and leads to a better layout:
[Click to enlarge image]Figure-6.6: Symmetrical implementation due to the dual expressions of ci and si.

The following Figure 6.7 shows different physical layouts in different technologies. The size, the technology and the performance of each cell are summarized in Table 6.6.

Name of cell Number of Tr. Size (µm2) Technology Worst Case Delay (ns) (Typical Conditions)
fa_ly_mini_jk 24 2400 1.2 µ 20
fa_ly_op1 24 3150 1.2 µ 5
Fulladd.L 28 962 0.5 µ 1.5
fa_ly_itt 24 3627 1.2 µ 10
Table-6.6: Characteristics of the layout cells shown in Figure 6.7
Figure-6.7: Mask layout for different Full Adder cells


6.5 Enhancement Techniques for Adders

The operands of addition are the addend and the augend. The addend is added to the augend to form the sum. In most computers, the augmented operand (the augend) is replaced by the sum, whereas the addend is unchanged. High-speed adders are used not only for addition but also for subtraction, multiplication and division. The speed of a digital processor depends heavily on the speed of its adders. The adders add vectors of bits, and the principal problem is to speed up the carry signal. A traditional and non-optimized four-bit adder can be made by connecting generic one-bit adder cells one to the other: this is the ripple carry adder. In this case, the sum resulting at each stage needs to wait for the incoming carry signal to perform the sum operation. The carry propagation can be sped up in two ways. The first and most obvious way is to use a faster logic circuit technology. The second way is to generate carries by means of forecasting logic that does not rely on the carry signal being rippled from stage to stage of the adder.
[Click to enlarge image]Figure-6.8: A 4-bit parallel ripple carry adder
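A behavioural sketch of the ripple carry scheme of Figure 6.8 (bits are LSB first; names are ours):

```python
def ripple_carry_add(a_bits, b_bits, c0=0):
    # n-bit ripple-carry adder: each cell must wait for the incoming
    # carry of the previous cell before producing its sum bit.
    s_bits, c = [], c0
    for a, b in zip(a_bits, b_bits):
        s_bits.append(a ^ b ^ c)
        c = (a & b) | ((a | b) & c)   # c_{i+1} = a.b + (a+b).c
    return s_bits, c

s, c_out = ripple_carry_add([1, 1, 1, 0], [1, 0, 1, 0])  # 7 + 5
print(s, c_out)  # [0, 0, 1, 1] 0, i.e. 1100 = 12
```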

Generally, the size of an adder is determined according to the type of operations required, to the precision, or to the time allowed to perform the operation. Since the operands have a fixed size, it becomes important to determine whether or not an overflow has occurred.

Overflow: An overflow can be detected in two ways. First, an overflow has occurred when the sign of the sum does not agree with the signs of the operands and the signs of the operands are the same. In an n-bit adder, overflow can be defined as:

V = an-1 . bn-1 . not(sn-1) + not(an-1) . not(bn-1) . sn-1

Secondly, if the carry out of the high-order numeric (magnitude) position of the sum and the carry out of the sign position of the sum agree, the sum is satisfactory; if they disagree, an overflow has occurred. Thus,

V = cn-1 XOR cn-2
A parallel adder adds two operands, including the sign bits. An overflow from the magnitude part will tend to change the sign of the sum, so an erroneous sign will be produced. The following Table 6.7 summarizes the overflow detection.

an-1 bn-1 sn-1 cn-1 cn-2 Overflow
0 0 0 0 0 0
0 0 1 0 1 1
1 1 0 1 0 1
1 1 1 1 1 0
Table-6.7: Overflow detection for 1's and 2's complement
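The second detection rule can be sketched in Python; the helper below is illustrative (names are ours), and the carry extraction relies on the identity that the carry into any position equals a XOR b XOR sum at that position:

```python
def add_with_overflow(a, b, n):
    # Two's-complement addition of two n-bit operands; overflow is the
    # XOR of the carry out of the sign position and the carry into it,
    # matching Table 6.7.
    mask = (1 << n) - 1
    ua, ub = a & mask, b & mask
    total = ua + ub
    c_out = (total >> n) & 1                        # carry out of sign position
    c_in_sign = ((ua ^ ub ^ total) >> (n - 1)) & 1  # carry into sign position
    overflow = c_out ^ c_in_sign
    return total & mask, overflow

print(add_with_overflow(5, 6, 4))   # (11, 1): 5 + 6 = 11 > 7, overflow
print(add_with_overflow(3, -2, 4))  # (1, 0): no overflow
```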

Coming back to the acceleration of the computation, two major families of techniques are used: speed-up techniques (Carry Skip and Carry Select) and anticipation techniques (Carry Look Ahead, Brent and Kung, and C3i). Finally, a combination of these techniques can prove to be optimal for large adders.

6.5.1 The Carry-Skip Adder

Depending on the position at which a carry signal has been generated, the propagation time can vary. In the best case, when there is no carry generation, the addition time only takes into account the time to propagate the carry signal. Figure 6.9 is an example illustrating a carry signal generated twice, with the input carry being equal to 0. In this case three simultaneous carry propagations occur. The longest is the second, which takes 7 cell delays (it starts at the 4th position and ends at the 11th position). So the addition time of these two numbers with this 16-bit Ripple Carry Adder is 7.k + k', where k is the cell delay and k' is the time needed to compute the 11th sum bit using the 11th carry-in.

With a Ripple Carry Adder, if the input bits Ai and Bi are different for all position i, then the carry signal is propagated at all positions (thus never generated), and the addition is completed when the carry signal has propagated through the whole adder. In this case, the Ripple Carry Adder is as slow as it is large. Actually, Ripple Carry Adders are fast only for some configurations of the input words, where carry signals are generated at some positions.

Carry Skip Adders take advantage of both the generation and the propagation of the carry signal. They are divided into blocks, where a special circuit quickly detects if all the bits to be added are different (Pi = 1 in all the block). The signal produced by this circuit is called the block propagation signal. If the carry is propagated at all positions in the block, then the carry signal entering the block can directly bypass it and be transmitted through a multiplexer to the next block. As soon as the carry signal is transmitted to a block, it starts to propagate through the block, as if it had been generated at the beginning of the block. Figure 6.10 shows the structure of a 24-bit Carry Skip Adder, divided into 4 blocks.

[Click to enlarge image]Figure-6.10: The "domino" behaviour of the carry propagation and generation signals
[Click to enlarge image]Figure-6.10a: Block diagram of a carry skip adder

To summarize: if in a block all Ai are different from Bi, then the carry signal skips over the block. If some Ai = Bi, a carry signal is generated inside the block, and the computation must be completed inside the block before the carry information is given to the next block.


It now becomes obvious that there exists a trade-off between the speed and the size of the blocks. In this part we analyse the division of the adder into blocks of equal size. Let us denote by k1 the time needed by the carry signal to propagate through an adder cell, and by k2 the time it needs to skip over one block. Suppose the N-bit Carry Skip Adder is divided into M blocks, and each block contains P adder cells. The actual addition time of a Ripple Carry Adder depends on the configuration of the input words. The completion time may be small but it may also reach the worst case, when all adder cells propagate the carry signal. In the same way, we must evaluate the worst carry propagation time for the Carry Skip Adder. The worst case of carry propagation is depicted in Figure 6.11.

[Click to enlarge image]Figure-6.11: Worst case for the propagation signal in a Carry Skip adder with blocks of equal size

The configuration of the input words is such that a carry signal is generated at the beginning of the first block. Then this carry signal is propagated by all the succeeding adder cells but the last, which generates another carry signal. In the first and the last block the block propagation signal is equal to 0, so the entering carry signal is not transmitted to the next block. Consequently, in the first block, the last adder cells must wait for the carry signal, which comes from the first cell of the first block. When going out of the first block, the carry signal is distributed to the 2nd, 3rd and last block, where it propagates. In these blocks, the carry signals propagate almost simultaneously (we must account for the multiplexer delays). Any other situation leads to a better case. Suppose for instance that the 2nd block does not propagate the carry signal (its block propagation signal is equal to zero); then it means that a carry signal is generated inside. This carry signal starts to propagate as soon as the input bits are settled. In other words, at the beginning of the addition, there exist two sources of carry signals, and the paths of these carry signals are shorter than the carry path of the worst case. Let us now formalize this: the total adder is made of N adder cells, grouped into M blocks of P adder cells. The total number of adder cells is then

N=M.P (24)

The time T needed by the carry signal to propagate through P adder cells is

T=k1.P (25)

The time T' needed by the carry signal to skip through M adder blocks is

T'=k2.M (26)

The problem to solve is to minimize the worst-case delay, which is (the carry ripples through the first and the last block and skips over the M-2 intermediate blocks):

Tworst = 2 . k1 . P + k2 . (M - 2) (27)

So that, with M = N/P, the function to be minimized is:

f(P) = 2 . k1 . P + k2 . (N/P - 2) (28)

The minimum is obtained for:

P = sqrt(k2 . N / (2 . k1)) (29)

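Under this equal-block model (ripple through the first and last blocks, skip the middle ones), the optimum can be checked numerically; k1, k2 and the function names below are the constants of this section, the rest is our sketch:

```python
import math

def carry_skip_worst_delay(P, N, k1, k2):
    # Worst case: ripple through the first block (k1*P), skip the M-2
    # middle blocks (k2 each), ripple through the last block (k1*P).
    M = N / P
    return 2 * k1 * P + k2 * (M - 2)

def optimal_block_size(N, k1, k2):
    # Setting the derivative 2*k1 - k2*N/P**2 to zero:
    return math.sqrt(k2 * N / (2 * k1))

N, k1, k2 = 32, 1.0, 1.0
print(optimal_block_size(N, k1, k2))   # 4.0
print(min(range(1, N + 1),
          key=lambda P: carry_skip_worst_delay(P, N, k1, k2)))  # 4
```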

Let us formalize the problem as a geometric problem. A square will represent the generic full adder cell. These cells will be grouped in P groups (in a column like manner).

L(i) is the value of the number of bits of one column.

L(1), L(2), ..., L(P) are the P adjacent columns. (see Figure 6.12)

[Click to enlarge image]Figure-6.12: Geometric formalization

If a carry signal is generated at the ith section, this carry skips j-i-1 sections and disappears at the end of the jth section. So the delay of propagation is:


By defining the constant a equal to:


one can position two straight lines defined by:

(at the left most position) (33)
(at the right most position) (34)

The constant a is equivalent to the slope of the two straight lines defined by equations (33) and (34). These straight lines are adjacent to the tops of the columns, and the maximum time can be expressed as a geometrical distance y, equal to the y-value of the intersection of the two straight lines.

because (37)
[Click to enlarge image]Figure-6.13: Representation of the geometrical worst delay

A possible implementation of a block is shown in Figure 6.14. In precharged mode, the outputs of the four inverter-like structures are set to one. In evaluation mode, the entire block is in action and the output will either receive c0 or the carry generated inside the comparator cells, according to the values given to A and B. If no carry generation is needed, c0 is transmitted to the output. In the other case, one of the inverted pi's switches the multiplexer to enable the other input.

[Click to enlarge image]Figure-6.14: A possible implementation of the Carry Skip block

6.5.2 The Carry-Select Adder

This type of adder is not as fast as the Carry Look Ahead (CLA) presented in a later section. However, despite the larger amount of hardware needed, it has an interesting design concept. The Carry Select principle requires two identical parallel adders that are partitioned into four-bit groups. Each group consists of the same design as that shown in Figure 6.15 and generates a group carry. In the carry select adder, two sums are generated simultaneously: one assumes that the carry-in is equal to one, while the other assumes that it is equal to zero. The predicted group carry is then used to select one of the two sums.

It can be seen that the group carry logic increases rapidly when more high-order groups are added to the total adder length. This complexity can be decreased, with a subsequent increase in the delay, by partitioning a long adder into sections, with four groups per section, similar to the CLA adder.

[Click to enlarge image]Figure-6.15: The Carry Select adder
[Click to enlarge image]Figure-6.16: The Carry Select adder. (a) the design with non-optimised use of the gates, (b) merging of the redundant gates

A possible implementation is shown in Figure 6.16, where it is possible to merge some redundant logic gates to achieve a lower complexity with a higher density.
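The selection principle can be sketched behaviourally (group size and helper names are our choices):

```python
def carry_select_add(a_bits, b_bits, group=4):
    # Each group is computed twice -- once with carry-in 0 and once with
    # carry-in 1 -- and the real group carry selects the right result.
    def ripple(a, b, c):
        s = []
        for x, y in zip(a, b):
            s.append(x ^ y ^ c)
            c = (x & y) | ((x | y) & c)
        return s, c

    s_bits, c = [], 0
    for i in range(0, len(a_bits), group):
        a, b = a_bits[i:i + group], b_bits[i:i + group]
        s0, c0 = ripple(a, b, 0)   # speculative sum, carry-in = 0
        s1, c1 = ripple(a, b, 1)   # speculative sum, carry-in = 1
        s_bits += s1 if c else s0  # the group carry selects one sum
        c = c1 if c else c0
    return s_bits, c

def to_bits(x, n): return [(x >> i) & 1 for i in range(n)]
def from_bits(bits): return sum(b << i for i, b in enumerate(bits))

s, c = carry_select_add(to_bits(200, 8), to_bits(100, 8))
print(from_bits(s) + (c << 8))  # 300
```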

6.5.3 The Carry Look-Ahead Adder

The limitation of the sequential method of forming carries, especially in the Ripple Carry adder, arises from specifying ci as a specific function of ci-1. It is possible to express a carry as a function of all the preceding lower-order carries by using the recursivity of the carry function. With the following expression a considerable increase in speed can be realized:

ci+1 = gi + pi . gi-1 + pi . pi-1 . gi-2 + ... + pi . pi-1 ... p1 . p0 . c0

Usually the size and complexity of a big adder using this equation are not affordable. That is why the equation is used in a modular way, by making groups of carries (usually four bits). Such a unit generates a group carry which gives the right predicted information to the next block, giving time to the sum units to perform their calculation.
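A behavioural sketch of the 4-bit lookahead chain (the function the circuits of Figures 6.18 and 6.19 implement at transistor level; names and bit order are ours):

```python
def cla_group(p, g, c0):
    # 4-bit carry lookahead: every carry is a flat two-level function of
    # the propagate/generate signals and c0 -- no rippling in the group.
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0])
          | (p[3] & p[2] & p[1] & p[0] & c0))
    return [c1, c2, c3, c4]   # c4 is the group carry passed to the next block

# 7 + 5, bits LSB first
a, b = [1, 1, 1, 0], [1, 0, 1, 0]
p = [x ^ y for x, y in zip(a, b)]
g = [x & y for x, y in zip(a, b)]
print(cla_group(p, g, 0))  # [1, 1, 1, 0]
```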

Figure-6.17: The Carry Generation unit performing the Carry group computation

Such a unit can be implemented in various ways, according to the allowed level of abstraction. In a CMOS process, 17 transistors are able to guarantee the static function (Figure 6.18). However, this design requires careful sizing of the transistors put in series.

The same design is available with fewer transistors in a dynamic logic design. The sizing is still an important issue, but the number of transistors is reduced (Figure 6.19).

[Click to enlarge image]Figure-6.18: Static implementation of the 4-bit carry lookahead chain
[Click to enlarge image]Figure-6.19: Dynamic implementation of the 4-bit carry lookahead chain

To build large adders the preceding blocks are cascaded according to Figure 6.20.

[Click to enlarge image]Figure-6.20: Implementation of a 16-bit CLA adder

6.5.4 The Brent and Kung Adder

The technique used to speed up the addition is to introduce a "new" operator, called Delta below, which combines couples of generation and propagation signals. This "new" operator comes from a reformulation of the carry chain.


Let an an-1 ... a1 and bn bn-1 ... b1 be n-bit binary numbers with sum sn+1 sn ... s1. The usual method for addition computes the si's by:

c0 = 0 (39)
ci = aibi + aici-1 + bici-1 (40)
si = ai ++ bi  ++ ci-1, i = 1,...,n (41)
sn+1 = cn (42)

where ++ denotes the sum modulo 2 and ci is the carry from bit position i. From the previous paragraph we can deduce that the ci's are given by:

c0 = 0 (43)
ci = gi + pi ci-1 (44)
gi = ai bi (45)
pi = ai ++ bi for i = 1,...., n (46)

One can explain equation (44) by saying that the carry ci is either generated by ai and bi or propagated from the previous carry ci-1. The whole idea is now to generate the carries in parallel, so that the nth stage does not have to wait for the (n-1)th carry bit to compute the global sum. To achieve this goal the operator Delta is defined.

Let Delta be defined as follows for any g, g', p and p':

(g, p) Delta (g', p') = (g + p . g', p . p') (47)

Lemma 1: Let (Gi, Pi) = (g1, p1) if i = 1 (48)
(Gi, Pi) = (gi, pi) Delta (Gi-1, Pi-1) if i in [2, n] (49)
Then ci = Gi for i = 1, 2, ..., n.

Proof: The Lemma is proved by induction on i. Since c0 = 0, (44) above gives:

c1 = g1 + p1 . 0 = g1 = G1 (50)
So the result holds for i=1. If i>1 and ci-1 = Gi-1, then
(Gi, Pi) = (gi, pi) Delta (Gi-1, Pi-1) (51)
(Gi, Pi) = (gi, pi) Delta (ci-1, Pi-1) (52)
(Gi, Pi) = (gi + pi . ci-1, pi . Pi-1) (53)
thus Gi = gi + pi . ci-1 (54)

And from (44) we have : Gi = ci.

Lemma 2: The operator Delta is associative.

Proof: For any (g3, p3), (g2, p2), (g1, p1) we have:

[(g3, p3) Delta (g2, p2)] Delta (g1, p1) = (g3 + p3 . g2, p3 . p2) Delta (g1, p1)
= (g3 + p3 . g2 + p3 . p2 . g1, p3 . p2 . p1) (55)

(g3, p3) Delta [(g2, p2) Delta (g1, p1)] = (g3, p3) Delta (g2 + p2 . g1, p2 . p1)
= (g3 + p3 . (g2 + p2 . g1), p3 . p2 . p1) (56)

One can check that the expressions (55) and (56) are equal using the distributivity of . and +.

To compute the ci's it is only necessary to compute all the (Gi, Pi)'s, but by Lemmas 1 and 2,

(Gi, Pi) = (gi, pi) Delta (gi-1, pi-1) Delta ... Delta (g1, p1) (57)

can be evaluated in any order from the given gi's and pi's. The motivation for introducing the operator Delta is to generate the carries in parallel. The carries will be generated in a block or carry chain block, and the sum will be obtained directly from all the carries and pi's, since:

si = pi ++ ci-1 for i = 1, ..., n (58)


Based on the previous reformulation of the carry computation, Brent and Kung have proposed a scheme to add two n-bit numbers in a time proportional to log(n) and in an area proportional to n.log(n), for n >= 2. Figure 6.21 shows how the carries are computed in parallel for 16-bit numbers.

[Click to enlarge image]Figure-6.21: The first binary tree allowing the calculation of c1, c2, c4, c8, c16.

Using this binary tree approach, only the ci's where i = 2^k (k = 0, 1, ...) are computed. The missing ci's have to be computed using another tree structure, but this time the root of the tree is inverted (see Figure 6.22).

In Figure 6.21 and Figure 6.22 the squares represent a Delta cell which performs equation (47). Circles represent a duplication cell where the input is separated into two distinct wires (see Figure 6.23).

When using this structure of two separate binary trees, the addition of two 16-bit numbers is performed in T=9 stages of Delta cells. During this time, all the carries are computed in the time necessary to traverse two independent binary trees.
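Since Delta is associative (Lemma 2), any parallel prefix network over the (gi, pi) couples yields the same carries. The sketch below uses a simple doubling (Kogge-Stone-style) evaluation order rather than Brent and Kung's exact tree, purely for brevity; it still shows the O(log n) depth:

```python
def delta(x, y):
    # (g, p) Delta (g', p') = (g + p.g', p.p')   -- equation (47)
    g, p = x
    g2, p2 = y
    return (g | (p & g2), p & p2)

def parallel_prefix_carries(a_bits, b_bits):
    # Carries c_1..c_n via a parallel prefix over the (g_i, p_i) pairs
    # (bits LSB first). Each round doubles the combined span, so the
    # number of rounds is O(log n).
    n = len(a_bits)
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    d = 1
    while d < n:
        gp = [delta(gp[i], gp[i - d]) if i >= d else gp[i]
              for i in range(n)]
        d *= 2
    return [G for G, P in gp]   # c_i = G_i by Lemma 1

print(parallel_prefix_carries([1, 1, 1, 0], [1, 0, 1, 0]))  # 7 + 5 -> [1, 1, 1, 0]
```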

According to Burks, Goldstine and Von Neumann, the fastest way to add two operands is proportional to the logarithm of the number of bits. Brent and Kung have achieved such a result.

[Click to enlarge image]Figure-6.22: Computation of all the carrys for n = 16
[Click to enlarge image]Figure-6.23: (a) The Delta cell, (b) the duplication cell

6.5.5 The C3i Adder


Let ai and bi be the digits of A and B, two n-bit numbers, with i = 1, 2, ..., n. The carries will be computed according to (59).

with: Gi = ci (60)

If we develop (59), we get:


and by introducing a parameter m <= n such that there exists q in IN with n = q.m, it is possible to obtain the couple (Gi, Pi) by forming groups of m Delta cells performing the intermediate operations detailed in (62) and (63).


This manner of computing the carries is strictly based on the fact that the operator Delta is associative. It also shows that the calculation is performed sequentially, i.e. in a time proportional to the number of bits n. We will now illustrate this analytical approach by giving a way to build an architectural layout of this new algorithm, proceeding with a graphical method to place the cells defined in the previous paragraph [Kowa92].


  1. First build a binary tree of Delta cells.
  2. Duplicate this binary tree m times to the right (m is a power of two; see Remark 1 in the next pages if m is not a power of two). The cell at the right of bit 1 determines the least significant bit (LSB).
  3. Eliminate the cells at the right side of the LSB. Change the Delta cells not connected to anything into duplication cells. Eliminate all cells under the second row of Delta cells, except the rightmost group of m Delta cells.
  4. Duplicate the only group of m Delta cells left after step 3 q times to the right, moving one row down each time. This gives a visual representation of the delay read in Figure 6.29.
  5. Shift up the q groups of Delta cells to get a compact representation of a "floorplan".
This complete approach is illustrated in Figure 6.24, where all the steps are carefully observed. The only cells necessary to turn this carry generation block into a real parallel adder are the cells performing equations (45) and (46). The first row of such functions is put at the top of the structure. The second one is pasted at the bottom.
[Click to enlarge image]Figure-6.24: (a) Step1, (b) Step2, (c) Step3 and Step4, (d) Step5

At this point, two remarks have to be made about the definition of this algorithm. Both concern the parameter m used to define the algorithm. Remark 1 covers the case where m is not equal to 2^q (q = 0, 1, ...), while Remark 2 deals with the case where m = n.

[Click to enlarge image]Figure-6.25: Adder where m=6. The fan-out of the 11th carry bit is highlighted

Remark 1: For m not a power of two, the algorithm is built the same way up to the very last step. The only difference concerns the delay, which will be equal to that of the next nearest power of two. This means that there is no particular advantage in building such versions of these adders. The fan-out of certain cells even increases to three, so the electrical behaviour is degraded. Figure 6.25 illustrates the design of such an adder for m=6. The fan-out of the • cell of bit 11 is three. The delay of this adder is equivalent to the delay of an adder with a duplication with m=8.

Remark 2: For m equal to the number of bits of the adder, the algorithm reaches the theoretical limit demonstrated by Burks, Goldstine and von Neumann. The logarithmic time is attained using one depth of a binary tree instead of two in the case of Brent and Kung. This particular case is illustrated in Figure 6.26. The definition of the algorithm is followed up to Step 3. Once the binary tree has been reproduced m times to the right, the only thing to do is to remove the cells at the negative bit positions, and the adder is finished. Mathematically, one can notice that this is the limit. We will discuss later whether it is the best way to build an adder using m=n.

[Click to enlarge image]Figure-6.26: Adder where m=n. This constitutes the theoretical limit for the computation of the addition.


In this section, we develop a comparison between adders obtained using the new algorithm with different values of m. In the plots of Figure 6.27 through Figure 6.29, the suffixes JK2, JK4 and JK8 denote adders obtained for m equal to two, four or eight. They are compared to the Brent and Kung implementation and to the theoretical limit, which is obtained when m equals n, the number of bits.

The comparison between these architectures is done according to the formalisation of a computational model described in [Kowa93]. We clearly see that BK's algorithm performs the addition with a delay proportional to the logarithm of the number of bits. JK2 performs the addition in linear time, just as JK4 or JK8. The parameter m influences the slope of the delay: the higher m is, the longer the delay stays below the logarithmic delay of BK. When one wants to perform the addition faster than BK, there is thus a choice to make among different values of m. The choice will depend on the size of the adder, because it is evident that a 24-bit JK2 adder (delay = 11 stages of • cells) performs worse than BK (delay = 7 stages of cells).

On the other hand, JK8 (delay = 5 stages of • cells) is very attractive. Its delay is better than BK's up to 57 bits, at which point both delays are equal. Furthermore, even at equal delays (up to 73 bits) our implementation performs better in terms of regularity, modularity and ease of construction. The strong advantage of this new algorithm compared to BK is that for an input word whose size is not a power of two, the design of the cells is much easier. There is no partial binary tree to build: adding a bit to the adder is adding a bit-slice, and this bit-slice is very compact and regular. Let us now consider the case where m equals n (denoted by XXX on our figures). The delay of such an adder is exactly one half of BK's, and it is the lowest bound we obtain. For small adders (n < 16), the delay is very close to XXX. It can be demonstrated that the delays (always in terms of stages) of JK2, JK4 and JK8 are always at least equal to XXX.

This discussion took into account the two following characteristics of the computational model:

And the conclusion of this discussion is that m has to be chosen as high as possible to reduce the global delay. When we turn to the comparisons concerning the area, we will take into account the following characteristics of our computational model. For this discussion let us consider Figure 6.28, where we represent the area of the different adders versus the number of bits. It is obvious that for the smallest m, the area will be the smallest as well. For m increasing up to n, the area is still proportional to the number of bits, following a straight line. For m equal to n, the area is exactly one half of the BK area, with a linear variation. The slope of this variation, in both the BK and XXX cases, varies according to the intervals [2^q, 2^(q+1)], q = 0, 1, ...

Here we could point out that the floorplan of BK could be optimised to become comparable to that of XXX, but the cost of such an implementation would be very high because of the irregularity of the wirings and interconnections. These considerations lead us to the following conclusion: to minimise the area of a new adder, m must be chosen low. This contradicts the previous conclusion, which is why a very wise choice of m is necessary; it will always depend on the targeted application. Finally, Figure 6.27 gives indications about the number of transistors used to implement our different versions of adders. These calculations are based on the dynamic logic family (TSPC: True Single Phase Clocking) described in [Kowa93]. When considering this graph, we see that BK and XXX are the two limits of our family of adders. BK uses the smallest number of transistors, whereas XXX uses up to five times more. The higher m is, the higher the number of transistors.

Nevertheless, we see that the area is smaller than BK's. A high density is an advantage, but an overhead in transistors can lead to higher power dissipation. This evident drawback of our algorithm is counterbalanced by the progress being made in the VLSI area. With the shrinking of the design rules, the size of the transistors decreases, as well as the size of the interconnections. This leads to smaller power dissipation. This effect is even more pronounced as technologies decrease the power supply from 5V to 3.3V.

In other words, the increase in the number of transistors corresponds to the redundancy we introduce in the calculations to decrease the delay of our adders.
Now we will discuss an important characteristic of our computational model that differs from the model of Brent and Kung:

This assumption is very important, as we now discuss with an example. Let us consider the 16-bit BK adder (Figure 6.22) and the 16-bit JK4 adder (Figure 6.24). The longest wire in the BK implementation will be equal to at least eight widths of • cells, whereas in the JK4 implementation the longest wire will be equal to four widths of • cells. For BK, the output capacitive load of a • cell is variable, and a variable sizing of the cell is necessary. In our case, the parameter m defines a fixed library of • cells used in the adder. The capacitive load is always limited to a fixed value, allowing all • cells to be sized identically.
Figure-6.27: Number of transistors versus the number of bits
Figure-6.28: Area versus the number of bits
Figure-6.29: Delay in number of • stages versus the number of bits in the adder

To partially conclude this section, we note that an optimum must be defined when choosing to implement our algorithm. This optimum will depend on the application in which the operator is to be used.

[Table of Contents] [Top of Document]

6.6 Multioperand Adders

6.6.1 General Principle

The goal is to add more than two operands at a time. This generally occurs in multiplication or filtering operations.

6.6.2 Wallace Trees

For this purpose, Wallace trees were introduced. The addition time grows like the logarithm of the number of bits. The simplest Wallace tree is the adder cell. More generally, an n-input Wallace tree is an operator with n inputs and log2(n) outputs, such that the value of the output word is equal to the number of 1s in the input word. The input bits and the least significant bit of the output have the same weight (Figure 6.30). An important property of Wallace trees is that they may be constructed using adder cells; the number of adder cells needed grows linearly with the number n of input bits, while the delay grows like log2(n). Consequently, Wallace trees are useful whenever a large number of operands are to be added, as in multipliers. In a Braun or Baugh-Wooley multiplier with a ripple-carry adder, the completion time of the multiplication is proportional to twice the number n of bits. If the collection of the partial products is made through Wallace trees, the time to get the result in carry-save notation is proportional to log2(n).
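As a sketch (our own modelling, not the chapter's circuit), the following Python routine reduces a column of equal-weight bits with full and half adders until every weight holds at most one bit, which is exactly the counting behaviour described above:

```python
def full_adder(a, b, c):
    return a ^ b ^ c, (a & b) | (a & c) | (b & c)

def half_adder(a, b):
    return a ^ b, a & b

def wallace_count(bits):
    """Count the 1s in 'bits' by repeated adder reduction, as a Wallace
    tree does for one column of partial products."""
    columns = {0: list(bits)}                     # weight -> list of bits
    while any(len(col) > 1 for col in columns.values()):
        nxt = {}
        for w, col in columns.items():
            col = list(col)
            while len(col) >= 3:                  # 3 bits -> sum + carry
                s, c = full_adder(col.pop(), col.pop(), col.pop())
                nxt.setdefault(w, []).append(s)
                nxt.setdefault(w + 1, []).append(c)
            if len(col) == 2:                     # 2 bits -> half adder
                s, c = half_adder(col.pop(), col.pop())
                nxt.setdefault(w, []).append(s)
                nxt.setdefault(w + 1, []).append(c)
            else:
                nxt.setdefault(w, []).extend(col) # 0 or 1 leftover bit
        columns = nxt
    return sum(col[0] << w for w, col in columns.items() if col)
```

Each full adder preserves the sum (a + b + c = s + 2c), so the final word read across the weights is the count of 1s among the inputs.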
[Click to enlarge image]Figure-6.30: Wallace cells made of adders

Figure 6.31 represents a 7-input adder: for each weight, Wallace trees are used until only two bits of each weight remain, so as to add them using a classical 2-input adder. In terms of interconnection regularity, Wallace trees are the most irregular.

[Click to enlarge image]Figure-6.31: A 7-input Wallace tree

6.6.3 Overturned Stairs Trees

To circumvent this irregularity, Mou [Mou91] proposes an alternative way to build multi-operand adders. The method uses basic cells called branch, connector or root. These basic elements (see Figure 6.32) are connected together to form n-input trees. One has to take care of the weights of the inputs, because the weights at the inputs of the 18-input OS-tree are different. The regularity of this structure is better than with Wallace trees, but the construction of multipliers remains complex.
[Click to enlarge image]Figure-6.32: Basic cells used to build OS-trees
[Click to enlarge image]Figure-6.33: An 18-input OS-tree

[Table of Contents] [Top of Document]

6.7 Multiplication

6.7.1 Introduction

Multiplication can be considered as a series of repeated additions. The number to be added is the multiplicand, the number of times it is added is the multiplier, and the result is the product. Each step of the addition generates a partial product. In most computers, the operands usually contain the same number of bits. When the operands are interpreted as integers, the product is generally twice the length of the operands in order to preserve the information content. This repeated-addition method suggested by the arithmetic definition is so slow that it is almost always replaced by an algorithm that makes use of positional number representation.

It is possible to decompose multipliers in two parts. The first part is dedicated to the generation of partial products, and the second one collects and adds them. As for adders, it is possible to enhance the intrinsic performances of multipliers. Acting in the generation part, the Booth (or modified Booth) algorithm is often used because it reduces the number of partial products. The collection of the partial products can then be made using a regular array, a Wallace tree or a binary tree [Sinh89].

Figure-6.34: Partial product representation and multioperand addition

6.7.2 Booth Algorithm

This algorithm is a powerful direct algorithm for signed-number multiplication. It generates a 2n-bit product and treats both positive and negative numbers uniformly. The idea is to reduce the number of additions to be performed. The Booth algorithm requires as few as n/2 additions in the best case, whereas the modified Booth algorithm always requires n/2 additions.

Let us consider a string of k consecutive 1s in a multiplier:
..., i+k, i+k-1, i+k-2, ..., i, i-1, ...
...,  0,    1,     1,   ..., 1,  0,  ...

where there are k consecutive 1s.

By using the following property of binary strings:

2^(i+k) - 2^i = 2^(i+k-1) + 2^(i+k-2) + ... + 2^(i+1) + 2^i

the k consecutive 1s can be replaced by the following string

..., i+k+1, i+k, i+k-1, i+k-2, ..., i+1,  i, i-1, ...
...,   0,    1,    0,     0,   ...,  0,  -1,  0,  ...

with k-1 consecutive 0s between the 1 (an addition at position i+k) and the -1 (a subtraction at position i).

In fact, the Booth algorithm converts a signed number from the standard 2's-complement representation into a number system where the digits are in the set {-1, 0, 1}, while the modified Booth algorithm uses the digit set {-2, -1, 0, 1, 2}. In such a number system, any number may be written in several forms, so the system is called redundant.

The coding table for the modified Booth algorithm is given in Table 6.8. The algorithm scans strings composed of three digits. Depending on the value of the string, a certain operation is performed.

A possible implementation of the Booth encoder is given on Figure 6.35. The layout of another possible structure is given on Figure 6.36.

Yi+1 Yi Yi-1 (weights 2^1, 2^0, 2^-1)              M
 0   0   0   add zero (no string)                  +0
 0   0   1   add the multiplicand (end of string)  +X
 0   1   0   add the multiplicand (a string)       +X
 0   1   1   add twice the multiplicand (end)      +2X
 1   0   0   sub twice the multiplicand (beg.)     -2X
 1   0   1   sub the multiplicand (-2X and +X)     -X
 1   1   0   sub the multiplicand (beginning)      -X
 1   1   1   sub zero (center of string)           -0
Table-6.8: Modified Booth coding table.
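The recoding of Table 6.8 can be sketched as follows (the function names are ours; n is assumed even). Each overlapping triplet (Yi+1, Yi, Yi-1) is worth -2.Yi+1 + Yi + Yi-1, giving a radix-4 digit in {-2, -1, 0, 1, 2}:

```python
def booth_digits(y, n):
    """Radix-4 digits of the n-bit two's-complement integer y (n even),
    least significant digit first."""
    bits = [(y >> i) & 1 for i in range(n)]
    digits = []
    for i in range(0, n, 2):
        prev = bits[i - 1] if i > 0 else 0     # Yi-1, taken as 0 below the LSB
        digits.append(-2 * bits[i + 1] + bits[i] + prev)
    return digits

def booth_multiply(x, y, n):
    """x times y as the sum of the recoded multiples d.x.4^k."""
    return sum(d * x * 4 ** k for k, d in enumerate(booth_digits(y, n)))
```

Only n/2 digits are produced, so at most n/2 additions or subtractions of 0, X or 2X are needed, which is the reduction the text describes.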
[Click to enlarge image]Figure-6.35: Booth encoder cell
Figure-6.36: Booth encoder cell (layout size: 65.70 µm2 (0.5µCMOS))

6.7.3 Serial-Parallel Multiplier

This multiplier is the simplest one: the multiplication is considered as a succession of additions.

If A = (an an-1 ... a0) and B = (bn bn-1 ... b0), the product A.B is expressed as:

A.B = A.2^n.bn + A.2^(n-1).bn-1 + ... + A.2^0.b0

The structure of Figure 6.37 is suited only for positive operands. If the operands are negative and coded in 2s-complement :

  1. The most significant bit of B has a negative weight, so a subtraction has to be performed at the last step.
  2. The operand A.2^k must be written on 2N bits, so the most significant bit of A must be duplicated. It may be easier to shift the content of the accumulator to the right instead of shifting A to the left.
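A minimal sketch of the shift-and-add principle, for positive operands only (the two's-complement corrections of points 1 and 2 above are deliberately not modelled):

```python
def serial_parallel_multiply(a, b, n):
    """Multiply two unsigned n-bit integers by scanning b serially,
    LSB first, while A is shifted left one position per step."""
    acc = 0
    for k in range(n):
        if (b >> k) & 1:
            acc += a << k      # add A.2^k when bit b_k is 1
    return acc
```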
[Click to enlarge image]Figure-6.37: Serial-Parallel multiplier

6.7.4 Braun Parallel Multiplier

The simplest parallel multiplier is the Braun array. All the partial products A.bk are computed in parallel, then collected through a cascade of carry-save adders. At the bottom of the array, the result is in carry-save form, so an additional adder converts it (by means of a carry propagation) into the classical notation (Figure 6.38). The completion time is limited by the depth of the carry-save array and by the carry propagation in the adder. Note that this multiplier is only suited for positive operands. Negative operands may be multiplied using a Baugh-Wooley multiplier.
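The word-level behaviour of the array can be sketched as follows (our own modelling, not the cell-level structure): partial products are accumulated through carry-save additions, and a single carry-propagate addition finishes the product.

```python
def carry_save_add(x, y, z):
    """Three words in, two words out, with x + y + z == s + c."""
    s = x ^ y ^ z                           # bitwise sums
    c = ((x & y) | (x & z) | (y & z)) << 1  # carries, moved up one weight
    return s, c

def braun_multiply(a, b, n):
    """Unsigned n-bit product via a cascade of carry-save additions."""
    partials = [a << k if (b >> k) & 1 else 0 for k in range(n)]
    s, c = 0, 0
    for p in partials:                      # one carry-save row per partial
        s, c = carry_save_add(s, c, p)
    return s + c                            # final carry-propagate adder
```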
[Click to enlarge image]Figure-6.38: A 4-bit Braun Multiplier without the final adder

Figure 6.38 and Figure 6.40 use the symbols given in Figure 6.39, where CMUL1 and CMUL2 are two generic cells consisting of an adder without the final inverter and with one input connected to an AND or NAND gate. A non-optimised (in terms of transistors) multiplier would consist only of adder cells connected to one another, with AND gates generating the partial products. In these examples, the inverters at the output of the adders have been eliminated, and the parity of the bits has been compensated by the use of CMUL1 or CMUL2.

Figure-6.40: A 8-bit Braun Multiplier without the final adder

6.7.5 Baugh-Wooley Multiplier

This technique has been developed in order to design regular multipliers, suited for 2s-complement numbers.

Let us consider two numbers A and B:

A = -an-1.2^(n-1) + Σ(i=0..n-2) ai.2^i  (64),  B = -bn-1.2^(n-1) + Σ(j=0..n-2) bj.2^j  (65)

The product A.B is given by the following equation:

A.B = an-1.bn-1.2^(2n-2) + ΣΣ(i,j=0..n-2) ai.bj.2^(i+j) - 2^(n-1).[ Σ(j=0..n-2) an-1.bj.2^j + Σ(i=0..n-2) ai.bn-1.2^i ]
We see that subtractor cells must be used. In order to use only adder cells, the negative terms may be rewritten using the complement (with not(c) = 1 - c):

-2^(n-1).Σ(i=0..n-2) ci.2^i = 2^(n-1).( Σ(i=0..n-2) not(ci).2^i + 1 ) - 2^(2n-2)
In this way, A.B becomes:


The final equation is :


because :


A and B are n-bit operands, so their product is a 2n-bit number. Consequently, the most significant weight is 2^(2n-1), and the first term -2^(2n-1) is taken into account by adding a 1 in the most significant cell of the multiplier.
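Since the displayed equations are reproduced as figures, the final Baugh-Wooley form cannot be read off the page; the sketch below therefore assumes the standard form of the final equation (positive partial products, complemented sign terms, and the constants, with -2^(2n-1) folded in as a 1 in the most significant cell modulo 2^(2n)) and checks it numerically against ordinary two's-complement multiplication:

```python
def baugh_wooley(a, b, n):
    """2n-bit two's-complement product of two n-bit signed operands,
    built only from added (never subtracted) partial products."""
    ab = [(a >> i) & 1 for i in range(n)]
    bb = [(b >> j) & 1 for j in range(n)]
    # positive core of the partial-product array
    p = sum(((ab[i] & bb[j]) << (i + j))
            for i in range(n - 1) for j in range(n - 1))
    p += (ab[n - 1] & bb[n - 1]) << (2 * n - 2)
    # complemented sign rows, entering at weight 2^(n-1)
    p += sum((1 - (ab[n - 1] & bb[j])) << (n - 1 + j) for j in range(n - 1))
    p += sum((1 - (ab[i] & bb[n - 1])) << (n - 1 + i) for i in range(n - 1))
    # correction constants: +2^n, and +2^(2n-1) standing in for -2^(2n-1)
    p += (1 << n) + (1 << (2 * n - 1))
    return p & ((1 << (2 * n)) - 1)        # keep 2n bits
```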

[Click to enlarge image]Figure-6.41: A 4-bit Baugh-Wooley Multiplier with the final adder

6.7.6 Dadda Multiplier

The advantage of this method is the higher regularity of the array. Signed integers can be processed. The cost of this regularity is the addition of an extra column of adders.
[Click to enlarge image]Figure-6.42: A 4-bit Baugh-Wooley Multiplier with the final adder

6.7.7 Mou's Multiplier

Figure 6.43 shows the scheme using OS-trees in a 4-bit multiplier. The partial-product generation is done according to the Dadda multiplication. Figure 6.44 represents the OS-tree structure used in a 16-bit multiplier. Although the author claims a better regularity, this scheme does not allow easy pipelining.
[Click to enlarge image]Figure-6.43: A 4-bit OS-tree Multiplier with a final adder
[Click to enlarge image]Figure-6.44: A 16-bit OS-tree Multiplier without a final adder and without the partial product cells

6.7.8 Logarithmic Multiplier

The objective of this circuit is to compute the product of two terms. The property used is the following equation:
Log(A * B) = Log (A) + Log (B) (71)

There are several ways to obtain the logarithm of a number: look-up tables, recursive algorithms, or the segmentation of the logarithmic curve [Hoef91]. The segmentation method: the basic idea is to approximate the logarithm curve with a set of linear segments.
If y = Log2(x)  (72)

an approximation of this value on the segment [2^n, 2^(n+1)[ can be made using the following equation:

y = a.x + b = (Δy/Δx).x + b = [1/(2^(n+1) - 2^n)].x + (n-1) = 2^(-n).x + (n-1)  (73)

What is the hardware interpretation of this formula?

If we take xi = (xi7, xi6, xi5, xi4, xi3, xi2, xi1, xi0), an integer coded with 8 bits, its logarithm is obtained as follows: the integer part is the position n where the MSB occurs, and the decimal part is obtained by shifting the remaining bits n positions to the right.

For instance, if xi is (0,0,1,0,1,1,1,0) = 46, the integer part of the logarithm is 5 because the MSB is xi5, and the decimal part is 01110. So the logarithm of xi equals 101.01110 = 5.4375, because 01110 is 14 out of a possible 32, and 14/32 = 0.4375.
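Equation (73) and this worked example can be sketched as follows (the function name is ours):

```python
def approx_log2(x):
    """Piecewise-linear log2 on [2^n, 2^(n+1)[: y = 2^-n.x + (n - 1)."""
    n = x.bit_length() - 1          # position of the MSB = integer part
    return 2.0 ** -n * x + (n - 1)  # fractional part is (x - 2^n) / 2^n

print(approx_log2(46))              # 46 = 0b101110 -> 5 + 14/32 = 5.4375
```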

Table 6.9 illustrates this coding. Once the coding of the two words has been performed, the addition of the two logarithms can be done. The last operation to be performed is the antilogarithm of the sum, to obtain the value of the final product.

Using this method, an 11.6% error on the product of two binary operands (i.e. the sum of two logarithmic numbers) occurs. We would like to reduce this error without increasing the complexity of the operation or of the operator. Since the transformations used in this system are logarithms and antilogarithms, it is natural to think that the complexity of the correction system will grow exponentially as the error approaches zero. We analyse the error to derive an easy and effective way to increase the accuracy of the result.

Table-6.9: Coding of the binary logarithm according to the segmentation method

Figure 6.45 describes the architecture of the logarithmic multiplier with the different variables used in the system.

[Click to enlarge image]Figure-6.45: Block diagram of a logarithmic multiplier

Error analysis: Let us define the different functions used in this system.

The logarithm and antilogarithm curves are approximated by linear segments. The segments start at power-of-two values and end at the next power-of-two value. Figure 6.46 shows how a logarithm is approximated. The same is true for the antilogarithm.

[Click to enlarge image]Figure-6.46: Approximated value of the logarithm compared to the exact logarithm

By adding the single value 17.2^(-8) to the two logarithms, the maximum error comes down from 11.6% to 7.0%, an improvement of 40% compared with a system without any correction. The only cost is the replacement of the internal two-input adder by a three-input adder.

A more complex correction system which leads to better precision but at a much higher hardware cost is possible.

In Table 6.10 we suggest a system which chooses one correction among three, depending on the value of the input bits. Table 6.10 can be read as the values of the logarithms obtained after the coder for either a1 or a2. The penultimate column represents the ideal correction which should be added to get 100% accuracy. The last column gives the correction chosen among three possibilities: 32, 16 or 0.

Three decoding functions have to be implemented for this proposal. If the exclusive-OR of bits a-2 and a-3 is true, then the added value is 32.2^(-8). If all the bits of the decimal part are zero, then the added value is zero. In all other cases, the added value is 16.2^(-8).

This decreases the average error. The drawback is that the maximum error is minimised only if the steps between two ideal corrections are bigger than the unity step. To minimise the maximum error, the correcting functions should increase in an exponential way. Further research could be performed in this area.

Table-6.10: A more complex correction scheme

[Table of Contents] [Top of Document]

6.8 Addition and Multiplication in Galois Fields, GF(2n)

Group theory is used to introduce another algebraic system, called a field. A field is a set of elements in which we can do addition, subtraction, multiplication and division without leaving the set. Addition and multiplication must satisfy the commutative, associative and distributive laws. A formal definition of a field is given below.


Let F be a set of elements on which two binary operations, called addition "+" and multiplication ".", are defined. The set F together with the two binary operations + and . is a field if the following conditions are satisfied:

  1. F is a commutative group under addition +. The identity element with respect to addition is called the zero element or the additive identity of F and is denoted by 0.
  2. The set of nonzero elements in F is a commutative group under multiplication ".". The identity element with respect to multiplication is called the unit element or the multiplicative identity of F and is denoted by 1.
  3. Multiplication is distributive over addition; that is, for any three elements a, b, c in F: a . ( b + c ) = a . b + a . c
The number of elements in a field is called the order of the field.

A field with finite number of elements is called a finite field.

Let us consider the set {0,1} together with modulo-2 addition and multiplication. We can easily check that the set {0,1} is a field of two elements under modulo-2 addition and modulo-2 multiplication. This field is called a binary field and is denoted by GF(2).

The binary field GF(2) plays an important role in coding theory [Rao74] and is widely used in digital computers and data transmission or storage systems.
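In GF(2), addition reduces to the exclusive-OR and multiplication to the AND of two bits, so neither operation ever produces a carry. A quick check of the distributive law over all of {0, 1} (function names ours):

```python
def gf2_add(a, b):
    return a ^ b        # modulo-2 addition: no carry

def gf2_mul(a, b):
    return a & b        # modulo-2 multiplication

# distributivity a.(b + c) = a.b + a.c holds for every triple in {0, 1}
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert gf2_mul(a, gf2_add(b, c)) == \
                   gf2_add(gf2_mul(a, b), gf2_mul(a, c))
```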

Another example, using the residue number representation [Garn59], is given below. Table 6.11 represents the values of N from 0 to 29 with their representation according to the residues of the base (5, 3, 2). The addition and multiplication of two terms in this base can be performed according to the next example:

Table-6.11: N varying from 0 to 29 and its representation in the residue number system

The most interesting property of these systems is that there is no carry propagation inside the set. This can be attractive when implementing these operators in VLSI.
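The carry-free, digit-independent arithmetic can be sketched for the base (5, 3, 2) of Table 6.11 as follows (function names ours; the decoder is a brute-force stand-in for the Chinese remainder theorem):

```python
BASE = (5, 3, 2)        # pairwise coprime moduli: range 0 .. 5*3*2 - 1

def encode(n):
    return tuple(n % m for m in BASE)

def rns_add(x, y):      # digit-wise, no carry between moduli
    return tuple((a + b) % m for a, b, m in zip(x, y, BASE))

def rns_mul(x, y):
    return tuple((a * b) % m for a, b, m in zip(x, y, BASE))

def decode(x):          # brute-force Chinese-remainder decoding
    return next(n for n in range(30) if encode(n) == x)

print(decode(rns_add(encode(17), encode(8))))   # 25
print(decode(rns_mul(encode(4), encode(7))))    # 28
```

Each residue digit is processed independently, which is exactly why no carry ever propagates between positions; the price is the decoding step back to positional notation.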

[Table of Contents] [Top of Document]



This chapter edited by D. Mlynek