SSPD_Chapter 7_Part 7_Scaling of MOS circuits.

Module by: Bijay_Kumar Sharma.

Summary: SSPD_Chapter 7_Part 7_Scaling of MOS circuits deals with the basic definition of scaling, its effect on the economy of scale in IC production, and the various design options it offers.


7.7.1. What is scaling ?

Scaling is the reduction of lateral and vertical device dimensions in order to achieve higher packing density on a silicon IC chip, faster switching devices, greater functionality and a better economy of scale.

In 1966, emulsion mask patterns were contact printed using UV light on 200 mm wafers, with a smallest feature size of 25 μm.

In 2007, using 193 nm deep-ultraviolet (DUV) immersion lithography, feature sizes of 30 nm ± 6 nm were imprinted on 300 mm wafers.

In 2012, using an extreme-ultraviolet (EUV) 100 nm reflective-mirror exposure system (Next Generation Lithography, NGL), feature sizes of less than 30 nm were expected to be imprinted on 450 mm wafers.

In the 1970s, the growth of the personal computer fueled CMOS scaling.

Today, in the 21st century, the Internet, high-speed communication that converges data, video and audio, and mobile communication are fueling CMOS scaling consistent with Moore’s Law, and they have sharply brought into focus the need for cheap, high-speed, low-power devices, for which SiGe is emerging as the most promising candidate.

DSP and the ‘DSP and real-world interface’ are at the heart of the Internet, so developing System-on-Chip (SoC) designs and integrating digital and analog/RF functions became high-priority areas of research. Advanced SiGe BJT processes also shrink the lateral and vertical dimensions and boost performance; hence SiGe has become the technology of choice for 24 GHz radar for blind-spot detection, 77 GHz radar for automobile collision warning and advanced cruise control, 60 GHz Wi-Fi chips for next-generation wireless LANs and backbone networks, software-defined radios, cellular handsets and high-frequency automatic test equipment.

7.7.2. How does scaling help achieve greater functionality?

As we downscale the lateral and vertical dimensions we simultaneously add more functions to the chips.

In 1971, the μP4004 was a 16-pin DIP-packaged chip with the CPU built on a single die. It was an LSI chip which, along with three other LSI chips, could be used to build a complete functional computer. These LSI chips were the 4001 (ROM with 4-bit output), the 4002 (RAM with 4-bit input/output) and the 4003 (a static shift register for expanding the input/output lines). A photograph of the exposed 16-pin μP4004 chip and of its hermetically sealed package is shown in Figure 7.7.1.

The 4004 LSI chip used 10 μm PMOS(E) technology. It had 2300 transistors and could carry out 92,000 instructions per second, that is, about 10.8 μs per instruction.


Figure 7.7.1. Photograph of the μP4004 16-pin DIP packaged chip. Left: the exposed chip. Right: the hermetically sealed chip.

Figure 7.7.2. Block diagram of the Intel 8086 microprocessor

[1 – block of general-purpose registers; 2 – block of segment registers; 3 – 20-bit address adder; 4 – internal bus C; 5 – instruction queue; 6 – control unit; 7 – control-unit bus; 8 – internal bus A; 9 – arithmetic logic unit (ALU); 10 – address bus; 11 – data bus; 12 – control bus. F – flags register; AX – accumulator; BX – base register; CX – count register; DX – data register; SP – stack pointer; BP – base pointer; SI – source index; DI – destination index; CS – code segment; DS – data segment; SS – stack segment; ES – extra segment; IP – instruction pointer.]

In the late 1970s, all the parts of the CPU were put on one die in the μP8086 (a 16-bit processor in a 40-pin DIP package), as shown in Figure 7.7.2.

In the 1980s, the μP80286 and μP80386 were introduced; these had the x87 floating-point hardware on a separate chip on the same motherboard.

In 1989, a tightly pipelined 8086 architecture was introduced as the μP80486. This included the x87 floating-point hardware on-die.

In the 1990s, Single Instruction Multiple Data (SIMD) hardware started out as a separate chip. Sun Microsystems moved it on-chip when the Visual Instruction Set (VIS) was introduced in the UltraSPARC.

In 1992–93 the Pentium was introduced, which had the CPU, memory and input/output ports built on the same chip, plus a second ALU. It was Intel’s first superscalar processor.


Figure 7.7.3. Photograph of hermetically sealed Pentium 4 chip with 423 and 478-pin PGA packages.

L1 cache was moved on-die in Pentium 2 Architecture.

The Pentium 4 architecture started out with a 400 MHz system bus and 256 KB of L2 cache (later increased to 800 MHz and 2 MB). The first models contained 42 million transistors, used the 0.18 micron process and came in 423-pin and 478-pin PGA packages. Intel's first Pentium 4 chipset was the 850, which supported only Rambus memory (RDRAM), but subsequent chipsets switched to DDR SDRAM.

Subsequently Floating-point hardware moved on-die.

Thus new features were added to the scaled-down chips, adding performance-enhancing functionality to the processor die.

7.7.3. Cost Metrics of IC Chip.

Cost per IC = Variable cost per IC + (Fixed cost / Production volume)

Variable cost per IC = (Cost of die + Cost of die test + Cost of packaging and final test) / (Final test yield)        (Eq. 7.7.1)

Final test yield is the fraction of assembled parts that pass the final test; hence it is unity or less.

The die cost is calculated by the formula:

Cost of die = (Cost of wafer) / (Dies per wafer × Die yield)        (Eq. 7.7.2)

The denominator of Eq. 7.7.2 gives the total number of good dies, i.e. the expected number of working dies obtained from a given wafer. It is the product of the total number of dies on the wafer and the die yield of that wafer.

Dies per wafer is calculated by the following formula:

Dies per wafer = [π × (D/2)^2] / S - (π × D) / √(2 × S)        (Eq. 7.7.3)

where D = wafer diameter in cm and S = die area in cm².

Rewriting Eq.7.7.3 we get:

Dies per wafer = (Wafer area) / (Die area) - (Wafer circumference) / √(2 × Die area)        (Eq. 7.7.3a)

Eq.7.7.3a can be interpreted as:

Dies per wafer ≈ (number of die-sized areas that fit into the total wafer area) - (partial dies lost around the circular wafer edge)
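As a quick check of Eq. 7.7.3, the short Python sketch below (the function name dies_per_wafer is mine, chosen for illustration) computes the gross dies per wafer from the wafer diameter D in cm and the die area S in cm²; for the 386DX row of Table 7.7.1 below (150 mm wafer, 43 mm² die) it gives approximately 360 dies.

    import math

    def dies_per_wafer(D_cm, S_cm2):
        # Eq. 7.7.3: wafer area divided by die area, minus the partial
        # dies lost along the wafer circumference.
        return (math.pi * (D_cm / 2.0) ** 2) / S_cm2 - (math.pi * D_cm) / math.sqrt(2.0 * S_cm2)

    # 386DX row of Table 7.7.1: 150 mm wafer, 43 mm^2 = 0.43 cm^2 die
    print(int(dies_per_wafer(15.0, 0.43)))   # -> about 360 gross dies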

Yield of the dies from a wafer is calculated by the following formula:

Die yield = Wafer yield × [1 + (Defects per unit area × Die area) / α]^(-α)        (Eq. 7.7.4)

where the wafer yield is generally taken as 100% (meaning the wafer itself is perfect), the defect density is given in defects per cm², and the die area is in cm².

There are three kinds of defects: random, systematic and parametric. Here we are concerned with random defects, i.e. the density of random manufacturing defects on the wafer.

α is an empirical parameter that corresponds to the number of critical masking levels; it is a measure of the manufacturing complexity of the circuit.
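A minimal Python sketch of Eq. 7.7.4 follows (the function name die_yield is chosen here for illustration). With the figures quoted later in Section 7.7.4 for the 90 nm ‘Alpha 21264C’ (95% wafer yield, 0.5 defects/cm², a 1.15 cm² die, α = 4), it gives a die yield of about 0.555.

    def die_yield(wafer_yield, defects_per_cm2, die_area_cm2, alpha):
        # Eq. 7.7.4: yield drops as (defect density x die area) grows,
        # moderated by the complexity parameter alpha.
        return wafer_yield * (1.0 + defects_per_cm2 * die_area_cm2 / alpha) ** (-alpha)

    print(round(die_yield(0.95, 0.5, 1.15, 4), 3))   # -> 0.555 (Alpha 21264C, Section 7.7.4)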

Using Eq 7.7.4 and Eq.7.7.3 in conjunction with Eq.7.7.2, the die cost is calculated for seven processor chips as given in Table 7.7.1.

Table 7.7.1.Die Cost of various processors.

Chip  Wafer dia. (mm)  Wafer cost ($)  Defects per cm²  Die area (mm²)  Dies per wafer  Yield (%)  Die cost ($)
386DX 150 900 1 43 360 71 4
486DX2 150 1200 1 81 181 54 12
PowerPC601 150 1700 1.3 121 115 28 53
HP PA7100 150 1300 1 196 66 27 73
DECα 150 1500 1.2 234 53 19 149
SuperSPARC 150 1700 1.6 256 48 13 272
Pentium 150 1500 1.5 296 40 9 417

Using Eq.7.7.1 we calculate the cost of the seven processors as given in Table 7.7.2.

Table 7.7.2. IC Chip variable Costs.

Chip  Die cost ($)  Package pins  Package type  Package cost ($)  Test & assembly cost ($)  Total cost ($)
386DX 4 132 QFP 1 4 9
486DX2 12 168 PGA 11 12 35
PowerPC601 53 304 QFP 3 21 77
HP PA7100 73 504 PGA 35 16 124
DECα 149 431 PGA 30 23 202
SuperSPARC 272 293 PGA 20 34 326
Pentium 417 273 PGA 19 37 473

As we see, with increasing complexity and an increasing number of functions per chip, the die area is bound to increase, as the fifth column of Table 7.7.1 shows. Increased die area means reduced die yield, which in turn means increased die cost and hence increased chip cost.

The manufacturing process dictates the wafer cost, wafer yield and defect density. So the sole control available to the designer is the die area: how much functionality should be packed into a chip for it to be the most cost effective?

Empirically, the cost per die grows roughly as the square of the die area; in fact it is a non-linear function of die area, since the dies per wafer fall roughly as 1/(die area) while the die yield falls as [1 + (defect density × die area)/α]^(-α), and the die cost is inversely proportional to their product.

7.7.4. Dependence of cost metrics on die area, defect density and alpha (the circuit manufacturing complexity parameter)

In 2006, for the 90 nm generation ‘Alpha 21264C’ device fabricated on a 300 mm wafer, the defect density was 0.5 per cm² and α = 4. The cost of a 300 mm wafer was $4700 and the wafer yield was 95%.

Using Eq. 7.7.3, dies/wafer = 231.

Using Eq. 7.7.4, die yield = 0.555.

Therefore, good dies/wafer = 231 × 0.555 ≈ 128.

Die cost = Wafer cost / (Dies per wafer × Die yield) = $4700 / (231 × 0.555) ≈ $4700 / 128 ≈ $36.72        (using Eq. 7.7.2)

The ‘Alpha 21264C’ chip has a 524-pin CLGA package.

[The corresponding test cost ($5.50) and package cost ($25.00) for this chip are listed in Table 7.7.3, Part II.]

Using Eq. 7.7.1 (with the final test yield taken as unity):

Total variable cost = Die cost + Test cost + Package cost = $36.72 + $5.50 + $25.00 = $67.22
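Chaining the same steps in a few lines of Python (assuming, as the figures above imply, a final test yield of 1) reproduces the Alpha 21264C row of Table 7.7.3; the slight difference from $36.72/$67.22 arises only because the table rounds 231 × 0.555 down to 128 good dies before dividing.

    wafer_cost = 4700.0                     # $ per wafer (Section 7.7.4)
    gross_dies, die_yield_ = 231, 0.555     # from Eq. 7.7.3 and Eq. 7.7.4
    test_cost, package_cost = 5.50, 25.00   # from Table 7.7.3, Part II

    die_cost = wafer_cost / (gross_dies * die_yield_)            # Eq. 7.7.2 -> ~36.66
    total_variable_cost = die_cost + test_cost + package_cost    # Eq. 7.7.1 -> ~67.16
    print(round(die_cost, 2), round(total_variable_cost, 2))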

By a similar algorithm, the total variable costs of five processors are calculated and tabulated in Table 7.7.3.

Table 7.7.3. Variable costs of five processors using 90 nm generation technology and 300 mm wafers (Part I).

μP  Die area (mm²)  Pins  Wafer cost ($)  Package  Dies/wafer  Die yield
Alpha 21264C 115 524 4700 CLGA 231 0.555
Power 3-II 163 1088 4000 SLC 157 0.452
Itanium 300 418 4910 PLC 79 0.266
MIPS R14000 204 527 3700 CPGA 122 0.383
Ultra SPARCIII 210 1368 5200 FC-LGA 118 0.374

Table 7.7.3. Variable costs of five processors using 90 nm generation technology and 300 mm wafers (Part II).

μP  Functional dies/wafer  Die cost ($)  Test cost ($)  Package cost ($)  Total cost ($)
Alpha 21264C 128 36.72 5.5 25 67.22
Power 3-II 71 56.34 5.16 20 81.50
Itanium 20 245 12.54 20 277.54
MIPS R14000 46 80.43 7.98 25 113.41
Ultra SPARC III 44 118.18 10.7 30 158.88

As we have already noted, and as Table 7.7.3 further verifies, the total cost is a non-linear function of the die area, growing roughly as the square of the die area.

7.7.4.1. Effect of the Defect density on the cost metrics.

In Table 7.7.4, the effect of defect density on the cost metrics is evaluated.

Table 7.7.4. Total cost of the Itanium processor (die area = 3 cm²) versus defect density, for a wafer diameter of 300 mm.

Defect density(d/cm2) D/W Y FD/W DC($) TC($) Package Cost($) Total cost($)
0.3 79 0.422 33 148.8 7.9 20 176.39
1 79 0.101 8 612.58 32.91 20 665.41

D/W – dies per wafer, Y- die yield, FD/W – functional dies per wafer, DC- die cost, TC- test cost.

As is evident from Table 7.7.4, an increase in defect density drastically reduces the yield and hence drastically increases the die cost. So the defect density has to be kept under tight control, and it must be reduced as the die area is increased if the die yield is to be maintained at a constant level.
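The sensitivity to defect density follows directly from Eq. 7.7.4; the minimal sketch below reproduces the two yield figures of Table 7.7.4 for the Itanium die (3 cm², α = 4, 95% wafer yield).

    def die_yield(wafer_yield, d, area, alpha):
        return wafer_yield * (1.0 + d * area / alpha) ** (-alpha)   # Eq. 7.7.4

    for d in (0.3, 1.0):                                 # defects per cm^2, as in Table 7.7.4
        print(d, round(die_yield(0.95, d, 3.0, 4), 3))   # -> 0.422 and 0.101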

7.7.4.2. Effect of alpha (the manufacturing complexity parameter) on cost metrics.

In Table 7.7.5 the effect of the alpha parameter on the cost metrics of the Alpha 21264C processor (die area = 1.15 cm²) is studied.

Table 7.7.5. Total cost of the Alpha 21264C processor (die area = 1.15 cm²) versus the alpha parameter.

Alpha parameter D/W Y FD/W DC($) TC($) Package Cost($) Total cost($)
α = 4 231 0.415 95 49.47 7.36 25 81.44
α = 6 231 0.404 93 50.54 7.57 25 83.11

An increase in the number of critical masking (interconnection) levels causes only a slight reduction in yield and hence only a slight increase in total cost. This means that improved signal routability in the IC chip is inexpensive.
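The weak dependence on α can be checked with Eq. 7.7.4. Table 7.7.5 does not state the defect density it assumes, so the following figures are illustrative rather than a reproduction of the table: using the 0.5 defects/cm² quoted earlier for this 1.15 cm² die and a 95% wafer yield, the die yield is about 0.95 × (1 + 0.144)^(-4) ≈ 0.555 at α = 4, against about 0.95 × (1 + 0.096)^(-6) ≈ 0.549 at α = 6, i.e. only a marginal change.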

7.7.5. Wafer size transition from 200mm to 300mm and now to 450mm.

The IC industry's ability to increase productivity by 25 to 30% per year is the combined result of wafer size transitions, shrinking device geometries, equipment productivity improvements and incremental yield improvements. Wafer size transitions historically account for about 4% of the stated 25–30% productivity increase. This is precisely why wafer size has been increased periodically, as shown in Table 7.7.6.

Table 7.7.6. Wafer size transition leading to increased throughput and increased productivity and decreased unit cost.

Year  WD (mm)  t (μm)  WC ($)  D/cm²  DA (mm²)  D/W  Yield  FD per wafer  Die cost ($)
59-61 25 200 ? 1 81 6 48% 3  
61-63 51 275 ? 1 121 7 36% 3  
63-66 76 375 ? 1 196 11 22% 3  
66-68 100 525 ? 1 234 19 17% 3  
68-70 130 625 ? 1 256 34 16% 5  
70-76 150 675 1491 1 296 40 13% 5 298
76-00 200 725 1050 1 296 80 13% 10 105
00-12 300 775 735 1 296 200 13% 26 28
12-? 450 925 515 1 296 479 13% 62 8

WD - wafer diameter, t - wafer thickness, WC - wafer cost, D - defect density, DA - die area, D/W - gross dies per wafer, FD - functional dies.

According to VLSI Research, the starting wafer cost per unit would be reduced by about 30% with each wafer size increase from 1970 onward.

Table 7.7.6 clearly indicates that with increasing wafer size the economy of scale of IC production improves. With constant die area the yield remains constant, but if increased functionality leads to increased die area then the defect density will have to be brought down concomitantly, so that the product of defect density and die area stays constant and the yield is preserved. Only this constancy of die yield will ensure the improved economy of scale.
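For example, with α = 4 the die-yield factor of Eq. 7.7.4 depends only on the product of defect density and die area: a 1.5 cm² die at 1 defect/cm² and a 3 cm² die at 0.5 defects/cm² both give (1 + 1.5/4)^(-4) ≈ 0.28, so halving the defect density exactly compensates for doubling the die area.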

The driving forces for all wafer size transitions include ever-increasing die size and the increasing number of integrated functions per chip.

Trends indicate that wafer size transitions industry-wide have typically enabled a 4% per year productivity improvement, and the transition to 300 mm wafers should provide a 2 to 4% per year reduction in IC cost per cm².

7.7.5.1. Hydrogen injection to keep the defect density low as wafer size increases.

Hydrogen injection during wafer manufacturing has helped in the following ways:

  1. Improved uniformity across the wafer, across the run and from run to run;
  2. Control of defects such as oxidation-induced stacking faults and, simultaneously, control of point defects.

Hydrogen injection produces higher yields helping to improve cost effectiveness.

7.7.6. Design options offered by scaling and wafer size transitions.

Scaling gives us a higher transistor budget, meaning a higher transistor density. So, for the same functionality, the die area can be decreased, and for the same die area, the functionality can be increased.

For the same die area, increased functionality means more transistors and hence more features. This pays off in terms of increased CPU performance, and is referred to as adding performance-enhancing functionality to a processor die.

In the second option, where we go for the same functionality and a reduced die area, we have four dividends:

  1. we get a higher yield and hence a decreased die cost;
  2. reduced power dissipation because of the reduced silicon area;
  3. a higher clock rate if power dissipation is maintained at the original level;
  4. or a combination of higher clock rate and reduced power dissipation.

Wafer size increases and the concomitant defect density reduction allow the yield to be maintained even with increased die area. Thus the number of functional chips per wafer is kept constant, even though the chips are much larger in size and much richer in performance-enhancing functionality.

Processor power consumption and dissipation have been increasing along with die size. This creates the limitation of the fire wall (P0 watts = power dissipation density × die area), the maximum dissipation that can be tolerated by a silicon chip.
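For illustration only, with hypothetical round numbers: if a chip can tolerate a dissipation density of 100 W/cm², a 1.5 cm² die runs into this fire wall at P0 = 100 W/cm² × 1.5 cm² = 150 W.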

Here again we have two options:

  1. Multi-core processors, which combine smaller, cheaper and less power-hungry processors working in parallel to do the same job, or
  2. A single-core processor, which is larger, more expensive and more power hungry.

Manufacturers are opting for multi-core processors.

Today, mobile computing demands that more functionality be added to a single die, not for the purpose of increased performance but for an ideal mobile computing environment. This means combining non-volatile memory, volatile memory, the CPU and multiple types of wireless capability (Bluetooth, 802.11b, 802.11g, GSM, etc.) on a single die. Here again we can either integrate them on a single die or combine multiple chips with multiple functions into a single module.
