DDR3 Technology Progress and Future Development


Tuesday, October 07, 2008
To the DRAM industry, 2007 and 2008 are unquestionably down years. Overexpansion by the DRAM makers resulted in an over-supplied market where DDR2 die prices fell sharply. With the expectation of lower PC sales in Q4, DRAM inventory is expected to continue its climb and DRAM prices are expected to break new lows. Up to this point, the DDR2 1Gb eTT spot price has fallen to 1.15 USD and DDR2 667 1Gb is down to 1.19 USD. Even though DDR2 prices are already below variable cost, developing DDR3 remains the key factor for DRAM makers in winning the next war.
DDR3 was initially developed at the beginning of 2005 but was not applied on motherboards until the middle of 2007, when DDR3 first began production on 90nm and Intel introduced the P35, the first motherboard chipset to support DDR3. With advancements in process technology and the continued introduction of new PC models, DRAMeXchange analysts expect DDR3 to account for 20-30% of the PC market.
DDR3 Specification Advantages: higher data rate and lower supply voltage
According to the JEDEC specification, DDR3's technical advantages are its higher data rate and lower supply voltage. Compared to DDR2, DDR3 saves roughly 30% of power and reaches speeds of 1600Mbps, nearly twice as fast as DDR2. Because of the higher data rate, DDR3 can transfer 8 bits of data per clock cycle while DDR2 can only transfer 4 bits. DDR3 operates at a 1.5V supply, which is 17% lower than the 1.8V supply required by DDR2. The lower supply voltage allows notebooks (NB) to extend battery life. There are even plans for an ultra-low 1.35V supply in certain NB makers' roadmaps, in order to grab share in the high-end NB market and distance themselves from competitors in terms of technical specifications. (Figure-1)

Strong support for DDR3 from chipset vendors
In terms of DDR3 support, Intel chipsets claim the best coverage, with the most chipsets and the most complete range from low end to mid/high end. The main differentiator among them is speed, especially with Intel's high-end X Series chipsets, which support the Intel XMP (Extreme Memory Profile) over-clocking scheme to optimize DDR3 memory performance. Although its core strength is in the graphics chip market, nVidia has also begun to introduce DDR3 support in its nForce Series chipsets with a memory over-clocking scheme, EPP2 (Enhanced Performance Profiles), which optimizes DDR3 memory performance by fine-tuning the DDR3 memory configuration in similar fashion to Intel's XMP. However, because its memory controller is integrated into the CPU, AMD will not be able to support DDR3 until the next-generation AM3 platform is launched. (Figure-2)

DDR3 Process development and Current Market Analysis
Judging from the product roadmaps, the only DDR3 manufacturers currently remain the primary DRAM makers such as Samsung, Hynix, and Elpida. Of the Taiwanese DRAM makers, only Nanya has begun manufacturing DDR3 dies. In terms of production scale, Samsung and Elpida are the most aggressive in ramping up DDR3 production; DRAMeXchange estimates that Samsung and Elpida will have nearly 10% of their production capacity focused on DDR3. In addition, with advancements in process technology, DDR3 dies will begin to be produced on 70nm, 65nm, or even 56nm processes in the second half of this year, with mass production on these advanced processes officially beginning sometime in 2009. Given the increased willingness to adopt DDR3 among PC OEM makers, it is possible for DDR3 to reach a large enough economy of scale to begin a generation shift from DDR2 to DDR3. (Figure-3)
According to DRAMeXchange figures, DDR3 dies will make up roughly 5% of total DRAM dies produced by the end of this year. Although the proportion is small, DRAM makers have been very persuasive in convincing PC OEM makers to adopt DDR3. Besides promising a steady supply of DDR3 dies, DRAM makers have offered extremely attractive prices to lower the price barrier between DDR2 and DDR3 memory modules for those PC makers willing to adopt DDR3. Thus, in the NB market, we have begun to see new product launches featuring DDR3 from branded PC makers such as Dell, Sony, Lenovo, Toshiba, Acer, etc. We expect more PC OEM makers to launch products containing DDR3 by the end of this year.

Understanding DDR3 Serial Presence Detect (SPD) Table

Tuesday, July 17, 2007

Introduction
Since I wrote "Understanding DDR Serial Presence Detect (SPD) Table" in 2003, I have been getting a lot of feedback from readers. I added "Understanding DDR2 Serial Presence Detect (SPD) Table" in 2006. Some of you told me that you are using these articles to train your employees and to introduce the mysterious SPD concept to your customers. I feel honored by your responses.
Lately, CST has started shipment of a DDR3 EZ Programmer. Since the DDR3 DIMM was introduced only recently, I think this is the time to add an article on the DDR3 SPD table. With the additional years of development, the DDR3 SPD table is definitely more sophisticated than the original DDR and DDR2 SPD tables, and it will take some attention to follow through. I will try to use as much layman's language as I can to accommodate you all.

Serial Presence Detect (SPD) data is probably the most misunderstood subject in the memory module industry. Most people only know it as the little EEPROM device on the DIMM that often keeps the module from working properly in the computer. In reality, it is quite the opposite: the SPD data provides vital information to the system BIOS to keep the system working in optimal condition with the memory DIMM. This article attempts to guide you through the construction of an SPD table with "Turbo-Tax" style multiple-choice questions. I hope you'll find it interesting and useful.
Sample JEDEC Standard SPD Data Table


Byte 0
Number of Serial PD Bytes Written/ SPD Device Size/ CRC Coverage
Bits 3 to 0 describe the number of EEPROM bytes actually used for the Serial Presence Detect data. Bits 6 to 4 describe the total number of bytes available in the EEPROM device, usually 128 or 256 bytes. On top of that, Bit 7 indicates whether the CRC encoded in bytes 126 and 127 covers bytes 0-116 or bytes 0-125.
(When the CST EZ-SPD Programmer is used: simply select items from the three tables and the final hex number is calculated automatically.)
The most common configuration is:
Total SPD bytes = 256
CRC coverage = 0-116 bytes
SPD bytes used = 176
Resulting code:   92h
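For readers who prefer to see the packing spelled out, here is a minimal sketch in Python of the Byte 0 bit fields described above. The field codes are taken from the JEDEC DDR3 SPD definition as summarized here; treat them as an illustration rather than a complete table.

# Byte 0 packing: bit 7 = CRC coverage, bits 6-4 = total EEPROM bytes, bits 3-0 = SPD bytes used.
SPD_BYTES_USED = {128: 0b0001, 176: 0b0010, 256: 0b0011}   # bits 3-0
SPD_BYTES_TOTAL = {256: 0b001}                             # bits 6-4
CRC_COVERAGE = {"0-125": 0, "0-116": 1}                    # bit 7

def spd_byte0(bytes_used, bytes_total, coverage):
    return (CRC_COVERAGE[coverage] << 7) | (SPD_BYTES_TOTAL[bytes_total] << 4) | SPD_BYTES_USED[bytes_used]

print(hex(spd_byte0(176, 256, "0-116")))   # 0x92, matching the common configuration above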
Byte 1
SPD Revision
Revision 0.0              00h
Revision 0.5              05h
Revision 1.0              10h
Revision 1.1              11h
Revision 1.2              12h
Byte 2
DRAM Device Type
This refers to the DRAM type. In this case, we are only dealing with DDR3 SDRAM.
DDR3  SDRAM:     0Bh
Byte 3
Module Type
This relates to the physical size, and category of memory module.
Undefined                                00h
RDIMM (Registered Long DIMM)    01h
UDIMM (Unbuffered Long DIMM)  02h
SODIMM (Small Outline DIMM)      03h
Byte 4
SDRAM Density and Banks
This byte defines the total density of the DDR3 SDRAM, in bits, and the number of internal banks into which the memory array is divided.
Presently, all DDR3 devices have 8 internal banks.
SDRAM Chip Size
512Mb           01h
1Gb               02h
2Gb               03h
4Gb               04h
Byte 5
SDRAM Addressing
This byte describes the row addressing and column addressing in the SDRAM Device.
512Mb chips
13 Row X 10 Column         09h
13 Row X 12 Column         0Bh
12 Row X 10 Column         01h
1Gb chips
14 Row X 10 Column         11h
14 Row X 12 Column          13h
13 Row X 10 Column          09h
2Gb chips
15Row X  10 Column         19h
15 Row X 12 Column         1Bh
14 Row X 10 Column         11h
Byte 6
Reserved       00h
Byte 7
Module Organization
This byte describes the organization of the SDRAM module: the number of ranks and the device width of each DRAM.
(When the CST EZ-SPD Programmer is used: simply select the number of ranks and the device width; the final hex number is calculated automatically.)
1 Rank module using X8 chips       01h
2 Rank module using X8 chips       09h
1 Rank module using X4 chips       00h
2 Rank module using X4 chips       08h
4 Rank module using X8 chips       19h
4 Rank module using X4chips        18h
1 Rank module using X16 chips      02h
2 Rank module using X16 chips      0Ah
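The table above follows a simple packing: the device width code sits in bits 2-0 and the rank count (minus one) in bits 5-3. A small sketch, assuming those field positions:

# Byte 7 packing: bits 5-3 = number of ranks - 1, bits 2-0 = device width code.
WIDTH_CODE = {4: 0b000, 8: 0b001, 16: 0b010}

def spd_byte7(ranks, device_width):
    return ((ranks - 1) << 3) | WIDTH_CODE[device_width]

print(hex(spd_byte7(2, 8)))   # 0x9 -> the "2 Rank module using X8 chips" row above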
Byte 8
Module Memory Bus Width
This refers to the primary bus width of the module plus the additional width provided by ECC.
16bit                            01h
32bit                            02h
64bit (no parity)             03h
64bit + ECC (72bit)         0Bh
Byte 9
Fine timebase (FTB) Dividend / Divisor
This byte defines a value in picoseconds that represents the fundamental timebase for fine grain timing calculations. This value is used as a multiplier for formulating subsequent timing parameters. The granularity in picoseconds is derived from Dividend being divided by the Divisor.
Granularity:
2.5ps       52h
5ps          55h
Byte 10
Medium Timebase (MTB) Dividend
Byte 11
Medium Timebase (MTB) Divisor
These two bytes define a value in nanoseconds that represents the fundamental timebase for medium grain timing calculations. This value is used as a multiplier for formulating subsequent timing parameters. The two bytes form the dividend and the divisor that determine the granularity of the medium timebase.
Granularity
0.125ns              Byte 10       01h      Byte  11      08h
0.0625ns             Byte 10       01h      Byte  11      10h
Byte 12
Minimum SDRAM Cycle Time (tCK min)
This byte describes the minimum cycle time for the module in medium timebase (MTB) units.
For MTB granularity = 0.125ns (Byte 10 and Byte 11)
DDR3 400 MHz clock (800 MT/s data rate)                  14h
DDR3 533 MHz clock (1066 MT/s data rate)                 0Fh
DDR3 667 MHz clock (1333 MT/s data rate)                 0Ch
DDR3 800 MHz clock (1600 MT/s data rate)                 0Ah
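The encoding is simply the timing value divided by the medium timebase. A short sketch of the conversion used by Byte 12 (and by the timing bytes that follow), assuming the common 0.125 ns MTB:

# Convert a timing in nanoseconds to MTB units (MTB = Byte 10 dividend / Byte 11 divisor).
MTB_NS = 1 / 8          # 0.125 ns, the common DDR3 medium timebase

def mtb_units(time_ns):
    return round(time_ns / MTB_NS)

print(hex(mtb_units(1.25)))   # 0xa  -> Byte 12 for DDR3-1600 (800 MHz clock, tCK = 1.25 ns)
print(hex(mtb_units(12.5)))   # 0x64 -> Byte 16 (tAAmin) for DDR3-800D further below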
Byte 13
Reserved            00h
Byte 14
CAS Latencies Supported, Low Byte
(When the CST EZ-SPD Programmer is used: simply select all supported latencies from the table. The high- and low-byte hex values are calculated automatically from the binary number.)
Latency 5, 6 supported               06h
Latency 6    supported                04h
Latency 6,7 supported                0Ch
Latency 5, 6, 7, 8 supported        1Eh
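Bytes 14 and 15 are a bitmap: bit 0 of the low byte corresponds to CL4, bit 1 to CL5, and so on. A minimal sketch of how the values above are formed, assuming that bit assignment:

# CAS Latencies Supported, low byte: set one bit per supported CAS latency (bit 0 = CL4).
def cas_latency_low_byte(latencies):
    return sum(1 << (cl - 4) for cl in latencies)

print(hex(cas_latency_low_byte([5, 6, 7, 8])))   # 0x1e, as in the last row above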
Byte 15
CAS Latencies Supported, High Byte  00h
Byte 16
Minimum CAS Latency Time (tAAmin)
Minimum CAS Latency based on medium timebase (MTB) units. tAAmin can be read off the SDRAM data sheet.
Based on medium timebase of 0.125ns
tAAmin
12.5ns         DDR3-800D        64h
15ns           DDR3-800E        78h
11.25ns       DDR3-1066E      5Ah
13.125ns     DDR3-1066F       69h
15ns           DDR3-1066G      78h
10.5ns         DDR3-1333F       54h
12ns           DDR3-1333G       60h
13.5ns          DDR3-1333H       6Ch
15ns             DDR3-1333J        78h
10ns             DDR3-1600G       50h
11.25ns       DDR3-1600H       5Ah
12.5 ns        DDR3-1600J        64h
13.75ns       DDR3-1600K       6Eh
Byte 17
Minimum Write Recovery Time (tWRmin)
This byte defines the minimum SDRAM write recovery time in medium timebase (MTB) units. This value is read from the DDR3 SDRAM data sheet.
Based on medium timebase of 0.125ns
tWR min
15ns                         78h
12ns                         60h
16ns                         80h
Byte 18
Minimum RAS# to CAS# Delay time (tRCDmin)
This byte defines the minimum SDRAM RAS# to CAS# Delay in (MTB) units
Based on medium timebase of 0.125ns
tRCD min
12.5ns         DDR3-800D        64h
15ns           DDR3-800E        78h
11.25ns       DDR3-1066E      5Ah
13.125ns     DDR3-1066F       69h
15ns           DDR3-1066G      78h
10.5ns        DDR3-1333F       54h
12ns           DDR3-1333G       60h
15ns           DDR3-1333J        78h
10ns           DDR3-1600G       50h
11.25ns       DDR3-1600H       5Ah
12.5 ns        DDR3-1600J        64h
13.75ns       DDR3-1600K       6Eh
Byte 19
Minimum Row Active to Row Active Delay time (tRRDmin)
This byte defines the minimum SDRAM Row Active to Row Active Delay in (MTB) units. This can be read from the SDRAM data sheet.
Based on medium timebase of 0.125ns
tRRD min
6.0 ns        30h
7.5  ns       3Ch
10  ns        50h
Byte 20
Minimum Row Precharge Delay Time (tRPmin)
This byte defines the minimum SDRAM Row Precharge Delay in (MTB) units. This can be read from the SDRAM data sheet.
Based on medium timebase of 0.125ns
tRP min
12.5ns         DDR3-800D        64h
15ns            DDR3-800E        78h
13.125ns      DDR3-1066F       69h
15ns            DDR3-1066G      78h
10.5ns         DDR3-1333F       54h
12ns            DDR3-1333G       60h
13.5ns         DDR3-1333H       6Ch
15ns            DDR3-1333J        78h
10ns            DDR3-1600G       50h
11.25ns       DDR3-1600H       5Ah
12.5 ns        DDR3-1600J        64h
13.75ns       DDR3-1600K       6Eh
Byte 21
Upper Nibbles for tRAS and tRC
This byte holds the most significant nibbles (upper 4 bits) of the 12-bit tRAS value (bits 3-0) and the 12-bit tRC value (bits 7-4), which pair with Byte 22 (tRAS lower byte) and Byte 23 (tRC lower byte). They are in MTB units (a worked example follows the Byte 23 table below).
Based on medium timebase of 0.125ns
An upper nibble of 1 adds 256 MTB units to the corresponding lower byte. For all standard DDR3 speed bins both tRAS and tRC exceed 256 MTB units, so the value is always
11h
Byte 22
Minimum Active to Precharge Delay Time (tRAS min), Least Significant Byte
This byte is the lower 8 bits of the 12 bit tRAS value. It is represented in MTB units. The tRAS value can be read from the SDRAM data sheet.
Based on medium timebase of 0.125ns
tRAS min
37.5ns          DDR3-800D        2Ch
37.5ns          DDR3-800E        2Ch
37.5ns          DDR3-1066E      2Ch
37.5ns          DDR3-1066F       2Ch
37.5ns          DDR3-1066G      2Ch
36ns             DDR3-1333F       20h
36ns             DDR3-1333G       20h
36ns             DDR3-1333H       20h
36ns             DDR3-1333J        20h
35ns             DDR3-1600G       18h
35ns             DDR3-1600H       18h
35ns             DDR3-1600J        18h
35ns             DDR3-1600K       18h
Byte 23
Minimum Active to Active Refresh Delay Time (tRC min), Least Significant Byte
This byte is the lower 8 bits of the 12 bit tRC value. It is represented in MTB units. The tRC value can be read from the SDRAM data sheet.
Based on medium timebase of 0.125ns
tRC   min
50ns                 DDR3-800D        90h
52.5ns              DDR3-800E        A4h
48.75ns            DDR3-1066E      86h
50.625ns          DDR3-1066F       95h
52.5ns              DDR3-1066G      A4h
46.5ns              DDR3-1333F       74h
48ns                 DDR3-1333G       80h
49.5ns              DDR3-1333H       8Ch
51ns                 DDR3-1333J        98h
45ns                 DDR3-1600G       68h
46.25ns            DDR3-1600H       72h
47.5ns              DDR3-1600J        7Ch
48.75ns            DDR3-1600K       86h
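As promised under Byte 21, here is a worked sketch of how the 12-bit tRAS and tRC values are split across Bytes 21-23, using the DDR3-800D rows above (tRAS = 37.5 ns, tRC = 50 ns) and the 0.125 ns medium timebase:

MTB_NS = 0.125

def split_12bit(time_ns):
    units = round(time_ns / MTB_NS)
    return units >> 8, units & 0xFF            # (upper nibble, lower byte)

tras_hi, tras_lo = split_12bit(37.5)           # (0x1, 0x2C) -> Byte 22 = 2Ch
trc_hi, trc_lo = split_12bit(50)               # (0x1, 0x90) -> Byte 23 = 90h
byte21 = (trc_hi << 4) | tras_hi               # tRC nibble in bits 7-4, tRAS nibble in bits 3-0
print(hex(byte21), hex(tras_lo), hex(trc_lo))  # 0x11 0x2c 0x90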
Byte 24
Minimum Refresh Recovery Delay Time (tRFCmin), Least Significant Byte
Byte 25
Minimum Refresh Recovery Delay Time (tRFCmin), Most Significant Byte
These two bytes form a 16-bit value representing tRFC in MTB units.
Based on medium timebase of 0.125ns
tRFC min
for 512Mb chip   90ns          Byte 24       D0h          Byte 25        02h
for 1Gb chip      110ns          Byte 24       70h           Byte 25       03h
for 2Gb chip      160ns          Byte 24       00h           Byte 25       05h
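Going the other way, a short sketch of how to recover tRFC in nanoseconds from the two bytes, assuming the 0.125 ns MTB:

def trfc_ns(byte24, byte25, mtb_ns=0.125):
    return ((byte25 << 8) | byte24) * mtb_ns   # Byte 25 is the high byte

print(trfc_ns(0x70, 0x03))   # 110.0 -> the 1Gb row above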
Byte 26
Minimum Internal Write to Read Command Delay time (tWTRmin)
This byte defines the minimum SDRAM Internal Write to Read Delay Time in MTB units. This value is read off the data sheet.
Based on medium timebase of 0.125ns
tWTR min     7.5ns                is for all DDR3 speed bins        3Ch
Byte 27
Minimum Internal Read to Precharge Command Delay time (tRTPmin)
This byte defines the minimum SDRAM Internal Read to Precharge Command Delay Time in MTB units. This value is read off the data sheet.
Based on medium timebase of 0.125ns
tRTP min     7.5ns                is for all DDR3 speed bins        3Ch
Byte 28
Upper Nibble for tFAW
This byte holds the most significant nibble (upper 4 bits) of the 12-bit tFAW value, in bits 3-0. It is in MTB units. This value is read off the SDRAM data sheet.
Based on medium timebase of 0.125ns
For tFAW value of 32ns or higher, the hex value for this byte is       01h
For all tFAW value less than 32ns, the hex value for this byte is       00h
Byte 29
Minimum Four Activate Window Delay Time (tFAWmin), Least Significant Byte
This works with Byte 28 to form a 12-bit value which defines the minimum SDRAM Four Activate Window Delay Time in MTB units. This data is available on the SDRAM data sheet.
Based on medium timebase of 0.125ns
tFAW min
40.0ns    DDR3-800, 1K page size             40h
50.0ns    DDR3-800, 2K page size             90h
37.5ns    DDR3-1066, 1K page size           2Ch
50.0ns    DDR3-1066, 2K page size           90h
30.0ns    DDR3-1333, 1K page size           F0h
45.0ns    DDR3-1333, 2K page size           68h
30.0ns    DDR3-1600, 1K page size           F0h
40.0ns    DDR3-1600, 2K page size           40h
Byte 30
SDRAM Output Drivers Supported
This byte defines the optional drive strengths supported by the SDRAMs on this module. This information can be found from the SDRAM data sheet.
RZQ/6  supported   RZQ/7  supported                    03h
RZQ/6  supported   RZQ/7  not supported              01h
RZQ/6  not supported   RZQ/7  supported              02h
Byte 31
SDRAM Thermal and Refresh Options
This byte describes the module's supported operating temperature ranges and refresh options. These values come from the DDR3 SDRAM data sheet. The information includes on-die thermal sensor support, ASR refresh support, 1X or 2X temperature refresh rate support, as well as the extended temperature range.
(When the CST EZ-SPD Programmer is used: simply select all supported options from the table. The hex value is calculated automatically from the binary selections.)
Byte 32-59
Reserved, General Section            00h
Byte 60
Module Nominal Height
Under or equal 15mm                     00h
Between 15 and 16mm                   01h
Between 16 and 17mm                   02h
Between 17 and 18mm                   03h
Between 18 and 19mm                   04h
Between 19 and 20mm                   05h
Between 20 and 21mm                   06h
Between 21 and 22mm                   07h
Between 22 and 23mm                   08h
Between 23 and 24mm                   09h
Between 24 and 25mm                   0Ah
Between 25 and 26mm                   0Bh
Between 26 and 27mm                   0Ch
Between 27 and 28mm                   0Dh
Between 28 and 29mm                   0Eh
Between 29 and 30mm                   0Fh
Between 30 and 31mm                   10h
Between 31 and 32mm                   11h
Between 32 and 33mm                   12h
Between 33 and 34mm                   13h
Between 34 and 35mm                   14h
Between 35 and 36mm                   15h
Between 36 and 37mm                   16h
Between 37 and 38mm                   17h
Between 38 and 39mm                   18h
Between 39 and 40mm                   19h
Between 40 and 41mm                   1Ah
Between 41 and 42mm                   1Bh
Between 42 and 43mm                   1Ch
Between 43 and 44mm                   1Dh
Between 44 and 45mm                   1Eh
Over 45mm                                  1Fh
Byte 61
Module Mechanical Maximum Thickness
This byte defines the maximum thickness in millimeters of the fully assembled module, including heat spreaders and any other components. It is in two parts: the front thickness (from the PCB surface) and the back thickness (from the PCB surface).
(When the CST EZ-SPD Programmer is used: simply select a number between 1-15 mm for the front thickness and a number between 1-15 mm for the back thickness. The program automatically converts these thickness numbers into the hex code.)
Smaller or equal to 1mm on both front and back  00h
1 to 2 mm on both front and back                     11h
2 to 3 mm on both front and back                      22h
3 to 4 mm on both front and back                      33h
2 mm on front 1 mm max on back                        01h
3 mm on front 1 mm max on back                        02h
4 mm on front 1 mm max on back                        03h
Byte 62
Reference Raw Card Used
This byte indicates which JEDEC reference design raw card was used as the basis for the module assembly. It includes the raw card designator and the revision number.
(When the CST EZ-SPD Programmer is used: simply select the revision by number and the raw card by letter. The program automatically calculates the hex value.)
Raw Card  A   rev. 0      00h ,         rev. 1         20h ,         rev. 2      40h ,         rev. 3         60h
Raw Card  B   rev. 0      01h ,         rev. 1         21h ,         rev. 2      41h ,         rev. 3         61h
Raw Card  C   rev. 0      02h ,         rev. 1         22h ,         rev. 2      42h ,         rev. 3         62h
Raw Card  D   rev. 0      03h ,         rev. 1         23h ,         rev. 2      43h ,         rev. 3         63h
Raw Card  E   rev. 0      04h ,         rev. 1         24h ,         rev. 2      44h ,         rev. 3         64h
Raw Card  F   rev. 0      05h ,          rev. 1         25h ,         rev. 2      45h ,         rev. 3         65h
Raw Card  G   rev. 0      06h ,         rev. 1         26h ,         rev. 2      46h ,         rev. 3         66h
Raw Card  H   rev. 0      07h ,         rev. 1         27h ,         rev. 2      47h ,         rev. 3         67h
Raw Card  J   rev. 0       08h ,         rev. 1         28h ,         rev. 2      48h ,         rev. 3         68h
Raw Card  K   rev. 0      09h ,         rev. 1         29h ,         rev. 2      49h ,         rev. 3         69h
Raw Card  L   rev. 0      0Ah ,         rev. 1         2Ah ,        rev. 2      4Ah ,        rev. 3         6Ah
Raw Card  M   rev. 0     0Bh ,         rev. 1         2Bh ,        rev. 2      4Bh ,        rev. 3         6Bh
Raw Card  N   rev. 0      0Ch ,         rev. 1         2Ch ,        rev. 2      4Ch ,        rev. 3         6Ch
Raw Card  P   rev. 0      0Dh ,         rev. 1         2Dh ,        rev. 2      4Dh ,        rev. 3         6Dh
Raw Card  R   rev. 0      0Eh ,         rev. 1         2Eh ,        rev. 2      4Eh ,         rev. 3         6Eh
Raw Card  T   rev. 0      0Fh ,         rev. 1         2Fh ,         rev. 2      4Fh ,         rev. 3         6Fh
Raw Card  U   rev. 0      10h ,         rev. 1         30h ,         rev. 2      50h ,         rev. 3         70h
Raw Card  V   rev. 0      11h ,         rev. 1         31h ,         rev. 2      51h ,         rev. 3         71h
Raw Card  W   rev. 0    12h ,          rev. 1         32h ,         rev. 2      52h ,         rev. 3         72h
Raw Card  X  rev. 0      13h ,          rev. 1         33h ,         rev. 2      53h ,         rev. 3         73h
Raw Card  Y   rev. 0     14h ,          rev. 1         34h ,         rev. 2      54h ,         rev. 3         74h
Raw Card  Z   rev. 0     15h ,          rev. 1         35h ,         rev. 2      55h ,          rev. 3        75h
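The table above implies a straightforward packing: the raw-card letter index in bits 4-0 and the revision in bits 6-5 (the letters I, O, Q and S are skipped, as in the table). A small sketch under that assumption:

RAW_CARDS = "ABCDEFGHJKLMNPRTUVWXYZ"   # letter order as listed above

def raw_card_byte(card, rev):
    return (rev << 5) | RAW_CARDS.index(card)

print(hex(raw_card_byte("F", 2)))   # 0x45, matching the Raw Card F / rev. 2 cell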
Byte 63
Address Mapping from Edge Connector to DRAM
For ease of module PCB layout, "mirror" address mapping is sometimes used. "Mirror" addressing flips the address line sequence on the second rank of the module. This byte describes the connection of edge connector pins for address bits to the corresponding input pins of the DDR3 SDRAMs.
Rank 1 Mapping
Standard            00h
Mirrored            01h
Byte 64-116
Reserved                           00h
Byte 117
Module Manufacturer ID Code, Least Significant Byte
This code is obtained through the manufacturer's registration with JEDEC (the standard-setting committee). A small fee is charged by JEDEC to support and maintain this record. Please contact the JEDEC office.
Byte 117 is the least significant byte while byte 118 is the most significant byte. If the ID is not larger than one byte (in hex), byte 118 should be filled with 00h.
Byte 118
Module Manufacturer ID Code, Most Significant Byte
This code is obtained through the manufacturer's registration with JEDEC (the standard-setting committee). A small fee is charged by JEDEC to support and maintain this record. Please contact the JEDEC office.
Byte 117 is the least significant byte while byte 118 is the most significant byte. If the ID is not larger than one byte (in hex), byte 118 should be filled with 00h.
Byte 119
Module Manufacturing Location
Optional manufacturer assigned code.
Byte 120
Module Manufacturing Date
Byte 120 is for the year.
(When CST EZ-SPD Programmer is used: User selects the year to automatically enter the year code in hex.)
Byte 121
The week of the year, 1 to 52.
(When the CST EZ-SPD Programmer is used: the program automatically calculates the week of the year once a day on the calendar is clicked and "OK" is selected by the user. It also automatically converts it to the proper SPD hex code.)
Byte 122-125
Module Serial Number
Optional manufacturer assigned number.
For the serial number, JEDEC has no specification on data format, nor does it dictate the location of the most significant byte. Therefore, it is up to the individual manufacturer to assign their own numbering system. (All CST testers and EZ-SPD programmers give the user the option to select either byte 122 or byte 125 as the MSB (most significant byte). The tester assumes the use of ASCII format, which is the most commonly used. The CST testers can also automatically increment the serial number on each module tested.)
Byte 126-127
SPD Cyclical Redundancy Code (CRC)
This two-byte field contains the calculated CRC for previous bytes in the SPD. A certain algorithm and data structures are to be followed in calculating and checking the code. Bit 7 of Byte 0 indicates which bytes are covered by the CRC.
(When the CST EZ-SPD Programmer is used: the CST tester automatically calculates the CRC for you based on the information in Byte 0 through Byte 125.)
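For those writing their own tools, the CRC commonly published with the JEDEC DDR3 SPD definition is a CRC-16 with polynomial 0x1021 and an initial value of 0, stored LSB in Byte 126 and MSB in Byte 127. The sketch below follows that algorithm; verify the parameters against the JEDEC document before relying on it.

def spd_crc16(spd_bytes):
    # CRC-16, polynomial 0x1021, initial value 0, as used for SPD Bytes 126-127.
    crc = 0
    for b in spd_bytes:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc & 0xFF, crc >> 8              # (Byte 126, Byte 127)

# byte126, byte127 = spd_crc16(spd[0:117])   # when Bit 7 of Byte 0 selects 0-116 coverage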
Byte 128-145
Module Part Number
The manufacturer's part number is written in ASCII format within these bytes.
Byte 128 is the most significant digit in ASCII while byte 145 is the least significant digit in ASCII. Unused digits are coded as ASCII blanks (20h).
(When the CST EZ-SPD Programmer is used: simply click the button to the right of Byte 128 to open an edit window and input the manufacturer's PN (maximum 18 digits). The software automatically translates it into ASCII and writes it into Bytes 128-145.)
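A minimal sketch of the ASCII padding rule for Bytes 128-145; the part number used here is a made-up placeholder, not a real vendor PN.

def part_number_bytes(pn, length=18):
    # ASCII-encode the part number and pad the unused digits with blanks (20h).
    return [ord(c) for c in pn.ljust(length)[:length]]

print([hex(b) for b in part_number_bytes("EXAMPLE-2GB-1333")])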

Byte 146-147
Module Revision Code
Optional Manufacturer Assigned Code
Byte 148
DRAM Manufacturer ID Code, Least Significant Byte
This code is obtained through the manufacturer's registration with JEDEC (the standard-setting committee). A small fee is charged by JEDEC to support and maintain this record. Please contact the JEDEC office. Refer to JEDEC document JEP-106 for more detail.
Byte 148 is the least significant byte while byte 149 is the most significant byte. If the ID is not larger than one byte (in hex), byte 149 should be filled with 00h.
Byte 149
DRAM Manufacturer ID Code, Most Significant Byte
This code is obtained through the manufacturer's registration with JEDEC (the standard-setting committee). A small fee is charged by JEDEC to support and maintain this record. Please contact the JEDEC office. Refer to JEDEC document JEP-106 for more detail.
Byte 148 is the least significant byte while byte 149 is the most significant byte. If the ID is not larger than one byte (in hex), byte 149 should be filled with 00h.
Byte 150-175
Manufacturer's Specific Data
Optional manufacturer-assigned data. The module manufacturer may include any additional information desired within these locations.
Byte 176-255
Open for Customer Use
Optional customer assigned codes. These bytes are unused by the manufacturer and are open for customer use.
 
Final Note:
Everything in the above article, and more, is now implemented in the CST EZ-SPD DDR3 Programmer software. The new features are:
1. Pop up window of explanation on each Byte.
2. Clickable selection right from the illustration window.
3. Auto CRC checksum on byte 126 and byte 127.
4. Text input on "manufacturer code" and "serial number". User define MSB/LSB format.
5. Auto JEDEC week and year coding from PC clock.
6. Software write protect function.
....just to name a few.
For further information on the CST EZ-SPD Programmer, follow these links:
http://www.simmtester.com/page/news/showcstnews.asp?title=CST+delivery+DDR3+Programmer&num=83
http://www.simmtester.com/page/news/showpubnews.asp?num=153
About three years ago DDR2 memory first appeared on the desktop PC scene. It would be impossible to say it burst on the scene since it was introduced with the unimpressive Intel NetBurst processors. In that market DDR2 was more like a trickle since it was mainly a curiosity for a processor that was running a distant second place to the leading AMD Athlon chips, which were still powered by DDR memory.
DDR2 finally became the universal standard last May/June when AMD switched to DDR2 on their new AM2 platform and Intel introduced Core 2 Duo, the new CPU performance leader. Core 2 Duo resided on socket 775, which also was fed by DDR2. While it sometimes seems like centuries ago, it is worth remembering that Intel Core 2 Duo regained the CPU performance crown less than a year ago, and the two years prior to that all the fastest systems used AMD Athlon 64/X2/FX processors.
We compared performance of DDR2 on the new platforms in July of last year. AM2 provided better bandwidth with DDR2, but the better AM2 bandwidth did not translate into better performance. Since Core 2 Duo was faster at the same timings, it appeared the Intel Core 2 Duo architecture was not particularly bandwidth hungry and that it made very good use of the DDR2 bandwidth that was available with the chipset memory controller.
Since last May/June DDR2 has finally turned the market, and it has made some remarkable transformations along the way. The early 5-5-5 timings at the official DDR2-800 speed have since been replaced by several high performance memories capable of 3-3-3 timings at DDR2-800. The best memory at DDR2-1066 can now operate at 4-4-3 timings, and the fastest DDR2 is now around DDR2-1266 and still getting faster.
Perhaps even more remarkable, in the last year DDR2 memory prices have dropped to half of what they once were (sometimes more), and today DDR2 is often cheaper than the DDR memory it replaced. Compared to the very expensive prices at launch and into the holiday buying season we see DDR2 is now the memory price standard in the desktop computer market.
Fast forward a year and Intel is now launching their first chipsets to support DDR3 memory. In one of the sloppiest NDA launches in recent memory, P35 boards have already been on sale since early May. The official chipset introduction is scheduled for May 21st and boards are "officially" launching into the retail channel on June 4th.
We can tell you that Intel does not really have an NDA, but they have been very aggressive in holding first tier manufacturers to a May 21st performance embargo and retail distribution on June 4th. Despite that, people around the world have been able to buy P35 boards from several retailers. We have retail boards we bought on the open market, which makes the 21st NDA a moot point in our opinion. Still, we value our relationship with both Intel and the major board makers, so this will not be a full P35 launch review. You will see that coming on May 21st.
What this review does address is the performance of the new DDR3 memory that launches with P35. The new Intel P35 chipset, known as Bearlake during development, supports either DDR2 or DDR3 memory. This presented a perfect opportunity to look at the performance of both DDR3 and DDR2 on the new P35 chipset. We were also able to compare performance to a Gold Editors' Choice Intel P965 motherboard. These comparisons revealed interesting findings about the capabilities of the new P35 memory controller. They also answered the question of whether you should care about DDR3 in any upcoming system purchase.
Core 2 Duo (Conroe) launched about twelve days ago with a lot of fanfare. With the largest boost in real performance the industry has seen in almost a decade, it is easy to understand the big splash Core 2 Duo has made in a very short time. AnandTech delivered an in-depth analysis of CPU performance in Intel's Core 2 Extreme & Core 2 Duo: The Empire Strikes Back. With so much new and exciting information about Conroe's performance, it is easy to assume that since Core 2 Duo uses DDR2, just like NetBurst, memory performance must therefore be very similar to the DDR2-based Intel NetBurst architecture.
Actually, nothing could be further from the truth. While the chipsets still include 975X and the new P965 and the CPU is still Socket T, the shorter pipelines, 4MB unified cache, intelligent look-ahead, and more work per clock cycle all contribute to Conroe exhibiting very different DDR2 memory behavior. It would be easy to say that Core 2 Duo is more like the AMD AM2, launched May 23rd, which now supports DDR2 memory as well. That would be a stretch, however, since AM2 uses an efficient on-processor memory controller, and the launch review found Core 2 Duo faster at the same clock speed than the current AM2. This is another way of saying Conroe is capable of doing more work per cycle - something we had been saying for several years about Athlon 64 compared to NetBurst.
The move by AMD from Socket 939 to Socket AM2 is pretty straightforward. The new AM2 processors will continue to be built using the same 90nm manufacturing process currently used for Athlon 64 processors until some time in early to mid-2007. AMD will then slowly roll out their 65nm process from the bottom of the line to the top, according to AMD roadmaps. This could include memory controller enhancements and possibly more. Performance of AM2 only changed very slightly with the move to DDR2, generally in the range of 0% to 5%. The only substantive difference with AM2 is the move from DDR memory to official AMD DDR2 memory support.
Our AM2 launch reviews and the article First Look: AM2 DDR2 vs. 939 DDR Performance found that AM2 with DDR2-533 memory performed roughly the same as the older Socket 939 with fast DDR400 memory. Memory faster than DDR2-533, namely DDR2-667 and DDR2-800, brought slightly higher memory performance to AM2.
The Core 2 Duo introduction is quite different. Clock speed moved down and performance moved up. The top Core 2 Duo, the X6800, is almost 1GHz slower than the older top NetBurst chip and performs 35% to 45% faster. With the huge efficiency and performance increases comes different behavior with DDR2 memory.
With the world now united behind DDR2, it is time to take a closer look at how DDR2 behaves on both the new Intel Core 2 Duo and the AMD AM2 platforms. The performance of both new DDR2 platforms will also be compared to NetBurst DDR2 performance, since the DDR2 NetBurst Architecture has been around for a couple of years and is familiar. We specifically want to know the measured latency of each new platform, how they compare in memory bandwidth, and the scaling of both Core 2 Duo and AM2 as we increase memory speed to DDR2-1067 and beyond. With this information and tests of the same memory on each platform, we hope to be able to answer whether memory test results on Conroe, for instance, will tell us how the memory will perform on AM2.
In addition we have an apples-apples comparison of AM2 and Core 2 Duo running at 2.93GHz (11x266) using the same memory at the same timings and voltages with the same GPU, hard drive, and PSU. This allows a direct memory comparison at 2.93GHz at DDR2-1067. It also provides some very revealing performance results for Core 2 Duo and AM2 at the exact same speeds in the same configurations.
What is DDR3?
To provide compatibility and interchangeability for computer memory, the structure and form factor are controlled by a standards organization known as JEDEC. JEDEC specifies voltages, speeds, timings, communication protocols, bank addressing, and many other factors in the design and development of memory DIMMs. Taking a closer look at publications at www.jedec.org can provide insight into what DDR3 brings to the market and where it might go. Comparing DDR2 and DDR3, several interesting points stand out.
Official JEDEC Specifications
                        DDR2                 DDR3
Rated Speed             400-800 Mbps         800-1600 Mbps
Vdd/Vddq                1.8V +/- 0.1V        1.5V +/- 0.075V
Internal Banks          4                    8
Termination             Limited              All DQ signals
Topology                Conventional T       Fly-by
Driver Control          OCD Calibration      Self Calibration with ZQ
Thermal Sensor          No                   Yes (Optional)
Please keep in mind that JEDEC specs are official; they are a starting point for enthusiast memory companies. However, since there was never a JEDEC standard for memory faster than DDR-400, DDR memory running at faster speeds is really overclocked DDR-400. Similarly, DDR2 memory faster than DDR2-800 is actually overclocked DDR2-800, since there is currently no official JEDEC spec for DDR2-1066.
DDR speeds ran to DDR-400, DDR2 has official specs from 400 to 800, and DDR3 will extend this from 800 to 1600 based on the current JEDEC specification. Initial DDR3 offerings will be 1066 and 1333 will quickly follow. The 1333 speed is important because it matches the 1333 bus speed of the new Intel processors. The 1333 processors can run any speed of DDR3 or DDR2 memory, but 800 and 1067 will be overlap speeds with DDR2. 1333 will be the first DDR3 speed to offer enhanced memory speeds to current and future processors.
Since DDR3 is designed to run at higher memory speeds, the signal integrity of the memory module is now more important. DDR3 uses something called "fly-by" topology instead of the "T branches" seen on DDR2 modules. This means the address and control lines form a single path chaining from one DRAM to another, whereas DDR2 uses a T topology that branches out to the DRAMs. "Fly-by" does away with mechanical line balancing and instead uses an automatic signal time delay generated by the controller and fixed during memory system training. Each DDR3 DRAM chip has an automatic leveling circuit for calibration and for storing the calibration data.
DDR3 also uses more internal banks - 8 instead of the 4 used by DDR2 - to further speed up the system. More internal banks allow advance prefetch to reduce access latency. This should become more apparent as the size of the DRAM increases in the future.
DDR3 further reduces the memory voltage. In the past few years we have moved from 2.5V with DDR to 1.8V with DDR2. DDR3 drops the memory voltage to 1.5V, which is a 16% reduction from DDR2. There are also additional built-in power conservation features in DDR3 such as partial refresh. This could be particularly important in mobile applications, where battery power will no longer be wasted refreshing portions of the DRAM that are not in active use. There is also a specification for an optional thermal sensor that could allow mobile engineers to save further power by providing minimum refresh cycles when the system is not in high-performance mode.
There is even more to DDR3, but for most enthusiasts looking at a new desktop system DDR3 can provide higher official speeds, up to 1600MHz. The higher speeds are available at lower voltage, with 1.5V as the official specification. There are many features that will not make much difference in DDR3 performance until we begin to see even faster and higher capacity memory. The question, then, is whether DDR3 memory provides better performance for the computer enthusiast than current DDR2?
DDR3 Memory: Technology Explained
These are uncertain financial times we live in today, and the rise and fall of our economy has had a direct effect on consumer spending. It has already been one full year that DDR3 has been patiently waiting for the enthusiast community to give it proper consideration, yet its success is still undermined by misconceptions and high prices. Benchmark Reviews has been testing DDR3 more actively than anyone, which is why over fifteen different kits fill our System Memory section of reviews. Sadly, it might take an article like this to open the eyes of my fellow hardware enthusiasts and overclockers, because it seems like DDR3 is the technology nobody wants badly enough to learn about. A pity, because DDR3 is the key to extreme overclocking.
A-Data PC3-12800 CL7-7-7-20 AD31600X002GU DDR3 1600MHz 1.75-1.85V 2x1GB RAM Kit
Aeneon PC3-10666 CL8-8-8-15 AXH760UD00-13GA98X DDR3 1333MHz 1.5V 2x1GB RAM Kit
Aeneon PC3-12800 CL9-9-9-28 AXH860UD20-16H DDR3 1600MHz 1.5V 2x2GB RAM Kit
Corsair PC3-14400 CL7-7-7-20 TWIN3X2048-1800C7DF G DDR3 1800MHz 2.0V 2x1GB RAM Kit
Crucial PC3-12800 CL8-8-8-24 BL2KIT12864BA1608 Ballistix DDR3 1600MHz 1.8V 2x1GB RAM Kit
GeIL PC3-8500 CL6-6-6-15 G31GB1066C6PDCA DDR3 1066MHz 1.5V 2x512MB RAM Kit
Kingston PC3-13000 CL7-7-7-20 KHX13000D3LLK2/2G DDR3 1625MHz 1.9V 2x1GB RAM Kit
Mushkin PC3-10666 CL6-7-6-18 HP3-10666 DDR3 1333MHz 1.8V 1GBx2 RAM Kit
OCZ PC3-12800 CL7-7-7-24 OCZ3P16002GK Platinum Series DDR3 1600MHz 1.9V 2x1GB RAM Kit
Patriot PC3-15000 CL8-8-8-24 PDC32G1866LLK DDR3 1866MHz 1.9V 2x1GB RAM Kit
Qimonda PC3-8500 CL7-7-7-20 Aeneon AEH760UD00-10FA98X DDR3 1066MHz 1.5V 2x1GB RAM Kit
SimpleTech PC3-10600 S1024R5NP2QA DDR3 1333MHz 2x1GB RAM Kit
Super Talent PC3-14400 CL7-7-7-20 W1800UX2GP DDR3 1800MHz 2.0V 2x1GB RAM Kit
Winchip PC3-10666 CL8-8-8-15 64A0TRHN8G17E DDR3 1333MHz 1.65V 2x1GB RAM Kit
First and foremost, DDR3 is not just a faster version of DDR2. In fact, the worst piece of misinformation I see spread in enthusiast forums is how DDR3 simply picks up speed where DDR2 left off... which is as accurate as saying an airplane picks up where a kite left off. DDR3 does improve upon the previous generation in certain shared areas, and the refined fabrication process has allowed for a more efficient integrated circuit (IC) module. Although DDR3 doesn't share the same pin connections or key placements, it does still share the DIMM profile and overall appearance. From a technical perspective however, this is where the similarities end.
For over six months now, I have personally devoted a large amount of time to testing this new system memory standard. Sadly, most of my efforts have gone unappreciated; DDR3 was too far ahead of its time to be adopted early on. Yet, even though DDR2 has clearly reached its limit, the cost of production combined with a wide-scale recession will further harm acceptance of the new format. But are you really missing anything? I could give you a simple 'yes', but that's what I've already been saying for many months now. Instead, I invite you to learn about what you're losing...
Features:
Now supports system-level flight time compensation
Mirror-friendly DRAM pinouts are now supported on-DIMM
CAS Write latencies are now specified for each speed bin
Asynchronous reset function is available for the first time in SDRAM
I/O calibration engine monitors flight time and correction levels
Automatic data bus line read and write calibration
Improvements:
Higher bandwidth performance increase, up to 1600 MHz per spec
DIMM-terminated 'fly-by' command bus
Constructed with high-precision load line calibration resistors
Performance increase at low power input
Enhanced low power features conserve energy
Improved thermal design now operates DIMM cooler
DDR3: Efficiency
Efficiency is a double-edged sword when we talk about DDR3, because aside from fabrication process efficiency there are also several architectural design improvements which create a more efficient transfer of data and a reduction in power. All of these items tie in together throughout this article, so for you to understand why DDR3 is going to be worth your money, you should probably also know why it's going to deliver more.
Power Consumption
So let's begin with power: at the JEDEC JESD 79-3B standard of 1.5 V, DDR3 system memory reduces the base power consumption level by nearly 17% compared to the 1.8 V specified base power requirement for DDR2 modules. Taking this one step further, consider that at the high end of DDR2 there are 1066 MHz memory modules demanding 2.2 V to function properly. Then compare this to the faster 1600 MHz DDR3 RAM modules operating at 1.5 V nominal and you'll begin to see where money can be saved on energy costs - conserving nearly 32% of the power previously needed. The reduced base power demand works particularly well with the 90 nm fabrication technology presently used for most DDR3 chips. In modules purpose-built for high efficiency, some manufacturers have reduced current leakage even further by using "dual-gate" transistors.
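The percentages quoted above are simple voltage reductions; a quick sketch of the arithmetic (actual power savings also depend on current draw and workload):

ddr2_v, ddr2_oc_v, ddr3_v = 1.8, 2.2, 1.5
print(round((ddr2_v - ddr3_v) / ddr2_v * 100))        # ~17% vs. the DDR2 1.8 V baseline
print(round((ddr2_oc_v - ddr3_v) / ddr2_oc_v * 100))  # ~32% vs. 2.2 V DDR2-1066 modules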

You might be wondering how big a difference 0.7 V can make on your power bill, and that's a fair concern. In reality, it's not going to be enough to justify the initial costs of the new technology, at least not for the average casual computer user who operates a single workstation at home. But before you dismiss the power savings, consider who might really make an impact here: the commercial and industrial sectors. If the average user can see only a few dollars saved per month on utility costs, imagine the savings to be made in large data center and server farm facilities. Not only will the reduced cost of operation help minimize overhead expenses, but the improved design also reduces heat output. Many commercial facilities spend a double-digit portion of their monthly expenses on utilities, and the cost savings from lower power consumption and reduced cooling expenses will have an enormous effect on overhead. Taking things just one step further, reduced cooling needs will also translate into reduced maintenance costs and a prolonged lifespan for HVAC equipment.
Voltage Notes for Overclockers
According to JEDEC standard JESD 79-3B, approved in April 2008, the maximum recommended voltage for any DDR3 module must be regulated at 1.575 V (see reference documents at the end of this article). Keeping in mind that the vast majority of system memory resides in commercial workstations and mission-critical enterprise servers, if system memory stability is important then this specification should be considered the absolute maximum voltage limit. But there's still good news for overclockers, as JEDEC states that these DDR3 system memory modules must also withstand up to 1.975 V before any permanent damage is caused.
DDR3: Prefetch Buffer
The SDRAM family has seen generations of change. JEDEC originally stepped in to define standards very early on in the production timeline, and subsequently produced DDR, DDR2 and now DDR3 DRAM (dynamic random access memory) implementations. You already know that DDR3 SDRAM is the replacement for DDR2 by virtue of its vast design improvements, but you might not know what all of those improvements actually are.
In addition to the logically progressive changes in the fabrication process, there are also improvements to the architectural design of the memory. In the last section I extolled the benefits of saving power and conserving natural resources by using DDR3 system memory, but most hardware enthusiasts are not aware of how efficiency now also extends into a new data transfer architecture introduced with DDR3.
One particularly important new change introduced with DDR3 is in the improved prefetch buffer: up from DDR2's four bits to an astounding eight bits per cycle. This translates to a full 100% increase in the prefetch payload; not just the small incremental improvement we've seen from past generations. Remember this important piece of information when I discuss CAS latency later on, because it makes all the difference.
DDR3: Speed
Even in its infancy, DDR3 offers double the JEDEC standard maximum speed of DDR2. According to JEDEC standard JESD 79-3B, drafted in April 2008, DDR3 offers a maximum default speed of 1600 MHz compared to 800 MHz for DDR2. But this is all just ink on paper if you aren't able to actually notice an improvement in system performance.
So far, we discussed how DDR3 is going to save the average enthusiast up to 32% of the energy consumed by system memory. We also added a much larger 8-bit prefetch buffer to the list of compelling features. So let's cinch it all together with a real ground-breaking improvement, because DDR3 introduces a brand new system for managing the data bus.
DDR Speed     Memory clock   Cycle time   Bus clock   Module name   Peak transfer rate
DDR3-800      100 MHz        10 ns        400 MHz     PC3-6400      6400 MB/s
DDR3-1066     133 MHz        7.5 ns       533 MHz     PC3-8500      8533 MB/s
DDR3-1333     166 MHz        6 ns         667 MHz     PC3-10600     10667 MB/s
DDR3-1600     200 MHz        5 ns         800 MHz     PC3-12800     12800 MB/s
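A quick sketch of where the "Peak transfer rate" column comes from: the data rate is eight times the memory clock (the 8n prefetch), and a 64-bit module moves 8 bytes per transfer.

def peak_mb_per_s(memory_clock_mhz):
    data_rate = memory_clock_mhz * 8        # e.g. 200 MHz memory clock -> DDR3-1600
    return data_rate * 8                    # 64-bit (8-byte) wide module

print(peak_mb_per_s(200))                   # 12800 -> PC3-12800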
Completely new to DDR3 is the 'fly-by' topology. In past generations of SDRAM, including DDR2, system memory used a branched 'star' topology to disseminate data across many stubbed signal paths. This improvement is similar to when automobiles first began using front-wheel drive to bypass the long drivetrain linkage that incrementally sapped power from the wheels. Essentially, the fly-by data bus topology uses a single direct link through all DRAM components, which allows the system to respond much more quickly than if it had to address stubs.
The reason DDR2 cannot develop beyond the point it already has isn't truly an issue of fabrication refinements; it is more specifically an issue of mechanical limitations. Essentially, DDR2 technology is no better prepared to reach higher speeds than a propeller airplane is to break the sound barrier; in theory it's possible, just not with the mechanical technology presently developed. At higher frequencies the DIMM module becomes very dependent on signal integrity, and topology layout becomes a critical issue. For DDR2 this would mean that each of the 'T branches' in the topology must remain balanced, an effort which is beyond its physical limitations.
With DDR3, however, the signal integrity is individually tuned to each DRAM device rather than balanced across the entire memory platform. Now both the address and control lines travel a single path instead of the inefficient branching T topology of DDR2. Each DDR3 DRAM device also incorporates a managed leveling circuit dedicated to calibration, and it is the function of this circuit to store the calibration data. The fly-by topology removes the mechanical line-balancing limitations of DDR2 and replaces them with an automatic signal time delay generated by the controller and fixed during memory system training.
Although it is a rough analogy, DDR3 is very similar to the advancement of jet propulsion over prop-style aircraft, and an entirely new dimension of possibility is made available. There is a downside however, and this is primarily in the latency timings. In our next section, Benchmark Reviews will discuss how DDR3 can aid in overclocking, and why the higher latency will have little effect on the end result.
DDR3: Overclocker Functionality
So let's pause for a moment to recap what we've covered: DDR3 RAM modules can conserve up to 32% of the energy used on system memory, while at the same time saving money on maintenance costs for enterprise HVAC systems. The data prefetch buffer has doubled from only 4 bits per cycle to a full 8 bits with each pass. Finally, the fly-by topology removes the mechanical limitations of physical line balancing by replacing it with an automatically controlled and calibrated signal time delay. This is not just a speed improvement, as some would have you think.
XMP
So then, when was the last time enthusiasts were actually encouraged to overclock their system memory by the manufacturer? Better yet, when was the last time Intel endorsed the practice? To be fair, Intel processors have been capable of overclocks for quite some time already, but not nearly to the level of convenience introduced in XMP technology.
XMP, or eXtreme Memory Profile, is an automatic memory settings technology developed by Intel and Corsair to compete with Nvidia's SLI Memory and Enhanced Performance Profiles (EPP). It works very similarly to EPP, with one major exception: XMP manages everything from the CPU multiplier to voltages and front side bus frequencies. This makes overclocking one of the easiest things possible, since it only requires an XMP-compatible motherboard, such as Intel's X48 series, and an XMP-enhanced set of system memory modules.

The XMP specification was first officially introduced by Intel on March 23rd, 2007 to enable an enthusiast performance extension to the traditional JEDEC SPD specifications. It is very common for Intel Extreme Memory Profiles to offer two different performance profiles. Profile 1 is used for hardware enthusiast or certified settings and is the profile tested under the Intel Extreme Memory Certification program. Profile 2 is designed to host the extreme or fastest possible settings, which have no guard band and may or may not work on every system. It should also be noted that XMP settings are not always defined as overclocked or over-volted components. In some less common cases, Extreme Memory Profiles can be used to define conservative power-saving settings or reduced (faster) latency timings.
CAS Latency Timing
CAS latency timing is not something new to DDR3, and it is one of the few items that remains unchanged in the new format. You may wonder why I used the term "unchanged", when every enthusiast in every web forum worldwide has jumped on their soapbox and chastised anyone considering DDR3 because of the higher latencies. The simple fact is that you cannot extend base frequencies without also extending the CAS delay, and DDR3 actually requires less latency relative to its clock speed.
As a quick refresher, you might recall that 1066MHz DDR2 began with CL5 and CL6 latencies, and eventually improved to CL4 in rare cases of special IC module binning. So it should be considered a vast improvement in comparison that 1333 MHz DDR3 can achieve CL5, and some 1800 MHz DDR3 modules, such as Corsair's PC3-14400 kit which has received careful parts binning, can operate on CL7 timings.
Putting this argument into greater perspective, drift back to the first days of DDR2. I can still remember the complaints back then, although to a lesser extent, about the increased latency. Back in those days, 400 MHz DDR was often seen with CL2 timings, so keep that in mind when you look at the 800 MHz DDR2 presently available at a 100% latency increase to CL4 timings. In comparison, the CL7 timings of 1600 MHz DDR3 are still ahead of the curve by 25%, or even up to 50% faster latencies with OCZ's CL6 DDR3.
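The comparison is easier to see in absolute terms: first-word latency is the CAS latency multiplied by the clock period. A quick sketch of that arithmetic for the examples above:

def cas_latency_ns(cl, data_rate_mt_s):
    clock_mhz = data_rate_mt_s / 2          # the clock runs at half the data rate
    return cl * 1000 / clock_mhz

print(cas_latency_ns(2, 400))    # 10.0 ns -> DDR-400 CL2
print(cas_latency_ns(4, 800))    # 10.0 ns -> DDR2-800 CL4
print(cas_latency_ns(7, 1600))   # 8.75 ns -> DDR3-1600 CL7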
The bottom line is that enthusiasts need to home in on the truth behind the technology and ignore the self-serving ignorance that often runs rampant in most technology forums. The same person who condescendingly dismisses the idea of using DDR3 is often the same person who doesn't know the reason for the difference in architecture. The reality of the matter is that DDR3 is actually a better memory in terms of latency timings, especially when compared against DDR2. So now imagine how tight the timings will be once the still-young manufacturing process evolves from 90 nm to 70 nm; these latency timings will only get better.
Final Opinion on DDR3 RAM
When I first began this article, it felt to me like this kind of information should be required reading for anyone who considers themselves a hardware enthusiast or overclocker. Even after discussing the topic with some of my colleagues, it was clear that the misconceptions had already entrenched themselves deeply in the mainstream. I can't give up hope, not yet, because if you've made it this far into the article then you've probably picked up a thing or two about the technology.
Retracing my key points, there are a few major features worth mentioning again for those who like skipping to the end (statistically 70% of visitors). To begin with, DDR3 RAM modules can conserve up to 32% of the energy used on system memory, while at the same time saving money on maintenance costs for facilities HVAC systems. Next on the list is the data prefetch buffer, which has doubled from only 4 bits per cycle to a full 8 bits with each pass. Then comes the new fly-by topology, which removes the mechanical limitations of physical line balancing by replacing it with an automatically controlled and calibrated signal time delay. After that come latencies that are lower relative to clock speed than the previous curve, and in some cases offer 50% better timings per MHz. Finally, we have all of the extra perks.
The first few perks are more of a technical advantage than anything else. At the beginning of this article I listed the introduction of an asynchronous reset pin, which gives DDR3 the ability to complete a device reset without interfering with the operation of the entire computer system. Additionally, DDR3 can also perform a partial refresh, so energy isn't wasted refreshing memory that isn't active.
The concept that appears to be gaining momentum is onboard intelligence for the memory modules. For instance, the JEDEC standard allows for an optional on-die thermal sensor that can detect when the memory is nearing a temperature threshold and shorten the refresh intervals if necessary. This fail-safe gives the memory an opportunity to keep temperatures and power consumption under control.
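The standard leaves the implementation details to the memory and platform designers, so the following is only a hypothetical sketch of the idea; the function name and thresholds are mine, not JEDEC's (the 7.8 µs base interval is the common DDR3 refresh figure):

    # Hypothetical sketch of temperature-compensated refresh: the hotter the DRAM,
    # the more often it must be refreshed to hold its data; cooler parts can
    # refresh less often and save power. Thresholds are illustrative only.

    def refresh_interval_us(temp_c, base_interval_us=7.8):
        """Return a refresh interval based on the on-die temperature reading."""
        if temp_c > 85:          # nearing the upper limit: refresh twice as often
            return base_interval_us / 2
        elif temp_c < 45:        # comfortably cool: relax the refresh rate
            return base_interval_us * 2
        return base_interval_us  # normal operating range

    print(refresh_interval_us(90))   # 3.9
    print(refresh_interval_us(60))   # 7.8
    print(refresh_interval_us(30))   # 15.6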
I consider another major perk to be XMP (eXtended Memory Profile), which I have personally seen in action. One simple decision to enable the profile (or a particular profile, if more than one exists), and your system is automatically adjusted for a pre-defined overclock - voltages and all. This is going to be a great feature for anyone who just isn't ready to burn up their investment while trying to discover the mysterious overclocking sweet spot.
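For readers who have never looked at one, an XMP profile is essentially a pre-validated bundle of settings stored on the module and applied by the BIOS in one step. A hypothetical representation, with field names and values of my own choosing purely for illustration:

    # Hypothetical sketch of what an XMP-style profile bundles together: a rated
    # frequency, a set of timings, and the voltage needed to run them.
    from dataclasses import dataclass

    @dataclass
    class MemoryProfile:
        data_rate_mts: int     # e.g. 1600 for DDR3-1600
        timings: tuple         # (CL, tRCD, tRP, tRAS)
        voltage_v: float       # DIMM voltage required for this profile

    # A board that supports the feature reads the profile and applies it in one
    # step, instead of the user hunting for a stable combination by hand.
    profile_1 = MemoryProfile(data_rate_mts=1600, timings=(7, 7, 7, 20), voltage_v=1.9)
    print(profile_1)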

Another perk is the increased front side bus speed, which allows for extremely high overclocks and excellent bandwidth throughput. Some will argue that this comes at the expense of higher latency, but let's be realistic: you can't reach 100 MPH in a car without traveling a long distance first. The analogy holds just as true for system memory as it does for cars: the faster you want your top speed to be, the farther you'll have to travel before you reach it.
There are other benefits to the new standard, but the last of the major differences is capacity. DDR3 allows for chip capacities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes. This should help move the computing world into 64-bit computing with a more compelling force. I hope.
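The jump from chip density to module capacity is simple arithmetic; a quick sketch, assuming a module built from sixteen of the largest permitted devices:

    # Module capacity from DRAM chip density: bits per chip times chips per
    # module, converted to bytes. Sixteen chips is an assumption for illustration.

    def module_capacity_gb(chip_density_gbit, chips_per_module=16):
        return chip_density_gbit * chips_per_module / 8   # 8 bits per byte

    print(module_capacity_gb(1))   # 1 Gb chips -> 2.0 GB module
    print(module_capacity_gb(8))   # 8 Gb chips -> 16.0 GB module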
Disadvantages
With every action comes an equal and opposite reaction. I am constantly reminded of this, because whenever I'm feeling especially good about something there will always be something to bring me right back down.
When you compare DDR3 to previous SDRAM generations, it inherently carries a higher CAS latency. The higher timings may be compensated for by higher bandwidth, which increases overall system performance, but they aren't nullified.
Additionally, this is new technology, and it wears the new-technology price tag. DDR3 generally costs more if you compare the price-per-megahertz ratio, just as was the case when DDR2 replaced DDR years ago. In fact, I still have the receipt for a nearly $400 set of Corsair Dominator 1066 MHz DDR2 from just under two years ago. For that same amount today, I could get a lot more performance for my dollar.
There are also a few technical hurdles which must be overcome in order to take advantage of DDR3. For example, to achieve maximum memory efficiency at the system level, the system's front side bus frequency must also extend to that level. In most cases, it's best to have the front side bus operate at a matching memory frequency. While this obviously won't be a problem as 1600 MHz FSB processors become mainstream, it still places a burden on the processor and motherboard chipset to make accommodations.
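To see why the bus speeds need to stay in step, compare peak transfer rates on each side of the chipset; a rough sketch, assuming a 64-bit (8-byte) path, a single memory channel, and ignoring real-world efficiency:

    # Rough peak-bandwidth comparison between a front side bus and a memory
    # channel, both assumed 64 bits (8 bytes) wide. Single channel for simplicity.

    def peak_bandwidth_gbs(transfers_per_sec_millions, bus_width_bytes=8):
        return transfers_per_sec_millions * bus_width_bytes / 1000.0

    print(peak_bandwidth_gbs(1600))   # 1600 MT/s FSB        -> 12.8 GB/s
    print(peak_bandwidth_gbs(1333))   # DDR3-1333, 1 channel -> ~10.7 GB/s
    print(peak_bandwidth_gbs(1600))   # DDR3-1600, 1 channel -> 12.8 GB/s

A slower bus on either side simply caps what the faster side can deliver, which is why the two are best kept matched.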
But we're not quite out of the woods yet... a higher operating frequency also means more signal integrity issues. Both motherboard and memory module design engineers now have to master new technologies and purchase very expensive test equipment just to verify the specific performance routines. In the end, those lab facility costs will be passed along to you know who. This might explain how $300+ DDR3 motherboards have become such a common sight.
Potential Concerns
System memory has had the opportunity to evolve and improve, but it hasn't been alone. Processor and motherboard technology have also moved forward, at what might be considered an even faster rate of development. Just as the speed of system memory has increased, the amount of onboard processor cache memory has also increased. As I write this article, I have a set of DDR3 memory modules running at 2000 MHz, and an Intel E8200 processor with 6 MB of cache. It seems that at some point in the upcoming wave of product evolution, my computer may not see the need to call on system memory unless I'm running a graphics-intensive application. If the trend continues, as it likely will, we might not see any benefit from the ever-increasing operating frequency of system memory, because the processor will have a large amount of cache operating at a far faster speed.
Another concern is scalability and expansion. While I admire the brilliance of JEDEC in bringing a more efficient module into mainstream use, I sometimes wonder how they arrive at other decisions. One key issue that may become a problem down the road is the specification calling for a maximum of two dual-rank modules per channel at 800-1333 MHz frequencies. It gets worse: only one memory slot per channel is allowed at the present specification's top operating frequency of 1600 MHz.
All in all, DDR3 isn't perfect. It's unquestionably better than its predecessor, but I think my points have illustrated that the good also comes with the bad. For the past year our concentration here at Benchmark Reviews has constantly centered on DDR3, as if it were a new toy to play with. But it's not; DDR3 is here to stay, and whether you want it to or not, the market will soon be treating DDR2 the same way it presently treats DDR. You can cling to your old technology, but at this point that would be like reverting to AGP discrete graphics... which also costs a lot less than PCI Express. But that's for another article.
DDR/NetBurst Memory Bandwidth and Latency
One of the most talked-about AMD advantages of the last couple of years has been the on-processor memory controller. This has allowed, according to popular theory, the Athlon64 to significantly outperform Intel NetBurst processors. The fact is that NetBurst DDR2 bandwidth has recently been similar to or wider than Athlon64 bandwidth - even when the DDR is overclocked. You can see this clearly when we compare the Buffered and Unbuffered bandwidth of a NetBurst 3.46EE to an AMD 4800+ X2 (2.4 GHz, 2x1MB cache) running DDR400 at 2-2-2 and running overclocked memory at DDR533 3-3-3.
The green bars represent DDR memory performance, while the beige-to-red bars represent increasing DDR2 speeds on NetBurst. Light green represents DDR400 2-2-2, while dark green is overclocked memory at the same CPU speed, DDR533 at 3-3-3.

In buffered performance, fast DDR400 is faster only than DDR2-400, and slower than DDR2-533, 667, and 800. Overclocked memory at DDR533 3-3-3 is faster than any of the DDR2 bandwidths on NetBurst.
The Sandra Unbuffered Memory Test, which turns off features that tend to artificially boost performance, is generally a better measure of how memory will behave comparatively in gaming. The same green color coding for DDR applies here.

Without buffering, DDR400 has the smallest bandwidth of the tested memory speeds and timings. Even overclocking to DDR533 allows the DDR to barely beat DDR2-400. DDR2-533, 667, and 800 all have greater Unbuffered bandwidth than the DDR overclocked to 533. NetBurst DDR2 memory bandwidth is generally wider than the bandwidth supplied by DDR memory on Athlon64. Despite the wider bandwidth, the deep pipelines and other inefficiencies of the NetBurst design did not allow NetBurst processors to outperform the Athlon64. Keep this in mind later, when we look at AM2 and Core 2 Duo memory bandwidth.
Latency
The other area where AMD has had an advantage over NetBurst DDR2 performance is memory latency, a result of the on-processor memory controller. A comparison of the AMD on-processor DDR memory controller and the Intel DDR2 memory controller in the Intel chipset shows AMD DDR latency about 35% lower than Intel NetBurst in ScienceMark 2.0.

While memory bandwidth was very similar between AMD and NetBurst, the deep pipes of the NetBurst design still behaved as if they were bandwidth-starved. The AMD architecture, on the other hand, made use of the available bandwidth and its much lower latency to outperform NetBurst across the board.
AM2/Core 2 Duo Latency and Memory Bandwidth
The introduction of AM2 merely increased the AMD latency advantage. AM2 latency was slightly lower than DDR latency on AMD.

However, Core 2 Duo did what most believed was impossible in latency. One of AMD's advantages is the on-processor memory controller, which Intel has avoided. It should not be possible to keep the memory controller in the chipset on the motherboard and still achieve lower latency. Intel developed read-ahead technologies that don't really break this rule, but to the system, in some situations, the Intel Core 2 Duo appears to have lower latency than AM2, and the memory controller functions as if it were lower latency.
Memory Bandwidth
The other part of the memory performance equation is memory bandwidth, and here, based on Conroe's performance lead, you may be surprised to see the changes Core 2 Duo has brought. Results are the average of the ALU and FPU results in the Sandra 2007 Standard (Buffered) memory performance test. We used the same memory on all three systems, and the fastest memory timings possible were used at each memory speed.

The results are not a mistake. In standard memory bandwidth, Core 2 Duo has lower memory bandwidth than either AM2 or Intel NetBurst. It is almost as if the tables have turned. AMD had lower bandwidth with DDR than Intel NetBurst, and the Athlon64 outperformed Intel NetBurst. Now Conroe has the poorest memory bandwidth of the three processors, yet Conroe holds a very large performance lead. It appears Conroe, with shallower pipes and an optimized read-ahead memory controller to lower apparent latency, makes the best use of the memory bandwidth available.
Perhaps the most interesting statistic is that the huge increases in memory bandwidth brought by AM2 make almost no difference in AM2 performance compared to the earlier DDR-based Athlon64. With this perspective, let's take a closer look at DDR2 memory performance on AM2 and Core 2 Duo. This will include as close to an apples-to-apples comparison of Core 2 Duo and AM2 as we can create.
Memory Test Configuration
The comparison of AM2 and Core 2 Duo Memory Performance used the exact same components wherever possible. Memory, Hard Drive, Video Card, HSF, and Video Drivers were the same on both test platforms.

The motherboards used for benchmarking differed by necessity, but they are both top-line boards from Asus - the P5W-DH Deluxe for Core 2 Duo and the M2N32-SLI Deluxe for AM2. The latest motherboard drivers from Intel (P5W-DH) and nVidia (M2N32-SLI) were used for testing. The hard drive configurations for each test platform only differed in the drivers required for the test motherboard.
Our Corsair CM2x1024-6400C3 modules were set to the following memory timings on each platform: DDR2-400 at 3-2-2-5, DDR2-533 at 3-2-2-6, DDR2-667 at 3-2-3-7, DDR2-800 at 3-3-3-9, DDR2-1067 at 4-3-4-11, and DDR2-1112 at 5-4-5-14.
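For reference, the four numbers in each set are assumed here to follow the usual CL-tRCD-tRP-tRAS order; converting the CAS figure into nanoseconds also previews the latency scaling discussed in the next section (a rough sketch):

    # Assumed order of the timing sets above: CL-tRCD-tRP-tRAS.
    # The CAS delay in nanoseconds is CL multiplied by one bus clock period.

    timings = {        # data rate (MT/s): (CL, tRCD, tRP, tRAS)
        400:  (3, 2, 2, 5),
        533:  (3, 2, 2, 6),
        667:  (3, 2, 3, 7),
        800:  (3, 3, 3, 9),
        1067: (4, 3, 4, 11),
        1112: (5, 4, 5, 14),
    }

    for rate, (cl, *_rest) in sorted(timings.items()):
        cycle_ns = 2000.0 / rate          # DDR: clock period = 2 / data rate
        print(f"DDR2-{rate}: CL{cl} = {cl * cycle_ns:.1f} ns")

    # DDR2-400: CL3 = 15.0 ns ... DDR2-800: CL3 = 7.5 ns ... DDR2-1067: CL4 = 7.5 ns

The same CL costs fewer nanoseconds as the clock speeds up, which is why latency generally improves with memory speed in the charts that follow.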
A Closer Look at Latency and Scaling
As was explained in the Core 2 Duo launch review, Core 2 Duo does not physically add a memory controller to the processor. The memory controller is still part of the motherboard chipset that drives Core 2 Duo. Intel added features that perform intelligent look-aheads so the memory controller behaves as if it had lower latency. As you saw on pages 2 and 3, ScienceMark 2.0 shows the "intelligent look-aheads" in Core 2 Duo to be extremely effective, with Core 2 Duo memory now exhibiting lower apparent latency than AM2. However, not all latency benchmarks show the same results. Everest from Lavalys shows latency improvements in the new CPU revisions, but it shows latency more as we would expect when evaluating Conroe. For that reason, our detailed latency benchmarks use both Everest 1.51.195, which fully supports the Core 2 Duo processor, and ScienceMark 2.0.
Latency, or how quickly memory is accessed, is not a static measurement. It varies with memory speed and generally improves (goes down) as memory speed increases. To better understand what is happening with memory accesses, we first looked at ScienceMark 2.0 latency on both AM2 and Conroe.

ScienceMark shows Conroe leading at DDR2-400, with roughly 45 ns latency versus 61 ns for AM2. Latency continues to decrease as memory speed increases with Core 2 Duo, reaching a value of about 30 ns at DDR2-1067. The trend line for AM2 is steeper than Core 2 Duo's, improving at a rapid rate until latency is virtually the same at DDR2-800.
It is very interesting that ScienceMark shows lower latency on Core 2 Duo than AM2, since we all know the on-chip AM2 controller has to be faster. We thought perhaps it was because all of the tested memory accesses could be contained in the shared 4MB cache of Core 2 Duo, but Alex Goodrich, one of the authors of ScienceMark, states that Version 2 is designed to test up to 16MB of memory, foreseeing the day of larger caches. In addition, he states the Core 2 Duo prefetcher is clever enough to pick up all the patterns ScienceMark uses to "fool" hardware prefetchers. ScienceMark plans a revision with an algorithm that is harder to fool, but Alex commented that Conroe fooling their benchmark was "in itself a great indicator of performance".

Everest uses a different algorithm for measuring latency, and it shows the on-chip AM2 DDR2 controller in the lead at all memory speeds, with the two platforms almost the same across the DDR2-400 to DDR2-533 memory speed range. However, the Everest trend lines are similar to those in ScienceMark, in that AM2 latency improves at a steeper rate than Core 2 Duo as memory speed increases.
The point of the latency discussion is that, as expected, AMD has much more opportunity for performance improvement with memory speed increases on AM2. If the trend lines were extended, Intel would eventually reach the point where it would have to move to an on-chip memory controller to further improve latency. This is not to take anything away from Intel's intelligent design on Core 2 Duo. They have found a solution that fixes a performance issue without requiring an on-chip controller - for now.
Memory Bandwidth and Scaling
Everyone should already know that memory bandwidth improves with increases in memory speed and reductions in memory timings. To better understand the behavior of AM2 and Core 2 Duo memory bandwidth we used SiSoft Sandra 2007 Professional to provide a closer look at memory bandwidth scaling.
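As a baseline for the charts that follow, the theoretical ceiling is easy to compute; a minimal sketch, assuming a 64-bit channel and ignoring command overhead:

    # Theoretical peak bandwidth for a DDR2 memory configuration:
    # effective transfer rate x 8 bytes per transfer x number of channels.

    def peak_mb_s(data_rate_mts, channels=2):
        return data_rate_mts * 8 * channels

    for rate in (400, 533, 667, 800, 1067):
        print(f"DDR2-{rate}: {peak_mb_s(rate)} MB/s peak (dual channel)")

    # DDR2-800 dual channel -> 12800 MB/s; measured Sandra scores land well below this.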

The most widely reported Sandra score is the Standard, or Buffered, memory score. This benchmark takes into account buffering schemes that use MMX, SSE, SSE2, SSE3, and other extensions to improve memory performance. As you can clearly see in the Buffered result, the AM2 on-chip memory controller holds a huge lead in bandwidth over Core 2 Duo. At DDR2-800 the AM2 lead in memory bandwidth is over 40%.
As we have been saying for years, however, the Buffered benchmark does not correlate well with real performance in games on the same computer. For that reason, our memory bandwidth tests have always included an UNBuffered Sandra memory score. The UNBuffered result turns off the buffering schemes, and we have found the results correlate well with real-world performance.

The Intel Core 2 Duo and AMD AM2 behave quite differently in UNBuffered tests. In these results AM2 and Core 2 Duo are very close in memory bandwidth - much closer than in Standard tests. Core 2 Duo shows wider bandwidth below DDR2-800, but this will likely change when the AM2 controller matures and supports memory timing values below 3, as Core 2 Duo currently does.
The Sandra memory score is really made up of both read and write operations. By taking a closer look at the Read and Write components we can get a clearer picture of how the two memory controllers operate. Everest from Lavalys provides benchmarking tools that can individually measure Read and Write operations.

The READ results are particularly interesting, since you can see that the READ component of Core 2 Duo performance is much larger than its WRITE component. This is the result of the intelligent read-aheads Intel has used to lower the apparent latency of memory on the Core 2 Duo platform. Actual READ performance on Core 2 Duo now looks almost the same as AM2 up to DDR2-533. AM2 starts pulling away in READ at DDR2-667 and has a slightly steeper slope as memory speed increases. Without the read-ahead feature, Core 2 Duo would appear much slower in READ operations than AM2.

This is most clearly illustrated by looking at the Everest WRITE scores. Memory read-ahead does not help when you are writing to memory, so Core 2 Duo exhibits much lower WRITE performance than AM2, as we would expect. This means that if all else were equal (and it isn't), AM2 would perform much better in memory write tasks. Surprisingly, the WRITE component of Core 2 Duo appears as a nearly straight line just below 5000 MB/s. AM2 starts at about 5900 MB/s at DDR2-400, and WRITE rises to around 8000 MB/s at DDR2-667. WRITE then appears to level off, with higher memory speeds having little to no impact on AM2 WRITE performance.
Stock Performance Comparison
With a clearer understanding of how memory behaves on the AM2 and Core 2 Duo platforms, our benchmarks compared the performance of the fastest Core 2 Duo and AM2 processors available. The Core 2 Duo X6800 at 2.93 GHz and the FX-62 at 2.8 GHz are both dual-core processors.




It really doesn't matter which DDR2 speed you examine in this direct comparison: Core 2 Duo is faster in every benchmark at every speed evaluated. It is true, however, that different processor speeds and top memory speeds are being compared, which is a necessity at stock settings. For that reason, the next series of comparisons tried to configure both test platforms as closely to each other as possible.