Can you explain how memory works?

Hey there, I was wondering if you could please explain to me how memory sticks work. When I view them on OCUK and read the specs, I don't know what I'm looking for, and I can't tell what is good and what isn't.

So mainly in this post I was hoping you could help me learn to read the specs.

Tell me what to look for etc.


Thanks a lot,


SkScotchegg:D
 
lol, am I not allowed to ask? hehe, I thought this was a forum where you could seek help, and I asked politely. I'm asking because when I'm browsing the OCUK shop I don't know how to choose parts, and I'm wanting to get a new PC in January, but between now and then I want to pick out all the parts, talk about them, and work out what's best and how things work etc.

I've built a PC before but not for a while, so I just need help. My questions aren't there to annoy you guys. I am being serious: I posted in the motherboard section and I posted here. I want to build my own system, so I need to know what to look for with motherboards and what to look for with memory.

Like when I think of a motherboard, I seriously don't get what makes one good. Does a motherboard make a CPU faster or memory faster? And when it comes to memory, is it the timings I'm looking at, or is there more than that? Like, do I just look at 4-4-3-2, those sorts of things? I don't even get what that actually means... I thought, seeing as how you guys are always talking about memory and motherboards etc. on this forum, you could answer my newbie questions easily?

So any help given is much appreciated......


SkScotchegg:D
 
You'd have been better off asking the questions in one thread, in say the general hardware forum, than spreading them out over the different categories...

When it comes to memory, all that really matters (unless you're into overclocking, etc.) is whether it will match your CPU bus speed 1:1, so for most standard single or dual core machines just grab some generic PC2-4200, or if a quad core some PC2-5300, and have done with it...

You won't notice the difference in the timings unless you're an enthusiast constantly tweaking and benchmarking.
 
Memory Basics

In this article, we will examine the physical components of a memory module and the basics of how system memory works.


Form Factors: From Chips to Modules

Memory used to be available in the form of discrete memory chips that were inserted into sockets on the system board. That was about 20 years ago. With the migration towards higher system memory densities, this practice was abandoned in favor of modular memory components, starting with the single inline memory module (SIMM), which evolved into the so-called dual inline memory module (DIMM) used in all systems today that utilize SDRAM. The name DIMM originates from the fact that with the introduction of the Pentium processor, the processor bus width increased from 32 bits to 64 bits, which required two of the original SIMMs. Therefore, a module that combined two SIMMs into a single format came to be called a DIMM.

The first DIMMs were built according to a variety of different specifications, meaning that there were modules with two clock inputs or four clock inputs, as well as a slew of other variations from one module to the next. As a result, compatibility problems were very common in those days. To end this situation, Intel introduced the PC-100 specifications, among which was the introduction of an electronic data sheet stored in the form of an Electrically Erasable Programmable Read Only Memory (EEPROM) chip: the Serial Presence Detect or SPD. The name originates from the fact that the SPD uses a serial interface to the bus, which allows the BIOS to detect the module and apply the proper timings according to the data stored in it.


Each DIMM is composed of three primary components: the PCB (Printed Circuit Board), the SPD (Serial Presence Detect), and the ICs (Integrated Circuits, i.e. the memory chips).


Front view of a DIMM (simplified schematic drawing). The main ingredients are the green PCB, the memory chips (also called "components" or "discretes"), the EEPROM containing the SPD, and the edge contacts called pins. A standard DDR DIMM has 8 chips per physical bank; each chip has a data width of 8 bits, for a combined width of 64 bits for the module. Usually a suffix on the components designates the speed rating of the chips; in this case, a -4 would indicate a clock cycle time (tCK) of 4 ns, which is equivalent to a 250 MHz clock frequency or DDR500. Note the asymmetric position of the key in the "pinout" to ensure that the module can be inserted in only one orientation.
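As a quick back-of-the-envelope aid, here is a minimal Python sketch of the arithmetic in that caption, converting a chip's cycle time into its clock frequency and DDR data rate (the function name is ours, purely illustrative):

```python
# Illustrative only: derive clock frequency and DDR rating from tCK.
def ddr_rating_from_tck(tck_ns: float) -> tuple[float, float]:
    clock_mhz = 1000.0 / tck_ns   # a 4 ns cycle time means 250 MHz
    data_rate = 2 * clock_mhz     # DDR moves two bits per pin per clock
    return clock_mhz, data_rate

clock, rate = ddr_rating_from_tck(4.0)
print(f"tCK = 4 ns -> {clock:.0f} MHz clock, DDR{rate:.0f}")  # 250 MHz, DDR500
```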


The PCB

The PCB functions like any circuit board. It is composed of multiple sheets of fiber resin, with the metal layers making up the traces sandwiched in between. Its function is to provide a mechanical scaffold for the components, as well as power and data connectivity to them. As a rule of thumb, all PCBs are built using multiple layers of metal traces separated by the individual sheets of resin. Each layer or plane usually has its own dedicated set of functions; for example, there are input/output planes along with power and ground planes. Most of the time, the ground planes run closer to the surface to shield the data lines from electromagnetic interference (EMI) originating from other computer components.

The memory chips

The memory chips are semiconductor ICs that consist of a data storage part, the so-called memory array, and the logic for addressing and input/output. The silicon is packaged in either TSOP (Thin Small Outline Package) or BGA (Ball Grid Array) format; the difference is that a TSOP has little legs sticking out on the sides, whereas a BGA has little solder balls on the bottom surface of the chip that are no longer visible after the chips are mounted on the PCB.


A memory chip consists of the actual silicon die, which contains the array, the interfacing logic and the bonding pads around the periphery. Typically, the bonding pads are on the order of 70 x 70 µm, and so-called bond wires are attached to them to connect the die to the leadframe, which is where the pins are anchored. Bond wires are usually 30 µm in diameter, which means that they can barely be seen with the naked eye. To protect the entire assembly, it is packaged in non-conductive plastic (drawn here in transparent blue). Current DDR chips have 64 legs or pins, as opposed to the simplified drawing shown above.

All currently used system memory is Synchronous Dynamic Random Access Memory (SDRAM), a form of volatile memory. The term volatile means that the memory needs power in order to retain the data; if power is lost or the system is turned off, all data within this memory are lost. The term Random Access describes the fact that data can be written to any location within the memory, rather than having to start at the lowest address and sequentially fill up the array. The advantage is that coherent data can be written to any area within the memory space that has enough room to contain the entire set of instructions or data, rather than a few bytes here and a few others there in case an update is done and the originally allocated space no longer suffices.

The SPD

The SPD is a small EEPROM chip (Electrically Erasable Programmable Read Only Memory) that contains a data sheet listing the memory timings, memory size and memory speed, which is read by the computer's chipset. Most retail motherboard manufacturers allow settings such as memory timings and voltage to be set manually in the computer's CMOS Setup Utility. When manual timings are not used, the SPD information is used by the chipset. Most original equipment manufacturers (OEMs) such as Dell, Gateway and others hide the manual settings from the user and, often, do not even read the SPD instructions; rather, they default to safe settings for maximum compatibility.
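Conceptually, the fallback described above amounts to something like the following sketch. The field names and values here are purely hypothetical placeholders, not the real SPD byte layout:

```python
# Hypothetical illustration of BIOS timing selection, not real firmware.
spd = {"size_mb": 512, "speed": "PC3200", "CL": 3, "tRCD": 3, "tRP": 3, "tRAS": 8}
manual = None  # e.g. {"CL": 2.5, "tRCD": 3, "tRP": 3, "tRAS": 8} if set in the BIOS

timings = manual if manual is not None else spd  # fall back to the SPD data sheet
print("Using", "manual" if manual else "SPD", "timings:", timings)
```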

Memory Basics

Memory by itself is just another component. Inside a computer system, however, it is integral to the functionality of the whole; in fact, every single transaction at the system level has to use the system memory as an intermediate station. There are subtle differences between different platforms; however, regardless of whether it is an Intel or an AMD system, an e-Machine or a high-end IBM or Itanium server, the basic principles are always the same.

CPU - Memory - Cache

The heart of any PC is the central processing unit or CPU, also referred to as the processor in common parlance. The CPU needs data in order to do work. In other words, data are loaded first from the hard disc drive into memory, and from there they are retrieved by the CPU. Since the system memory is outside the CPU, a certain amount of time is required to access the data, which is why all modern CPUs use a small amount of ultra-fast memory, the so-called cache. All data that the CPU anticipates it will need again are written to this cache. In order to optimize data flow, the cache itself is hierarchically organized into a first level (L1) cache, which is extremely small but operates at very low latencies, and a second level (L2) cache, which is usually much larger but also needs a bit longer to make the data available. In some cases, a third level (L3) cache is present as well, but this is the exception rather than the rule (P4EE, HP PA8800).


General schematic of the memory subsystem and how it is implemented in any modern computer. The CPU sends data requests to the memory controller, which in turn generates the time-muxed (see below) memory addresses to retrieve data from the system memory. The data are analyzed at the level of the chipset (memory controller) and the CPU itself, and data that are determined to be valuable for future use are stored in the on-die, integrated high-speed SRAM memory called the CPU cache. The cache runs at CPU clock speed, whereas the system memory runs at bus speed, typically 10-20x slower than a cache. Caches are hierarchically organized into Level 1, Level 2 and higher, with the lower levels being smaller but faster and the higher levels larger but slower. Keep in mind that the CPU cannot access any system component directly; whether it is the HDD or the sound, everything has to be written to the memory first.

Non-Multiplexed SRAM Addressing of Caches

One fundamental difference between all caches and the main memory is the method through which the addresses within the array are generated. Caches generally use an SRAM interface, which means that each address can be specified with a single operation. One example would be an Excel spreadsheet where the cell "F34" is needed. "F34" is a composite address that consists of a row #34 and a column #F. When the data are needed, the address F34 is simply sent as a single instruction and the data are retrieved from the corresponding cell.

In the case of system memory, the situation is quite a bit different, because a so-called multiplexed address protocol is used. That means that, in the spreadsheet example, first the row #34 would need to be opened and only then could the column #F be specified. Suffice it to say that this is substantially slower than the non-multiplexed SRAM addressing scheme. Moreover, the SRAM cache runs at the same speed as the CPU, whereas the system memory runs at only a fraction of that, typically 1/10 to 1/20 of the cache. Therefore, it is clear that whatever data are needed over and over again will have to fit into one of the cache levels for best system performance.
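To make the contrast concrete, here is a minimal, purely illustrative sketch of the two addressing styles; the bit widths and function names are our own assumptions, not taken from any real controller:

```python
# Illustrative only: SRAM-style vs. time-muxed DRAM-style addressing.
ROW_BITS, COL_BITS = 12, 10   # assumed address widths

def sram_access(addr):
    # Cache/SRAM: the complete address is presented in one operation.
    return ("read", addr)

def dram_access(addr):
    # DRAM: the same address pins carry the row first, then the column.
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)  # with Row Activate
    col = addr & ((1 << COL_BITS) - 1)                # later, with CAS/Read
    return [("activate_row", row), ("read_column", col)]

print(sram_access(0x2F34))  # one step
print(dram_access(0x2F34))  # two steps over shared address lines
```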

However, caches are very low density and very expensive to manufacture. Therefore, only a very small amount of data can be held within the different cache levels; everything else has to go into the system memory. It should be clear, therefore, that the speed of the system memory will also have a major impact on overall system performance, and likewise that the access times will be critical.

Accessing System Memory

Accessing system memory involves a rather complicated sequence of events. First, the CPU requests data from where it thinks those data are, that is, from a logical or virtual address space that is created for every application and program running. This virtual address space needs to be translated into the real or physical address space, and this is done mostly by the memory controller, an integral part of the chipset. After the correct address has been determined using the translation cues stored in the CPU's translation lookaside buffers (TLBs), the signals for the addresses have to be generated. The first selection narrows the location of the data down to one side of a memory module by means of the chip-select signal. Afterwards, since we are talking about DRAM, the first signal sent from the memory controller to the memory is the row address, by means of a Row Activate command.

Time-Muxed Row and Column Address Generation and the Three Key Latencies

tRCD and CAS Delay

Instead of using a handshake protocol to acknowledge that the row is ready, synchronous DRAM (SDRAM) specifies a time after which it is safe to assume that the row is open, the so-called Row-to-Column Delay (tRCD). That means that once tRCD has been satisfied, the row decoders are turned off and the column decoders are turned on by signaling a logical true on the Column Address Strobe (CAS) command line. This allows the same address lines that were used to specify the row address to now specify the column address by issuing a Read command. This sequence of events, and the use of the same channels to perform two different tasks, is called time-multiplexing or "time-muxed DRAM addressing". After finding the correct column address and retrieving (prefetching) the data from the memory cells into the output buffers, the data are ready to be released to the bus. This time interval is called the CAS delay or CAS latency.

tRP

As long as the requested data are found within the same row (or page) of memory, the consecutive accesses will be "in page", so-called "page hits". Any request for data stored outside the currently open row will miss that page and is therefore called a page miss. In that case, the open page has to be closed and the new page has to be opened. The sequence of events includes disconnecting the wordlines, writing back all data from the sense amplifiers to the memory cells and, finally, shorting the bitlines and bitlines "bar" to put everything back into a virgin state. This process is generally referred to as RAS precharge, and the time required to execute all the steps involved is called the precharge latency or tRP.

In order to retrieve the next set of data, the appropriate memory row has to be opened with a Bank Activate command, and the cycle completes.

Latency Listings

There is no general consensus on how to list the latency parameters; some vendors start with the precharge, others use tRCD as the first. However, the JEDEC Solid State Technology Association, formerly known as the Joint Electron Device Engineering Council (JEDEC), has set forth certain guidelines pertaining to the nomenclature used and the code used on the modules to specify the parameters. According to these specifications, the sequence used is CAS Latency - tRCD - tRP - tRAS, where tRAS is the minimum bank cycle time, that is, the time a row needs to be kept open before another request can force it to be closed. Therefore, a module specified as 2-3-2-7 will use a CAS latency of 2, a tRCD of 3, a precharge delay of 2 and a tRAS of 7.
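Putting the three key latencies together, a simplified sketch of how many clock cycles pass before data appear for the 2-3-2-7 module above might look like this (it deliberately ignores command overhead, burst behavior and tRAS interactions):

```python
# Simplified model: cycles before data appear, per access scenario.
CL, tRCD, tRP = 2, 3, 2        # from the 2-3-2-7 example above

page_hit   = CL                # right row already open: CAS latency only
page_empty = tRCD + CL         # bank idle: Row Activate, then Read
page_miss  = tRP + tRCD + CL   # wrong row open: precharge, activate, read

print(page_hit, page_empty, page_miss)   # 2 5 7
```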

In general, lower latencies will yield better performance, but there are a number of exceptions. Most memory devices will only support latency settings of 2 and higher; however, there have been memory chips capable of running at 1-1-1-2, notable examples being the EMS HSDRAM / ESDRAM series. One important distinction between the CAS delay and the other latencies is that the CL options have to be supported in hardware in the form of pipeline stages, whereas the other latencies are simply "time-to-complete" values.


Programmable CAS latency means that there are a number of switches like the one shown open in this drawing. The data are released from the memory cell via a pair of bitlines to the sense amplifier (SA); from there, they either go into a pipeline stage (PS) or bypass it if the switch is closed. In that case, the CAS latency will be lower and the data will reach the I/O buffers earlier. However, this may incur errors at higher frequencies, and for that reason additional buffer or pipeline stages are inserted into the output path that capture the data and propagate them on the following clock edge.

Refresh

Memory data are stored in the form of electrical charges within the memory cells, which are extremely small capacitors. The charges are kept from leaking out by tiny switches, the so-called pass-gate transistors. There will still be some leakage of charge over time, and that is why all memory cells have to be refreshed periodically. The easiest way to accomplish this is by reading the data out to the sense amplifiers and then writing them back internally, a process that is called CAS before RAS or CBR. If this refresh does not happen, the data will simply fade and eventually be lost. In order to maintain the data, SDRAM therefore needs to execute periodic refreshes, which can be triggered even in low-power standby mode by means of an integrated refresh counter on the memory chip itself. This feature is the reason why, e.g., Suspend To RAM (STR, a power-down mode where CPU and chipset go into a complete power-off state) works without a need to supply power to the memory controller or the CPU.
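The text does not give numbers, but for a sense of scale, here is the arithmetic with typical JEDEC-style figures (a ~64 ms retention window and 8192 rows, both assumptions on our part, not taken from the article):

```python
# Assumed, typical figures; the article itself quotes no refresh numbers.
retention_ms = 64      # the whole array must be refreshed within ~64 ms
rows = 8192            # refreshed one row at a time

interval_us = retention_ms * 1000 / rows
print(f"one row refresh every ~{interval_us:.1f} µs")   # ~7.8 µs
```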

Direct Memory Access

Aside from the CPU accessing data from the system memory, any so-called busmastering device can set up its own direct memory access (DMA) channel to load and store data directly from and to the system memory. In most cases, this will involve a direct connection between the device (e.g. the hard disc drive), the South Bridge and the memory controller integrated into the North Bridge.

Once the data are resident in memory, the CPU can access them as well. Keep in mind, though, that if the CPU is the heart of the system, the memory is the soul, and no data can bypass it.


Different data paths include the DMA channels; in this case, an HDD DMA channel to the memory is shown (red arrows). Essentially, this is how different system components interact with each other, and the one link that holds everything together is the system memory.

This concludes our discussion of memory basics and how system memory functions. In the next section we will cover the most common types of memory currently used in mainstream computers and give an outlook on the future of memory technology.

No plagiarism has taken place regarding this article

Memory Basics. Accessed 30th September 2007. From: http://www.ocztechnology.com/displaypage.php?name=memory_basics&psupp=1
 
Memory Types

System Memory Types

FP and EDO Memory

For the purposes of this white paper, we are going to skip the asynchronous forms of system memory that became obsolete around 1998; suffice it to say that at the time, FastPage (FP) memory was used, later replaced by Extended Data Output (EDO) memory. Those memories did not run in an asynchronous fashion in the true sense of the word either; rather, they ran at ½ bus speed, meaning that data transactions were possible only on every other cycle. FP and EDO memory were originally available in the form of SIMMs but later partially transitioned to the DIMM form factor.

SDRAM

SDRAM, as mentioned earlier, stands for Synchronous Dynamic Random Access Memory. In principle, this means that one bit is transferred on every clock cycle. Officially approved forms of SDRAM were PC66, PC100 and PC133, even though proprietary speed grades were rated as PC150 and PC166 by different vendors. The peak transfer rate of SDRAM can be calculated easily by multiplying the numerical moniker by a factor of 8, using MB/sec as the unit. For example, PC100 has a peak rate of 800 MB/sec, and PC133 increases that to 1066 MB/sec.
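The factor of 8 simply reflects the 64-bit (8-byte) module width at one transfer per clock; a tiny sketch of the arithmetic:

```python
# One 8-byte transfer per clock on a 64-bit SDRAM module.
clocks = {"PC66": 66.6, "PC100": 100.0, "PC133": 133.3}  # nominal MHz
for name, mhz in clocks.items():
    print(f"{name}: {mhz * 8:.0f} MB/sec")
# PC66: 533 MB/sec, PC100: 800 MB/sec, PC133: 1066 MB/sec
```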

DDR

Starting in 2001, the desktop and workstation platforms migrated to Double Data Rate SDRAM, usually called DDR SDRAM or DDR for short. DDR doubles the peak data rate in that two bits are transferred on every clock, one on the rising edge and one on the falling edge. In order to do this, on every clock cycle two bits have to be accessed from the memory array for each I/O pin, a feature that is called a "prefetch of 2" (data from the array into the I/O buffers). The data bandwidth of DDR memory can be calculated by taking the clock speed and multiplying it by a factor of 16, using MB/sec as the unit. Therefore, a DDR DIMM running at a 100 MHz clock rate will have a peak bandwidth of 1600 MB/sec. To set the DDR nomenclature apart from the SDRAM terminology, DDR is by convention labelled with either the peak bandwidth or the data rate. In other words, a DDR component or module running at a 100 MHz clock rate will be called either DDR200 (data rate) or PC1600.

Current DDR SDRAM operates at between 100 MHz and 200 MHz clock rate, or 200 and 400 MHz data rate, respectively, according to official JEDEC standards. Hardware enthusiasts achieve as high as a 300 MHz clock rate, equivalent to DDR600 or PC4800.
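Both naming schemes fall out of the same arithmetic; a small sketch (the helper name is ours, purely illustrative):

```python
# DDR naming: data-rate label (DDR...) and bandwidth label (PC...).
def ddr_names(clock_mhz: int) -> tuple[str, str]:
    data_rate = 2 * clock_mhz     # two transfers per clock
    bandwidth = data_rate * 8     # 8-byte-wide module
    return f"DDR{data_rate}", f"PC{bandwidth}"

for clk in (100, 200, 300):
    print(clk, "MHz ->", *ddr_names(clk))
# 100 MHz -> DDR200 PC1600; 200 MHz -> DDR400 PC3200; 300 MHz -> DDR600 PC4800
```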

Buffered / Un-buffered / Registered

As mentioned previously, nearly all system memory today uses un-buffered DDR. Essentially, with un-buffered memory the chipset addresses each memory chip on all modules in the system directly. As a consequence, there is a high electrical load that the memory controller has to overcome, which slows down the signal slopes. In other words, with increasing system memory, the controller has to do more work, and that results in performance deterioration because of added delays on the order of picoseconds.

In order to overcome this limitation, higher density systems use "registered" memory. A register is a buffer that temporarily holds data for one clock cycle and then propagates them on the next clock edge. The effect is that the chipset only sees the register, of which there is one per physical bank (rank) of memory, and therefore has much less resistance and capacitance to overcome. The drawback is that the temporary hold and translation of addresses and commands adds one additional latency cycle between the Chip Select and the Bank Activate command.

The term un-buffered is in reality a misnomer, since there are no buffered memory modules. The difference between a buffer and a register is that a buffer will propagate the data within the same cycle if possible; however, this is only possible at very low operating frequencies and even then results in somewhat sluggish behavior of the memory system.

ECC Memory

ECC stands for error checking and correction, which can be accomplished through a variety of different implementations. For single-bit error correction, the standard procedure is the use of an Exclusive OR algorithm that calculates a check bit for each byte across the memory bus, which is then written to a separate chip. Therefore, the ECC memory bus is 72 bits wide, as opposed to the standard 64-bit non-ECC memory bus. Keep in mind, though, that by convention ECC memory is counted as only 64 bits wide, since only the data bits count for functionality and bandwidth considerations.
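Here is a minimal sketch of the XOR idea: one check bit per data byte, which is what widens 64 data bits to 72 bus bits. (Real ECC uses a Hamming-style code so that single-bit errors can be located and corrected, not merely detected; this sketch shows detection only.)

```python
# Illustration of the XOR check-bit idea, detection only.
def check_bit(byte: int) -> int:
    return bin(byte).count("1") & 1   # XOR of all 8 bits

data  = [0b10110010, 0b00000001]      # illustrative data bytes
check = [check_bit(b) for b in data]  # 1 extra bit per byte -> 72-bit bus

corrupted = data[0] ^ 0b00000100      # a single flipped bit...
assert check_bit(corrupted) != check[0]  # ...no longer matches its check bit
```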

ECC memory has to compare the parity bits to the data bits; this requires extra calculations at the CPU level and slightly slows down memory operations compared to non-ECC memory, depending on the mode of operation (see below). Keep in mind, though, that the performance hit is caused more by the occupation of system resources (including CPU cycles for the ECC calculations) than by an actual reduction in bandwidth.

There is hardly any part of memory technology that is as badly misunderstood as ECC memory.

The biggest misconception is that ECC improves stability of the PC!

A layman might assume that error correction will improve the stability of the computer, but anyone who understands memory will know that this is not the case. ECC can only correct soft errors, that is, errors that occur after the data are written to the memory, when the particular memory cell gets hit by a cosmic ray. The chance of a hit by a cosmic ray is approximately 1 per GB of memory per month. That still sounds like a remote possibility, but keep in mind that the errors can only have consequences if the cosmic ray hits an area of memory that is currently in use. Otherwise, the CPU will never even see the false data; the corrupted entry is simply overwritten by the next set of data.

Moreover, errors can only have consequences if the same data are requested again within the same user session. If the PC is turned off in between, the data in the memory will be lost anyway, and so will the errors.

Once a month, the system will get hit by a cosmic ray!

True, but to rephrase the probability calculation, the real perspective would be one user constantly using 1 GB of memory for PC operations in a 24/7 environment over a period of one month. In that case, there would be a statistical probability of one soft error occurring. In 80% of all cases, the result would be a data error that would look like a single pixel with a slightly different color in one single frame of a game; only in 20% would there be an error in the instructions that could affect stability. This means that a household user would have to use 1 GB of memory 24/7 over 6 months without any interruption or idle time in order to experience a benefit from ECC.
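Reading those rough rates as an expectation, the arithmetic behind that conclusion looks like this (the figures are the ones quoted above, not independent data):

```python
# Expected stability-relevant soft errors, using the rates quoted above.
errors_per_gb_month = 1.0
gb, months = 1, 6
instruction_fraction = 0.20    # the ~20% of errors that hit instructions

expected = errors_per_gb_month * gb * months * instruction_fraction
print(f"{expected:.1f} stability-relevant errors in 6 months of 24/7 use")  # ~1.2
```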

Why then ECC at all?

ECC is, however, necessary for servers and workstations that use multiple gigabytes of memory and run for years without downtime. In that case, there is a chance of an accumulation of soft errors, and that is where the real challenge lies: it is not a crash of the system that might have fatal consequences but the proliferation of these soft errors through a database, which could go unnoticed if there were no "third party auditing" mechanism in place. In other words, for banks, flight operation / navigation and similar applications, ECC is not just good but a vital part of the memory subsystem, since it is the only way to verify data integrity and prevent errors from spreading.

Different modes of ECC.

ECC can be run "on demand", that is, only data that are requested are checked. A second mode is that the data are checked and corrected. The most effective, but also most resource-hungry, mode is so-called scrubbing, which means that the entire memory space is constantly checked for the occurrence of soft errors, even if the data are not being requested. If a soft error is found, it is corrected immediately. Needless to say, this mode of operation also takes up the most resources and causes the greatest performance hit, but it also avoids the problem of multiple errors that cannot be corrected, which statistically only occur after several months of continuous uptime.

The Future

In the beginning, there was FP memory; then came EDO and SDRAM. Currently we are looking at another changing of the guard, namely the projected transition from DDR to DDR-II. There is currently much hype about DDR-II and a lot of promotion of the new technology, which is supposed to be better than the current standard. The big question in this respect concerns the definition of better.

Better for DDR-II means that it is easier to manufacture the core, because it runs at ½ of the speed of the I/O buffers. That means that at a 100 MHz core frequency, it is possible to run at a 400 MHz data rate, since the output buffers will run at a 200 MHz clock rate while employing a DDR protocol. Feeding enough data onto the bus therefore requires that on each core cycle, four bits be moved from the array into the I/O buffers (per pin), which is described as a prefetch of 4, as opposed to the single fetch in SDRAM and the prefetch of 2 in DDR-I.
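The clock relationships described there work out as follows; a tiny sketch with the 100 MHz core assumed above:

```python
# DDR-II clocking arithmetic from the paragraph above.
core_mhz = 100
io_mhz = 2 * core_mhz       # I/O buffers run at twice the core clock
data_rate = 2 * io_mhz      # DDR protocol: two transfers per I/O clock
prefetch = data_rate // core_mhz   # bits per pin fetched each core cycle

print(io_mhz, data_rate, prefetch)   # 200 400 4
```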


Graph reprinted with permission from Lost Circuits: http://www.lostcircuits.com/memory/ddrii/2.shtml

Highlighted features of DDR-II are the increased bandwidth compared to the official DDR (-1) standards, but keep in mind that real-life performance is not necessarily bound by the official approval of a consortium. As a matter of fact, DDR-II is limited by the same physical properties as DDR; it was targeted to work at DDR400 speed and later migrate to DDR533 at a time when DDR266 seemed all that could be accomplished. Now DDR500 is almost mainstream, and even though DDR-II has some design advantages, such as On-Die Termination (ODT), to allow higher frequencies, these improvements were only implemented on the data bus, whereas the command and address buses are left with "stone age DDR (1) technology".

Every chain is only as strong as its weakest link, and without ODT on the address and command bus, the same frequency limitations apply to DDR-II that will eventually hold back DDR (1). Intriguingly, in GDDR3, these missing features have been added back in and are the key to the superior performance of GDDR3.

Back to DDR-II. On the downside are the enormously inflated latencies. DDR (1), by definition, uses a write CAS latency of 1. DDR-II uses the read CAS latency minus 1 for writes, meaning that we are looking at 3, 4 or 5 cycles of delay depending on the speed grade, compared to a single latency cycle in DDR.

Will DDR-II finally conquer the market? It is foreseeable that it will; it is better from a price-per-density standpoint, at least for the foundries, and it has full support from Intel, even though the consumer will not see the savings trickle down in the near future. On the other side of the scale are AMD and a number of chipset and mainboard manufacturers who suspect that they will pay the bills for the inflated implementation costs of DDR-II, with its enormous pin and trace count and the associated higher production costs at the system level. It is interesting that the shortfalls of the DDR-II architecture have led to a new proposal from Intel itself in the form of the FB-DIMM, a novel approach to funnel a wide bus through a narrow high-speed interface. Only time will tell; roadmaps are roadmaps and have been known to change with demand.

No plagiarism has taken place regarding this article

Memory Types. Accessed 30th September 2007. From: http://www.ocztechnology.com/displaypage.php?name=memory_types&psupp=1
 
Put them together…CAS + Latency = CL
CL stands for CAS Latency. This is the amount of time that it takes to retrieve data from the memory module. First a RAS (Row Access Strobe) signal is activated and then the CAS signal is activated to access the precise location of the requested data; the data is then transmitted. CAS Latency intervals are identified in clock cycles; for example, CL2/CAS-2 means it will take two clock cycles for the initial data stream to be sent. Therefore, CL2/CAS-2 modules can run faster than CL2.5/CAS-2.5 or CL3/CAS-3 modules.
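Since CL is counted in clock cycles, the actual delay in time also depends on the clock; a small illustrative conversion (the 200 MHz clock is an assumed example):

```python
# CAS latency in cycles -> delay in nanoseconds, at a given clock.
def cas_delay_ns(cl: float, clock_mhz: float) -> float:
    return cl * 1000.0 / clock_mhz

print(cas_delay_ns(2, 200))   # 10.0 ns: CL2 at a 200 MHz clock
print(cas_delay_ns(3, 200))   # 15.0 ns: CL3 waits half again as long
```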


How does CAS Latency impact my system’s performance?
The areas of computing most affected by CAS latency are memory-hungry applications such as computer games and other graphics-intensive programs, as well as multimedia applications such as video editing and home theater systems. A switch from CL3/CAS-3 to CL2/CAS-2 will show a noticeable gain in overall system performance while running these memory-intensive applications. CL3/CAS-3 will be adequate for most users who run basic web browsing and small office applications; CL2/CAS-2 modules provide low latency for those seeking higher performance in demanding applications. When overclocking a system, memory timings can range from low-latency CL2/CAS-2 overclocking modules to higher-latency CL2.5/CAS-2.5 or CL3/CAS-3 modules designed for extremely high memory speeds.


What is Latency?
Latency is the amount of time one system component is waiting to get what it needs from another system component. In terms of memory, it’s the interval between a processor’s stimulus and the memory’s response.

What is CAS?
Short for Column Access Strobe, CAS is a signal sent by the processor to a DRAM circuit to prompt a column address. DRAM stores data in a series of columns and rows; each bit of data is filed in both a column and a row. Data is retrieved from the DRAM by the processor using CAS and RAS (Row Access Strobe) signals, much like pinpointing a location on a map using coordinates.

No plagiarism has taken place regarding this article

Memory Types. Accessed 30th September 2007. From: http://www.ocztechnology.com/support/cas/
 
All the above original articles are courtesy of the referenced original authors and originating websites. Hopefully the above articles will help you understand memory types and the different speeds (MHz), as well as their individual timings such as 4-4-4-12.

Further reading: http://forums.overclockers.co.uk/showthread.php?t=53160

Voltage ranges are as specified by the individual manufacturers, as are the warranties provided.

A motherboard will support the rated speed (MHz) of the different module types, as indicated in its spec under supported memory. The amount of memory you will require depends on the operating system you wish to use, whether you want to overclock, and your applications (video editing etc.), although the standard memory kits sold are 2x1GB DDR2 kits.

As to your motherboard enquiry, in the thread http://forums.overclockers.co.uk/showthread.php?t=17784986&highlight=username_SkScotchegg W3bbo offers up a sound debate and good advice, taking into account that DDR3 is now available. Boards designed to support this memory type are slowly appearing on the market.
 