T1 or T2 cmd rate

It's just that these days they never put the command rating on the RAM itself, as they once did.

That made it a lot easier to know what command rate your RAM could handle.

But I'm running mine at 2T at the moment to play it safe.

And from what I've read online, tighter timings and more bandwidth don't make much of a difference to performance on Sandy Bridge anyway.

What make and type of RAM is that, Surveyor, and at what voltages?
 
Thanks both, I'll give it a try with 1T and 1.5v and see what happens.

And I got the title the wrong way around, placing the letter T before the number. That's what happens when you post too early in the morning and you've played too many MMOs over the years. :)
 
1T and 2T are the command rates of the RAM.

Technical answer - Command Rate is the delay, in clock cycles, between when the Chip Select signal is activated and when any command can be issued to the RAM.

1T is supposed to be faster because it reduces the latency at which the memory is commanded by one clock cycle.

In real world use the difference isn't much.
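To put that "isn't much" into numbers, the one-cycle difference can be converted to nanoseconds from the memory clock. A minimal sketch, assuming an example DDR3-1600 kit (the figures are illustrative, not from any post above):

```python
# Extra delay of 2T over 1T in nanoseconds, for an example DDR3-1600 kit.
# DDR transfers data twice per clock, so the I/O clock is half the data rate.

def command_rate_delay_ns(data_rate_mt_s: float, command_rate_cycles: int) -> float:
    """Delay contributed by the command rate, in nanoseconds."""
    clock_mhz = data_rate_mt_s / 2           # e.g. 1600 MT/s -> 800 MHz clock
    cycle_ns = 1000.0 / clock_mhz            # one clock cycle in ns
    return command_rate_cycles * cycle_ns

delta = command_rate_delay_ns(1600, 2) - command_rate_delay_ns(1600, 1)
print(f"2T adds {delta:.2f} ns per initial access at DDR3-1600")  # → 1.25 ns
```

A fraction of a nanosecond or so per initial access, which is why the real-world difference is hard to measure.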
 
Performance wise? Likely low single-digit % increases in synthetic benchmarks and even less of a performance increase in real-world use.

I believe it relates to the delay between commands. The lower, the better.
 
It seems as though nobody here knew what the 1T and 2T command rates really were. We knew the performance difference, but not what it actually is, or at least I thought so. Well, I found a website that wrote an article back in 2001, and it told me what it was. Basically, the command rate is the time it takes for the initial memory fetch to occur. Let me explain:

When the MC (Memory Controller) first tries to access memory, it has to latch onto a memory bank, known as CS (Chip Select). Then it proceeds to find the row (RAS), then the column (CAS), and then return the data to the CPU. Now, 1T means it takes 1 clock cycle to "find" a memory bank, vs. 2T where it takes 2 clock cycles to "find" the memory bank. But there's a quirk: this only happens the first time data is fetched from memory, and all subsequent accesses to that chip are done without the delay, making the command rate null after the initial chip fetch.

"Whether the chip select can be executed in a single clock or whether it needs two clocks, depends on a variety of factors. Among the most crucial contributing factors appears to be the number of banks populated within the system from which the correct bank has to be selected. In a single bank configuration, the system already knows that all data have to be within this bank. If more banks are populated, there is an additional decision involved. Translated, that means that the number of chips within the entire pool of system memory plays an important role in how fast the DRAM command can be executed. This is highly oversimplified but to spell it out, it means that a single DIMM with only 8 chips (single bank) is easier and faster decoded within the entire possible memory space than 2 DIMMs with 2 banks each.

Other factors involve the distance of the DIMM slot from the memory controller and, most importantly, the quality of the DIMM's PCB. If a 4-layer PCB with its relatively high noise level is used, there is a fat chance of ever hitting a 1t command rate. A 6 layer PCB on the other hand has a better signal to noise ratio and can greatly speed up the decoding (on the level of the PCB). It is, therefore, not surprising that, even if the same chips are used on different DIMMs, the differences in the PCB will dictate whether the module is able to perform at a command rate of 1T or 2T"

So in a nutshell, just as you have to raise latencies such as CAS and RAS when overclocking to remain stable, you should raise the command rate as well. Now, I know this may be old news or may not have been needed, but I honestly never knew it, and I know a few people on here didn't either. If you find it helpful, good; if you didn't, well, it's something to read.
 