SNR is how much of the signal is actually received reliably, isn't it?
Kinda.
A lower figure means both a noisier line and lower overall signal strength.
Not necessarily. In isolation, a lower SNR tells you nothing about the absolute values of either the received signal or the noise. On the other hand, if you know the signal level is unchanged, then a lower SNR tells you that the noise must have increased.
The SNR is simply the difference between the received signal level and the noise level (both in dB).
The sync rate on FTTC seems to be determined by the SNR minus the line attenuation of each VDSL2 frequency band. So both are important factors.
No, that's not how I understand it at all - if you have a reference for that it would be interesting to read.
The line attenuation is a measure of how much the signal is reduced over the length of the line. This will influence the measured SNR - so, picking numbers completely out of thin air: say the signal starts at 20dB and you have 15dB of attenuation, then the received signal level would be 5dB. If the noise is at 2dB, your SNR would be 3dB.
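In code form, the same made-up numbers (this is nothing more than the dB arithmetic - every value here is invented):

```python
# Toy dB arithmetic only - all of these numbers are invented.
tx_level_db = 20.0      # signal level leaving the cabinet (made up)
attenuation_db = 15.0   # loss over the length of the line (made up)
noise_db = 2.0          # noise level at the receiver (made up)

rx_level_db = tx_level_db - attenuation_db   # 20 - 15 = 5 dB received
snr_db = rx_level_db - noise_db              # 5 - 2 = 3 dB SNR

print(f"received level: {rx_level_db} dB, SNR: {snr_db} dB")
```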
The sync rate is established solely from the SNR; it makes no sense to include the line attenuation in that calculation, because the attenuation is already baked into the SNR.
The 'SNR margin' adjusts the sync rate accordingly, to ensure an adequate 'safety margin' between the SNR (received signal strength) and line attenuation (signal degradation over distance). Typically, this is configured by Openreach to be 3dB or 6dB on FTTC.
The margin is a target SNR - during the sync, the modems (yours and the one in the cabinet) will effectively test the line to see what rate it will support while sticking to that minimum SNR. I believe DLM can also go to a 9dB margin.
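For a feel of how a higher margin translates into a lower sync rate, here's a very rough sketch of the textbook Shannon-gap bit-loading idea - this is not the actual G.993.2 algorithm, and the SNR gap, tone SNRs and tone count are all made up purely for illustration:

```python
import math

# Rough sketch of DMT bit loading with a target SNR margin.
# Textbook Shannon-gap approximation, NOT the real G.993.2 algorithm.
SYMBOL_RATE = 4000          # DMT symbols per second
SNR_GAP_DB = 9.75           # commonly quoted textbook SNR gap for QAM
MAX_BITS_PER_TONE = 15

def bits_per_tone(snr_db: float, target_margin_db: float) -> int:
    """Bits one tone can carry while still keeping the target margin."""
    effective_snr_db = snr_db - SNR_GAP_DB - target_margin_db
    snr_linear = 10 ** (effective_snr_db / 10)
    return min(MAX_BITS_PER_TONE, max(0, math.floor(math.log2(1 + snr_linear))))

def sync_rate_bps(per_tone_snr_db: list[float], target_margin_db: float) -> int:
    """Sum the loadable bits over all tones and multiply by the symbol rate."""
    total_bits = sum(bits_per_tone(s, target_margin_db) for s in per_tone_snr_db)
    return total_bits * SYMBOL_RATE

# Made-up per-tone SNRs: a higher target margin leaves fewer bits per tone,
# so the sync rate drops even though the line itself hasn't changed.
snrs = [45.0] * 500 + [30.0] * 500 + [15.0] * 500
for margin in (3, 6, 9):
    print(f"{margin} dB margin -> {sync_rate_bps(snrs, margin) / 1e6:.1f} Mb/s")
```

Note that the attenuation never appears here: it only shows up indirectly, through the per-tone SNR values.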
I assume this margin is necessary to cope with fluctuating line conditions caused by things like electrical noise and weather.
Correct. Noise on the line will naturally vary over the day - in particular, radio signals propagate differently at different times of day, so the line will pick up varying amounts of RF. Plus local noise sources may not be switched on at the time of the sync.
Attenuation is an inherent property of the line - it shouldn't vary by much (although I imagine some faults can cause it to change). And since the transmit power is constant, the received signal level will most likely stay more or less the same - it's the variation in the noise level that can swamp the received signal, leading to errors and ultimately resyncs. That is what DLM is trying to prevent by setting an initial minimum SNR margin.
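A toy way to picture that (made-up numbers again, and only the margin arithmetic - not how DLM actually monitors a line):

```python
import random

# Toy illustration with invented numbers: the rate (and hence the SNR the
# line must sustain) is fixed at sync time, so the margin chosen then is
# the only thing absorbing later rises in the noise floor.
random.seed(1)

target_margin_db = 6.0      # margin the modems trained up with (made up)
noise_at_sync_db = 2.0      # noise level at the moment of sync (made up)

for hour in range(24):
    # Noise wanders through the day: RF pickup, local interferers on/off...
    noise_db = noise_at_sync_db + random.uniform(-1.0, 8.0)
    noise_rise_db = noise_db - noise_at_sync_db
    remaining_margin_db = target_margin_db - noise_rise_db
    if remaining_margin_db < 0:
        print(f"hour {hour:2d}: margin exhausted -> errors, possibly a resync")
    else:
        print(f"hour {hour:2d}: {remaining_margin_db:4.1f} dB of margin left")
```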
But that's just what I've picked up and inferred over the years, so I'd be interested in any alternative references you may have.