Maybe better to look at what it is you're asking the GPU to do, and where you're seeking the performance improvement. My general understanding is that for the likes of effects, noise reduction, optical flow, etc ... the performance will generally come from how many GPU / CUDA cores you have. For encoding, the performance comes from the encoder engines within the GPU.
As an example, a 3090 has 1 encoder engine ... and so do a 3080, 3070 and 3060. So they would all 'encode' at a similar speed, as they all use basically the same engine. However, if you had effects applied, then the 3090 will likely be markedly faster than the 3080 and below, due to the higher number of CUDA cores and memory which can grunt out the effects calculations before passing them on to be encoded.
I used to notice this on my 3080. A simple encode would keep the card quiet. Where effects like noise reduction were applied, you could hear the card's coils whine slightly. It was clearly doing a different kind of work when encoding with effects applied.
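If you want to see that split for yourself, a rough way (not DaVinci-specific, just standard nvidia-smi query fields, so treat it as a sketch) is to poll nvidia-smi while an export runs. A plain encode tends to show the encoder session busy while overall GPU utilisation stays fairly low; an export with heavy effects pushes the general GPU utilisation right up.

import subprocess
import time

# Standard nvidia-smi --query-gpu fields: overall GPU load, active encode
# sessions, and the encoder's average fps.
FIELDS = "utilization.gpu,encoder.stats.sessionCount,encoder.stats.averageFps"

def sample():
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=" + FIELDS, "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "97, 1, 58" -> GPU %, encode sessions, encoder fps

if __name__ == "__main__":
    print("gpu_util_%, encode_sessions, encoder_avg_fps")
    for _ in range(30):           # sample for ~30 seconds while the export runs
        print(sample())
        time.sleep(1)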
Would adding a second card allow a second set of CUDA cores and a second encoding engine to be used in parallel? Don't know personally, never tried.
However, if you made the jump to a 4070 Ti or above, they have 2 encoder engines in them ... I presume DaVinci would be able to use both if they are on the same card. You might drop your encode time a lot, at the possible expense of effects performance. Depends on what you want to improve.
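If you want a feel for raw encoder-engine speed on its own, outside of DaVinci, something like the sketch below works, assuming you have an ffmpeg build with NVENC support and point it at one of your own clips (clip.mov here is just a placeholder). With no filters in the chain, the run time is dominated by the NVENC engine rather than the CUDA cores. Whether a single export actually gets spread across both engines on a dual-NVENC card is still the presumption above, not something I've confirmed.

import subprocess
import time

start = time.time()
subprocess.run(
    ["ffmpeg", "-y", "-i", "clip.mov",       # clip.mov = placeholder for your own footage
     "-c:v", "hevc_nvenc",                   # HEVC via the NVENC hardware encoder
     "-preset", "p5", "-b:v", "40M",
     "nvenc_test.mov"],
    check=True,
)
print(f"pure NVENC encode took {time.time() - start:.1f} s")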
- - - - - - - -
This also seems to be the case with Macs in DaVinci .... performance increases with the number of GPU cores, but there are also big steps in performance where there are additional media encode engines.
For example, the M2 Max chip has 2 encode engines and the M2 Ultra has 4 ... and the Ultra generally shows nearly double the performance as a result.