Well, if you think of something like a Segway, which adjusts its balance something like 100 times a second, there are clearly elements of this sort of problem that can be dealt with.
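To give a rough idea of what that kind of fixed-rate balance loop looks like, here's a minimal sketch in Python. It isn't the Segway's actual firmware; the loop rate, the gains and the placeholder sensor/motor functions are all assumptions for illustration:

```python
# A minimal sketch (not Segway's real control code) of a fixed-rate balance loop:
# read a tilt estimate, compute a correcting motor torque, repeat at ~100 Hz.
import time

LOOP_HZ = 100          # assumed control rate, for illustration only
KP, KD = 40.0, 2.0     # hypothetical proportional/derivative gains

def read_tilt_radians():
    """Placeholder for a gyro/accelerometer tilt estimate."""
    return 0.0

def set_motor_torque(torque):
    """Placeholder for sending a command to the wheel motors."""
    pass

def balance_loop():
    period = 1.0 / LOOP_HZ
    previous_tilt = read_tilt_radians()
    while True:
        tilt = read_tilt_radians()
        tilt_rate = (tilt - previous_tilt) / period
        # Leaning forward -> drive the wheels forward to bring the base back under the rider.
        set_motor_torque(KP * tilt + KD * tilt_rate)
        previous_tilt = tilt
        time.sleep(period)
```

The point is just that responding quickly and repeatedly to a clean numeric input is the easy, well-understood part.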
I think the weak link would be in sensing things rather than responding to them. While a computer might be able to respond more quickly, more accurately and more frequently to a stimulus, it's going to be a hell of a job to put all the things a driver experiences with his eyes, his ears, his hands on the wheel, his arse on the seat, his feet on the pedals... into terms a computer can interpret.
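Just to illustrate that translation problem, here's a toy sketch in Python where each of those human senses becomes a numeric channel some software has to make sense of. Every name, unit and threshold here is made up purely for illustration, not taken from any real driving system:

```python
# Toy illustration: each human sense becomes a data channel the software must interpret.
from dataclasses import dataclass

@dataclass
class DriverState:
    camera_frame: bytes         # stands in for eyes
    microphone_level_db: float  # ears
    wheel_torque_nm: float      # hands on the wheel
    seat_accel_g: tuple         # arse on the seat (vibration/acceleration)
    pedal_positions: dict       # feet on the pedals

def interpret(state: DriverState) -> str:
    """Crude rule-of-thumb interpretation; a real system would need vastly more than this."""
    if abs(state.seat_accel_g[1]) > 0.5:
        return "harsh ride detected"
    if state.microphone_level_db > 90:
        return "loud noise - investigate"
    return "nominal"

example = DriverState(
    camera_frame=b"",
    microphone_level_db=72.0,
    wheel_torque_nm=1.3,
    seat_accel_g=(0.0, 0.1, 1.0),
    pedal_positions={"throttle": 0.2, "brake": 0.0},
)
print(interpret(example))  # -> "nominal"
```

Writing the rules in `interpret` is the hard bit, not the loop that runs them.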