Which OS?
You can make Windows XP do it by patching dmboot.sys, dmconfig.dll, and dmadmin.exe slightly (a Google search for those filenames should tell you everything), but you'd need at least 4 SATA ports to do it: one for the boot drive, plus three Windows 'dynamic disks' on which to put the RAID 5 array.
In Linux there are no issues booting directly from a software RAID 5 array, via an initrd. I used to get 30-40MB/sec on writes and 60-70MB/sec on reads when I had a VIA C3 1.2GHz fileserver with Linux software RAID 5, but that was over a PCI SATA controller.
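For anyone wanting to try it, here's a minimal sketch of the mdadm side. The device names, 3-disk layout, and ext3 choice are just placeholder assumptions - adjust for your own hardware:

# Build a 3-disk RAID 5 array (placeholder devices - use your own)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# Filesystem, plus a config entry so the initrd can assemble the array at boot
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf

Your distro's initrd tooling (mkinitrd or similar) then needs to pull in the md modules and that config so the array comes up before the root filesystem mounts.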
133MB/s PCI bus limit / 4 drives = ~33MB/s per drive, shared between both directions, so ~16MB/s each way. Since a write also requires a read to recalculate the parity, the three data-bearing drives give a theoretical (uncached) max of about 3 x 16 = 48MB/s on writes and 3 x 33 = ~100MB/s on reads. With onboard SATA connected via a PCI-E bus, you should get results similar to those of a dedicated RAID controller.
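The same back-of-envelope sums in shell form, assuming the 133MB/s figure and a 4-drive array (integer math, so the results round down slightly):

# PCI bandwidth budget for a 4-drive software RAID 5
echo "$((133 / 4)) MB/s bus share per drive"    # ~33 MB/s, both directions combined
echo "$((133 / 4 / 2)) MB/s per direction"      # ~16 MB/s, since parity writes also read
echo "$((3 * 16)) MB/s write ceiling"           # 3 data drives x 16 MB/s
echo "$((3 * 33)) MB/s read ceiling"            # 3 data drives x 33 MB/s, ~100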
rpstewart: I think the bus bandwidth is generally more of an issue than CPU power when it comes to the parity calculations...
Linux software RAID (md) in-memory benchmark on an Athlon64 3800+:
raid5: using function: pIII_sse (6990.800 MB/sec)
raid6: int32x1 848 MB/s
raid6: int32x2 899 MB/s
raid6: int32x4 870 MB/s
raid6: int32x8 578 MB/s
raid6: mmxx1 1809 MB/s
raid6: mmxx2 3298 MB/s
raid6: sse1x1 1645 MB/s
raid6: sse1x2 2469 MB/s
raid6: sse2x1 2250 MB/s
raid6: sse2x2 2947 MB/s