Performance depends almost entirely on the drivers (since it's basically software RAID), your CPU, and the width of the connection from the SATA chipset to the system bus. AFAIK modern onboard SATA RAID is usually connected via one or two PCIe lanes rather than the old PCI bus, so bandwidth isn't usually a concern unless you have 4 or more drives.
I'd expect performance to be almost identical between onboard RAID and a dedicated RAID card with up to 4 disks in most scenarios - at least, that matches my experience.
As for Linux, most onboard RAID chipsets are supported via the dmraid utility, which reads the controller's on-disk metadata and creates Linux device-mapper (hence the 'dm' part) devices from the separate disks which the low-level driver sees (/dev/sda, /dev/sdb, etc.). Running dmraid early in the boot stages (from the initrd/initramfs), before the root filesystem is mounted, lets you put a Linux root filesystem, or LVM etc., on top of onboard software RAID. I've done this with 2-6 disks on nForce4, ICH5, ICH6 and ICH7 chipsets.
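To illustrate, here's roughly what that workflow looks like from a root shell. This is a sketch: the RAID set name (`isw_*` is typical for Intel ICHx "Matrix" metadata) and the device names will differ on your hardware, so treat them as placeholders.

```shell
# Scan disks for RAID metadata and list any sets dmraid recognises
dmraid -r

# Show which RAID sets (and their member disks) were discovered
dmraid -s

# Activate all discovered sets; this creates device-mapper nodes
# under /dev/mapper/, e.g. /dev/mapper/isw_xxxxxxxx_Volume0
dmraid -ay

# The assembled array (and its partitions) can then be used like
# any other block device - partition it, put LVM on it, mount it:
mount /dev/mapper/isw_xxxxxxxx_Volume0p1 /mnt
```

When this runs from the initramfs, the distro's hook scripts do the `dmraid -ay` step for you before the root filesystem is mounted, which is what makes root-on-fakeraid possible.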