I wouldn't buy new components for this; used 8th gen Intel stuff or newer goes for next to nothing.

Let's start with the basics: how much storage do you think you'll need? You'll likely vastly underestimate, but that's just the way it works. Do you have anything you'd like to reuse, e.g. drives or other hardware you already own? What's your budget, and how much protection do you want? One thing that usually leaves me shaking my head is people who are concerned about data resilience and will spend the money on ECC - without understanding that the entire data pathway from creation to writing needs to be ECC - but then cheap out and don't run a UPS.

In terms of size, a self-build (or a repurposed cheap pre-built) will often take up slightly more space than a standalone off-the-shelf NAS. It may even be slightly louder and sip slightly more power, but you won't wake up one day and find it's a paperweight because someone somewhere decided it was time to stop providing basic updates, and as/when you feel the need to do new things, upgrades are cheap and easy.
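If it helps to put numbers on the "how much storage" question, here's a rough back-of-envelope sketch. The drive sizes, growth rate, and parity layout are made-up example figures, not recommendations - plug in your own:

```python
# Rough capacity planner - all figures below are illustrative assumptions.
# Models an UNRAID-style layout: parity drives store no data, so usable
# space is just the sum of the data drives.

def usable_tb(data_drives_tb, parity_tb):
    """Usable capacity in TB. The parity drive must be at least as
    large as the largest data drive, or the array won't build."""
    assert parity_tb >= max(data_drives_tb), "parity drive too small"
    return sum(data_drives_tb)

def years_until_full(usable_tb, current_tb, growth_tb_per_year):
    """Naive linear projection - real-world growth is usually worse."""
    return (usable_tb - current_tb) / growth_tb_per_year

# Example: four 14TB data drives + one 14TB parity (parity not counted)
cap = usable_tb([14, 14, 14, 14], parity_tb=14)
print(cap)                              # 56 (TB usable)
print(years_until_full(cap, 10, 8))     # (56 - 10) / 8 = 5.75 years
```

Run that with your honest growth estimate, then double the growth number - that's usually closer to reality.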
If you do want to go new, for a micro build something like an ASRock N100 board would be highly efficient. You can buy NAS-specific cases for 4-8 drives, and the N100 packs enough of a punch to run services easily; if you go down the media server route, it's got enough grunt to unpack/transcode files easily. Drives - you'd need to tell us what you need. Shove 16GB in (yes, I know Intel officially claim it only supports 8GB, but unofficially 16GB works). Be warned: if you outgrow its capabilities, an N100 board will become a proverbial albatross around your neck very quickly.

Software comes down to two basic choices: UNRAID or TrueNAS. Personally I'd go UNRAID for media, as IOPS aren't that important. They've changed the pricing model recently, so you can either buy xx devices with lifetime updates, or xx devices with an annual subscription for updates; you still get basic security updates within the version you pay for, but major version updates won't be a thing. Use an NVMe cache drive to handle data landing/processing (prefer TLC over QLC; realistically 1TB is probably the minimum at this point, but buy whatever is best value), then move data to the mechanical drives once the cache hits x% full. Docker and VMs run off the SSD.

I tend to prefer HGST He drives, as their longevity is simply insane - I have a 'few' 10TB SATA and SAS drives that have been running for quite a few years at this point without a single failure; they just keep going. Recently, the sweet spot has tended to be 14-16TB drives.

If you need more IOPS and don't mind a full VDEV being spun up for any R/W operation, then ZFS and TrueNAS have the edge here. Before going ZFS, please make sure you understand the limitations it brings in terms of expansion - those are changing, but they've been changing for years - and an exit strategy can be horrifically time-consuming/inefficient or expensive if you get it wrong.
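UNRAID's bundled mover already handles the cache-to-array shuffle for you, but if you're curious what the "move once it hits x% full" logic boils down to, here's a toy sketch. The 70% threshold and the /mnt/cache path are illustrative assumptions, not UNRAID defaults:

```python
# Toy sketch of cache-mover trigger logic. UNRAID's real mover is far
# more involved (per-share settings, open-file handling, etc.); the
# threshold and path here are example values only.
import shutil

THRESHOLD = 0.70  # start moving once the cache is 70% full (example value)

def cache_usage(path="/mnt/cache"):
    """Fraction of the cache filesystem currently in use."""
    total, used, _free = shutil.disk_usage(path)
    return used / total

def should_move(usage, threshold=THRESHOLD):
    """True when cache usage has reached the move threshold."""
    return usage >= threshold

# e.g. a 1TB cache with 800GB used is 80% full -> time to move
print(should_move(0.80))  # True
print(should_move(0.40))  # False
```

The point of the threshold is that writes always land on fast NVMe, and the slow migration to spinning rust happens in the background off-peak - you only feel mechanical-drive speeds if the cache fills faster than the mover can drain it.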