I'm with the previous posters. I do infrastructure for an application backed by 8 EMC CLARiiON-class SANs (>2000 spindles). We chose SANs for availability/reliability and because we can scale to a huge number of disks for performance. Disk for disk, you won't see a performance advantage over DAS, but DAS simply cannot scale the way a SAN can.
For example, I have a 20TB file server (DFS) cluster with a dedicated SAN. The 20TB is broken into 2TB LUNs (RAID 5 in this case) and exposed to an 8-node MSCS DFS cluster. The cluster groups/resources/LUNs are divided up so that two 2TB LUNs live in each cluster group. There are 5 of these cluster groups, and we normally keep them spread across 5 of the 8 nodes. On an average day, the load is sometimes high enough that collapsing 2 of these cluster groups onto one node pushes the 1GigE NIC to 60% utilization or higher. There's no way a single server could handle this workload; DAS would be a non-starter.
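A quick back-of-the-envelope check of the numbers above (the sizes and utilization figure are from my setup; the arithmetic and variable names are just mine for illustration):

```python
# Sanity-check the layout: 5 cluster groups x 2 LUNs x 2TB each
lun_size_tb = 2
luns_per_group = 2
groups = 5
total_tb = lun_size_tb * luns_per_group * groups
print(total_tb)  # -> 20, matches the 20TB file server

# What 60% of a 1GigE NIC works out to, ignoring protocol overhead
gige_bits_per_s = 1_000_000_000
utilization = 0.60
mb_per_s = gige_bits_per_s * utilization / 8 / 1_000_000
print(mb_per_s)  # -> 75.0 MB/s sustained on ONE node's NIC
```

That ~75 MB/s is per node, with the load normally spread over five nodes. That aggregate is the part a single DAS box can't absorb.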
From what I've seen, the 800 lb gorilla in the room when you're talking about a SAN is poor design. Know your workload, and know the capabilities of the FC infrastructure (HBAs, multipathing software, switches, SAN controllers, etc.). Know your SAN engineer, buy them a drink, and have them walk you through the design. When I'm going to spend >$100K on a piece of hardware, I get pretty anal. I don't ask my management to approve an order for a SAN design I haven't gone over myself.