wmfoster

Member
  • Content Count: 22
  • Joined
  • Last visited

Everything posted by wmfoster

  1. wmfoster

    What enclosure to get?

    I've purchased the Pleiades Super S-Combo (passively cooled): http://www.macpower.com.tw/products/hdd3/pleiades/pd_scombo It's based on the Oxford 924 DSB chipset. Whilst I use Seagate hard drives exclusively (I like the 5 year warranty), I've never had any issues with this enclosure over SATA, FireWire 400 or USB. Not sure how it goes with Western Digital drives though. Anyone else have any experience with this Oxford chipset? I haven't tried the FireWire 800 support as my motherboard (Asus A8N-SLI Premium, nForce4 based) doesn't support it. Regards, Michael.
  2. wmfoster

    What enclosure to get?

    I have purchased the MacPower Pleiades Super S-Combo http://www.macpower.com.tw/products/hdd3/pleiades/pd_scombo It's a lot more expensive than some of the other enclosures out there (no doubt from having "Mac" in the title) but it's based on the Oxford 924 DSB chipset.
  3. wmfoster

    drive chassis recommendation needed

    A little bit off topic, but just a warning about noise in these trays. I bought a 3-in-2 hot-swap drive chassis (3 SATA drives in 2 x 5.25" bays): http://www.chieftec.de/pdf/manual_SNT-2131SATA.pdf The one I purchased didn't have the optional "J4 jumper" to allow low-speed fan operation, so the supplied fan sounds like an nVidia Dustbuster cooling solution, which is completely unacceptable in my nice quiet Antec P180 case powered by a Phantom 500 power supply. On the bright side you can switch the drives on and off from the front of the unit, and if all 3 drives are off the fan stops as well. I've now removed the supplied fan (unfortunately it had a non-standard connector on the plug) and replaced it with an 80mm fan (a Zalman ZM-OP1, the optional 80mm fan + fan grill for the Zalman ZM80C-HP) connected to a 5V line on the PC. This means that the chassis now always shows a red error indicating fan failure, but at least it's quiet. Before, I never wanted to turn the drives on. Just something else to consider. Regards.
  4. wmfoster

    Dell Perc4E (PCI-E SCSI on socket939)

    KC wrote: "There are no 939 motherboards with PCI Express x8 slots. You would need an Opteron board." The x16 PCIe slot on any nForce4 939 motherboard can be used to house a 4x or 8x PCIe RAID controller. I'd suggest you choose an SLI motherboard so you can have a video card (running on an 8x PCIe channel) in addition to the RAID controller. That's what I was thinking when I bought my ASUS A8N-SLI Premium board. Regards, Michael.
  5. wmfoster

    New SATA installation

    I think that is the problem - you can't have a RAID of 1 disk. The name is Redundant Array of Independent Disks, i.e. 2 or more. A RAID 0 or 1 will need a minimum of 2 disks. As I understand it, there are 2 SATA connectors. You may be able to set up a RAID combining 1 or 2 SATA disks with 1 or 2 disks on the PATA connector, but you will need at least 2 in total. I think you should be looking for a JBOD option, rather than RAID, for running a single disk. At least that's the way it works on my Sil 3114 on my ASUS A8N-SLI Premium board. The disk-count arithmetic is sketched below. Regards, Michael.
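    To put the disk-count rule in concrete terms, here's a rough Python sketch of the textbook minimums and the usable capacity you end up with. This is illustrative only - the 400G figure is just an example, and nothing here is specific to the Sil 3114's BIOS:

        # Illustrative sketch: textbook minimum drive counts and usable
        # capacity for the common RAID levels (not specific to any controller).

        RAID_MIN_DISKS = {
            "RAID 0": 2,   # striping: at least 2 disks
            "RAID 1": 2,   # mirroring: at least 2 disks
            "RAID 5": 3,   # striping + parity: at least 3 disks
            "JBOD":   1,   # just a bunch of disks: a single disk is fine
        }

        def usable_gb(level, n_disks, disk_gb):
            """Usable capacity in GB, or None if the level needs more disks."""
            if n_disks < RAID_MIN_DISKS[level]:
                return None
            if level in ("RAID 0", "JBOD"):
                return n_disks * disk_gb
            if level == "RAID 1":
                return disk_gb                   # mirror holds one disk's worth
            if level == "RAID 5":
                return (n_disks - 1) * disk_gb   # one disk's worth lost to parity

        for level in RAID_MIN_DISKS:
            print(f"{level}: 1 disk -> {usable_gb(level, 1, 400)}, "
                  f"3 disks -> {usable_gb(level, 3, 400)}")

    Running it shows that with a single disk every level comes back None except JBOD, which is the point: one disk can only ever be a JBOD.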
  6. wmfoster

    Promise SuperTrak EX8350 PCIe x4

    I've just purchased an A8N-SLI Premium for home and am setting it up now as we speak. Isn't one Gigabit NIC enough for you? If you seriously need more Gigabit NICs, I'd think that a workstation-grade board (based on nForce4 Pro or an Intel server chipset) would be a much better choice. If you did decide to stay with an nForce4 SLI board, for graphics I'd recommend you install the oldest, cheapest PCI (not PCI-e) video card you can get your hands on. Then you can run both of the x16 PCI Express slots as x8 and have 2 high-end RAID controllers or whatever else you need in them. Regards, Michael.
  7. wmfoster

    Hot Laptop HDD

    You might not be happy to hear this either: we have a fleet of 30 Dell Latitude C610 laptops (purchased March - May 2002). They also run very hot, especially the hard drives. So far we have had 15 hard drives fail. A couple of laptops are on their third drive now. Most hard drives went around the 2-2.5 year mark. The record so far is 4 weeks (the original drive in a laptop), followed by 8 weeks for a replacement, a brand new 40G Hitachi that Dell shipped me in September. Make sure you have a good backup (see the thread on Ghost vs TrueImage) if you have any data on a laptop drive you need to keep (or if you don't look forward to setting up your operating system and software again from scratch). Regards, Michael.
  8. wmfoster

    NAS Storage Serves vs. a Server ?

    NAS seems like a waste of money to me. If you're running Linux you can get another copy of the OS for free; if you're running Win2003 most likely you license per connecting device, so adding another server adds no additional cost. Direct Attached Storage, either within the server's 3-6RU chassis or connected via an external SCSI port to an external case, will be faster than using a Gig-E connected NAS (especially if you're not using jumbo frames on your network), and you don't need to dedicate another port on your switch. Adding more disks to a well-designed RAID array is no harder than daisy-chaining NAS units. Finally, if you decide you need to run another application somewhere, at least you have the option if you bought a server instead of a NAS. I've yet to hear a convincing argument for either NAS or SAN. But there must be one (because they're gaining in popularity), so if anyone can enlighten me please do. Regards, Michael. P.S. Head Office's SAN (which was storing the data from most servers on a single SAN array) recently died and they lost all data from four 24x7 systems. Much downtime and an enormous amount of work to rebuild everything.
  9. wmfoster

    Need sata raid5 advice

    If you purchase an nVidia nForce4 based motherboard supporting SLI (Asus, MSI and Gigabyte have all announced solutions) then you can run your video card in one slot (at PCI-e 8x) and a PCI-e 4x RAID controller in the 2nd PCI-e x16 slot (which should automatically negotiate down to PCI-e 4x to match the RAID controller). That might be a good solution for you? That's what I'm hoping to do when I finally purchase a new PC, which is currently looking like a January timeframe. Regards, Michael.
  10. Congratulations on your optical burning tower success; unfortunately, the one I built sucks. Care to give me some suggestions??? I built a 6-drive CD burning tower here at work using 6 Lite-On PATA 52x burners on an Intel 865 motherboard with a Promise U100TX2 2-channel Ultra ATA100 PCI IDE controller, a 2.8GHz P4, and 1G of dual channel RAM (Windows 2000 SP4). A Western Digital 72G Raptor sits on one of the SATA ports, and if I'm burning from an image on that drive to 4 drives it works a treat. Unfortunately, if I attempt to burn from the image to 6 drives (where drives 5 and 6 are the masters on the 2 channels of the Promise controller) it all goes to hell. Burn time goes up from around 4 minutes to 15 minutes and the drives burn in turn, 4, then 2, then 4, like some mad Christmas tree. Any recommendations would be greatly appreciated. Currently I only burn to 4 drives at a time to avoid the problem entirely. I'm using Nero as the burning software. I await your sagely advice... <grin> Thanks in advance.
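    For what it's worth, here's the back-of-the-envelope check I did (a rough Python sketch; 150 KB/s per CD "x" is the textbook figure and 133 MB/s is only the theoretical ceiling of a 32-bit/33MHz PCI bus - real-world PCI throughput is lower, but not 3x lower). The raw numbers suggest the bus itself shouldn't be the bottleneck, which is why I suspect the Promise controller or its driver:

        # Back-of-envelope check: is raw bandwidth the bottleneck for 6 burners?
        # 1x CD speed = 150 KB/s (textbook); PCI ceiling is the theoretical max.

        CD_1X_KB_S = 150
        burner_kb_s = 52 * CD_1X_KB_S       # ~7800 KB/s per drive at full speed
        total_kb_s = 6 * burner_kb_s        # aggregate for all 6 burners

        PCI_BUS_MB_S = 133                  # theoretical 32-bit/33MHz PCI ceiling

        print(f"One 52x burner: {burner_kb_s / 1024:.1f} MB/s")
        print(f"Six burners:    {total_kb_s / 1024:.1f} MB/s")
        print(f"PCI headroom:   {PCI_BUS_MB_S - total_kb_s / 1024:.1f} MB/s spare")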
  11. wmfoster

    Initial SLI #'s look promising

    Apologies accepted, Snyper. Let us know if you happen to find any articles testing a PCIe RAID card together with a PCIe video card.
  12. wmfoster

    Initial SLI #'s look promising

    Thanks, I think it will work as well. I hope someone will give this a run. I like to play games so a PCIe video card would be nice on my next PC, but RAID 5 is much more important to me for safely storing all my data.
  13. wmfoster

    Initial SLI #'s look promising

    Hi. No offense taken, I'm always happy to hear other people's opinions, but I think in this case you've been far too quick to judge me. I've been building computers for myself and others for 10 years and I've done a lot of reading on PCI Express. I assure you I know how a computer, and PCI Express, work. It looks like that was your first post, so maybe this is a troll, but if not... here goes...

    Actually it's 250Mbytes/second per lane (500Mbytes/s full duplex); 250Mbits/s would be slower than the maximum sustained transfer rate of modern hard drives. I understand how PCI Express works with multiple serial channels (lanes). I don't understand why you think otherwise??? Unless you misinterpreted my post as saying that a PCIe 4x card has four SATA ports??? That would be quite silly (you've assumed I must be stupid and you're so much smarter - assumptions like that can get you in a lot of trouble... pool sharks must love you).

    The reason I'm so interested in 4x cards is that most of the hardware RAID controller manufacturers are looking at using PCIe 4x cards, as PCIe 1x can potentially be bandwidth limited with 4 drives. This is a (major marketing) problem, as the manufacturers like to be able to say there is plenty of bandwidth to go around... they also like to use the same layout for 4, 8 and 12 drive cards, making the PCIe 4x slot a logical choice. New Intel based server motherboards usually come with PCIe 4x slots on board.

    You've asked a question you already think you know the answer to? And now you're going to answer your own question??? I'm not sure why you find this amusing; as you yourself point out below, 4x is plenty for current video cards, so nVidia's implementation makes perfect sense. I'm sure most people reading this forum would have known all that. If you're interested, though, check out these links about using a GPU as an audio effects processor. It will potentially need the bandwidth provided by PCIe (and particularly the full duplex nature of it). Older theory article: http://www-sop.inria.fr/reves/publications...4/posterGP2.pdf. Modern practical example: http://www.bionicfx.com/

    You didn't actually answer my question though, which was: is it possible to run a PCIe x4 RAID controller in the 2nd PCIe x16 slot on an nVidia SLI based motherboard whilst using a PCIe x16 video card (electrically running as an x8) in the 1st PCIe x16 slot? I'll be starting out with 3 (or 4, depending on budget) 400G drives in a RAID 5 array for redundant storage and adding extra drives as my storage needs increase (and my budget allows, up to a maximum of 8 drives). Currently to do this I have to buy a server motherboard that supports PCI-X (as opposed to PCIe), or go with an Intel Xeon board and give up having a PCIe video card. I'm sure someone will suggest software RAID but I still don't trust it (especially not software RAID 5), and besides, I dual boot between Linux and Windows and I want my data drive available from both operating systems. I want to be able to run a PCIe RAID card (which look like they're going to be PCIe 4x cards) and a PCIe video card (which are mostly PCIe x16 - physically, for marketing reasons) on the same motherboard.

    Would you care to have another go at answering my question, Snyper (or anyone else for that matter)? Eagerly awaiting any knowledge / news you guys might have on this topic.
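    To put rough numbers on the lane arithmetic, here's a quick Python sketch. The 250 MB/s per lane is the PCIe 1.0 spec figure; the ~60 MB/s per-drive sustained rate is my own ballpark assumption for a current 7200rpm SATA drive, not a measured value:

        # PCIe 1.0 lane arithmetic vs. an assumed per-drive sustained rate.
        # 250 MB/s per lane (each direction) is the spec figure; 60 MB/s per
        # drive is a ballpark assumption, not a benchmark result.

        PCIE_LANE_MB_S = 250
        DRIVE_SUSTAINED_MB_S = 60

        for lanes in (1, 4, 8):
            slot_mb_s = lanes * PCIE_LANE_MB_S
            drives_to_saturate = slot_mb_s // DRIVE_SUSTAINED_MB_S
            print(f"x{lanes}: {slot_mb_s} MB/s, saturated by ~{drives_to_saturate} drives")

    Which is exactly why an x1 card starts to look marginal at 4 drives, while x4 leaves plenty of headroom even for the 8 and 12 drive cards.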
  14. wmfoster

    Initial SLI #'s look promising

    I've read that it's possible to run two cards in non-SLI mode and therefore have four monitors running from two cards. What I'm really curious to know is whether it will be possible to run a video card in one slot and a 4x PCIe hardware RAID controller in the other slot. I want to set up a RAID 5 system for my PC; I've already got 4 HDDs of files I don't want to lose and I need to buy more disk again soon, and RAID 0+1 looks like a pretty expensive option in my case (even if it is free on the motherboard). Write performance isn't such a big issue as I'll have a separate (non-RAIDed) drive for my operating system and another for my temp files, swap space, and other frequently written data. Has anyone heard anything along these lines?
  15. wmfoster

    Slow surfing while downloading

    I'm running a Gigabyte motherboard with nForce2. The inbuilt NIC has no problems. I limit my downloads to 120K/s (I've got 1.5Mb/s ADSL) - using GetRight if it's from a webpage, or the built-in bandwidth limiting options if it's BitTorrent or WinMX - and everything else runs fine. I'll be upgrading to an AMD64 939-pin as soon as PCI Express capable nVidia motherboards are available (not sure if I'll wait for dual VGA card support yet). Hopefully I won't get your problems. I did read in a few places, though, that it isn't recommended to go with the nVidia IDE drivers. They speed up most hard drives a little bit but can cause all sorts of grief with optical drives. Just stick with the Microsoft provided generic IDE drivers. Regards, Michael.
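    The arithmetic behind the 120K/s cap, as a quick Python sketch (this ignores ATM/PPP framing overhead, which on real ADSL eats roughly another 10-15% of the raw rate, so the true headroom is a bit smaller):

        # Why a 120 KB/s download cap keeps a 1.5 Mb/s ADSL link responsive.
        # Framing overhead is ignored; real usable throughput is a bit lower.

        link_kb_s = 1_500_000 / 8 / 1000   # 1.5 Mb/s -> 187.5 KB/s theoretical
        cap_kb_s = 120                     # limit set in GetRight / the P2P client

        headroom = link_kb_s - cap_kb_s
        print(f"link ceiling ~{link_kb_s:.0f} KB/s, "
              f"headroom for browsing ~{headroom:.0f} KB/s ({headroom / link_kb_s:.0%})")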
  16. wmfoster

    PCAnywhere vs Remote Desktop

    Remote Desktop sharing doesn't work very well at all with Oracle. When you attempt to connect with SQL*Plus it can't find the database (even if you're running on the database server). Last I heard, Oracle said that Microsoft was cheating with some sort of dodgy back door entry and Oracle had no intention of changing their implementation to allow Microsoft's effort to work. I have to agree, though: for all the servers except the Oracle server I now use Remote Desktop Connection... It's way faster than PCAnywhere 9.2 or 10.51 connecting from a Windows 2000 Pro client. Regards, Michael.
  17. wmfoster

    Switches: managed or unmanaged?

    We've been using 7 Dell PowerConnect 5224 24-port Gigabit Ethernet switches (rebadged SMC TigerSwitch 8624T) for the past year and they've been great. Pretty cheap too. They're not full Layer 3 switches but they've proven more than good enough for our needs. (They only support 6 trunks per switch and they have no stacking ports, so they're really only good in an environment with <= 150 devices.) We've had 3Com in the past... wouldn't recommend them at all. We also have a Netgear 24x10/100 + 2xGigE switch and a couple of entry level Cisco 24x10/100 + 2xGigE switches. Both have been fine, but the Ciscos cost 5 times more than the Netgear, so I'm disappointed they can't make coffee or play chess!!!
  18. wmfoster

    Seagate 7200.8 When?

    Seagate Australia said a few weeks ago that the 300G 7200.8 drives will be available to OEMs in Australia at the start of October, with general retail availability around the end of October.
  19. wmfoster

    Hdd Mounting...

    I don't have any evidence for the newer fluid bearing drives, but the older drives often wouldn't see out their warranties if you chose to mount them at any angle other than exactly horizontal or vertical. I saw several failed hard drives (of different brands) from a, in my opinion, poorly considered HP Pavilion case that actually raised the internals by about 10 degrees so the CD-ROM would come up at the user. As for choosing between horizontal and vertical, it shouldn't matter.
  20. wmfoster

    Need A Desktop Replacement Laptop

    Just a word of warning: we are an all-Dell shop at work and I've never seen nor heard of anyone else having such a bad run with laptops as we have had. The Dell servers, Gigabit switches and desktops have been fine, but their laptops have really sucked. Out of about 25 C610s, all have had their keypad and keyboard replaced, 5 have had hard drives replaced, and 3 have had new motherboards. Oh, I nearly forgot: they all have bad markings on the screens due to poor case design - 6 LCD panels replaced so far. We have 2 C400s, both on their 3rd motherboard. We have 2 new D600s whose keyboards are suspect (from reading forums), and the technician who came out to service one of our C610s (he's in at least every fortnight) said they have had to replace a lot of D600 motherboards due to faulty Bluetooth connections. Also, the mouse buttons for use with the nipple don't last at all. If I was buying a laptop, Dell would be my very, very last choice. I'd go with a white box OEM before I chose Dell!!! Regards, Michael.
  21. wmfoster

    Dell Gigabit Switch

    We have 7 PowerConnect 5224s here and are very happy with them. We have one switch acting as the backbone with all the servers on it, and each of the other six 5224s connected to the backbone with a pair of ports trunked together. We can't trunk any more switches to the backbone though, as they have a limit of 6 active trunks per switch. We've had absolutely no problems with them and are very happy (although we don't implement QoS or other management options likely to confuse them). 90% of the PCs we are using are OptiPlex GX260s and most of the servers have Gigabit cards, so it's almost gigabit end-to-end. We recently purchased redundant power modules for the switches. Way better value for our needs than Cisco, 3Com or any of the other competitors.