About mitchm3

  1. Well, it can't all be too bad... EMC got to keep the Unity name from Nexsan. I'm sure a large check was involved.
  2. Cue lawsuit from NetApp over the name Alta... 3, 2, 1....
  3. I've had a number of the older Momentus XT hybrids, and never again, until they put a meaningful amount of flash in there. 8GB is not enough, IMO. I'd much prefer to see at least 32GB on a 1TB drive, perhaps 16GB on the 500GB, so costs stay low. But 8GB gets eaten up quickly, and the gains are quickly lost in everyday computing. Think of it as quasi Intel Optane-like functionality, but without needing to change motherboards, chipsets, etc., and with equal capacity.
  4. When the article mentions "term based" licenses, does that mean they are not owned by the customer? Is it just a subscription, or OpEx, model? No perpetual license model?
  5. What do the 4 nodes do? Why only one FC switch? What kind of SAN, and what types of disk? So many questions...
  6. I too like HGST. Just grab an affordable 3-4TB drive, which seems to be the sweet spot for cost these days, and be done with it.
  7. The Pentium M launched around 2003 and ran for about five years. Like Vista, it hasn't been in production for about 10 years. Frankly, it's time to upgrade. It doesn't even support a 64-bit OS! Even going to Windows 8.1 seems like an odd choice when 10 is out and has been solid and stable. New PCs with more clock speed, more RAM, and more cores can be had for under $300, and in many cases they'll even come with a copy of Windows. I'm currently using a 5-year-old AMD-powered system with 8GB of RAM and 6 cores, and it's plenty for my general web stuff, transcoding videos, and running a Linux VM or two as needed.
  8. Or that you have the appropriate disks to get the data out over the network fast enough, or the right protocol, etc... I'm usually content when I see around 600-700MB/s on a good day, usually a Wednesday after it rains.
  9. Agreed on the troubleshooting tools mentioned above. You can dig into what svchost is being called by through third-party tools. RacAgent is a Windows executable, which seems to have a lot of issues, as evidenced by a quick Google search. Vista is also a 10-year-old OS... there is something to be said for supporting something that old. But I digress.
  10. I used to be in a position to image dozens of machines a week, limited only by my workbench, which could handle about 5 at a time. The best imaging performance for me was off Isilon and NetApp NAS devices with hundreds of spindles behind them and a lot of front-end cache, over a 10GbE uplink split to a 1GbE switch... not exactly a fair comparison, I know. But even lower-end Windows-based file servers with a 15-bay DAS attached could do better than any local USB drive at the time. Today's USB 3.1 SSD systems can do pretty well, over 300MB/s of read speed for large-block data, and when we're talking imaging, we're often talking sub-50GB images, so that should be pretty darn quick. If you're really imaging a lot of machines, nothing beats cache and spindle count. How about a sustained 300+MB/s across 10 machines simultaneously? That is a time saver! I used to work for a couple of companies that did network imaging too; it was fun to deploy 50 computers in a lab and show them how fast I could re-image and re-profile the desktops each day... ah, the good ole days. Never again!
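  A quick back-of-the-envelope on those numbers (a rough sketch; the 50GB image size and 300MB/s rate are just the ballpark figures from the comment above, with decimal units assumed):

  ```python
  # Rough imaging-time math for the figures above. Assumes decimal units
  # (1GB = 1000MB) and a perfectly sustained transfer rate, which real
  # storage rarely delivers.

  def image_time_seconds(image_gb: float, mb_per_sec: float) -> float:
      """Seconds to stream one image at a sustained rate."""
      return image_gb * 1000 / mb_per_sec

  # One machine off storage sustaining 300MB/s:
  print(f"{image_time_seconds(50, 300) / 60:.1f} min")  # ~2.8 min

  # Ten machines sharing a single 300MB/s drive split the bandwidth,
  # while a big NAS that sustains 300MB/s per client does not:
  print(f"{image_time_seconds(50, 300 / 10) / 60:.0f} min each")  # ~28 min
  ```

  Which is why cache and spindle count win once you start imaging machines in parallel.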
  11. A company like Nimble was created to address what the big players weren't doing: simple to set up, simple to license (all-inclusive licensing), easier maintenance, etc. Their hybrid approach was better than the incumbents'. Their all-flash was more of a "me too" to address the competition, IMHO. They now have NAS capabilities, I think. Much of Nimble was founded by ex-NetApp and ex-DataDomain folks. As such, I think NetApp is a fantastic NAS and pretty hard to beat. As a SAN, I think they're second-rate, and more of their cool stuff (data services) is on the file side, not the block side. But with NetApp, you get a true enterprise product with a tremendous amount of maturity over a startup like Nimble. If going NetApp, I'd look at running Hyper-V over SMB 3.0. Again, this is where I think NetApp shines: file services, not FC/iSCSI access. If Nimble, FC or iSCSI is their preferred method. So you may not need a new FC SAN, and can remove that cost and stick with 10GbE. Veeam supports Nimble and NetApp snapshots, if that tickles your fancy. But truth be told, as cool as that sounds, it's an incredibly complex implementation that you'll troubleshoot more than you want to or think you will. This goes for all snapshot-based, LAN-free, SAN-based backup technologies. Man, I hate that concept these days, and the grumpy Unix admins that think they need it...
  12. Neither CDW nor SHI carries inventory. Often they buy from a distributor, who drop-ships it, or it's drop-shipped from the manufacturer. If you want inventory, buy from a place like Amazon, Newegg, etc. If they don't have it, then it's probably a low-volume item being scooped up by a large OEM like HPE or Dell EMC, which buy in lots of thousands at a time. Until production ramps up, it's anyone's guess when the consumer versions will hit retailers.
  13. As someone who specializes in data protection and archive solutions for companies of all sizes: while technically feasible, this is a very bad idea, with a lot of risk in storing the data in the manner described. Tape would be an acceptable solution, as long as you write two copies of the tapes. Then in 5 years or so, look at replacing your tape solution with a newer one. LTO is backwards compatible two versions back for reads; e.g., LTO7 can read LTO5 and write to LTO6. So you may need to upgrade to LTO7 soon, to read those LTO5 tapes. Then in 5 or so years, go to LTO9 to be able to read LTO7 tapes, with a couple of tape migration projects in between. Or get an enterprise tape library that can run multiple types of tape drives. Another solution would be to stick that data in the cloud, and keep it in two sites or with two providers, like a copy in Amazon and/or Azure. Amazon Glacier would be about $400-500/month at today's prices for 100TB, and their Snowball can help you get the data into their cloud quickly. Pricing should go down over time as scale and economics work in our favor. I suggest two vendors because who knows how this newfangled "cloud" will shake out, and who wins and loses over the next decade. If you are dead set on using hard drives, make multiple copies, and I'd probably do it across two different brands of drives. When talking 10 years of retention, the media type is very important, for compatibility's sake and for ease of accessing said data. But just as important is the environment the media is stored in (humidity, temperature, etc.). Best to look at Iron Mountain or similar to store these. Which, by the way, I'm sure Iron Mountain offers some sort of storage platform and long-term archival solution too.
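  The generation hopscotch described above can be sketched in a few lines (an illustrative model of the read-2-back/write-1-back rule as it stood through LTO7, not vendor-verified for every generation; the $0.004/GB-month Glacier figure is an assumption based on then-current list pricing):

  ```python
  # Sketch of the LTO migration math: a drive of generation N is assumed
  # to read back two generations (N-2) and write back one (N-1).

  def can_read(drive_gen: int, tape_gen: int) -> bool:
      """An LTO drive reads its own generation and the two before it."""
      return drive_gen - 2 <= tape_gen <= drive_gen

  def migration_plan(start_gen: int, target_gen: int):
      """Hop two generations at a time so the old tapes stay readable."""
      plan, gen = [], start_gen
      while gen < target_gen:
          nxt = min(gen + 2, target_gen)
          plan.append((gen, nxt))  # read gen-N tapes, rewrite on gen-nxt drive
          gen = nxt
      return plan

  # LTO5 tapes today: an LTO7 drive can still read them...
  assert can_read(7, 5)
  # ...but an LTO9 drive could not, hence the migration hops:
  print(migration_plan(5, 9))  # [(5, 7), (7, 9)]

  # Ballpark Glacier cost quoted above, assuming ~$0.004/GB-month:
  monthly_usd = 100 * 1000 * 0.004  # 100TB in GB, times rate
  print(f"${monthly_usd:.0f}/month")  # $400/month
  ```

  Two hops from LTO5 to LTO9, each one a migration project; that is the hidden cost of decade-long retention on tape.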
  14. I have run into many a customer issue with vSAN ready nodes, including complete outages and data loss. Many of them were Dell, pre-EMC merger: too busy selling inventory to bother engineering a correct solution. EMC, on the other hand, with VxRail goes knee-deep in building a proper solution, with growth and performance factored in. I would suggest you speak to the legacy EMC folks, not the legacy Dell folks.
  15. For whatever reason, the few Hyper-V-focused HCI players out there escape me. Perhaps they're not surviving? Or had to diversify to support other/more platforms? You can always do what the big boys do and go full-on OpenStack, KVM, Docker, etc.