mitchm3

Everything posted by mitchm3

  1. The Pentium M launched around early 2002/2003 and ran for about five years. It hasn't been in production, like Vista, for about 10yrs. Frankly, it's time to upgrade. It doesn't even support a 64-bit OS! Even going to Windows 8.1 seems like an odd choice when 10 is out and has been solid and stable. New PCs with more clock speed, more RAM, and more cores can be had for under $300, and they'll even come with a copy of Windows in many cases. I'm currently using a 5yr-old AMD-powered system with 8GB of RAM and 6 cores, and it's plenty for my general web stuff, transcoding videos, and running a Linux VM or two as needed.
  2. Or that you have the appropriate disks to get the data out to the network fast enough, or the protocol used, etc... I'm usually content when I see around 600-700MB/s on a good day, usually a Wednesday after it rains.
  3. Agreed on the troubleshooting tools mentioned above. You can dig into what a given svchost instance is hosting through 3rd-party tools. RacAgent is a Windows executable, which seems to have a lot of issues, as evidenced by a quick Google search. Vista is also a 10yr-old OS... There is something to be said for supporting something that old. But I digress.
  4. I used to be in the position to image dozens of machines a week, limited only by my workbench, which could do about 5 at a time. The best imaging performance for me was off of Isilon and NetApp NAS devices that had hundreds of spindles behind them and a lot of front-end cache, over a 10GbE uplink split to a 1GbE switch... Not exactly a fair comparison, I know. But even lower-end Windows-based file servers with a 15-bay DAS attached could do better than any local USB drive at the time. Today's USB 3.1 SSD systems can do pretty damn well, over 300+MB/s of read speed for large-block data, and when talking imaging, we're often talking sub-50GB images, so that should be pretty darn quick. If you're really imaging a lot of machines, nothing can beat cache and spindle count. How about a sustained 300+MB/s across 10 machines simultaneously? That is a time saver! I used to work for a couple of different companies that did network imaging too; it was fun to deploy 50 computers in a lab and show them how fast I could re-image and re-profile desktops each day... Ah, the good ole days... Never again!
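As a rough sanity check on those numbers (my ballpark figures, not anything official), the math on imaging time works out like this:

```python
# Back-of-the-envelope imaging time: how long a 50GB image takes at a
# sustained read speed. The 50GB / 300MB/s figures are the illustrative
# numbers from the post above, not benchmarks.

def imaging_time_minutes(image_gb: float, throughput_mb_s: float) -> float:
    """Minutes to move an image at a sustained throughput (1GB = 1024MB)."""
    return (image_gb * 1024) / throughput_mb_s / 60

# One machine off a USB 3.1 SSD at ~300 MB/s:
single = imaging_time_minutes(50, 300)
print(f"50GB image at 300MB/s: {single:.1f} min per machine")  # ~2.8 min

# A big NAS sustaining 300 MB/s to 10 machines *each* finishes 10 images
# in that same ~2.8 minutes -- that's where spindle count and cache pay off.
```

The per-machine time is the same either way; the parallelism is the whole point of the big-cache, many-spindle NAS approach.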
  5. A company like Nimble was created to address what the big players weren't doing: simple to set up, simple to license (all-inclusive licensing), easier maintenance, etc. Their hybrid approach was better than the incumbents'. Their all-flash was more of a "me-too" to address the competition, IMHO. They now have, I think, NAS capabilities. Much of Nimble was founded by ex-NetApp and ex-DataDomain folks. As such, I think NetApp is a fantastic NAS and is pretty hard to beat. As a SAN, I think they're second rate; more of their cool stuff (data services) is on the file side, not the block side. But with NetApp you do get a true enterprise product that has a tremendous amount of maturity over a startup like Nimble. If going NetApp, I'd look at running Hyper-V over SMB 3.0. Again, this is where I think NetApp shines: file services, not FC/iSCSI access. If Nimble, FC or iSCSI is their preferred method. So you may not need a new FC SAN, and can remove that cost and stick with 10GbE. Veeam supports Nimble and NetApp snapshots, if that tickles your fancy. But truth be told, as cool as that sounds, it's an incredibly complex implementation that you'll troubleshoot more than you want or think you will. This goes for all snapshot, LAN-free, SAN-based backup technologies. Man, I hate that concept these days, and the grumpy Unix admins that think they need it...
  6. Neither CDW nor SHI carries inventory. Often they are buying from a distributor, who is drop-shipping it, or it's drop-shipped from the manufacturer. If you want inventory, you want to buy from a place like Amazon, Newegg, etc. If they don't have it, then it's probably a low-volume item that is being scooped up by a large OEM like HPE, DellEMC, etc., who buy in lots of 1000s at a time. Until production ramps up, it's anyone's guess when the consumer versions will hit retailers.
  7. As someone that specializes in data protection and archive solutions for companies of all sizes: while technically feasible, this is a very bad idea, with a lot of risk in storing this data in the manner described. Tape would be an acceptable solution, as long as you write two copies of the tapes. Then in 5yrs or so, look at replacing your tape solution with a newer one. LTO is backwards compatible two versions back for reads; e.g., LTO7 can read LTO5 and write to LTO6. So you may need to upgrade to LTO7 soon to read those LTO5 tapes, then in five or so years go to LTO9 to be able to read LTO7 tapes, with a couple of tape migration projects in between. Or you have an enterprise tape library that can run multiple types of tape drives. Another solution would be to stick that data in the cloud and keep it in two sites, or with two providers, like a copy in Amazon and/or Azure. Amazon Glacier would be about $400-500/month at today's prices for 100TB, and their Snowball can help you get the data into their cloud quickly. Pricing should go down over time as scale and economics work in our favor. I suggest two vendors because who knows how this newfangled "cloud" will shake out, and who wins and loses over the next decade. If you are dead set on using hard drives, make multiple copies, and I'd probably do it across two different brands of drives. When talking 10yrs of retention, the media type is very important for compatibility's sake and ease of accessing said data. But just as important is the environment this media is stored in (humidity, temp, etc.). Best to look at an Iron Mountain or similar to store these. Which, by the way, I'm sure Iron Mountain offers some sort of storage platform and long-term archival solution too.
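For what it's worth, that Glacier estimate checks out. A quick sketch, assuming a storage rate of roughly $0.004/GB-month (the published Glacier price around the time of this post; actual rates vary by region and change over time, and this ignores retrieval and request fees):

```python
# Rough monthly storage-only cost for archiving in Amazon Glacier.
# The $0.004/GB-month rate is an assumption from the era of this post;
# check current AWS pricing before relying on it.

def glacier_monthly_cost(tb: float, rate_per_gb: float = 0.004) -> float:
    """Storage-only monthly cost in USD (excludes retrieval/requests)."""
    return tb * 1024 * rate_per_gb

cost = glacier_monthly_cost(100)  # the 100TB archive discussed above
print(f"100TB at $0.004/GB-month: ${cost:,.0f}/month")  # ~$410/month
```

That lands right at the low end of the $400-500/month range quoted, before retrieval costs, which is why keeping a second copy with another provider roughly doubles the bill but buys real insurance.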
  8. I have run into many a customer issue with VSAN ready nodes, including complete outages and data loss. Many of them were Dell, pre-EMC merger. They were too busy selling inventory to bother engineering a correct solution. EMC, on the other hand, with VxRail goes knee-deep into building a proper solution, with growth and performance factored in. I would suggest you speak to the legacy EMC folks and not the legacy Dell folks.
  9. For whatever reason, the names of the few Hyper-V-focused HCI players out there escape me. Perhaps they are not surviving? Or had to diversify to support other/more platforms? You can always do what the big boys do and just go full-on OpenStack, KVM, Docker, etc.
  10. Warning: if you move to Acropolis, your backup architecture will NEED to change significantly. Storage management tools, reporting products, etc.; many don't support Acropolis. All the wonderful things they claim can be done on Acropolis are cancelled out when nothing seems to support it. The market supports VMware first, Hyper-V second, and KVM and some of the other OpenStack players next. So regardless of choice, traditional vs. HCI... think about all the other technical and business process solutions you have. Will they need updating, retiring, changes of process, etc.? If you're going to entertain Acropolis, why not entertain VMware? I mean, there is an added cost, but that cost comes with many more features, maturity in DR/load-balancing features, and broader industry support. Not to mention a lot of HCI options there. The only benefit to Acropolis is less upfront cost for the hypervisor. Then it becomes more cost for all the other things like management, workflow, etc.
  11. Here for feedback?
  12. Personally, a Nimble array is perfect. It's performant, simple, and now an HPE company, so you can just ask them to engage the Nimble folks, since you have an existing relationship with HPE. 3PAR is way more than you need, IMO. Otherwise, Nutanix only makes sense if you want to get rid of your existing compute nodes. But at the same time, there are some Hyper-V-specific HCI players out there (I can't remember their names)... Something you may want to look at, rather than Nutanix with Hyper-V, which stacks one hypervisor on top of another. Of course there are the DellEMC Unity arrays too, which have a nice Hyper-V plugin for management and support as part of the Unity UI. And I think Veeam can use the Unity snapshots for backups too.
  13. I can't find your older DD2200 forum topic; the link from the review is dead. But now that there are flash-enabled DataDomain systems, do you think you can sweet-talk DellEMC into getting one in the labs? Specifically the DD6300 or a similar system? I would like to see the effects of flash used for metadata on Veeam restores, VDP performance, and general performance. They are claiming big restore improvements, which has always been an issue when products like Veeam and CVLT are used together with DD. Not that it was a limitation by EMC, but more so that those products do lots of random IO in the restore process, which punishes DataDomain; it doesn't do random IO well at all.
  14. ArcServe? Eww. LOL. I've yet to meet a happy ArcServe customer. But at the same time, those were all CA customers, and hopefully the new owner of ArcServe is actually doing good things with the product and not letting it languish. If you're going to test DDVE, take a look at the requirements for the various sizes it can scale to. The 96TB edition is insane in terms of HW requirements. They also have a performance tester they want you to run on the datastores at initial setup. But when you expand it, they don't require you to run it again. Seems like a missing feature...
  15. It depends. There is no right or wrong answer; there is only the right technology for your needs. In the HCI market, the leading companies are DellEMC, Nutanix, and probably SimpliVity, which is now HPE. (You can probably kiss that product goodbye now that HPE owns it and will mess it up.) Right here on StorageReview.com you can read reviews of VxRail by DellEMC. Sadly, no Nutanix review yet. There are also VSAN-ready boxes, where you can build your own HCI system using your HW vendor of choice: HPE, Lenovo, Supermicro, etc. Personally, I would bring in at least 2-3 vendors that you think fit your requirements, like Nutanix and DellEMC, and perhaps a 3rd. Listen to their pitch. Ask them for competitive info as to WHY their kit may be better than the other's. But ultimately, get two quotes and pit them against each other, so that you make out like a bandit! The market is tough and competitive; they will discount up front to win your business. Never take their first-pass price. But like all negotiations, make sure to lock in your maintenance for multiple years, as well as node adds. That may mean paying for multiple years up front to get better pricing, or getting an agreement in writing.
  16. Any word on whether they addressed battery life in cold climates? I'd read anecdotal stories that battery life dropped to weeks, or a single month, in colder/near-freezing weather. That has been the only thing holding me back from buying this, vs. running PoE in and outside of my home for different cameras and a Synology NAS (which isn't easy when you're in a 3-story home). EDIT: Poor cold-weather battery performance was reported on the first-gen Arlo cameras.
  17. My Synology NAS is claiming a SMART error on one HDD (ID 184, End-to-End Error). I literally had 36 hours before my warranty expired, and I ran Seagate's tools against it; no errors were found on their extended test. The drive is a 500GB Momentus XT. I had two lying around doing nothing, and I popped them into the extra bays in my NAS just for more capacity for videos and such. The gist of Seagate's warranty policy was that you can only RMA a drive under warranty if the tool reported an error. No error, and you would be on the hook for some costs in terms of diagnosis, drive repair, or a refurbished drive. What do you guys think: ignore it? Or "Danger, Will Robinson, eject!"? Currently I have those two drives in RAID 1 just in case, and they're just holding movies that I can always re-download.
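One likely reason the NAS warns while the vendor tool passes the drive: SMART attributes only "fail" once the normalized value drops to or below the drive's threshold, while a NAS may flag any nonzero raw error count. A minimal sketch of that distinction (the field names and sample readings here are illustrative, not from any real tool's output):

```python
# How SMART pass/fail is typically judged for an attribute like ID 184
# (End-to-End Error): the drive reports a normalized value (often starting
# near 100) and a vendor-set threshold; the attribute only formally fails
# once value <= threshold. The raw error count can be nonzero long before
# that, which is why a NAS can warn while a vendor diagnostic still passes.

from dataclasses import dataclass

@dataclass
class SmartAttribute:
    id: int
    name: str
    value: int       # current normalized value reported by the drive
    threshold: int   # vendor-set failure threshold
    raw: int         # raw event count

    def failing(self) -> bool:
        """Formal SMART failure condition for this attribute."""
        return self.value <= self.threshold

# Hypothetical readings matching the situation described above:
attr = SmartAttribute(184, "End-to-End Error", value=99, threshold=97, raw=1)
print(attr.failing())  # False: warn-worthy raw count, but not a SMART failure
```

So "ignore it" isn't crazy for disposable data in RAID 1, but a rising raw count on ID 184 is worth watching, since it indicates data-path errors between cache and media.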
  18. It could be as simple as putting a Windows Server in front of the SAN, then using Windows file shares as needed. I've got customers with up to 30TB shared to 500 users on a single file server. I scoff at that, but they do it, and it works. FreeNAS/OpenNAS and more are possible options too. Your SAN may actually have a NAS option. Give us the make/model and we can tell you more if possible.
  19. You have three options. 1. Buy a SAN that has an encryption option. All of them do this these days. 2. There used to be a market for SAN encryption gateways for an existing SAN that didn't support native encryption. This is a dead market IMO. 3. Host based encryption, software running on an OS that encrypts the data. This can cause a lot of issues with AV/backup software. Ill-advised unless you have to.
  20. Have you checked VMware's compatibility list? That is the definitive guide to all things, especially HBA compatibility. You want to pick from what's on that list.
  21. As someone in the VAR/reseller community, I have a very different viewpoint than you. FC is very much alive in midsize and enterprise accounts. SMB, maybe not, with the likes of Nimble, HP's LeftHand stuff, EMC's VNXe, etc., and other startups. Startups that were iSCSI first and only recently bolted on FC to inch into the enterprise. Think about that: they added FC after the fact to drive sales in the enterprise, which otherwise wouldn't have given them the time of day. HCI is a great thing. I've got customers that run both Nutanix and traditional infrastructure. The traditional environment costs about 5x what they spent on Nutanix, and Nutanix will never grow beyond what it is. I've also worked with customers that went all-in with Nutanix. They are uber successful at that. HCI is just an option, not a solution. You need the software/application stack to take notice and adopt. I have many niche verticals that use apps requiring physical infrastructure (healthcare), still use HW dongles for license checks (design firms), and I have customers that need high sustained single-stream reads/writes (NAS systems). The Dell/EMC merger is a good thing, IMO. It's a marrying of two different business focuses: one caters to the midrange and high end, the other to consumer, low end, and midrange. They are complementary. The EMC data protection portfolio is really 2nd to none today; that wasn't the case just a couple of years ago. Isilon is a fantastic scale-out NAS, Dell's servers are extremely affordable, and their datacenter practice is very, very good. You've also got a few very passionate leaders up top who will make waves (top down) to get their way. When I worked for an OEM software company, my team had our Lenovo laptops out in Round Rock (they were a reseller of our gear)... Bad idea when Michael Dell walks in the room. ;-) Working with vendors... the whole first-hit-free thing: that's you (or your sales rep) not informing the customer of long-term costs.
I run into this a lot, as an example, with Commvault. They make the sale, sell the software and required licensing, but dismiss or minimize the HW requirements over time. Sure looked cheap in yr1! At the end of yr3, you need $300k in new servers, SQL licenses, and storage just to maintain organic growth... CVLT did a poor job of explaining that. Same goes for EMC, HP, etc. A good reseller maps out the 1yr, 3yr, and 5yr costs. Manufacturers calculate their pricing so that you refresh the equipment after support is over. In fact, pricing is so calculated that it's often cheaper to buy new than to renew support in many cases. But new startups are bucking that trend, and the big vendors are taking notice and making that change: flat-rate annual maintenance for the life of the product. Expect to see this transition in many places over the next 3-5yrs. Back to the original discussion... ScaleIO, the free version, could be the ticket. You can even true-up to pay for support. You don't run a business without support/insurance! That's the thing about taking on a startup: is the business viable, is the technology sound, can any IT lackey just jump in and manage it? Yes, the initial acquisition costs were super low, but if ongoing care and maintenance take a lot of time, how much did you really save? How about SimpliVity? I've lost to them on cost in the past, so I have to assume they are doing well... How about some of the newer rack-mounted Thecus, Synology, and QNAP arrays? All are vCenter certified, and some support flash and iSCSI. Super affordable, and since they have little in the way of data services, not much could go wrong... Good discussion; perfect on the low end. Of course there are great turnkey low-end solutions too, if you want to talk to the bigger vendors...
  22. If the budget is 100k, have you looked at VxRail or Nutanix? Both should have entry-level configs under that price range. They both allow you to buy without VMware licensing if you already own it. (EMC used to require you to buy VMware licensing, up until recently.)
  23. This product... Is it a first of its kind? Or a first of its kind for consumers? Meaning, does this already exist in the enterprise space in other brand-name arrays to some degree? (Though I don't know of many SANs using NVMe...)
  24. Ha, this guy is like royalty in the storage world! Well, not quite... But folks like him, Chad Sakac, Steve Duplessie, and just a handful of others are really great folks to read up on and learn from! Would love to see some color and positive contributions from you here on this forum, Vaughn.