Kevin OBrien

HP ProLiant DL380p Gen8 Server Review Discussion

21 posts in this topic

As StorageReview expands our enterprise test lab, we're finding a greater need for additional latest-generation servers, not just from a storage perspective, but also for simulating a broader enterprise environment. As we test larger arrays and faster interconnects, we need platforms like the HP DL380p that can deliver the workloads these arrays and related equipment require. Additionally, as PCIe storage matures, the latest application accelerators rely on third-generation PCIe for maximum throughput. Lastly, there's a compatibility element we're adding to enterprise testing, ensuring we can provide results across a variety of compute platforms. To that end, HP has sent us their eighth-generation (Gen8) ProLiant DL380p, a mainstream 2U server that we're using in-lab for a variety of testing scenarios.

HP ProLiant DL380p Gen8 Server Review


I wish HP's online configurator were better. Honestly, it's a complete mess. They list a dozen different models; some are "configurable" and some aren't. The page that does list the configurable models gives no indication of how many drive trays are included.

I just want to know if this server is available with 25 SFF drives - there's one line on one page that makes me think it is, but I can't seem to find a configuration that actually supports that.


I have the DL360p Gen8 and the DL380p Gen8. I can't comment on the performance, but from a lab installation side, the new rail system sucks. It's horribly thought out. The way the new rails are designed, you literally need two people to install a 1U system.



The rails work well but you're right, it's very difficult to guide the server in back to front with one person. We always use two just to be sure we don't drop it. That said, those rails are still better than others we have in the lab right now.


The rails they shipped with the DL180 (G5, I think) are actually just little shelves. They install easily, but the server will fall on the floor if you unwittingly pull it out a bit too far... and this is an 80-pound server...


The best rails for me were the:

HP Standard 2U Universal Rail Kit (359254-001) - the same rails for 3 generations

HP 1U Universal Rail Kit (364691-001) - also lasted for 3 generations (actually more, because the same inner rails were used back to the G2)

What made them awesome was that you could mount them into a rack even if the rails were not extended, and they had an auto guide for insertion. It was, for all intents and purposes, a one-person job even for a 2U box, and mounting up an entire rack was very quick. The new rails, in comparison, are like the flimsy $#!^ I have seen from IBM, Dell and Sun. The only issue I had with the old rails was removal in tight places. It would require you to manually push in the rails, but if you were experienced with them you would bump out the unit below a bit to use it as a rest while removing the unit above. Honestly, the only improvement I could see for the older rails was adding a multi-stage pull-out so the rails would not fully telescope if you didn't need them to.

And this is why the new rails suck! It's a 1U box and now you need two people to mount it correctly. Each person, one on each side, must make sure it snaps into place correctly. I have seen people almost drop systems because of the new rail design.

Sorry about derailing this thread! And yes, that is a pun.



Railing against rails is derailing the thread? No.....I think we can all rally around rail reveling.


Hey Kevin,

Hopefully these will be suitable hosts to test those high-speed flash arrays with...

A few questions:

What riser configuration did you guys go with?

How many Mellanox FDR HCA's are you planning on cramming into it?

When Mellanox was out, did they bring an FDR switch (or 2) for you guys to play with?

:)

Thanks for any insights into coming configurations.



It actually surprises me a bit that the server can only hold two single-width GPUs.

The Dell R720xd can hold either four 150-watt single-width cards or two 300-watt dual-width, full-length cards.

I'm not sure if it matters in GPU terms, but being able to feed a server with four full-length SSD cards might be needed.


Hey Kevin,

Hopefully these will be suitable hosts to test those high-speed flash arrays with...

A few questions:

What riser configuration did you guys go with?

How many Mellanox FDR HCA's are you planning on cramming into it?

When Mellanox was out, did they bring an FDR switch (or 2) for you guys to play with?

:)

Thanks for any insights into coming configurations.

Both of our models include the standard riser configuration: x16 PCIe 3.0, x8 PCIe 3.0, and x8 PCIe 2.0.

For our flash array testing, we are going to be using two twin-port 56Gb/s InfiniBand NICs in each server.

Of course, they equipped our lab with the 36-port 56Gb/s InfiniBand SX6036 switch ;)
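To put some rough numbers on that setup, here's a back-of-the-envelope sketch (plain Python, theoretical line rates only, and assuming each twin-port FDR HCA sits on a PCIe 3.0 x8 host interface, which is typical for adapters of that generation):

```python
# Rough throughput ceilings for the lab setup described above:
# two twin-port 56Gb/s (FDR) InfiniBand HCAs per server.
# These are theoretical line rates, not measured results.

FDR_SIGNALING_GBPS = 56.25                      # 4 lanes x 14.0625 Gb/s
FDR_DATA_GBPS = FDR_SIGNALING_GBPS * 64 / 66    # 64b/66b encoding, ~54.5 Gb/s
PCIE3_X8_GBPS = 8.0 * 8 * 128 / 130             # ~63 Gb/s usable per x8 slot

hcas_per_server = 2
ports_per_hca = 2

per_port_gbs = FDR_DATA_GBPS / 8                                      # ~6.8 GB/s
per_hca_gbs = min(ports_per_hca * FDR_DATA_GBPS, PCIE3_X8_GBPS) / 8   # PCIe-bound
server_total_gbs = hcas_per_server * per_hca_gbs

print(f"Per FDR port:        ~{per_port_gbs:.1f} GB/s")
print(f"Per twin-port HCA:   ~{per_hca_gbs:.1f} GB/s (limited by PCIe 3.0 x8)")
print(f"Per server (2 HCAs): ~{server_total_gbs:.1f} GB/s theoretical ceiling")
```

In other words, a single twin-port card runs into its PCIe 3.0 x8 host interface before it can saturate both FDR links, which is part of why the Gen3 slots on these servers matter for this kind of array testing.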



B)

Thanks for the update...

That rack is looking full now... :)

What ya got hiding in the 4U iStar on top?

P.S. Looking for a buyer for the SX1036?

Thanks again Kevin. Very much looking forward to the upcoming articles...




Hopefully pretty soon we'll be splitting our lab into two sections with a new rack, with some of the contained tests on one rack and the multi-server stuff on the other. Funny you mention the 4U up top... We are thinking about using that case for our domain server, although we would really just prefer a 1U for that purpose.

Now about the SX1036, it's in use and we never sell review or testing equipment. We will sometimes give the stuff away in the forums, but selling any of this stuff would have some serious ethics issues to say the least. Now that said, we could definitely get you in touch with the right people if you are in the market for one :-)



Thanks Kevin,

Cheers!


It actually surprises me a bit that the server can only hold two single-width GPUs.

The Dell R720xd can hold either four 150-watt single-width cards or two 300-watt dual-width, full-length cards.

I'm not sure if it matters in GPU terms, but being able to feed a server with four full-length SSD cards might be needed.

There's an alternate riser card available with 2x PCIe 3.0 x16 (x16 electrical as well as physical) connectors on each riser, but one of those only supports half-length cards due to the physical space inside the server. You can have one of those risers per CPU installed, for up to 4 cards with 2 CPUs. As for 150W options, the QuickSpecs note:

NOTE: All slots support up to 150w PCIe cards, but an additional Power Cord Option is required (669777-B21).

So it can do 4x 150W PCIe x16, but if you want them all to be full length, look at the ML350p Gen8, which has 4 full-length, full-height PCIe 3.0 x16 connectors, one of which is only x8 electrical. It's also 5U (rack or tower), however, and takes up to 18 LFF or 24 SFF drives. It should be comparable in price and other specs to the DL380p Gen8.

On a separate note, the DL100 series are definitely HP's lower-end server offerings: fewer management features, not as tool-less, fewer hot-swappable components, lower-end drive controllers, and of course the shelves instead of extendable rails. They still have their place, but for the difference in price, I think the DL300 series is a worthwhile investment (over the DL100 series; I'm not comparing with other brands here) for any organisation big enough to devote a significant amount of employee time to managing physical servers. That's just my personal opinion; I know I don't like working with them and have tried to persuade my employer to avoid buying them in future. We're more or less standardised on DL380s (with DL585s for virtualisation hosts) now. It simplifies everything if the majority of your server estate (we're into 3 figures) is based on one or two models. Of course, in terms of server OS installs, the majority of our estate is virtual, which is even better... but it all has to run on something.


First off, full disclosure, I work for HP as a PreSales Architect...

Feel free to ping me with any questions on the ProLiants, especially blades.

Both of our models include the standard riser configuration: x16 PCIe 3.0, x8 PCIe 3.0, and x8 PCIe 2.0.

The stock first CPU riser is 16@3.0, 8@3.0, 8@2.0 because the 8@2.0 comes off the C600 PCH chip and not CPU1 directly.

The P400i and the FlexLOM slot both get 8@3.0, so you end up using 16 from the board, 16 from the first slot, and 8 from the second slot, which exhausts the 40 PCIe 3.0 lanes provided by the E5-2600s.

However, on the second CPU riser (optional), the default would be 16/8/8, all @ 3.0, because we now have another 40 lanes to play with.

As was mentioned, we have other risers if you need more x16 slots instead.
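To make that lane accounting easier to follow, here's a quick tally as a sketch; the per-slot lane counts are taken from the description above rather than from HP's QuickSpecs:

```python
# PCIe 3.0 lane budget on the DL380p Gen8, per the allocation described above.
# Each E5-2600 CPU provides 40 PCIe 3.0 lanes.

LANES_PER_E5_2600 = 40

cpu1_consumers = {
    "P400i (onboard RAID)": 8,          # 8 lanes @ 3.0
    "FlexLOM slot": 8,                  # 8 lanes @ 3.0
    "riser slot 1 (x16 @ 3.0)": 16,
    "riser slot 2 (x8 @ 3.0)": 8,
    # riser slot 3 (x8 @ 2.0) hangs off the C600 PCH, so it costs no CPU1 lanes
}

cpu2_consumers = {
    "second riser slot 1 (x16 @ 3.0)": 16,
    "second riser slot 2 (x8 @ 3.0)": 8,
    "second riser slot 3 (x8 @ 3.0)": 8,
}

for cpu, consumers in (("CPU1", cpu1_consumers), ("CPU2", cpu2_consumers)):
    used = sum(consumers.values())
    print(f"{cpu}: {used}/{LANES_PER_E5_2600} lanes used, {LANES_PER_E5_2600 - used} spare")

# CPU1: 40/40 lanes used, 0 spare
# CPU2: 32/40 lanes used, 8 spare
```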

On a separate note, the DL100 series are definitely HP's lower-end server offerings: fewer management features, not as tool-less, fewer hot-swappable components, lower-end drive controllers, and of course the shelves instead of extendable rails.

The DL100 series is gone from the traditional enterprise offerings with Gen8. It is now going to be solely a model used for what we call HyperScale-type builds, where you need the absolute cheapest price per node. The budget models will be the DL3xxe models with E5-2400 procs.

But this doesn't mean they skimped on the mgmt features.

ALL servers in Gen8 get iLO 4 (LO100i is GONE), and we have also standardized our BIOS across all models (some low-end models before used a 3rd-party BIOS).

The E models and the limited DL100 models that will come out will use onboard RAID provided by the Intel chipset but with the HP look and feel for config.

Lastly, and I don't know how to go back a page and quote it from earlier, there was a question about the 25-drive model.

The 25-drive DL380p was not released at the same time as the 8+8 design. It's been out a while now, but when this thread started it may not yet have been released.

There is also a 27-drive DL380e, which has 2 rear-facing drive bays plus 25 drives up front.

2 things to be aware of with the drive cages on the 380p:

1) The 8+8 config currently requires 2 controllers. There is basically no SAS expander card offered like there was before. The 25-drive cage has the expander built into it, but it is factory-orderable only, so you cannot easily convert an 8 to a 25. So if you get the optional +8 SFF cage, you will also need another P420 or P822 card to power it, and it will have to be a separate array and logical volume (there's a quick sketch of this after these two points). The ML350p has both standard cages and expander-enabled cages, allowing it to go from 8 to 16 to 24 drives all from a single controller, as long as cages 2 and 3 are both expander models.

2) The 25-drive DL380p is limited to 115W or lower processors. There was concern that 130W procs would not get the airflow they needed with the front so full of drives. So, to play it safe (the longer I work here, the more I see that's our standard), you are factory-limited to 115W procs. Could you put a 130W proc in later? Absolutely. Will that be supported? Probably not. I put in a request to the Product Marketing Manager to see if we could do a 25-drive model limited to 20 drives to increase airflow and then support the 130s, but as I was the only person asking, they said that would end up on the back burner as far as our certification and test teams were concerned.
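For anyone speccing one of these out, the cage/controller rule from point 1 boils down to the little sketch below (a hypothetical helper; the rules come from the post above, not from official HP configuration data, and the ML350p expander-cage case is deliberately left out):

```python
# Rough planning helper for DL380p Gen8 SFF drive cages, per the rules above:
#  - each 8-SFF cage has no expander, so it needs its own Smart Array controller
#  - the 25-SFF cage has a built-in expander and runs from a single controller,
#    but is factory-orderable only (you cannot convert an 8+8 build to 25 later)

def dl380p_controllers_needed(sff8_cages: int, has_25_cage: bool) -> int:
    if has_25_cage:
        if sff8_cages:
            raise ValueError("The 25-SFF cage replaces the 8-SFF cages; pick one layout")
        return 1                  # built-in expander, single controller
    return sff8_cages             # one controller (e.g. P420/P822) per 8-SFF cage

print(dl380p_controllers_needed(sff8_cages=1, has_25_cage=False))  # 1 - stock 8 SFF
print(dl380p_controllers_needed(sff8_cages=2, has_25_cage=False))  # 2 - the 8+8 config
print(dl380p_controllers_needed(sff8_cages=0, has_25_cage=True))   # 1 - 25-drive model
```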

HTH,

Casper42


Thanks for the input! That makes a lot of sense with the PCIe layout... I almost forgot about the requirements of the P400i card and LAN card.

