All Activity

  1. Past hour
  2. Today
  3. Yesterday
  4. Last week
  5. I got a new 860 EVO and it worked for a day; then I switched motherboards and it stopped working. It would show up in the BIOS for the first couple of boots but wouldn't show up in Windows, and now it has completely stopped showing up in the BIOS. I have an ASUS Z390-E Gaming motherboard with an M.2 SSD and an HDD that are currently working, but the new third drive isn't. I've updated the BIOS to the newest version and have enabled CSM. Please help, and thank you. My specs, if helpful:

    CPU: i5 9600K
    Motherboard: ASUS Z390-E Gaming
    RAM: Corsair Vengeance 16GB (8GB x2)
    SSD: M.2 Intel 600p 256GB; 860 EVO 1TB (the one not working)
    HDD: WD Caviar Black 1TB
    GPU: Strix 2080 Ti
    PSU: EVGA GQ 750W Gold+
    Chassis: NZXT S340 Elite
    OS: Windows 10 Pro 64-bit
    BIOS version: 1305
  6. continuum

    Transfer car HDD

    Nope. From memory, both Honda and Nissan have used hard disk based ICE in their vehicles much more recently than that! D:
  7. continuum

    Hard drive errors, time to get a new one?

    Run the drive manufacturer's diagnostic utility. If the drive passes both the long and short tests, don't worry about it. If it fails, definitely get a new drive. Raw SMART values are notoriously difficult to interpret, since manufacturers differ on what exactly in those values constitutes failed vs. failing vs. healthy. You can break them down if you want (I know several people on other forums who do so with skill), but running the manufacturer's test is by far the easiest way to get a reasonable answer.
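
    For anyone who wants to script that check rather than click through a vendor tool, here is a minimal sketch using smartctl from smartmontools as a vendor-neutral stand-in (the device path is a placeholder, and it typically needs to run as root/administrator):

```python
import subprocess
import time

DEVICE = "/dev/sda"  # placeholder -- point this at the drive being tested

def smartctl(*args):
    """Run smartctl (from smartmontools) and return its text output."""
    result = subprocess.run(["smartctl", *args, DEVICE],
                            capture_output=True, text=True)
    return result.stdout

print(smartctl("-H"))              # overall SMART health assessment (PASSED/FAILED)
print(smartctl("-t", "short"))     # kick off the short self-test in the background
time.sleep(180)                    # short tests usually finish within a few minutes
print(smartctl("-l", "selftest"))  # self-test log, including the result of the test above
```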
  8. Kixs97, did you try to create _one_ large partition bigger than 2TB on the drive (GPT-formatted disk), like the pic I posted above? That's the real test to see whether the full size is supported.
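
    On Windows that test can also be scripted. A rough sketch that drives diskpart from Python (the disk number is a placeholder, it needs an elevated prompt, and it erases everything on the selected disk, so triple-check the number first):

```python
import os
import subprocess
import tempfile

DISK_NUMBER = 2  # placeholder -- the disk under test; ALL DATA ON IT WILL BE ERASED

# diskpart script: wipe the disk, convert it to GPT, and create a single
# partition spanning the whole drive, which is the >2TB test described above.
script = f"""select disk {DISK_NUMBER}
clean
convert gpt
create partition primary
format fs=ntfs quick
assign
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

# diskpart /s runs the commands from the script file; run from an admin prompt.
subprocess.run(["diskpart", "/s", script_path], check=True)
os.remove(script_path)
```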
  9. continuum

    onboard raid controller giving me grief

    Is the destination client under #1 a different system from the ones in #2, #3, and #4? If so, it sounds like something is weird with client #1.
  10. Kevin OBrien

    onboard raid controller giving me grief

    I don't know that I'd ever recommend anyone use RAID 0, especially for network shares. In your setup, are you using BIOS-controlled software RAID for all of this, or Windows Server software RAID?
  11. Brian

    News comments now link to reddit?

    The forums just haven't been getting much engagement; we get more engagement on Reddit, so we're experimenting there. We've revamped the podcast... we're doing all sorts of things to see if we can figure out a better way to engage with our audience.
  12. LOST6200

    News comments now link to reddit?

    Yes, I noticed that as well. The username and password didn't transfer to the new website. :( It seems like the end of SR 2.0.
  13. I see there have been no forum comment threads for news posts since Nov 19. When I followed a comment link, it took me to Reddit, which I've never signed up with. Is SR discontinuing the forum here? I've noticed a lot of spam lately, even after I reported one post; it was still up days later.

    I actually wanted to comment on the Seagate dual-actuator story. They talk about HAMR and release it to trusted partners for testing ... and we hear nothing more. No public release. They talk about dual-actuator, now released to trusted partners for testing ... and no word on public availability. The future seems to be putting up a fight about becoming the present. Are the HAMR drives experiencing issues in testing? That would explain why they haven't shipped yet.
  14. Earlier
  15. I have had stability issues with the following system in the past. This improved when Gigabyte released a new BIOS. Since then I have had very occasional reboots which left the machine stuck at the "boot failure detected" screen; when I investigated, the issue pointed to a problem with the graphics driver.

    System: Windows Server 2019, Gigabyte Z170M D3H (F22f BIOS), 8GB Corsair 3000MHz XMP, Intel 6100T, Antec EarthWatts 380W PSU, drive config below.

    Moving forward, I have now found a repeatable task that causes the system to reboot every time:
    1) If I copy a large file (78GB, for instance) from the Server 2019 desktop, pushing the file to a network share from the RAID 0 I have on 2x Hitachi 4TB NAS drives (which also have a RAID 1 set up on them): REBOOT.
    2) If I pull the same file from that array using another Windows 10 machine, it transfers fine.
    3) If I transfer the same file to the other RAID 0 (2x 120GB SSDs) I have on the same server, it works fine.
    4) If I then send that file from the SSD RAID 0 to a network share, it works fine.
    It only reboots when I push the file from the RAID 0 on the 2x NAS drives.

    The drive config is as follows: Server 2019 on 2x 128GB Crucial M4-CT128M4SSD2 drives in RAID 0; data drives on 2x 4TB Hitachi HGST HDN724040ALE640 drives, configured as a 3.6TB RAID 0 and a 1.8TB RAID 1.

    I changed the RAM to some Kingston 2400MHz RAM and it seemed to run fine and copy without issue, so I thought it must be faulty RAM. I got the RAM replaced and am getting the same issue. Any ideas would be appreciated. Some theories I have: maybe the CPU can't run the RAM at 3000MHz since it's a 6100T, or a faulty CPU/motherboard?
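
    To narrow that down, the failing step (#1) can be reproduced from a script that logs progress as it copies, so that after a reboot the log shows roughly how far the transfer got. A rough sketch (both paths are placeholders):

```python
import os
import time

SOURCE = r"D:\test\bigfile.bin"         # placeholder: the ~78GB file on the HDD RAID 0
DEST   = r"\\client\share\bigfile.bin"  # placeholder: the network share being pushed to
CHUNK  = 64 * 1024 * 1024               # copy in 64MiB chunks so progress is visible

with open(SOURCE, "rb") as src, open(DEST, "wb") as dst, open("copy.log", "a") as log:
    copied = 0
    while True:
        data = src.read(CHUNK)
        if not data:
            break
        dst.write(data)
        copied += len(data)
        # Flush the log after every chunk; after an unexpected reboot the last
        # line shows roughly how far the transfer got before the machine died.
        log.write(f"{time.strftime('%H:%M:%S')} {copied} bytes written\n")
        log.flush()
        os.fsync(log.fileno())

print("copy completed without a reboot")
```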
  16. linuxbuild, you do realize this thread is from 2014? It's doubtful any drive mentioned above is still manufactured today. Your link post in the other (current) thread suffices.
  17. reader50

    Hard drive errors, time to get a new one?

    You can use Wikipedia's key to SMART values.
    1) Read Error Rate = a vendor-specific value. Desired to be "low", but that's all we know; interpreting it varies by manufacturer.
    7) Seek Error Rate = another vendor-specific value. We don't even know whether high or low values are desired.
    184) End-to-End Error = desired to be low. This attribute may predict drive failure.
    The last one is the only one I'd worry about. On top of that, you're getting recurring disk errors. I'd replace the drive. You're backed up, right? In case the drive fails unexpectedly?
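
    For anyone who wants to keep an eye on just those three attributes, here is a small sketch that pulls them out of smartctl -A output (smartmontools assumed installed; the device path is a placeholder):

```python
import subprocess

DEVICE = "/dev/sda"  # placeholder -- the drive to inspect
WATCHED = {1: "Raw_Read_Error_Rate", 7: "Seek_Error_Rate", 184: "End-to-End_Error"}

# smartctl -A prints the attribute table: ID, name, flag, value, worst, thresh, ..., raw.
out = subprocess.run(["smartctl", "-A", DEVICE], capture_output=True, text=True).stdout

for line in out.splitlines():
    parts = line.split()
    if parts and parts[0].isdigit() and int(parts[0]) in WATCHED:
        attr_id = int(parts[0])
        value, worst, thresh, raw = parts[3], parts[4], parts[5], parts[-1]
        print(f"{WATCHED[attr_id]}: value={value} worst={worst} thresh={thresh} raw={raw}")
        # A non-zero raw End-to-End error count is the one worth treating as a warning.
        if attr_id == 184 and raw.isdigit() and int(raw) > 0:
            print("  -> non-zero End-to-End errors reported")
```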
  18. I would personally stay away from Seagate. I posted on this very forum about a recent issue with three failing hard drives in a row. If you do a little research, it seems that Seagate drives do indeed have the highest failure rate.
  19. linuxbuild

    New BackBlaze Stats

    There is also community-collected reliability data for desktop drives: https://github.com/linuxhw/SMART
  20. Usually it depends on the particular drive model. There is a project that estimates the reliability of desktop hard drives: https://github.com/linuxhw/SMART You can search for the most reliable model or vendor in the list.
  21. Perseus Legend

    Hard drive errors, time to get a new one?

    no one?
  22. Hi there, noob question: when you get errors like this, does that mean it is time to get a new hard drive?
    Raw Read Error Rate: 81/44, Worst: 78
    Seek Error Rate: 60/45, Worst: 60
    End to End Error Detection Count: 100/99, Worst: 100
    For the record, the data comes from a S.M.A.R.T. test I did following some issues with the internal hard drive. Randomly, folders disappeared. After a check disk I got the folders back, but then it happened again. Luckily, with the check disk I was able to recover the folders.
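
    Reading those as normalized value / vendor threshold pairs, which is how most SMART tools display them, none of the three has dropped to its threshold: an attribute is only flagged as failing when the current (or worst) value falls to or below the threshold. A quick check of the numbers quoted above:

```python
# Each SMART attribute reports a normalized current value, a worst-ever value,
# and a vendor threshold; it counts as failing only at or below the threshold.
attributes = {
    "Raw Read Error Rate":        {"value": 81,  "worst": 78,  "thresh": 44},
    "Seek Error Rate":            {"value": 60,  "worst": 60,  "thresh": 45},
    "End to End Error Detection": {"value": 100, "worst": 100, "thresh": 99},
}

for name, a in attributes.items():
    failing = a["value"] <= a["thresh"] or a["worst"] <= a["thresh"]
    print(f"{name}: value {a['value']} vs. threshold {a['thresh']} -> "
          f"{'FAILING' if failing else 'above threshold'}")
```

    Note that this only says the drive hasn't tripped its own thresholds; the disappearing folders are a separate reason to be cautious, as the replies above point out.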
  23. Diamanti's previous platforms have been in the form of on-premises hyper-converged infrastructure (HCI) appliances that provide both the HW and SW needed to support containerized applications. With Spektra, Diamanti aims to combine their hardware-accelerated x86 platform with cloud-based infrastructure to provide Kubernetes-as-a-Service, along with built-in disaster recovery of workloads and application data. While Diamanti claims that they will be able to provide a single control plane spanning both their on-premises hardware and public clouds, they have provided no details on what that might look like or how it would work. Someone seems to believe their solution is worthwhile though. On November 7th the company announced the close of a $35 million Series C funding round. Diamanti Announces Hybrid Cloud Kubernetes Platform
  24. Dell Technologies is expanding its portfolio of Dell EMC Ready Solutions for HPC Storage with two new, turnkey solutions. The first solution is for ThinkParQ's BeeGFS. ThinkParQ's software-defined parallel file system speeds up I/O-intensive workloads and can scale from small clusters to enterprise-class systems on-premises or in the cloud. The second solution is for ArcaStream's PixStor file system. This solution offers a high-performance parallel file system, enabling data management at scale with the ability to perform archiving and analytics in place. Both of these new solutions are immediately available. At the same time, Dell EMC is also releasing a 400GbE networking switch called the PowerSwitch Z9332F-ON. Dell EMC Server Announcements from SC19
  25. The reference design platform includes both hardware and software building blocks and was designed in response to the HPC community’s need for a more diverse range of CPU architectures. As such, it gives supercomputing centers, hyperscale-cloud operators and enterprises the ability to combine NVIDIA’s accelerated computing platform with the latest Arm-based server platforms. NVIDIA Announces New Reference Design Platform
  26. NVIDIA partnered with industry leaders in networking and storage to develop Magnum IO, including DataDirect Networks, Excelero, IBM, Mellanox and WekaIO. This software suite release is also highlighted by GPUDirect Storage, which gives researchers the ability to bypass CPUs when accessing storage for quick access to data files in applications such as simulation, analysis or visualization. NVIDIA Magnum IO Software Suite Now Available
  27. To build this new scalable GPU-accelerated supercomputer, Microsoft and NVIDIA engineers used 64 NDv2 instances on a pre-release version of the cluster to train BERT in approximately three hours. This was possible due to the multi-GPU optimizations provided by NCCL, an NVIDIA CUDA X library, and high-speed Mellanox interconnects. NVIDIA Announces Scalable GPU-Accelerated Supercomputer in the Microsoft Azure Cloud
  28. There have been many announcements around AMD EPYC Rome CPUs today, and now TYAN has released advancements in its Transport HX and SX product lines. For the HX line, TYAN rolled out three new platforms powered by AMD. The Transport HX TN83-B8251 is a 2U server platform that supports up to eight 3.5" hot-swap SATA or NVMe U.2 tool-less drive bays and is ideal for AI training and inference applications, deploying four double-width or eight single-width GPU cards plus two PCIe 4.0 x16 high-speed networking cards. The Transport HX TS75-B8252 and Transport HX TS75A-B8252 are 2U server platforms with support for 32 DIMM slots and up to 9 PCIe 4.0 slots. These server platforms are ideal for HPC and virtualization. TYAN Launches AMD EPYC HPC & Storage Servers At SC19
  29. Since the release of AMD EPYC Rome CPUs, the industry has seen pretty good adoption. The new CPUs came out, shattered world records, and can deliver better performance in a single socket than the competition's dual-socket setups. A big advantage of AMD EPYC Rome is that it allows servers to leverage PCIe 4.0 devices. In a world that is leveraging GPUs more and more, this is a huge leg up over the competition. With these advantages, AMD is pushing its way into the HPC/supercomputing market, where the above will be leveraged quickly. AMD Makes Several Announcements At SC19
  30. GIGABYTE has offered several GPU-dense servers in its G-Series server family. Today it is expanding this with seven new servers, including five more 2nd Generation AMD EPYC servers (bringing its AMD total to 28 servers). The newly announced servers come with the GIGABYTE Management Console as standard, which is based on an AMI MegaRAC SP-X web-browser-based platform and is compliant with the latest Redfish API standards. Also available as a free download is GIGABYTE Server Management (GSM), GIGABYTE's multi-server remote management software platform, which includes both desktop and mobile apps. GIGABYTE Releases 7 GPU Servers