MartinP

Member
  • Content Count: 134
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About MartinP
  • Rank: Member
  1. MartinP

    Archiving Audio

    Sorry, completely disagree with this. A decent record deck + cartridge can have usable output (with the right record) up to 40 or even 50 kHz, which is appropriately captured by a 96 kHz sampling rate. It's not really important, but this is *bandwidth*, not dynamic range. Dynamic range is captured by the bit depth. 24 bits will ensure that everything on the record is captured. It won't be fully used (nothing in nature has 140dB+ of dynamic range), but it guarantees that nothing on the record is lost. 16 bits is not enough, especially because peak levels can vary dramatically from one record to the next, and PCM audio distorts hugely as soon as you go even slightly over the maximum level. Because of this you need to allow a fairly large amount of headroom, which pushes the quiet parts of the signal down below the minimum that 16 bits can capture.

    BTW, the human ear is fantastically good at extracting usable signal from a sea of noise. I've seen it stated that it can listen down into the noise floor by up to 20dB, basically by differentiating between the repetitive noise and the constantly changing music signal. It's the same feat as making out what a colleague is saying in a factory full of machine noise.

    The quality of the audio equipment and the expertise of the person performing the transcription will massively affect the quality of the resulting digital recordings. There may also be a temptation to "improve" the signal by removing clicks, rumble, surface noise, etc with digital post-processing. If you want to go down this route I would strongly suggest that you also keep the unprocessed original transcription, as future technology may be able to do a much better job and recover even more detail than can be achieved now.

    I hope I've understood your postings to say that you will be retaining / protecting the original media, as these are much more likely to survive over the decades (and even centuries) than your digital copies. At least the vinyl might, if it is stored correctly (and strangely the audiophile community reckons that vinyl needs to be played occasionally to stop surface noise from accumulating).

    We have entered an era where we are losing our history. How many people still have the equipment to read MFM hard drives, 5.25" floppies, ZIP discs, QIC, 8mm or Travan tapes? Only museums. As soon as someone loses the will to keep copying their data onto the latest technology, it will rapidly become unreadable and inaccessible. I know it's hardly authoritative, but I am currently reading an SF novel ("Glasshouse") where one of the major themes is an almost complete ignorance of 100+ years of history, starting in the last part of the 20th century. Before then, stuff was printed on paper, which survived and was not obscured by DRM. After the end of this period, society had learned to retain and archive its digital history.
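
    To put rough numbers on the bit-depth point, here is a minimal Python sketch using the usual dynamic-range approximation (about 6.02 x bits + 1.76 dB). The 12 dB of headroom and the 70 dB record signal-to-noise figure are assumed values for illustration only, not measurements.

        # Rough dynamic-range arithmetic for vinyl transcription (illustrative figures only).

        def pcm_dynamic_range_db(bits: int) -> float:
            """Theoretical dynamic range of ideal PCM quantisation, in dB."""
            return 6.02 * bits + 1.76

        HEADROOM_DB = 12.0          # assumed headroom kept for unexpectedly hot peaks
        VINYL_SNR_DB = 70.0         # assumed SNR of a good pressing
        EAR_BELOW_FLOOR_DB = 20.0   # the ear can reportedly hear ~20 dB into the noise

        for bits in (16, 24):
            total = pcm_dynamic_range_db(bits)
            usable = total - HEADROOM_DB
            needed = VINYL_SNR_DB + EAR_BELOW_FLOOR_DB
            verdict = "plenty" if usable >= needed else "marginal"
            print(f"{bits}-bit: {total:.0f} dB total, {usable:.0f} dB after headroom, "
                  f"~{needed:.0f} dB wanted -> {verdict}")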
  2. MartinP

    Scratch Disk Raid Configuration

    If you need a RAID, I would use 2 discs as a RAID-1 for your O/S & apps. Use the other two discs as independent drives: either put the swap on one and the Photoshop scratch file(s) on the other, or create two separate PS scratch files, one on each disc.

    If you have 3-4GB RAM, do you need a Windows swap file at all? I think 32-bit XP can only make use of about 3GB of RAM. I guess you have PS configured to use 90% of available RAM, so that's why it swaps at 2.7GB. Under those circumstances PS is probably starving the rest of Windows / other apps of memory, so PS is doing heavy traffic to its scratch files at the same time that Windows is hammering the swapfile. So try disabling the swapfile and setting PS to, say, 75%. With a couple of dedicated scratch discs that might give the best performance. cheers, Martin
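
    A quick Python sketch of the arithmetic behind the 75% suggestion. The 3GB usable figure and the 2.7GB observation come from the post above; everything else is just illustration.

        # How much RAM is left for Windows and other apps at different Photoshop settings.
        USABLE_RAM_GB = 3.0   # roughly what 32-bit XP can actually address

        for ps_fraction in (0.90, 0.75):
            ps_gb = USABLE_RAM_GB * ps_fraction
            rest_gb = USABLE_RAM_GB - ps_gb
            print(f"PS at {ps_fraction:.0%}: PS gets ~{ps_gb:.2f} GB, "
                  f"everything else squeezes into ~{rest_gb:.2f} GB")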
  3. I guess that would be +/- 2.26% cheers, Martin
  4. MartinP

    PSU and sound quality

    The SR-80 is a lot better, and not a lot more money. I have SR-125's and they are great. I may even upgrade to 325's in the future, for the benefit of my MP3 player. My local hifi dealer (who *really* knows his stuff) reckons nothing comes close to the Grado range. You can trust every model to be an upgrade on the next lower model in the range, and that's pretty rare, I can tell you. cheers, Martin
  5. MartinP

    CPU Cooling, weird cooling dilemma

    That sounds very neat, but have you tried that with Arctic Silver 5? That stuff is much more viscous. cheers, Martin
  6. When I made the original point, I was really thinking of multiple threads, each with one I/O outstanding. cheers, Martin
  7. Nope, sorry, don't see that at all. Presume that a thread is doing serial I/O to a non-fragmented file. It issues (for example) 16 non-dependent I/Os: current-block + 1 through current-block + 16. The disk sees 16 requests which "happen" to be for contiguous blocks. How does it handle them? It simply sees 16 I/Os that can be satisfied by contiguous reads after a single head move. The 15 later I/Os will be queued into the buffer, then returned to the app immediately after the first. The requesting thread might see several times the throughput, compared to dependent I/Os to a non-CQ HD [in a very busy system]. Thus the non-CQ reference system may have added considerable delays (in the trace) if other threads had also issued interleaved I/Os. This is a scenario where a real-world app could complete multiple I/Os on a CQ drive in a time frame where the benchmark would instead insert significant delays. cheers, Martin
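
     As a minimal sketch of the difference between dependent and non-dependent issue from the application's side (Python, POSIX-only because of os.pread; the file name and 64 KiB block size are placeholders, and the thread pool merely stands in for whatever async I/O mechanism the app really uses):

         # Issue 16 reads of consecutive blocks either one-at-a-time (dependent) or
         # all at once (non-dependent), so a queueing drive sees 16 contiguous requests.
         import os
         from concurrent.futures import ThreadPoolExecutor

         BLOCK = 64 * 1024

         def read_block(fd: int, block_no: int) -> bytes:
             # Positional read: no shared file offset, safe to run concurrently.
             return os.pread(fd, BLOCK, block_no * BLOCK)

         fd = os.open("scratch.bin", os.O_RDONLY)

         # Dependent: each request is only issued after the previous one has completed.
         dependent = [read_block(fd, n) for n in range(1, 17)]

         # Non-dependent: all 16 requests are outstanding at once; a CQ/NCQ drive can
         # satisfy the whole queue with contiguous reads after a single head move.
         with ThreadPoolExecutor(max_workers=16) as pool:
             overlapped = list(pool.map(lambda n: read_block(fd, n), range(1, 17)))

         os.close(fd)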
  8. I didn't know this was possible. I always assumed that a thread would issue a disk request, then wait for it to be satisfied. This is why I suggested that the benchmark should record & replay the activity on a per-thread basis.

     However, I don't see how the benchmark could decide when to launch the various threads in order to give meaningful results. The benchmark is a trace of 30 mins of activity on Eugene's desktop machine. If a new thread is launched 20 mins into the test because Eugene started some new application or activity, then the benchmark would need to launch the test thread at that "appropriate" time in the test (but compared to what?). If a new thread is launched 17 mins into the test because some previous activity has come to an end, then that thread needs to be launched immediately after the thread it depends on completes. There is no way to extract this information from the existing trace, nor even (I think) to record it in a new trace.

     The only way I can see to do this is to run the test for the full 30 minutes, launching each thread at the same time that it was originally launched when the trace was captured. One could then measure how quickly each thread completes, compared to the time it took in the trace. The statistic would then be based on how quickly the various threads completed (within the overall 30 minutes), rather than on how quickly the whole trace can be replayed. cheers, Martin
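
     A minimal sketch of that per-thread replay idea (Python). The trace format, the do_io() placeholder and the scoring are all my own assumptions for illustration; this is not how the SR benchmark actually works.

         # Per-thread trace replay: each traced thread is launched at its original start
         # offset, replays its own I/Os back-to-back, and is scored on completion time.
         import threading
         import time

         def do_io(op):
             """Placeholder: issue one traced I/O and wait for it to complete."""
             pass

         def replay_one(ops, start_offset, t0, results, name):
             # Launch this thread at the same offset it started at in the original trace.
             time.sleep(max(0.0, start_offset - (time.monotonic() - t0)))
             began = time.monotonic()
             for op in ops:               # within a thread, each I/O waits for the last
                 do_io(op)
             results[name] = time.monotonic() - began

         def replay(traces):
             """traces: {name: (start_offset_seconds, [ops...])}, captured per thread."""
             t0, results, workers = time.monotonic(), {}, []
             for name, (start_offset, ops) in traces.items():
                 t = threading.Thread(target=replay_one,
                                      args=(ops, start_offset, t0, results, name))
                 workers.append(t)
                 t.start()
             for t in workers:
                 t.join()
             return results   # per-thread elapsed times, to compare against the trace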
  9. "But in 'the real world', if a response arrives earlier, a dependent request may also be sent earlier."

     Olaf, thanks for picking up on that. I tried to get this point across on an earlier thread, without success. With CQ, threads that access data from the cache should be able to request their next access much sooner (and that access may in turn also be satisfied from the cache). The benchmark inserts artificial delays on such a "thread", reducing the overall score. Desktop-orientated drives do read-aheads for exactly this scenario. It is even conceivable that a drive may use a different read-ahead pattern for "throttled sequential access" (eg DVD writing) than for "full-speed sequential access" (copying a big file to another disc). No wonder CQ drives suffer. cheers, Martin
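
     A toy Python illustration of those artificial delays. All of the timings are invented for the example; none are measured from a real drive or from the benchmark.

         # A chain of 8 dependent reads where the drive's read-ahead means every second
         # request is served from cache. Times are invented, in milliseconds.
         DISK_MS, CACHE_MS = 8.0, 0.5
         service = [DISK_MS if i % 2 == 0 else CACHE_MS for i in range(8)]

         # Real world: the next dependent request goes out as soon as the previous completes.
         completion_driven = sum(service)

         # A replay that preserves the gaps recorded on a slower, non-caching drive waits
         # the full recorded interval before issuing each request, hiding the cache hits.
         RECORDED_GAP_MS = 8.0
         recorded_gap_replay = RECORDED_GAP_MS * len(service)

         print(f"completion-driven issue: {completion_driven:.1f} ms")    # 34.0 ms
         print(f"recorded-gap replay:     {recorded_gap_replay:.1f} ms")  # 64.0 ms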
  10. "That's correct, request order and interarrival times are properly preserved."

      Per thread/task? cheers, Martin
  11. I presume that the new benchmarks still enforce the same order of submission of I/Os to the drives, even when CQ is enabled? cheers, Martin
  12. MartinP

    Tcq, Raid, Scsi, And Sata

    No - I still believe that I have failed to get my point across, and you prove it with the statement that:- (Of course) Which is where we start to disagree. When you say that they are issued in sequence, you are simply confirming my point. There are five I/Os in the queue, presumably from five different applications or threads. The benchmark must watch which I/O of those five completes AND THEN ISSUE THE NEXT I/O FROM THAT THREAD, not whichever one it happened to record next when it captured the trace. NCQ offers the possibility that some threads will run faster than others. The current benchmark forces every thread to run as slowly as the slowest one, because it cannot issue I/Os into the queue in a different order from the one in which they were captured. cheers, Martin
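
    A minimal sketch of the two issue policies being argued about (Python). The five threads and the I/O labels are made up, and this only illustrates the dispatch policy, not how the SR benchmark is implemented.

        # Two ways to refill a queue slot when an outstanding I/O completes.
        from collections import deque

        # Five traced threads, each with its own ordered list of I/Os (made-up labels).
        per_thread = {t: deque(f"T{t}-io{i}" for i in range(3)) for t in range(5)}
        recorded_order = deque(f"T{t}-io{i}" for i in range(3) for t in range(5))

        def next_in_recorded_order(completed_thread):
            # Current benchmark behaviour: issue whatever the trace recorded next,
            # regardless of which thread's I/O just completed.
            return recorded_order.popleft() if recorded_order else None

        def next_from_completing_thread(completed_thread):
            # Proposed behaviour: refill the slot with the next I/O belonging to the
            # thread whose request just finished, so fast threads can run ahead.
            q = per_thread[completed_thread]
            return q.popleft() if q else None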
  13. Did your card come with the PAM utility on a CD? That should do it. cheers, Martin
  14. MartinP

    Tcq, Raid, Scsi, And Sata

    Thank you for your reply. I thought this thread had died! However, I have to point out that although queueing is a prerequisite for NCQ, it is I/O reordering within those queues which actually delivers any performance benefit. SR claims that NCQ delivers no real benefit to the end user, which is refuted by your benchmarks. The SR benchmarks replay their I/Os in a fixed order, so no reordering, and so no performance benefit. cheers, Martin
  15. MartinP

    why i can't delete ?

    If Windows can't move the file to the Recycle Bin, it will ask whether it's OK to just delete the file (ie same as Shift+Del). It won't silently fail the delete. cheers, Martin