Switching to Windows from Linux - A Linux user goes back

This sure sounds interesting. I am glad to hear that laser printers are coming down in price. Do you have a lead for a different model?

The negative feedback on CNET for this model was a little scary. Linux, OS X, and Windows users were all complaining about different things.

You have got to be kidding. The user opinions on this printer are overwhelmingly positive. 89% thumbs-up, and 11% thumbs-down. You will always have a bunch of people who couldn't get the thing to work properly, and 11% negative reviews is really nothing to worry about.

Honestly, I don't think you will find a better printer in this price range. Brother sells a similar model for a little more cash, and HP gutted their $400 printer to compete with the Samsungs and Brothers, but it sacrifices some features and performance to get there (and it's still $50-60 more expensive). Lexmark has a similar printer to the Samsung as well -- in fact, they might be based on the same printing engine -- but it is still a little more expensive.

The great thing about Samsung for us consumers is that they are the newcomers to the laser printer market and are using an aggressive pricing strategy to break in and quickly gain market share. The consumer benefits by getting tremendous value for their laser printer dollar. In a couple of years, they will be established as a legitimate player in the industry and won't have to discount their products as much. All I can say is "get it while it lasts". (I am sure you already know about Samsung's award-winning monitor line, which is known for its excellent performance for the dollar.)

The only reservation I would have about this model is the vertical sheet feeder. My impression of vertical sheet feeders is that they are more prone to feeding problems, especially multi-feeding, as they age. My HP LaserJet 6L needed a separation pad repair kit to stop it from multi-feeding. The good news is that Samsung is just introducing a larger printer with a 500-sheet paper tray -- the ML-1440.

I will let other posters discuss the usability features of each operating system, but I must disagree with Jason’s assertion that Linux is technically superior.  Quite the opposite is true.

While NT and Linux are roughly the same age, the NT kernel is far more sophisticated than the Linux kernel.  Linus chose to implement a simple, traditional, monolithic kernel.  After all, the project was ambitious enough as it was.  I suspect he made the right choice at the time.

This will be an interesting discussion, because cas has far more experience than I do, or is extremely good at feigning strong competence.

Cas, you are the first very knowledgeable person that I have ever known to believe the Windows design is actually superior to the Unix design.

First, when referring to the technical superiority of Windows and Linux (or actually any Unix; I don't see why Linux should get all the attention), I think it would be most appropriate to discuss the entire operating system and not just the kernel.

About the kernels: the monolithic kernel vs. microkernel debate is not settled. Microkernels are clearly better for commercial operating systems, but not because they are necessarily technically superior. Further, if you want a monolithic kernel, MacOSX is a fine operating system and I would argue is easier than Windows in almost every way as well, but it is a fork thrown into the blender of an otherwise smoother discussion thread.

I will have to do some more homework before I can respond intelligently, and feel free to correct anything I say that is incorrect (like I did to you in our brief filesystem debate earlier). A few things of note for now:

The Windows kernel integrates the GUI, making the kernel hugely more complex and bug-prone. Any bug in the kernel, as you know, can bring down the system because no software is above the kernel, and the more complex a piece of code is, the more bugs it will have. For many home users the stability is good enough, though, so that should no longer be much of an issue for average users.

Most code in the Linux kernel is driver code for specific devices, thus most is unused by any particular system, but NT literally forces you to use the GUI, so the extra bloat is used by everyone.

Microkernels like NT's are slower than monolithic kernels: with a monolithic design, fewer context switches are needed between kernel land and user land--and as you know these are fairly expensive.
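
(As an aside, for anyone who wants a rough feel for the cost being discussed: the sketch below is my own illustration, not anything from this thread. It times a tight loop of real system calls on Linux; syscall(SYS_getpid) forces a genuine user/kernel round trip, and the absolute numbers vary enormously with CPU and kernel.)

    /* syscall_cost.c -- rough feel for the cost of a user/kernel transition.
     * Illustrative sketch only. Build: gcc -O2 syscall_cost.c -lrt */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const long iterations = 1000000;
        struct timespec start, end;
        long i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < iterations; i++)
            syscall(SYS_getpid);        /* a real trip into the kernel and back */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        printf("about %.0f ns per system call round trip\n", ns / iterations);
        return 0;
    }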

In Windows, all applications' windows (which includes buttons, scrollbars, checkboxes, just about everything--not just fully fledged application windows) receive a message for nearly everything that happens--the mouse moved from here to here, this button was just released, the middle mouse button was just clicked--etc. This generates substantial overhead that should not be required when running a server (just as the memory-hungry GUI should not be required). Open up 10 applications including Task Manager and move the mouse around rapidly.

On this AthlonXP 1600+ system that I just reassembled, which has two applications open, doing so used up to 14% of the CPU. 14% of a 1400MHz modern CPU--to move the mouse. I can only guess what this does on a busy server (literally, because when running many applications which are all busy, it is very difficult to tell how much processing power messages are using). For desktop GUIs, though, message passing is probably the best way to handle everything. Perhaps IPC through the kernel, with specific apps telling the system exactly which events they care about, would work better, but then, I haven't designed too many GUI infrastructures.
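
(For anyone unfamiliar with the mechanism being described here: every Win32 window has a window procedure, and the thread that owns it sits in a message loop pulling events such as WM_MOUSEMOVE off a queue. The following is the standard textbook loop, shown purely for illustration--it is not code from anyone in this thread.)

    /* Minimal Win32 program showing the message loop under discussion.
     * Every mouse movement over the window arrives as a WM_MOUSEMOVE message.
     * Build (MinGW): gcc msgloop.c -o msgloop -mwindows */
    #include <windows.h>

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_MOUSEMOVE:      /* sent for essentially every mouse movement */
            return 0;
        case WM_LBUTTONUP:      /* ...and for every button release, click, etc. */
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = "MsgDemo";
        RegisterClass(&wc);

        HWND hwnd = CreateWindow("MsgDemo", "Message loop demo", WS_OVERLAPPEDWINDOW,
                                 CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                                 NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, nShow);

        /* The message pump: every queued event is dispatched to WndProc. */
        MSG m;
        while (GetMessage(&m, NULL, 0, 0) > 0) {
            TranslateMessage(&m);
            DispatchMessage(&m);
        }
        return 0;
    }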

That said, Linux has paid the price: drivers and vast amounts of kernel code have required rewrites to support multiple processors and threads (in an elegant way). Further, early versions of the Linux code base were tightly tied to the x86 architecture. Even in its initial release, NT was portable, had fine-grained kernel locks, and supported SMP operation.
Just as most drivers have required a rewrite from earlier NT versions to work on 2000/XP.

NT started with the intention of being a full multi-processor commercial OS supporting many platforms. This was not the original intention of Linux, so faulting it for not having these features long ago is inappropriate.

The fact is that Linux now scales quite nicely in many tasks up to around 64 processors (though certainly not as well as Solaris or Irix). I recall reading an SMP benchmark (though this will be useless information, as I do not remember the link) using IBM's DB2, in which a 64-way Linux machine (using SPARC hardware, I believe--again, just FYI) scaled at 89% efficiency. I did not see any comparison with Windows, but Windows machines with more than four processors are rare and those with more than eight are as rare as emerald hen's teeth under a blue moon.
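
(For reference, "89% efficiency" here means measured speedup divided by the number of processors; a quick sketch of the arithmetic, nothing more:)

    /* Parallel efficiency is just measured speedup divided by CPU count.
     * Working backwards from the quoted 89% figure on a 64-way machine: */
    #include <stdio.h>

    int main(void)
    {
        double efficiency = 0.89;
        int    ncpus      = 64;

        double speedup = efficiency * ncpus;   /* speedup relative to one CPU */
        printf("89%% efficiency on %d CPUs implies roughly a %.0fx speedup\n",
               ncpus, speedup);
        return 0;
    }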

Regardless, the proof is in the pudding. NT at one time supported Alpha and MIPS (three total architectures ever, unless I don't know about one) and currently supports *only* x86, unless you count the WindowsXP port to Itanium. (Who is actually going to use WindowsXP, a desktop OS, on an Itanium?)

Linux, on the other hand, supports x86, Alpha, MIPS, SPARC, PA-RISC, ARM, VAX, PowerPC, Motorola 68K, and I have seen it run on a wristwatch, whatever architecture that thing is. Saying that platform support was poor in Linux's ancient history is like complaining that Windows 3.0 doesn't multitask well*. What matters is now, and the proof is in the proverbial pudding.

*Speaking of Windows' multitasking, it is also inferior to that of most unices, but that will have to come up later on.

I feel compelled to point out here that W2k/.NET even as a server OS is completely stable, to industrial levels of availability (4 9's and above), provided the following is true.

The hardware platform is tested, stable, and reliable

The drivers are properly written to all current MS specs, with no "shortcuts" taken

The OS is properly configured and installed.

Most major tier-1 OEMs will guarantee, with Microsoft, IN WRITING, 99.99% uptime on W2k. The product is called Datacenter, but is nothing more than W2k Advanced Server with a stable, tested driver base on stable, tested hardware.

Security? Again, if security patches are properly applied, KV5 properly implemented, services configured correctly, W2k is capable of B1 level security.

So... what's the real problem? Why do people complain about bluescreens all the time under W2k and XP? Or security holes?

1) Poorly written software. Most software people run is not tested, or logo certified.

2) Security hole - Running in Admin mode. A properly written application should not require admin mode to execute. Problem is, most apps are NOT properly written. Why the HELL aren't games written to run in standard user mode? Who knows.

3) Poor drivers/hardware. If you have a Creative Labs soundcard in your system, you know what I mean. Creative Labs designed a card (the SBLive) that did NOT meet the PCI 2.1 spec, then wrote lousy WDM drivers to top it off. When the Audigy came out, Creative designed a "slightly" better card, but still has lousy WDM drivers. AFAIK, none of Creative's drivers to date have passed WHQL (i.e., not "certified"). Unfortunately, Creative is not alone here; they're just one example.

4) On that note, if you're installing non-signed drivers under XP (or W2k), don't come crying.

5) Backlevel drivers/firmware/Bios/software

DON'T try to run your old DOS games. DOS is dead, and DOS software is outdated and pathetic.

DON'T try to run your old Win9x crap...see above.

DON'T download hacked, or leaked drivers, then complain when you get BSOD's

DON'T use unstable hardware. With hardware, you get what you pay for.

Running a VIA chipset is bad enough. Running a VIA chipset with your standard PC2100 RAM at aggressive memory settings, the FSB at 150, and a massive heatsink to cool it off is running your system way... WAY... beyond spec. Yes, you'll boot. Yes, you may even run for a while, just don't expect it to ever be truly stable. Hardware problems can manifest themselves in many, MANY different, not-so-obvious ways. As someone who has debugged bluescreens for a living, I know.

You can have speed, stability, or low price. Pick any two.

There is a significant reason why no Tier-1 OEM builds any AMD/VIA based servers..you figure it out.

ROFL :D

99.99% reliability is 52 minutes of downtime a year. You simply cannot reach that level of reliability without clusters of machines and redundant hardware.
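
(The arithmetic behind that 52-minute figure, for anyone who wants to check it or extend it to other availability targets--just a sketch:)

    /* Allowed downtime per year for a given availability percentage.
     * 99.99% works out to roughly 52.6 minutes/year; 99.999% to about 5.3. */
    #include <stdio.h>

    int main(void)
    {
        const double minutes_per_year = 365.25 * 24 * 60;   /* ~525,960 */
        double targets[] = { 99.9, 99.99, 99.999 };
        int i;

        for (i = 0; i < 3; i++) {
            double downtime = minutes_per_year * (1.0 - targets[i] / 100.0);
            printf("%.3f%% availability -> %.1f minutes of downtime per year\n",
                   targets[i], downtime);
        }
        return 0;
    }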

In this case, it really doesn't matter what OS or hardware you are using, because the reliability of any single machine is not of any concern.

Nice try though :D

Windows ... receive a message for nearly everything that happens--the mouse moved from here to here, this button was just released, the middle mouse button was just clicked--etc. This generates substantial overhead that should not be required when running a server (just as the memory-hungry GUI should not be required) ...

... On this AthlonXP 1600+ system that I just reassembled, which has two applications open, doing so used up to 14% of the CPU. 14% of a 1400MHz modern CPU--to move the mouse. I can only guess what this does on a busy server ...

Sivar, why on earth would anyone open a bunch of other applications, move the mouse around, and click things on a server? That's what the desktops are for.

But as for your mouse using inordinate amounts of CPU cycles, it could be that your mouse drivers are poorly written (this seems to be more prevalent now with the advent of USB mice). A couple of years ago I installed the Logitech MouseWare software and drivers for my Logitech wheel mouse. I was shocked to see the enormous levels of CPU utilization when using my mouse -- especially when using the wheel to scroll in windows. You think 14% is bad? I was getting 70-80%. I quickly uninstalled the MouseWare drivers and software and reverted to the basic Microsoft PS/2 driver (the IntelliMouse driver also had higher CPU utilization, as I remember). Much lower CPU utilization. Unfortunately, I have been deprived of wheel scrolling in Visual Studio -- ironically, this is where it would be the most useful -- but it is worth the tradeoff for better performance in every other program.

Let's get something straight. HP inkjets suck, but so do all inkjets.

...

[snip]

...

I cannot for the life of me understand why laser printers haven't become the standard printer of choice, with inkjets acting as somewhat more specialized colour or photographic output devices, the way laser printers are perceived as somewhat more specialized black-and-white output devices (at least for the mainstream non-corporate consumer).

The newer inkjets are much better than the older ones. I love my new HP 940C (OK, kill me, I bought an HP). I agree about the ridiculous cost of consumables, but I avoid this by refilling my cartridges. It took me about half an hour to refill both cartridges, they worked perfectly the first time, and it cost me maybe $10 instead of the $65 new cartridges would have. And since I have no nearby stores that sell HP cartridges, I easily would have spent an hour or more going to the store to purchase them, so I saved time and money. My biggest pet peeve with inkjets is that the ink runs all over the place the minute the page gets wet. Are there any refill kits that use permanent ink?

That being said, I agree a color laser is technically superior in every way to an inkjet. The only drawback is the purchase price, which is still $1000+. Should the price get down to $250 or less for a color laser, I would run to the nearest store and purchase one.

Further, if you want a monolithic kernel, MacOSX is a fine operating system and I would argue is easier than Windows in almost every way as well, but it is a fork thrown into the blender of an otherwise smoother discussion thread. 

*Note:

Mac OSX is based on the Mach kernel. The Mach kernel is a microkernel.

ROFL :D

99.99% reliability is 52 minutes of downtime a year. You simply cannot reach that level of reliability without clusters of machines and redundant hardware.

In this case, it really doesn't matter what OS or hardware you are using, because the reliability of any single machine is not of any concern.

Nice try though :D

You can if the hardware has the built-in redundancy and the OS does not screw itself up.

I feel compelled to point out here that W2k/.NET even as a server OS is completely stable, to industrial levels of availability (4 9's and above), provided the following is true.

The hardware platform is tested, stable, and reliable

The drivers are properly written to all current MS specs, with no "shortcuts" taken

The OS is properly configured and installed.

Most major tier-1 OEMs will guarantee, with Microsoft, IN WRITING, 99.99% uptime on W2k. The product is called Datacenter, but is nothing more than W2k Advanced Server with a stable, tested driver base on stable, tested hardware.

Let's look at this piece by piece. 99.99% uptime for an enterprise server. That's a bit over 4 minutes per month, thus roughly equal to two reboots per month -- one if the machine is particularly slow at rebooting. Doesn't sound terribly impressive for an enterprise-class server, does it?

Additionally, what does the guarantee say? Does it say that the vendor will pay all lost revenue if the system is down longer than that? I seriously doubt it. Can you point us to one of these guarantees? Even if it is something like "or we will refund you in full for your hardware," after a huge investment in Datacenter server and the hardware, infrastructure, training, setup, etc., that doesn't seem terribly valuable. In fact, it sounds like marketing, unless they pay for all lost revenue. I have never seen a guarantee do that.

Rugger, 99.99% uptime is not particularly impressive. Again I like to point to the Netcraft uptime survey, which lists the webservers that have been up the longest (3+ years), none of which seem to be Windows-based. Many of them are ordinary PCs. I remember checking the longest-uptime site a year or so ago--it ran FreeBSD on a 386DX/25 with 8MB RAM. Hardly an enterprise server.

Security? Again, if security patches are properly applied, KV5 properly implemented, services configured correctly, W2k is capable of B1 level security.
If you wish to advocate Windows as a server, it would be wise to shy away from security as much as possible. Security is, well, not one of Windows' strengths.

Various government levels of security, if they qualify a Windows machine, are also a joke. I recall an NT 4.0 server "receiving" a high-level government security rating. Yes, the system happened to have no network connection, no CD-ROM, and the floppy drive was epoxied over, but it still got the letters. That's what matters, right?

Or perhaps the actual frequency and severity of flaws matters more? Perhaps the time between a flaw being reported and the flaw being fixed matters more?

And of course, it is always nice when the bugfixes actually fix the bug. Windows is not in the same league as Unix/Linux in security. Not that Unix/Linux is perfect, or even the best, but it is certainly several orders of magnitude better. NT does have some nice security /features/, none of which are unavailable on Unix, but what matters in the end is how likely your system is to be broken into.

Let's take a look at a few security websites and major hosting websites. These websites will have a pretty good idea of what a good server platform is, what is secure, and what is not.

Securityfocus.com. These people publish Bugtraq. You may have heard of it, as it is the most widely respected publisher of security exploits for all major systems.

The Netcraft OS detector says:

The site www.securityfocus.com is running Apache/1.3.26 (Unix) mod_perl/1.27 on Linux.

Now Defcon.org. Defcon is the annual hacker convention in Las Vegas. Not a bad convention to attend, and a great place to get cool T-shirts:

The site www.defcon.org is running publicfile on FreeBSD

Now, the U.S. Department of Defense:

The site www.dod.gov is running Netscape-Enterprise/4.1 on Solaris.

And now Rackspace.com, one of the largest hosting services, which hosts servers running Windows, Linux, FreeBSD, Solaris, etc.

The site www.rackspace.com is running Apache/1.3.22 (Unix) (Red-Hat/Linux) PHP/4.0.6 on Linux.

While I don't have time to run a Netcraft search of Pair.net, the oldest existing web hosting service, they use FreeBSD exclusively.

So... what's the real problem? Why do people complain about bluescreens all the time under W2k and XP? Or security holes?

1) Poorly written software. Most software people run is not tested, or logo certified.

2) Security hole - Running in Admin mode. A properly written application should not require admin mode to execute. Problem is, most apps are NOT properly written. Why the HELL aren't games written to run in standard user mode? Who knows.

3) Poor drivers/hardware. If you have a Creative Labs soundcard in your system, you know what I mean. Creative Labs designed a card (the SBLive) that did NOT meet the PCI 2.1 spec, then wrote lousy WDM drivers to top it off. When the Audigy came out, Creative designed a "slightly" better card, but still has lousy WDM drivers. AFAIK, none of Creative's drivers to date have passed WHQL (i.e., not "certified"). Unfortunately, Creative is not alone here; they're just one example.

4) On that note, if you're installing non-signed drivers under XP (or W2k), don't come crying.

5) Backlevel drivers/firmware/Bios/software

The entire point of stability is to be able to handle adversity, not falter at the first sign of it.

The OS should never crash when running a buggy application (games included), and should not have a consistent driver-quality problem. Drivers written for Linux seem to work wonderfully, whereas so many drivers written by the very company that designed the product seem to be of terrible quality. Granted, as far as drivers go, a system is only as stable as its drivers--but that begs for a comparison of driver stability. You won't see the Linux SBLive drivers causing any system crashes. (And, oddly, Creative writes some of them.)

DON'T (etc.)

This quickly turned from a server stability/security post to a desktop OS post, and then to a hardware stability post... (most servers do not run games and are not overclocked)

There is a significant reason why no Tier-1 OEM builds any AMD/VIA based servers..you figure it out.

Probably because VIA doesn't make any server chipsets and doesn't market any of their existing chipsets as "for use in servers."

You can if the hardware has the built-in redundancy and the OS does not screw itself up.

A single system always has at least one single point of failure, if not more (e.g. motherboard, CPU, video adapter, other controllers). If any of these devices fails, your system goes down and your 99.99% reliability is broken, since it is likely that it will take longer than 52 minutes to fix.
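
(This point can be made quantitative: with non-redundant parts in series, the availability of the whole box is at best the product of the individual availabilities, so it is always lower than the weakest component. A small sketch--the per-component numbers are invented purely for illustration:)

    /* Availability of components in series (single points of failure):
     * the system is up only if every one of them is up.
     * The per-component figures below are hypothetical, for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        double parts[] = { 0.9999,    /* motherboard */
                           0.9999,    /* CPU */
                           0.9995,    /* disk controller */
                           0.9995 };  /* power supply */
        double system = 1.0;
        int i;

        for (i = 0; i < 4; i++)
            system *= parts[i];
        printf("combined availability: %.4f%%\n", system * 100.0);
        /* ~99.88%: several "four nines" parts in series no longer make four nines. */
        return 0;
    }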

Most code in the Linux kernel is driver code for specific devices, thus most is unused by any particular system, but NT literally forces you to use the GUI, so the extra bloat is used by everyone.

Actually, NT does no such thing. I think that you are confusing NT the OS with the Win32 subsystem and API. It is true that MS recently moved Win32K.SYS into kernel space, so bugs there could make the OS less stable than if they had left only the "proper" NT kernel living in Ring0. (Video device drivers, and probably some others, also got moved into Ring0 along with the NT kernel, starting with NT 3.51 or 4.0. There was a big drop in stability at that point as well.)

Microkernels like NT's are slower than monolithic kernels: with a monolithic design, fewer context switches are needed between kernel land and user land--and as you know these are fairly expensive.

Perhaps, but making them reentrant and multithreaded can make the system overall appear more responsive, even if in pure performance terms it is less so.

In Windows, all applications' windows (which includes buttons, scrollbars, checkboxes, just about everything--not just fully fledged application windows) receive a message for nearly everything that happens--the mouse moved from here to here, this button was just released, the middle mouse button was just clicked--etc. This generates substantial overhead that should not be required when running a server (just as the memory-hungry GUI should not be required). Open up 10 applications including Task Manager and move the mouse around rapidly.

On this AthlonXP 1600+ system that I just reassembled, which has two applications open, doing so used up to 14% of the CPU. 14% of a 1400MHz modern CPU--to move the mouse. I can only guess what this does on a busy server (literally, because when running many applications which are all busy, it is very difficult to tell how much processing power messages are using). For desktop GUIs, though, message passing is probably the best way to handle everything. Perhaps IPC through the kernel, with specific apps telling the system exactly which events they care about, would work better, but then, I haven't designed too many GUI infrastructures.

You are confusing things here. I'm not sure exactly how it all works, given that the scheduler should be in the NT kernel while the message passing is handled in the Win32 subsystem, but in NT/W2K, as in many other OSes, interactive processes that receive input get their scheduling quantums boosted, so that those processes seem more responsive to the user, even if they do "cost" more of the CPU time percentage-wise and overall slow down the system in global terms. VMS was the same way. (NT was designed based on the design of VMS. Some say it is the direct successor. VMS++ == WNT.)
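
(A concrete footnote to the quantum-boost behaviour described above: the boost itself is applied automatically by the scheduler, but the base priority class it builds on is visible and adjustable from user mode with ordinary Win32 calls. A sketch for illustration only:)

    /* Reading and adjusting a process's base priority class under Win32.
     * The automatic foreground/quantum boost described above is applied by the
     * scheduler itself; this only shows the explicit, user-visible knobs. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE self = GetCurrentProcess();   /* pseudo-handle, no cleanup needed */

        DWORD before = GetPriorityClass(self);
        printf("priority class before: 0x%lx\n", (unsigned long)before);

        /* Bump the whole process above normal -- e.g. for an interactive tool. */
        if (!SetPriorityClass(self, HIGH_PRIORITY_CLASS))
            printf("SetPriorityClass failed: %lu\n", (unsigned long)GetLastError());

        printf("priority class after:  0x%lx\n", (unsigned long)GetPriorityClass(self));
        return 0;
    }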

Regardless, the proof is in the pudding. NT at one time supported Alpha and MIPS (three total architectures ever, unless I don't know about one) and currently supports *only* x86, unless you count the WindowsXP port to Itanium.

You forgot the PPC, the IBM "PREP" platform. A short-lived port, if there ever was one. I don't know if a Sparc port ever existed or not. Was Sun part of the "ARC" at the time? Portions of the NT/Win32 API were also grafted onto the VMS OS, as part of a technology transfer between MS and DEC (although I don't know if that was only the Alpha platform or not). (MS got some of DEC's clustering technology, and DEC got to integrate some Win32 APIs into their OS. What value that had for them, I find questionable.)


I make a list of statements below. The technical information can be verified by anyone, as the information it is based on is freely available on the web. The rest are my observations on the strengths of free software.

Linux runs on more than 24 different processor architectures, making it an ideal solution for enterprises wishing to integrate disparate hardware under a common programming environment and a common set of applications. I don't know of any other OS as widely supported today.

The standard 2.4 Linux kernel supports 64GiB of memory. Windows 2000 Server, by contrast, is limited to 4GiB of RAM. You have to purchase Windows 2000 Advanced Server to get support for 8GiB of RAM, or Windows 2000 Datacenter Server to achieve full 64GiB RAM support. This is indeed a moot issue for individuals for now, but it isn't for academic institutions needing to create a very powerful server on very little money.

Also note from the pricing information that if you spend US$3,999 to purchase Windows 2000 Advanced Server, you are still only allowed to connect 25 clients to it before you are required to purchase additional Client Access Licenses (CALs). Not having to deal with CALs is a clear advantage of Linux, as any network admin knows.

This sort of cost advantage makes Linux very attractive for academic use: complex economic modeling, weather forecasting, genetic research, the study of molecular behavior in the fields of biotechnology and bioengineering...

Linux is used in vast computational clusters, an area where Microsoft Windows, to the best of my knowledge, isn't even a factor.

Open source puts the consumer in the driving seat. By fully disclosing the workings of the OS, the consumer/enterprise can get support from whoever offers it at the best price point. Unlike with proprietary software, the customer no longer has to upgrade when the vendor tells him to in order to maintain support for his systems. This is as it should be.

If you don't think onerous licenses are a problem, you haven't been paying attention to the widespread discontent with Microsoft's Software Assurance program: http://news.com.com/2100-1001-908779.html

According to a recent Gartner survey (http://news.com.com/2100-1001-257390.html?tag=rn), the new licensing could increase corporate customers' spending on Microsoft licenses between 33% and 107%, depending on how often they typically upgrade their software. Both Gartner and the analyst firm Giga report confusion and frustration among the corporate customers they studied (http://news.com.com/2100-1001-908773.html?...ne.dht.nl-sty.0), with roughly one-third having already signed up for the program, one-third leaning against it, and still another third undecided.

New developers, companies, governments stand on the shoulders of giants. By benefiting from the collective mind pool that is free software, they can speed up time-to-market or come up with new creative solutions without reinventing the wheel or being subjected to onerous licensing terms. Note that under the GPL you don't have to share your modifications, so long as you do not release those modifications as part of a commercial products. Why we are forcing our cash-starved public schools to pay Microsoft's tax is beyond me. Some schools are already making the change:

http://www.k12ltsp.org/

http://lingua.utdallas.edu/encore/

Related to the prior point: we live in a world where everything is slowly being commoditized. The pursuit of knowledge does not benefit from closed systems. Knowledge advances when our achievements can be fully shared with the rest of humanity. There are instances when standard copyright licenses make sense: the process of writing a novel would be completely distorted if you could not point authoritatively to a single author. This allows us to discuss an author's ideas as he intended them to come across. (Even here, there are some interesting experiments.)

Software today acts as an enabler of communication in every meaningful realm of human activity: culture, commerce, politics. By having the software that powers this immensely important infrastructure available to everyone, we remove barriers to the pursuit of human achievement. Software should be a tool. If we can make that tool available to all those wishing to innovate, humanity as a whole benefits. We will have better products and a more colorful and less insular culture (look at the way in which the web allows people from all over the world to interact in real time). Should we not be happy about the existence of a software philosophy that puts our collective well-being before the private gain of any company? As R. Buckminster Fuller observed the same year that Stallman invented copyleft: “trying to stop this apolitical and amorphous phenomenon of cooperative networking will be like trying to stop the waves of the ocean.”

Microsoft has not disclosed security vulnerabilities to its customers in a timely fashion, making it impossible for them to find ways to protect themselves until a fix is available. One can always remove or disable a service if one knows that it is not safe.

Just consider Microsoft's response to the UPnP vulnerabilities that were discovered right after the release of Windows XP. Not once did they inform Windows XP customers that it might be prudent to disable the UPnP service over the entire five-week period that Microsoft knew about an additional critical vulnerability. In fact, Microsoft even had a preliminary patch that fixed the critical vulnerability. Scott Culp, Manager, Microsoft Security Response Center, said:

(Microsoft news server) On 14 November, eEye reported that they had found a buffer overrun, and that the preliminary patch we'd sent on 07 November appeared to fix it. We investigated the report and found that there was indeed a buffer overrun and that our preliminary patch had protected against it by blocking access to the code path containing the unchecked buffer

This is strong evidence that Microsoft had a patch available to fix a critical, network-exploitable buffer overrun around 14 November. Yet it was going to be five weeks before the patch to fix the buffer overrun was released. The buffer overrun fix was critical, yet it wasn't released independently of the less serious denial-of-service vulnerabilities. And no one was notified of the option to disable UPnP.

More to come later.

Note that under the GPL you don't have to share your modifications, so long as you do not release those modifications as part of a commercial products.

I think that statement would be more correct if you dropped the phrase "as part of a commercial product". The GPL doesn't distinguish. If you modify the source and release it (period), then you need to provide the source as well.


Well... a few misconceptions need to be cleared up.

It's obvious that most of the "hobbyists" here are unfamiliar with enterprise-class Intel servers. Note that I say most, so hold the flames.

On such a server, there are NO single points of failure.

As far as uptime guarantees on Datacenter? OEMs are REQUIRED to offer this in writing in order to sell Datacenter. The penalties in these guarantees, to both MS and the OEM, are quite severe, often reaching the $100k+ range. This is why the OEMs often insist on clusters. Please understand, in these configurations, the server is usually the cheapest part of the setup. The software almost always costs far... FAR more, not to mention the storage. Just as an example: Oracle 9i runs $60k PER CPU. On a standard 8-way server, that's $480k... about as much as the server itself. SAP can run into the hundreds of thousands.

On other issues, Sivar's contention that NT "forces you to use the GUI" is laughable. The GUI exists for the sole purpose of manipulating the registry (as opposed to editing countless little config files under Linux). Sivar... why are you "moving the mouse" on your server at all? You set it up correctly and it runs. If needed, you access it remotely through terminal server, or run headless. Just how much overhead does Xwindows use under Linux?....UGH. Real servers are SMP anyway, so this is a non-issue. Also Sivar, 99.99% is the guarantee. In practice, most Datacenter setups in the field are running 5 nines. As I stated above, OEMs and MS stand behind these numbers.

You seem to be smoking some crack when it comes to Linux comparisons. Linux/Intel is NOT an enterprise-class OS. No major vendor offers it as such (on the Intel platform). Yes, IBM does have a specific port for the mainframe environment, but they had to rewrite the entire kernel to do it. Linux/Intel still does not successfully support more than 8 CPUs, and scales pathetically past 2 CPUs.

Yes, as a webserver, I'll admit, IIS leaves much to be desired. However, enterprise-class servers are not used for IIS (that's what cheap 1U server farms are for). Rather, they are used for mission-critical apps.

Rugger, you are incorrect. A single system does NOT always have a single point of failure. High-end servers are quite capable of running without video/headless, have redundant hot-swap PCI slots, and multipath storage. Some can even dynamically take a bad CPU offline (although this will require .NET for OS support). Motherboard components themselves are redundant on these servers. So yes, Rugger, a single server can easily reach 99.99% uptime in the hardware.

Note that under the GPL you don't have to share your modifications, so long as you do not release those modifications as part of a commercial products.

I think that statement would be more correct if you dropped the phrase "as part of a commercial product". The GPL doesn't distinguish. If you modify the source and release it (period), then you need to provide the source as well.

Sorry for that "s" tagging along my last sentence. I always write to these boards when I am tired of doing my own work and never proofread.

Won't happen again, I assure you :wink:

And you are correct about the GPL. What I intended to say, and didn't say, is that you can't modify a GPL program to suit your needs and so long as you don't publicly release it, you can do whatever you want. So, an NGO or a government entity could adapt a program to its needs without sharing the changes. I apologize for the ambiguity in both statements. I was more than a little tired when I wrote them.

Thanks for setting the record straight.

Cheers,

Yuyo


I think the best way to go nowadays is dual-boot... since Linux is still free most of the time, I don't see why one shouldn't have a small partition of his HD with a Linux distro to play with and use open-source apps... :)

And you are correct about the GPL. What I intended to say, and didn't say, is that you can't modify a GPL program to suit your needs and so long as you don't publicly release it, you can do whatever you want. So, an NGO or a government entity could adapt a program to its needs without sharing the changes. I apologize for the ambiguity in both statements. I was more than a little tired when I wrote them.


Here we go again. And I just promised you that it wouldn't happen again. "Can't modify" above should be "can modify".

Read the GPL on your own if you are interested. It can be found here: http://www.fsf.org

Rugger, you are incorrect. A single system does NOT always have a single point of failure. High-end servers are quite capable of running without video/headless, have redundant hot-swap PCI slots, and multipath storage. Some can even dynamically take a bad CPU offline (although this will require .NET for OS support). Motherboard components themselves are redundant on these servers. So yes, Rugger, a single server can easily reach 99.99% uptime in the hardware.

Sure about that, needing .NET support for hot-plugging defective CPUs? My friend has a quad Xeon super-redundant-hotswap-everything Compaq server he picked up, surely if the CPUs and memory support hot-plugging, then they must have been supported in prior (existing) versions of NT. OEMs can write custom HALs for support, you know. Still, I don't think we've tried actually hot-swapping a CPU on that box. Kind of a scary thought to most people familiar with Intel workstation-class machines.


Proteus,

Where did you get the idea that Linux isn't enterprise class? Most serious hosting companies offer a four- or five-nines uptime guarantee. Rackspace.com, one of the biggest hosters out there, offers 99.999% on Red Hat, Solaris, FreeBSD and Win2K. Most others do as well. Hell, we have a plan set up for 99.99% for customers we host. That type of uptime guarantee is VERY common in the industry. It's just costly. Most of the larger hosting companies will offer that for any major OS you choose. You just have to pay for it.

As for what is a more stable OS or what is more worthy of an uptime guarantee. Well lets see what netcraft has to say:

http://uptime.netcraft.com/up/today/top.avg.html

In the top 50 I see one Linux box and 5 or so Irix, a whole bunch of BSD, and... look... no Windows.

James Ashton

Most code in the Linux kernel is driver code for specific devices, thus most is unused by any particular system, but NT literally forces you to use the GUI, so the extra bloat is used by everyone.

Actually, NT does no such thing. I think that you are confusing NT the OS with the Win32 subsystem and API. It is true that MS recently moved Win32K.SYS into kernel space, so bugs there could make the OS less stable than if they had left only the "proper" NT kernel living in Ring0. (Video device drivers, and probably some others, also got moved into Ring0 along with the NT kernel, starting with NT 3.51 or 4.0. There was a big drop in stability at that point as well.)

I was saying that NT forces the GUI to run. Technically, I suppose you could run the DOS shell on top of it, and I believe it is possible to replace the shell with a custom program, as was done with "Windowblinds" (or something like that), which allowed customization of the Windows 9x interface. But most functionality is designed with the GUI in mind, much is not generally accessible from the DOS shell at all, and the GUI is still running on top of everything else. Are you saying that one can run NT and, say, IIS without the use of the GUI, thus freeing up memory? I don't know, I imagine it might be possible, but does anyone do this?

Microkernels like NT's are slower than monolithic kernels: with a monolithic design, fewer context switches are needed between kernel land and user land--and as you know these are fairly expensive.

Perhaps, but making them reentrant and multithreaded can make the system overall appear more responsive, even if in pure performance terms it is less so.

The same can be done with a monolithic kernel as well. Are you saying that microkernel performance can appear better if both are reentrant and multithreaded? I don't know. What do you mean "appear more responsive?"

Note that I am not necessarily saying that a monolithic kernel is, all things considered, better than a microkernel--only that both have their advantages and disadvantages. Additionally, one can use MacOSX and still get the power of Unix with the advantages of the microkernel if it is so desired. (That is, if you happen to have a Mac handy)

In Windows, all applications' windows (which includes buttons, scrollbars, checkboxes, just about everything--not just fully fledged application windows) receive a message for nearly everything that happens--the mouse moved from here to here, this button was just released, the middle mouse button was just clicked--etc. This generates substantial overhead that should not be required when running a server (just as the memory-hungry GUI should not be required). Open up 10 applications including Task Manager and move the mouse around rapidly. [Etc...]
You are confusing things here. I'm not sure exactly how it all works, given that the scheduler should be in the NT kernel while the message passing is handled in the Win32 subsystem, but in NT/W2K, as in many other OSes, interactive processes that receive input get their scheduling quantums boosted, so that those processes seem more responsive to the user, even if they do "cost" more of the CPU time percentage-wise and overall slow down the system in global terms. VMS was the same way. (NT was designed based on the design of VMS. Some say it is the direct successor. VMS++ == WNT.)

What things am I confusing?

Regardless, the proof is in the pudding. NT at one time supported Alpha and MIPS (three total architectures ever, unless I don't know about one) and currently supports *only* x86, unless you count the WindowsXP port to Itanium.

You forgot the PPC, the IBM "PREP" platform. A short-lived port, if there ever was one. I don't know if a Sparc port ever existed or not. Was Sun part of the "ARC" at the time? Portions of the NT/Win32 API were also grafted onto the VMS OS, as part of a technology transfer between MS and DEC (although I don't know if that was only the Alpha platform or not). (MS got some of DEC's clustering technology, and DEC got to integrate some Win32 APIs into their OS. What value that had for them, I find questionable.)

That's interesting information, about DEC getting to use Win32APIs in VMS. I didn't actually forget the PPC port of NT--I never knew about it to begin with. :-)

(Microsoft news server) On 14 November, eEye reported that they had found a buffer overrun, and that the preliminary patch we'd sent on 07 November appeared to fix it. We investigated the report and found that there was indeed a buffer overrun and that our preliminary patch had protected against it by blocking access to the code path containing the unchecked buffer

This is strong evidence that Microsoft had a patch available to fix a critical, network-exploitable buffer overrun around 14 November. Yet it was going to be five weeks before the patch to fix the buffer overrun was released. The buffer overrun fix was critical, yet it wasn't released independently of the less serious denial-of-service vulnerabilities. And no one was notified of the option to disable UPnP.

More to come later.

Good post--not that there have been many bad posts in this thread. I find it interesting that the Microsoft patch simply blocked access to the code containing the buffer overrun rather than fixing the problem in the first place. Perhaps the preliminary patch had secondary problems, thus making a later release prudent. Either way, good point about Microsoft not even posting details of a simple workaround.

On other issues, Sivar's contention that NT "forces you to use the GUI" is laughable. The GUI exists for the sole purpose of manipulating the registry (as opposed to editing countless little config files under Linux). Sivar... why are you "moving the mouse" on your server at all?

Moving the mouse was one of several examples to illustrate a point, which is message passing overhead. You do not need to move mice around to pass thousands of messages.

Saying that the GUI is used for the sole purpose of editing the registry is a bit of a reach, but setting that aside, you missed the point: the server is running the GUI regardless. The focus of the statement was GUI overhead. Yes, I know you can telnet in and use the rather lacking DOS shell or, with special software on both ends, log in to the system with a graphical terminal (which tends to be rather slow and flaky in my experience), but the GUI is still running on the NT server. You speak of not needing to move the mouse on a server, which is correct (but wasn't the point). I'll counter that with: why are you running a GUI on the server at all? If you have 200 servers, why are you running 200 copies of a rather resource-hungry GUI?

You set it up correctly and it runs. If needed, you access it remotely through terminal server, or run headless. Just how much overhead does Xwindows use under Linux?....UGH.

The overhead of X is something that the user can decide for themselves, but it doesn't matter because everything server related can easily be done via SSH or on the local terminal. If a newbie admin insisted on using a GUI on the server, generally considered a bad practice or at the very least "in poor taste" on Unix systems, it would likely use X with a lightweight window manager which would all fit into about 32MB of RAM nicely. That includes the webserver, the operating system overhead, and the GUI itself. This could all be done with 16MB of RAM if certain very lightweight tools were used, and an older (but still perfectly functional) version of XFree86 were used.

You do not, however, have to run the GUI. Even if it is possible to avoid running the GUI under Windows, which would be an interesting feat, the idea is quite against the grain of the Windows "culture," that is, nobody (that I know of) runs NT without the GUI.

Real servers are SMP anyway, so this is a non-issue.
Operating system overhead is a non-issue on SMP systems?
Also Sivar, 99.99% is the guarantee. In practice, most Datacenter setups in the field are running 5 nines. As I stated above, OEMs and MS stand behind these numbers.

On an extremely carefully setup system, that does not seem far-fetched at all, but I am still looking for a link to such a guarantee.

You seem to be smoking some crack when it comes to Linux comparisons.

Note that if you use a personal insult again, I will simply ignore your posts. I come here for intelligent debate and information sharing, not to participate in flame wars. If you wish to do so, there are plenty of Linux zealot and Windows zealot newsgroups that would be more than happy to accommodate your needs.

Linux/Intel is NOT an enterprise-class OS. No major vendor offers it as such (on the Intel platform). Yes, IBM does have a specific port for the mainframe environment, but they had to rewrite the entire kernel to do it. Linux/Intel still does not successfully support more than 8 CPUs, and scales pathetically past 2 CPUs.

I wouldn't advocate anything on an Intel-based (or AMD-based) server as an enterprise-class solution. I would advocate IBM, Sun, and HP PA-RISC and, after proving themselves for a few years, perhaps Itanium and Opteron.

I would certainly not advocate Windows as an enterprise class OS either. Would you?

Yes, as a webserver, I'll admit, IIS leaves much to be desired. However, enterprise-class servers are not used for IIS (that's what cheap 1U server farms are for). Rather, they are used for mission-critical apps.
Indeed. Sometimes, though, huge enterprise-class servers run user-interactive mission-critical apps. For example, DirecTV Inc.'s real-time billing system is a terminal OpenVMS application on Alphas.

Many enterprises are moving to web-based interfaces for many internal operations, though I do not know how many are running these on very large systems.

I was saying that NT forces the GUI to run.

And I'm saying that is simply not true. You are slightly misinformed. Check out www.sysinternals.com or www.ntinternals.com. They have some NT-native-API command-line applications available. The Windows 2000 "recovery console" is an NT-native command-line application that does not require a GUI to run. The GUI is connected to the Win32 subsystem; it is not required in NT proper.

Are you saying that one can run NT and, say, IIS without the use of the GUI, thus freeing up memory? I don't know, I imagine it might be possible, but does anyone do this?

Maybe you are using a definition of "NT" that is vague. IIS is a Win32 application/service, not an NT one, at least in terms of API calls.

What things am I confusing?

That the CPU usage jumps when moving the mouse. That is normal if you have an application open (and remember, the "desktop" is actually a window of the Explorer.exe application, full-screened). The OS gives the application receiving the input events a priority boost in the scheduler, so it appears to take more CPU time. If the increase in CPU time usage is really significant, then you may be running some CPU-heavy mouse drivers that have an application running in the background to do fancy things with the input. That wouldn't surprise me too much.

That's interesting information, about DEC getting to use Win32APIs in VMS. I didn't actually forget the PPC port of NT--I never knew about it to begin with. :-)

That's OK; there are things I've read, even from MS people, indicating that they don't know about some of MS's short-lived products, like Visual C++ for Macintosh. (I read something from a VC developer stating that they had never produced any language products for the Mac. Strange, eh?)


Larry,

Have you ever run a windows server without the GUI running??? In production?? Do you know anyone who has???

I have been administering servers for a few years now and have never heard of anyone doing that.

Also, am I making a reasonable assumption that it takes a great deal of work to run any standard NT/2K app without the GUI??

IIS not an NT app????? So... you recommend running it on Win98SE?? Or maybe WinME??

Most people running WIN2K or NT as a server are going to be running at least part of the IIS suite, so.....

James Ashton

Even if it is possible to avoid running the GUI under Windows, which would be an interesting feat, the idea is quite against the grain of the Windows "culture," that is, nobody (that I know of) runs NT without the GUI.

Windows 2000 comes with an ISV-extensible management console [MMC] which, when combined with DCOM, can be used to perform remote administration: http://www.microsoft.com/windows2000/techi...nt/mmcsteps.asp

Windows 2000 Server also has Terminal Services which allows you to fully log into a server machine remotely : http://www.microsoft.com/windows2000/techn...nal/default.asp

Windows .NET takes this one step further and will provide native support for a headless server : http://www.microsoft.com/hwdev/platform/se...ess/default.asp

So although the GUI isn't really going away... it is shifting away from the server

Larry,

Have you ever run a windows server without the GUI running??? In production?? Do you know anyone who has??? 

I have been administering servers for a few years now and have never heard of anyone doing that. 

Also, am I making a reasonable assumption that it takes a great deal of work to run any standard NT/2K app without the GUI??

IIS not an NT app????? So... you recommend running it on Win98SE?? Or maybe WinME??

Most people running WIN2K or NT as a server are going to be running at least part of the IIS suite, so.....

James Ashton

Clearly, some people missed the subtle but careful distinction that I started making between NT the OS kernel, and the Win32 subsystem, after Sivar started talking about microkernels and stuff.

NT is a layered OS, and some people have argued that it is in fact a form of microkernel OS.

NT supports different subsystems/"personalities": the all-too-familiar Win32 subsystem, with the generally-mandatory GUI (USER/GDI); the OS/2 (MS OS/2 1.3, I think) subsystem; the POSIX subsystem; and the (rarely directly used) native NT API.

When I mentioned "NT", I meant "the NT kernel". However, I failed to make clear my specific reference, and most people simply call the Win32-subsystem-on-NT "NT", because the big shiny box that the install CD came on, said "Windows NT" on it.

Maybe "Win32/NT" would be a better choice of phrase. (With a nod to the "GNU/Linux" crowd.)

In that sense, IIS is not an NT (NT kernel native API) app, but it is a Win32 app, which can run on either Win32/NT or Win32/9x. (But surely not Win32s, I don't think. Anyone tried it? Btw, "PWS" on Win9x is in fact the same binaries as IIS, I think version 2 or 3. So IIS does run on Win9x.)

To sum up - NT does not have a GUI. Win32-on-NT does. That was the difference that I was trying to convey.

