Discussion in 'OT Technology' started by P07r0457, Jun 17, 2006.
Whoops, OSX isn't too hot as a server:
For serving databases, this seems to be true, although that article hasn't been independently verified. There are other features of OS X, however, that make it REALLY attractive. In one solution we've spec'd, we use HPs running Linux as database servers, and XServes as application servers. The reasons being:
1) Remote monitoring capabilities for OS X are advanced, and easy to use. And they integrate with other hardware via SNMP for a single monitoring solution through an OS X Server at HQ.
2) All frameworks, platforms and tools we need are included with OS X. We don't have to customize a Linux distro. This saves us time and money.
3) Ease of use. If a tech can't handle managing OS X Server, he is too stupid to breathe.
XServes are quite nice. But I wouldn't use one for a database server if performance was a concern.
It could be that Apple didn't even bother trying to compete in the database-server arena, and optimized the OSX Server (or whatever the hell it's called) to outperform its competitors in other tasks.
That's the case. The article talks about it.
Woodcrest will beat the crap out of Opterons
How about linking to the article from whence that image came?
No, that would allow for too much flexibility of interpretation. All you need to worry about is believing what he tells you.
I found this in 2 seconds of a google search.
My search in google:
mySQL 4.x performance dual g5 tiger dual opteron
You guys seriously need to find other things to complain about.
Seems a waste to us on a database server anyway.
On a side note, I see Hyperthreading showing it sucks yet again. Worst gimmick feature ever, at least for server processors.
I've never really understood how Hyperthreading works, though I do know what it's supposed to accomplish. Does anybody have a concise explanation?
It processes 2 threads by splitting them up into little pieces and alternating between them while pushing them through a single "hole".
So instead of AAAAAAAABBBBBBBB
It would be like ABABABABABABABAB
That would be false. That is an overly simplified example of how typical single-core chips deal with multi-threading.
With HTT, you have 1 real "core" and one virtual core. The real core is the only one that can do any real processing. The primary core will receive the majority of threads, and will do the majority of the work. However, certain memory-intensive applications can make the core sit useless while it waits on memory. This is where HTT comes into play. It can put the primary thread on "hold" while it waits for the data it needed, then work on the thread scheduled to the virtual core. Performance gains can be up to 15%. This isn't a "gain" so much as a lessening of the inefficiency of the NetBurst architecture.
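Here's a rough toy model in Python of the stall-filling idea described above. Everything in it is invented for illustration (the cycle costs, the "work"/"stall" instruction types, and the overlap rule are not real NetBurst numbers) -- it just shows why filling one thread's memory-wait cycles with the other thread's work yields a modest gain in this ballpark:

```python
# Toy model of how HTT hides memory stalls. All cycle counts are invented
# for illustration -- they are not real NetBurst timings.

def run_sequential(thread_a, thread_b):
    """One thread at a time on a plain core: stall cycles are simply wasted."""
    return sum(cost for _op, cost in thread_a + thread_b)

def run_smt(thread_a, thread_b):
    """Coarse SMT model: a thread's stall cycles can be overlapped with
    the *other* thread's work, so only un-hidden stall cycles cost time."""
    work_a = sum(c for op, c in thread_a if op == "work")
    work_b = sum(c for op, c in thread_b if op == "work")
    stall_a = sum(c for op, c in thread_a if op == "stall")
    stall_b = sum(c for op, c in thread_b if op == "stall")
    # stalls in A hide behind work from B, and vice versa
    exposed = max(0, stall_a - work_b) + max(0, stall_b - work_a)
    return work_a + work_b + exposed

# Two identical threads: 10 cycles of work, then a 2-cycle cache miss.
a = [("work", 1)] * 10 + [("stall", 2)]
b = [("work", 1)] * 10 + [("stall", 2)]
print(run_sequential(a, b))  # 24 cycles back to back
print(run_smt(a, b))         # 20 cycles: each stall hides behind the other thread
```

In this made-up case the core goes from 24 cycles to 20, a gain in the same rough range as the "up to 15%" figure quoted above.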
Personally, I find HTT P4s to feel much more "snappy" when running basic applications.
The problem is that the virtual and physical processors share the L1 and L2 caches. In high-demand situations, such as heavily utilized SQL, Exchange, or Terminal servers, they end up shredding the L1 and L2 caches and actually degrading performance. There have been a couple of whitepapers and articles on it in the last year or so. The benchmark above shows it too. I've seen it happen personally with heavily loaded (running ERP software, go figure) Citrix servers. We replaced about 30 old servers with new blades and could not figure out why they were not performing like we expected. We turned off HT and gained around a 14% user load increase (meaning we were getting 14% more users per box before they hit fully loaded).
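The cache-shredding effect described above can be sketched with a toy LRU cache model in Python. The sizes and address traces here are completely made up (a real L1/L2 is set-associative and far more complex) -- the point is just that two working sets which each fit the cache alone can evict each other constantly when interleaved:

```python
# Toy model of two SMT threads thrashing a shared cache.
# Cache size and traces are invented for illustration.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # addr -> present, ordered by recency

    def access(self, addr):
        """Return True on hit, False on miss (fills the line on a miss)."""
        if addr in self.lines:
            self.lines.move_to_end(addr)
            return True
        self.lines[addr] = True
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least-recently-used line
        return False

def hit_rate(cache, trace):
    hits = sum(cache.access(addr) for addr in trace)
    return hits / len(trace)

# Each "thread" loops over its own 8-line working set.
a_trace = list(range(8)) * 10
b_trace = list(range(100, 108)) * 10

alone = hit_rate(LRUCache(8), a_trace)  # working set fits: mostly hits
# Interleave the two traces, as SMT effectively does to the shared cache.
mixed = [x for pair in zip(a_trace, b_trace) for x in pair]
shared = hit_rate(LRUCache(8), mixed)   # 16 hot lines in 8 slots: pure thrash
print(alone, shared)  # 0.9 0.0
```

One thread alone hits 90% of the time; interleaved, the two evict each other's lines before they can be reused and the hit rate collapses, which is the degradation seen on those loaded SQL/Citrix boxes.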
It's just a useful abstraction over Intel's really wide pipeline, yeah? But it doesn't actually help.
It helps a lot for common use. But when you're running full load it can hinder you.
for desktop use, it certainly does help.
So Hyperthreading is just CPU-controlled multithreading, as opposed to OS-controlled multithreading? The CPU reports that it is being underutilized and the OS gives it something to do, yes?
Works just fine on a desktop. Most desktops don't ever come close to the heavy sustained usage that would cause the performance hits.
No. Hyperthreading processors have some of the CPU components duplicated on the die. These components hold the state registers, and this allows the system to pretend it has 2 processors (since 2 sets of registers are available). It's not like a dual core, which has 2 complete cores on a die. So when one thread is active but not using the execution units of the processor, the CPU can use them to perform tasks for the other thread.
It's kind of like if two cars could share one engine, so when one car was idle at a light the other could use the engine to accelerate. They just can't both use the engine at the same time.
The OS still controls the multithreading just like it normally would in a true dual-CPU system.
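A toy Python model of the register-duplication idea might look like this. Every name and structure in it is invented (real SMT shares far more machinery) -- it just shows two complete register contexts the OS can schedule onto, backed by a single execution engine that retires at most one instruction per cycle:

```python
# Toy model of an SMT core: two full architectural register sets, one set
# of execution resources behind them. All names here are made up.

class LogicalCPU:
    """What the OS schedules onto: a complete architectural register set."""
    def __init__(self, name):
        self.name = name
        self.registers = {"pc": 0, "acc": 0}  # simplified architectural state
        self.queue = []  # instructions the OS has dispatched to this context

class SMTCore:
    """One physical core exposing two logical CPUs, alternating issue."""
    def __init__(self):
        self.logical = [LogicalCPU("cpu0"), LogicalCPU("cpu1")]
        self.turn = 0

    def cycle(self):
        # Only one execution unit: at most one instruction retires per
        # cycle, taken from whichever context has work, alternating turns.
        for i in range(2):
            cpu = self.logical[(self.turn + i) % 2]
            if cpu.queue:
                self.turn = (self.logical.index(cpu) + 1) % 2
                cpu.registers["acc"] += cpu.queue.pop(0)
                cpu.registers["pc"] += 1
                return cpu.name
        return None  # both contexts idle this cycle

core = SMTCore()
core.logical[0].queue = [1, 1]  # two instructions dispatched to cpu0
core.logical[1].queue = [1]     # one instruction dispatched to cpu1
order = [core.cycle() for _ in range(3)]
print(order)  # ['cpu0', 'cpu1', 'cpu0'] -- one engine, two register states
```

The OS sees two schedulable CPUs and dispatches to both as usual, but execution alternates through the single shared engine -- the two-cars-one-engine analogy above.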
no. not at all.
In a uni-processor system, a time can arise where the CPU has nothing to do, but is still in a "busy" state because it is waiting on memory. The OS has no way of detecting this and has no way to tell the CPU to do something else. The NetBurst architecture really made this show with its long memory pipes. HTT allows the core to work on the second thread during these "wait" times.
Modern processors have more than one execution pathway, so more than one instruction is being processed at once. Intel's pipe is really fat. That means many instructions are in flight at once.
Hyperthreading makes half the super fat pipe look like a second processor.