Intel buys middleware company Havok; why, we don't know yet... Havok is well known for its physics, animation and behavior engines and is widely used in the game industry. Intel's statement is the following: "The acquisition will enable developers in the digital animation and game communities to take advantage of Intel's innovation and technology leadership in the creation of digital media." Looks like Intel is going for diversification, good choice: Havok is quite healthy and innovative. One can wonder if the current debate on GPU vs CPU for physics and high-performance computing isn't urging Intel to get in the game by the backdoor and acquire critical knowledge to develop a "Havok on chip". Intel has never been really successful at developing good GPUs, but Havok could be a great addition to their core business, the CPU. A physics core on a CPU would be a better proposition than buying a PCI PhysX card or having 3 video cards for the sake of smooth physics... Or maybe they just want an additional royalty revenue stream... Sabre
AMD honorably serves as a monopoly killer, but also as a parasite, since they don't have the resources to match Intel. So they've always lagged behind, and won't catch up in the foreseeable future, or possibly ever. That's the nature of the game though when Intel has the ideas and AMD mimics them out of necessity. I think that because of the multi-core fad, dedicated GPUs will eventually be phased out in favor of "GPU" cores (well, really just the processing), and much later on we'll just be using software rendering like the old days (like ray tracing), just as most people today use soft-DSP + DAC for all their audio instead of silly soundcards. That's progress! But so was RISC until the assclowns killed it. This new physics card thing is sorta the same: why do we need yet another processor to do what our CPU can already do now, just to relieve the load? Do we really need 300fps in games?! This mentality is really stupid because it's a waste of the consumer's money when shit ideas like this become standardized. If Intel decides to implement physics support *instructions* in the CPUs, I'm all for that, since they already have instructions for everything *and* the kitchen sink. This would make their (and everyone else's) middleware faster/better, let people keep their money, and let non-gamers use the new instructions in scientific computing.
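To make the "physics instructions in the CPU" idea concrete, here's a minimal sketch of the kind of thing middleware can already do with the SIMD instructions x86 has today (plain SSE in this case): physics-style vector math on the CPU, no separate physics card. The function and variable names are invented for illustration.

```c
/* A minimal sketch, assuming nothing beyond plain SSE: physics-style vector
 * math done on the CPU with SIMD. Names are made up for the example. */
#include <xmmintrin.h>   /* SSE intrinsics */

/* Advance one particle: pos += vel * dt, four floats at a time (x, y, z, pad). */
static inline void integrate_particle(float pos[4], const float vel[4], float dt)
{
    __m128 p = _mm_loadu_ps(pos);
    __m128 v = _mm_loadu_ps(vel);
    __m128 d = _mm_set1_ps(dt);
    p = _mm_add_ps(p, _mm_mul_ps(v, d));   /* p = p + v * dt */
    _mm_storeu_ps(pos, p);
}
```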
Nice one lol... Can you elaborate on the kitchen sink? Man, if it has optimisation for doing dishes, I'm all for it too... But you're right that the fps race is pointless; that's why I bought a cheap £25 video card and got an Xbox 360. That solves my problem of playing on PC: as long as it runs Counter-Strike: Source, that's all that matters to me. Sabre
I strongly disagree. AMD vs Intel is like a sine wave: one out-trumps the other and it goes back and forth. Until very recently Intel CPUs were overly hot, expensive in power terms and not as fast as comparable AMD chips. They were also more expensive. At the moment Intel are ahead because they had the multi-core technology advantage. I do not think multi-core will extend to something like GPU pipes. However, there will ultimately have to be a step away from the x86 architecture in the future and into something more modern. I'm pretty sure processors have been RISC cores for a long time now, with the x86 instruction set layered on top? I tend to agree. The only reason PC technology is progressing at such a rate is gaming. If you look into next year (and the benchmarking for Vista) we can expect quad cores with over 2GB of RAM. For general home/office applications it is total and absolute overkill. For gaming it is not enough. However, I don't think we will ever see gaming move to dedicated consoles - rather we are likely to see consoles and PCs converge into home entertainment systems that do a bit of everything. The PS3 is another step towards that kind of environment. We'll see the two industries join when we have more processing power and memory than we know what to do with. Eventually gaming really will be just about the games.
But the point is Intel is always first; even when at a given moment AMD outperforms a comparable Intel chip, AMD doesn't win. When AMD develops their own architecture that people like more than IA, that would be trumping Intel. When have any of their forks ever been successful compared to Intel counterparts? I think they'll always be Intel's bitch since Intel leads the way. But most software isn't multithreaded yet. I'm pretty sure the increase in performance right now is due to quicker instruction logic, which came from the new manufacturing process, and fewer cache misses from the enlarged cache. What's a GPU pipe, and why not? GPUs mostly do massively parallel calculations and transformations (see the sketch after this post). CPUs can easily be built to do that too. Even FPGAs can do that. If Intel were to incorporate a few medium-sized FPGA cores into the architecture, we could have custom proprietary GPU functions on a game-by-game basis. That would be pretty awesome. x86 is both one of the oldest and the most modern in one package; I'm starting to think we'll never leave it. There are still some fundamental differences, and I hope this day never comes.
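As an aside on the "massively parallel calculations" point: the bread-and-butter GPU job of transforming a batch of vertices by a 4x4 matrix is just an independent-per-element loop, which a CPU can run in software and split across cores or SIMD lanes. A toy sketch, with invented names:

```c
/* Toy illustration: the classic GPU transform job done as plain CPU code.
 * Each vertex is independent, so the loop parallelizes trivially. */
typedef struct { float x, y, z, w; } vec4;

void transform_vertices(vec4 *out, const vec4 *in, const float m[4][4], int n)
{
    for (int i = 0; i < n; i++) {
        const vec4 v = in[i];
        out[i].x = m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w;
        out[i].y = m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w;
        out[i].z = m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w;
        out[i].w = m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w;
    }
}
```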
Well, with the movie industry needing more and better physics for their FX and the weapon industry needing better simulators (especially with all the UAVs they're making), it's no wonder that the big-ass Intel empire wants to secure that business, just like they did with processors... Anyway, I doubt they'll release a chip; more likely they will use that technology in their existing products, like in new instruction sets and maybe some stuff on their chipsets. Nope, I don't see that happening... There's a big difference between GPUs and soundcards, basically that unless you're a real audiophile (and have some great quality speakers to boot) you won't hear the difference between a high-end soundcard and the standard sound system every motherboard comes with these days. Graphics on the other hand are a whole different thing, cuz you've got to be almost blind if you can't see all the differences between a game running on a $500 card and one running on a $40 one. I know measuring the FPS is a crazy thing to do (I don't do that BTW) but believe it or not the number of enthusiasts doing that grows every day. And that's good news for the nVidia-ATI duopoly, cuz they can keep selling their overpriced GPUs... Things will be the same for some time: people who don't play games will continue using the integrated graphics chip on their motherboards, and those chips will remain a low-end option since nobody would pay more for a powerful chip if they aren't going to use it at all. In the long run those chips may get embedded into the CPU, just like AMD is trying to do now, but even in 10 years I still see people buying GPU cards to enhance their graphics...
Intel didn't secure the microprocessor business, they STARTED IT (originally competing with TI). You have to give them some credit. AMD started as a dirty cloner, just like Creative Labs. That's what I said. They have to be mid-range CISC instructions though, since highly specialized instructions aren't flexible for software expansion. (They wouldn't replace software functions the way very complex instructions do.) No there is not at all; both are the same model: data -> processing -> output. <offtopic>Audiophiles are full of shit (and everyone should know that by now). 20-bit PCM @ 48kHz, which most integrated sound is capable of, surpasses EVERYONE's hearing ability. The only reason to have higher sampling rates is to 1) accurately sample high-frequency waveforms (the Nyquist limit: you can only capture frequencies up to half the sampling rate, so ~44kHz covers the ~20kHz ceiling of human hearing) and 2) get better results from a cheap analog filter.</offtopic> There is nothing that any GPU can do that any CPU can't, given enough time and memory. Today audio is practical to process in software; 15 years ago it wasn't. There is absolutely no reason why cutting-edge video won't be as well sometime. 10 years ago it was tried with MMX and 3DNow! instructions. That strategy WILL come back since CPUs are the fastest evolving component. That is TODAY on consumer equipment, not the future. Since you brought it up, the movie industry uses raytracing (on CPUs) for movies. Software has infinitely higher precision than a hardwired GPU feature. After a certain point there is absolutely no reason to offload graphics to another device.
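For what it's worth, here's a minimal sketch of the "software rendering on the CPU" point: the core of a ray tracer is just arithmetic like this ray-sphere intersection test, done at whatever precision you choose (doubles here, something the fixed-function GPUs of the day don't offer). All names are invented for illustration.

```c
/* Ray-sphere intersection, the basic building block of a CPU ray tracer.
 * Ray: origin o, unit direction d. Sphere: center c, radius r. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns the distance along the ray to the nearest hit, or -1.0 on a miss. */
double hit_sphere(vec3 o, vec3 d, vec3 c, double r)
{
    vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    double b    = dot(oc, d);                       /* half the linear term  */
    double disc = b*b - (dot(oc, oc) - r*r);        /* quadratic discriminant */
    if (disc < 0.0) return -1.0;                    /* no real roots: miss    */
    double t = -b - sqrt(disc);                     /* nearest intersection   */
    return (t > 0.0) ? t : -1.0;
}
```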
AMD led Intel with the introduction of x86-64 though. Whether you like AMD or not, you can't deny that they have done remarkably well making a place for themselves in a market dominated by the Intel/MS monopoly. Custom GPU functions? We already have those in shader units. Not true. It is archaic, but extended to keep it competitive. Far from optimal when compared with more modern chip designs, and eventually the x86 platform will just seem too dated to continue to be viable.
You misinterpreted what I said: saying they "secured" the market doesn't mean they did some kind of monopolistic practice like MS did/does; they got the market by intense competition against other companies like Cyrix and AMD. Most GPUs are VERY different from x86 CPUs, so unless you change the said industry standard that has been there for almost 30 years, I seriously doubt you'll be able to offload the GPU work to, say, one of the CPU cores. I can see a CPU+GPU mid-range combination in no time, especially with AMD's new three-core CPUs and the GPU integration done with ATI. On the other hand I can see multiple-core GPUs easily replacing CPUs in the near future... Every new game needs a more powerful GPU to boot, so as long as new games are made there's gonna be new GPUs on the market. Unless of course there's a videogame crash like the one from 1983. In that case I can see the GPU industry collapsing and being reduced to just the professional market. But what are the odds of that happening anytime soon?
They put it to market for consumers first, but does it matter? How much do we rely on 64-bit today? AMD started 64-bit too early. The only reason Intel jumped on was probably because they thought they'd lose the market to dumb gamers who want 64-bit systems. I have nothing against AMD; I can't say they've done remarkably well though, since they have such a small market share compared to Intel. It's not the same since they aren't 100% flexible. You can't transfer a configuration file to the GPU in a few nanoseconds and now have a new function. "Shader units" are GPU microprograms, no? Microcode can't touch an FPGA. What's an optimal chip design? PowerPC? That's the most advanced RISC I know of, and look how far they've gotten. A long time ago RISC meant fewer transistors, higher clocks, more meaningless MIPS. Today, with multiple cores, huge caches, huge pipelines, huge superscalar designs, you get the same transistor counts as you do with CISC! What's more, CISC instructions have been getting faster over the years, probably approaching RISC efficiency.
You're misinterpreting what I said. They have little competition from AMD and Cyrix because AMD and Cyrix use Intel architecture. The competition that they do have may push them to put out better products, but most of the time Intel goes their own way, and everyone else follows because they have to, since Intel is the market leader. GPUs are CPUs with specialized focuses. There's no reason why the de facto Intel CPU can't get a core with GPU-style parallelism. In a convoluted way you're agreeing with me: while GPUs may be Turing-complete, they will never supersede a general CPU without BECOMING one. Right now you're arguing the exact same thing as me, except backwards (that a GPU can be a CPU and not the other way around...) :-( This is true until the pixel is exhausted, which is not far away. It's only necessary to do so much in hardware before it's more cost-effective to use software. I have no idea what you're talking about. The professional market doesn't rely on GPUs for their products; just like everyone else, they use GPUs for real-time productivity and CPUs for the final product. The GPU industry has been around for like 10 years... before that it was software or state machines (like console VDPs) which do not actually process anything.
Until the K6-2 and K7, and especially the K8 compared to the P3 and P4, you were right, but now nothing could be further from the truth. All Intel seem to be doing is making everything clock higher: more GHz, more L2 cache, higher FSB speed! Even the new Core CPUs are based off the old P3. AMD's CPUs are more technical: SGoE, HyperTransport, Direct Connect, first to market with real dual-core chips, created x86-64 (Intel directly copied that: look at XP x64 edition, the i386 folder is called the AMD64 folder). Not to mention the integrated memory controller for fast memory speeds. The only thing AMD really lack are the facilities and cash to build the fabs for making these higher-clocked chips. You're only looking at the desktop side of things: servers are what's important for the future. What you see in today's servers is what's gonna be in tomorrow's desktops. AMD64 is required for many operations, and is NEEDED if you want more than 4GB RAM (3GB in Windows), which is a barrier a lot of us are touching (at 2 or 3GB you're almost there; my mobo could max out at 8GB, or even 16GB once 4GB sticks are out). With more than 4GB RAM, processes don't have to access the HDD as much, and there goes a massive bottleneck. If coded correctly, the increases in performance are huge. When MMX, SSE, DDR and PCI-E came out, they were useless too, since no one used them. Once they're adopted, you really get to see the benefits. This echoes in all technology. I know people who have just got themselves DVD players, and now HD DVD and BD are out, it's outdated.
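A quick aside on that 4GB barrier, since the arithmetic is the whole story: a 32-bit pointer can only name 2^32 distinct byte addresses, which is exactly 4 GiB, so addressing more RAM than that needs 64-bit pointers (or workarounds like PAE). A trivial illustration:

```c
/* Trivial illustration of the 4GB barrier: 2^32 byte addresses is 4 GiB. */
#include <stdio.h>

int main(void)
{
    unsigned long long bytes_32bit = 1ULL << 32;        /* 4,294,967,296 */
    printf("32-bit address space: %llu bytes = %llu GiB\n",
           bytes_32bit, bytes_32bit >> 30);             /* prints 4 GiB */
    return 0;
}
```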
Havok is a software physics engine. With Intel acquiring their IP, this puts them at a strategic advantage against AMD. Intel can later on push a new form of physics rendering and make it run much more efficiently and faster than it currently does. One can imagine this will set AMD back even further than it is now. However, Intel is still at a disadvantage. AMD is rumored to be working on a new chipset that can give it an edge in terms of bandwidth between the CPU and GPU (ATI). Graphics still equal dollar signs for game publishers. So one could say that Intel acquiring Havok isn't necessarily just a strategic move against AMD. It is one, but how big an impact it can have against AMD is anyone's guess. Any hardcore graphics programmer would tell you that the x86 architecture is old and they need something new if people want realistic graphics with realistic AI and physics. x86 is just getting too old.
More "technical"? These features just move the so-called "chipset" on-die. I don't know what SGoE is (stop execute bit?), but HyperTransport is a general-purpose packet-based interface like PCI-E, and Direct Connect just removes the front-side bus and integrates the memory controller, but doesn't remove the actual DRAM bottleneck, which is the important thing, and is largely irrelevant because of caches. The only advantage this would have is on a cache miss, which are becoming rarer and rarer. Right, but as I acknowledged, 64-bit (and dual core) were released too early for consumers. There are very few 64-bit or multithreaded apps now or even being developed now. There won't be for years, or until these paradigms become a requirement in Windows. This is typical jargon; EVERY CPU for the past 25 years has had a built-in memory controller; that's how protected memory (paging) and caches work. It's not all about the clock speeds, it's about the logic density, which allows you to do more per clock cycle. The clock speed issue is really about AMD's inferior electrical/metastability skills. If AMD were so hot now, why don't we see Direct Connect peripherals instead of PCI-E? Why are we still using the ATX form factor, moving towards only SATA, and going to use Intel's 64-bit architecture instead of AMD64 even though AMD released it first? Easy--because Intel is more popular. Most server architecture is NOT merged back into desktop architecture, only the paradigms are. But this has been true forever. AMD didn't nearly invent the 64-bit address space, and AMD64 is only required for many operations written for AMD64, i.e. mostly UNIX server apps, which most people couldn't care less about. Yes it's true that AMD won the 64-bit instruction race, but we have yet to see who's won the war, since 64-bit has yet to be embraced fully. I don't really care either way. What the hell do user apps take advantage of 4GB+ of RAM? I would rather save the money and power and do that "loading" thing. The thought of 16GB of 4KiB memory pages is pretty funny though...
I think it's more like anyone who actually writes x86 assembly, which most people don't do, so they shouldn't complain about it. Seems to me like it's just popular to hate x86 now, for not many good reasons. Most people wouldn't disagree that MIPS is a much nicer architecture, but in reality MIPS has its uses and x86 has its uses. The question I'm trying to raise is: what is better than x86, and how is it better? x86 is doing some amazing things with a handful of registers.
Sorry, SGoE was a typo, I meant SiGe, or silicon-germanium, and SGoI, silicon germanium-on-insulator: an improved way of making wafers. Direct Connect is indeed the way the core talks to the built-in northbridge/memory controller, but it also allows multiple cores to talk to each other, helping to stop 'Saturn Syndrome' where the processors have no communication. And the reduction in latency from the on-board controller really did make a difference. Less of one now that DDR2's arrived, but latency is reduced. By that same logic, we wouldn't have had cars, or TVs, or recorded music until about 30 years ago, when they started being affordable, reliable and of higher quality. Unless the product is on shelves and in computers, there's no incentive for developers to make products for it. A DDR/DDR2 memory controller? Latency benefits mentioned above. Meh, the clock speed thing was actually a guess-ish. But AMD have teamed up with IBM to develop and use strained silicon/SiGe technology, which is a much better material to base chips on, as electricity travels through it faster. Two reasons why PCI-E: 1 - it's universal, so everyone can use the same cards. 2 - Direct Connect is designed for inside the CPU, not the external buses. It's designed for a completely different use. Why you're bringing up ATX baffles me: INTEL tried to push BTX a few years ago, but it fell on its face like a sack of shit. No one wanted it; they'd rather have standards. And on Intel64, what? Intel64 is identical to AMD64, save for a register or two. I have no idea on SATA, so I can't say. Well, the PPC arch is based off a server, AMD's new server chips are the first with per-core and independent northbridge speed limiting, and Xeons were the first x86-64 and quad-core Intel chips. Well, if it's adopted identically by everyone, we all win, as developers have one 64-bit x86 platform for making their apps.
Dude, you're talking about x86, one of the oldest architectures. It's like saying the Chinese motor companies are copying the Germans cuz the latter invented the car 120 years ago... And if we get to the point, AMD has been using the follow-up to NexGen's Nx586 since the K6, just like Intel is still using the PIII. In theory, and I mean really basic theory, yes, but in reality x86 CPUs are quite different from today's GPUs. In fact the only CPU that's kinda similar to a GPU is the Cell from the PS3, and that one is also very different from x86. And I was talking about a CPU and a GPU on the same chip; you were talking about using a slightly modified x86 CPU as a GPU. It's like making a fighter plane with a propeller engine instead of jets... Of course! But I'm talking about a modern GPU compared to a modern x86 CPU. In general all ICEs are the same, no matter if it's in a car, a boat or a propeller plane, but when you get into details things are very different, to the point that using a car engine to fly a plane will make the vehicle slow and inefficient. We're still very far from that, 15-20 years, which in computer time is an eternity (that long ago a very powerful machine had 50MB of RAM; today even 4GB is almost standard). Even movie FX are getting more complex each year, and there's a bigger chance of game development getting exhausted due to increasing costs than of the pixel itself reaching its limit. You're joking, right? What about Quadro? And the FireGL? What are those used for besides CAD and rendering farms?
That doesn't really make sense; there shouldn't be a northbridge bus if northbridge peripherals talked Direct Connect, which sounded like the intention. Also, isn't the memory controller supposed to be on-chip? Multiple CPUs never have been and never will be as integrated as multiple cores, so it seems like this technology is made obsolete by the multi-core trend. Not at all, I just hoped that when IA went 64-bit, it would jump, like PowerPC did, to a true 64-bit platform. The 64-bit extension is quite similar to the color addition to TV: inferior for the sake of backwards compatibility. Anyway, until the market is saturated with 64-bit and multi-core systems, there will be no incentive for developers to BEGIN to use those paradigms, since the programming model is so different. A DDR/DDR2 memory controller arbitrates reads/writes from the FSB and refreshes the RAM; it's not its job to reduce latency, that's up to the DRAM. Having the controller on-die is a good idea, but for now it's kinda pointless since AFAIK FSBs are still faster than the DRAM itself. It sounds to me like Direct Connect was meant to replace the FSB they took out. It has to be for external buses if it's meant to interface with the northbridge or other CPUs. I did because it's Intel's standard that we're still using. Intel wants stuff smaller and actually wants to regulate even more things like power supplies, which everyone else isn't comfortable with. I meant people are going to use Intel's 64-bit parts instead of AMD's, despite Intel taking AMD's instruction set. PPC arch based off a server? It was created to replace 68Ks in Macs. Xeons have been around for a very long time now, probably the first popular dual-CPU arch, and it still hasn't caught on because multi-core is better. Arguably, right now the desktops which have such ridiculous processor setups are not even desktops but servers themselves. How many people do you know with quad-core 64-bit machines? The fact is that 90+% of the world still uses 32-bit single-core systems. Right now the situation is still the same as it's always been: developers release three builds, a generic build, an Intel-optimized build, and an AMD-optimized build (rough sketch of the vendor detection below).
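To put that vendor-specific-build point in concrete terms, here is a minimal sketch of a closely related technique: detecting the CPU vendor at runtime with the x86 CPUID instruction and picking a tuned code path. This is GCC-style inline asm for x86/x86-64, and the function names are invented for illustration.

```c
/* A sketch of vendor-specific dispatch, a close cousin of shipping separate
 * Intel/AMD builds: read the CPUID vendor string ("GenuineIntel" /
 * "AuthenticAMD") and choose a tuned path. Names are invented. */
#include <stdio.h>
#include <string.h>

static void cpu_vendor(char out[13])
{
    unsigned int a, b, c, d;
    /* CPUID leaf 0 returns the vendor string in EBX, EDX, ECX (in that order). */
    __asm__ volatile ("cpuid"
                      : "=a"(a), "=b"(b), "=c"(c), "=d"(d)
                      : "0"(0));
    memcpy(out + 0, &b, 4);
    memcpy(out + 4, &d, 4);
    memcpy(out + 8, &c, 4);
    out[12] = '\0';
    (void)a;   /* EAX holds the highest supported leaf; unused here */
}

int main(void)
{
    char vendor[13];
    cpu_vendor(vendor);
    if (strcmp(vendor, "GenuineIntel") == 0)
        puts("using the Intel-tuned code path");
    else if (strcmp(vendor, "AuthenticAMD") == 0)
        puts("using the AMD-tuned code path");
    else
        puts("using the generic code path");
    return 0;
}
```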