Intel buying Havok

Discussion in 'General Gaming' started by sabre470, Sep 17, 2007.

  1. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    So? It's a completely valid point.


    No, not even in theory. A separate core can be implemented however you want; it doesn't have to be identical to all the others.

    A CPU core with GPU instructions is the same thing as a GPU on the same chip!

    So am I. Where are you going with this?

    How did we live for all those years without GPU?

    Dude, I'm talking about the future man.

    When I say pixels being exhausted, I mean filling a scene at a decent resolution where each pixel is a ray, in real time, at a reasonable frame rate. We can already do that with "supercomputers"--completely in software, today. It won't be long before that's brought home, with superduperscalar CPUs like the Cell as you mentioned.
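
    To make the "each pixel is a ray" idea concrete, here's a minimal sketch of pure software ray tracing in C (a toy of my own for illustration, not taken from any real renderer): one primary ray per pixel, one hard-coded sphere, output as a PPM image, and not a GPU in sight.
    Code:
    /* toy one-ray-per-pixel software renderer: runs entirely on the CPU */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const int W = 640, H = 480;            /* a "decent resolution" for the sketch */
        const double cz = 5.0, r = 1.5;        /* sphere centre (0,0,cz) and radius */
        printf("P3\n%d %d\n255\n", W, H);
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                /* primary ray from the origin through this pixel */
                double dx = (x - W / 2.0) / H, dy = (y - H / 2.0) / H, dz = 1.0;
                double len = sqrt(dx * dx + dy * dy + dz * dz);
                dx /= len; dy /= len; dz /= len;
                /* ray-sphere intersection: t = b - sqrt(b^2 - (|c|^2 - r^2)) */
                double b = dz * cz;
                double disc = b * b - (cz * cz - r * r);
                if (disc >= 0.0 && b - sqrt(disc) > 0.0) {
                    double t = b - sqrt(disc);
                    /* surface normal at the hit point, then a fixed diffuse light */
                    double nx = t * dx / r, ny = t * dy / r, nz = (t * dz - cz) / r;
                    double l = nx * 0.6 - ny * 0.6 - nz * 0.5;
                    int c = l > 0.0 ? (int)(l * 255.0) : 0;
                    printf("%d %d %d ", c, c, c);
                } else {
                    printf("30 30 60 ");       /* background colour */
                }
            }
            printf("\n");
        }
        return 0;
    }
    Build it with something like "gcc -std=c99 -O2 trace.c -lm" and redirect stdout to a .ppm file. The point is only that every pixel is an independent ray, which is exactly the kind of work that scales with more (or wider) CPU cores.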

    Render farms aren't typical GPUs; they're more like your "GPU acting as a CPU" analogy. They take the role of a traditional CPU, i.e. ray tracing, but are optimized to do it better. CAD hardware is the same. The reason people use GPUs is to get imprecise results fast.
     
  2. mairsil

    mairsil Officer at Arms

    Joined:
    Apr 20, 2005
    Messages:
    3,425
    Likes Received:
    153
    You are still trying to compare a general-purpose scalar processor with a special-purpose vector processor. Sure, you can make either of them do what you want in most cases, but they both have their own strengths and weaknesses. Also, I just want to point out that there seems to be a lot of focus on desktop and server hardware considerations, but don't forget about the embedded systems market.
     
  3. Shadowlayer

    Shadowlayer KEEPIN' I.T. REAL!!

    Joined:
    Jan 16, 2006
    Messages:
    6,563
    Likes Received:
    8
    Yeah right :rolleyes:....


    An x86 CPU with GPU instructions is the same as a GPU? And what about graphics pipelines?

    What you're basically saying is that in "the future" CPUs will replace GPUs, but hell, if we're going to talk about the future we can even talk about quantum computing taking over, or organic chips for that matter!

    Plus, there are no current projects to replace GPUs altogether with CPUs. The closest thing is the CPU+GPU on a chip that AMD is developing, and that's still far from your idea of using a CPU core for graphics instead of a GPU on a graphics card like we do today.

    But if you take a look at GPGPU you'll see we're closer to actually replacing CPUs with GPUs, since if the application is written to take advantage of the GPU it can run over 40 times faster than on a CPU.


    You said the professional market doesn't rely on GPUs for its products, yet it does, and while a Quadro isn't like a GeForce, it's still closer to one than to, say, a C2D, so it's a GPU.

    You were saying that all GPU operations will be done in software, so what better choice than a GPU to do it? After all, they're so good at parallel computing that some engineers are already replacing CPUs with GPUs in grids and high-performance clusters (supercomputers).
     
    Last edited: Sep 18, 2007
  4. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    Sure, why not? An entire GPU "core" could sit behind instructions; it doesn't have to have anything to do with existing x86 instructions, and in fact it wouldn't.

    You're the one that missed that point completely and responded haphazardly. Look back for the keywords "eventually" and "future". And I'm *not* saying CPUs will *replace* GPUs, I'm saying that GPUs won't be necessary, just as soundcards aren't anymore. For many applications GPUs still aren't used; you sure as hell don't need a GPU for browsing the internet or typing a Word document (unless you're on Windows Vista or OS X), just a framebuffer. 2D games only need a framebuffer as well, no GPU or even a VDP anymore, since CPUs are now fast enough to calculate and blit pixels with plenty of time to spare for the game. This will eventually happen to 3D too; it's inevitable. By the time it happens maybe we'll have 3D displays, but it WILL happen.
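
    And just to sketch the "2D only needs a framebuffer" point: blitting a sprite is nothing more than a clipped copy loop, the kind of thing any modern CPU does with time to spare. This is a made-up minimal example in C, not code from any particular game.
    Code:
    /* minimal software blit: copy a sprite into a framebuffer, CPU only */
    #include <stdint.h>
    #include <string.h>

    #define FB_W 640
    #define FB_H 480

    /* 32-bit RGBA framebuffer; a real program would hand this to the display */
    static uint32_t framebuffer[FB_W * FB_H];

    /* copy a w*h sprite to (dst_x, dst_y), clipping against the screen edges */
    void blit(const uint32_t *sprite, int w, int h, int dst_x, int dst_y)
    {
        for (int row = 0; row < h; row++) {
            int y = dst_y + row;
            if (y < 0 || y >= FB_H)
                continue;                                    /* clip top/bottom */
            int x0  = dst_x < 0 ? -dst_x : 0;                /* clip left */
            int end = (dst_x + w > FB_W) ? FB_W - dst_x : w; /* clip right */
            if (end <= x0)
                continue;
            memcpy(&framebuffer[y * FB_W + dst_x + x0],
                   &sprite[(size_t)row * w + x0],
                   (size_t)(end - x0) * sizeof(uint32_t));
        }
    }
    A 640x480, 32-bit frame is only about 1.2 MB, so even a naive loop like this can redraw the whole screen hundreds of times per second on a 2007-era CPU, which is the point about 2D no longer needing dedicated hardware.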

    Don't be a stubborn dumbass, of course there are: http://www.pcper.com/article.php?aid=334
    How do you think they made 3D before GPU? I've only said raytracing like a hundred times in this thread. Read: http://en.wikipedia.org/wiki/Software_rendering

    Ehrm, CPUs always "replaced" "GPUs" until 1997 or so (and still do in the applications I mentioned above). GPUs can only emulate CPUs just as CPUs emulate GPUs. If you give a GPU CPU instructions, it's not a fucking GPU anymore, it's a CPU, since GPUs aren't general purpose. There's no reason why a CPU can't have the same abilities as the GPU you describe. http://en.wikipedia.org/wiki/Commutativity

    They don't rely on them. The only things that rely on GPUs are real-time games. Offline rendering does NOT rely on GPUs.


    Clusters aren't used for consumer productivity software; they're used for specific algorithms, algorithms that actually make use of the parallelism! You're completely missing the point.

    Edit: What do you think the branch performance would be on a cluster? lol
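
    To put the "specific algorithms" point in code terms (a made-up contrast for illustration, nothing more): the first function below is embarrassingly parallel, every element independent, which is exactly what GPUs, grids and clusters are good at; the second is a branchy serial dependency chain, where every step needs the previous result and extra cores or nodes buy you nothing.
    Code:
    /* contrast: data-parallel work vs. a branchy serial dependency chain */
    #include <stddef.h>

    /* (a) embarrassingly parallel: each output element is independent, so the
       loop can be split across GPU threads or cluster nodes with no talking
       between them */
    void scale_add(float *y, const float *x, float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* (b) serial and branchy: each iteration depends on the one before it, so
       throwing more cores, nodes or GPU threads at it helps not at all */
    long collatz_steps(long v)
    {
        long steps = 0;
        while (v > 1) {
            v = (v % 2 == 0) ? v / 2 : 3 * v + 1;
            steps++;
        }
        return steps;
    }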
     
    Last edited: Sep 18, 2007
  5. Jamtex

    Jamtex Adult Orientated Mahjong Connoisseur

    Joined:
    Feb 21, 2007
    Messages:
    5,472
    Likes Received:
    16
    Point 1. Intel didn't invent the microprocessor; they just happened to be the first to make one for the consumer market. In fact, if Busicom had decided to keep the rights and make more use of them, Intel would probably still be making memory components today.

    Point 2. In ye olde days, most computer companies would not use a CPU unless it could be second sourced, so if a company went bust or couldn't make enough, the computer maker could get the chips from another company. That's why, for just about any CPU you name, you will find another company that made the same chip. Zilog, for example, allowed companies (like NEC and Sharp) to make clones of the Z80 royalty free, and in doing so Zilog took the market share of the CP/M computer market whilst Intel floundered around. When Intel made the 8088, they licensed it to a number of companies including AMD and Fujitsu. AMD just looked at the contract and decided they could make CPUs for all Intel 80x86 chips, and Intel didn't, er, sue them. ;)

    If IBM had chosen a decent CPU like the Zilog Z8000 series (the Z8001 for example) or the Motorola 68000 (even the 68008), then things might have been different. They used the 8088 because it was an 8-bit data bus chip and would interface with cheaper 8080-based controller chips.
     
  6. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    I didn't say they invented it. As a Massachusetts student I've sat through many long lectures on DTL-family DEC minis; I know :)

    If IBM had used the Z8000, Intel probably would have been litigious, since it still had traces of the 8080.
     
  7. Jamtex

    Jamtex Adult Orientated Mahjong Connoisseur

    Joined:
    Feb 21, 2007
    Messages:
    5,472
    Likes Received:
    16
    What sort of traces? As far as I can see there isn't much in common between the Z8000 and the Z80.

    Reasons why Zilog were better than Intel, part 22
     
  8. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    I can't find an opcode table... any instruction that's byte-compatible with the Z80 would be a problem, though. I don't know for a fact that it would have been safe from a lawsuit.
     
    Last edited: Sep 18, 2007
  9. Shadowlayer

    Shadowlayer KEEPIN' I.T. REAL!!

    Joined:
    Jan 16, 2006
    Messages:
    6,563
    Likes Received:
    8
    Look who's talking! :rolleyes:

    And raytracing? Why don't we just go back to vacuum tubes? After all, those are EMP resistant...

    Seriously, we're quite far from playing today's games with ray tracing, but if you want to wait for it, go ahead...


    Well duh! That's what the GMA is being used for! And if you put a GPU core on the same chip as the CPU you are NOT replacing it, you're just making a CPU+GPU combination like the one from AMD.

    All mid- to high-end GPUs today are meant for gaming. The average Joe out there, who uses his PC for just internet and office work, has either a GMA or some other low-end accelerator. So as long as there are new games coming out, there are gonna be people buying this stuff.

    My point is simple: x86 is obsolete, and therefore if you want a CPU with GPU instructions done correctly you need a new architecture.

    Now, before you say "then what? Where's the x86 replacement?", well, RISC is still good; the problem is that since nobody uses it on the desktop (Macs are x86 now, and only consoles have RISC) there's no investment in further development of those architectures.

    It's like the electric car: we would all be driving one if someone had invested in batteries some years ago. And while cars like the Tesla do live up to the hype, the short range and high price kinda break the whole idea.

    Overall, it's a chicken-and-egg situation: nobody is getting rid of x86 until there's a reliable alternative available, but there won't be one until some company or fund invests in such a project.

    Anyway, chips with an x86 CPU and a GPU on the same die are just around the corner, but multi-core CPUs that replace the GPU by using raytracing? That's at least (and I'm being optimistic) 5-6 years from now...
     
    Last edited: Sep 18, 2007
  10. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    What do you think your GPU clusters are doing? Uhhh, raytracing!!

    We can play yesterday's games with raytracing, and tomorrow we'll be able to play today's games with raytracing. 3D will inevitably become exhausted like 2D has; it will just take longer than 2D because of that extra dimension. As we approach photorealism there will be less and less concern over hardware effects anyway, just the burden of having human modelers and programmers.

    You're the one that's still arguing because of YOUR mistake. And if you put a GPU core on the same chip as a CPU, you COULD be replacing it. A CPU can use the GPU "pipeline" for function X and have the data passed back for further processing. Key point: we're not talking about the video output stage!

    Your point is WRONG--fact. The world is dependent on X86, we still use it, it's not obsolete:
    Code:
    ob·so·lete –adjective
    1.	no longer in general use; fallen into disuse: an obsolete expression.
    "Done correctly" is entirely a matter of opinion. There many people out there who are very comfortable with and even like X86.

    From microwaves to iPods, set-top boxes, cell phones, and TVs, everything is RISC. All non-x86 processors designed within the last 20 years have been RISC. All new processors will continue to be RISC. Today even old "CISC" IP cores are implemented as RISC.

    :lol: so now you agree?! :banghead:
     
  11. Jamtex

    Jamtex Adult Orientated Mahjong Connoisseur

    Joined:
    Feb 21, 2007
    Messages:
    5,472
    Likes Received:
    16
    The Z8000 family shares nothing with the Z80 apart from the first 3 characters. The Z8000 has completely different opcodes and has 16 16-bit registers, which can be used as 8 32-bit registers or 4 64-bit registers. Apart from being slow, the chip had several features that Intel wouldn't put onto their chips until the 80386.

    Even if a CISC chip is implemented internally as RISC, if it can only be used in a CISC way then surely it's still a CISC chip. Zilog still makes chips like the Z80, eZ80 and Z180, all of which were still CISC chips the last time I looked; in fact, Zilog even state they are CISC chips, saying of the Z8 that its "Register-to-Register architecture avoids accumulator bottlenecks and is more code efficient than RISC processors." Hell, even the Hitachi H8, which is still less than 20 years old, was advertised as the most powerful 16-bit CISC chip on the market... so your argument is a bit null and void.

    8-bit and 16-bit CISC chips will still be around as long as the embedded market has a use for them. The Z80 is dead, long live the Z80. :thumbsup:

    A lot of simple devices like microwaves, washing machines, fridges and other simple electronics will more likely use an 8- or 16-bit microcontroller, mainly because anything else is overkill and costs more money to implement.
     
  12. opethfan

    opethfan Dauntless Member

    Joined:
    Dec 13, 2006
    Messages:
    753
    Likes Received:
    2
    Starting with the K5 (and K6), most x86 CPUs have been RISC chips with CISC front ends.
     
  13. Taucias

    Taucias Site Supporter 2014,2015

    Joined:
    Oct 11, 2005
    Messages:
    5,015
    Likes Received:
    17
    That's what I said earlier :)
     
  14. Calpis

    Calpis Champion of the Forum

    Joined:
    Mar 13, 2004
    Messages:
    5,906
    Likes Received:
    21
    CISC model, RISC implementation. Who's to say where RISC ends and CISC begins, though? There's no clear line.

    Z80 cores are just RISC with the orthogonality removed now. CISC chips are still marketed as such because vendors want them to sound full-featured (as if PIC or AVR aren't better featured than the Z80), despite modern RISC surpassing 8-bit CISC in number of instructions.

    Null && void? There are always a few exceptions, and the ten CISCs from the last 20 years, including school projects, would be those.

     
    Last edited: Sep 19, 2007