The answer may lie in the royalty fees arising from the EE. The EE was co-designed by Toshiba and Sony, whereas the GS was Sony's work alone. Since Toshiba, IBM, and Sony all worked on the CELL, perhaps Sony thought it was overkill to keep paying Toshiba two fees?
It'll be interesting to see how well the PS2 can be emulated while still using its graphics chip. That's actually not a bad approach, since I'd imagine the graphics chip would be a real pain in the ass to emulate well, whereas the CPU probably wouldn't be. But Europe is still being screwed, since it's true they're paying more for less. I like to think of it as the TV tax: you guys have RGB input on your TVs, therefore you must be taxed, because us poor North Americans do not.
“The Emotion Engine has been removed and that function has been replaced with software,” said Nick Sharples, a spokesman for Sony in London. That has a “slightly detrimental effect” on compatibility, he told the IDG News Service [source: http://www.dailytech.com/article.aspx?newsid=6208]. So we have some Sony sources saying "limited" and others saying "slightly detrimental", whatever that means (PR spin?). Perhaps compatibility won't be as bad as we're predicting, but the EE is not a simple piece of hardware to emulate in itself. It seems like a hellish bodge job, and I can see firmware woes in the future with two fairly major versions of the hardware (especially for firmware hackers).
The GS is massively parallel and relies on the huge bandwidth (~48GB/s!) of its eDRAM, which makes it very difficult to emulate (if not impossible on the PS3 hardware alone). Of the two, the EE is better suited to emulation because the VUs can be approximated with the SPEs and the rest of the logic spread across the others, leaving the PPE to emulate the MIPS core. PS2 developers, good ones at least, resorted to all kinds of tricks and played on the hardware quirks of the system, so it must be a nightmare getting it all to work. I don't envy Sony's R&D teams on this one.
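To make the VU-to-SPE idea a bit more concrete, here's a minimal interpreter-style sketch in plain C. The opcode layout and register file are invented for illustration; the real VU ISA (upper/lower instruction pairs, broadcast fields, etc.) is far richer, and a real SPE implementation would batch and vectorise this:

[code]
/* Hypothetical sketch: an SPE-hosted interpreter executing a
 * VU-style MADD (accumulate a*b into ACC per 4-float vector,
 * then write ACC to a destination register). The opcode ids and
 * register layout are invented for illustration. */
#include <stdint.h>

typedef struct { float x, y, z, w; } vu_vec4;   /* one emulated VU register */

typedef struct {
    uint8_t op;             /* invented opcode id */
    uint8_t dst, srcA, srcB;
} vu_instr;

enum { VU_OP_MADD = 1 };

static vu_vec4 vu_regs[32];  /* emulated VU floating-point registers */
static vu_vec4 vu_acc;       /* emulated accumulator */

/* Execute one decoded instruction. */
static void vu_exec(const vu_instr *i)
{
    switch (i->op) {
    case VU_OP_MADD: {
        const vu_vec4 *a = &vu_regs[i->srcA];
        const vu_vec4 *b = &vu_regs[i->srcB];
        vu_acc.x += a->x * b->x;
        vu_acc.y += a->y * b->y;
        vu_acc.z += a->z * b->z;
        vu_acc.w += a->w * b->w;
        vu_regs[i->dst] = vu_acc;
        break;
    }
    default:
        break; /* other opcodes elided */
    }
}
[/code]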
Jim, your points are very true from the technical side, which apparently Sony took into consideration when it chose not to replace the GS. But the EE is quite capricious, not only in (technical) function but in how it was used in relation to the console's bottlenecks and overall architecture, as you mentioned. Very hard work indeed, and I don't see the emulation ever reaching the claimed 98% it has on existing PS3 machines. Looking back, the PS2 is a very curious beast with a unique way of doing things internally, perhaps not the most efficient way. Streaming was the motto; too bad theories sometimes don't translate well from paper to silicon. For me, the Gamecube is a joy-box, if not a playground for programming, even compared to the XBOX. A really admirable piece of hardware and architecture, easy to manage and restricted only by its actual raw capacity (no hidden fees, no 'do this but don't try that', etc.), and very spot-on efficient for what it is. I don't care if it's essentially weaker in number crunching than both the PS2 and the XBOX; it's simple algebra and no black magic ;p
The PS3 seems to be doing things in a 'unique' way too. It's like they hired ex-Saturn hardware designers: "Hmm, we want it to be POWERFUL! It must be complex to match!" Anyone know if the disc drive has its own CPU?
It is complex, sure, but I think PS2 programmers can migrate more easily than PC developers. A lot of people see a complex machine and liken it to the Saturn, but the Saturn was a definite bodge-job system that saw a spec change right at the end of its development. The main complexity problems with the PS3 come down to the way CELL operates and how different that is from traditional processing cores (each SPE works out of its own small local store and has to move data in and out explicitly; see the sketch below). Microsoft did well to pick ATI as their graphics core partner, I reckon. The RSX is the weakest link in the chain.

@Barcode: I'd say the PS2 is the most interesting from a coding perspective; it is the biggest challenge at least! Sony should have increased the RAM sizes on the video buffers, but eDRAM was and is expensive, so I guess they chickened out of that one. Texture memory was the biggest weakness IMHO. Having said that, the XBox was definitely the one I'd like to have worked on. The GPU is very well documented and the dev environment for the XBox was superb. The worst thing about it was probably that Microsoft tend to force the APIs onto you and don't let you get too close to the hardware, though I suspect some devs were given more access than others, depending on the project.

One thing we can expect to see in the future, if BC becomes the standard expectation each new generation, is the big three becoming stricter on API use. That is definitely not what we want, because the really interesting stuff comes about when we see engines that do things in unique ways, and we want to see devs really pushing the hardware.

The GC is a very nice design for sure, but it came too late and Nintendo was in charge of it. Nintendo are a great developer, but they aren't particularly good with third parties.
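For anyone who hasn't touched CELL: the thing that throws traditional-core programmers is that an SPE can't just dereference main memory. Everything has to be DMA'd into the 256KB local store, processed, and DMA'd back out. A rough sketch using the standard MFC intrinsics (the buffer size, the effective address parameter, and the doubling "work" are just placeholders):

[code]
/* Sketch of the explicit-DMA model that makes CELL 'different':
 * pull a chunk of main memory into local store, process it, and
 * push the results back. Chunk size and address are placeholders. */
#include <spu_mfcio.h>

#define CHUNK 4096
static float buf[CHUNK / sizeof(float)] __attribute__((aligned(128)));

void process_chunk(unsigned long long ea)
{
    const unsigned int tag = 0;

    /* Pull CHUNK bytes from main memory into local store. */
    mfc_get(buf, ea, CHUNK, tag, 0, 0);
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();        /* block until the DMA completes */

    for (unsigned int i = 0; i < CHUNK / sizeof(float); i++)
        buf[i] *= 2.0f;               /* stand-in for real work */

    /* Push results back out to main memory. */
    mfc_put(buf, ea, CHUNK, tag, 0, 0);
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}
[/code]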
Who the hell is Barcode? :dance::lol: The GC had a near-perfect ratio of price/capability/versatility/ease of handling, and with the Wii, all that documentation has gotten even better, from what I read of the SDK. The PS3 is a different type of unique from the PS2 and the Saturn. For one, it's much more organized and offers fairly large chunks of memory. Coders need to master the memory access concept (since it isn't handled automatically, there's no dirty coding around it) and just see what they can push. This would probably translate to worse ports, but we all know that consoles shine with exclusives. Nvidia's RSX may indeed prove to be the lowest common denominator. I feel sorry for Sony for never having any luck with their graphics chips when they make such wonderful (conceptual) processors! What's the exact deal with the 256MB XDR + 256MB GDDR3 split, though? Who can use which pool, when, and why is it segmented?
Only Microsoft have gone with unified memory to date, and that only works on the XB360 because of the eDRAM on the GPU. Segmented memory is a necessity for Sony because XDR is very expensive! I think the RSX will shine (it already is with some titles), but ports from the XB360 will look worse unless devs put a lot of time in (which they probably won't in most cases, FEAR being a good example). The SPEs in the CELL are being used to assist the RSX in a similar way to how the EE and GS work together, with geometry culling etc. (see the sketch below). Of course, the RSX has a nice array of pixel shaders and vertex units which take it well beyond that comparison, but the XB360 has unified shaders, which makes it a lot more flexible and will make games originally designed for the XB360 tricky to replicate. The RSX is by no means a bad processor; it's just a generational step behind the XB360 GPU design. It has stronger pipelines for shader operations and it is more familiar to PC GPU coders, but the XB360's CPU is easier to adapt to. PC GPUs are going to go the way of the XB360 GPU in the next few years, so who knows where that will leave the PS3 (although PC ports never sell a system like a console-specific title will). It's all about the games in the end, but it is fun to look at technical specs :grin:
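To illustrate the kind of work the SPEs can take off the RSX: backface culling is just a signed-area test per triangle, which is embarrassingly parallel. A hedged sketch in plain C (the struct names are mine, not from any Sony SDK; a real SPE version would be SIMD-vectorised and fed by DMA as above):

[code]
/* Illustrative backface culling ahead of the GPU: compute the 2x
 * signed area of each screen-space triangle and keep only the
 * counter-clockwise (front-facing) ones, compacted into out[]. */
#include <stddef.h>

typedef struct { float x, y; } vec2;   /* post-transform screen position */
typedef struct { vec2 v[3]; } tri;

/* Returns the number of triangles kept. */
size_t cull_backfaces(const tri *in, size_t n, tri *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        const vec2 *a = &in[i].v[0], *b = &in[i].v[1], *c = &in[i].v[2];
        /* 2x signed area; positive => counter-clockwise winding */
        float area2 = (b->x - a->x) * (c->y - a->y)
                    - (b->y - a->y) * (c->x - a->x);
        if (area2 > 0.0f)
            out[kept++] = in[i];
    }
    return kept;
}
[/code]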
Honestly, I can't see how making a standalone version of an old chip again is more cost-effective than using the SOC that's already in production and will be for the next two or three years. The only explanation would be those fees Toshiba charges for the design, but I doubt they were as high as what Intel charged for the Xbox Celeron. As for the PS3, it's easy to see the disadvantages of the RSX, since it's nothing but a souped-up 7950, while the Xenos is an R600, a new generation of ATI architecture that came out more than a year before the PC version.
The only real similarity between the Xenos and the R600 is that they both feature unified shaders. Xenos has 48, the R600 has 64. In raw polygon terms, Xenos has about half the throughput of the R600 (500 million polygons/sec vs over 1 billion), but Xenos is not directly comparable thanks to its 10MB of eDRAM and so on. The Xenos is a custom-solution chip for the XB360.
There was a HUGE fight in some dev forum about this. What I could gather from it all is that the Xenos is an R600, but the next generation of ATI PC GPUs will be based on the R650 (or R680, I don't remember), which is related to the R600 but not the same thing. I'm not into deep technical specs, but some of the guys in that forum were apparently working on the X360.
I've heard that the Xenos has the same number of shaders on the die as the R600, but with 16 disabled to improve yields. Is there any truth to this?
I note that on Sony's 'semi-official' Three Speech website, Phil Harrison says the Euro PS3 will be backwards compatible with over 1000 PS2 games, so maybe it's not so limited after all...
There are 2400-odd PS2 titles in Europe, so 1000 works out to less than 50% compatibility (1000/2400 ≈ 42%), though apparently they are tweaking the emulation for the big names.
Glad I got a PS3 when I did, then. I'll have access to a retail PAL unit next week, though, so it will be interesting to see how compatible PS2 and PS1 games are.
Seems like a non-issue. Phil Harrison is quoted as saying nearly 1000 titles (out of approximately 2500 released in Europe) will work, with further updates planned to get more games working. Also mentioned was the possibility of 720p resolution via software.