The official press kits and other sources for the PS2 releases of ICO and Shadow of the Colossus contain images with very high resolution. They can be seen here: https://www.giantbomb.com/shadow-of-the-colossus/3030-6522/images/ . Their resolutions are 1440x1080, 1920x1080 and 2048x1536. Many of them are certainly early in-game screenshots (from pre-release versions). The question is: how were they made?

Here is one of them: https://static.giantbomb.com/uploads/original/0/4462/1509148-lrgaspx4.jpg

Some of them have a particular darkening around the left and top edges:
https://static.giantbomb.com/uploads/original/0/4462/1509249-typeg_field004_20050725.jpg
https://static.giantbomb.com/uploads/original/0/4462/1509247-typeg_field003_20050725.jpg
https://static.giantbomb.com/uploads/original/0/4462/1509102-1178284141280.jpg

The foreground is very sharp, while the background has some filter/blur applied. The PS2 GS cannot officially render in modes higher than 1080i, and anything above that requires significant workarounds, which would have made the gameplay of a complex game far too slow, as demonstrated by Maximus32 here: http://psx-scene.com/forums/f19/high-resolution-3d-rendering-156426/ .
Some of the pictures on that page are from the PS4 version, and I think the ones you linked are probably from The ICO & Shadow of the Colossus Collection for PS3.
Many game engines can take tiled screenshots: they capture multiple screen-resolution images that are either combined by the engine itself or stitched together afterwards in Photoshop. So you combine, say, nine 640x480 images to get one big 1920x1440 image. Engines can also add advanced effects and filtering, since the game can be paused and each section rendered individually without having to run in real time.
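To illustrate, the stitching step itself is trivial once you have the tiles. Here is a rough sketch in Python with Pillow; the filenames, tile size and capture order (left-to-right, top-to-bottom) are just placeholder assumptions:

```python
from PIL import Image

# Hypothetical tiles: tile_0.png .. tile_8.png, each 640x480,
# captured left-to-right, top-to-bottom in a 3x3 grid.
TILE_W, TILE_H = 640, 480
COLS, ROWS = 3, 3

out = Image.new("RGB", (COLS * TILE_W, ROWS * TILE_H))  # 1920x1440 result
for i in range(ROWS * COLS):
    tile = Image.open(f"tile_{i}.png").convert("RGB")
    col, row = i % COLS, i // COLS
    out.paste(tile, (col * TILE_W, row * TILE_H))  # place tile at its grid position
out.save("combined.png")
```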
@la-li-lu-le-lo I am quite certain that the ones I linked to (at least some of them) are from a pre-release version, which certainly didn't run on the PS3. If you find any Boss_A colossus looking like this in a PS3 version, that would be very surprising: https://static.giantbomb.com/uploads/original/0/4462/1509148-lrgaspx4.jpg And yes, there are pictures from the PS3 and PS4 versions on that page as well, but I am not talking about them.

@Borman This is very interesting info! I was wondering whether something of the sort might have been used. SotC, for example, sends everything to the GS through Path1 - it is all processed by VU1 - so I am not sure exactly how such a mechanism would have been implemented, as the VU1 has a very limited amount of memory. A visible object's geometry is sent at once - in one DMA transfer. The only way this could have worked would be for the EE to separate the objects (and their geometry) by their position in the viewport and only send for rendering those that fall in the part of the "screen" currently being rendered. Or maybe this wouldn't have been necessary, and the already existing VU1 microcode would clip triangles outside the viewport. However, with this method, motion blur effects generated by the GS by overlaying previous images would probably not work, so if there is motion blur in any of those pictures, it would disprove this theory.
I think this is probably how Tourist Trophy's screenshot feature works. It also has effects like anti-aliasing and motion blur.
Don't rule out Photoshop. And don't rule out shots that weren't taken while the game was running, e.g. rendered in something like 3D Studio. Lots of options. The debug menu would likely reveal more.
Some of the images contain one of the following strings:
"ACD Systems Digital Imaging.2010:09:15 21:26:41"
"Photoshop 3.0"
"ImageGlue JPEG Export"
One of the early screenshots that shows land not seen in any known game version has no such string in the header.

@Borman So you are saying that the scene could have been built in the program used to make the game and rendered there? Those games were made with LightWave. The main debug menu is missing from all known versions. There are tools and their menus that are partially or completely intact, but none of them is related to making screenshots. However, there is a functioning standard debug screenshot function that uses the standard SCE functions for that. I haven't seen in it any options for tiling the screen or changing the resolution. The framebuffer's dimensions are set by variables in RAM, but those can't be increased because the framebuffer starts overlapping with the memory used for textures. https://assemblergames.com/threads/shadow-of-the-colossus-potential-debug.50436/

EDIT: I think I found just what you described in the code! I don't know how I missed it earlier. When SnapShotSize is set to 0, a single snapshot called snap<7digitDecimalSequentialNumber>_00.bmp is made. However, when SnapShotSize is set to 1, four snapshots are made, with the following filenames (using sequential number 0 as an example): snap0000000_00.bmp, snap0000000_01.bmp, snap0000000_10.bmp, snap0000000_11.bmp. From the code it can be determined that those last two digits are actually two hexadecimal numbers, each of which is a loop counter. When SnapShotSize is set to 2, 16 snapshot files are created (showing only the last two digits):
00 01 02 03
10 11 12 13
20 21 22 23
30 31 32 33
So they are "coordinates" signifying which part of the viewport the snapshot covers. In that case some object sorting functions are also called. The code isn't too long, and because of this I never paid much attention to it. So it seems it should do just what you described! It makes separate snapshots of different parts of the "screen", so that they can afterwards be joined together into a single high-resolution image. However, it is not working correctly and only makes snapshots of the same viewport area as the normal snapshot does. So I think this may solve the mystery, especially if I can get the code working correctly. Thanks!
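For reference, decoding those trailing digits is straightforward. A small sketch (whether the first counter is the row or the column is only my assumption, not something I verified in the code):

```python
import re

def parse_snap_name(name):
    # snap<7-digit decimal sequence>_<hex counter><hex counter>.bmp
    m = re.match(r"snap(\d{7})_([0-9a-f])([0-9a-f])\.bmp$", name, re.IGNORECASE)
    if not m:
        raise ValueError(f"unexpected snapshot name: {name}")
    seq = int(m.group(1))       # sequential snapshot number
    a = int(m.group(2), 16)     # first loop counter (row?)
    b = int(m.group(3), 16)     # second loop counter (column?)
    return seq, a, b

print(parse_snap_name("snap0000000_13.bmp"))  # -> (0, 1, 3)
```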
Probably offline-rendered shots captured for marketing or print. There is no anti-aliasing on these images, and the DoF effect and all the textures look correct for a PS2 game. It's non-real-time. Nothing new.
It turned out that the game code that makes those high-resolution snapshots was already working correctly; the method by which the snapshots are made is just different from what I initially assumed. The PS2 GS actually renders at a much higher resolution than the display resolution. Because of that, by "moving" the displayed area by a single point horizontally and vertically through the XOFFSET registers, it is possible to capture multiple samples of the same frame, each containing the pixels right next to the ones of the previous capture. By interleaving the resulting images, one can compile a high-resolution image. This is also why I was confused into thinking it wasn't working - all the output images look very much alike, just shifted by a point or a few. The maximum resolution seems to be at least twice 2048x1792, if not four times that. Attached is an example at 2048x1792. It is compressed as JPG, because the original BMP was 10.5 MB and the PNG ~7.4 MB.
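To make the interleaving step concrete, here is a rough sketch that rebuilds a single image from a 4x4 set of such shifted captures (Python with Pillow). The mapping of the filename counters to horizontal/vertical offset is my guess, as is the assumption that SnapShotSize = 2 corresponds to a 4x4 grid of single-point shifts:

```python
from PIL import Image

N = 4  # assumed: SnapShotSize = 2 -> 4x4 captures, each shifted by one point
tiles = [[Image.open(f"snap0000000_{r:x}{c:x}.bmp").convert("RGB")
          for c in range(N)] for r in range(N)]
w, h = tiles[0][0].size

out = Image.new("RGB", (w * N, h * N))
op = out.load()
for r in range(N):
    for c in range(N):
        px = tiles[r][c].load()
        for y in range(h):
            for x in range(w):
                # each capture contributes one pixel to every N x N block of the output
                op[x * N + c, y * N + r] = px[x, y]
out.save("interleaved.bmp")
```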
Is it rendering at a higher resolution and discarding pixels? Or is it rendering at the display resolution, with the offset affecting the rounding that determines which pixels get rendered? Calculating geometry at a higher precision than you render at is useful, because you can sub-pixel-correct what is drawn.
No, I didn't describe it correctly, because I don't quite understand it myself. It renders at its display resolution, but the points from the viewport that are sampled for rendering are fewer than the total available points. It samples one point in every 16 (both horizontally and vertically), so by making multiple renders at different offsets (16x16 at most) and then combining those renders into a single image, it can achieve up to 256 times higher resolution (if I am not mistaken). Yes, so we can say that the GS does indeed render sub-pixel-correct images.
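Just to put numbers on that (the base framebuffer size below is only an assumption - 512x448 is a common PS2 setting, not something I have confirmed for SotC):

```python
base_w, base_h = 512, 448    # assumed display framebuffer size
for steps in (1, 2, 4, 16):  # offsets per axis (SnapShotSize 0, 1, 2, and the 16x16 maximum)
    w, h = base_w * steps, base_h * steps
    print(f"{steps}x{steps} offsets -> {w}x{h} ({steps * steps}x the pixels)")
```

If the base framebuffer really is 512x448, the 4x4 case comes out to 2048x1792, which lines up with the example image above, and the full 16x16 grid would give 16 times the resolution per axis, i.e. 256 times the pixel count.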
It seems unclear whether it should be called "16 times" or "256 times". Panel manufacturers talk about 4x, much like they used to measure monitors from the outside edge even though they had a 2-inch bezel.