Windows DTL-H10030 Driver

Discussion in 'Sony Programming and Development' started by PS2Guy, May 25, 2012.

  1. l_oliveira

    l_oliveira Officer at Arms

    Joined:
    Nov 24, 2007
    Messages:
    3,879
    Likes Received:
    245
    SilverBull, do you happen to know anything about the I2C interface inside the SSBUS chip (CXD9566/9611)? The one the PS2 uses to send commands to the DVE chip? We need information on that for a secondary project (related to the GSM app and DVD player drivers)...
     
  2. SilverBull

    SilverBull Site Supporter 2010, 2011, 2013, 2014, 2015. Site Patron

    Joined:
    Jun 12, 2008
    Messages:
    385
    Likes Received:
    6
    Sorry, I don't know anything about that. Frankly, I didn't even know there was an I2C bus to the digital video encoder before I saw that other thread...

    Do you know which component is normally responsible for communicating with that chip? May be worth a look.
     
  3. sp193

    sp193 Site Soldier

    Joined:
    Mar 28, 2012
    Messages:
    2,217
    Likes Received:
    1,052
    Your research is very interesting, although it's a shame that the SPEED device is using a proprietary protocol.

    I'm interested in knowing the register definitions, so could you please share them with me?

    On the bright side, we should be able to access our HDD units remotely before long. I'm going to release my SMAP stuff soon, but the speed is a bit disappointing (4MB/s tops). :(

    The auto-negotiation issue is still there, so even the Sony driver probably didn't have a software workaround for that hardware fault.

    At least, it doesn't seem to freeze up after transferring data for a while.

    This may be off-topic, so we may have to split it into another thread: Do you know whether there were performance enhancements made to the IOP thread manager after the first few iterations?

    People say that queuing frames for transmission should improve throughput, as the PlayStation 2 SMAP device has a small Tx FIFO, but adding a queue (involving semaphores) of any sort seems to do the reverse - performance dips by quite a bit (0.8MB/s).

    I believe that it has to be related to the thread manager, since it seems like context switches are really bad for performance.

    Even the Sony DEV9 driver seems to not use a semaphore to get around this issue (it polls the DMA CHCR register to know when the DMA transfer ends). If one were to modify the homebrew DEV9 driver to use the original polling method used by Sony, performance increases by about 0.6MB/s.
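
    For illustration, here's roughly what that polling looks like (a minimal sketch; the address assumes DEV9 is IOP DMAC channel 8 with its registers at 0xBF801510, which is from my notes, so treat it as an assumption):

    Code:
    /* Sketch: busy-wait on the DMA channel's CHCR STR (busy) bit instead of
     * sleeping on a semaphore. 0xBF801518 = assumed CHCR of the DEV9 channel
     * (channel base 0xBF801510: MADR, BCR, CHCR). */
    #define DEV9_DMA_CHCR (*(volatile unsigned int *)0xBF801518)
    #define CHCR_STR      (1 << 24) /* set while a transfer is in progress */

    static void dev9_dma_wait(void)
    {
        /* No semaphore, no thread switch: just spin until the DMAC
         * clears the STR bit, i.e. the transfer has ended. */
        while (DEV9_DMA_CHCR & CHCR_STR)
            ;
    }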

    Do you know what sort of performance the Sony SMAP driver offered? My SMAP driver is based on their SMAP v2.25 driver, and performance provided by their design was about 2.2MB/s (Before I changed that to make Tx occur in the calling thread instead).

    Placing the network stack on the EE seems to cause a dip in performance too, seemingly because of the SIF (Probably causes thread context switches, or the SIF is really just that slow - 0.6MB/s dip).
     
    Last edited: Nov 19, 2012
  4. SilverBull

    SilverBull Site Supporter 2010, 2011, 2013, 2014, 2015. Site Patron

    Joined:
    Jun 12, 2008
    Messages:
    385
    Likes Received:
    6
    Here you go (hoping I did not make any mistakes while trying to decipher my notes):

    GND is ground, "L" is low potential (i.e., GND :smile-new:), "H" is the selected high potential (Vcc, changeable via the power control register). The output voltage of the CDx and VSx pins cannot be changed for obvious reasons, but that of the other control pins can. The state of Vcc-dependent pins (like BVDx, WP, READY and so on) cannot be read until power has been applied to the card and OE=1 in the power control register.

    BEWARE: there is no hardware protection against the wrong voltage being applied to a PCMCIA/CardBus card. Misuse of the power control register can fry a card, so BE CAREFUL.

    BF801460 (r/w): Target memory space. Selects which PCMCIA space is accessed via memory accesses to 0xB0000000 on the IOP. Only the lower 3 bits keep a written value.
    xxxxx...: ? (unchangeable)
    .....x..: ? (changeable, interpretation unknown)
    ......xx: 00=>common memory (P#61/!REG=H); 01=>I/O (P#61/!REG=L); 10/11=>config (P#61/!REG=L)

    BF801462 (ro): Card status pins.
    .|.......x: P#36/!CD1: 0=>L
    .|......x.: P#67/!CD2: 0=>L
    .|.....x..: P#43/!VS1: 0=>L
    .|....x...: P#57/!VS2: 0=>L
    .|...x....: P#63/!STSCHG/BVD1: 0=>L (only valid if Vcc+OE)
    .|..x.....: P#62/!SPKR/BVD2: 0=>L (only valid if Vcc+OE)
    .|.x......: P#33/!IOIS16/WP: 0=>L (only valid if Vcc+OE)
    .|x.......: P#16/!IREQ/READY: 0=>L (only valid if Vcc+OE)
    x|........: ? Indicates error if 0 after power on

    BF801464/BF801466 (r/w): Interrupt status register. Bits set when condition occurs, writing a 1 clears the bit. At least 9 bits in use.
    .|.......x: !CD1
    .|......x.: !CD2
    .|...x....: !STSCHG/BVD1
    .|..x.....: !SPKR/BVD2
    .|.x......: !IOIS16/WP
    .|x.......: !IREQ/READY
    x|........: ? Something related to Vpp ?

    BF801464 is for "signal asserted" (e.g., !CD1 going from H to L), BF801466 is for "signal deasserted" (e.g., !CD1 going from L to H)

    BF80146C (r/w): Power control. Only the lower 5 bits known.
    xxx.....: ?
    ...x....: 1=> Vpp=Vcc; 0=> Vpp=GND
    ....xx..: 00/11=> Vcc=0V; 01=>Vcc=3V3; 10=>Vcc=5V
    ......x.: 1=> OE=Vcc (Vcc-dependent outputs enabled); 0=> OE=L (Vcc-dependent outputs to GND)
    .......x: 1=> RESET=L; 0=> RESET=Vcc (if OE=Vcc, otherwise L)

    BF80146E (ro?): Device identification register (?)
    xxxx....: 0010=>CXD9566 (PCMCIA). 0011=>CXD9611 (ExpBay)
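
    To make the sequence concrete, here is a minimal power-up sketch based purely on my notes above (the macro names are mine, and the sequence is my reading of the bits, not a verified procedure):

    Code:
    #define PCMCIA_STATUS (*(volatile unsigned short *)0xBF801462)
    #define PCMCIA_POWER  (*(volatile unsigned short *)0xBF80146C)

    #define PWR_RESET_L (1 << 0) /* 1 => RESET=L (released); 0 => RESET=Vcc */
    #define PWR_OE      (1 << 1) /* 1 => Vcc-dependent outputs enabled */
    #define PWR_VCC_3V3 (1 << 2) /* Vcc select: 01 => 3.3V (10 => 5V) */

    static int pcmcia_power_on_3v3(void)
    {
        /* A card is present only if both !CD1 and !CD2 read low (bits 0/1 = 0). */
        if (PCMCIA_STATUS & 3)
            return -1; /* no card inserted */

        /* BEWARE (see above): nothing stops you from frying a card with the
         * wrong voltage, so check the VS pins before choosing 3.3V or 5V! */
        PCMCIA_POWER = PWR_VCC_3V3 | PWR_OE;               /* power on, RESET=Vcc */
        /* ...wait out the card's power-up/reset time here... */
        PCMCIA_POWER = PWR_VCC_3V3 | PWR_OE | PWR_RESET_L; /* release RESET */
        return 0;
    }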


    Good job. Seems a release is near? :applause:

    Sorry, I don't know whether they changed the thread manager after the initial release.

    You know that the IOP is slow, so it is no surprise that any kind of additional overhead influences the transfer rate. Waiting for a semaphore may cause a thread switch; then comes the interrupt from the device (requiring the context of the current thread to be saved, so it can be restored once the handler completes); then another thread switch back to the original thread. Interrupt-driven systems are usually slower than polling systems when doing I/O with a single device, simply because the polling ones can react faster to hardware events.

    By itself, a queue does not help to improve throughput. You need to synchronize access to the queue, and copy data in and out. If you do that via a semaphore, you may end up switching thread contexts, which is always slow... even on modern PCs. Although with all their caches and high clock frequencies, it isn't as much of a nuisance there as on the IOP :tongue:.

    Why do you use a semaphore for the queue, anyway? That requires writing all packets to the TX FIFO in thread context, because your interrupt handler cannot touch the queue (it cannot acquire the semaphore, because the interrupt may come in just after a thread has acquired it and is inserting something). I would try to just disable interrupts while accessing the queue. Something like this:

    When attempting to send a packet, in normal thread context:
    • Disable interrupts
    • If TX possible: write packet to FIFO
    • If TX not possible: insert packet into queue
    • Enable interrupts

    When an interrupt comes in:
    • If TX possible and queue not empty: dequeue packet and write to FIFO
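
    In IOP terms, the same steps could look like this (just a sketch: I'm assuming the kernel's CpuSuspendIntr()/CpuResumeIntr() pair from intrman, and smap_tx_possible(), fifo_write() and the queue functions are placeholders for whatever your driver has):

    Code:
    #include <intrman.h>

    struct packet;
    extern int  smap_tx_possible(void);
    extern void fifo_write(struct packet *pkt);
    extern void queue_push(struct packet *pkt);
    extern struct packet *queue_pop(void);
    extern int  queue_empty(void);

    /* Normal thread context: send now if the FIFO has room, else enqueue. */
    void smap_send(struct packet *pkt)
    {
        int state;

        CpuSuspendIntr(&state);  /* the interrupt handler cannot race us now */
        if (smap_tx_possible())
            fifo_write(pkt);
        else
            queue_push(pkt);
        CpuResumeIntr(state);
    }

    /* TX interrupt: drain the queue for as long as the FIFO has room. */
    void smap_tx_intr(void)
    {
        while (smap_tx_possible() && !queue_empty())
            fifo_write(queue_pop());
    }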

    Can interrupt handlers be nested on the IOP? That is, can one device interrupt the interrupt handler of another device? If so, and you want to allow other modules to send packets from their interrupt handlers as well, you also want to disable further interrupts from inside your handler. Otherwise it's not needed, because you cannot be interrupted therein (no pun intended).

    Sorry, I have never experimented with the SMAP system before and don't have any numbers. But I remember reading a comment in the homebrew SMAP (or was it DEV9?) driver about using interrupts instead of polling; the homebrew developer wondered why Sony used polling in the official driver. Guess we know the reason now... :cool-new:

    What do you mean by "placing the network stack on the EE"? Are you accessing the SMAP/DEV9 devices directly from the EE, in the same way you would on the IOP (just using other memory addresses for direct device access)? That would require each such access being sent over the SIF, and essentially block the EE in the meantime (no core multi-threading to mask those latencies).
     
  5. sp193

    sp193 Site Soldier

    Joined:
    Mar 28, 2012
    Messages:
    2,217
    Likes Received:
    1,052
    Wow! This is very comprehensive! :)

    I'll spend some time studying these findings at my own pace, to see whether I can get it to accept normal PCMCIA or CardBus cards.

    Precisely. I've decided to just go ahead and make a release once I complete a Network I/F manager like the Sony NETDEV module, so that people can enjoy my work and possibly even help me.

    I think that the addition of a network I/F manager would be good, as the network stack and network adaptor driver won't be so closely coupled.

    It's also so that the same modules can be used with both an IOP-side and an EE-side stack without being recompiled.

    That was what I was thinking, but I'm glad that you have made it clear that it's a common problem for computers. :)

    (Especially for the IOP, which is really not well suited to multithreading while staying responsive :mad-new:)

    It's because I have no confidence in doing that properly (but I don't clearly remember why). :/
    But I'll try again after I make a release, so that more people can play with my work in the meantime.

    Well, I need to keep packet transmission and reception in a thread, because the DEV9 driver's DMA transfer function must be executed outside of the interrupt context, as it uses a semaphore for exclusive access.

    But I'll try getting a queue to work again with disabling interrupts instead of a semaphore, when I get some time again after I make my first release (Should be soon).

    I think that an interrupt cannot preempt the handler that is servicing another device's interrupt, or the PS2 wouldn't have "frozen up" when my older i.Link driver was spamming data with the IEEE1394 enclosure I have.

    That old driver used to copy data from the FIFOs with a loop in the interrupt context, since DMA support was non-existent back then (You know why).

    Since the EXPANSION BAY consoles generate an interrupt whenever the RESET button is pressed (which in turn invokes the poweroff handler to power off the console), the fact that the RESET button was non-functional while my i.Link driver was transferring a lot of data in interrupt context means that interrupts cannot be nested.

    I remember that the RESET button was completely non-functional, as if my SCPH-39006 had hung (but it was still playing my FMVs smoothly like never before). :cool-new:

    Exactly!

    Well, that comment you saw is in the DEV9 driver, above the DMA transfer function.

    Actually, I think that the performance increase was probably 0.8MB/s instead, as I remember modifying the DEV9 driver I'm using to not turn the DEV9 hardware interrupt on and off whenever data is transferred. It gained about 0.2MB/s through that modification alone (hence why I thought that it was an improvement of 0.6MB/s).

    The "network stack" I'm referring to is the network protocol stack (Layers 3 and 4 of the OSI model).

    The SMAP driver entirely resides on the IOP, and I/O requests from the protocol stack to the link layer go over the SIF.

    I'm currently aiming to imitate what Sony has been doing, since projects like OPL will benefit from having an entirely IOP-only network stack and driver.
     
    Last edited: Nov 20, 2012
  6. sp193

    sp193 Site Soldier

    Joined:
    Mar 28, 2012
    Messages:
    2,217
    Likes Received:
    1,052
    BUMP!

    I couldn't find a thread where anyone discusses PS2 Linux's "Enable PC Card IDE support" option. It appears to be different from the "Enable PS2 HDD support" option, in that it deals with ATA registers located at different addresses (other than the PS2's standard 0xb4000040): (IDE) 0xb40001f0, (IDE2) 0xb4000170 and (IDE3) 0xb4000170.

    Doesn't it seem weird that it supports 3 channels that seem to be located at rather standard-looking addresses? Also, the startup executable on the PS2 Linux beta disc seems (based on the strings within it) to be able to deal with commercial PCMCIA cards, like those from Adaptec. It seems like it is possible to use a standard PCMCIA IDE controller with the retail PCMCIA consoles, since the "Enable PC Card IDE support" option is available regardless of the setting for "Support for SCEI DTL-T10000". Or at least, it is on the TOOL.

    Has anyone tried connecting a standard PCMCIA IDE card to a PCMCIA set that has PS2 Linux installed and running on it? If it can actually interface with and control a PCMCIA IDE controller, that would prove that the PCMCIA port on the SCPH-1x000 units is at least partially capable of standard PCMCIA operation.

    ***

    The AIF's HDD interface is at 0xb8000060, and it seems to be a standard ATA interface too. But the Sony code doesn't have any support for DMA, which might mean that the interface is incapable of DMA (which would then also explain why the HDD unit connected to it uses a 40-line IDE cable).
     
    Last edited: May 21, 2014
  7. smf

    smf mamedev

    Joined:
    Apr 14, 2005
    Messages:
    1,255
    Likes Received:
    88
    ATA-1 had some DMA modes, and ATA-2 added more. It was only the Ultra DMA modes added in ATA-4 that needed 80-conductor cables. Bus-mastering DMA controllers were quite rare (and expensive) when ATA-1 came out, though, so they didn't get much coverage.

    If there is an ATA disk involved then it will usually have the exact same register layout as every other ATA interface, because the registers are handled by the drive itself and it would be more work to make it different. There is only one difference, in that sometimes you use word indexes and other times byte indexes. On a PC, register 0 is at base + 0 and register 1 is at base + 1, while on a lot of other systems register 1 is at base + 2 (I don't know if it's something about the PC motherboard or the x86).
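
    As a sketch of what that means for driver code (the base addresses are whatever your platform uses; the register set is identical, only the spacing differs):

    Code:
    #include <stdint.h>

    /* PC style: byte-indexed, register n lives at base + n. */
    static inline uint8_t ata_read_pc(uintptr_t base, int reg)
    {
        return *(volatile uint8_t *)(base + reg);
    }

    /* Word-indexed style seen on a lot of other systems: register n at base + n*2. */
    static inline uint8_t ata_read_word_indexed(uintptr_t base, int reg)
    {
        return *(volatile uint8_t *)(base + reg * 2);
    }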
     
    Last edited: May 21, 2014
  8. sp193

    sp193 Site Soldier

    Joined:
    Mar 28, 2012
    Messages:
    2,217
    Likes Received:
    1,052
    I was first referring to the driver, which only supports PIO. When I wrote that the AIF's IDE channel might not have DMA support, I made that assumption based on the fact that there is no known way, within the Sony AIF support code, for the AIF to perform DMA transfers with the DEV9 DMA channel.

    DMA support may be in the official ATA standards, but the host must support it too; the host-side implementation of DMA support is not part of the standard.
     
  9. smf

    smf mamedev

    Joined:
    Apr 14, 2005
    Messages:
    1,255
    Likes Received:
    88
    If there is no known way then it might as well not have DMA support, because you're unlikely to find it.


    Yes. I was merely pointing out that the type of cable used doesn't mean that the hardware doesn't support DMA. ATA-5 specifies that Ultra DMA modes 0, 1 & 2 can be used on 40-conductor cables; it's only mode 3 and up that require 80-conductor cables. You can also use "single device direct connection", which I assume is where you don't have a cable at all (like in a laptop). The extra 40 conductors are only part of the cable and don't exist on the computer's motherboard or the drive PCB.
     
    Last edited: May 21, 2014
  10. sp193

    sp193 Site Soldier

    Joined:
    Mar 28, 2012
    Messages:
    2,217
    Likes Received:
    1,052
    Exactly! Although it's possible that the DEV9 DMA channel also services the AIF controller, since it seems like the PCMCIA interface is actually a subset of the AIF controller: its interrupt events seem to trigger one of the AIF's interrupts too.

    The problem is finding the register that controls which device the DMA channel services, and working out the details of the DMA transfer on the device's end, as well as the register(s) for changing the ATA interface's transfer mode to one of the DMA operating modes, since only the PIO timing register is known.

    Since it seems likely that Sony added support for the AIF to the Linux kit after support for the retail DEV9 interface was added, they might not have been willing to rewrite the DEV9 support code to let the DMA channel service the AIF; a system for sharing it would have to be constructed.

    Alright, so you saw my point from another perspective. My hypothesis was that they deliberately chose a 40-line cable because it's cheaper than an 80-line cable, as the AIF's ATA interface wouldn't ever support any DMA modes (or just wouldn't ever support UDMA-4 and above).
     
  11. smf

    smf mamedev

    Joined:
    Apr 14, 2005
    Messages:
    1,255
    Likes Received:
    88
    I concur. If the interface or hard drive didn't support UDMA mode 3 or up, then an 80-conductor cable would be a bit of a waste (it would reduce interference, but the amount of interference should be quite low already, and I assume it's quite a short cable anyway). But they could just as easily have made the decision not to support Ultra DMA because they wanted to buy cheap cables or hard drives, rather than because they didn't want to add DMA to the hard drive interface. It could just as easily be a manager with political influence who had a problem with Ultra DMA once and outlawed its use.

    Nintendo made similar decisions with the Wii by only supporting USB-1, when the hardware was perfectly capable of USB-2. They might have made that decision to try to avert backup loaders making use of it, or for some more insane reason.
     
    Last edited: May 22, 2014
Share This Page