Accessibility: expose controls through OS-provided accessibility interfaces? #181
This is an area that I admittedly know nothing about. I doubt it would be possible to integrate accessibility into arbitrary TGUI apps (because widgets might be created in arbitrary order, placed all over the window, and nested inside groups and layouts to visually put them in the right spot). So I'm guessing the API that needs to be added would still require the app developer to use it in order to add accessibility support to the app. That would make it a very niche feature. If you are willing to look into it then feel free to do so, and perhaps try to come up with an API that would allow support for this in TGUI. Then it can be further discussed whether it is a good idea to try to implement it like that or not. I'm fine with something that is only supported on one platform; it doesn't have to be cross-platform to be accepted into TGUI. More platforms could be added later when people need support for them.
@texus Ah, I was thinking that accessibility would be more of an automatic thing than anything else. That's how WinForms apps, as an example, work: accessibility is something you just get. Well, sort of. Essentially, the accessibility magic tells accessibility clients (screen readers, accessibility testing tools, etc.) that you're tabbing over a widget, what kind of widget it is (called the widget's role), any help text describing the widget (called the accessibility description), the name of the widget, and so on. If the widget is a text box, it obviously wouldn't have a name, but it could have a description, and it would have a role. So, if I tabbed over it, my screen reader would just say "edit" and then read the contents. It would be up to the developer to assign the properties of the widget (role, name, description, ...) as well as to create any labels that a screen reader could see. Edit: it's worth noting a few things:
The extra challenge will be integrating this into what is, at the end of the day, an OpenGL drawing window. I have no idea how assistive technology will react to a UI that, say, appears and disappears depending on the state of the program. Hopefully it behaves like it would in any other app: if the UI isn't present, the screen reader just goes back to telling you nothing other than the window title.
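To make the per-widget properties mentioned above concrete, here is a purely hypothetical sketch of the kind of metadata a developer would assign. None of the names below (`AccessibleRole`, `setAccessibleName`, `setAccessibleDescription`, `setAccessibleRole`) exist in TGUI today; they only illustrate the idea:

```cpp
#include <string>

// Hypothetical sketch only: the metadata each widget would carry so that a
// screen reader can announce it. All names here are invented for illustration.
enum class AccessibleRole { Button, EditBox, CheckBox, Label /* ... */ };

struct AccessibleProperties
{
    AccessibleRole role = AccessibleRole::Button;
    std::string    name;         // what the screen reader announces, e.g. "Play"
    std::string    description;  // optional longer help text
};

// Imagined usage on a TGUI button (these setters are assumptions, not real API):
// auto button = tgui::Button::create("Play");
// button->setAccessibleName("Play");
// button->setAccessibleDescription("Starts a new game");
// button->setAccessibleRole(AccessibleRole::Button);
```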
This is something I agree with; it doesn't necessarily need to be something that has to be explicitly enabled by the developer. I'm fine with it being automatically available. I just doubt that it will work properly if the developer doesn't take accessibility into account, for the reasons you mentioned (e.g. tab order, no widget descriptions, ...). So 2 things need to be implemented:
- an API for widgets to store accessibility information such as the role, name and description
- code that exposes those widgets through the OS accessibility interface
@texus Correct. Implementing the first should be trivial, but the second will be significantly more complicated. The guide for implementing a server-side UIA provider (as MS calls them, and which is what we need to implement) does not look simple, by my definition of the word at least. I imagine it will require a lot of tinkering to get right.
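For reference, a stripped-down sketch of what a server-side UIA provider boils down to on the Windows side, assuming a single control whose name and role are hard-coded. A real implementation for TGUI would need one provider per widget plus fragment/tree navigation support, which is where most of the tinkering comes in:

```cpp
#include <windows.h>
#include <UIAutomation.h>
#pragma comment(lib, "uiautomationcore.lib")

// Minimal server-side UIA provider exposing one hard-coded control.
class SimpleProvider : public IRawElementProviderSimple
{
public:
    explicit SimpleProvider(HWND hwnd) : m_refCount(1), m_hwnd(hwnd) {}

    // IUnknown
    IFACEMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refCount); }
    IFACEMETHODIMP_(ULONG) Release()
    {
        const ULONG count = InterlockedDecrement(&m_refCount);
        if (count == 0) delete this;
        return count;
    }
    IFACEMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == __uuidof(IUnknown) || riid == __uuidof(IRawElementProviderSimple))
        {
            *ppv = static_cast<IRawElementProviderSimple*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }

    // IRawElementProviderSimple
    IFACEMETHODIMP get_ProviderOptions(ProviderOptions* retVal)
    {
        *retVal = ProviderOptions_ServerSideProvider;
        return S_OK;
    }
    IFACEMETHODIMP GetPatternProvider(PATTERNID, IUnknown** retVal)
    {
        *retVal = nullptr;  // no control patterns (Invoke, Value, ...) in this sketch
        return S_OK;
    }
    IFACEMETHODIMP GetPropertyValue(PROPERTYID propertyId, VARIANT* retVal)
    {
        VariantInit(retVal);
        if (propertyId == UIA_NamePropertyId)
        {
            retVal->vt = VT_BSTR;
            retVal->bstrVal = SysAllocString(L"Play");  // the widget's name
        }
        else if (propertyId == UIA_ControlTypePropertyId)
        {
            retVal->vt = VT_I4;
            retVal->lVal = UIA_ButtonControlTypeId;     // the widget's role
        }
        return S_OK;  // leaving the VARIANT empty means "use the default value"
    }
    IFACEMETHODIMP get_HostRawElementProvider(IRawElementProviderSimple** retVal)
    {
        return UiaHostProviderFromHwnd(m_hwnd, retVal);
    }

private:
    LONG m_refCount;
    HWND m_hwnd;
};

// In the window procedure, the provider is handed to UIA when it asks for it:
//   case WM_GETOBJECT:
//       if (static_cast<long>(lParam) == static_cast<long>(UiaRootObjectId))
//           return UiaReturnRawElementProvider(hwnd, wParam, lParam, provider);
//       break;
```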
@texus So, this issue has gotten pretty stale, but things are happening in the accessibility library world that should make this easier. There is a library called AccessKit that has gotten pretty good. If you look at its SDL2 example, you can get a basic idea of how AccessKit works: the example doesn't render anything, but makes the screen reader believe that there are two buttons in the window. These buttons don't exist visually, but they do exist in the accessibility tree. In fact, the example has no visual UI at all, yet it can handle keyboard focus.

This might get a bit long, but I'll try to explain so you understand how it works a bit better. An accessibility tree is a tree data structure consisting of accessibility nodes. At the root of the tree is the "desktop" window (that is, the top-level window of all other top-level windows). On Windows, this is the Desktop Window Manager; on Linux, it could be the X11 display, a Wayland drawing canvas, a frame buffer, whatever. The desktop window contains the other top-level windows, which for the purposes of this description are really just child windows. In the tree, these are nodes with a window role type.

I'm uncertain if any of this will help, but I thought I'd revive this with some new information, and I hope I've given you a better understanding of how this works. I might give it a go by externally hooking into TGUI. It isn't the best way, of course; the best way would be to implement it directly in the library. But I'm unsure exactly how to do that with AccessKit yet, since I'm still learning the API myself. I hope all of this helped at least somewhat!
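To illustrate the tree shape described above (this is not AccessKit's actual API, just a bare-bones stand-in), the SDL2 example essentially announces a structure like the following to the platform:

```cpp
#include <memory>
#include <string>
#include <vector>

// Illustrative only: a bare-bones accessibility tree of the shape the comment
// describes. Real libraries (AccessKit, UIA, AT-SPI) have much richer node
// types, but the idea is the same: nodes with a role and a name, arranged in
// a tree whose root represents the (top-level) window.
enum class Role { Window, Button, EditBox };

struct AccessNode
{
    Role role;
    std::string name;                                    // announced by the screen reader
    std::vector<std::unique_ptr<AccessNode>> children;   // child widgets / child windows
};

// Roughly what the AccessKit SDL2 example exposes: a window node with two
// button children that are never drawn (the labels here are placeholders).
std::unique_ptr<AccessNode> buildExampleTree()
{
    auto window = std::make_unique<AccessNode>();
    window->role = Role::Window;
    window->name = "Example window";

    for (const char* label : { "Button 1", "Button 2" })
    {
        auto button = std::make_unique<AccessNode>();
        button->role = Role::Button;
        button->name = label;
        window->children.push_back(std::move(button));
    }
    return window;
}
```

AccessKit's job, roughly speaking, is to take a tree description like this and expose it through the platform interfaces (UIA, AT-SPI, and so on), so the application only has to keep the tree up to date.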
AccessKit looks good on paper, but it's a relatively new project that doesn't seem to be widely used yet.
While it might not be the best way in the end, I think it is a good way to start with. Once something is working externally, it will be easier to figure out how to move the implementation into TGUI.
@texus Fair, and I'll see about getting that working. No guarantees, but we'll see. Should I keep this issue open?
You can keep this issue open.
So this might be more of a long-term goal than anything else, and it probably won't be doable on either MacOS or Linux (I'll explain that momentarily). The idea is that, instead of requiring app developers to manually implement accessibility, we might as well go all in and expose the controls to the OS and accessibility clients. On Windows, this would use UIA (UI Automation). I have absolutely no idea what it would use on MacOS/Apple devices; Apple has been extremely tight-lipped about how their accessibility interfaces work, which is why there's only one screen reader. Linux's accessibility is a bit of a mess, but I believe we'd use AT-SPI on that platform (according to some accessibility devs). As I said, actually implementing this might be quite difficult, but well worth it. I've never implemented UIA support, but I could try anyway to see what happens. Thoughts?