Hello,

There was a presentation about this tool at FOSDEM; the video recording is already available at https://fosdem.org/2018/schedule/event/ode_testing/

Basically, the tool (improved a bit since FOSDEM) currently reports about 8000 warnings, i.e. 8 per file on average. I have attached the evolution over time; we can clearly see the migration to .ui files during 4.0 :)

Looking at a few .ui files, there are some false positives, but not that many, and they are usually semantic in nature, so they would not be automatically detectable anyway; one has to mark them as suppressed by hand regardless. There are some errors too (parsing errors or missing targets), but they are quite rare.

We discussed with various people at FOSDEM how they feel about the tool, and thought about how to proceed from there. Our goal is to achieve zero regressions and to fix the existing issues in the long run, while avoiding bothering developers too much. Our fears are that the tool might produce too many false positives, that people will need to be taught how to fix the true positives, and that we do not want to make several a11y-fix passes over all .ui files.

We thought about the following plan, step by step:

- Add error checking to the build process (only the hard errors, such as bogus target names). There are only a few existing issues, so we can fix them along the way, and people are unlikely to introduce many new ones, so making them errors right away should not be a burden.

- Add warning checking to "make check", one kind of warning at a time, with suppression files alongside, so that the tool only displays "<n> suppressed warnings" plus any new warnings introduced by developers from then on. These warnings would point to wiki pages explaining the ins and outs of each issue and how to fix it. Introducing one kind of warning at a time should give developers time to learn the accessibility rules progressively. It should also let us observe how well false positives are handled before enabling all warnings.
- Once we get more confident that the warnings are solid, we can make them fatal (one kind at a time), to really enforce non-regression.

- In parallel, we would work on fixing the issues raised by the tool on some set of dialog boxes, to check that fixing them does provide good accessibility, and to see to what extent we want to introduce more warnings to reach good accessibility.

- At some point we will be confident that we will not introduce other big classes of warnings over hundreds of .ui files. That is the point where we can say "ok, let's start fixing the existing issues over all .ui files once and for all". We can then go through the .ui files one by one, fixing the issues and removing the corresponding suppression lines. These could be used as "easy hacks" entries; they are usually just a few lines to fix.

The progress of all of this could be monitored with statistics reported e.g. in the minutes of ESC calls.

What do people think about this plan?

Samuel
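PS: To make the "hard errors" category concrete, here is a minimal sketch (Python, not the actual tool) of the kind of check meant by "bogus target names": a mnemonic_widget property that points at a widget id which does not exist anywhere in the .ui file. The sample .ui content and the function name are invented for illustration; the real checker covers more relation kinds than this.

```python
# Minimal sketch of a "missing target" hard-error check on a GtkBuilder
# .ui file. The sample content below is invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_UI = """<?xml version="1.0" encoding="UTF-8"?>
<interface>
  <object class="GtkEntry" id="name_entry"/>
  <object class="GtkLabel" id="name_label">
    <property name="mnemonic_widget">name_entry</property>
  </object>
  <object class="GtkLabel" id="broken_label">
    <property name="mnemonic_widget">no_such_widget</property>
  </object>
</interface>
"""

def dangling_mnemonic_targets(ui_xml):
    """Return (label_id, missing_target) pairs for mnemonic_widget
    properties that name a widget id not defined in the file."""
    root = ET.fromstring(ui_xml)
    # Collect every widget id defined in the file.
    ids = {obj.get("id") for obj in root.iter("object") if obj.get("id")}
    errors = []
    for obj in root.iter("object"):
        for prop in obj.findall("property"):
            if prop.get("name") == "mnemonic_widget" and prop.text not in ids:
                errors.append((obj.get("id"), prop.text))
    return errors

print(dangling_mnemonic_targets(SAMPLE_UI))
# [('broken_label', 'no_such_widget')]
```

Such checks are purely structural (no semantics involved), which is why they can be made build errors immediately without a suppression mechanism.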
Attachment:
libreoffice.eps
Description: PostScript document