On 14/6/24 17:25, Michael Weghorn wrote:
> Personal preferences aside, Windows/NVDA as the most widely used
> platform indeed generally has some priority for me, as does Writer
> over Calc over everything else.
>
> There are other factors I also take into account, though, e.g.
> involvement/contributions from others like people working in certain
> areas, user requests/tickets, possibilities to cooperate (e.g. the
> Orca maintainer reworking Orca's LibreOffice support and providing a
> lot of helpful feedback and input) or productivity (my productivity on
> Linux is way higher than on Windows, so my take is that putting some
> extra initial effort in order to be able to do most of the analysis
> for issues *also* affecting Windows on Linux usually pays off, in
> particular since the platform APIs IAccessible/AT-SPI2 are fairly
> similar).
A further consideration is that, unlike Windows users, Linux users don't
have the option of running Microsoft Office without setting up
virtualization, rebooting the machine to a different environment, or
using a different machine for the purpose. Of course, some Windows users
might not be able to afford Microsoft Office, and they're also an important
group to consider, as are those who simply prefer the LibreOffice interface.
I personally have access to Microsoft Office as well at the moment, but,
obviously, not in my Linux environment.
I think the problem of disclosing large documents to accessibility APIs
is real and important. I suspect this explains the extraordinary
performance issue that occurs when opening a long document in Microsoft
Word for Mac with the VoiceOver screen reader enabled: the application
can be completely unresponsive for several minutes while the document
loads, even on a fast machine.
My limited understanding of the new protocol proposed for Linux by the
GNOME Foundation is that it is expected to use pipes for data transfer,
giving better performance than D-Bus calls. So my naive question is: what
would be the performance cost of transferring a large document over the
proposed API? Could the transfer be partly done in the background, so that
the user can at least start to read/edit the document from the top while
the data structures are built and sent to the screen reader?
My assumption is that once the initial transfer is done, all the
remaining updates are incremental, and relatively fast.
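
To make the question a bit more concrete, here is a rough sketch in Python
of what I imagine by "stream the initial snapshot over a pipe in the
background, then send small incremental updates". Everything here (the
message format, the chunking, the node structure, all the names) is purely
my own invention for illustration and has nothing to do with the actual
proposal.

import json
import os
import threading

def build_snapshot(num_nodes=50_000):
    # Hypothetical stand-in for a document's accessibility tree:
    # a flat list of node records, one per paragraph.
    return [{"id": i, "role": "paragraph", "name": f"Paragraph {i}"}
            for i in range(num_nodes)]

def stream_snapshot(write_fd, nodes, chunk_size=1000):
    # Application side: send the initial tree in chunks over a pipe
    # (newline-delimited JSON), then one small incremental update.
    with os.fdopen(write_fd, "w") as pipe:
        for start in range(0, len(nodes), chunk_size):
            chunk = nodes[start:start + chunk_size]
            pipe.write(json.dumps({"type": "chunk", "nodes": chunk}) + "\n")
        pipe.write(json.dumps({"type": "update",
                               "changed": [{"id": 0, "name": "Edited"}]}) + "\n")

def consume(read_fd):
    # Screen-reader side: process messages as they arrive.
    received = 0
    with os.fdopen(read_fd) as pipe:
        for line in pipe:
            msg = json.loads(line)
            if msg["type"] == "chunk":
                received += len(msg["nodes"])
            else:
                print("incremental update after", received, "nodes:",
                      msg["changed"])

if __name__ == "__main__":
    read_fd, write_fd = os.pipe()
    nodes = build_snapshot()
    # The transfer runs on background threads; the application's main
    # thread stays free, so the user could already read/edit from the top.
    threading.Thread(target=consume, args=(read_fd,)).start()
    sender = threading.Thread(target=stream_snapshot, args=(write_fd, nodes))
    sender.start()
    print("main thread still responsive while the snapshot streams")
    sender.join()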
Other users may disagree, of course, but from my perspective, having the
application hang while loading a large document would be unacceptable.
However, having to wait a little if I first load a large document and then
jump to its end (the worst-case scenario) would be more acceptable.
Obviously, loading a large document and then immediately retrieving a list
of headings, links, etc. is another scenario that would be subject to
potential performance issues. It probably depends on what the overall
delays are.
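
Again purely as an illustration of why those two scenarios differ (the
function names and structures below are made up by me, not taken from any
real screen reader or from the proposal): reading from the top only needs
the first chunks that have arrived, whereas a headings/links list has to
see the whole document.

def read_from_top(received_nodes):
    # Reading/editing from the top only needs whatever chunks have
    # already arrived.
    return received_nodes[:10]

def list_headings(received_nodes, snapshot_complete):
    # A headings/links list has to see the whole document, so it can only
    # be answered definitively once the full snapshot has been
    # transferred; its latency is bounded by the total transfer time.
    if not snapshot_complete:
        raise RuntimeError("snapshot still streaming; the list would be "
                           "incomplete")
    return [n for n in received_nodes if n["role"] == "heading"]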