On Sep 1, 2011, at 11:00 AM, Mohammad Elahi wrote:
Here, however, someone doing localization would need to add new constants to NumberingType.idl
and would need to add code to defaultnumberingprovider.cxx. That does not feel right.
OK, I've just started working with the LibreOffice code ;) I searched for a similar feature that
was added recently and used it as a template for how to write the code.
Would you mind telling me what the right approach is in this case? How should I best write the
code for localizing numbers?
To be honest, I have no idea what the best approach would look like. Maybe something like this:
In the "front end," have numbering types specified not by a single NumberingType constant, but by a
tuple of NumberingType constant and language code (there should already be at least one such
enumeration of language codes in LibO), where the language code is only used by NumberingType
constants like CHARS_WORD, CHARS_CARDINAL_WORD, etc.

In the "back end," devise an algorithm and a list of string resources that together produce the
desired output for every combination of (a) the numeric value to be formatted and (b) the tuple of
NumberingType constant and language code. Depending on what special cases the various languages
need, the algorithm and the corresponding list of necessary string resources will grow over time,
as more and more languages are taken care of. At runtime, the algorithm then obtains the necessary
strings for the given language, as is done for all the other localized strings in LibO.
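To make the idea concrete, here is a minimal standalone sketch of that (type, language) tuple plus a back-end lookup. All names (NumberingKey, formatNumber, kWords) are hypothetical, plain std::string stands in for rtl::OUString, and in LibO the word table would of course live in the normal localized-resource files rather than in a hard-coded map:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical stand-ins for the real LibO types: an existing
// NumberingType constant plus a language code form the lookup key.
enum class NumberingType { CHARS_CARDINAL_WORD, CHARS_ORDINAL_WORD };
using NumberingKey = std::pair<NumberingType, std::string>; // (type, language)

// Back-end string resources, keyed by ((type, language), value).
// UTF-8 literals keep English entries readable; the Persian entry
// below is "yek" (one), U+06CC U+06A9 encoded as escape sequences.
static const std::map<std::pair<NumberingKey, int>, std::string> kWords = {
    {{{NumberingType::CHARS_CARDINAL_WORD, "en"}, 1}, "one"},
    {{{NumberingType::CHARS_CARDINAL_WORD, "en"}, 2}, "two"},
    {{{NumberingType::CHARS_CARDINAL_WORD, "fa"}, 1}, "\xDB\x8C\xDA\xA9"},
};

// The algorithm: given a numeric value and the (type, language) tuple,
// look up the localized word; fall back to plain digits if the
// combination has not been registered yet.
std::string formatNumber(int value, const NumberingKey& key) {
    auto it = kWords.find({key, value});
    return it != kWords.end() ? it->second : std::to_string(value);
}
```

With this shape, adding a new language means adding rows to the resource table (and, where a language needs real composition rules for larger numbers, extending the algorithm), rather than adding new NumberingType constants.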
- "the second table is used for irregular cardinal numbers is not empty": should probably read
"if not empty"?
Sorry for typos
No need to be sorry. :-)
- For the Persian characters (that cannot be given visibly in an ASCII-only .cxx file, anyway)
the practice of specifying sal_Unicode strings as sequences of individual characters (a la
{0x0635,0x062f,0}) appears acceptable. However, for English strings, {'o','n','e',0} vs. "one"
is hard to tolerate. Maybe all the data should be specified as UTF-8 instead, using literal "…"
strings (the literal Persian UTF-16 characters like 0x0635 become a little harder to maintain,
having to encode them as UTF-8 like "\xXX\xYY"), and explicitly converting to rtl::OUString when
used.
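To illustrate the two representations side by side, here is a small standalone sketch (the typedef is a stand-in for sal_Unicode, and the variable names are made up for this example):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Stand-in for sal_Unicode (a UTF-16 code unit) so this compiles
// outside the LibO tree.
typedef unsigned short sal_Unicode_t;

// Style 1: UTF-16 code units spelled out one by one, e.g. the
// Persian word "sad" (U+0635 U+062F):
static const sal_Unicode_t sadUtf16[] = {0x0635, 0x062F, 0};

// Style 2: ordinary UTF-8 string literals. English entries stay
// readable ("one" instead of {'o','n','e',0}); Persian ones become
// byte escapes, e.g. U+0635 U+062F encoded as UTF-8:
static const char* const sadUtf8 = "\xD8\xB5\xD8\xAF";
static const char* const oneUtf8 = "one";

// In LibO the UTF-8 data would then be converted where used, e.g.
//   rtl::OUString(sadUtf8, strlen(sadUtf8), RTL_TEXTENCODING_UTF8)
```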
Thanks,
Yes, as you said, I think it is better to use UTF-8. One of my problems was defining a
two-dimensional array to hold strings of variable length. First I used
(sal_Unicode[]){0x0635, ...}, but feedback from the community was "It's not C++03 compatible", so
I used constant-length arrays, which I do not like.
But changing to UTF-8 even makes the Persian strings more tolerable.
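For what it's worth, with UTF-8 the variable-length problem largely disappears: a plain C++03 array of string literals works, since each element is just a pointer and the literals may differ in length. The row/column layout below is hypothetical, purely to show the shape:

```cpp
#include <cassert>
#include <string>

// A C++03-compatible two-dimensional table of UTF-8 string literals.
// Each cell is a const char* pointing at a literal of any length,
// so no fixed-size character arrays are needed.
static const char* const cardinalWords[][3] = {
    // ones                tens                  hundreds
    {"one",                "ten",                "one hundred"},
    // Persian: "yek" (U+06CC U+06A9), "dah" (U+062F U+0647),
    // "sad" (U+0635 U+062F), as UTF-8 byte escapes:
    {"\xDB\x8C\xDA\xA9",   "\xD8\xAF\xD9\x87",   "\xD8\xB5\xD8\xAF"},
};
```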
In the end, the best way to represent those strings will depend on how exactly, if at all, this is
extended to more languages than just Persian (see above). (For example, if we should take the
route I sketched above, those strings would be stored externally from the C++ source file, anyway,
and all this would become moot.)
-Stephan