

On 2020-01-30 17:25, Luboš Luňák wrote:
> On Thursday 30 of January 2020, Kaganski Mike wrote:
>> Can the hypothetical make_signed function return a signed integer when
>> a bigger integer type exists,
>
>  Yes.
>
>> and a struct with an overloaded operator<=> when there's not? That
>> overloaded operator<=> would check whether the contained unsigned value
>> is greater than the maximum value of its signed argument, return
>> "contained unsigned is greater than passed signed" in that case, and
>> otherwise fall back to the "convert unsigned to signed and compare
>> normally" strategy. This would comply with the scope of the function
>> (which, as I understand it, is to be used only in preparation for a
>> comparison), always return the mathematically correct result of the
>> comparison, and still allow all smaller types to be compared without
>> overhead. (But for 64-bit unsigned types, of course, it will introduce
>> overhead. Will it be significant, though?)
>
>  Not worth it. That'd be like doing error checking for every memory
> allocation - we also bother only with those few cases where it realistically
> can go wrong.
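
For concreteness, the hypothetical make_signed I meant above could look
roughly like this - just a sketch to show the shape of the idea, not a
proposal for an actual implementation (the name make_signed and the wrapper
type are made up here; operator<=> is C++20, and the same could be written
with the ordinary comparison operators):

#include <compare>
#include <cstdint>
#include <limits>
#include <type_traits>

// For unsigned types smaller than 64 bits a bigger signed type exists that
// holds every value, so just widen - no overhead at all.
template <typename U,
          std::enable_if_t<std::is_unsigned_v<U> && (sizeof(U) < 8), int> = 0>
constexpr std::int64_t make_signed(U value) { return value; }

// For 64-bit unsigned there is no bigger signed type, so return a wrapper
// whose comparison first checks whether the value fits the signed range.
struct Unsigned64 { std::uint64_t value; };

constexpr std::strong_ordering operator<=>(Unsigned64 u, std::int64_t s)
{
    if (u.value > std::uint64_t(std::numeric_limits<std::int64_t>::max()))
        return std::strong_ordering::greater; // bigger than any int64_t
    return std::int64_t(u.value) <=> s; // now the plain comparison is exact
}

constexpr bool operator==(Unsigned64 u, std::int64_t s)
{ return (u <=> s) == 0; }

constexpr Unsigned64 make_signed(std::uint64_t value) { return {value}; }

So "if (make_signed(nUnsigned) < nSigned)" stays overhead-free for anything
up to 32 bits, and only the 64-bit operands pay for the extra check.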


I disagree with the "not worth it" approach. Not checking the result of a 
memory allocation is a strategy with a specific and easily controlled 
outcome when the expectation fails: I can be sure it will segfault rather 
than proceed with a wrong operation - and that's enough for me. Not 
checking in the case discussed here is just a sure way to get 
difficult-to-find bugs - and yes, I am aware of the "this will not happen 
in my time" argument.

I share the "unsigned types are harmful" idea, but I see the logic in 
Stephan's solution, which can have a correct scope of application without 
any overhead; there is no strictly correct scope of application for 
"make_signed" on 64-bit integers without additional checking. The "number 
is non-negative" precondition has a natural application; "number is no 
greater than std::numeric_limits<int64_t>::max()" is absolutely artificial.
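
To illustrate why silently relying on that artificial precondition is
dangerous (assuming the usual two's-complement wrap on conversion, which is
only implementation-defined before C++20):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t u = std::uint64_t(1) << 63; // INT64_MAX + 1
    std::int64_t s = -1;
    // Mathematically u > s, but the naive cast-and-compare disagrees: on
    // typical platforms the cast wraps to INT64_MIN, so this prints "u <= s".
    std::cout << (std::int64_t(u) > s ? "u > s" : "u <= s") << '\n';
}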

IMO the "let's replace make_unsigned with make_signed" idea only makes 
sense if it is a *correct* solution, even if it implies some overhead.

My take on this is https://gerrit.libreoffice.org/c/core/+/87762. I can 
see why it might be considered wrong and rejected (e.g. because the 
overhead it brings is unacceptable, or because asserting on the valid 
range is considered correct...) - but at least it would not (unless I made 
a programming error) give wrong results, as opposed to "I believe I will 
never meet values outside of this range".
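
For reference, the kind of always-correct comparison I mean is along these
lines - a minimal sketch only, not necessarily what the change above
actually does; it is the same idea as the std::cmp_* utilities proposed for
C++20:

#include <type_traits>

// Mathematically correct "less" for a signed left operand and an unsigned
// right operand; the other mixed combinations are symmetric.
template <typename S, typename U>
constexpr bool signed_less_unsigned(S s, U u)
{
    static_assert(std::is_signed_v<S> && std::is_unsigned_v<U>);
    // A negative signed value is smaller than any unsigned value; otherwise
    // both operands are non-negative and comparing in the unsigned domain
    // is exact.
    return s < 0 || static_cast<std::make_unsigned_t<S>>(s) < u;
}

The branch on s < 0 is the overhead being argued about; whether it is
measurable in practice is exactly the open question.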

-- 
Best regards,
Mike Kaganski
