On 02/28/2012 02:48 PM, Lubos Lunak wrote:
>   Speaking of the size at the call-site, a good part is the code trying to
> throw std::bad_alloc in case the allocation fails. That actually looks rather
> useless to me, for several reasons:
>
> - not all OUString methods check for this anyway
> - rtl_uString* functions do OSL_ASSERT() after allocations
> - with today's systems (overcommitting, etc.) it is rather pointless to guard
>   against allocation failures
>
>   Does somebody see a good reason not to just remove it?

First of all, Linux' memory overcommitting is a bug IMO (and, AFAIU, 
fully optional these days), and should not be misused to justify sloppy 
application design.
Out-of-memory (OOM) is a somewhat curious condition, as it can occur 
for two rather different reasons (that ask for different solutions), but 
it is not generally possible to tell which is which.  If a system gets 
really low on memory, there is typically little use in trying to carry 
on with an application that experiences OOM, and the best overall 
solution is to terminate the application quickly and as gracefully as 
possible.
However, there are also situations where bad input (malicious or 
otherwise) would cause an application to request excessive amounts of 
memory to do a single task (e.g., open a document), and at least in 
theory the application should be able to cope with such 
externally-induced OOM conditions, by abandoning the bad operation, 
cleaning up after it, telling the user the operation failed, and 
carrying on.
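(For concreteness, a rough sketch of what containing such an operation could
look like; loadDocument() is a made-up placeholder here, not an actual LO API:)

  #include <new>              // std::bad_alloc
  #include <rtl/ustring.hxx>  // rtl::OUString

  void loadDocument(rtl::OUString const & url); // hypothetical; may throw std::bad_alloc

  bool tryLoadDocument(rtl::OUString const & url)
  {
      try {
          loadDocument(url); // may request excessive memory on bad input
          return true;
      } catch (std::bad_alloc const &) {
          // abandon the operation (stack unwinding cleans up), tell the
          // user it failed, and let the application carry on
          return false;
      }
  }
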
The traditional building blocks for memory allocation acknowledge this dichotomy by 
reporting OOM to the call site (NULL in case of malloc, bad_alloc in 
case of new), as only the call site can decide how to properly react (or 
pass on up the stack to a knowledgeable one).
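(Purely illustrative, the same decision delivered two different ways:)

  #include <cstdlib>
  #include <new>

  void demo(std::size_t n)
  {
      // C: the call site inspects the return value
      void * p = std::malloc(n);
      if (p == NULL) { /* handle OOM here, or report it to the caller */ }
      std::free(p);

      // C++: the failure arrives as an exception; catch it here, or let it
      // propagate up the stack to a caller that knows how to react
      try {
          char * q = new char[n];
          delete[] q;
      } catch (std::bad_alloc const &) { /* handle OOM here */ }
  }
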
With LO we are certainly far away from the ideal, where excessive 
operations would be detected and abandoned cleanly, letting the overall 
application continue as if nothing happened.  But I would nevertheless 
not be happy with shaky foundations that ignore OOM and only fail down 
the road when dereferencing a null pointer.  (Even if that "down the 
road" is still nearby, within the same inline function.  It is already 
hard enough to make use of the typical crash report's call stacks, 
always having to judge whether the situation it presents could have 
legitimately occurred, or is due to some earlier memory corruption that 
put wild data into in-use memory.  "Fail fast" is a sound software 
engineering principle, IMO.)
Hence, my preference is still to flag OOM in C++ code with bad_alloc. 
The second best alternative is IMO to abort.
That this OOM handling has to happen in those inline C++ wrapper 
functions is an unfortunate consequence of our C-based low-level API 
(something that we should probably change, if we ever come around to an 
incompatible LO 4 and still are determined to write that in C++).
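(Schematically, the wrapper pattern in question; a simplified sketch, not the
verbatim rtl/ustring.hxx code:)

  #include <new>
  #include <rtl/ustring.h>  // the C-based low-level API

  class OUStringSketch  // stand-in name, to avoid confusion with the real OUString
  {
      rtl_uString * pData;
  public:
      explicit OUStringSketch(char const * ascii)
          : pData(NULL)
      {
          rtl_uString_newFromAscii(&pData, ascii);
          if (pData == NULL)
              throw std::bad_alloc(); // flag OOM here instead of failing later
      }
      ~OUStringSketch() { rtl_uString_release(pData); }
  };
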
But how bad is that, anyway?  A little experiment shows that the 
compiler will happily outline those inline functions that check for 
allocation failure and throw bad_alloc, creating one instance of them 
per library.
Stephan
