

On Thursday 03 of January 2013, Markus Mohrhard wrote:
2013/1/3 Lubos Lunak <l.lunak@suse.cz>:
On Thursday 03 of January 2013, Markus Mohrhard wrote:
Hey,

while going through the list of calc documents crashing during import
I came across gnome#627420-1.ods, which creates an insanely large
OUStringBuffer that ultimately leads to a crash. Since I believe
quite a few places contain such problems, I wanted to ask whether we
should not try to find a solution in the string classes instead of
having crashes with such documents from time to time.

 The question is, what kind of solution do you expect? Presumably the
crash was because the allocation failed and the assert was a no-op in a
non-debug build, leading to a NULL pointer dereference. So probably the
only thing we can do is keep the assert always active, changing the
crash to a different kind of crash, but that seems to be about it.

Just to clear some things up: I do have a dbgutil/debug build, and it
did not return a null pointer. It returned a pointer to some invalid
memory, which later resulted in the crash.

 Memory allocation functions are not supposed to return invalid memory just 
like that. Either the crash was because of memory overcommit (which is 
unlikely to cause a crash right after the allocation, and these days Linux 
distros do not enable it by default anyway), or the allocation function is 
buggy (which seems unlikely as well, ImplAlloc even checks for overflows). So 
I think it would help to know what actually went wrong.

Please also note that the document is perfectly fine and valid according to
ODF, and my "fix" for it is, for example, now just an arbitrary limit on the
text length.

I have no idea how to find a general solution, but I know that it would
be good to have a way to prevent crashes with these documents. It
sounds much better to prevent the crash in these documents in the
string implementation than to hope to find all the places in our import
code. While it might be a good idea to fix the import and check the
input, we will always have new import code where we forget to add these
additional safety checks. A string that is tens of millions of
characters long is a good indicator of a potential problem.

 The point is, I don't think you can achieve that, which is why I asked what 
kind of solution you expect. If you alter the function to return a 
smaller/empty buffer in case of such a big size request, in some cases this 
call will be followed by code which will expect the buffer to be as big as 
requested and access it anyway (e.g. OUStringBuffer).

-- 
 Lubos Lunak
 l.lunak@suse.cz


