On Sun, Jul 21, 2013 at 08:23:49PM +0200, Andrzej J. R. Hunt wrote:
> I've just been working on the first stages of supporting blobs with
> Firebird (...). Firebird stores table and column descriptions within
> blobs, so in order to support them we need blob support.
"Table and column descriptions" sound like they are supposed to contain
strings anyway? But if Firebird considers them a BLOB (not a CLOB),
we probably enter unknown-string-encoding hell. :-|
Hmm... Reading the InterBase documentation, it seems that their BLOB
also covers CLOB as "BLOB subtype 1", whereas an SQL BLOB would be
Firebird BLOB subtype 0.
> The Firebird blob API requires repeated calls to isc_get_segment,
> which reads the blob (...), and it isn't possible to scroll backwards
> again. This would probably work well with Blob::getBinaryStream() --
> for the moment I've only implemented getBytes(), though, where we
> read the whole blob into memory on request.
You could make that lazy and read only the first P bytes when the bytes
at positions N through P are requested via getBytes(N, P-N).
I'm considering just pushing the limitation back onto the caller of
getBytes(), that is, throwing an SQLException if getBytes() is not
called with strictly monotonically increasing arguments. I haven't
found a convincing argument that it would be OK, but neither a
convincing argument that it would not be OK.
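To make the idea concrete, here is a minimal sketch of a lazy,
forward-only getBytes() that throws when the caller scrolls backwards.
The SegmentSource class is purely illustrative -- it stands in for
repeated isc_get_segment calls on an open blob handle; none of these
names are the actual SDBC driver API:

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Stand-in for the open blob: read() behaves like one isc_get_segment
// call, consuming up to `len` bytes and advancing an internal cursor.
class SegmentSource {
    std::string data_;
    std::size_t cursor_ = 0;
public:
    explicit SegmentSource(std::string data) : data_(std::move(data)) {}
    std::size_t read(char* buf, std::size_t len) {
        std::size_t n = std::min(len, data_.size() - cursor_);
        std::copy_n(data_.data() + cursor_, n, buf);
        cursor_ += n;
        return n;
    }
};

class ForwardOnlyBlob {
    SegmentSource source_;
    std::size_t readSoFar_ = 0;   // bytes already consumed from the blob
public:
    explicit ForwardOnlyBlob(std::string data) : source_(std::move(data)) {}

    // getBytes(pos, length) with a 1-based pos. Arguments must be
    // strictly monotonically increasing; anything else throws, pushing
    // the no-backward-scroll limitation onto the caller.
    std::vector<char> getBytes(std::size_t pos, std::size_t length) {
        std::size_t offset = pos - 1;
        if (offset < readSoFar_)
            throw std::runtime_error("SQLException: cannot scroll backwards");
        // We cannot seek, so skip forward by reading and discarding.
        char scratch[64];
        while (readSoFar_ < offset) {
            std::size_t n = source_.read(
                scratch, std::min(sizeof scratch, offset - readSoFar_));
            if (n == 0) break;   // blob shorter than requested offset
            readSoFar_ += n;
        }
        std::vector<char> out(length);
        std::size_t got = source_.read(out.data(), length);
        out.resize(got);
        readSoFar_ += got;
        return out;
    }
};
```

Nothing is cached, so memory use stays at one request's worth of data;
the price is that any non-monotonic caller gets an exception instead of
an answer.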
Instead of caching the read data, we could also transparently
isc_close_blob and re-isc_open_blob to fake scroll backwards. Not sure
it is a good idea, especially since we cannot scroll forwards without
actually reading data either.
What does the Firebird JDBC driver do? It started out not supporting
getBytes() at all, but now implements it by seeking forwards /
backwards... using an undocumented isc_seek_blob. A Google search
suggests that there are two kinds of BLOBs, segmented BLOBs (the
default...) and stream BLOBs, and seeking works only for the latter :-(
IMHO, it would be worth quickly polling the Firebird devs to see if
the situation has changed since the old information I found.
> for larger blobs this isn't satisfactory, but then hopefully an input
> stream would be used anyway?
Yes, it is reasonable to push the expectation on calling code that
when dealing with very big objects, they should use the stream
interface.
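The streaming path maps naturally onto forward-only segment reads. As a
minimal, purely illustrative sketch (not the real SDBC classes), an
XInputStream-style readBytes() would hand out at most one segment-sized
chunk per call, so a large blob is consumed incrementally rather than
materialised in memory at once:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative stream wrapper over a blob; data_ stands in for the
// open blob handle, and kSegment for a typical segment buffer size.
class BlobStream {
    std::string data_;
    std::size_t cursor_ = 0;
    static constexpr std::size_t kSegment = 4096;
public:
    explicit BlobStream(std::string d) : data_(std::move(d)) {}

    // Fills `out` with up to min(nBytesToRead, kSegment) bytes and
    // returns the number actually read; 0 signals end-of-blob.
    std::size_t readBytes(std::vector<char>& out, std::size_t nBytesToRead) {
        std::size_t want = std::min(nBytesToRead, kSegment);
        std::size_t n = std::min(want, data_.size() - cursor_);
        out.assign(data_.data() + cursor_, data_.data() + cursor_ + n);
        cursor_ += n;
        return n;
    }
};
```

A caller processing a huge blob would simply loop on readBytes() until
it returns 0, which matches the one-way cursor that isc_get_segment
gives us.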
> I haven't managed to successfully find any usage of Blob within LO
> though, so I guess this isn't actually particularly important?
It is nice to have so that user code can use it. If HSQLDB supports/ed
it, then it is also necessary so that we don't regress.
--
Lionel