
On Wed, 2011-04-06 at 10:50 +0200, Christian Lohmaier wrote:
Hi *,

Hi Christian

I find your responses rude and unhelpful. I'll try to assume that
communication isn't going smoothly since we might both be using our
second language.

Somehow we manage to provide this software on hundreds of sites running
fine, ranging from hosting on a few hundred megabytes of RAM to machines
with several gigabytes. All accidents?

I could show you the specific lines of code caching big objects
(expected to be tens to hundreds of megabytes in size), but I guess that
won't convince you either. It is a leak, because someone who (I guess)
never looked at the Pootle code, says so. Why can't you at least start
by assuming that I might have an idea of how Pootle works? I don't have
time to explain each optimisation and feature we have in the code. My
time is limited, and I was hoping we can work together instead of giving
lectures about programming and system administration.

When you are willing to discuss things under the good faith assumption
that I _might_ not be talking nonsense, we can continue the
conversation. I've been trying to help you in my free time based on my
experience, but it seems you'd prefer to assume I don't know what I'm
talking about.

With mutual respect, we can take this forward, but not without it. And
that includes respect for the hard work that Rimas is doing.

In the meantime, here is some recommended reading:

Keep well

On Wed, Apr 6, 2011 at 9:58 AM, Dwayne Bailey <> wrote:

Please CC me on any replies as I'm not on the list.

Keeping this, so others will follow the request as well.

On Mon, 4 Apr 2011 22:28:43 +0200, Christian Lohmaier wrote:
On Mon, Apr 4, 2011 at 2:58 PM, Rimas Kudelis <> wrote:
Again, as you apparently still don't understand what I already wrote many times:
* Adding more resources will /NOT/ make pootle run faster than it does
The VM already has way more resources assigned than necessary. It is
/idle/ almost all the time.

As far as I know the server hasn't really been used yet, so I guess
we'll be collecting data from now on to see how things go.

Also from the available data from the old Pootle server. But yes, the
new one doesn't have much data yet.

During the
setup of the server, we make tradeoffs between performance and memory
use. If there is no memory available, we'll obviously try to optimise at
all cost for minimising memory use,

Please explain those settings. Which settings, what is the effect, how
to see the effect (i.e. what UI actions to perform)

and that is what I understand that
Rimas said: things might be slower than necessary, since we are not
optimising for performance, but for memory use.

No, and I repeat again: Memory is not an issue. The memory leak is.

* The only thing that is slow (when executed the first time) is
generation of the zips. So when you as translator request a zip: Don't
click the link multiple times because you don't immediately get the
zip. It can take 10 seconds for the files to be generated. Again:
* Adding more resources will /not/ make that time shorter. It is a
single-threaded process that can only use a single CPU, so
assigning more CPUs won't help at all (the VM has 4 CPUs assigned).
Requesting that same zip another time (or different zips of the
project belonging to the same language) is fast/instant, but requesting
the zip for another language may again take some seconds for the first
request (or again after the files have changed in between).
* Pootle has a memory leak when creating the zips. It won't release
memory after processing the files.
This would be the only time where the assigned resources may run out
(the VM has 1 GB of RAM assigned): multiple different languages request
zips at the same time. Then memory usage increases, memory runs out,
and either the server is crawling along or the process gets killed.

Some stuff that is slow to load is cached for later use.

"Some stuff" maybe, but that little is not what I'm talking about.

This is done
for performance optimisation. This is one of the reasons you won't see
the memory use go down immediately after generating a ZIP file.

No, this has nothing to do with caching. It is a memory leak. Rimas is
very good at not forwarding relevant information it seems. So here I
just copy and paste what I wrote to Rimas already.
After generating, the zip files are cached on disk, so before
redownloading the same file, you'll probably want to execute
$ find /var/www/pootle/po/POOTLE_EXPORT/ -type f -iname '*.zip' -exec rm {} \;

No, this is not enough to do the same as what pootle does when
requesting the files the first time.
The first time it takes long (100% CPU while processing), and that is
also the time where the memory leak occurs. After it is finished
producing the file, the memory is not freed.
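
One way to back up the "memory is not freed" observation is to compare the worker's resident set size across requests. A minimal stand-alone sketch (the 50 MB buffer is only a stand-in for building one language's ZIP; `ru_maxrss` is KiB on Linux):

```python
import resource

def peak_rss_kib():
    # Peak resident set size of this process (KiB on Linux).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kib()
blob = bytearray(50 * 1024 * 1024)  # stand-in for generating one ZIP
after = peak_rss_kib()
del blob  # a leak-free implementation drops the data once it is served

# If repeating this for every language keeps pushing the peak upward,
# memory is accumulating rather than being reused.
```

Watching the real worker with `ps` per request gives the same kind of evidence.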

Requesting one of the zips interestingly also prepares the other files
at once; at least when selecting any of the other files in the same
language and project, the file will be served immediately, without the
lengthy processing step.
(And those requests also don't increase the memory usage further.)

But switching to another language, and requesting a file results in
the processing with memory leak again.

But that memory is not needed at all, as is obvious when the process
has reached its max-requests limit and is replaced by a new one: the new
process serves all of the already covered languages without that
increase in memory.

With that info at hand, I tweaked the Apache settings a bit (basically
reduced the number of concurrent wsgi processes, and reduced their
lifetime) to make the accumulation less likely.

The machine will still run out of memory when different-language zips
are requested in a short amount of time without enough regular
requests in between. If people see too many "premature end of
script" errors or similar (i.e. the worker has been killed by the OOM
killer), reduce the number of requests before the worker is restarted.
Currently it is at 400, which is still pretty high, considering the
more or less non-existent regular load (but then again, this might be
because people know the server is in transition and because 3.3 is
done already).
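
For reference, the kind of tuning described above would look roughly like the following mod_wsgi fragment. The values are the ones mentioned in this thread; the daemon group name, thread count, and paths are hypothetical, not copied from the actual server:

```apache
# Hypothetical Apache config fragment for the Pootle vhost.
# Two WSGI daemon processes; each one is recycled after 400 requests,
# so any accumulated memory is returned to the OS on restart.
WSGIDaemonProcess pootle processes=2 threads=15 maximum-requests=400
WSGIProcessGroup pootle
WSGIScriptAlias / /var/www/pootle/wsgi.py
```

Lowering `maximum-requests` trades a slightly slower first request per worker cycle against a hard cap on how much leaked memory can pile up.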

Similarly, the very first page load after a reboot also takes "ages",
but after that it runs smoothly.

1 GB is enough for Pootle. The rest is a matter of
configuration/tweaking (or, best of all: fixing the memory leak).

The reason is the way the garbage collector works in Python.

Well, how long should one wait for the memory to be freed? If it is
not freed when memory is about to run out, or after an hour or so, then
I don't assume it is a problem of running the garbage collector. (And
again, that memory is not used/needed for further requests of the same
kind, so I don't take your "it is caching stuff to accelerate future
actions" for granted.)

Please give me instructions on stuff to do in pootle to notice a difference.

Deciding to cache something is a tradeoff. So we can disable or minimize
some of the caching, which will simply make a few things slower,
hopefully not by much, but we're guessing while the server hasn't been
used much yet.

Numbers and examples please.

Caching for a more or less one-time operation that impacts the whole
server all the time surely is the wrong way to go. Even more so when,
despite that "caching" (I still call it a leak, as it doesn't cache
anything), the requesting of the zips is still CPU-bound and slow.

I suggested some customisations to the parse pool (to do exactly this).
That affects the number of cached files and search indexes, both of
which are very large on your server.

Please be more detailed on this. What setting, in what way to tweak,
and what should the effect be?

* I will NOT assign RAM to a VM (and thus block that RAM for other
use) to satisfy a memory leak, when that RAM is unused 99% of the time.

I believe what you are seeing is the caching, not a memory leak.

And I strongly disagree, see above.

We haven't seen the server used much yet. My educated guess from having
worked on a few Pootle installations is that the RAM isn't enough.

And I strongly disagree. See above.

* The effects of the memory leak can be nullified by just restarting
the worker processes more frequently.

Thus again at the cost of making things slower, since more stuff needs to be
loaded into memory afresh every time you restart a process.

I'm starting to believe you're in the same league as Rimas when it
comes to server administration (i.e. very much guesswork).

The starting/initializing of the processes is negligible. Just fire up
an ab benchmark, request 20000 pages with a concurrency of 30, and look at
the distribution.
I'd gladly live with a few hundred milliseconds, if that will save
hundreds of megabytes of memory that can be freed by just restarting the
workers.
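
The point about looking at the distribution can be illustrated even without ab: a per-restart penalty would show up as a slow tail (high p99), not as a shift of the median. A self-contained sketch with a simulated request (no real HTTP; the sleep is a stand-in for one Pootle page load):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    # Stand-in for one HTTP request to the Pootle front page.
    start = time.monotonic()
    time.sleep(0.005)
    return time.monotonic() - start

# 300 requests with a concurrency of 30, ab-style.
with ThreadPoolExecutor(max_workers=30) as pool:
    latencies = sorted(pool.map(fake_request, range(300)))

# A restart cost hitting every 400th request would inflate p99
# while leaving p50 (the typical user's experience) untouched.
p50 = latencies[len(latencies) // 2]
p99 = latencies[int(len(latencies) * 0.99)]
```

The real measurement would be `ab -n 20000 -c 30` against the server, reading the "percentage of the requests served within a certain time" table.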

* Adding more resources will /NOT/ make the VM run faster

It most probably will, since we are sacrificing performance to minimise
memory use.

No. The times when it is slow are not bound by memory, but by CPU.
Please, please, please: read what I wrote. I have repeated myself so many
times, it really gets annoying.

For example: we opted for more threads, rather than
processes, which is known not to perform as well in Python, especially
for CPU-intensive tasks.

Who is "we"? Yes, I decided to (for now) only run with 2 wsgi
processes. But then again: how would that make execution faster?

If you provide me with hard numbers, concrete settings and
instructions on what to do in the UI to reproduce and compare the
settings, I'm willing to try them out.

It will
/NOT/ allow it to handle more requests.

It most probably will, since we reduced the number of processes to
minimise memory use, and slower serving of requests necessarily affects
the number of requests you can serve in any given time.

Again, this is wrong. It might allow 4 simultaneous "request the zip"
requests, but all 4 would take ages, and all would have the
memory leak, thus needing more RAM (which I'm not willing to assign,
since regular operation just doesn't need it).
(And the users that are just using the regular web interface would have
to wait as well.) Allowing more processes doesn't make sense when Pootle
doesn't queue/limit those lengthy processes by itself.
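
Pootle itself does not queue or limit these expensive requests; a per-process throttle of the kind hinted at above could be sketched like this (the names are hypothetical, not Pootle API):

```python
import threading

# Allow only one expensive ZIP generation at a time in this process;
# later requests wait their turn instead of piling up memory and CPU,
# while cheap page requests on other threads are unaffected.
_zip_slot = threading.BoundedSemaphore(1)

def generate_zip_throttled(language, build):
    with _zip_slot:
        return build(language)

# Demo: three concurrent requests are serialized through the semaphore.
results = []
results_lock = threading.Lock()

def build(language):
    with results_lock:
        results.append(language)
    return language

threads = [threading.Thread(target=generate_zip_throttled, args=(lang, build))
           for lang in ("de", "fr", "nl")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With such a gate in place, adding worker processes would at least not multiply the peak memory of simultaneous ZIP exports.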

Pootle is idling almost all of the time. There is less than one Apache
request per second on average, and for regular requests (i.e.
non-"generate-a-zip" actions) it can easily serve >>50 simultaneous
requests per second.

Let's keep an eye on things when people actually start to use the server.

Whether the requests are generated by a benchmark tool or by regular
users doesn't make any difference. The number it can handle is way
beyond any real load to expect.

Maybe if we
manage to give enough load to the server, he'll change his mind (or
find other ways to deal with the problem).

No, I won't change my mind, but depending on the load/effects of the
memory leak I'll reduce lifetime of the server-processes further.

I hope you will be reasonable and look at the data as it becomes
available, and at least consider changing your mind.

Again, this is out of the question; but all the *data*, the real facts I
have gathered so far, don't suggest anything different from my
current opinion.

As mentioned, I'm
pretty sure there is no memory leak.

I disagree, see above. Alternatively, give a reasonable explanation of
why the export still works fine even when the process that created
it (and thus the memory it occupied, and all possible caching it could
have done in its own space) is gone.

If you reduce the lifetime of the
server processes, you are just making performance worse, which is all
that Rimas warned the users about.

Numbers/examples of how performance can be worse, please. Otherwise I
cannot take your comments seriously, as my measurements showed otherwise.

I agree. Generating the ZIP files is slow. Doing it multi-threaded will
limit the performance for more users while it runs.

But the time they have to wait will be shorter.

* Requesting the other zips in the same project & language is fast.

So there is no point in clicking all zip URLs on a page at once (on
the contrary, then the requests will all cause CPU to be burnt, while
all of that is only needed a single time). If you want to download more
than one zip from a page, click the first one, wait until it is
generated and handed over to your browser, then click as many others
on the same page and get all of them quickly.

Yes, we have optimised for several likely cases here, and I
suggested some workarounds for some of the issues we're likely to hit
with the little bit of RAM available, which Rimas has already started
applying, as far as I know.

Don't suggest them only to Rimas, as it seems he doesn't communicate
such technical matters well.

* Any other time, doing in-place-translation, just browsing along is
supposed to be fast.

... as long as we're not hitting the current imposed limits of

Again: Numbers, no foggy guesswork please.

