

Coincidentally, this bug, which for whatever reason had started to repeatedly break "make check" in my local master builds, was also reported against (Fedora) LO 3.5.2 (see <https://bugzilla.redhat.com/show_bug.cgi?id=821134>) at about the same time I came up with the master fix below.

Therefore, I'd like to get this reviewed and backported to libreoffice-3-5. As the commit message explains, only part of it is strictly necessary to fix the bug, so I would also be fine if reviewers insisted on backporting only the two parts that are really needed and left out the two clean-up items. As the fix is mildly delicate (it introduces a thread join, which in principle has the potential to cause a deadlock) and is only relevant in specific scenarios (those involving UNO remote communication, i.e., during deployment of extensions or in explicit client/server applications), I would not suggest it for inclusion in the upcoming LO 3.5.4.
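
To give a feel for the shape of the change (and for where the deadlock risk of the new join would come from), here is a minimal, self-contained sketch of the pattern in plain C++11 with std::thread. It is only an analogue for illustration, not the actual binaryurp/cppu code, and all names in it are made up:

// MiniBridge mimics the fixed life cycle: terminate() is idempotent, takes the
// worker out of the object under the mutex, and joins it, so no request can
// still be running once the caller proceeds; the destructor merely delegates
// to terminate() instead of deferring clean-up to exit(3) time.  (The deadlock
// risk mentioned above would only arise if terminate() were ever called from
// the worker thread itself, i.e., a thread trying to join itself.)
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

class MiniBridge {
public:
    void start() {
        std::lock_guard<std::mutex> g(mutex_);
        worker_ = std::thread([this] { run(); });
    }

    void terminate() {
        std::thread w;
        {
            std::lock_guard<std::mutex> g(mutex_);
            if (terminated_) return;  // second and later calls are no-ops
            terminated_ = true;
            stop_ = true;
            w = std::move(worker_);   // take ownership under the mutex
        }
        if (w.joinable()) w.join();   // the join introduced by the fix
    }

    ~MiniBridge() { terminate(); }    // destructor only delegates

private:
    void run() {
        while (!stop_) {              // stand-in for processing remote requests
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }

    std::mutex mutex_;
    std::thread worker_;
    std::atomic<bool> stop_{false};
    bool terminated_ = false;
};

int main() {
    MiniBridge b;
    b.start();
    b.terminate();  // explicit shutdown; the destructor is then a no-op
}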

The master commit below does not apply cleanly to libreoffice-3-5 (at least not to libreoffice-3-5-4, where I tried it), so please find attached an adapted 0001-Fixed-ThreadPool-and-dependent-ORequestThread-life-c.patch.

Thanks,
Stephan

On 05/16/2012 10:21 PM, Stephan Bergmann wrote:
  binaryurp/source/bridge.cxx           |   52 ++++++++++++++++++++--------------
  binaryurp/source/bridge.hxx           |    4 +-
  cppu/source/threadpool/threadpool.cxx |   24 +++++++++++----
  cppu/source/threadpool/threadpool.hxx |    4 +-
  4 files changed, 52 insertions(+), 32 deletions(-)

New commits:
commit d015384e1d98fe77fd59339044f58efb1ab9fb25
Author: Stephan Bergmann <sbergman@redhat.com>
Date:   Wed May 16 22:09:21 2012 +0200

     Fixed ThreadPool (and dependent ORequestThread) life cycle

     At least with sw_complex test under load, it happened that an ORequestThread
     could still process a remote release request while the main thread was already
     in exit(3).  This was because (a) ThreadPool never joined with the spawned
     worker threads (which has been rectified by calling uno_threadpool_dispose(0)
     from the final uno_threadpool_destroy), and (b) binaryurp::Bridge called
     uno_threadpool_destroy only from its destructor (which could go as late as
     exit(3)) instead of from terminate.

     Additional clean up:
     * Access to Bridge's threadPool_ is now cleanly controlled by mutex_ (even
       though that might not be necessary in every case).
     * ThreadPool's stopDisposing got renamed to destroy, to make meaning clearer.

     Change-Id: I45fa76e80e790a11065e7bf8ac9d92af2e62f262
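
In terms of the public cppu C API, the resulting call ordering in the bridge is roughly the following sketch (not part of the patch; the function signatures are the ones visible in the diff below, while the header path and the BridgeLike name are assumptions made only for illustration):

// Teardown now happens in terminate(), not first in the destructor:
// dispose(pool) makes calls blocked in uno_threadpool_enter return, and
// destroying the last pool additionally joins the spawned worker threads
// (via the new uno_threadpool_dispose(0) call in uno_threadpool_destroy).
#include "uno/threadpool.h"  // assumed header for the uno_threadpool_* declarations

struct BridgeLike {
    uno_ThreadPool pool;

    BridgeLike(): pool(0) {}

    void start() {
        pool = uno_threadpool_create();
    }

    void terminate() {
        if (pool != 0) {
            uno_threadpool_dispose(pool);
            uno_threadpool_destroy(pool);
            pool = 0;
        }
    }

    ~BridgeLike() { terminate(); }  // the destructor merely delegates
};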

diff --git a/binaryurp/source/bridge.cxx b/binaryurp/source/bridge.cxx
index 50b873f..2d1f622 100644
--- a/binaryurp/source/bridge.cxx
+++ b/binaryurp/source/bridge.cxx
@@ -235,22 +235,28 @@ Bridge::Bridge(
  }

  void Bridge::start() {
-    assert(threadPool_ == 0 && !writer_.is() && !reader_.is());
-    threadPool_ = uno_threadpool_create();
-    assert(threadPool_ != 0);
-    writer_.set(new Writer(this));
-    writer_->launch();
-    reader_.set(new Reader(this));
-    reader_->launch();
-        // it is important to call reader_->launch() last here; both
-        // Writer::execute and Reader::execute can call Bridge::terminate, but
-        // Writer::execute is initially blocked in unblocked_.wait() until
-        // Reader::execute has called bridge_->sendRequestChangeRequest(), so
-        // effectively only reader_->launch() can lead to an early call to
-        // Bridge::terminate
+    rtl::Reference< Reader > r(new Reader(this));
+    rtl::Reference< Writer > w(new Writer(this));
+    {
+        osl::MutexGuard g(mutex_);
+        assert(threadPool_ == 0 && !writer_.is() && !reader_.is());
+        threadPool_ = uno_threadpool_create();
+        assert(threadPool_ != 0);
+        reader_ = r;
+        writer_ = w;
+    }
+    // It is important to call reader_->launch() last here; both
+    // Writer::execute and Reader::execute can call Bridge::terminate, but
+    // Writer::execute is initially blocked in unblocked_.wait() until
+    // Reader::execute has called bridge_->sendRequestChangeRequest(), so
+    // effectively only reader_->launch() can lead to an early call to
+    // Bridge::terminate
+    w->launch();
+    r->launch();
  }

  void Bridge::terminate() {
+    uno_ThreadPool tp;
      rtl::Reference< Reader > r;
      rtl::Reference< Writer > w;
      Listeners ls;
@@ -259,6 +265,7 @@ void Bridge::terminate() {
          if (terminated_) {
              return;
          }
+        tp = threadPool_;
          std::swap(reader_, r);
          std::swap(writer_, w);
          ls.swap(listeners_);
@@ -273,8 +280,8 @@ void Bridge::terminate() {
      w->stop();
      joinThread(r.get());
      joinThread(w.get());
-    assert(threadPool_ != 0);
-    uno_threadpool_dispose(threadPool_);
+    assert(tp != 0);
+    uno_threadpool_dispose(tp);
      Stubs s;
      {
          osl::MutexGuard g(mutex_);
@@ -301,6 +308,7 @@ void Bridge::terminate() {
                  "binaryurp", "caught runtime exception '"<<  e.Message<<  '\'');
          }
      }
+    uno_threadpool_destroy(tp);
  }

  css::uno::Reference< css::connection::XConnection > Bridge::getConnection()
@@ -330,7 +338,8 @@ BinaryAny Bridge::mapCppToBinaryAny(css::uno::Any const & cppAny) {
      return out;
  }

-uno_ThreadPool Bridge::getThreadPool() const {
+uno_ThreadPool Bridge::getThreadPool() {
+    osl::MutexGuard g(mutex_);
      assert(threadPool_ != 0);
      return threadPool_;
  }
@@ -571,7 +580,8 @@ bool Bridge::makeCall(
  {
      std::auto_ptr< IncomingReply > resp;
      {
-        AttachThread att(threadPool_);
+        uno_ThreadPool tp = getThreadPool();
+        AttachThread att(tp);
          PopOutgoingRequest pop(
              outgoingRequests_, att.getTid(),
              OutgoingRequest(OutgoingRequest::KIND_NORMAL, member, setter));
@@ -582,7 +592,7 @@ bool Bridge::makeCall(
          incrementCalls(true);
          incrementActiveCalls();
          void * job;
-        uno_threadpool_enter(threadPool_, &job);
+        uno_threadpool_enter(tp, &job);
          resp.reset(static_cast< IncomingReply * >(job));
          decrementActiveCalls();
          decrementCalls();
@@ -812,8 +822,8 @@ bool Bridge::isCurrentContextMode() {
  }

  Bridge::~Bridge() {
-    if (threadPool_ != 0) {
-        uno_threadpool_destroy(threadPool_);
+    if (getThreadPool() != 0) {
+        terminate();
      }
  }

@@ -940,7 +950,7 @@ void Bridge::sendProtPropRequest(
  void Bridge::makeReleaseCall(
      rtl::OUString const & oid, css::uno::TypeDescription const & type)
  {
-    AttachThread att(threadPool_);
+    AttachThread att(getThreadPool());
      sendRequest(
          att.getTid(), oid, type,
          css::uno::TypeDescription(
diff --git a/binaryurp/source/bridge.hxx b/binaryurp/source/bridge.hxx
index cf281f2..8d66789 100644
--- a/binaryurp/source/bridge.hxx
+++ b/binaryurp/source/bridge.hxx
@@ -106,7 +106,7 @@ public:

      BinaryAny mapCppToBinaryAny(com::sun::star::uno::Any const & cppAny);

-    uno_ThreadPool getThreadPool() const;
+    uno_ThreadPool getThreadPool();

      rtl::Reference< Writer > getWriter();

@@ -258,11 +258,11 @@ private:
      com::sun::star::uno::TypeDescription protPropType_;
      com::sun::star::uno::TypeDescription protPropRequest_;
      com::sun::star::uno::TypeDescription protPropCommit_;
-    uno_ThreadPool threadPool_;
      OutgoingRequests outgoingRequests_;

      osl::Mutex mutex_;
      Listeners listeners_;
+    uno_ThreadPool threadPool_;
      rtl::Reference< Writer > writer_;
      rtl::Reference< Reader > reader_;
      bool currentContextMode_;
diff --git a/cppu/source/threadpool/threadpool.cxx b/cppu/source/threadpool/threadpool.cxx
index 9dda867..d14e260 100644
--- a/cppu/source/threadpool/threadpool.cxx
+++ b/cppu/source/threadpool/threadpool.cxx
@@ -26,7 +26,10 @@
   *
   ************************************************************************/

+#include "sal/config.h"
+
  #include <boost/unordered_map.hpp>
+#include <cassert>
  #include <stdio.h>

  #include <osl/diagnose.h>
@@ -73,7 +76,7 @@ namespace cppu_threadpool
          m_lst.push_back( nDisposeId );
      }

-    void DisposedCallerAdmin::stopDisposing( sal_Int64 nDisposeId )
+    void DisposedCallerAdmin::destroy( sal_Int64 nDisposeId )
      {
          MutexGuard guard( m_mutex );
          for( DisposedCallerList::iterator ii = m_lst.begin() ;
@@ -172,9 +175,9 @@ namespace cppu_threadpool
          }
      }

-    void ThreadPool::stopDisposing( sal_Int64 nDisposeId )
+    void ThreadPool::destroy( sal_Int64 nDisposeId )
      {
-        m_DisposedCallerAdmin->stopDisposing( nDisposeId );
+        m_DisposedCallerAdmin->destroy( nDisposeId );
      }

      /******************
@@ -480,13 +483,14 @@ uno_threadpool_dispose( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
  extern "C" void SAL_CALL
  uno_threadpool_destroy( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
  {
-    ThreadPool::getInstance()->stopDisposing(
+    assert(hPool != 0);
+
+    ThreadPool::getInstance()->destroy(
          sal::static_int_cast< sal_Int64 >(
              reinterpret_cast< sal_IntPtr >(hPool)) );

-    if( hPool )
+    bool empty;
      {
-        // special treatment for 0 !
          OSL_ASSERT( g_pThreadpoolHashSet );

          MutexGuard guard( Mutex::getGlobalMutex() );
@@ -496,12 +500,18 @@ uno_threadpool_destroy( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
          g_pThreadpoolHashSet->erase( ii );
          delete hPool;

-        if( g_pThreadpoolHashSet->empty() )
+        empty = g_pThreadpoolHashSet->empty();
+        if( empty )
          {
              delete g_pThreadpoolHashSet;
              g_pThreadpoolHashSet = 0;
          }
      }
+
+    if( empty )
+    {
+        uno_threadpool_dispose( 0 );
+    }
  }

  /* vim:set shiftwidth=4 softtabstop=4 expandtab: */
diff --git a/cppu/source/threadpool/threadpool.hxx b/cppu/source/threadpool/threadpool.hxx
index 498ea4a..8b64ed1 100644
--- a/cppu/source/threadpool/threadpool.hxx
+++ b/cppu/source/threadpool/threadpool.hxx
@@ -90,7 +90,7 @@ namespace cppu_threadpool {
          static DisposedCallerAdminHolder getInstance();

          void dispose( sal_Int64 nDisposeId );
-        void stopDisposing( sal_Int64 nDisposeId );
+        void destroy( sal_Int64 nDisposeId );
          sal_Bool isDisposed( sal_Int64 nDisposeId );

      private:
@@ -109,7 +109,7 @@ namespace cppu_threadpool {
          static ThreadPoolHolder getInstance();

          void dispose( sal_Int64 nDisposeId );
-        void stopDisposing( sal_Int64 nDisposeId );
+        void destroy( sal_Int64 nDisposeId );

          void addJob( const ByteSequence &aThreadId,
                       sal_Bool bAsynchron,

From 3a8d7fc7bef9f51091e571b35a1bd805987ad541 Mon Sep 17 00:00:00 2001
From: Stephan Bergmann <sbergman@redhat.com>
Date: Wed, 16 May 2012 22:09:21 +0200
Subject: [PATCH] Fixed ThreadPool (and dependent ORequestThread) life cycle

At least with sw_complex test under load, it happened that an ORequestThread
could still process a remote release request while the main thread was already
in exit(3).  This was because (a) ThreadPool never joined with the spawned
worker threads (which has been rectified by calling uno_threadpool_dispose(0)
from the final uno_threadpool_destroy), and (b) binaryurp::Bridge called
uno_threadpool_destroy only from its destructor (which could go as late as
exit(3)) instead of from terminate.

Additional clean up:
* Access to Bridge's threadPool_ is now cleanly controlled by mutex_ (even
  though that might not be necessary in every case).
* ThreadPool's stopDisposing got renamed to destroy, to make meaning clearer.

(cherry picked from commit d015384e1d98fe77fd59339044f58efb1ab9fb25)

Conflicts:

        binaryurp/source/bridge.cxx

Change-Id: I45fa76e80e790a11065e7bf8ac9d92af2e62f262
---
 binaryurp/source/bridge.cxx           |   46 ++++++++++++++++++++++----------
 binaryurp/source/bridge.hxx           |    4 +-
 cppu/source/threadpool/threadpool.cxx |   24 ++++++++++++-----
 cppu/source/threadpool/threadpool.hxx |    4 +-
 4 files changed, 52 insertions(+), 26 deletions(-)

diff --git a/binaryurp/source/bridge.cxx b/binaryurp/source/bridge.cxx
index b491a2a..f591fe0 100644
--- a/binaryurp/source/bridge.cxx
+++ b/binaryurp/source/bridge.cxx
@@ -234,16 +234,28 @@ Bridge::Bridge(
 }
 
 void Bridge::start() {
-    assert(threadPool_ == 0 && !writer_.is() && !reader_.is());
-    threadPool_ = uno_threadpool_create();
-    assert(threadPool_ != 0);
-    writer_.set(new Writer(this));
-    writer_->create();
-    reader_.set(new Reader(this));
-    reader_->create();
+    rtl::Reference< Reader > r(new Reader(this));
+    rtl::Reference< Writer > w(new Writer(this));
+    {
+        osl::MutexGuard g(mutex_);
+        assert(threadPool_ == 0 && !writer_.is() && !reader_.is());
+        threadPool_ = uno_threadpool_create();
+        assert(threadPool_ != 0);
+        reader_ = r;
+        writer_ = w;
+    }
+    // It is important to call reader_->create() last here; both
+    // Writer::execute and Reader::execute can call Bridge::terminate, but
+    // Writer::execute is initially blocked in unblocked_.wait() until
+    // Reader::execute has called bridge_->sendRequestChangeRequest(), so
+    // effectively only reader_->create() can lead to an early call to
+    // Bridge::terminate
+    w->create();
+    r->create();
 }
 
 void Bridge::terminate() {
+    uno_ThreadPool tp;
     rtl::Reference< Reader > r;
     rtl::Reference< Writer > w;
     Listeners ls;
@@ -252,6 +264,7 @@ void Bridge::terminate() {
         if (terminated_) {
             return;
         }
+        tp = threadPool_;
         std::swap(reader_, r);
         std::swap(writer_, w);
         ls.swap(listeners_);
@@ -266,8 +279,8 @@ void Bridge::terminate() {
     w->stop();
     joinThread(r.get());
     joinThread(w.get());
-    assert(threadPool_ != 0);
-    uno_threadpool_dispose(threadPool_);
+    assert(tp != 0);
+    uno_threadpool_dispose(tp);
     Stubs s;
     {
         osl::MutexGuard g(mutex_);
@@ -294,6 +307,7 @@ void Bridge::terminate() {
                 "binaryurp", "caught runtime exception '" << e.Message << '\'');
         }
     }
+    uno_threadpool_destroy(tp);
 }
 
 css::uno::Reference< css::connection::XConnection > Bridge::getConnection()
@@ -323,7 +337,8 @@ BinaryAny Bridge::mapCppToBinaryAny(css::uno::Any const & cppAny) {
     return out;
 }
 
-uno_ThreadPool Bridge::getThreadPool() const {
+uno_ThreadPool Bridge::getThreadPool() {
+    osl::MutexGuard g(mutex_);
     assert(threadPool_ != 0);
     return threadPool_;
 }
@@ -564,7 +579,8 @@ bool Bridge::makeCall(
 {
     std::auto_ptr< IncomingReply > resp;
     {
-        AttachThread att(threadPool_);
+        uno_ThreadPool tp = getThreadPool();
+        AttachThread att(tp);
         PopOutgoingRequest pop(
             outgoingRequests_, att.getTid(),
             OutgoingRequest(OutgoingRequest::KIND_NORMAL, member, setter));
@@ -575,7 +591,7 @@ bool Bridge::makeCall(
         incrementCalls(true);
         incrementActiveCalls();
         void * job;
-        uno_threadpool_enter(threadPool_, &job);
+        uno_threadpool_enter(tp, &job);
         resp.reset(static_cast< IncomingReply * >(job));
         decrementActiveCalls();
         decrementCalls();
@@ -806,8 +822,8 @@ bool Bridge::isCurrentContextMode() {
 }
 
 Bridge::~Bridge() {
-    if (threadPool_ != 0) {
-        uno_threadpool_destroy(threadPool_);
+    if (getThreadPool() != 0) {
+        terminate();
     }
 }
 
@@ -934,7 +950,7 @@ void Bridge::sendProtPropRequest(
 void Bridge::makeReleaseCall(
     rtl::OUString const & oid, css::uno::TypeDescription const & type)
 {
-    AttachThread att(threadPool_);
+    AttachThread att(getThreadPool());
     sendRequest(
         att.getTid(), oid, type,
         css::uno::TypeDescription(
diff --git a/binaryurp/source/bridge.hxx b/binaryurp/source/bridge.hxx
index cf281f2..8d66789 100644
--- a/binaryurp/source/bridge.hxx
+++ b/binaryurp/source/bridge.hxx
@@ -106,7 +106,7 @@ public:
 
     BinaryAny mapCppToBinaryAny(com::sun::star::uno::Any const & cppAny);
 
-    uno_ThreadPool getThreadPool() const;
+    uno_ThreadPool getThreadPool();
 
     rtl::Reference< Writer > getWriter();
 
@@ -258,11 +258,11 @@ private:
     com::sun::star::uno::TypeDescription protPropType_;
     com::sun::star::uno::TypeDescription protPropRequest_;
     com::sun::star::uno::TypeDescription protPropCommit_;
-    uno_ThreadPool threadPool_;
     OutgoingRequests outgoingRequests_;
 
     osl::Mutex mutex_;
     Listeners listeners_;
+    uno_ThreadPool threadPool_;
     rtl::Reference< Writer > writer_;
     rtl::Reference< Reader > reader_;
     bool currentContextMode_;
diff --git a/cppu/source/threadpool/threadpool.cxx b/cppu/source/threadpool/threadpool.cxx
index 312fd89..f9f0be6 100644
--- a/cppu/source/threadpool/threadpool.cxx
+++ b/cppu/source/threadpool/threadpool.cxx
@@ -26,7 +26,10 @@
  *
  ************************************************************************/
 
+#include "sal/config.h"
+
 #include <boost/unordered_map.hpp>
+#include <cassert>
 #include <stdio.h>
 
 #include <osl/diagnose.h>
@@ -73,7 +76,7 @@ namespace cppu_threadpool
         m_lst.push_back( nDisposeId );
     }
 
-    void DisposedCallerAdmin::stopDisposing( sal_Int64 nDisposeId )
+    void DisposedCallerAdmin::destroy( sal_Int64 nDisposeId )
     {
         MutexGuard guard( m_mutex );
         for( DisposedCallerList::iterator ii = m_lst.begin() ;
@@ -172,9 +175,9 @@ namespace cppu_threadpool
         }
     }
 
-    void ThreadPool::stopDisposing( sal_Int64 nDisposeId )
+    void ThreadPool::destroy( sal_Int64 nDisposeId )
     {
-        m_DisposedCallerAdmin->stopDisposing( nDisposeId );
+        m_DisposedCallerAdmin->destroy( nDisposeId );
     }
 
     /******************
@@ -480,13 +483,14 @@ uno_threadpool_dispose( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
 extern "C" void SAL_CALL
 uno_threadpool_destroy( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
 {
-    ThreadPool::getInstance()->stopDisposing(
+    assert(hPool != 0);
+
+    ThreadPool::getInstance()->destroy(
         sal::static_int_cast< sal_Int64 >(
             reinterpret_cast< sal_IntPtr >(hPool)) );
 
-    if( hPool )
+    bool empty;
     {
-        // special treatment for 0 !
         OSL_ASSERT( g_pThreadpoolHashSet );
 
         MutexGuard guard( Mutex::getGlobalMutex() );
@@ -496,12 +500,18 @@ uno_threadpool_destroy( uno_ThreadPool hPool ) SAL_THROW_EXTERN_C()
         g_pThreadpoolHashSet->erase( ii );
         delete hPool;
 
-        if( g_pThreadpoolHashSet->empty() )
+        empty = g_pThreadpoolHashSet->empty();
+        if( empty )
         {
             delete g_pThreadpoolHashSet;
             g_pThreadpoolHashSet = 0;
         }
     }
+
+    if( empty )
+    {
+        uno_threadpool_dispose( 0 );
+    }
 }
 
 /* vim:set shiftwidth=4 softtabstop=4 expandtab: */
diff --git a/cppu/source/threadpool/threadpool.hxx b/cppu/source/threadpool/threadpool.hxx
index 273798c..18bb47a 100644
--- a/cppu/source/threadpool/threadpool.hxx
+++ b/cppu/source/threadpool/threadpool.hxx
@@ -90,7 +90,7 @@ namespace cppu_threadpool {
         static DisposedCallerAdminHolder getInstance();
 
         void dispose( sal_Int64 nDisposeId );
-        void stopDisposing( sal_Int64 nDisposeId );
+        void destroy( sal_Int64 nDisposeId );
         sal_Bool isDisposed( sal_Int64 nDisposeId );
 
     private:
@@ -109,7 +109,7 @@ namespace cppu_threadpool {
         static ThreadPoolHolder getInstance();
 
         void dispose( sal_Int64 nDisposeId );
-        void stopDisposing( sal_Int64 nDisposeId );
+        void destroy( sal_Int64 nDisposeId );
 
         void addJob( const ByteSequence &aThreadId,
                      sal_Bool bAsynchron,
-- 
1.7.7.6

