comp.lang.c++

Request for comments about synchronized queue using boost

Nordlöw

10/15/2008 1:37:00 PM

I am currently designing a synchronized queue used to communicate
between threads. Is the code given below a good solution? Am I
using mutex lock/unlock more than needed?

Are there any resources out there on the Internet on how to design
*thread-safe* *efficient* data-structures?

/Nordlöw

The file synched_queue.hpp follows:

#ifndef PNW__SYNCHED_QUEUE_HPP
#define PNW__SYNCHED_QUEUE_HPP

/*!
 * @file synched_queue.hpp
 * @brief Synchronized (Thread Safe) Container Wrapper on std:queue
 *        using Boost::Thread.
 */

#include <queue>
#include <iostream>

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>

// ============================================================================

template <typename T>
class synched_queue
{
    std::queue<T> q;            ///< Queue.
    boost::mutex m;             ///< Mutex.
public:
    /*!
     * Push @p value.
     */
    void push(const T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        q.push(value);
    }

    /*!
     * Try and pop into @p value, returning directly in any case.
     * @return true if pop was success, false otherwise.
     */
    bool try_pop(T & value) {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        if (q.size()) {
            value = q.front();
            q.pop();
            return true;
        }
        return false;
    }

    /// Pop and return value, possibly waiting forever.
    T wait_pop() {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        // wait until queue has at least on element()
        c.wait(sl, boost::bind(&std::queue<T>::size, q));
        T value = q.front();
        q.pop();
        return value;
    }

    size_type size() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.size();
    }

    bool empty() const {
        boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
        return q.empty();
    }

};

// ============================================================================

#endif
19 Answers

Hendrik Schober

10/15/2008 3:26:00 PM


Nordlöw wrote:
> I am currently designing a synchronized queue used to communicate
> between threads. Is the code given below a good solution? Am I
> using mutex lock/unlock more than needed?
>
> Are there any resources out there on the Internet on how to design
> *thread-safe* *efficient* data-structures?

comp.programming.threads?

> /Nordlöw
>
> [...]
>
> /// Pop and return value, possibly waiting forever.
> T wait_pop() {
>     boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>     // wait until queue has at least on element()
>     c.wait(sl, boost::bind(&std::queue<T>::size, q));
>     T value = q.front();
>     q.pop();
>     return value;
> }

I haven't done any threading in a decade or so, but I wonder how
in the above code anything could be put into the locked queue.
What am I missing?
Oh, and I wonder what 'c' is.

> [...]

Schobi

Maxim Yegorushkin

10/15/2008 4:03:00 PM


On Oct 15, 2:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
> I am currently designing a synchronized queue used to communicate
> between threads. Is the code given below a good solution? Am I
> using mutex lock/unlock more than needed?
> /Nordlöw
>
> The file synched_queue.hpp follows:
>
> #ifndef PNW__SYNCHED_QUEUE_HPP
> #define PNW__SYNCHED_QUEUE_HPP
>
> /*!
>  * @file synched_queue.hpp
>  * @brief Synchronized (Thread Safe) Container Wrapper on std:queue
>  *        using Boost::Thread.
>  */
>
> #include <queue>
> #include <iostream>
>
> #include <boost/bind.hpp>
> #include <boost/thread/thread.hpp>
> #include <boost/thread/mutex.hpp>
> #include <boost/thread/condition.hpp>
>
> // ============================================================================
>
> template <typename T>
> class synched_queue
> {
>     std::queue<T> q;              ///< Queue.
>     boost::mutex m;             ///< Mutex.

A member variable is missing here:

    boost::condition c;

> public:
>     /*!
>      * Push @p value.
>      */
>     void push(const T & value) {
>         boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>         q.push(value);

You need to notify other threads waiting on the queue:

    c.notify_one();

>     }
>
>     /*!
>      * Try and pop into @p value, returning directly in any case.
>      * @return true if pop was success, false otherwise.
>      */
>     bool try_pop(T & value) {
>         boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>         if (q.size()) {
>             value = q.front();
>             q.pop();
>             return true;
>         }
>         return false;
>     }
>
>     /// Pop and return value, possibly waiting forever.
>     T wait_pop() {
>         boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>         // wait until queue has at least on element()

The following line:

>         c.wait(sl, boost::bind(&std::queue<T>::size, q));

boost::bind(&std::queue<T>::size, q) stores a copy of the queue in the
object created by boost::bind, so that the wait never finishes if the
queue is empty (and if the condition variable is not notified (see
above)).

It should be as simple as:

    while(q.empty())
        c.wait(sl);

(A corrected wait_pop along these lines is sketched just after the
quoted code below.)

>         T value = q.front();
>         q.pop();
>         return value;
>     }
>
>     size_type size() const {
>         boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>         return q.size();
>     }
>
>     bool empty() const {
>         boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>         return q.empty();
>     }
>
> };
>
> // ============================================================================
>
> #endif
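
Putting those suggestions together (the boost::condition member c, the
c.notify_one() in push, and the while/wait loop), a minimally corrected
wait_pop might look roughly like this (untested sketch):

    /// Pop and return value, possibly waiting forever.
    T wait_pop() {
        boost::mutex::scoped_lock sl(m);
        // wait until the queue has at least one element
        while (q.empty())
            c.wait(sl); // releases m while waiting, re-locks it on wake-up
        T value = q.front();
        q.pop();
        return value;
    }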


The other thing is that the queue does not support destruction: the
destructor does not unblock any threads blocked in wait.

Apart from that, the mutex is held for too long. You don't really need
to hold the lock when allocating memory for elements and when invoking
the copy constructor of the elements.

Here is an improved version (although a bit simplified):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <boost/function.hpp>
#include <boost/noncopyable.hpp>
#include <stdexcept>
#include <list>

template<class T>
class atomic_queue : private boost::noncopyable
{
private:
    boost::mutex mtx_;
    boost::condition cnd_;
    bool cancel_;
    unsigned waiting_;

    // use list as a queue because it allows for splicing:
    // moving elements between lists without any memory allocation and copying
    typedef std::list<T> queue_type;
    queue_type q_;

public:
    struct cancelled : std::logic_error
    {
        cancelled() : std::logic_error("cancelled") {}
    };

    atomic_queue()
        : cancel_()
        , waiting_()
    {}

    ~atomic_queue()
    {
        // cancel all waiting threads
        this->cancel();
    }

    void cancel()
    {
        // cancel all waiting threads
        boost::mutex::scoped_lock l(mtx_);
        cancel_ = true;
        cnd_.notify_all();
        // and wait till they are done
        while(waiting_)
            cnd_.wait(l);
    }

    void push(T const& t)
    {
        // this avoids an allocation inside the critical section below
        queue_type tmp(&t, &t + 1);
        {
            boost::mutex::scoped_lock l(mtx_);
            q_.splice(q_.end(), tmp);
        }
        cnd_.notify_one();
    }

    // this function provides only basic exception safety if T's copy ctor can
    // throw, or strong exception safety if T's copy ctor is nothrow
    T pop()
    {
        // this avoids copying T inside the critical section below
        queue_type tmp;
        {
            boost::mutex::scoped_lock l(mtx_);
            ++waiting_;
            while(!cancel_ && q_.empty())
                cnd_.wait(l);
            --waiting_;
            if(cancel_)
            {
                cnd_.notify_all();
                throw cancelled();
            }
            tmp.splice(tmp.end(), q_, q_.begin());
        }
        return tmp.front();
    }
};

typedef boost::function<void()> unit_of_work;
typedef atomic_queue<unit_of_work> work_queue;

void typical_thread_pool_working_thread(work_queue* q)
try
{
    for(;;)
        q->pop()();
}
catch(work_queue::cancelled&)
{
    // time to terminate the thread
}
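
To see how the pieces are meant to fit together, here is a rough usage
sketch building on the definitions above (the thread count and the
example task are made up):

    #include <boost/thread/thread.hpp>
    #include <boost/bind.hpp>
    #include <cstdio>

    void say_hello() { std::printf("hello\n"); }

    int main()
    {
        work_queue q;

        // start two worker threads that pop and execute units of work
        boost::thread t1(boost::bind(&typical_thread_pool_working_thread, &q));
        boost::thread t2(boost::bind(&typical_thread_pool_working_thread, &q));

        // producers simply push callable units of work
        for (int i = 0; i != 10; ++i)
            q.push(&say_hello);

        // cancel() wakes blocked workers and makes pop() throw cancelled;
        // note that work still queued at that point is discarded
        q.cancel();
        t1.join();
        t2.join();
    }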

> Are there any resources out there on the Internet on how to design
> *thread-safe* *efficient* data-structures?

I would recommend the book "Programming with POSIX Threads" by David
R. Butenhof.

--
Max

Thomas J. Gritzan

10/15/2008 4:04:00 PM


Hendrik Schober wrote:
> Nordlöw wrote:
>> I am currently designing a synchronized queue used to communicate
>> between threads. Is the code given below a good solution? Am I
>> using mutex lock/unlock more than needed?
>>
>> Are there any resources out there on the Internet on how to design
>> *thread-safe* *efficient* data-structures?
>
> comp.programming.threads?
>
>> /Nordlöw
>>
>> [...]
>>
>> /// Pop and return value, possibly waiting forever.
>> T wait_pop() {
>>     boost::mutex::scoped_lock sl(m); // NOTE: lock mutex
>>     // wait until queue has at least on element()
>>     c.wait(sl, boost::bind(&std::queue<T>::size, q));
>>     T value = q.front();
>>     q.pop();
>>     return value;
>> }
>
> I haven't done any threading in a decade or so, but I wonder how
> in the above code anything could be put into the locked queue.
> What am I missing?
> Oh, and I wonder what 'c' is.

c is a condition variable:
http://www.boost.org/doc/libs/1_36_0/doc/html/thread/synchronization.html#thread.synchronization.c...

You lock the mutex, then wait for a condition, which (automatically)
unlocks the mutex, and locks it again if the condition occurs.
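
In code, the canonical waiting pattern looks something like this (a
generic sketch, independent of the queue above):

    boost::mutex m;
    boost::condition c;
    bool ready = false;   // the actual "condition", protected by m

    void wait_until_ready()
    {
        boost::mutex::scoped_lock lock(m);
        while (!ready)    // re-check after every wake-up (spurious wake-ups happen)
            c.wait(lock); // atomically unlocks m, sleeps, re-locks m when woken
        // here m is locked again and ready == true
    }

    void set_ready()
    {
        {
            boost::mutex::scoped_lock lock(m);
            ready = true;
        }
        c.notify_one();   // wake one waiting thread
    }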

--
Thomas

Hendrik Schober

10/15/2008 6:16:00 PM


Thomas J. Gritzan wrote:
> [...]
>> I haven't done any threading in a decade or so, but I wonder how
>> in the above code anything could be put into the locked queue.
>> What am I missing?
>> Oh, and I wonder what 'c' is.
>
> c is a condition variable:
> http://www.boost.org/doc/libs/1_36_0/doc/html/thread/synchronization.html#thread.synchronization.c...
>
> You lock the mutex, then wait for a condition, which (automatically)
> unlocks the mutex, and locks it again if the condition occurs.

Ah, thanks. I haven't looked at boost's threads yet.

Schobi

Nordlöw

10/16/2008 2:45:00 PM


On 15 Okt, 18:02, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:
> [...]

Doesn't the push-argument "T const & t" instead of my version "const T
& t" mean that we don't copy at all here? I believe &t evaluates to
the memory pointer of t:

void push(T const& t)
{
    // this avoids an allocation inside the critical section below
    queue_type tmp(&t, &t + 1);
    {
        boost::mutex::scoped_lock l(mtx_);
        q_.splice(q_.end(), tmp);
    }
    cnd_.notify_one();
}

/Nordlöw

Nordlöw

10/16/2008 2:46:00 PM


On 15 Okt, 20:16, Hendrik Schober <spamt...@gmx.de> wrote:
> Thomas J. Gritzan wrote:
> > [...]
> >>  I haven't done any threading in a decade or so, but I wonder how
> >>  in the above code anything could be put into the locked queue.
> >>  What am I missing?
> >>  Oh, and I wonder what 'c' is.
>
> > c is a condition variable:
> >http://www.boost.org/doc/libs/1_36_0/doc/html/thread/synch.......
>
> > You lock the mutex, then wait for a condition, which (automatically)
> > unlocks the mutex, and locks it again if the condition occurs.
>
>   Ah, thanks. I haven't looked at boost's threads yet.
>
>   Schobi

How can I use your queue structure in the following code example:


#include "../synched_queue.hpp"
#include "../threadpool/include/threadpool.hpp"
#include <iostream>

using namespace boost::threadpool;

template <typename T>
void produce(synched_queue<T> & q, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        T x = i;
        q.push(x);
        std::cout << "i:" << i << " produced: " << x << std::endl;
    }
}

template <typename T>
void consume(synched_queue<T> & q, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        T x = q.wait_pop();
        std::cout << "i:" << i << " consumed: " << x << std::endl;
    }
}

int main()
{
    typedef float Elm;
    synched_queue<float> q;
    // boost::thread pt(boost::bind(produce<Elm>, q, 10));
    // boost::thread ct(boost::bind(consume<Elm>, q, 10));
    // pt.join();
    // ct.join();
    return 0;
}


Thanks in advance,
/Nordlöw
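
For what it is worth, the commented-out lines above cannot work as
written, because synched_queue contains a boost::mutex and is therefore
noncopyable; the queue has to be passed to the threads by reference,
e.g. with boost::ref. A rough sketch of such a main() (assuming the
wait_pop/notify fixes discussed earlier in the thread have been
applied):

    #include <boost/thread/thread.hpp>
    #include <boost/bind.hpp>
    #include <boost/ref.hpp>

    int main()
    {
        typedef float Elm;
        synched_queue<Elm> q;
        boost::thread pt(boost::bind(&produce<Elm>, boost::ref(q), 10));
        boost::thread ct(boost::bind(&consume<Elm>, boost::ref(q), 10));
        pt.join();
        ct.join();
        return 0;
    }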

Maxim Yegorushkin

10/16/2008 3:42:00 PM


On Oct 16, 3:44 pm, Nordlöw <per.nord...@gmail.com> wrote:
> [...]
>
> Doesn't the push-argument "T const & t" instead of my version "const T
> & t" mean that we don't copy at all here?

No, T const& and const T& are the same thing: a reference to a constant
T.
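
For example, both of these declare exactly the same parameter type:

    void push(const T& t); // "const T&"
    void push(T const& t); // "T const&": same type, const written on the other side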

> I believe &t evaluates to
> the memory pointer of t:
>
>     void push(T const& t)
>     {
>         // this avoids an allocation inside the critical section below
>         queue_type tmp(&t, &t + 1);
>         {
>             boost::mutex::scoped_lock l(mtx_);
>             q_.splice(q_.end(), tmp);
>         }
>         cnd_.notify_one();
>     }

The trick here is that element t is first inserted in a temporary list
tmp on the stack.

queue_type tmp(&t, &t + 1); // create a list with a copy of t

This involves allocating memory and copying t. It is done without
holding the lock because allocating memory may be expensive (it might
even cause the system to swap), and while you hold the lock none of
the worker threads can pop elements from the queue. Next, the lock is
acquired and the element is moved from list tmp into q_:

q_.splice(q_.end(), tmp);

This operation does not involve any memory allocation or copying of
elements (it merely relinks nodes of the doubly-linked lists), which
makes your critical section execute really fast without stalling the
worker threads for too long.
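
As a tiny standalone illustration of the splice trick (a made-up
example, not from the queue code itself):

    #include <list>
    #include <cassert>

    int main()
    {
        std::list<int> q;          // plays the role of q_
        std::list<int> tmp(1, 42); // one node, allocated and filled outside the lock
        q.splice(q.end(), tmp);    // O(1): relinks the node; no allocation, no copy of 42
        assert(tmp.empty());
        assert(q.size() == 1 && q.front() == 42);
    }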

--
Max

Szabolcs Ferenczi

10/16/2008 3:43:00 PM


On Oct 15, 3:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
> I am currently designing a synchronized queue used to communicate
> between threads. Is the code given below a good solution?

Not really.

> [...]
> Are there any resources out there on the Internet on how to design
> *thread-safe* *efficient* data-structures?

Sure.
http://www.google.nl/search?q=boost+thread+s...

Best Regards,
Szabolcs

Maxim Yegorushkin

10/16/2008 3:46:00 PM


On Oct 15, 2:36 pm, Nordlöw <per.nord...@gmail.com> wrote:
> I am currently designing a synchronized queue used to communicate
> between threads. Is the code given below a good solution? Am I
> using mutex lock/unlock more than needed?
>
> Are there any resources out there on the Internet on how to design
> *thread-safe* *efficient* data-structures?

You can also try concurrent_queue from
http://www.threadingbuildingblocks.org/codesamples.php#concur...

Scout around that link for more documentation.
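
Basic usage looks roughly like this (a sketch against the current TBB
interface, where the non-blocking pop is spelled try_pop; older
releases called it pop_if_present):

    #include <tbb/concurrent_queue.h>
    #include <cstdio>

    int main()
    {
        tbb::concurrent_queue<int> q;
        q.push(42);           // thread-safe, no external locking needed

        int x;
        if (q.try_pop(x))     // non-blocking; returns false if the queue is empty
            std::printf("popped %d\n", x);
    }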

--
Max

James Kanze

10/17/2008 7:44:00 AM


On Oct 16, 5:42 pm, Szabolcs Ferenczi <szabolcs.feren...@gmail.com>
wrote:
> On Oct 15, 3:36 pm, Nordlöw <per.nord...@gmail.com> wrote:

> > [...]
> > Are there any resources out there on the Internet on how to
> > design *thread-safe* *efficient* data-structures?

> Sure. http://www.google.nl/search?q=boost+thread+s...

You have to be very careful with googling in cases like this.
There's an awful lot of junk on the net. Just looking at the
first hit, for example, it's quite clear that the author doesn't
know what he's talking about, and I suspect that that's true in
a large number of cases.

--
James Kanze (GABI Software) email:james.kanze@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34