comp.programming.threads

More about NUMA and scalability...

Ramine

3/20/2015 9:01:00 PM

Hello,


I have corrected some typos, please read again...


As you have seen me say in my previous post, I have explained a very
important thing, but we have to be smarter than that and see the overall
picture clearly. As you have noticed, researchers are inventing
transactional memory, but transactional memory and my SeqlockX are
optimistic mechanisms, and this means that you cannot always use them in
a high-level way, for example with AVL trees, Red-Black trees, and
Skiplists. Transactional memory cannot be used in a high-level way there
because the writers can modify the pointers, and this can raise
exceptions inside the readers and inside the writers, and you cannot
simply wrap the insert() and search() and delete() from the outside,
because you have to respect the logic of the sequential algorithms. It
is the same with my SeqlockX: in this situation you have to use them in
a finer-grained manner from inside the insert() and delete() and
search() of the algorithms. This is the problem with optimistic
mechanisms like transactional memory and my SeqlockX, and SMR and RCU
have the same problem. But with scalable reader-writer locks you can
reason in a high-level manner and put the RLock()/RUnlock() and
WLock()/WUnlock() in a straightforward manner around the insert() and
search() and delete() of the AVL tree or Red-Black tree or Skiplist;
that's the advantage of scalable reader-writer locks.
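
To make the contrast concrete, here is a minimal FreePascal/Delphi
sketch of that high-level pattern: the reader-writer lock is simply
placed around the calls to the unmodified sequential algorithm.
TScalableRWLock and TAVLTree are only illustrative placeholder names
(any scalable reader-writer lock exposing RLock()/RUnlock()/WLock()/
WUnlock() and any sequential AVL tree would do), so this is a sketch of
the pattern, not my actual library code.

{ Sketch: wrapping a sequential AVL tree with a reader-writer lock at a
  high level. TScalableRWLock and TAVLTree are illustrative placeholders,
  not actual library units. }
unit ConcurrentAVLSketch;

{$ifdef FPC}{$mode delphi}{$endif}

interface

uses
  ScalableRWLock, AVLTree;   // hypothetical units providing the two classes below

type
  TConcurrentAVLTree = class
  private
    FLock: TScalableRWLock;  // reader-writer lock with RLock/RUnlock/WLock/WUnlock
    FTree: TAVLTree;         // plain sequential AVL tree, its logic is untouched
  public
    constructor Create;
    destructor Destroy; override;
    function Search(const Key: Integer): Pointer;
    procedure Insert(const Key: Integer; const Value: Pointer);
    { Delete() follows exactly the same pattern as Insert(). }
  end;

implementation

constructor TConcurrentAVLTree.Create;
begin
  inherited Create;
  FLock := TScalableRWLock.Create;
  FTree := TAVLTree.Create;
end;

destructor TConcurrentAVLTree.Destroy;
begin
  FTree.Free;
  FLock.Free;
  inherited Destroy;
end;

function TConcurrentAVLTree.Search(const Key: Integer): Pointer;
begin
  FLock.RLock;                    // readers run concurrently
  try
    Result := FTree.Search(Key);  // unmodified sequential search
  finally
    FLock.RUnlock;
  end;
end;

procedure TConcurrentAVLTree.Insert(const Key: Integer; const Value: Pointer);
begin
  FLock.WLock;                    // the writer excludes readers and other writers
  try
    FTree.Insert(Key, Value);     // rebalancing can move pointers safely here
  finally
    FLock.WUnlock;
  end;
end;

end.

With the optimistic mechanisms you cannot do this kind of wrapping from
the outside; you would have to go inside the tree code itself.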

I have thought more about concurrent data structures, and I think they
will scale well on NUMA architectures, because with concurrent AVL
trees, concurrent Red-Black trees, and concurrent Skiplists, the
accesses to the different nodes allocated on the different NUMA nodes
will be spread randomly, and I think this will give you a good result on
a NUMA architecture. What is my proof? Imagine that you have 32 cores
and one NUMA node for each 4 cores, that means 8 NUMA nodes in total,
and that you allocate your tree nodes across the different NUMA nodes.
When 32 threads on 32 cores access those concurrent data structures,
they will do it in a probabilistic way: each access hits a given NUMA
node with probability 1/8 (1 over 8 NUMA nodes), so on average I think
about 4 threads will be contending on each NUMA node, and from Amdahl's
law this will scale on average to 8X on 8 NUMA nodes, and that's really
good! My reasoning also holds for more NUMA nodes, so it will scale on
more NUMA nodes, so we are safe!
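
To make the arithmetic explicit, here is a tiny back-of-the-envelope
program; the numbers are the ones from the reasoning above (32 cores, 8
NUMA nodes), not measurements:

{ Back-of-the-envelope arithmetic for the reasoning above: with the tree
  nodes spread uniformly over the NUMA nodes, each access hits a given
  NUMA node with probability 1/8, so on average 32/8 = 4 threads contend
  on the same NUMA node at any instant. }
program NumaEstimate;

const
  Cores     = 32;
  NumaNodes = 8;    // one NUMA node for every 4 cores

begin
  WriteLn('Probability that an access hits a given NUMA node: 1/', NumaNodes,
          ' = ', 1.0 / NumaNodes:0:3);
  WriteLn('Average number of threads contending on one NUMA node: ',
          Cores / NumaNodes:0:1);
  WriteLn('Number of NUMA nodes working in parallel: ', NumaNodes);
end.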

Other than that, I have done a scalability prediction for the following
distributed reader-writer mutex:

https://sites.google.com/site/aminer68/scalable-distributed-reader-wr...

As you will notice, I am using an atomic "lock add" assembler
instruction that is executed only by the threads that belong to the same
core, which makes it less expensive. I have benchmarked it and noticed
that it takes 20 CPU cycles on x86, so it is not that expensive. And I
have done a scalability prediction using this distributed reader-writer
mutex with a concurrent AVL tree and a concurrent Red-Black tree, and it
gives 50X scalability on a NUMA architecture when used in a
client-server way, precisely because the "lock add" instruction that is
executed only by the threads belonging to the same core takes only 20
CPU cycles on x86.
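
For the curious, here is a rough sketch of the general idea behind a
distributed reader-writer mutex. It is not the actual code from the link
above: the slot selection (mapping a thread to its core) and the memory
barriers are simplified away, and a real lock would yield or pause
instead of busy-waiting. The point it illustrates is that each core gets
its own cache-line padded reader counter, so a reader's atomic "lock
add" only touches a cache line that stays local to that core, which is
what keeps the read side so cheap.

{ Rough sketch of a distributed reader-writer mutex: one cache-line
  padded reader counter per core slot. NOT the actual algorithm from the
  link above. Uses the Interlocked* routines from the FPC System unit
  (the Delphi equivalents work the same way). }
unit DistRWLockSketch;

{$ifdef FPC}{$mode delphi}{$endif}

interface

const
  MaxSlots      = 64;   // upper bound on the number of core slots (illustrative)
  CacheLineSize = 64;

type
  TPaddedCounter = record
    Count: LongInt;
    Pad:   array[1..CacheLineSize - SizeOf(LongInt)] of Byte;  // avoid false sharing
  end;

  TDistRWLock = class
  private
    FReaders: array[0..MaxSlots - 1] of TPaddedCounter;
    FWriter:  LongInt;   // 0 = no writer, 1 = a writer holds the lock
  public
    procedure RLock(Slot: Integer);     // Slot = the calling thread's core slot
    procedure RUnlock(Slot: Integer);
    procedure WLock;
    procedure WUnlock;
  end;

implementation

procedure TDistRWLock.RLock(Slot: Integer);
begin
  repeat
    InterlockedIncrement(FReaders[Slot].Count);   // "lock add" on a core-local line
    if FWriter = 0 then
      Exit;                                       // no writer: the reader is in
    InterlockedDecrement(FReaders[Slot].Count);   // back off while a writer runs
    while FWriter <> 0 do
      ;                                           // spin until the writer leaves
  until False;
end;

procedure TDistRWLock.RUnlock(Slot: Integer);
begin
  InterlockedDecrement(FReaders[Slot].Count);
end;

procedure TDistRWLock.WLock;
var
  i: Integer;
begin
  // Only one writer at a time, then wait for every per-slot reader count to drain.
  while InterlockedCompareExchange(FWriter, 1, 0) <> 0 do
    ;
  for i := 0 to MaxSlots - 1 do
    while FReaders[i].Count <> 0 do
      ;
end;

procedure TDistRWLock.WUnlock;
begin
  InterlockedExchange(FWriter, 0);
end;

end.

The trade-off is clear: a reader only touches its own slot's cache line,
while the writer has to scan all the slots, which is exactly what makes
this kind of lock attractive for read-mostly workloads.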

I have finished porting a beautiful Skiplist algorithm to FreePascal and
Delphi, and I am turning it into a concurrent Skiplist using the
distributed reader-writer mutex that I talked about above. From my
benchmarks and from some calculations with Amdahl's law, this concurrent
Skiplist that I am implementing will scale to 100X in read-mostly
scenarios on a NUMA architecture when it is used in a client-server
manner using threads, and that's good.
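
For reference, this is the Amdahl's law formula I use for such
predictions: Speedup(N) = 1 / ((1 - P) + P / N), where P is the parallel
(read-side) fraction and N the number of threads. The sample values of P
below are only illustrative, not my measured numbers, but they show that
reaching 100X requires the serialized fraction (1 - P) to stay at or
below 1%.

{ Amdahl's law: Speedup(N) = 1 / ((1 - P) + P / N), with P the parallel
  (read-side) fraction and N the number of threads. The values of P
  below are illustrative, not measurements. }
program AmdahlCheck;

function AmdahlSpeedup(P: Double; N: Integer): Double;
begin
  AmdahlSpeedup := 1.0 / ((1.0 - P) + P / N);
end;

begin
  WriteLn('P = 0.990, N = 64   -> ', AmdahlSpeedup(0.990, 64):0:1, 'X');
  WriteLn('P = 0.990, N -> inf -> ', 1.0 / (1.0 - 0.990):0:1, 'X  (ceiling = 1/(1-P))');
  WriteLn('P = 0.999, N = 128  -> ', AmdahlSpeedup(0.999, 128):0:1, 'X');
end.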




Thank you,
Amine Moulay Ramdane.