I have several datasets, each consisting of:

- Number of threads, n
- Start process time, t1
- Stop process time, t2
- Operations processed, x
So each line of the dataset means: n threads processed x operations in t2 - t1 time.
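
For reference, this is roughly how I hold the data in memory; the column layout and the sample rows are placeholders I made up, not real measurements:

```python
import numpy as np

# Each row: threads n, start time t1, stop time t2, operations processed x.
# Placeholder rows for illustration only.
runs = np.array([
    [1, 0.0, 10.0, 1000],
    [2, 0.0,  5.6, 1000],
    [4, 0.0,  3.1, 1000],
])
n, t1, t2, x = runs.T
elapsed = t2 - t1   # time each run took with n threads
```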
When more threads are added, the processing time is reduced, because they run in parallel. However, the threads take locks against each other, so the total time is not (t2 - t1)/n, but a bit more.
I would like to infer the time a process will take to perform x operations as a function of the number of threads: some sort of "parallel factor" so that t = x * n * factor gives me the estimated time. How could I achieve this?
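
To show what I mean by inferring the factor, here is a minimal sketch of the kind of fit I imagine, using scipy.optimize.curve_fit; the array names and the sample numbers are placeholders, not my real data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder measurements: threads, operations processed, elapsed time t2 - t1.
n = np.array([1.0, 2.0, 4.0])
x = np.array([1000.0, 1000.0, 1000.0])
t = np.array([10.0, 5.6, 3.1])

def model(data, factor):
    threads, ops = data
    # my proposed relation: t = x * n * factor
    return ops * threads * factor

(factor,), _ = curve_fit(model, (n, x), t)
print("fitted factor:", factor)
print("predicted times:", model((n, x), factor))
```

I am not sure this single constant factor is the right model, given that the lock overhead seems to depend on the number of threads; that is exactly what I would like advice on.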