I am writing a report that allows my employer to rank employees by productivity and fault rate. Employee ranks are based on productivity (weighted 25%), supervisor input (weighted 25%), and fault rate (weighted 50%). Catching a fault awards one point, causing a minor fault takes one point away, and causing a major fault takes three points away. Using this point total and the number of hours tracked, I want to get a "faults per hour" figure that is always positive, then sum it with the rest of the score to get a value with a maximum of roughly 100 that I can use to rank employees.
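To make the weighting concrete, here's a rough Python sketch of how I picture the composite score coming together (the function and variable names are just mine, and I'm assuming productivity and supervisor input are already on a 0-100 scale):

```python
# Rough sketch of the composite score (names are mine, not final).
# Assumes productivity and supervisor input already come in as 0-100 scores.
def composite_score(productivity, supervisor, fault_component):
    """Weight the three parts: 25% productivity, 25% supervisor input, 50% fault rate."""
    return 0.25 * productivity + 0.25 * supervisor + 0.50 * fault_component

# Example: a strong performer with a clean fault record lands near 100.
print(composite_score(productivity=90, supervisor=95, fault_component=100))  # 96.25
```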
How do I properly calculate the fault rate per hour? The method I am considering is adding the score to 100 and then weighting it according to time spent at the station. The issue I'm having is that I compare the final score to the average score at that station: how can I keep the station's average score reasonable for comparison if I'm adding 100 to the results?
So I guess I'm doing this:
((x - 3y - z + 100) / a) * b, where [x] is the number of faults they have caught, [y] is the number of major faults they have caused, [z] is the number of minor faults they have caused, 100 is added to keep the result positive, [a] is the number of hours spent at the station, and [b] is the hours spent at this station as a fraction of their total hours.
I'm dividing it by [a] because I need an hourly rate.
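Here's the same formula as a quick Python sketch so the order of operations is explicit (the names are mine and the numbers in the example are made up):

```python
# Sketch of the fault formula I'm considering: ((x - 3y - z + 100) / a) * b
# caught = faults caught, major = major faults caused, minor = minor faults caused,
# hours_at_station = hours at this station, share_of_total = fraction of total hours here.
def fault_component(caught, major, minor, hours_at_station, share_of_total):
    points = caught - 3 * major - minor      # +1 per catch, -1 per minor, -3 per major
    shifted = points + 100                   # shift so the result stays positive
    hourly = shifted / hours_at_station      # turn it into an hourly rate
    return hourly * share_of_total           # weight by time spent at this station

# Example: 12 caught, 1 major, 2 minor over 40 hours, which is half their total time.
print(fault_component(12, 1, 2, 40, 0.5))  # (12 - 3 - 2 + 100) / 40 * 0.5 = 1.3375
```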
Since the sample I'm scoring for each person is 6 weeks and the baseline I'm comparing it against covers about 6 months (roughly four 6-week periods), maybe I could just sum the baseline together and add (100 * 4) instead of 100, because the baseline window is about 4 times the period size? These numbers don't have to be absolutely perfect because they will never be exactly equal. I'm just looking to compare a 6-week sample of a person's performance against a 6-month average across everyone.
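If I went that route, the baseline side would look something like this (again just a sketch with made-up numbers, scaling the +100 offset by the number of 6-week periods in the baseline window):

```python
# Sketch of the baseline idea above: the station average covers roughly four
# 6-week periods, so add 100 * 4 instead of 100 before dividing by hours.
def station_baseline(caught, major, minor, hours, periods=4):
    points = caught - 3 * major - minor   # summed over the whole ~6-month window
    shifted = points + 100 * periods      # scale the +100 offset by the number of periods
    return shifted / hours                # hourly rate for the station as a whole

# Example: station totals over ~6 months (numbers made up for illustration).
print(station_baseline(caught=60, major=5, minor=12, hours=160))  # (60 - 15 - 12 + 400) / 160 = 2.70625
```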