
I'm not even sure how to ask this question, so bear with me for a second.

Given a linear input value, such as floating point numbers between 0 and 1, how can I produce an output that favors higher input values?

For example, let's say I have a function that generates probability values between 0 and 1, where 0 is a definite NO and 1 is a definite YES. Between 0 and 1, anything above 0.9 is a likely YES and anything below 0.5 is a likely NO. Now, let's say I want to produce an output value based on those probabilities that favors (places more weight on) the higher end of the scale, and that the output should be an integer between 0 and 255.

So, a manual example goes like this:

prob 0.00 = output 0
prob 0.50 = output 50
prob 0.75 = output 130
prob 0.90 = output 150
prob 0.92 = output 155
prob 0.94 = output 160
prob 0.96 = output 175
prob 0.98 = output 200
prob 1.00 = output 255

These exact outputs aren't important in themselves; I'm simply trying to illustrate the concept: that the majority of the available output numbers are reached by probabilities between 0.9 and 1.0.

Is this approximating a logarithmic scale? Something else? Is there an easy way to calculate this kind of output (something close to it in concept, not in precision)?

thanks!

  • $f(x) = 255x^2$ (comment, 2012-03-11)

1 Answer


The answer (given in the comments) is to use a polynomial function such as $255x^2$ (or $255 x^n$ for other values of $n$). Other nonlinear functions, such as $\exp$, can also be used, but polynomials are very easy to evaluate. A related concept is the sigmoid function.
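As a rough sketch (assuming Python; the helper name `prob_to_output` and the exponent values are chosen purely for illustration, not taken from the answer), the polynomial mapping can be computed like this:

```python
# Minimal sketch of the polynomial mapping from the answer: f(x) = 255 * x**n.
# The exponent n is a tuning knob (an assumption here): larger n concentrates
# more of the 0-255 output range near probability 1.0.

def prob_to_output(p: float, n: float = 3.0) -> int:
    """Map a probability in [0, 1] to an integer in [0, 255], favoring high p."""
    return round(255 * p ** n)

if __name__ == "__main__":
    for p in (0.00, 0.50, 0.75, 0.90, 0.95, 1.00):
        print(f"prob {p:.2f} -> output {prob_to_output(p):3d}")
```

Tuning $n$ (or switching to an exponential or sigmoid-shaped curve) changes how aggressively the output favors the high end; the polynomial form is simply the easiest to evaluate.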