Even more predictive models

In the last post, I presented a comparison of different ways of doing prediction. A natural follow-up question is whether there are even better functions. I observed in the last post that a straight line performs better than the logistic function, which has a downward hump, so a natural set of functions to explore is those with an upward hump. First, let's plot the two probability functions from the previous post along with several new ones (note that the color scheme is different):

[Figure: Even more link probability functions]

It's worth mentioning that I've parameterized them so that the endpoints all line up. Let's describe them and see how well they do:
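To make the endpoint constraint concrete, here is a sketch of how the parameters of \psi_b(x) = b - \exp(-c x + d) can be solved for so that \psi_b(0) and \psi_b(1) hit prescribed endpoint probabilities. The endpoint values \pi_0 and \pi_1 below are placeholders, not the values from the previous post:

```python
import math

# Hypothetical endpoint probabilities; the real values would come from
# the fitted model in the previous post.
pi0, pi1 = 0.05, 0.6

def psi_b_params(b, pi0, pi1):
    """Solve for c, d in psi_b(x) = b - exp(-c*x + d) so that
    psi_b(0) = pi0 and psi_b(1) = pi1. Requires b > max(pi0, pi1)."""
    d = math.log(b - pi0)       # from b - exp(d) = pi0
    c = d - math.log(b - pi1)   # from b - exp(-c + d) = pi1
    return c, d

def psi_b(x, b, c, d):
    return b - math.exp(-c * x + d)

c, d = psi_b_params(1.0, pi0, pi1)
print(c, d)
```

The same two-equation procedure works for the other families below: fix the two endpoint values and solve for two of the free parameters, leaving the remaining parameter (e.g. b itself) to control the shape of the hump.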

| Function | Label | Predictive likelihood |
| --- | --- | --- |
| b - \exp(-c x + d) | \psi_b, b = 1 | -11.4085 |
| | \psi_b, b = 0.8 | -11.4020 |
| | \psi_b, b = 0.75 | -11.4003 |
| \pi_0 + (\pi_1 - \pi_0)\delta_{>r}(x) | threshold | -11.7929 |
| a x^2 - 2 a x + c | \psi_q | -11.3968 |
| u \exp(-(x - 1)^2 / v^2) | \psi_g | -11.4704 |
| u + w \exp(-\log(x)^2 / 2) | \psi_l | -11.3747 |

As it happens, we actually do better by making the hump protrude upwards (i.e., by making the function concave). \psi_b, \psi_q, and \psi_l all perform better than the previously proposed functions; \psi_q is appealing since it has a simple quadratic form. In other words, it could be a second-order approximation of something, but of what? \psi_l does a little better still. Maybe we're looking at posterior normality of log-likelihoods? If I knew the answer to that question, I think I'd be able to get a grip on a function that is actually justifiably better…
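For readers who want to reproduce this kind of comparison, a predictive (log-)likelihood for a candidate link probability function is just a sum of Bernoulli log-probabilities over held-out observations. The sketch below uses \psi_q with illustrative parameter values and toy data, none of which come from the post:

```python
import math

def predictive_log_likelihood(psi, xs, ys):
    """Sum of Bernoulli log-likelihoods: each y in {0, 1} is a link
    indicator and psi(x) is its predicted probability."""
    return sum(y * math.log(psi(x)) + (1 - y) * math.log(1 - psi(x))
               for x, y in zip(xs, ys))

# Toy held-out data (purely illustrative).
xs = [0.1, 0.4, 0.7, 0.9]
ys = [0, 0, 1, 1]

# psi_q(x) = a*x^2 - 2*a*x + c, the concave quadratic from the table;
# a < 0 gives the upward hump, and these parameter values are made up.
a, c = -0.5, 0.1
psi_q = lambda x: a * x**2 - 2 * a * x + c

ll = predictive_log_likelihood(psi_q, xs, ys)
print(ll)
```

Comparing the candidate functions then amounts to evaluating this sum for each \psi on the same held-out set, which is presumably how the table above was produced.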

