# Building a hurdle regression estimator in scikit-learn

### What are hurdle models?

> The hurdle model is a two-part model that specifies one process for zero counts and another process for positive counts. The idea is that positive counts occur once a threshold is crossed, or put another way, a hurdle is cleared.
>
> — *Getting started with hurdle models*, University of Virginia Library

### What are hurdle models useful for?

Many statistical learning models—particularly linear models—assume some level of normality in the response variable being predicted. If we have a dataset with a heavily skewed response or one which contains extreme outliers, it is a common practice to apply something like a Box-Cox power transformation before fitting.
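
As a quick sketch of what that looks like in practice (the log-normal data here is synthetic, chosen only because it is strictly positive and heavily skewed), scikit-learn's `PowerTransformer` applies a Box-Cox transform and standardizes the result:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
# A heavily right-skewed, strictly positive response (Box-Cox requires positive data)
y = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 1))

pt = PowerTransformer(method='box-cox')  # standardize=True by default
y_transformed = pt.fit_transform(y)
# y_transformed is now roughly symmetric with zero mean and unit variance
```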

But what do you do if you come across a clearly multi-modal distribution like the one below? Applying a power transform here will just change the scale of the variable; it won't help with the fact that there is a huge spike of values at zero. Multi-modality is a good indicator that we are over-aggregating data which belong to two or more distinct underlying data generation processes.

Distributions like this are commonly seen when analyzing composite variables such as insurance claims, where some large proportion of values are zero, but the non-zero values take on a distribution of their own. Breaking down these sorts of distributions into their component parts allows us to more effectively model each piece and then recombine them at a later stage.

In the toy example above we have two underlying processes: Does a customer come back? If so, how many purchases does he or she make? The first is modeled as a binomial random variable (coin flip) and the second as a $\text{Pois}(\lambda=4)$ random variable, which represents discrete event counts.
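
This toy distribution can be simulated in a few lines of NumPy (the parameters match those above; the sample size and the 50/50 return probability are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

comes_back = rng.binomial(n=1, p=0.5, size=n)  # coin flip: does the customer return?
purchases = rng.poisson(lam=4, size=n)         # Pois(lambda=4) purchase counts
y = comes_back * purchases                     # zero unless the hurdle is cleared
```

Roughly half of `y` is exactly zero (plus the occasional zero from the Poisson itself), while the rest follows the Poisson bump around 4 — the spike-plus-bump shape described above.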

### How can I implement a hurdle model?

So we want to fit two sub-models, have each make predictions, and then multiply those predictions together:

1. A classifier, trained and tested on all of our data.
2. A regressor, trained only on the positive samples, but used to make predictions on all test data.

The most straightforward way to achieve this would be to train two separate models, make predictions on the same test dataset, and multiply the predictions together before evaluating. However, with this approach we lose the ability to interface our model with the rest of the scikit-learn ecosystem, including passing it into GridSearchCV or evaluation functions such as cross_val_predict.
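
For reference, that straightforward two-model version might look like this (the synthetic hurdle-style data below is illustrative, not from the original example):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
# Hurdle-style target: a coin flip gates a Poisson count
y = rng.binomial(1, 0.5, size=2000) * rng.poisson(lam=4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Classifier on all training data, targeting "is the response positive?"
clf = LogisticRegression().fit(X_train, y_train > 0)
# 2. Regressor on the positive training samples only
reg = LinearRegression().fit(X_train[y_train > 0], y_train[y_train > 0])

# Combine by multiplying the two sets of test predictions
y_pred = clf.predict(X_test) * reg.predict(X_test)
```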

A better approach is to implement our hurdle model as a valid scikit-learn estimator object by extending from the provided BaseEstimator class.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.base import BaseEstimator
from sklearn.utils.estimator_checks import check_estimator
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted
from lightgbm import LGBMClassifier, LGBMRegressor


class HurdleRegression(BaseEstimator):

    def __init__(self,
                 clf_name='logistic',
                 reg_name='linear',
                 clf_params=None,
                 reg_params=None):
        self.clf_name = clf_name
        self.reg_name = reg_name
        self.clf_params = clf_params
        self.reg_params = reg_params

    @staticmethod
    def _resolve_estimator(func_name):
        """Lookup table for supported estimators.
        This is necessary because sklearn estimator default arguments
        must pass an equality test, and instantiated sub-estimators are not equal."""
        funcs = {'linear': LinearRegression(),
                 'logistic': LogisticRegression(solver='liblinear'),
                 'LGBMRegressor': LGBMRegressor(n_estimators=50),
                 'LGBMClassifier': LGBMClassifier(n_estimators=50)}
        return funcs[func_name]

    def fit(self, X, y):
        X, y = check_X_y(X, y,
                         dtype=None,
                         accept_sparse=False,
                         accept_large_sparse=False,
                         force_all_finite='allow-nan')

        if X.shape[1] < 2:
            raise ValueError('Cannot fit model when n_features = 1')

        if len(set(y > 0)) != 2:
            raise ValueError('Cannot fit data when y only has one class')

        # Classifier: predicts whether the hurdle is cleared (y > 0)
        self.clf_ = self._resolve_estimator(self.clf_name)
        if self.clf_params:
            self.clf_.set_params(**self.clf_params)
        self.clf_.fit(X, y > 0)

        # Regressor: predicts the magnitude, trained on positive samples only
        self.reg_ = self._resolve_estimator(self.reg_name)
        if self.reg_params:
            self.reg_.set_params(**self.reg_params)
        self.reg_.fit(X[y > 0], y[y > 0])

        self.is_fitted_ = True
        return self

    def predict(self, X):
        """Predict combined response using the binary classification outcome."""
        X = check_array(X, accept_sparse=False, accept_large_sparse=False)
        check_is_fitted(self, 'is_fitted_')
        return self.clf_.predict(X) * self.reg_.predict(X)

    def predict_expected_value(self, X):
        """Predict combined response using the classifier's predicted probability."""
        X = check_array(X, accept_sparse=False, accept_large_sparse=False)
        check_is_fitted(self, 'is_fitted_')
        return self.clf_.predict_proba(X)[:, 1] * self.reg_.predict(X)


if __name__ == '__main__':
    check_estimator(HurdleRegression())
```

### Making it a valid Scikit-Learn estimator

The code snippet above may feel like it is longer than it needs to be. This is primarily because I tried to write it as a valid scikit-learn estimator, which I learned involves jumping through a few hoops so that it is compatible with other sklearn functions, including:

1. Init variables must each be of a data type which evaluates as equal when compared with another copy of itself. This is necessary because sklearn clones estimators behind the scenes to do parallel processing in functions such as GridSearchCV. Primitive data types (e.g. 'yo' == 'yo' and 42 == 42) pass this test, but already-instantiated estimators passed in as sub-models do not. Because of this, I pass the model type as a string, then use the _resolve_estimator method to instantiate the actual estimator.
2. The fit method returns the estimator itself, to enable method chaining.
3. The attribute self.is_fitted_ is set by the .fit() method and then checked by .predict().
4. Any input is validated using the check_array() function before being fit or predicted.
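
Point 1 can be seen in a minimal sketch (`TinyHurdle` here is a hypothetical stand-in, not the full class above): string-valued init params round-trip cleanly through sklearn's `clone`, which is what GridSearchCV relies on.

```python
from sklearn.base import BaseEstimator, clone


class TinyHurdle(BaseEstimator):
    """Hypothetical stand-in: init params are stored verbatim as strings."""
    def __init__(self, clf_name='logistic', reg_name='linear'):
        self.clf_name = clf_name  # stored exactly as passed, no transformation
        self.reg_name = reg_name


est = TinyHurdle(clf_name='LGBMClassifier')
copied = clone(est)
# clone() reconstructs the estimator from get_params(); the string
# params compare equal, so the clone is a faithful, unfitted copy
assert copied.get_params() == est.get_params()
```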

Scikit-learn provides a check_estimator function which runs a battery of automated tests against your estimator. I learned most of these requirements above while attempting to pass these tests.

*Creating your own estimator in scikit-learn* – some additional concerns w.r.t. GridSearchCV