Logistic regression
- class equadratures.logistic_poly.LogisticPoly(n=2, M_init=None, tol=1e-07, cauchy_tol=1e-05, cauchy_length=3, verbosity=2, order=2, C=1.0, max_M_iters=10, restarts=1)
Class for defining a logistic subspace polynomial, used for classification tasks.
- Parameters
n (optional, int) – Dimension of subspace (should be smaller than ambient input dimension d). Defaults to 2.
M_init (optional, numpy array of dimensions (d, n)) – Initial guess for the subspace matrix; see the sketch after this parameter list. Defaults to a random projection.
tol (optional, float) – Optimisation terminates when the cost function on the training data falls below tol. Defaults to 1e-7.
cauchy_tol (optional, float) – Optimisation terminates when the difference between the average of the last cauchy_length cost function evaluations and the current cost is below cauchy_tol times the current evaluation. Defaults to 1e-5.
cauchy_length (optional, int) – Length of the comparison history for Cauchy convergence. Defaults to 3.
verbosity (optional, one of (0, 1, 2)) – Print debug messages during optimisation. 0 for no messages, 1 for printing the final residual every restart, 2 for printing residuals at every iteration. Defaults to 2.
order (optional, int) – Maximum order of subspace polynomial used. Defaults to 2.
C (optional, float) – L2 penalty on coefficients. Defaults to 1.0.
max_M_iters (optional, int) – Maximum optimisation iterations per restart. Defaults to 10.
restarts (optional, int) – Number of times to restart the optimisation. The result with the lowest training error is kept. Defaults to 1.
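For instance, an explicit starting subspace can be passed through M_init. The snippet below is a minimal sketch, assuming an ambient dimension d = 10 and an orthonormal initialisation via a QR decomposition (both assumptions for illustration; by default the class draws a random projection itself).

>>> import numpy as np
>>> import equadratures as eq
>>> d, n = 10, 2                      # assumed ambient and subspace dimensions
>>> A = np.random.randn(d, n)         # random Gaussian matrix
>>> M0, _ = np.linalg.qr(A)           # orthonormalise columns to obtain a (d, n) initial guess
>>> log_quad = eq.LogisticPoly(n=n, M_init=M0, order=2, verbosity=0)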
Examples
- Fitting and testing a logistic polynomial on a dataset.
>>> import numpy as np
>>> import equadratures as eq
>>> log_quad = eq.LogisticPoly(n=1, cauchy_tol=1e-5, verbosity=0, order=p_order, max_M_iters=100, C=0.001)
>>> log_quad.fit(X_train, y_train)
>>> prediction = log_quad.predict(X_test)
>>> error_rate = np.sum(np.abs(np.round(prediction) - y_test)) / y_test.shape[0]
>>> print(error_rate)
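To run the example above end to end, the training and test arrays could be built from a toy dataset such as the one sketched below; the synthetic labelling rule, the 80/20 split and the choice p_order = 2 are assumptions for illustration only, not part of the library.

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> X = rng.uniform(-1, 1, size=(500, 5))         # 500 samples in a 5-dimensional ambient space
>>> w = np.array([1.0, -0.5, 0.0, 0.0, 0.0])      # labels depend on a single direction (1-d subspace)
>>> y = (X @ w > 0).astype(float)
>>> X_train, X_test = X[:400], X[400:]            # simple 80/20 split
>>> y_train, y_test = y[:400], y[400:]
>>> p_order = 2                                   # maximum order of the subspace polynomial

With these arrays defined, the example above fits a one-dimensional logistic subspace polynomial and prints the misclassification rate on the held-out points.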