Meta-learning

{| class="wikitable"
! Meta-feature !! Formula !! Rationale !! Variants
|-
| Gravity || gravity(X) || Inter-class dispersion <ref>Shawkat Ali and Kate A. Smith-Miles. On learning algorithm selection for classification. Applied Soft Computing, 2006.</ref> ||
|-
| ANOVA p-value || $p_{val_{X_{1}X_{2}}}$ || Feature redundancy || $p_{val_{XY}}$ <ref>C. Soares, P. Brazdil, and P. Kuba. A meta-learning method to select the kernel width in support vector regression, 2004.</ref>
|-
| Coeff. of variation || $\frac{\sigma_{Y}}{\mu_{Y}}$ || Variation in target <ref>C. Soares, P. Brazdil, and P. Kuba. A meta-learning method to select the kernel width in support vector regression, 2004.</ref> ||
|-
| PCA $\rho_{\lambda_{1}}$ || $\sqrt{\frac{\lambda_{1}}{1+\lambda_{1}}}$ || Variance in first PC || $\frac{\lambda_{1}}{\sum_{i} \lambda_{i}}$ <ref>[https://www1.maths.leeds.ac.uk/~charles/statlog/whole.pdf]</ref>
|-
| PCA skewness || || Skewness of first PC <ref>M. Feurer, J. T. Springenberg, and F. Hutter. Using meta-learning to initialize Bayesian optimization of hyperparameters, 2014.</ref> || PCA kurtosis
|-
| PCA 95\% || $\frac{dim_{95\% var}}{p}$ || Intrinsic dimensionality <ref>Rémi Bardenet, Mátyás Brendel, Balázs Kégl, and Michèle Sebag. Collaborative hyperparameter tuning. In Proceedings of ICML 2013, pages 199–207, 2013.</ref> ||
|-
| Information gain || || Feature importance || min, max, $\mu$, $\sigma$, gini
|-
| colspan="4" align="center" | '''Landmarks'''
|-
| Landmarker(1NN) || $P(\theta_{1NN},t_{j})$ || Data sparsity <ref>Bernhard Pfahringer, Hilan Bensusan, and Christophe G. Giraud-Carrier. Meta-learning by landmarking various learning algorithms. In 17th International Conference on Machine Learning (ICML), pages 743–750, 2000.</ref> ||
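|}

The meta-features in the table above are cheap to compute directly from a dataset. Below is a minimal Python sketch of a few of them, assuming a numeric feature matrix X (n samples × p features) and a label vector y; the use of numpy/scikit-learn and all function names are illustrative choices, not part of any fixed API. For the PCA feature the sketch computes the variant $\frac{\lambda_{1}}{\sum_{i} \lambda_{i}}$ rather than $\sqrt{\frac{\lambda_{1}}{1+\lambda_{1}}}$, and the landmarker $P(\theta_{1NN},t_{j})$ is estimated by cross-validated accuracy of a 1-nearest-neighbour classifier.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative, not from the original article) of a few meta-features
# from the table above, assuming a numeric X (n x p) and labels y.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier


def coefficient_of_variation(y):
    """Coeff. of variation sigma_Y / mu_Y: variation in the target."""
    return np.std(y) / np.mean(y)


def pca_meta_features(X):
    """Fraction of variance in the first PC (lambda_1 / sum_i lambda_i)
    and intrinsic dimensionality dim_{95% var} / p."""
    ratios = PCA().fit(X).explained_variance_ratio_
    first_pc_var = ratios[0]
    # number of components needed to explain 95% of the variance, divided by p
    dim_95 = int(np.searchsorted(np.cumsum(ratios), 0.95)) + 1
    return first_pc_var, dim_95 / X.shape[1]


def landmarker_1nn(X, y, cv=5):
    """Landmarker(1NN): cross-validated accuracy of a 1-nearest-neighbour
    classifier, used as a cheap probe of data sparsity."""
    return cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=cv).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] > 0).astype(int)
    print("CoV(y):          ", coefficient_of_variation(y))
    print("PCA rho, PCA 95%:", pca_meta_features(X))
    print("Landmarker(1NN): ", landmarker_1nn(X, y))
</syntaxhighlight>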