Notes on One-Hot Encoding Categorical Variables


In many machine learning models, the usual treatment of categorical variables is to one-hot encode them. But for tree models, does one-hot encoding categorical variables actually make sense? Some machine learning toolkits (e.g., the Spark GBM implementation) let you mark a feature as categorical and handle the encoding internally, while xgboost simply treats every input feature as numerical.

1. Should you one-hot encode?

This is debated even within the machine learning community. In theory, a tree model that grows deep enough can still carve out the important categorical features on its own.

On this point, xgboost's author tqchen commented in a GitHub issue:

I do not know what you mean by vector. xgboost treat every input feature as numerical, with support for missing values and sparsity. The decision is at the user

So if you want ordered variables, you can transform the variables into numerical levels(say age). Or if you prefer treat it as categorical variable, do one hot encoding.

He made a similar point in another issue (tqchen commented on 8 May 2015):

One-hot encoding could be helpful when the number of categories are small( in level of 10 to 100). In such case one-hot encoding can discover interesting interactions like (gender=male) AND (job = teacher).

While ordering them makes it harder to be discovered(need two split on job). However, indeed there is not a unified way handling categorical features in trees, and usually what tree was really good at was ordered continuous features anyway..

To summarize, there are roughly two takeaways:

  • 1. Ordered categorical variables, such as age, can simply be treated as numerical. For unordered categorical variables, one-hot encoding is recommended, though it adds memory and training-time overhead.
  • 2. One-hot encoding is recommended when the number of categories is small (tqchen's estimate: on the order of 10 to 100), since a single split can then expose interactions such as (gender=male) AND (job=teacher); a short sketch follows this list.
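Below is a minimal sketch of these two treatments, assuming a pandas-based pipeline (the column names age_bucket and job are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    # Ordered categories: keep them as numerical levels, so one split
    # (e.g. age_bucket <= 1) already separates "young" from "old".
    "age_bucket": [0, 1, 2, 1],
    # Unordered categories: one-hot encode them.
    "job": ["teacher", "doctor", "teacher", "engineer"],
})

# One-hot encode only the unordered column; the tree can now test
# (job == teacher) with a single split on job_teacher.
encoded = pd.get_dummies(df, columns=["job"], dtype=int)
print(encoded)
#    age_bucket  job_doctor  job_engineer  job_teacher
# 0           0           0             0            1
# 1           1           1             0            0
# 2           2           0             0            1
# 3           1           0             1            0
```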

2. Consistency of one-hot encoding

When you one-hot encode, some machine learning toolkits ship built-in encoders that handle this step for you. In practice, though, the data comes split into several sets: a training set, a test set, a prediction set (real-world data), and so on.

The categories observed in these sets do not necessarily agree. For example:

feature A takes the values a, b, c, d in the training set; a, b, d in the test set; and possibly b, c, e, f in the prediction set.

So what you need to do is apply a single, unified one-hot encoding across all of them, rather than fitting a separate encoding per set.
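A minimal sketch of such a unified encoding, assuming scikit-learn's OneHotEncoder (the letter values mirror the example above): the full vocabulary of feature A is declared once, so unseen values such as e and f are handled explicitly instead of crashing the pipeline or shifting the columns.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

train   = np.array([["a"], ["b"], ["c"], ["d"]])
test    = np.array([["a"], ["b"], ["d"]])
predict = np.array([["b"], ["c"], ["e"], ["f"]])

# Declare one vocabulary for feature A up front (equivalently: fit the
# encoder on the union of all splits). handle_unknown="ignore" encodes
# categories outside the vocabulary, like "e" and "f", as all-zero rows.
enc = OneHotEncoder(categories=[["a", "b", "c", "d"]], handle_unknown="ignore")
enc.fit(train)

print(enc.transform(test).toarray())     # columns always mean [a, b, c, d]
print(enc.transform(predict).toarray())  # rows with "e"/"f" come out all zeros
```

If your toolkit cannot take an explicit vocabulary, the equivalent trick is to fit the encoder once on the concatenation of all splits and then transform each split with that same fitted encoder.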
