631. UM - Game-Playing Strength of MCTS Variants | um-game-playing-strength-of-mcts-variants
Ha! With the top scores saturating, we had braced for another frustrating leaderboard shake-up. But we pinned our hopes on our ensemble submission, and it did not let us down: @sercanyesiloz and I earned our first gold medal and became Competition Masters!
I'll write a brief summary now and add details tomorrow.
We also added the adjusted advantages as features:
((pl.col("AdvantageP1") * pl.col("Completion")) + (pl.col("Drawishness") / 2)).alias("adv_p1_adj"),
((pl.col("AdvantageP2") * pl.col("Completion")) + (pl.col("Drawishness") / 2)).alias("adv_p2_adj"),
and we did not use any other features!
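The arithmetic behind the two polars expressions above can be sketched in plain Python (the column names come from the competition data; the values in the example row are purely illustrative):

```python
# Plain-Python sketch of the adjusted-advantage feature.
# AdvantageP1/AdvantageP2, Completion, and Drawishness are columns from the
# competition dataset; the numbers below are illustrative, not real data.
def adjusted_advantage(advantage: float, completion: float, drawishness: float) -> float:
    """Scale a player's advantage by game completion, then add half the drawishness."""
    return advantage * completion + drawishness / 2

row = {"AdvantageP1": 0.6, "AdvantageP2": 0.4, "Completion": 0.5, "Drawishness": 0.2}
adv_p1_adj = adjusted_advantage(row["AdvantageP1"], row["Completion"], row["Drawishness"])
adv_p2_adj = adjusted_advantage(row["AdvantageP2"], row["Completion"], row["Drawishness"])
```

The intuition: an advantage measured early in a game (low `Completion`) is discounted, and drawish games pull both players' adjusted advantages toward the middle.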
CatBoost parameters:
cb_params = {
    "random_state": 42,
    "iterations": 3000,
    "learning_rate": 0.085,
    "depth": 10,
    "verbose": 100,
    "use_best_model": False,
    "task_type": "GPU",
    "l2_leaf_reg": 0.0,
    "border_count": 254,
    "objective": "RMSE",
    "loss_function": "RMSE",
    "eval_metric": "RMSE",
}
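All of the configs in this post optimize RMSE. As a quick reminder of what that metric is, here is a minimal implementation (not the library code, just the formula):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: the square root of the mean of squared residuals."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Example: residuals of 1 and 3 give sqrt((1 + 9) / 2) = sqrt(5) ~ 2.236
score = rmse([0.0, 0.0], [1.0, 3.0])
```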
Neural network parameters:
nets = ['dnn_nets', 'fm_nets', 'cin_nets', 'ipnn_nets']
hidden_units = ((1024, 0.0, True), (512, 0.0, True), (256, 0.0, True), (128, 0.0, True))
embeddings_output_dim = 4
embedding_dropout = 0.1
apply_gbm_features = True
epochs = 7
learning_rate = 0.001
LightGBM parameters:
lgb_params = {
    'objective': 'regression',
    'min_child_samples': 24,
    'num_iterations': 13000,
    'learning_rate': 0.07,
    'extra_trees': True,
    'reg_lambda': 0.8,
    'reg_alpha': 0.1,
    'num_leaves': 64,
    'metric': 'rmse',
    'device': 'cpu',
    'max_depth': 24,
    'max_bin': 128,
    'verbose': -1,
    'seed': 42,
}
CatBoost parameters (second configuration):
ctb_params = {
    'loss_function': 'RMSE',
    'learning_rate': 0.03,
    'num_trees': 13000,
    'random_state': 42,
    'task_type': 'GPU',
    'border_count': 254,
    'reg_lambda': 0.8,
    'depth': 8,
}
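The final submission was an ensemble of these models. Our exact blending scheme isn't shown above; as a sketch, a simple weighted average of per-model predictions looks like this (the weights and prediction values below are hypothetical, purely for illustration):

```python
def blend(predictions, weights):
    """Weighted average of per-model prediction lists.

    `predictions` is a list of equal-length prediction lists, one per model;
    `weights` need not sum to 1, they are normalized here.
    """
    total = sum(weights)
    n = len(predictions[0])
    return [
        sum(w * preds[i] for preds, w in zip(predictions, weights)) / total
        for i in range(n)
    ]

# Hypothetical predictions for two test rows from three models, and made-up weights.
cb_pred = [0.2, 0.4]
lgb_pred = [0.3, 0.5]
nn_pred = [0.1, 0.6]
final = blend([cb_pred, lgb_pred, nn_pred], weights=[0.4, 0.4, 0.2])
```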