
Protein Structure Prediction Competition: TOP2 Team Solution


Start: 2020-02-13 · End: 2020-03-30 · Drug R&D · Data & Algorithm Competition

Protein Secondary Structure Prediction: A LightGBM Solution in Detail

Tree-model feature engineering and threshold optimization · Tianchi competition

Data Analysis

The training set contains the amino-acid sequences and secondary-structure sequences of 20,000 proteins. For each protein the two sequences have equal length; the 2 records where the lengths differ were simply removed during cleaning. Judging by the submission runtime, the test set holds roughly 6,000 sequences.

  • Amino-acid sequences contain 23 amino-acid types, represented by 23 distinct letters; 3 of these types occur very rarely.
  • Secondary-structure sequences contain 8 structure types, represented by 7 distinct letters plus the space character.
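The alphabet sizes above are easy to verify by scanning the raw files. A minimal sketch on made-up toy sequences (the file contents here are hypothetical stand-ins, not the competition data):

```python
from collections import Counter

# Toy stand-ins for the raw sequence files (the real ones hold 20,000 lines)
seqs = ['MKVL', 'GAVLK']    # hypothetical amino-acid sequences
secs = ['HH E', 'TT GG']    # hypothetical secondary-structure strings

aa_counts = Counter(''.join(seqs))   # per-letter frequencies; reveals the rare types
ss_alphabet = set(''.join(secs))     # distinct structure symbols, space included

# On the real data: 23 amino-acid letters, 8 structure symbols
print(sorted(aa_counts), sorted(ss_alphabet))
```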

Modeling Approach

Since an amino-acid sequence and its secondary-structure sequence have the same length, we treat each amino acid in a sequence as a sample and the secondary structure at the corresponding position as its label, turning the problem into multi-class classification.
The code below performs the preprocessing, converting the raw data into one sample per amino acid while preserving sequence order; an id column marks which sequence each amino acid belongs to.

```python
# Data preprocessing
import pandas as pd
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold, GroupKFold, train_test_split
from sklearn.metrics import roc_auc_score, f1_score, recall_score, precision_score, \
    accuracy_score, confusion_matrix
import seaborn as sns
import matplotlib.pylab as plt
import os
import gc
import time
from tqdm import tqdm

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_rows', 1000)
pd.set_option('display.max_columns', 1000)

path = 'D:/data/蛋白质结构预测/'
train_seq = pd.read_table(path+'data_seq_train.txt', header=None)
train_sec = pd.read_table(path+'data_sec_train.txt', header=None)
train = pd.DataFrame()
train['seq'] = train_seq[0]
train['sec'] = train_sec[0]
train['len_seq'] = train['seq'].map(len)
train['len_sec'] = train['sec'].map(len)
# Data cleaning: drop the 2 records whose sequence lengths do not match
train = train.drop(train.loc[train['len_seq'] != train['len_sec']].index)

# Flatten the sequences into one row per amino acid, tagging each row with its sequence id
seqs = ''
secs = ''
group = []
for i, seq in enumerate(train['seq']):
    seqs += seq
    group += [i] * len(seq)
for sec in train['sec']:
    secs += sec

data = pd.DataFrame({'amino_acid': list(seqs),
                     'secondary_structure': list(secs),
                     'id': group})
data.to_csv(path+'data.csv', index=None)
```

Model Selection

We chose LightGBM from the family of gradient-boosted tree models.

Feature Engineering

Feature engineering is crucial for tree models; we built the following features:

  • First, label-encode the amino acids and secondary structures as consecutive integers so they can be fed to the model.
  • Besides the amino acid at the current position, build shift features for the 20 positions before and after it, and feed all of them to the model as categorical features.
  • For the protein the current amino acid belongs to, use the count and ratio of each amino-acid type as features.
  • Over a window spanning the current position plus 10 positions on each side, count the occurrences of each amino-acid type.
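The shift and window-count features above can be sketched on a toy frame. Grouping by id ensures that shifts never leak across protein boundaries (the column names mirror the feature code, but the data here is made up, and a window of 3 stands in for the real window of 21):

```python
import pandas as pd

# Two toy "proteins": id marks sequence membership, values are encoded amino acids
df = pd.DataFrame({'id':         [0, 0, 0, 1, 1],
                   'amino_acid': [3, 1, 4, 1, 5]})

# Neighbour features: shift within each sequence, -1 for positions past the ends
df['shift_1'] = df.groupby('id')['amino_acid'].shift(1).fillna(-1).astype('int8')    # [-1, 3, 1, -1, 1]
df['shift_-1'] = df.groupby('id')['amino_acid'].shift(-1).fillna(-1).astype('int8')  # [1, 4, -1, 5, -1]

# Windowed count of amino acid 1 in a centred window of 3 (21 in the real code);
# windows that cross a sequence boundary come back as NaN and are filled with -1
df['is_1'] = (df['amino_acid'] == 1)
df['win_count_1'] = (df.groupby('id')['is_1'].rolling(3, center=True).sum()
                     .reset_index(drop=True).fillna(-1).astype('int8'))              # [-1, 1, -1, -1, -1]

print(df)
```

Note that the shift into protein 1 at row index 3 is -1, not the value 4 from protein 0: the groupby keeps each sequence's context to itself.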

The following features were tried but brought little gain, so they were dropped:

  • PseAA features of the amino acids (physicochemical properties) and derived features.
  • word2vec embedding vectors of the amino acids.
```python
# Feature engineering
data = pd.read_csv('data/data.csv')

amino_acid_map = {'A': 0, 'C': 1, 'D': 2, 'E': 3, 'F': 4, 'G': 5, 'H': 6, 'I': 7,
                  'K': 8, 'L': 9, 'M': 10, 'N': 11, 'P': 12, 'Q': 13, 'R': 14, 'S': 15,
                  'T': 16, 'U': 17, 'V': 18, 'W': 19, 'X': 20, 'Y': 21, 'Z': 22}
secondary_structure_map = {' ': 0, 'B': 1, 'E': 2, 'G': 3, 'H': 4, 'I': 5, 'S': 6, 'T': 7}
data['amino_acid'] = data['amino_acid'].map(amino_acid_map).astype('int8')
data['secondary_structure'] = data['secondary_structure'].map(secondary_structure_map).astype('int8')

# Length of the protein each amino acid belongs to
data['count_all'] = data.groupby('id')['amino_acid'].transform('count').astype('int16')

amino_acids = sorted(list(data['amino_acid'].unique()))
for i in tqdm(amino_acids):
    data['count_'+str(i)] = (data['amino_acid'] == i)
    # Count of amino acid i in a centred window of 21 positions
    data['rolling_count_'+str(i)] = data.groupby('id')['count_'+str(i)].rolling(21, center=True).sum().\
        reset_index(drop=True).fillna(-1).astype('int8')
    # Count of amino acid i over the whole protein
    data['count_'+str(i)] = data.groupby('id')['count_'+str(i)].transform('sum').astype('int16')
for i in amino_acids:
    data['count_'+str(i)+'_ratio'] = (data['count_'+str(i)] / data['count_all']).astype('float32')

# Shift features: the 20 amino acids before and after the current position
for i in tqdm(range(-20, 21)):
    if i == 0:
        continue
    data['shift_'+str(i)] = data.groupby('id')['amino_acid'].shift(i).fillna(-1).astype('int8')

# PseAA features (physicochemical properties) -- tried, little gain
# PseAA = pd.read_table('work/PseAA.txt', index_col=False)
# PseAA['amino_acid'] = PseAA['amino_acid'].map(amino_acid_map).astype('int8')
# for col in ['hydrophobicity', 'hydrophilicity', 'mass', 'pK1', 'pK2', 'pI']:
#     PseAA[col] = PseAA[col].astype('float32')
# data = pd.merge(data, PseAA, on='amino_acid', how='left')
# for col in tqdm(['hydrophobicity', 'hydrophilicity', 'mass', 'pK1', 'pK2', 'pI']):
#     for i in range(-5, 6):
#         if i == 0:
#             continue
#         data[col+'_diff_'+str(i)] = data.groupby('id')[col].diff(periods=i).fillna(-999).astype('float32')
```
```python
# word2vec features -- tried, little gain
# data = pd.read_csv('work/data_seq_train.txt', header=None)
# data[0] = data[0].map(lambda x: list(x))
# from gensim.models import Word2Vec
# print('Learning embedding vectors...')
# t = time.time()
# size = 32
# model = Word2Vec(sentences=data[0],
#                  iter=5,        # number of epochs
#                  min_count=1,   # a token has to appear at least this many times to be kept
#                  size=size,     # dimension of the hidden layer
#                  workers=8,     # number of threads used for training
#                  sg=1,
#                  hs=1,
#                  window=20)
# print('Learning embedding vectors done!')
# print(time.time() - t)
# embeddings = [[] for _ in range(23)]
# for i, key in enumerate(sorted(list(model.wv.vocab.keys()))):
#     embeddings[i] = list(key) + list(model.wv[key])
# embeddings = pd.DataFrame(embeddings)
# embeddings.columns = ['amino_acid'] + ['emb'+str(i) for i in range(size)]
# embeddings.to_csv('work/embeddings.csv', index=False)

# Merge the embeddings back into the per-amino-acid frame
# data = pd.read_feather('work/data.feather')
# embeddings = pd.read_csv('work/embeddings.csv')
# embeddings['amino_acid'] = embeddings['amino_acid'].map(amino_acid_map).astype('int8')
# data = pd.merge(data, embeddings, on='amino_acid', how='left')
# for col in data.columns:
#     if 'emb' in col:
#         data[col] = data[col].astype('float32')
# data.to_feather('work/data.feather')
```

Model Training and Tricks

We first trained with 6-fold cross-validation (CV); each fold early-stopped after roughly 450 rounds. However, the Docker submission is limited to 30 minutes of runtime, and predicting the test set with all the CV models would not fit in that budget.

  • We therefore trained on the full training set and tuned the number of boosting rounds by hand. Submission results showed that about 580 rounds works best for a full-data model, and two such models can run within the 30-minute limit, so the final submission is a blend of two full-data models.
  • Since the evaluation metric is macro F1, if the label distributions of the training and test sets are similar we can run a threshold (per-class weight) search on the model's predicted probabilities to maximize the F1 score. (We believe the metric is macro F1 over all amino acids, not the arithmetic mean of per-sequence macro F1 as the problem statement reads: if the predictions for a sequence include a structure class that never appears in that sequence's ground truth, naively calling sklearn.metrics.f1_score per sequence would misbehave. We are not certain about this.)
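The write-up does not show the blending step itself. Assuming a plain average of the two full-data models' class-probability matrices (a common fusion choice; this snippet is a reconstruction, not the team's code), it might look like this, with toy arrays standing in for model.predict outputs:

```python
import numpy as np

def blend(prob_list):
    """Average the (n_samples, num_class) probability matrices of several models."""
    return np.mean(prob_list, axis=0)

# Toy stand-ins for model.predict(test_x, num_iteration=580) from the two models
p1 = np.array([[0.6, 0.4], [0.2, 0.8]])
p2 = np.array([[0.4, 0.6], [0.4, 0.6]])

blended = blend([p1, p2])        # [[0.5, 0.5], [0.3, 0.7]]
labels = blended.argmax(axis=1)  # final predicted class per sample -> [0, 1]
print(blended, labels)
```

Averaging probabilities rather than hard labels keeps the blended output usable for the threshold search described above.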

Online Score Details

| trick | score |
| --- | --- |
| CV single model | 0.761 |
| full-data single model | 0.770 |
| full-data two-model blend | 0.7726 |
| full-data two-model blend + threshold search | 0.7732 |
```python
# Model training
data = pd.read_feather('work/data_reverse.feather')

drop_feat = ['emb'+str(i) for i in range(32)]   # drop the word2vec embedding columns
used_feat = [f for f in data.columns if f not in (['id', 'secondary_structure'] + drop_feat)]
cate_feat = ['amino_acid'] + [f for f in used_feat if 'shift' in f]

train_x = data[used_feat]
train_y = data['secondary_structure']

params = {
    'learning_rate': 0.2,
    # 'metric': None,
    'objective': 'multiclass',
    'num_class': 8,
    'feature_fraction': 0.80,
    'bagging_fraction': 0.75,
    'bagging_freq': 2,
    'n_jobs': 8,
    'seed': 2020,
    'max_depth': 10,
    'num_leaves': 100000,
    'lambda_l1': 0.5,
    'lambda_l2': 0.5,
}

# Cross-validation (only the first fold is run here)
oof_train = np.zeros((len(train_x), 8))
kfold = StratifiedKFold(n_splits=6, shuffle=True, random_state=44)
for index, (trn_idx, val_idx) in enumerate(kfold.split(train_x, train_y)):
    x_trn, y_trn = train_x.iloc[trn_idx], train_y.iloc[trn_idx]
    x_val, y_val = train_x.iloc[val_idx], train_y.iloc[val_idx]
    train_set = lgb.Dataset(x_trn, y_trn)
    val_set = lgb.Dataset(x_val, y_val)
    model = lgb.train(params, train_set, num_boost_round=5000,
                      categorical_feature=cate_feat,
                      valid_sets=val_set,
                      early_stopping_rounds=50, verbose_eval=20)
    # oof_train[val_idx] = model.predict(x_val)
    # model.save_model('work/model2020_fold%d.txt' % (index+1))
    # del x_trn, y_trn, x_val, y_val, model, train_set, val_set
    # gc.collect()
    break

# Full-data training
# train_set = lgb.Dataset(train_x, train_y)
# model = lgb.train(params, train_set, num_boost_round=650,
#                   categorical_feature=cate_feat,
#                   verbose_eval=20)
# model.save_model('work/model.txt')

model.num_trees()
display(lgb.plot_importance(model, figsize=(8, 25)))
```
```python
# Threshold (per-class weight) search
# Predict on the held-out fold so that the weights are tuned against matching labels
# (the original snippet predicted on train_x, whose length does not match y_val)
oof = model.predict(x_val, num_iteration=580)
x1 = np.array(oof)
y1 = np.array(y_val)

from scipy import optimize

def fun(x):
    # Negative macro F1 after rescaling each class's probability by its weight
    return -f1_score(y1, np.argmax(x1 * x, axis=1),
                     labels=[0, 1, 2, 3, 4, 5, 6, 7], average='macro')

x0 = np.asarray((1, 1, 1, 1, 1, 1, 1, 1))
res = optimize.fmin_powell(fun, x0)

score_before = f1_score(y1, np.argmax(x1, axis=1), labels=[0, 1, 2, 3, 4, 5, 6, 7], average='macro')
score_after = f1_score(y1, np.argmax(x1 * res, axis=1), labels=[0, 1, 2, 3, 4, 5, 6, 7], average='macro')
print('before:', score_before)
print('after:', score_after)
```
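On the metric itself: with an explicit labels list, sklearn's f1_score assigns an F1 of 0 to any listed class that never occurs, which is why computing macro F1 per short sequence (as the problem statement reads) would behave oddly. A toy illustration:

```python
from sklearn.metrics import f1_score

# One short "sequence": class 2 never occurs in the ground truth or the prediction
y_true = [0, 0, 1]
y_pred = [0, 1, 1]

# The absent class contributes F1 = 0 to the macro average
score = f1_score(y_true, y_pred, labels=[0, 1, 2], average='macro')
print(score)  # (2/3 + 2/3 + 0) / 3 ≈ 0.4444
```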