
1st place solution: transformer and acceleration data

Parkinsons Freezing of Gait Prediction | tlvmc-parkinsons-freezing-gait-prediction

Start: 2023-03-09 · End: 2023-06-08 · Clinical decision support · Data algorithm competition

Greetings to the Kaggle community. In this post I will walk you through my solution.

Thanks to Kaggle for providing free GPU and TPU resources to everyone; on my own graphics card (a 1050 Ti) I could not have achieved these results.
Thanks to Google for the excellent TensorFlow library.
All of my work was done in Kaggle Notebooks and relies on TensorFlow functionality.

Key decisions

In my view, the key decisions that led to the good results were:

  1. Combining a Transformer encoder with two bidirectional LSTM layers
  2. Using patches, as in the Vision Transformer
  3. Reducing the target resolution

How it works

Suppose we have a tdcsfog sensor sequence of length 5000 with the three columns AccV, AccML, AccAP.

First, mean-std normalization is applied to the AccV, AccML, and AccAP columns:

def sample_normalize(sample):
    # Zero-mean, unit-variance normalization; divide_no_nan guards
    # against constant (zero-std) series.
    mean = tf.math.reduce_mean(sample)
    std = tf.math.reduce_std(sample)
    sample = tf.math.divide_no_nan(sample - mean, std)

    return sample.numpy()

The sequence is then zero-padded so that its final length is divisible by block_size = 15552 (12096 for defog). The sequence now has shape (15552, 3).
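The padding step can be sketched as follows (a hypothetical helper, written with NumPy for illustration; the write-up itself works in TensorFlow):

```python
import numpy as np

def pad_to_block(series, block_size):
    # Zero-pad a (length, channels) series at the end so its length
    # becomes a multiple of block_size (hypothetical helper).
    pad = (-len(series)) % block_size
    return np.pad(series, ((0, pad), (0, 0)))

series = np.ones((5000, 3))           # the 5000-step tdcsfog example above
padded = pad_to_block(series, 15552)  # shape (15552, 3)
```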

Patches are then created with patch_size = 18 (14 for defog):

series # example shape (15552, 3)
series = tf.reshape(series, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size'], 3)) # example shape (864, 18, 3)
series = tf.reshape(series, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size']*3))  # example shape (864, 54)

The sequence now has shape (864, 54); this is the model's input.

What about the StartHesitation, Turn, and Walking targets? The same approach, but with tf.reduce_max applied at the end:

series_targets # example shape (15552, 3)
series_targets = tf.reshape(series_targets, shape=(CFG['block_size'] // CFG['patch_size'], CFG['patch_size'], 3)) # example shape (864, 18, 3)
series_targets = tf.transpose(series_targets, perm=[0, 2, 1]) # example shape (864, 3, 18)
series_targets = tf.reduce_max(series_targets, axis=-1) # example shape (864, 3)

The sequence now has shape (864, 3); this is the model's output.

Finally, the original resolution is restored simply with tf.tile:

predictions = model.predict(...) # example shape (1, 864, 3)
predictions = tf.expand_dims(predictions, axis=-1) # example shape (1, 864, 3, 1)
predictions = tf.transpose(predictions, perm=[0, 1, 3, 2]) # example shape (1, 864, 1, 3)
predictions = tf.tile(predictions, multiples=[1, 1, CFG['patch_size'], 1]) # example shape (1, 864, 18, 3)
predictions = tf.reshape(predictions, shape=(predictions.shape[0], predictions.shape[1]*predictions.shape[2], 3)) # example shape (1, 15552, 3)

Details

The daily-living data, events.csv, subjects.csv, and tasks.csv were never used.

Tdcsfog data was not used to train the defog models.

Defog data was not used to train the tdcsfog models.

Optimizer

tf.keras.optimizers.Adam(learning_rate=Schedule(LEARNING_RATE, WARMUP_STEPS), beta_1=0.9, beta_2=0.98, epsilon=1e-9)
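The Schedule class itself is not shown in the write-up. A plausible reconstruction, assuming linear warmup to the peak rate followed by inverse-square-root decay as in Attention Is All You Need (a hypothetical sketch, not the author's code):

```python
import math

def schedule(step, peak_lr, warmup_steps):
    # Hypothetical reconstruction of Schedule(LEARNING_RATE, WARMUP_STEPS):
    # linear warmup to peak_lr, then inverse-square-root decay.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * math.sqrt(warmup_steps / step)
```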

Loss function

'''
Loss function arguments

real is a tensor of shape (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], 5), where the last axis holds:
0 - StartHesitation
1 - Turn
2 - Walking
3 - Valid
4 - Mask

output is a tensor of shape (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], 3), where the last axis holds:
0 - StartHesitation predictions
1 - Turn predictions
2 - Walking predictions

'''

ce = tf.keras.losses.BinaryCrossentropy(reduction='none')

def loss_function(real, output, name='loss_function'):
    loss = ce(tf.expand_dims(real[:, :, 0:3], axis=-1), tf.expand_dims(output, axis=-1)) # example shape (32, 864, 3)
    
    mask = tf.math.multiply(real[:, :, 3], real[:, :, 4]) # example shape (32, 864)
    mask = tf.cast(mask, dtype=loss.dtype)
    mask = tf.expand_dims(mask, axis=-1) # example shape (32, 864, 1)
    mask = tf.tile(mask, multiples=[1, 1, 3]) # example shape (32, 864, 3)
    loss *= mask # example shape (32, 864, 3)

    return tf.reduce_sum(loss) / tf.reduce_sum(mask)
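The behavior of this masked loss can be checked with a small NumPy sketch (a hypothetical re-implementation for illustration, not the training code):

```python
import numpy as np

def masked_bce(real, output, eps=1e-7):
    # NumPy sketch of the masked binary cross-entropy above:
    # real[..., 0:3] are the targets, real[..., 3] is Valid, real[..., 4] is Mask.
    targets = real[..., 0:3]
    p = np.clip(output, eps, 1 - eps)
    loss = -(targets * np.log(p) + (1 - targets) * np.log(1 - p))
    mask = np.broadcast_to((real[..., 3] * real[..., 4])[..., None], loss.shape)
    return (loss * mask).sum() / mask.sum()
```

With all patches valid and unmasked, a uniform 0.5 prediction yields a loss of ln 2, as expected for binary cross-entropy.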

Model

CFG = {'TPU': 0,
       'block_size': 15552,
       'block_stride': 15552//16,
       'patch_size': 18,
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

'''
Transformer encoder layer
See https://arxiv.org/pdf/1706.03762.pdf [Attention Is All You Need] for details

'''

class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        
        self.mha = tf.keras.layers.MultiHeadAttention(num_heads=CFG['fog_model_num_heads'], key_dim=CFG['fog_model_dim'], dropout=CFG['fog_model_mha_dropout'])
        
        self.add = tf.keras.layers.Add()
        
        self.layernorm = tf.keras.layers.LayerNormalization()
        
        self.seq = tf.keras.Sequential([tf.keras.layers.Dense(CFG['fog_model_dim'], activation='relu'),
                                        tf.keras.layers.Dropout(CFG['fog_model_encoder_dropout']),
                                        tf.keras.layers.Dense(CFG['fog_model_dim']),
                                        tf.keras.layers.Dropout(CFG['fog_model_encoder_dropout']),
                                       ])
        
    def call(self, x):
        attn_output = self.mha(query=x, key=x, value=x)
        x = self.add([x, attn_output])
        x = self.layernorm(x)
        x = self.add([x, self.seq(x)])
        x = self.layernorm(x)
        
        return x

'''
FOGEncoder is a combination of a Transformer encoder (D=320, H=6, L=5) and two bidirectional LSTM layers

'''

class FOGEncoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        
        self.first_linear = tf.keras.layers.Dense(CFG['fog_model_dim'])
        
        self.add = tf.keras.layers.Add()
        
        self.first_dropout = tf.keras.layers.Dropout(CFG['fog_model_first_dropout'])
        
        self.enc_layers = [EncoderLayer() for _ in range(CFG['fog_model_num_encoder_layers'])]
        
        self.lstm_layers = [tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(CFG['fog_model_dim'], return_sequences=True)) for _ in range(CFG['fog_model_num_lstm_layers'])]
        
        self.sequence_len = CFG['block_size'] // CFG['patch_size']
        self.pos_encoding = tf.Variable(initial_value=tf.random.normal(shape=(1, self.sequence_len, CFG['fog_model_dim']), stddev=0.02), trainable=True)
        
    def call(self, x, training=None): # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['patch_size']*3), example shape (4, 864, 54)
        x = x / 25.0 # rough rescaling into the [-1, 1] range
        x = self.first_linear(x) # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['fog_model_dim']), example shape (4, 864, 320)
        
        if training: # augmentation during training: randomly shift the positional-encoding tensor
            random_pos_encoding = tf.roll(tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, 1, 1]),
                                          shift=tf.random.uniform(shape=(GPU_BATCH_SIZE,), minval=-self.sequence_len, maxval=0, dtype=tf.int32),
                                          axis=GPU_BATCH_SIZE * [1],
                                          )
            x = self.add([x, random_pos_encoding])
        
        else: # no augmentation outside of training
            x = self.add([x, tf.tile(self.pos_encoding, multiples=[GPU_BATCH_SIZE, 1, 1])])
            
        x = self.first_dropout(x)
        
        for i in range(CFG['fog_model_num_encoder_layers']): x = self.enc_layers[i](x) # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['fog_model_dim']), example shape (4, 864, 320)
        for i in range(CFG['fog_model_num_lstm_layers']): x = self.lstm_layers[i](x) # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['fog_model_dim']*2), example shape (4, 864, 640)
        
        return x

class FOGModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        
        self.encoder = FOGEncoder()
        self.last_linear = tf.keras.layers.Dense(3)
        
    def call(self, x): # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['patch_size']*3), example shape (4, 864, 54)
        x = self.encoder(x) # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], CFG['fog_model_dim']*2), example shape (4, 864, 640)
        x = self.last_linear(x) # (GPU_BATCH_SIZE, CFG['block_size'] // CFG['patch_size'], 3), example shape (4, 864, 3)
        x = tf.nn.sigmoid(x) # sigmoid activation
        
        return x

The submitted result (private score 0.514, public score 0.527) is an ensemble of 8 models:

Model 1 (tdcsfog model)

CFG = {'TPU': 1, 
       'block_size': 15552, 
       'block_stride': 15552//16,
       'patch_size': 18, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/38
STEPS_PER_EPOCH = 64
WARMUP_STEPS = 64
BATCH_SIZE=32

Validation subjects
['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']

Trained for 15 minutes on a TPU. Validation scores:
StartHesitation AP - 0.462 Turn AP - 0.896 Walking AP - 0.470 mAP - 0.609
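The validation scores above are per-class Average Precision. A minimal sketch of the metric (a hypothetical helper for illustration, not the competition's exact scorer):

```python
import numpy as np

def average_precision(y_true, y_score):
    # AP as the mean of precision-at-k taken at each positive,
    # with candidates ranked by descending score.
    order = np.argsort(-np.asarray(y_score))
    y = np.asarray(y_true)[order]
    precision = np.cumsum(y) / np.arange(1, len(y) + 1)
    return float((precision * y).sum() / y.sum())
```

mAP is then the unweighted mean of the StartHesitation, Turn, and Walking APs.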

Model 2 (tdcsfog model)

CFG = {'TPU': 0, 
       'block_size': 15552, 
       'block_stride': 15552//16,
       'patch_size': 18, 
       
       'fog_model_dim': 256,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 3,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/24
STEPS_PER_EPOCH = 64
WARMUP_STEPS = 64
BATCH_SIZE = 16

Validation subjects
['07285e', '220a17', '54ee6e', '312788', '24a59d', '4bb5d0', '48fd62', '79011a', '7688c1']

Trained for 40 minutes on a GPU. Validation scores:
StartHesitation AP - 0.481 Turn AP - 0.886 Walking AP - 0.437 mAP - 0.601

Model 3 (tdcsfog model)

CFG = {'TPU': 1,
       'block_size': 15552, 
       'block_stride': 15552//16,
       'patch_size': 18, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/48
STEPS_PER_EPOCH = 64
WARMUP_STEPS = 64
BATCH_SIZE = 32

Validation subjects
['e39bc5', '516a67', 'af82b2', '4dc2f8', '743f4e', 'fa8764', 'a03db7', '51574c', '2d57c2']

Trained for 11 minutes on a TPU. Validation scores:
StartHesitation AP - 0.601 Turn AP - 0.857 Walking AP - 0.289 mAP - 0.582

Model 4 (tdcsfog model)

CFG = {'TPU': 1,
       'block_size': 15552, 
       'block_stride': 15552//16,
       'patch_size': 18, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/38
STEPS_PER_EPOCH = 64
WARMUP_STEPS = 64
BATCH_SIZE = 32

Validation subjects
['5c0b8a', 'a03db7', '7fcee9', '2c98f7', '2a39f8', '4f13b4', 'af82b2', 'f686f0', '93f49f', '194d1d', '02bc69', '082f01']

Trained for 13 minutes on a TPU. Validation scores:
StartHesitation AP - 0.367 Turn AP - 0.879 Walking AP - 0.194 mAP - 0.480

Model 5 (defog model)

CFG = {'TPU': 1,
       'block_size': 12096, 
       'block_stride': 12096//16,
       'patch_size': 14, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/62
STEPS_PER_EPOCH = 256
WARMUP_STEPS = 256
BATCH_SIZE = 32

Validation subjects
['00f674', '8d43d9', '107712', '7b2e84', '575c60', '7f8949', '2874c5', '72e2c7']

Training data: defog data, notype data
Validation data: defog data, notype data

Trained for 45 minutes on a TPU. Validation scores:
StartHesitation AP - [not used] Turn AP - 0.625 Walking AP - 0.238 mAP - 0.432
Event AP - 0.800

Model 6 (defog model)

CFG = {'TPU': 1,
       'block_size': 12096, 
       'block_stride': 12096//16,
       'patch_size': 14, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 5,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

Training data: defog data (~85%)
Validation data: defog data (~15%), notype data (100%)

Model 7 (defog model)

CFG = {'TPU': 1,
       'block_size': 12096, 
       'block_stride': 12096//16,
       'patch_size': 14, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 4,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/24
STEPS_PER_EPOCH = 32
WARMUP_STEPS = 64
BATCH_SIZE = 128

Training data: defog data (100%)
Validation data: notype data (100%)

Trained for 18 minutes on a TPU. Validation scores:
StartHesitation AP - [not used] Turn AP - [not used] Walking AP - [not used] mAP - [not used]
Event AP - 0.764

Model 8 (defog model)

CFG = {'TPU': 1,
       'block_size': 12096, 
       'block_stride': 12096//16,
       'patch_size': 14, 
       
       'fog_model_dim': 320,
       'fog_model_num_heads': 6,
       'fog_model_num_encoder_layers': 5,
       'fog_model_num_lstm_layers': 2,
       'fog_model_first_dropout': 0.1,
       'fog_model_encoder_dropout': 0.1,
       'fog_model_mha_dropout': 0.0,
      }

LEARNING_RATE = 0.01/46
STEPS_PER_EPOCH = 256
WARMUP_STEPS = 256
BATCH_SIZE = 32

Validation subjects
['12f8d1', '8c1f5e', '387ea0', 'c56629', '7da72f', '413532', 'd89567', 'ab3b2e', 'c83ff6', '056372']

Training data: defog data, notype data
Validation data: defog data, notype data

Trained for 28 minutes on a TPU. Validation scores:
StartHesitation AP - [not used] Turn AP - 0.758 Walking AP - 0.221 mAP - 0.489
Event AP - 0.744

Final models

Tdcsfog: 0.25 * Model 1 + 0.25 * Model 2 + 0.25 * Model 3 + 0.25 * Model 4

Defog: 0.25 * Model 5 + 0.25 * Model 6 + 0.25 * Model 7 + 0.25 * Model 8
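The equal-weight blending amounts to averaging the per-model probability tensors; a minimal sketch (using NumPy for illustration):

```python
import numpy as np

def ensemble(predictions):
    # Equal-weight average of per-model prediction tensors,
    # e.g. four (1, 15552, 3) probability arrays per dataset.
    return np.mean(np.stack(predictions, axis=0), axis=0)

preds = [np.full((2, 3), v) for v in (0.2, 0.4, 0.6, 0.8)]
blended = ensemble(preds)  # every entry is 0.5
```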
