Knowledge Distillation Methods
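For orientation, below is a minimal sketch of the classic teacher-student distillation objective (soft-target KL divergence with temperature, after Hinton et al.). It is a generic illustration, not the method of any specific paper listed in this section; the temperature and weighting values are illustrative assumptions.

```python
# Generic knowledge-distillation loss sketch (not from any paper listed below).
# Assumes PyTorch; T and alpha are illustrative hyperparameters.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine the soft-target KL term (teacher -> student) with standard cross-entropy."""
    # Soften both distributions with temperature T; scale by T^2 so gradient
    # magnitudes stay comparable to the hard-label term.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```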
CVPR 2023
Minimizing the Accumulated Trajectory Error To Improve Dataset Distillation
EcoTTA: Memory-Efficient Continual Test-Time Adaptation via Self-Distilled Regularization
itKD: Interchange Transfer-Based Knowledge Distillation for 3D Object Detection
Masked Autoencoders Enable Efficient Knowledge Distillers
TinyMIM: An Empirical Study of Distilling MIM Pre-Trained Models
DaFKD: Domain-Aware Federated Knowledge Distillation
Multi-Level Logit Distillation
ScaleKD: Distilling Scale-Aware Knowledge in Small Object Detector
Accelerating Dataset Distillation via Model Augmentation
Object-Aware Distillation Pyramid for Open-Vocabulary Object Detection
Self-Supervised Image-to-Point Distillation via Semantically Tolerant Contrastive Loss
Distilling Neural Fields for Real-Time Articulated Shape Reconstruction
Class Attention Transfer Based Knowledge Distillation
Efficient RGB-T Tracking via Cross-Modality Distillation
StructVPR: Distill Structural Knowledge With Weighting Samples for Visual Place Recognition
Few-Shot Class-Incremental Learning via Class-Aware Bilateral Distillation