Virginia Tech Preprint Challenges Skill-MD Paradigm with Model-Native Training
A Virginia Tech preprint reports that model-native skills extracted via sparse autoencoders outperform human-defined skill files in supervised fine-tuning (SFT), and that activation-space data selection yields 41% gains on math.