This work investigates score prediction for the evidence dimension of the Response-to-Text Assessment (RTA). In previous work on this project, a new set of interpretable features was designed for evaluating the evidence dimension of the RTA, and the results showed that these features outperform the baselines. In this work, we aim to improve on the previous model by incorporating word embeddings and topic-importance models into the feature extraction process. We present the preliminary results of this work.
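As a rough illustration of how word embeddings might enter the feature extraction, the sketch below soft-matches response tokens against topic words by embedding similarity instead of requiring exact string matches. The embedding vectors, vocabulary, threshold, and function names here are all hypothetical placeholders, not the actual features or models used in this work; a real system would load pretrained vectors such as word2vec or GloVe.

```python
import math

# Hypothetical toy embeddings; a real system would load pretrained
# vectors (e.g. word2vec or GloVe). Words and values are illustrative only.
EMBEDDINGS = {
    "ocean":  [0.9, 0.1, 0.0],
    "sea":    [0.85, 0.15, 0.05],
    "school": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def evidence_feature(response_tokens, topic_words, threshold=0.9):
    """Count response tokens whose embedding is close to any topic word.

    A soft-matching variant of counting exact topic-word mentions:
    embeddings let a near-synonym (e.g. "sea" for "ocean") still count
    as evidence that the response engages with the topic.
    """
    count = 0
    for tok in response_tokens:
        vec = EMBEDDINGS.get(tok)
        if vec is None:
            continue  # out-of-vocabulary token contributes nothing
        if any(cosine(vec, EMBEDDINGS[w]) >= threshold
               for w in topic_words if w in EMBEDDINGS):
            count += 1
    return count
```

For example, `evidence_feature(["sea", "school"], ["ocean"])` counts "sea" (embedded near "ocean") but not "school", whereas an exact-match feature would count neither.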