Skeleton-based action recognition has always been an important research topic in computer vision. Most researchers in this field currently focus on actions performed by a single person, while very little work is dedicated to recognizing interactions between two people. However, interaction recognition is arguably more critical in practical applications, since actions in our society are often performed by multiple people. Designing an effective scheme to learn discriminative spatial and temporal representations for skeleton-based interaction recognition remains a challenging problem. Focusing on the characteristics of skeleton data for interactions, we first define the moving distance to distinguish the action status of the participants. We then propose view-invariant relative features to fully represent the spatial and temporal relationships of the skeleton sequence. Further, a new coding method is proposed to obtain the novel relative feature representations. Finally, we design a three-stream CNN model to learn deep features for interaction recognition. We evaluate our method on the SBU dataset, the NTU RGB+D 60 dataset, and the NTU RGB+D 120 dataset. The experimental results verify that our method is effective and exhibits strong robustness compared with current state-of-the-art methods.