Sensor-based activity recognition (AR) depends on effective feature representation and classification. However, many recent studies focus on recognition methods and largely ignore feature representation. Benefiting from the success of Convolutional Neural Networks (CNNs) in feature extraction, we propose to improve the feature representation of activities. Specifically, we use a reversed CNN to generate significant data from the original features and combine the raw training data with this significant data to obtain enhanced training data. The proposed method not only trains better feature extractors but also helps in understanding the abstract features of sensor-based activity data. To demonstrate the effectiveness of our proposed method, we conduct comparative experiments with a CNN classifier and a CNN-LSTM classifier on five public datasets, namely UCIHAR, UniMiB SHAR, OPPORTUNITY, WISDM, and PAMAP2. In addition, we compare our proposed method with traditional methods, namely Decision Tree, Multi-layer Perceptron, Extremely Randomized Trees, Random Forest, and k-Nearest Neighbour, on a specific dataset, WISDM. The results show that our proposed method consistently outperforms the state-of-the-art methods.
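The abstract does not fix a concrete architecture for the "reversed CNN" or for how significant data is merged with raw data; the PyTorch sketch below is one plausible reading, in which a forward 1D CNN extracts features from sensor windows, a decoder built from transposed convolutions maps those features back to input space to yield significant data, and the two are concatenated to form the enhanced training set. The class names (FeatureCNN, ReversedCNN), layer sizes, window length, and label handling are all illustrative assumptions, not the authors' specification.

```python
import torch
import torch.nn as nn

class FeatureCNN(nn.Module):
    """Forward CNN: extracts features from a sensor window (channels x time).
    Hypothetical architecture; the paper's exact extractor is not specified here."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, feat_ch, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ReversedCNN(nn.Module):
    """'Reversed' CNN: transposed convolutions map extracted features back to
    input space, producing the 'significant data' for a given window."""
    def __init__(self, feat_ch=64, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(feat_ch, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, out_ch, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, f):
        return self.net(f)

# Build the enhanced training set: raw windows plus their "significant" versions.
encoder, decoder = FeatureCNN(), ReversedCNN()
raw = torch.randn(128, 3, 128)               # batch of 3-axis sensor windows (toy data)
y = torch.randint(0, 6, (128,))              # hypothetical activity labels
significant = decoder(encoder(raw)).detach() # same shape as raw: (128, 3, 128)

enhanced_x = torch.cat([raw, significant])   # raw + significant data
enhanced_y = torch.cat([y, y])               # significant data keeps the raw labels
```

Under this reading, a downstream classifier (e.g., the CNN or CNN-LSTM classifiers named in the abstract) would simply be trained on `enhanced_x`/`enhanced_y` in place of the raw data; how the decoder itself is trained (e.g., reconstruction loss) is an open detail left to the body of the paper.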