Intensity-based local feature matching methods are sensitive to image contrast variations, so their performance declines significantly when they are applied to multimodal image registration. To address this problem, a multimodality-robust local feature descriptor was proposed and a corresponding feature matching method was developed. Firstly, an extraction method for multimodality-robust corners and line segments was proposed based on phase congruency and local orientation information, both of which are insensitive to contrast variations. Compared with intensity-based methods, more equivalent corners and line segments were extracted between multimodal images with large contrast differences. Then, a feature region consisting of 48 circular sub-regions was selected with the corner as its center, and 96-dimensional feature vectors were generated from the distance values of the corners and the length values of the line segments located in the feature sub-regions. Finally, a feature matching method based on a normalized correlation function was proposed, and a location-constrained RANdom SAmple Consensus (RANSAC) algorithm was used to remove false matching point pairs. The experimental results indicate that the precision and repeatability of the proposed method on multimodal image matching reach 80% and 13%, respectively. Compared with other intensity-based image matching methods, the precision and repeatability of the proposed method are 2-4 times and 4-7 times, respectively, those of Symmetric Scale-Invariant Feature Transform (S-SIFT) and Multimodal Speeded-Up Robust Features (MM-SURF). It is concluded that the proposed method significantly outperforms these state-of-the-art methods.
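The matching stage described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it matches descriptor vectors by their normalized correlation coefficient and then filters the candidate pairs with a simplified RANSAC-style location constraint (here, a pure 2-D translation model, which is an assumption; the paper's constraint may be more general). All function names, the correlation threshold, and the inlier tolerance are hypothetical.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, threshold=0.8):
    """Match feature vectors by normalized correlation.

    desc_a, desc_b: arrays of shape (n, d), e.g. d = 96 as in the paper.
    Returns index pairs (i, j) whose correlation exceeds `threshold`
    (a hypothetical cut-off, not taken from the paper).
    """
    # Zero-mean and unit-normalize each descriptor so that the dot
    # product equals the normalized correlation coefficient.
    a = desc_a - desc_a.mean(axis=1, keepdims=True)
    b = desc_b - desc_b.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-12
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-12
    corr = a @ b.T                         # pairwise correlation matrix
    best = corr.argmax(axis=1)             # best candidate per row
    return [(i, int(j)) for i, j in enumerate(best) if corr[i, j] >= threshold]

def ransac_filter(pts_a, pts_b, matches, n_iter=500, tol=3.0, seed=0):
    """RANSAC-style outlier removal under a translation-only location
    constraint: hypothesize a shift from one random match and keep the
    largest consensus set. A simplified stand-in for the paper's method."""
    rng = np.random.default_rng(seed)
    best_inliers = []
    for _ in range(n_iter):
        i, j = matches[rng.integers(len(matches))]
        t = pts_b[j] - pts_a[i]            # candidate translation
        inliers = [(p, q) for p, q in matches
                   if np.linalg.norm(pts_b[q] - (pts_a[p] + t)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

A true-to-paper version would additionally build the 96-dimensional vectors from corner distances and line-segment lengths over the 48 circular sub-regions; the sketch above only covers the correlation matching and false-pair removal steps.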