{"id":199,"date":"2015-10-23T07:35:35","date_gmt":"2015-10-23T07:35:35","guid":{"rendered":"http:\/\/www.cs.ubbcluj.ro\/~meco\/?p=199"},"modified":"2026-02-01T12:09:34","modified_gmt":"2026-02-01T12:09:34","slug":"pedestrian-recognition-by-using-a-dynamic-modality-selection-approach","status":"publish","type":"post","link":"https:\/\/www.cs.ubbcluj.ro\/~meco\/pedestrian-recognition-by-using-a-dynamic-modality-selection-approach\/","title":{"rendered":"Pedestrian recognition by using a dynamic modality selection approach (2015)"},"content":{"rendered":"\n<h3 class=\"wp-block-heading\">Abstract<\/h3>\n\n\n\n<p>Despite many years of research, pedestrian recognition remains a difficult but very important task. It has been shown that concatenating information from multi-modality images improves recognition accuracy, but at a high computational cost. We present a modality selection approach that dynamically selects the most discriminative modality for a given image and then uses it in the classification process. First, we extract kernel descriptor features from a given image in three modalities: intensity, depth and flow. Second, we dynamically determine the most suitable modality for that image using both a modality pertinence classifier and a decision confidence indicator. Third, we classify the image in the selected modality using a linear SVM. Numerical experiments are performed on the Daimler benchmark dataset, consisting of pedestrian and non-pedestrian bounding boxes captured in outdoor urban environments, and indicate that our model outperforms all the individual-modality classifiers as well as the model based on a posterior fusion of multi-modality decisions. 
Moreover, the proposed selection model is a promising and less computationally expensive alternative to concatenating multi-modality features prior to classification.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Citation<\/h3>\n\n\n\n<p>Rus, A., Rogozan, A., Dio\u0219an, L., Bensrhair, A., Pedestrian recognition by using a dynamic modality selection approach, ITSC, 2015, pp. 1862&#8211;1867<br><a href=\"https:\/\/doi.org\/10.1109\/ITSC.2015.302\">https:\/\/doi.org\/10.1109\/ITSC.2015.302<\/a><br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Despite many years of research, pedestrian recognition remains a difficult but very important task. It has been shown that concatenating information from multi-modality images improves recognition accuracy, but at a high computational cost. We present a modality selection approach that dynamically selects the most discriminative modality for a given image and then uses it in the classification process. First, we extract kernel descriptor features from a given image in three modalities: intensity, depth and flow. Second, we dynamically determine the most suitable modality for that image using both a modality pertinence classifier and a decision confidence indicator. Third, we classify the image in the selected modality using a linear SVM. Numerical experiments are performed on the Daimler benchmark dataset, consisting of pedestrian and non-pedestrian bounding boxes captured in outdoor urban environments, and indicate that our model outperforms all the individual-modality classifiers as well as the model based on a posterior fusion of multi-modality decisions. 
Moreover, the proposed selection model is a promising and less computationally expensive alternative to concatenating multi-modality features prior to classification.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[11],"_links":{"self":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/199"}],"collection":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/comments?post=199"}],"version-history":[{"count":3,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/199\/revisions"}],"predecessor-version":[{"id":1576,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/posts\/199\/revisions\/1576"}],"wp:attachment":[{"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/media?parent=199"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/categories?post=199"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.cs.ubbcluj.ro\/~meco\/wp-json\/wp\/v2\/tags?post=199"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}