Publication:
Visual and Auditory Data Fusion for Energy-Efficient and Improved Object Recognition in Wireless Multimedia Sensor Networks

dc.contributor.affiliation: Atilim University; Nazarbayev University; Ministry of Defense - Turkey; Turk Hava Kurumu University; Turkish Aeronautical Association; Baskent University
dc.contributor.author: Koyuncu, Murat; Yazici, Adnan; Civelek, Muhsin; Cosar, Ahmet; Sert, Mustafa
dc.date.accessioned: 2024-06-25T11:45:54Z
dc.date.available: 2024-06-25T11:45:54Z
dc.date.issued: 2019
dc.description.abstract: Automatic threat classification without human intervention is a popular research topic in wireless multimedia sensor networks (WMSNs), especially in the context of surveillance applications. This paper explores the effect of fusing the audio-visual multimedia and scalar data collected by the sensor nodes of a WMSN for energy-efficient and accurate object detection and classification. To that end, we implemented a wireless multimedia sensor node with video and audio capturing and processing capabilities in addition to traditional scalar sensors. The multimedia sensors are kept in sleep mode to save energy until they are activated by the scalar sensors, which are always active. The object recognition results obtained from the video and audio applications are fused to increase the object recognition performance of the sensor node. Final results are forwarded to the sink in text format, which greatly reduces the amount of data transmitted over the network. Performance tests of the implemented prototype system show that fusing audio data with visual data significantly improves the automatic object recognition capability of a sensor node. Since auditory data requires less processing power than visual data, the overhead of processing it is low, and it helps extend the network lifetime of WMSNs.
dc.description.doi: 10.1109/JSEN.2018.2885281
dc.description.endpage: 1849
dc.description.issue: 5
dc.description.pages: 11
dc.description.researchareas: Engineering; Instruments & Instrumentation; Physics
dc.description.startpage: 1839
dc.description.uri: http://dx.doi.org/10.1109/JSEN.2018.2885281
dc.description.volume: 19
dc.description.woscategory: Engineering, Electrical & Electronic; Instruments & Instrumentation; Physics, Applied
dc.identifier.issn: 1530-437X
dc.identifier.uri: https://acikarsiv.thk.edu.tr/handle/123456789/1352
dc.language.iso: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.relation.journal: IEEE SENSORS JOURNAL
dc.subject: Wireless multimedia sensor; object detection; visual and auditory data fusion; WMSN
dc.subject: SURVEILLANCE; SCHEME
dc.title: Visual and Auditory Data Fusion for Energy-Efficient and Improved Object Recognition in Wireless Multimedia Sensor Networks
dc.type: Article
dspace.entity.type: Publication