

Showing 2 results for Amirkhani

Reza Bayat Rizi, Amir R. Forouzan, Farshad Miramirkhani, Mohamad F. Sabahi,
Volume 20, Issue 4 (Special Issue on ADLEEE - December 2024)
Abstract

Visible Light Communication, a key optical wireless technology, offers reliable, high-bandwidth, and secure communication, making it a promising solution for a variety of applications. Despite these advantages, optical wireless communication faces challenges in medical environments because patient movement causes fluctuating signal strength. Smart transmitter structures can improve system performance by adapting system parameters to the changing channel conditions. The purpose of this research is to examine how adaptive modulation performs in a medical body sensor network that uses visible light communication. The analysis focuses on various medical scenarios and investigates machine learning algorithms, comparing adaptive modulation based on supervised learning with adaptive modulation based on reinforcement learning. The findings indicate that both approaches greatly improve spectral efficiency, underscoring the importance of link adaptation in visible light communication-based medical body sensor networks. Using the Q-learning algorithm for adaptive modulation enables real-time training, allowing the system to adapt to the changing environment without any prior knowledge of it. A remarkable improvement is observed for photodetectors on the shoulder and wrist, since they experience higher DC gain.
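The abstract's reinforcement-learning approach can be illustrated with a minimal tabular Q-learning sketch for modulation selection. All numbers below (SNR bins, the per-modulation SNR requirements, and the reward model) are illustrative assumptions, not values from the paper:

```python
import math
import random

MODULATIONS = [2, 4, 16, 64]       # BPSK, QPSK, 16-QAM, 64-QAM
SNR_BIN_EDGES = [5, 10, 15, 20]    # assumed channel-quality bins (dB)
REQUIRED_SNR = {2: 4, 4: 8, 16: 14, 64: 20}  # assumed decoding thresholds

def snr_to_state(snr_db):
    # Discretize the channel quality into one of five states.
    for i, edge in enumerate(SNR_BIN_EDGES):
        if snr_db < edge:
            return i
    return len(SNR_BIN_EDGES)

def reward(snr_db, m):
    # Assumed reward: bits/symbol if the channel supports order m, else 0.
    return math.log2(m) if snr_db >= REQUIRED_SNR[m] else 0.0

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * len(MODULATIONS) for _ in range(len(SNR_BIN_EDGES) + 1)]
    for _ in range(episodes):
        snr = rng.uniform(0, 25)            # random channel realization
        s = snr_to_state(snr)
        # Epsilon-greedy action selection over modulation orders.
        if rng.random() < eps:
            a = rng.randrange(len(MODULATIONS))
        else:
            a = max(range(len(MODULATIONS)), key=lambda i: q[s][i])
        r = reward(snr, MODULATIONS[a])
        # One-step update; each transmission is treated as one-shot,
        # so there is no discounted next-state term.
        q[s][a] += alpha * (r - q[s][a])
    return q

q = train()
# In the highest-SNR state, the learned policy should pick the
# densest constellation (64-QAM) without any prior channel model.
best = max(range(len(MODULATIONS)), key=lambda i: q[len(SNR_BIN_EDGES)][i])
```

The key property matching the abstract is that the agent needs no prior knowledge of the environment: the Q-table is learned online from observed rewards alone.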
Mahdi Khourishandiz, Abdollah Amirkhani,
Volume 21, Issue 0 (In Press 2025)
Abstract

Protecting privacy in street view imagery is a critical challenge in urban analytics, requiring comprehensive and scalable solutions beyond localized obfuscation techniques such as face or license plate blurring. To address this, we propose a novel framework that automatically detects and removes sensitive objects, such as pedestrians and vehicles, ensuring robust privacy preservation while maintaining the visual integrity of the images. Our approach integrates semantic segmentation with 2D priors and multimodal data from cameras and LiDAR to achieve precise object detection in complex urban scenes. Detected regions are seamlessly filled using a large-mask inpainting technique based on fast Fourier convolutions (FFC), enabling efficient generalization to high-resolution imagery. Evaluated on the SemanticKITTI dataset, our method achieves a mean Intersection over Union (mIoU) of 64.9%, surpassing state-of-the-art benchmarks. Despite its reliance on accurate sensor calibration and multimodal data availability, the proposed framework offers a scalable solution for privacy-sensitive applications such as urban mapping and virtual tourism, delivering high-quality anonymized imagery with minimal artifacts.
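The mIoU metric reported above is the standard segmentation benchmark score. A minimal sketch of how it is computed (the label arrays here are toy data, not SemanticKITTI):

```python
def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes present in the data."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:                      # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy per-pixel class labels (flattened segmentation maps).
pred   = [0, 0, 1, 1, 2, 2]
target = [0, 1, 1, 1, 2, 0]
miou = mean_iou(pred, target, num_classes=3)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```

Per-class IoU penalizes both missed pixels and false detections, which is why it is a stricter measure than raw pixel accuracy for tasks like the object-removal masks described in the abstract.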


© 2022 by the authors. Licensee IUST, Tehran, Iran. This is an open access journal distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.