


M. Shams Esfand Abadi, S. Nikbakht,
Volume 7, Issue 2 (6-2011)
Abstract

Two-dimensional (TD) adaptive filtering is a technique that can be applied to many image and signal processing applications. This paper extends one-dimensional adaptive filter algorithms to TD structures, establishing novel TD adaptive filters. Based on this extension, the TD variable step-size normalized least mean squares (TD-VSS-NLMS), TD-VSS affine projection (TD-VSS-APA), TD set-membership NLMS (TD-SM-NLMS), TD-SM-APA, TD selective partial update NLMS (TD-SPU-NLMS), and TD-SPU-APA algorithms are presented. In TD-VSS adaptive filters, the step-size changes during adaptation, which improves the performance of the algorithms. In TD-SM adaptive filter algorithms, the filter coefficients are not updated at every iteration, so the computational complexity is reduced. In TD-SPU adaptive algorithms, the filter coefficients are only partially updated, which also reduces the computational complexity. We demonstrate the good performance of the proposed algorithms through several simulation results in TD adaptive noise cancellation (TD-ANC) for image restoration. The results are compared with classical TD adaptive filters such as TD-LMS, TD-NLMS, and TD-APA.
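As a hedged illustration (not the paper's exact algorithms), a single update of a basic two-dimensional NLMS filter — the baseline that the proposed TD-VSS, TD-SM, and TD-SPU variants build on — might look like the following sketch; the function name and fixed step-size are assumptions:

```python
import numpy as np

def td_nlms_step(W, X, d, mu=0.5, eps=1e-8):
    """One two-dimensional NLMS update.

    W  : (M, N) filter coefficient matrix
    X  : (M, N) input window (e.g. a patch of the noisy image)
    d  : desired scalar output at the current pixel
    mu : step-size (a TD-VSS variant would vary this over time)
    """
    y = np.sum(W * X)                                 # filter output
    e = d - y                                         # a-priori error
    W = W + (mu / (eps + np.sum(X * X))) * e * X      # normalized update
    return W, e
```

Repeating this step across the image drives the error down; the VSS, SM, and SPU variants respectively change how mu is chosen, when updates are skipped, and which coefficients are updated.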
S. Mohammadi, S. Talebi, A. Hakimi,
Volume 8, Issue 2 (6-2012)
Abstract

In this paper, we introduce two innovative image and video watermarking algorithms. The paper's main emphasis is on the use of chaotic maps to boost the algorithms' security and resistance against attacks. By encrypting the watermark information with a one-dimensional chaotic map, we make extraction of the watermark very hard for potential attackers. In another approach, we select embedding positions with a two-dimensional chaotic map, which enables us to distribute the watermark information satisfactorily throughout the host signal. This prevents concentration of the watermark data in one corner of the host signal, which effectively protects it against attacks such as cropping of the signal. The simulation results demonstrate that the proposed schemes are quite resistant to many kinds of attacks that commonly threaten watermarking algorithms.
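A minimal sketch of the general idea — deriving embedding positions from a chaotic map seeded by a secret key — is shown below. This is not the authors' scheme; the logistic map with r near 4 (chaotic regime) and the function name are assumptions:

```python
def logistic_positions(x0, n, size, r=3.99):
    """Derive n embedding positions in [0, size) from the logistic map
    x -> r*x*(1-x), iterated from the secret initial value x0.
    With r near 4 the orbit is chaotic, so a tiny change in x0 yields
    a completely different position sequence (key sensitivity)."""
    x, positions = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        positions.append(int(x * size) % size)
    return positions
```

The key sensitivity is what makes watermark extraction hard for an attacker who does not know x0.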
M. Soleimanpour-Moghadam, S. Talebi,
Volume 9, Issue 2 (6-2013)
Abstract

This paper devotes itself to the study of secret message delivery using a cover image and introduces a novel steganographic technique based on a genetic algorithm to find a near-optimum structure for the pair-wise least-significant-bit (LSB) matching scheme. A survey of the related literature shows that the LSB matching method developed by Mielikainen employs a binary function to reduce the number of changes to LSB values. This method verifiably reduces the probability of detection and also improves the visual quality of stego images. Our proposal therefore draws on Mielikainen's technique to present an enhanced dual-state scoring model, structured upon a genetic algorithm, which assesses the performance of different orders for LSB matching and searches for a near-optimum solution among all permutation orders. Experimental results confirm the superiority of the new approach over Mielikainen's pair-wise LSB matching scheme.
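A hedged sketch of the pair-wise LSB matching primitive as commonly described for Mielikainen's method, using the binary function f(a, b) = LSB(floor(a/2) + b). The GA-based pair ordering of this paper is not shown, and boundary pixels are handled only crudely:

```python
def f(a, b):
    """Mielikainen's binary function: LSB of (floor(a/2) + b)."""
    return (a // 2 + b) & 1

def embed_pair(x1, x2, m1, m2):
    """Embed message bits (m1, m2) into pixel pair (x1, x2),
    changing at most one of the two pixels by +/-1.
    (Sketch only: assumes 0 < x1 < 255 to avoid range clipping.)"""
    if (x1 & 1) == m1:
        if f(x1, x2) != m2:
            x2 += 1 if x2 < 255 else -1       # flip f by moving x2
    else:
        # moving x1 down or up flips f in opposite ways
        x1 = x1 - 1 if f(x1 - 1, x2) == m2 else x1 + 1
    return x1, x2

def extract_pair(y1, y2):
    """Recover the two message bits from a stego pixel pair."""
    return y1 & 1, f(y1, y2)
```

Two bits are carried per pair with at most one unit change, which is why the method lowers detectability relative to independent LSB matching.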
S. Mozaffari, A. A. Hajian Nezhad,
Volume 10, Issue 3 (9-2014)
Abstract

One of the open problems in OCR systems is the discrimination of fonts in machine-printed document images; solving it improves the performance of general OCR systems. The methods proposed in this paper are based on various fractal dimensions for font discrimination. First, some predefined fractal dimensions were combined with directional methods to enhance font differentiation. Then, a novel fractal dimension was introduced in this paper for the first time. Our feature extraction methods, which treat font recognition as texture identification, are independent of document content. Experimental results on different pages written in several font types show that fractal geometry can overcome the complexities of the font recognition problem.
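As a hedged illustration of the kind of feature involved (the classical box-counting dimension, not the paper's novel fractal dimension), one could estimate a fractal dimension of a binarized text patch like this:

```python
import numpy as np

def box_counting_dimension(binary_img, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting fractal dimension of a binary image:
    count boxes of side s that contain foreground pixels, then fit
    the slope of log N(s) versus log s."""
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        H, W = h - h % s, w - w % s                  # crop to multiple of s
        blocks = binary_img[:H, :W].reshape(H // s, s, W // s, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled region gives a dimension near 2 and a thin stroke near 1; font-dependent stroke patterns fall in between, which is what makes such measures usable as texture features.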
S. M. Marvasti Zadeh, H. Ghanei Yakhdan, Sh. Kasaei,
Volume 10, Issue 3 (9-2014)
Abstract

Sending compressed video data over error-prone environments (like the Internet and wireless networks) can cause data degradation. Error concealment techniques try to conceal errors in the received data at the decoder side. In this paper, an adaptive boundary matching algorithm is presented for recovering damaged motion vectors (MVs). This algorithm adaptively uses an outer boundary matching or directional temporal boundary matching method to compare every boundary of candidate macroblocks (MBs). It gives a specific weight according to the accuracy of each boundary of the damaged MB. Moreover, if an adjacent MB has already been concealed, different weights are given to its boundaries. Finally, the MV with the minimum adaptive boundary distortion is selected as the MV of the damaged MB. Experimental results show that the proposed algorithm can improve both the objective and subjective quality of reconstructed frames without considerable computational complexity. The average PSNR in some frames of the test sequences increases by about 4.59, 4.44, 3.57, and 2.98 dB compared to the classic boundary matching, directional boundary matching, directional temporal boundary matching, and outer boundary matching algorithms, respectively.
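A minimal sketch of plain outer boundary matching — one of the baselines, not the adaptive weighting of the proposed algorithm — using only the top and left outer boundaries for brevity:

```python
import numpy as np

def outer_boundary_match(cur, ref, r, c, candidates, bs=8):
    """Select the motion vector for a damaged bs x bs block at (r, c)
    in `cur` by comparing its outer boundary pixels (row above, column
    to the left) with the corresponding outer boundary of each
    candidate block in the reference frame `ref`; the MV with minimal
    distortion wins."""
    best_mv, best_cost = None, float("inf")
    for dy, dx in candidates:
        rr, cc = r + dy, c + dx
        if not (1 <= rr <= ref.shape[0] - bs and 1 <= cc <= ref.shape[1] - bs):
            continue                                   # candidate out of range
        cost = np.abs(cur[r - 1, c:c + bs] - ref[rr - 1, cc:cc + bs]).sum()
        cost += np.abs(cur[r:r + bs, c - 1] - ref[rr:rr + bs, cc - 1]).sum()
        if cost < best_cost:
            best_mv, best_cost = (dy, dx), cost
    return best_mv
```

The paper's contribution is to weight each boundary's contribution to `cost` adaptively, according to its estimated reliability.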
M. H Shakoor, F. Tajeripour,
Volume 11, Issue 3 (9-2015)
Abstract

In this paper, a special preprocessing operation (filter) is proposed to decrease the effects of noise on textures. This filter uses the average of circular neighbor points (C-mean) to reduce the noise effect. Comparing this filter with other average filters, such as the square mean filter and the square median filter, indicates that it provides more noise reduction and increases the classification accuracy. After applying the filter to noisy textures, some Local Binary Pattern (LBP) variants are used for feature extraction. The implementation on noisy textures from the Outex, UIUC, and CUReT datasets shows that using the proposed filter increases the classification accuracy significantly. Furthermore, a simple new technique is proposed that noticeably increases the speed of the C-mean filter.
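A rough sketch of a circular-neighborhood mean filter, assuming integer offsets on the circle; the paper's exact sampling (and its speed-up technique) is not reproduced:

```python
import numpy as np

def cmean_filter(img, radius=1, n_points=8):
    """Replace each pixel by the mean of n_points samples taken on a
    circle of the given radius around it (the center pixel itself is
    excluded), in the spirit of a C-mean preprocessing filter."""
    h, w = img.shape
    offs = []
    for k in range(n_points):
        ang = 2 * np.pi * k / n_points
        offs.append((int(round(radius * np.sin(ang))),
                     int(round(radius * np.cos(ang)))))
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy, dx in offs:
        out += padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    return out / n_points
```

Because the noisy center pixel is excluded from its own average, an isolated impulse is strongly attenuated while its neighbors are only mildly affected.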



E. Ehsaeyan,
Volume 12, Issue 1 (3-2016)
Abstract

The use of wavelets in denoising has the advantage of representing details well. However, edges are not so well preserved. The total variation technique has advantages over simple denoising techniques such as linear smoothing or median filtering, which reduce noise but at the same time smooth away edges to a greater or lesser degree. In this paper, an efficient denoising method based on the Total Variation (TV) model and the Dual-Tree Complex Wavelet Transform (DTCWT) is proposed to incorporate both properties. In our method, TV is employed to refine low-passed coefficients, and the DTCWT is used to shrink high-passed noisy coefficients to achieve more accurate image recovery. The efficiency of our approach is first analyzed by comparing the results with well-known methods such as ProbShrink, BLS-GSM, SURE-bivariate, NL-Means, and the TV model. Second, it is compared to some recently reported denoising methods. Experimental results show that the proposed method outperforms steerable pyramid denoising by 8.5% in terms of PSNR and 17.5% in terms of SSIM for standard images. The obtained results convince us that the proposed scheme provides better noise suppression than reported state-of-the-art methods.


E. Ehsaeyan,
Volume 12, Issue 2 (6-2016)
Abstract

Traditional noise removal methods like Non-Local Means create spurious boundaries inside regular zones. VisuShrink removes too many coefficients and yields recovered images that are overly smoothed. The BayesShrink method preserves sharp features, but its PSNR (Peak Signal-to-Noise Ratio) is considerably low. BLS-GSM generates some discontinuous information during denoising and destroys the flatness of homogeneous areas. Wavelets are not very effective in dealing with multidimensional signals containing distributed discontinuities such as edges. This paper develops an effective shearlet-based denoising method with a strong ability to localize distributed discontinuities in order to overcome this limitation. The approach introduced here makes two major contributions: (a) the shearlet transform is designed to produce more directional subbands, which helps capture the anisotropic information of the image; (b) the coefficients are divided into low-frequency and high-frequency subbands; the low-frequency band is then refined by a Wiener filter and the high-pass bands are denoised via the NeighShrink model. Our framework outperforms wavelet transform denoising by 7.34% in terms of PSNR and 13.42% in terms of SSIM (Structural Similarity Index) for the 'Lena' image. Our results on standard images show the good performance of this algorithm and prove that the proposed algorithm is robust to noise.
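A hedged sketch of the classical NeighShrink rule used for the high-pass bands — each coefficient is shrunk by a factor based on the energy of its local neighborhood against the universal threshold lambda = sigma*sqrt(2 ln N). The shearlet decomposition and Wiener refinement of the low band are not shown:

```python
import numpy as np

def neighshrink(coeffs, sigma, win=3):
    """Shrink each detail coefficient of a 2-D subband by
    max(0, 1 - lambda^2 / S^2), where S^2 is the energy of its
    win x win neighborhood and lambda is the universal threshold."""
    lam2 = 2.0 * sigma ** 2 * np.log(coeffs.size)
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.zeros_like(coeffs, dtype=float)
    h, w = coeffs.shape
    for i in range(h):
        for j in range(w):
            s2 = np.sum(padded[i:i + win, j:j + win] ** 2)
            factor = max(0.0, 1.0 - lam2 / s2) if s2 > 0 else 0.0
            out[i, j] = coeffs[i, j] * factor
    return out
```

Coefficients in weak (noise-only) neighborhoods are killed entirely, while coefficients supported by strong neighbors (edges) pass nearly unchanged.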


E. Ehsaeyan,
Volume 13, Issue 3 (9-2017)
Abstract

Image denoising as a pre-processing stage is used to preserve details, edges, and global contrast without blurring the corrupted image. Among state-of-the-art algorithms, block shrinkage denoising is an effective and compatible method to suppress additive white Gaussian noise (AWGN). The traditional NeighShrink algorithm can remove Gaussian noise significantly but loses edge information in the process. To overcome this drawback, this paper develops an improved shrinkage algorithm in the wavelet space based on NeighSURE Shrink. We establish a novel function to shrink neighbor coefficients and minimize Stein's Unbiased Risk Estimate (SURE). Some regularization parameters are employed to form a flexible threshold, which can be adjusted via a genetic algorithm (GA) as an optimization method with the SURE fitness function. The proposed function is verified to be competitive with, or better than, other shrinkage algorithms such as OracleShrink, BayesShrink, BiShrink, ProbShrink, and SURE Bivariate Shrink in visual quality measurements. Overall, the corrected NeighShrink algorithm improves the PSNR values of denoised images by about 2 dB.
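A hedged sketch of the SURE criterion for plain soft thresholding with unit noise variance, SURE(t) = n - 2*#{i : |x_i| <= t} + sum_i min(|x_i|, t)^2; the paper's neighborhood shrinkage function and GA-based parameter search are not reproduced:

```python
import numpy as np

def sure_soft_threshold(x, thresholds):
    """Return the candidate threshold minimizing Stein's Unbiased Risk
    Estimate for soft thresholding of coefficients x (unit noise
    variance assumed)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    best_t, best_risk = None, np.inf
    for t in thresholds:
        risk = (n
                - 2 * np.sum(np.abs(x) <= t)
                + np.sum(np.minimum(np.abs(x), t) ** 2))
        if risk < best_risk:
            best_t, best_risk = t, risk
    return best_t
```

SURE estimates the mean-squared error without access to the clean signal, which is why it can serve directly as a fitness function for a GA.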

A. Sadr, N. Orouji,
Volume 15, Issue 2 (6-2019)
Abstract

Clifford Algebra (CA) is an effective substitute for classic algebra as the modern generation of mathematics. However, the massive computational load of CA-based algorithms hindered their practical usage in past decades. Nowadays, owing to significant developments in computational architectures and systems, the CA framework plays a vital role in the intuitive description of many scientific issues. The geometric product is the most important CA operator and has created a novel perspective on image processing problems. In this work, the geometric product and its properties are discussed precisely, and it is used for image partitioning as a straightforward instance. Efficient implementation of CA operators needs a specialized structure; therefore, a hardware architecture is proposed that achieves a 25x speed-up in comparison to the software approach.
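For readers unfamiliar with the operator, here is a minimal sketch of the geometric product of two vectors in the plane algebra Cl(2), where ab = a.b + a^b (a scalar plus a bivector part); this is textbook material, not the paper's hardware formulation:

```python
def geometric_product_2d(a, b):
    """Geometric product of two 2-D vectors in Cl(2), returned as
    (scalar part = dot product, bivector e12 part = wedge product)."""
    dot = a[0] * b[0] + a[1] * b[1]
    wedge = a[0] * b[1] - a[1] * b[0]
    return dot, wedge
```

The symmetric part encodes lengths and angles while the antisymmetric part encodes oriented area, which is what gives the product its geometric expressiveness.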

A. Amiri, S. Mirzakuchaki,
Volume 16, Issue 3 (9-2020)
Abstract

The use of watermarking has increased dramatically in recent years on the Internet and in digital media. Watermarking is one of the powerful tools for protecting copyright, and local image features have been widely used in feature-point-based watermarking techniques. In various papers, feature invariance has been exploited to obtain robustness against attacks. This research likewise builds on the stability of local features, in the manner of second-generation watermarking, to achieve robustness against attacks. In the proposed algorithm, feature points are first identified by the proposed extraction function and by the Harris and SURF algorithms. An optimal selection process is then formulated as a knapsack problem, whose solution selects non-overlapping areas, as these are more robust for embedding watermark bits. The results are compared with each of the mentioned feature extraction algorithms, and finally the OPAP algorithm is used to increase the PSNR. The evaluation of the results is based on most of the StirMark benchmark criteria.
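The selection step can be illustrated with the standard 0/1 knapsack dynamic program; how the paper defines the values (robustness scores) and weights of candidate regions is not specified here, so those are assumptions of this sketch:

```python
def knapsack_select(values, weights, capacity):
    """0/1 knapsack DP: choose a subset of candidate regions with
    maximum total score subject to a capacity budget. Returns the
    best total value and the chosen indices."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip item i-1
            if weights[i - 1] <= c:                      # or take it
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    chosen, c = [], capacity                             # backtrack choices
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], chosen[::-1]
```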

A. Fattahi, S. Emadi,
Volume 16, Issue 4 (12-2020)
Abstract

Increased popularity of digital media and image editing software has led to the spread of multimedia content forgery for various purposes. Undoubtedly, law and forensic medicine experts require trustworthy, non-forged images to enforce rights. Copy-move forgery is the most common type of manipulation of digital images; it is used to hide an area of the image or to repeat a portion within the same image. In this paper, a method is presented for detecting copy-move forgery using the Scale-Invariant Feature Transform (SIFT) algorithm. The Spearman correlation and the Ward clustering algorithm are used to measure the similarity between key-points and to increase the accuracy of forgery detection. This method is invariant to changes such as rotation, scale change, deformation, and illumination change, and it falls into the category of blind forgery detection methods. The experimental results show that, with its high resistance to apparent changes, the proposed method correctly detects 99.56 percent of the forged images in the dataset and reveals the forged areas.
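As a hedged sketch of the similarity measure, Spearman correlation is simply the Pearson correlation of the ranks of two vectors (here imagined as SIFT descriptors); this minimal version ignores tie correction:

```python
import numpy as np

def spearman(u, v):
    """Spearman rank correlation between two descriptor vectors:
    Pearson correlation of their ranks (no tie handling)."""
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    ru -= ru.mean()
    rv -= rv.mean()
    return float(ru @ rv / np.sqrt((ru @ ru) * (rv @ rv)))
```

Because it compares ranks rather than raw values, the measure is insensitive to monotone changes such as uniform illumination scaling.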

S. M. Zabihi, H. Ghanei-Yakhdan, N. Mehrshad,
Volume 16, Issue 4 (12-2020)
Abstract

In order to enhance the accuracy of motion vector (MV) estimation and also reduce error propagation during the estimation, in this paper, a new adaptive error concealment (EC) approach is proposed based on information extracted from the video scene. In this regard, the motion information of the video scene around the degraded MB is first analyzed to estimate the motion type of the degraded MB. If the neighboring MBs possess uniform motion, the degraded MB imitates the behavior of the neighboring MBs by choosing the MV of the collocated MB. Otherwise, the lost MV is estimated through the second proposed EC technique (i.e., IOBMA). In the IOBMA, unlike conventional boundary-matching-criterion-based EC techniques, not only is each boundary distortion evaluated with respect to both the luminance and chrominance components of the boundary pixels, but the total boundary distortion corresponding to each candidate MV is also calculated as the weighted average of the available boundary distortions. The simulation results indicate the superiority of the proposed EC approach over state-of-the-art EC techniques in terms of both objective and subjective quality assessments.

B. Nasersharif, N. Naderi,
Volume 17, Issue 2 (6-2021)
Abstract

Convolutional Neural Networks (CNNs) have demonstrated their performance in speech recognition systems, both for extracting features and for acoustic modeling. In addition, CNNs have been used for robust speech recognition, and competitive results have been reported. The Convolutive Bottleneck Network (CBN) is a kind of CNN that has a bottleneck layer among its fully connected layers. The bottleneck features extracted by CBNs contain discriminative and rich context information. In this paper, we discuss these bottleneck features from an information-theory viewpoint and use them as robust features for noisy speech recognition. In the proposed method, the CBN inputs are the noisy logarithms of Mel filter bank energies (LMFBs) in a number of neighboring frames, and its outputs are the corresponding phone labels. In such a system, we show that the mutual information between the bottleneck layer and the labels is higher than the mutual information between the noisy input features and the labels. Thus, the bottleneck features are a denoised, compressed form of the input features that is more representative for discriminating phone classes. Experimental results on the Aurora2 database show that bottleneck features extracted by the CBN outperform some conventional speech features as well as robust features extracted by a CNN.
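The mutual information comparison can be sketched with a plug-in estimator over discrete labels, I(X;Y) = sum p(x,y) log2 p(x,y)/(p(x)p(y)); discretization of the continuous features, which the paper would require, is assumed done:

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete
    label sequences of equal length."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * np.mean(y == yv)))
    return mi
```

Comparing I(bottleneck; labels) with I(noisy inputs; labels) in this way quantifies how much class-relevant information the bottleneck retains.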

A. Karizi, S. M. Razavi, M. Taghipour-Gorjikolaie,
Volume 18, Issue 1 (3-2022)
Abstract

There are two serious issues regarding gait recognition. The first arises when the walking direction is unknown, and the other when the appearance of the user changes for various reasons, including carrying a bag or changing clothes. In this paper, a two-step view-invariant robust system is proposed to address these issues. In the first step, the walking direction is determined using five features of the pixels of the leg region from the gait energy image (GEI). In the second step, the GEI is decomposed into rectangular sections, and the influence of changes in appearance is confined to a small number of sections that can be eliminated by masking. The system performs very well because the first step is computationally inexpensive and the second step preserves more useful information compared to other methods. In comparison with other methods, the proposed method shows better results.
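For context, the GEI itself is simply the pixel-wise mean of aligned binary silhouettes over a gait cycle; a minimal sketch (alignment and size normalization assumed already done, and the paper's sectioning/masking not shown):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image: pixel-wise mean of size-normalized, aligned
    binary silhouette frames over one gait cycle."""
    return np.mean(np.stack(silhouettes).astype(float), axis=0)
```

Pixels near 1 belong to the static body shape, while intermediate values capture the dynamics of limb motion.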

A. Ataee, S. J. Kazemitabar,
Volume 19, Issue 1 (3-2023)
Abstract

We propose a real-time Yolov5-based deep convolutional neural network for detecting ships in video. We begin with two well-known, publicly available SeaShip datasets, each having around 9,000 images. We then supplement these with our self-collected dataset containing another thirteen thousand images. These images were labeled in six classes: passenger ships, military ships, cargo ships, container ships, fishing boats, and crane ships. The results confirm that Yolov5s can classify a ship's position in real time from 135-frames-per-second video with 99% precision.

G. S. Kumar, G. Mamatha,
Volume 19, Issue 1 (3-2023)
Abstract

In today's technological environment, the Static Random Access Memory (SRAM) is one of the most vital and critical memory devices to design. In this manuscript, two kinds of 5T SRAM cells are designed using different CNTFETs, namely a Dual-Chirality Gate-All-Around (GAA) CNTFET and a Ballistic Wrap-Gate (BWG) CNTFET, to enhance the read/write assist process. The proposed Dual-Chirality GAA-CNTFET-based 5T SRAM has two cross-coupled inverters with one access transistor connected to the bit line (BL) and word line (WL) through a minimum supply voltage. In place of the cross-coupled inverter circuit, the BWG-CNTFET-based 5T SRAM cell is intended to achieve lower power and an improved read/write assist process. Also, one transistor is implemented as a low-threshold (LVT) device in the proposed BWG-CNTFET-based 5T SRAM. Thus, the two proposed kinds of 5T SRAM cells improve the read/write assist operation and reduce the leakage current and power. The simulation of the two proposed kinds of 5T SRAM cells is done with the HSPICE simulation tool, and the performance metrics are calculated. The proposed Dual-Chirality GAA-CNTFET-based 5T SRAM cell design attains 11.31% and 51.47% lower read delay, 44.44% and 26.33% lower write delay, 36.12% and 45.28% lower read power, 34.5% and 22.41% lower write power, 37.4% and 15.3% higher read SNM, and 35.8% and 12.09% higher write SNM than the double-gate carbon nanotube field-effect transistor (DG-CNTFET) and the state-of-the-art method, respectively. Similarly, the proposed BWG-CNTFET 5T SRAM cell design attains 45.53% and 38.77% lower write delay, 56.67% and 45.64% lower read delay, 58.4% and 56.75% lower read power, 49.66% and 28.56% lower write power, 35.32% and 12.7% higher read SNM, and 45.8% and 15.6% higher write SNM than the Reduced Power with Enhanced Speed (RPES) approach and the state-of-the-art method, respectively.

Priyanka Handa, Balkrishan Jindal ,
Volume 20, Issue 1 (3-2024)
Abstract

The potential adverse effects of maize leaf diseases on agricultural productivity highlight the significance of precise disease diagnosis using effective leaf segmentation techniques. In order to improve maize leaf segmentation, especially for maize leaf disease detection, a hybrid optimization method is proposed in this paper. The proposed method provides better segmentation accuracy and outperforms traditional approaches by combining enhanced Particle Swarm Optimization (PSO) with the Firefly algorithm (FFA). Extensive tests on images of maize leaves taken from the Plant Village dataset demonstrate the algorithm's superiority. Experimental results show a considerable decrease in Hausdorff distances, indicating better segmentation accuracy than conventional methods. The proposed method also performs well in terms of the Jaccard and Dice coefficients, which measure the overlap and similarity between segmented regions. The proposed hybrid optimization method contributes significantly to agricultural research and may be helpful in real scenarios. The performance of the proposed method is compared with existing techniques such as K-Means, Otsu, Canny, Fuzzy-Otsu, PSO, and Firefly, and its overall performance is satisfactory.
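The two evaluation metrics named above are straightforward to state in code; a minimal sketch for binary masks (the PSO+FFA optimizer itself is not reproduced):

```python
import numpy as np

def jaccard_dice(seg, gt):
    """Jaccard and Dice coefficients between a binary segmentation
    mask and the ground-truth mask."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    jacc = inter / union if union else 1.0
    dice = 2 * inter / (seg.sum() + gt.sum()) if (seg.sum() + gt.sum()) else 1.0
    return jacc, dice
```

Both range from 0 (no overlap) to 1 (perfect agreement); Dice weights the intersection more heavily than Jaccard.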


© 2022 by the authors. Licensee IUST, Tehran, Iran. This is an open access journal distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.