Volume 7, Number 1, 2010
Special Issue on Intelligent Systems and Robotics (pp.1-77)
This paper focuses on land resource consumption due to urban sprawl. Special attention is given to shrinking regions, characterized by economic decline, demographic change, and high unemployment rates. In these regions, vast tracts of land are abandoned and fall derelict. A geographic information system (GIS) based multi-criteria decision tool is introduced to determine the reuse potential of derelict land, to investigate possible reuse options (housing, business and trade, industry, services, tourism and leisure, and re-greening), and to visualize the best reuse options for groups of sites on a regional scale. Achievement functions for attribute data are presented to assess the best reuse options based on a multi-attribute technique. The assessment tool is applied to a model region in Germany; its application enables communities to become aware of their stock of derelict land and its reuse potential.
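The multi-attribute scoring described above can be sketched as a weighted sum of achievement-function values. The attributes, achievement functions, and weights below are purely illustrative assumptions, not the paper's actual criteria:

```python
def score_option(attrs, weights, achievement):
    # Weighted-sum multi-attribute score for one reuse option:
    # each raw attribute value is first mapped to [0, 1] by its
    # achievement function, then weighted and summed.
    return sum(weights[a] * achievement[a](attrs[a]) for a in weights)

# Hypothetical attributes for one derelict site (not the paper's data):
site = {"area_ha": 12.0, "dist_to_road_km": 0.4}
achievement = {
    "area_ha":         lambda v: min(v / 20.0, 1.0),  # larger is better
    "dist_to_road_km": lambda v: max(1.0 - v, 0.0),   # closer is better
}
weights = {"area_ha": 0.6, "dist_to_road_km": 0.4}
s = score_option(site, weights, achievement)
```

Scores computed this way can be compared across the candidate reuse options for each group of sites.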
This paper presents the discrete wavelet transform (DWT) and its inverse (IDWT) with Haar wavelets as tools to compute variable-size interpolated versions of an image at optimal computational load. As a human observer moves closer to or farther from a scene, the retinal image of the scene zooms in or out, respectively. This zooming can be modeled using variable-scale interpolation. The paper proposes a novel way of applying the DWT and IDWT in a piecewise manner, using non-uniform down- or up-sampling of the images to obtain partially sampled versions, which are then aggregated into the final variable-scale interpolated images. The non-uniform down- or up-sampling is a function of the required scale of interpolation. Appropriate zero padding makes the images suitable for the required non-uniform sampling and the subsequent interpolation to the required scale. The concept of a zeroth-level DWT is introduced, which serves as the basis for interpolating images to a size larger than the original. The main emphasis is on computing variable-size images at a lower computational load without compromising image quality. The interpolated and reconstructed images are benchmarked using statistical parameters and visual comparison. The proposed approach is found to outperform bilinear and bicubic interpolation techniques.
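The zeroth-level idea can be illustrated in its simplest 1-D, factor-of-two form: treat the signal itself as approximation coefficients, pair them with zero detail coefficients, and run the inverse Haar transform. This is a minimal sketch of that one special case, not the paper's piecewise variable-scale scheme:

```python
import numpy as np

def haar_idwt(approx, detail):
    # One-level inverse Haar DWT: reconstruct interleaved sample
    # pairs from approximation and detail coefficients.
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

def haar_upsample2(x):
    # "Zeroth-level DWT" idea: take x itself as the approximation
    # band, use zero detail coefficients, and synthesize a signal
    # of twice the length.
    a = x * np.sqrt(2)        # rescale so sample values are preserved
    d = np.zeros_like(a)
    return haar_idwt(a, d)

x = np.array([1.0, 3.0, 5.0])
y = haar_upsample2(x)
```

With zero details the Haar synthesis simply replicates each sample; the paper's non-uniform sampling generalizes this to arbitrary scales.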
Invisible watermarking methods in the frequency domain embed a small logo image inside a large original image. The original bitmap image is converted into the frequency domain to obtain the discrete cosine transform (DCT) matrices of its blocks. The bits of the logo image are embedded in random color components of the original image, and in random positions within each selected block; these positions are alternating current (AC) coefficients of the DCT matrix. The randomness comes from an RC4 pseudorandom bit generator, which determines in which color component the logo image bits are embedded. The embedded bits are hidden in random blocks of the image, chosen according to a (semi-random) function proposed in this work.
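The block-DCT embedding step can be sketched for a single bit and a single 8×8 block. The coefficient position, embedding strength, and sign-based encoding below are illustrative assumptions, and a seeded numpy generator stands in for the RC4 stream:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 4), strength=8.0):
    # Embed one watermark bit into a chosen AC coefficient of the
    # block's 2-D DCT by forcing the coefficient's sign.
    # `pos` and `strength` are illustrative, not the paper's values.
    coeffs = dctn(block.astype(float), norm='ortho')
    coeffs[pos] = strength if bit else -strength
    return idctn(coeffs, norm='ortho')

def extract_bit(block, pos=(3, 4)):
    # Recover the bit from the sign of the same AC coefficient.
    coeffs = dctn(block.astype(float), norm='ortho')
    return 1 if coeffs[pos] > 0 else 0

rng = np.random.default_rng(0)       # stand-in for the RC4 generator
block = rng.integers(0, 256, (8, 8))
marked = embed_bit(block, 1)
```

In the full scheme this would be repeated per logo bit, with the pseudorandom stream choosing the block, color component, and AC position.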
Constructing an appropriate spatially consistent measurement is the key to improving image retrieval performance. To address this problem, this paper introduces a novel image retrieval mechanism based on family filtration in the object region. First, we specify an object region by selecting a rectangle in a query image; the system returns a ranked list of images that contain the same object, retrieved from a corpus of 100 images, as the first-rank result. To further improve retrieval performance, we add an efficient spatial consistency stage, named family-based spatial consistency filtration, to re-rank the results returned by the first rank. We evaluate the retrieval system in experiments on a dataset selected from the key frames of the TREC Video Retrieval Evaluation 2005 (TRECVID2005). The experimental results show that the proposed retrieval mechanism substantially improves retrieval quality. The paper also verifies the stability of the retrieval mechanism by increasing the number of images from 100 to 2000, and demonstrates generalized retrieval with objects outside the dataset.
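The two-stage pipeline can be sketched generically: a first-pass ranking by visual-word overlap, then a spatial re-ranking of the top results. The `spatial_score` callable below is a stand-in for the family-based filtration, whose details the abstract does not give; the toy data are invented:

```python
from collections import Counter

def bow_score(query_words, doc_words):
    # First stage: rank candidates by overlapping visual-word counts.
    q, d = Counter(query_words), Counter(doc_words)
    return sum(min(q[w], d[w]) for w in q)

def rerank(query_words, candidates, spatial_score, top_k=3):
    # Second stage: re-rank the top_k first-pass results by a
    # spatial-consistency score; `spatial_score` stands in for the
    # family-based filtration described in the paper.
    first = sorted(candidates, key=lambda c: -bow_score(query_words, c[1]))
    head, tail = first[:top_k], first[top_k:]
    head.sort(key=lambda c: -spatial_score(query_words, c[1]))
    return [name for name, _ in head + tail]

query = ["a", "b", "c"]
candidates = [("img1", ["a", "b", "x"]),
              ("img2", ["a", "y", "z"]),
              ("img3", ["a", "b", "c"])]
order = rerank(query, candidates, lambda q, d: len(set(q) & set(d)))
```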
We propose a robust visual tracking framework based on a particle filter to deal with object appearance changes due to varying illumination, pose variations, and occlusions. We mainly improve the observation model and the resampling process of the particle filter. We use an on-line updated appearance model, affine transformation, and M-estimation to construct an adaptive observation model. The on-line updated appearance model partially adapts to illumination changes; affine-transformation-based similarity measurement is introduced to tackle pose variations; and M-estimation handles occluded objects when computing the observation likelihood. To take advantage of the most recent observation and produce a suboptimal Gaussian proposal distribution, we incorporate a Kalman filter into the particle filter to enhance the resampling process. To estimate the posterior probability density properly at lower computational complexity, we employ only a single Kalman filter to propagate the Gaussian distribution. Experimental results on recorded video sequences demonstrate the effectiveness and robustness of the proposed algorithm.
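The appearance model, affine similarity, M-estimation, and Kalman proposal are the paper's contributions; underneath them sits a standard particle-filter loop, which can be sketched for a 1-D state. The noise levels and Gaussian measurement model are illustrative, and the Kalman-filter proposal is omitted:

```python
import numpy as np

def particle_filter_step(particles, weights, z, motion_std=1.0,
                         obs_std=0.5, rng=None):
    # One sampling-importance-resampling (SIR) step for a 1-D state:
    # propagate with Gaussian motion noise, reweight by the Gaussian
    # observation likelihood of measurement z, then resample.
    rng = rng or np.random.default_rng()
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

rng = np.random.default_rng(0)
p = rng.normal(0.0, 5.0, 500)        # diffuse initial particle cloud
w = np.full(500, 1.0 / 500)
for z in [1.0, 1.2, 1.1]:
    p, w = particle_filter_step(p, w, z, rng=rng)
# the particle cloud concentrates near the observations
```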
Inspired by human behavior, a robot object tracking model is proposed on the basis of a visual attention mechanism, consistent with the theory of topological perception. The model integrates image-driven, bottom-up attention and object-driven, top-down attention, whereas previous attention models have mostly focused on either bottom-up or top-down attention alone. The bottom-up component segments the whole scene into the ground region and the salient regions. Guided by a top-down strategy realized as a topological graph, the object regions are separated from the salient regions; the remaining salient regions are treated as barrier regions. To evaluate the model, a mobile robot platform was developed, on which several experiments were carried out. The experimental results indicate that processing an image with a resolution of 752×480 pixels takes less than 200 ms and that the object regions are extracted intact. A comparison with an existing model shows that the proposed model has advantages in robot object tracking in terms of speed and efficiency.
This paper presents an improved support vector machine (SVM) algorithm that employs invariant-moments-based edge extraction to obtain feature attributes. A heuristic attribute reduction algorithm based on the rough-set discernibility matrix is proposed to identify and classify micro-targets. To avoid complicated calibration of the camera's intrinsic parameters, an improved Broyden's method is proposed to estimate the image Jacobian matrix; it employs a Chebyshev polynomial to construct a cost function approximating the optimal value. Finally, a visual controller is designed for a robotic micromanipulation system. Experimental results on micro-parts assembly show that the proposed methods and algorithms are effective and feasible.
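The classical rank-one Broyden update that the paper improves on can be sketched directly; the Chebyshev cost-function refinement is not reproduced here, and the dimensions and numbers are illustrative:

```python
import numpy as np

def broyden_update(J, dx, dy):
    # Rank-one Broyden update of an estimated image Jacobian J:
    # after a joint move dx produces an image-feature change dy,
    # correct J so that J @ dx reproduces dy along the dx direction.
    dx = dx.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return J + (dy - J @ dx) @ dx.T / (dx.T @ dx)

J = np.eye(2)                        # initial Jacobian guess
dx = np.array([1.0, 0.0])            # observed joint move
dy = np.array([2.0, 0.5])            # observed feature change
J2 = broyden_update(J, dx, dy)
```

Repeated during servoing, this update tracks the image Jacobian without camera calibration, which is the property the abstract exploits.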
A multi-modal action control approach is proposed for an autonomous soccer robot whose bottom-level hardware is unchangeable. Different from existing methods, the proposed approach defines actions following the perception-planning-action principle inspired by human intelligence. Characteristic extraction is used to divide the perception input into different modes. Different control modes are built by combining different control methods for the linear and angular velocities. Based on production rules, motion control is realized by connecting each perception to the corresponding control mode. Simulations and real experiments are conducted with the middle-sized robot Frontier-I, and the proposed method is compared with a proportional-integral-derivative (PID) control method to demonstrate its feasibility and performance. The results show that the multi-modal action control method enables robots to react rapidly in a dynamic environment.
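The mode-based structure can be sketched as a small rule table: percepts are classified into a mode, and each mode pairs a linear-velocity law with an angular-velocity law. The percepts, thresholds, and gains below are hypothetical, not Frontier-I's actual rules:

```python
def classify_mode(ball_dist, ball_angle):
    # Characteristic extraction reduced to two percepts; the
    # thresholds are illustrative assumptions.
    if ball_dist > 2.0:
        return "approach"
    if abs(ball_angle) > 0.3:
        return "align"
    return "dribble"

# Production rules: each mode combines a linear-velocity law with an
# angular-velocity law (hypothetical gains).
CONTROLLERS = {
    "approach": lambda d, a: (0.8, 1.5 * a),
    "align":    lambda d, a: (0.1, 2.0 * a),
    "dribble":  lambda d, a: (0.4, 0.5 * a),
}

def control(ball_dist, ball_angle):
    # Connect the perceived mode to its control law.
    mode = classify_mode(ball_dist, ball_angle)
    v, w = CONTROLLERS[mode](ball_dist, ball_angle)
    return mode, v, w
```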
In this paper, an emotional mathematical model and an affective-state probability description space for a humanoid robot are set up on the basis of the psychodynamic notion of psychological energy and a law of affective energy conservation. The emotional state transfer process and a hidden Markov chain algorithm for the stimulus-driven transition process are then studied. Simulation results show that the mathematical model is consistent with the way genuine human affective states change. Finally, gait-generation experimental results, including control-signal and electric-current tracking waveforms, are presented to demonstrate the validity of the proposed mathematical model.
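The probabilistic state-transfer idea can be illustrated with a plain Markov chain over affective states. The three states and the transition probabilities below are invented for illustration, not the paper's model:

```python
import numpy as np

# Illustrative 3-state affective transition matrix (rows sum to 1);
# states and probabilities are assumptions, not the paper's values.
STATES = ["calm", "happy", "angry"]
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.30, 0.10, 0.60]])

def step(dist):
    # Propagate the affective-state probability distribution one step.
    return dist @ P

d = np.array([1.0, 0.0, 0.0])        # start certainly "calm"
for _ in range(50):
    d = step(d)
# d approaches the chain's stationary affective distribution
```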
Binary decision diagrams (BDDs) provide a canonical representation of Boolean functions and have wide applications in the design and verification of digital systems. A new method based on cultural algorithms for minimizing the size of BDDs is presented in this paper. First, the coding of an individual representing a BDD is given and the fitness of an individual is defined; the population is built from a set of such individuals. Second, the implementation of BDD minimization with cultural algorithms, i.e., the design of the belief space and population space and of the acceptance and influence functions, is given in detail. Third, fault detection approaches using BDDs for digital circuits are studied, and a new method for detecting crosstalk faults with BDDs is presented. Experimental results on a number of digital circuits show that BDDs with a small number of nodes can be obtained by the proposed method, and that all test vectors for a fault in a digital circuit can also be produced.
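The belief-space/population-space loop of a cultural algorithm can be sketched generically on a continuous toy fitness function standing in for BDD node count; the encoding, acceptance rule, and influence rule below are simple illustrative choices, not the paper's design:

```python
import random

def cultural_minimize(fitness, dim, pop_size=30, gens=100, seed=0):
    # Minimal cultural algorithm: the population space evolves
    # candidate solutions, while the belief space keeps the
    # best-so-far exemplar (situational knowledge). The influence
    # function biases offspring toward the exemplar; the acceptance
    # function lets improved individuals update the belief space.
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    belief = min(pop, key=fitness)          # initial situational knowledge
    for _ in range(gens):
        new_pop = []
        for ind in pop:
            # influence: move each gene halfway to the exemplar, plus noise
            child = [g + 0.5 * (b - g) + rng.gauss(0, 0.05)
                     for g, b in zip(ind, belief)]
            new_pop.append(min(ind, child, key=fitness))
        pop = new_pop
        # acceptance: the generation's best may update the belief space
        best = min(pop, key=fitness)
        if fitness(best) < fitness(belief):
            belief = best
    return belief

sphere = lambda x: sum(g * g for g in x)    # toy stand-in for BDD size
sol = cultural_minimize(sphere, dim=3)
```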
The diode-clamped multi-level inverter (DCMLI) has wide application prospects in high-voltage and adjustable-speed drive systems due to its low stress on switching devices, low harmonic output, and simple structure. However, the complexity of vector selection and capacitor voltage unbalance must be addressed when direct torque control (DTC) is implemented on a DCMLI. In this paper, a fuzzy DTC system for an induction machine fed by a three-level neutral-point-clamped (NPC) inverter is proposed. By introducing fuzzy logic, optimal switching-state selection is realized through strategies that grade the errors of the stator flux linkage and torque, the neutral-point potential, and the position of the stator flux linkage. Consequently, neutral-point potential unbalance, the dv/dt of the output voltage, and the switching loss are restrained effectively, and desirable dynamic and steady-state performance of the induction machine is obtained under the DTC scheme. A design method for the fuzzy controller is introduced in detail, and the relevant simulation and experimental results verify the feasibility of the proposed control algorithm.
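The error-grading step of such a fuzzy controller can be illustrated with triangular membership functions over the torque error; the three grades and their breakpoints are illustrative assumptions, not the paper's controller design:

```python
def tri(x, a, b, c):
    # Triangular membership function: rises from a to peak b, falls to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade_torque_error(e):
    # Fuzzify the torque error into three illustrative grades;
    # breakpoints are assumptions, not the paper's values.
    return {
        "negative": tri(e, -2.0, -1.0, 0.0),
        "zero":     tri(e, -1.0,  0.0, 1.0),
        "positive": tri(e,  0.0,  1.0, 2.0),
    }

g = grade_torque_error(0.5)
```

In the full scheme, graded flux, torque, neutral-point, and sector inputs would feed a rule table that picks the inverter switching state.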
Template matching methods have been widely utilized to detect fabric defects in textile quality control. In this paper, a novel approach is proposed to design a flexible classifier that distinguishes flaws in twill fabrics by statistically learning the normal fabric texture. Statistical information about the natural, normal texture of the fabric is extracted by collecting and analyzing gray-level images. On this basis, both the judging threshold and the template are acquired and updated adaptively in real time according to the actual fabric texture, which promises greater flexibility and universality. The algorithms are evaluated on images of fault-free and faulty textile samples.
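The adaptive-threshold idea can be sketched with running gray-level statistics: the normal texture defines a mean and variance that are updated online, and a patch is flagged when it strays too far. The update rate, the k·std rule, and the numbers are illustrative assumptions:

```python
import math

def update_stats(mean, var, x, alpha=0.05):
    # Exponentially weighted update of the running texture statistics,
    # so the judging threshold tracks the normal fabric texture.
    mean = (1 - alpha) * mean + alpha * x
    var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return mean, var

def is_defect(patch_mean, mean, var, k=3.0):
    # Flag a patch whose gray-level mean falls outside mean +/- k*std.
    return abs(patch_mean - mean) > k * math.sqrt(var)

mean, var = 128.0, 4.0               # assumed initial texture statistics
for x in [127.0, 129.0, 128.0]:      # normal-texture patch means
    mean, var = update_stats(mean, var, x)
```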
In this paper, we propose a modified backoff (MB) mechanism to decrease the channel idle time in the IEEE 802.11 distributed coordination function (DCF). To the best of the authors' knowledge, no previous work enhances the performance of the DCF under an imperfect wireless channel. In a noisy channel with low signal-to-noise ratio (SNR), applying this mechanism in the DCF greatly improves throughput and lowers channel idle time. This paper presents an analytical model for the performance of IEEE 802.11 MB-DCF with nonsaturated heterogeneous traffic in the presence of transmission errors. First, we introduce the MB-DCF and compare its performance to IEEE 802.11 DCF with binary exponential backoff (BEB). The DCF with BEB suffers from more channel idle time under low SNR, whereas the MB-DCF ensures high throughput and low packet delay by reducing the channel idle time under low network traffic. We show through analysis that the proposed mechanism greatly outperforms the original IEEE 802.11 DCF under imperfect channel conditions. The effect of physical- and link-layer parameters on throughput performance is explored, and a throughput investigation of heterogeneous traffic under different radio conditions is presented.
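The baseline the MB mechanism modifies is standard binary exponential backoff, which can be sketched directly. The window limits are the common 802.11 defaults; the MB rule itself is not specified in the abstract, so only BEB is shown:

```python
import random

def beb_backoff(attempt, cw_min=16, cw_max=1024, rng=random):
    # IEEE 802.11 binary exponential backoff: the contention window
    # doubles on each failed attempt up to cw_max, and the station
    # defers a uniformly drawn number of idle slots. Long deferrals
    # under low SNR are the idle time the MB mechanism targets.
    cw = min(cw_min * (2 ** attempt), cw_max)
    return rng.randrange(cw)

slots = [beb_backoff(a, rng=random.Random(0)) for a in range(8)]
```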
Based on a model of network encoding and dynamics called the artificial genome, we propose a segmental duplication and divergence model for evolving artificial regulatory networks. We find that this class of networks shares structural properties with natural transcriptional regulatory networks; specifically, these networks can display scale-free and small-world structure. We also find that these networks have a higher probability of operating in the ordered regime and a lower probability of operating in the chaotic regime; that is, their dynamics is similar to that of natural networks. The results suggest that the structure and dynamics inherent in natural networks may be due in part to their method of generation rather than being shaped exclusively by subsequent evolution under natural selection.
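A single duplication-divergence growth step can be sketched on an undirected adjacency-set graph: copy a random node with its links, then lose each inherited link with some probability. The retention probability and seed graph are illustrative; the paper's segmental, genome-encoded version is richer:

```python
import random

def duplicate_diverge(adj, p_keep=0.6, rng=random):
    # One duplication-divergence step: duplicate a random node
    # together with its links, keeping each inherited link with
    # probability p_keep (divergence deletes the rest).
    src = rng.randrange(len(adj))
    new = len(adj)
    adj.append(set())
    for nbr in list(adj[src]):
        if rng.random() < p_keep:
            adj[new].add(nbr)
            adj[nbr].add(new)
    return adj

rng = random.Random(1)
adj = [{1}, {0}]                     # seed: two linked nodes
for _ in range(50):
    duplicate_diverge(adj, rng=rng)
```

Repeated growth of this kind is one standard way heavy-tailed degree distributions arise without selection, which is the abstract's point.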
This paper proposes an adaptive chaos quantum honey bee algorithm (CQHBA) for solving chance-constrained programming in a random fuzzy environment based on random fuzzy simulations. Random fuzzy simulation is designed to estimate the chance of a random fuzzy event and the optimistic value of a random fuzzy variable. In CQHBA, each bee carries a group of quantum bits representing a solution. Chaos optimization searches the space around the best-so-far food source. In the marriage process, random interferential discrete quantum crossover is performed between selected drones and the queen, and Gaussian quantum mutation is used to maintain the diversity of the whole population. New methods of computing quantum rotation angles are designed based on gradients. A proof of convergence for CQHBA is developed, and a theoretical analysis of its computational overhead is presented. Numerical examples demonstrate its superiority in robustness, stability, computational efficiency, success rate, and solution accuracy. CQHBA is shown to be highly robust under various conditions and capable of handling most random fuzzy programming problems with arbitrary parameter settings, variable initializations, system tolerances and confidence levels, perturbations, and noise.
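The chaos-optimization stage, searching around the best-so-far food source, can be illustrated with a logistic-map sequence mapped into a search interval. The map, interval, and toy fitness below are illustrative; the quantum encoding and marriage operators are not reproduced:

```python
def logistic_chaos(x0=0.37, n=50):
    # Logistic-map sequence at r = 4, a standard chaotic generator
    # producing values in (0, 1).
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

def chaos_search(best, radius, fitness, n=50):
    # Map chaotic samples into [best - radius, best + radius] around
    # the best-so-far solution and keep the best candidate found.
    cand = [best + radius * (2.0 * c - 1.0) for c in logistic_chaos(n=n)]
    return min(cand + [best], key=fitness)

f = lambda x: (x - 1.0) ** 2         # toy 1-D fitness
x = chaos_search(3.0, 2.5, f)
```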
This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each satisfying the condition that the number of distinct decimal values (d) of its n-bit characters is at most 256. The new scheme is therefore referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The scheme was used to compress a number of text files from standard corpora. The results demonstrate that ACW(n, s) achieves a higher compression ratio than many widely used compression algorithms and is competitive with state-of-the-art compression tools.
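The subsequence-splitting condition can be sketched directly: scan the n-bit characters and start a new subsequence whenever admitting the next character would exceed the allowed number of distinct values. This greedy split is one plausible reading of the condition, not necessarily the paper's exact procedure:

```python
def split_acw(bits, n, max_symbols=256):
    # Split a binary string into subsequences of whole n-bit characters
    # such that each subsequence contains at most max_symbols distinct
    # n-bit values -- the ACW(n, s) condition. Trailing bits that do
    # not fill a character are dropped in this sketch.
    chars = [bits[i:i + n] for i in range(0, len(bits) - len(bits) % n, n)]
    subs, cur, seen = [], [], set()
    for c in chars:
        if c not in seen and len(seen) == max_symbols:
            subs.append(cur)          # start a new subsequence
            cur, seen = [], set()
        seen.add(c)
        cur.append(c)
    if cur:
        subs.append(cur)
    return subs

# tiny demonstration with a 2-symbol limit instead of 256
subs = split_acw("00011011", 2, max_symbols=2)
```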
This paper studies an economic lot-sizing problem with perishable inventory and general economies-of-scale cost functions. For the case with backlogging allowed, a mathematical model is formulated and several properties of the optimal solutions are explored. With the help of these optimality properties, a polynomial-time approximation algorithm is developed by a new method: a shift technique is used to obtain a feasible solution of a subproblem, and the optimal solution of the subproblem is taken as an approximate solution of the original problem. The worst-case performance ratio of the approximation algorithm is proven to be (4√2 + 5)/7. Finally, an instance is given to illustrate that the bound is tight.