Search results for: Design decomposition
4992 Pathway to Reduce Industrial Energy Intensity for Energy Conservation at Chinese Provincial Level
Authors: Shengman Zhao, Yang Yu, Shenghui Cui
Abstract:
Using the logarithmic mean Divisia decomposition technique, this paper analyzes the change in the industrial energy intensity of Fujian Province in China, based on data sets of added value and energy consumption for 35 selected industrial sub-sectors from 1999 to 2009. The change in industrial energy intensity is decomposed into an intensity effect and a structure effect. Results show that the industrial energy intensity of Fujian Province achieved a reduction of 51% over the past ten years. The structural change, a shift in the mix of industrial sub-sectors, made an overwhelming contribution to the reduction. The impact of improvements in energy efficiency was relatively small. However, the aggregate industrial energy intensity was very sensitive to changes in both the energy intensity and the production share of energy-intensive sub-sectors, such as the production and supply of electric power, steam and hot water. A pathway to reduce industrial energy intensity for energy conservation in Fujian Province is proposed at the end.
Keywords: Decomposition analysis, energy intensity, Fujian Province, industry.
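The additive LMDI decomposition described above can be illustrated with a short numerical sketch. The sub-sector figures below are made up, and the formulas follow the standard additive LMDI form with logarithmic-mean weights, which may differ in detail from the exact variant used by the authors.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b), the weight used in additive LMDI."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / np.log(a / b))

# Made-up data for three sub-sectors: added value V and energy use E in base year 0 and year T.
V0, E0 = np.array([120.0, 80.0, 200.0]), np.array([60.0, 10.0, 30.0])
VT, ET = np.array([150.0, 160.0, 260.0]), np.array([55.0, 14.0, 30.0])

I0, IT = E0 / V0, ET / VT                      # sub-sector energy intensities
S0, ST = V0 / V0.sum(), VT / VT.sum()          # production shares (structure)
agg0, aggT = (S0 * I0).sum(), (ST * IT).sum()  # aggregate industrial energy intensity

w = logmean(ST * IT, S0 * I0)                  # LMDI weight per sub-sector
intensity_effect = (w * np.log(IT / I0)).sum()
structure_effect = (w * np.log(ST / S0)).sum()

print(f"total change     : {aggT - agg0:+.4f}")
print(f"intensity effect : {intensity_effect:+.4f}")
print(f"structure effect : {structure_effect:+.4f}")  # the two effects sum to the total change
```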
4991 Simulation-Based Optimization of a Non-Uniform Piezoelectric Energy Harvester with Stack Boundary
Authors: Alireza Keshmiri, Shahriar Bagheri, Nan Wu
Abstract:
This research presents an analytical model, based on the Adomian decomposition method, for the development of an energy harvester with piezoelectric rings stacked at the boundary of the structure. The model is applied to geometrically non-uniform beams to derive the steady-state dynamic response of the structure subjected to base motion excitation and efficiently harvest the resulting vibrational energy. The in-plane polarization of the piezoelectric rings is employed to enhance the electrical power output. A parametric study of the proposed energy harvester with various design parameters is carried out to prepare the dataset required for optimization. Finally, a simulation-based optimization technique is used to find the optimum structural design with maximum efficiency. To solve the optimization problem, an artificial neural network is first trained to replace the simulation model, and then a genetic algorithm is employed to find the optimized design variables. Higher geometrical non-uniformity and a greater beam length lower the structure's natural frequency and generate a larger power output.
Keywords: Piezoelectricity, energy harvesting, simulation-based optimization, artificial neural network, genetic algorithm.
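The surrogate-plus-genetic-algorithm step described above can be sketched as follows. Everything here is illustrative: the "simulation" is a made-up two-variable function standing in for the analytical harvester model, and the network size, population size and GA operators are arbitrary choices, not the authors'.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulate(x):
    """Placeholder for the analytical harvester model: power as a made-up function of two design variables."""
    return np.sin(3 * x[:, 0]) * x[:, 1] - 0.5 * x[:, 1] ** 2

# Step 1: build a dataset from the "simulation" and train an MLP surrogate on it.
X_train = rng.uniform(0, 1, size=(300, 2))
y_train = simulate(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_train, y_train)

# Step 2: a minimal genetic algorithm searching the design space on the surrogate.
pop = rng.uniform(0, 1, size=(40, 2))
for generation in range(50):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]                 # truncation selection: keep the best half
    mates = parents[rng.integers(0, 20, size=(40, 2))]       # pick random parent pairs
    children = 0.5 * (mates[:, 0] + mates[:, 1])             # blend crossover
    children += rng.normal(0.0, 0.05, size=children.shape)   # Gaussian mutation
    pop = np.clip(children, 0, 1)

best = pop[np.argmax(surrogate.predict(pop))]
print("estimated optimum design variables:", best)
```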
4990 A New Method for Image Classification Based on Multi-level Neural Networks
Authors: Samy Sadek, Ayoub Al-Hamadi, Bernd Michaelis, Usama Sayed
Abstract:
In this paper, we propose a supervised method for color image classification based on a multilevel sigmoidal neural network (MSNN) model. In this method, images are classified into five categories, i.e., "Car", "Building", "Mountain", "Farm" and "Coast". This classification is performed without any segmentation processes. To verify the learning capabilities of the proposed method, we compare our MSNN model with the traditional sigmoidal neural network (SNN) model. The results of the comparison show that the MSNN model performs better than the traditional SNN model in terms of training run time and classification rate. Both color moments and a multi-level wavelet decomposition technique are used to extract features from the images. The proposed method has been tested on a variety of real and synthetic images.
Keywords: Image classification, multi-level neural networks, feature extraction, wavelet decomposition.
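As an illustration of the color-moment part of the feature extraction mentioned above, the following sketch computes the first three moments (mean, standard deviation, skewness) per channel; which moments and color space the authors actually use is not stated, so this is only an assumption.

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, standard deviation, skewness) per channel of an H x W x 3 image."""
    feats = []
    for c in range(image.shape[2]):
        ch = image[..., c].astype(float).ravel()
        mu, sigma = ch.mean(), ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())   # cube root keeps the sign and the original scale
        feats.extend([mu, sigma, skew])
    return np.array(feats)

img = np.random.randint(0, 256, size=(64, 64, 3))   # stand-in for a real image
print(color_moments(img))                            # 9-dimensional vector to feed the neural network
```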
4989 Tracking Objects in Color Image Sequences: Application to Football Images
Authors: Mourad Moussa, Ali Douik, Hassani Messaoud
Abstract:
In this paper, we present a comparative study of two computer vision systems for object recognition and tracking. The two algorithms describe different approaches based on regions, i.e., sets of pixels that parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used, and the overlap between cluster distributions is minimized by the use of a suitable color space (other than the RGB one). The first technique takes into account a priori probabilities governing the computation of the various clusters used to track objects. A Parzen kernel method is described that allows the players to be identified in each frame, and we also show the importance of determining the standard deviation of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
Keywords: Image segmentation, objects tracking, Parzen window, singular value decomposition, target recognition.
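A minimal sketch of region matching with the Mahalanobis distance is given below. It replaces the SVD-based correspondence computation of the abstract with a simple greedy assignment that still respects the proximity and exclusion principles; the descriptors and covariance are hypothetical.

```python
import numpy as np

def mahalanobis_matches(desc_prev, desc_curr, cov, max_dist=3.0):
    """Greedy one-to-one matching of region descriptors between two consecutive frames.

    desc_prev (n, d) and desc_curr (m, d) are region descriptors; cov is a (d, d) descriptor
    covariance. max_dist enforces proximity, the one-to-one constraint enforces exclusion.
    """
    cov_inv = np.linalg.pinv(cov)
    diff = desc_prev[:, None, :] - desc_curr[None, :, :]
    d2 = np.einsum('nmd,de,nme->nm', diff, cov_inv, diff)   # squared Mahalanobis distances
    matches, used = [], set()
    for i in np.argsort(d2.min(axis=1)):                    # closest candidate pairs first
        j = int(np.argmin(d2[i]))
        if j not in used and d2[i, j] <= max_dist ** 2:
            matches.append((i, j))
            used.add(j)
    return matches

prev, curr = np.random.rand(5, 4), np.random.rand(6, 4)     # hypothetical region descriptors
print(mahalanobis_matches(prev, curr, cov=np.cov(np.vstack([prev, curr]).T)))
```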
4988 Source of Oseltamivir Resistance Due to R152K Mutation of Influenza B Virus Neuraminidase: Molecular Modeling
Authors: J. Tengrang, T. Rungrotmongkol, S. Hannongbua
Abstract:
Every 2-3 years the influenza B virus causes epidemics. Neuraminidase (NA) is an important target for influenza drug design. Although oseltamivir, an oral neuraminidase inhibitor, has shown good inhibitory efficiency against the wild-type influenza B virus, lower susceptibility due to the R152K mutation has been reported. A better understanding of oseltamivir efficiency against the wild-type NA and of resistance in the R152K mutant could be useful for rational drug design. Here, two complexes of the wild-type and R152K NAs with oseltamivir bound were studied using molecular dynamics (MD) simulations. Based on 5-ns MD simulations, the loss of a notable hydrogen bond and the decrease in the per-residue decomposition energy contributed to the drug by the mutated residue K152, compared with those of R152 in the wild-type, were found to be a primary source of the high level of oseltamivir resistance due to the R152K mutation.
Keywords: Influenza B neuraminidase, molecular dynamics simulation, oseltamivir resistance, R152K mutant.
4987 Analysis of the EEG Signal for a Practical Biometric System
Authors: Muhammad Kamil Abdullah, Khazaimatol S Subari, Justin Leo Cheang Loong, Nurul Nadia Ahmad
Abstract:
This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels of two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because it varies from person to person and is impossible to replicate or steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed, in 5 separate sessions conducted over a course of two weeks. Features were extracted using the wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, a neural network algorithm was used to classify the feature vectors. Results show that whether or not the subjects' eyes were open is insignificant for a 4-channel biometric system, with a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if data are acquired with the subjects' eyes open. It was observed that for a 2-channel system using only the C3 and C4 channels, a classification rate of 71% was achieved.
Keywords: Biometric, EEG, wavelet packet decomposition, neural networks.
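A rough sketch of the wavelet-packet feature extraction for one EEG channel is shown below, assuming the PyWavelets library; the wavelet ('db4'), the decomposition level and the use of node energies as features are assumptions, since the abstract does not state them.

```python
import numpy as np
import pywt

def wpd_energy_features(signal, wavelet='db4', level=4):
    """Relative energy of each terminal wavelet-packet node, used as one channel's feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode='symmetric', maxlevel=level)
    nodes = wp.get_level(level, order='natural')
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()        # normalise so vectors are comparable across trials

x = np.random.randn(256)                    # one second of a hypothetical 256 Hz EEG channel
print(wpd_energy_features(x).shape)         # 2**level = 16 features per channel
```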
4986 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. A frame is characterized uniquely by its contours, which are represented by dominant blocks. These dominant blocks are located on the contours and their nearby textures. When the video frames show a noticeable change, their dominant blocks change, and a key frame can be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular value decomposition is used to calculate the ranks of sliding windows of those matrices. Finally, the computed ranks are traced, from which the key frames of the video are extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.
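The rank-tracing step can be sketched as follows: frame feature vectors are stacked as columns, and the numerical rank of each sliding window is computed from its singular values. The window size, rank tolerance and the dominant-block feature extraction itself are placeholders, not the paper's settings.

```python
import numpy as np

def sliding_window_ranks(feature_matrix, window=5, tol=1e-2):
    """Numerical rank of each sliding window of frame feature vectors (stored as columns)."""
    d, n = feature_matrix.shape
    ranks = []
    for start in range(n - window + 1):
        s = np.linalg.svd(feature_matrix[:, start:start + window], compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))    # relative threshold on the singular values
    return np.array(ranks)

F = np.random.rand(64, 100)                          # hypothetical 64-D features for 100 frames
r = sliding_window_ranks(F)
key_frames = np.where(np.diff(r) > 0)[0] + 1         # a jump in the traced rank marks a candidate key frame
```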
4985 Parametric Optimization of Hospital Design
Authors: M. K. Holst, P. H. Kirkegaard, L. D. Christoffersen
Abstract:
This paper presents a parametric performance-based design model for optimizing hospital design. The design model operates with geometric input parameters defining the functional requirements of the hospital, and with input parameters in terms of performance objectives defining the design requirements and preferences of the hospital with respect to performance. The design model takes its point of departure in the hospital functionalities as a set of defined parameters and rules describing the design requirements and preferences.
Keywords: Architectural layout design, hospital design, parametric design, performance-based models.
4984 1-D Modeling of Hydrate Decomposition in Porous Media
Authors: F. Esmaeilzadeh, M. E. Zeighami, J. Fathi
Abstract:
This paper describes a one-dimensional numerical model for natural gas production from the dissociation of methane hydrate in a hydrate-capped gas reservoir under depressurization and thermal stimulation. Some of the hydrate reservoirs discovered overlie a free-gas layer and are known as hydrate-capped gas reservoirs. These reservoirs are thought to be the easiest, and probably the first, type of hydrate reservoirs to be produced. The mathematical equations that describe this type of reservoir include the mass balance, the heat balance and the kinetics of hydrate decomposition. These non-linear partial differential equations are solved using a fully implicit finite-difference scheme. The model considers the effects of convection and conduction heat transfer, the variation of formation porosity, the use of different equations of state such as PR and ER, and steam or hot water injection. In addition, the distributions of pressure, temperature, and gas, hydrate and water saturation in the reservoir are evaluated. It is shown that the gas production rate is a sensitive function of well pressure.
Keywords: Hydrate reservoir, numerical modeling, depressurization, thermal stimulation, gas generation.
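The abstract above refers to a fully implicit finite-difference scheme. The full multiphase hydrate model is far beyond a short example, but the sketch below shows one backward-Euler step for a plain 1-D heat-conduction equation of the kind appearing in the heat balance, with made-up reservoir and injection values.

```python
import numpy as np

def implicit_heat_step(T, dt, dx, alpha):
    """One backward-Euler (fully implicit) step of 1-D heat conduction with fixed-temperature ends."""
    n = T.size
    r = alpha * dt / dx ** 2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, :], A[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1          # Dirichlet boundaries: end temperatures stay as given
    return np.linalg.solve(A, T)

T = np.full(50, 285.0)               # made-up reservoir temperature profile (K)
T[0] = 350.0                         # made-up hot-water injection temperature at the well
for _ in range(100):
    T = implicit_heat_step(T, dt=10.0, dx=1.0, alpha=1e-2)
```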
4983 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal, helping listeners understand the information it carries. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and it reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
Keywords: Discrete wavelet transform, speech intelligibility, STOI, standard deviation.
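A per-frame variable-level DWT denoising step might look like the sketch below (assuming PyWavelets). The level-selection rule and the universal soft threshold used here are generic stand-ins and do not reproduce the authors' variance-subtraction criterion.

```python
import numpy as np
import pywt

def choose_level(frame, wavelet, max_level):
    """Assumed rule: decompose deeper while the newly exposed detail band is noise-dominant (low energy share)."""
    total = np.sum(frame.astype(float) ** 2) + 1e-12
    level = 1
    for lv in range(1, max_level + 1):
        detail = pywt.wavedec(frame, wavelet, level=lv)[1]   # coarsest detail band at this depth
        if np.sum(detail ** 2) / total < 0.1:
            level = lv
        else:
            break
    return level

def denoise_frame(frame, wavelet='db4', max_level=4):
    level = choose_level(frame, wavelet, max_level)
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise scale from the finest detail band
    thr = sigma * np.sqrt(2 * np.log(frame.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:frame.size]
```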
4982 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high additive white Gaussian noise (AWGN) environments using principles of computational auditory scene analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB signal-to-noise ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
Keywords: Time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder.
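The matching-pursuit transform to the sparse "weight space" vector can be sketched as follows; a random unit-norm dictionary stands in for the Gabor-like atoms learned by the sparse autoencoder.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy matching pursuit: repeatedly pick the atom most correlated with the current residual.

    dictionary is an (n_samples, n_dict) matrix with unit-norm columns; the returned weight vector
    is the sparse "weight space" representation, the residual is what the atoms did not explain.
    """
    residual = signal.astype(float).copy()
    weights = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        weights[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return weights, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)                  # random unit-norm atoms standing in for learned ones
x = rng.standard_normal(256)
w, r = matching_pursuit(x, D)
print(np.count_nonzero(w), np.linalg.norm(r) / np.linalg.norm(x))
```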
4981 A Neural-Network-Based Fault Diagnosis Approach for Analog Circuits by Using Wavelet Transformation and Fractal Dimension as a Preprocessor
Abstract:
This paper presents a new method of analog fault diagnosis based on back-propagation neural networks (BPNNs), using wavelet decomposition and fractal dimension as preprocessors. The proposed method has the capability to detect and identify faulty components in an analog electronic circuit with tolerance by analyzing its impulse response. Using wavelet decomposition to preprocess the impulse response drastically de-noises the inputs to the neural network. The second preprocessing step, by fractal dimension, extracts unique features, which are then fed to the neural network as inputs for further classification. A comparison of our work with [1] and [6], which also employ back-propagation (BP) neural networks, reveals that our system requires a much smaller network and performs significantly better in fault diagnosis of analog circuits due to the proposed preprocessing techniques.
Keywords: Analog circuits, fault diagnosis, tolerance, wavelet transform, fractal dimension, box dimension.
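The fractal-dimension preprocessor can be illustrated with a generic box-counting estimate of the dimension of a signal's graph; the grid sizes and the estimator itself are illustrative and not necessarily the exact box-dimension computation used in the paper.

```python
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Rough box-counting dimension of a 1-D signal's graph, used as a single scalar feature."""
    x = np.linspace(0.0, 1.0, signal.size)
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    counts = []
    for n in scales:
        ix = np.minimum((x * n).astype(int), n - 1)
        iy = np.minimum((y * n).astype(int), n - 1)
        counts.append(len(set(zip(ix.tolist(), iy.tolist()))))   # occupied boxes of size 1/n
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

print(box_counting_dimension(np.cumsum(np.random.randn(2048))))  # a rough, noisy curve gives a value above 1
```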
4980 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model
Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li
Abstract:
Compared with terrestrial networks, the traffic of a spatial information network has both self-similarity and short-correlation characteristics. By studying traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods can provide an important basis for traffic planning of a spatial information network. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long correlation and detail components with short correlation, and a time-series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. First, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing-off and cutting-off characteristics of the autocorrelation and partial autocorrelation of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component is modeled with an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method not only considers the self-similarity of a spatial information network, but also takes into account the short correlation caused by bursty network information; it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with a typical time-series model, the data predicted by the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, which makes the model more suitable for a spatial information network.
Keywords: Spatial Information Network, Traffic prediction, Wavelet decomposition, Time series model.
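A simplified version of the hybrid scheme is sketched below (assuming PyWavelets and statsmodels): the traffic is split into additive wavelet components, each component is fitted with a small ARIMA model, and the component forecasts are summed. The per-component model orders are fixed here for brevity, whereas the paper selects AR/MA/ARMA or ARIMA per component.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(x, wavelet='db4', level=3):
    """Split x into additive components, one per coefficient band (the others zeroed before reconstruction).

    Because the inverse DWT is linear, the returned components sum back to (approximately) x.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(masked, wavelet)[:len(x)])
    return comps

def hybrid_forecast(traffic, steps=10):
    """Fit one small ARIMA per component and sum the forecasts (fixed orders for brevity)."""
    forecast = np.zeros(steps)
    for comp in wavelet_components(np.asarray(traffic, float)):
        fit = ARIMA(comp, order=(1, 0, 1)).fit()
        forecast += np.asarray(fit.forecast(steps=steps))
    return forecast

t = np.arange(400)
traffic = 10 + np.sin(0.2 * t) + 0.3 * np.random.randn(400)   # synthetic stand-in for measured traffic
print(hybrid_forecast(traffic, steps=5))
```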
4979 A Study on Roles of the Community Design in Crime Prevention: Focusing on Project called Root out Crime by Design in South Korea
Authors: Miyoun Won, Youngkyung Choi
Abstract:
Until now, crime prevention in the public design area has relied largely on hardware solutions such as products or urban facilities. Meanwhile, public interest in public design has been growing, and community design, in which residents take part in shaping their own neighborhoods, has become increasingly active; crime prevention is then carried out by the citizens who formed the community. In this social context, the project is regarded as a community design practice, and the research question is how community design influences crime prevention. The purpose of this study is to propose community design as a way of preventing crime in the city. First, the definition, elements and methods of community design were identified by reviewing the theory. Then, this study analyzed a case implemented in Seoul and organized the elements and methods of community design. This study can serve as a reference for public design based on civil participation and help the community design field contribute to broader ways of solving social problems.
Keywords: Public Design, Sustainable Community Design, Crime Prevention, Participatory Design.
4978 Ozone Assisted Low Temperature Catalytic Benzene Oxidation over Al2O3, SiO2, AlOOH Supported Ni/Pd Catalysts
Authors: V. Georgiev
Abstract:
Catalytic oxidation of benzene assisted by ozone over alumina-, silica-, and boehmite-supported Ni/Pd catalysts was investigated at 353 K to assess the influence of the support on the reaction. Three bimetallic Ni/Pd nanosized samples with loadings of 4.7% Ni and 0.17% Pd supported on SiO2, AlOOH and Al2O3 were synthesized by the extractive-pyrolytic method. The phase composition was characterized by means of XRD, and the surface area and pore size were estimated using the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods. At the beginning of the reaction, the catalysts were significantly deactivated due to the accumulation of intermediates on the catalyst surface, and after 60 minutes they became stable. The Ni/Pd/AlOOH catalyst showed the highest steady-state activity in comparison with the Ni/Pd/SiO2 and Ni/Pd/Al2O3 catalysts. The activity depends on the ozone decomposition potential of the catalysts, because ozone decomposition generates active oxidizing species. The sample with the highest ozone decomposition ability, which correlated with the surface area of the support, oxidized benzene to the greatest extent.
Keywords: Ozone, catalysts, oxidation, Volatile organic compounds, VOCs.
4977 Optimum Conditions for Effective Decomposition of Toluene as VOC Gas by Pilot-Scale Regenerative Thermal Oxidizer
Authors: S. Iijima, K. Nakayama, D. Kuchar, M. Kubota, H. Matsuda
Abstract:
The regenerative thermal oxidizer (RTO) is one of the best solutions for removing volatile organic compounds (VOC) from industrial processes. In the RTO, the VOC in a raw gas are usually decomposed at 950-1300 K, and the combustion heat of the VOC is recovered by regenerative heat exchangers charged with ceramic honeycombs. Optimizing the treatment of VOC reduces the fuel added for VOC decomposition and minimizes CO2 emissions and operating cost as well. In the present work, the thermal efficiency of the RTO was investigated experimentally in a pilot-scale RTO unit using toluene as a typical representative of VOC. As a result, it was recognized that radiative heat transfer was dominant in the preheating of the raw gas when the gas flow rate was relatively low. Further, it was found that the minimum heat exchanger volume required to achieve self-combustion of toluene, without additional heating of the RTO by fuel combustion, depended on both the flow rate of the raw gas and the concentration of toluene. The thermal efficiency, calculated from the fuel consumption and the ratio of decomposed toluene, was found to have a maximum value of 0.95 at a raw gas mass flow rate of 1810 kg·h⁻¹ and a honeycomb height of 1.5 m.
Keywords: Regenerative heat exchange, self-combustion, toluene, volatile organic compounds.
4976 Labview-Based System for Fiber Links Events Detection
Authors: Bo Liu, Qingshan Kong, Weiqing Huang
Abstract:
With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has received wide attention. In this paper, a Labview-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with empirical mode decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then a method based on the Gabor representation is used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average, compared with using the wavelet threshold denoising method alone. The proposed system achieves a high score in event detection capability and accuracy. The maximum detectable fiber length of the proposed Labview-based system is 65 km.
Keywords: Empirical mode decomposition (EMD), events detection, Gabor transform, optical time domain reflectometer (OTDR), wavelet threshold denoising.
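The denoise-then-detect pipeline can be sketched as follows (assuming PyWavelets and SciPy). The EMD stage and the Gabor-representation detector of the paper are replaced here by plain wavelet soft-thresholding and a gradient/peak test, so this is only a rough stand-in.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_otdr_events(trace, wavelet='sym8', level=5, prominence=0.05):
    """Wavelet soft-threshold denoising of an OTDR trace, then event flagging at sharp local drops."""
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale from the finest detail band
    thr = sigma * np.sqrt(2 * np.log(trace.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    clean = pywt.waverec(coeffs, wavelet)[:trace.size]
    steps = -np.gradient(clean)                               # loss events appear as sudden drops
    peaks, _ = find_peaks(steps, prominence=prominence)
    return clean, peaks

z = np.linspace(0, 65, 4096)                                  # km axis of a synthetic 65 km trace
trace = 30 - 0.2 * z - 0.8 * (z > 25) - 1.2 * (z > 48) + 0.05 * np.random.randn(z.size)
clean, events = detect_otdr_events(trace)
print(z[events])                                              # should sit near the 25 km and 48 km steps
```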
4975 Performance Analysis of a Discrete-time Geo^X/G/1 Queue with Single Working Vacation
Authors: Shan Gao, Zaiming Liu
Abstract:
This paper treats a discrete-time batch-arrival queue with a single working vacation. The main purpose of this paper is to present a performance analysis of this system using the supplementary variable technique. For this purpose, we first analyze the Markov chain underlying the queueing system and obtain its ergodicity condition. Next, we present the stationary distributions of the system length as well as some performance measures at random epochs using the supplementary variable method. Thirdly, still based on the supplementary variable method, we give the probability generating function (PGF) of the number of customers at the beginning of a busy period and a stochastic decomposition formula for the PGF of the stationary system length at departure epochs. Additionally, we investigate the relation between our discrete-time system and its continuous-time counterpart. Finally, some numerical examples show the influence of the parameters on some crucial performance characteristics of the system.
Keywords: Discrete-time queue, batch arrival, working vacation, supplementary variable technique, stochastic decomposition.
4974 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform with the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms the conventional methods in terms of subjective and objective measures.
Keywords: Speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit.
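The dictionary-learning and OMP stages might be sketched with scikit-learn as below; MiniBatchDictionaryLearning stands in for K-SVD, random data stand in for speech power-spectrum frames, and the stopping residue error is a fixed fraction of the frame energy rather than the adapted, cross-correlation-derived value of the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Dictionary learning on (stand-in) clean speech power-spectrum frames; MiniBatchDictionaryLearning
# plays the role of K-SVD here.
clean_frames = np.abs(rng.standard_normal((500, 129)))        # 500 frames x 129 frequency bins
dico = MiniBatchDictionaryLearning(n_components=256, batch_size=32, random_state=0).fit(clean_frames)
D = dico.components_.T                                         # atoms as columns, shape (129, 256)

# Sparse coding of one noisy frame with OMP; the stopping residue error is a fixed fraction of the
# frame energy here, instead of the adapted value derived from the estimated cross-correlation.
noisy_frame = np.abs(rng.standard_normal(129))
omp = OrthogonalMatchingPursuit(tol=0.1 * np.sum(noisy_frame ** 2), fit_intercept=False)
omp.fit(D, noisy_frame)
denoised_spectrum = D @ omp.coef_                              # reconstructed clean power-spectrum estimate
```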
4973 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular we refer to the FastICA algorithm, which uses the kurtosis as the statistical property to be maximized. By performing a particular Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original one d, which guarantees a narrow confidence interval of such an estimator with a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori on the basis of the observations only. Extensive simulations have been done on different sets of real-world signals. They show that the achievable dimensionality reduction is very high, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits bad decomposition results if reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real-time applications.
Keywords: Independent component analysis, FastICA algorithm, higher-order statistics, Johnson-Lindenstrauss lemma.
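The reduce-then-separate idea can be sketched with scikit-learn as follows; the reduced dimension k is fixed by hand here instead of being derived from the paper's control parameter β, and the mixed signals are synthetic.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, d, k_sources = 2000, 200, 4
S = rng.laplace(size=(n, k_sources))                 # super-Gaussian (high-kurtosis) sources
X = S @ rng.standard_normal((k_sources, d))          # high-dimensional mixed observations

k = 20                                               # reduced dimension; rho = k / d = 0.1
X_red = GaussianRandomProjection(n_components=k, random_state=0).fit_transform(X)
sources_est = FastICA(n_components=k_sources, random_state=0).fit_transform(X_red)
print(sources_est.shape)                             # (2000, 4) estimated independent components
```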
4972 A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT
Authors: Say Wei Foo, Qi Dong
Abstract:
Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme which encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction essentially involves the inverse process. The watermark extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
Keywords: Image watermarking, image normalization, singular value decomposition, discrete cosine transform, robustness.
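The coefficient-relationship embedding at the core of the scheme can be sketched as below. The normalization, block-SVD and adaptive-mask stages are assumed to have already produced a block of first singular values; the coefficient indices and the separation margin are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bit(sv_block, bit, i=5, j=6, margin=2.0):
    """Embed one bit in a block of first singular values by ordering two of its DCT coefficients."""
    c = dct(sv_block.astype(float), norm='ortho')
    lo, hi = min(c[i], c[j]), max(c[i], c[j])
    if bit == 1:
        c[i], c[j] = hi + margin / 2, lo - margin / 2   # enforce c[i] > c[j] for a '1'
    else:
        c[i], c[j] = lo - margin / 2, hi + margin / 2   # enforce c[i] < c[j] for a '0'
    return idct(c, norm='ortho')

def extract_bit(sv_block, i=5, j=6):
    c = dct(sv_block.astype(float), norm='ortho')
    return int(c[i] > c[j])

sv = np.sort(np.random.rand(16))[::-1] * 50             # hypothetical block of first singular values
print(extract_bit(embed_bit(sv, 1)), extract_bit(embed_bit(sv, 0)))   # -> 1 0
```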
4971 Cellulolytic Microbial Activator Influence on Decomposition of Rubber Factory Waste Composting
Authors: Thaniya Kaosol, Sirinthrar Wandee
Abstract:
In this research, an aerobic composting method is studied to reuse organic rubber factory waste as soil fertilizer and to study the effect of a cellulolytic microbial activator (CMA) on the composting of rubber factory waste. The performance of the composting process was monitored in terms of the carbon and organic matter decomposition rate, temperature and moisture content. The results indicate that the rubber factory waste is better composted with water hyacinth and sludge than composted alone. In addition, the CMA is more effective when mixed with the rubber factory waste, water hyacinth and sludge, since a good fertilizer is achieved. When CMA is added to rubber factory waste composted alone, the finished product does not meet the fertilizer standard, especially the C/N ratio. Finally, the finished products of composting rubber factory waste with water hyacinth and sludge (both with and without CMA) can be an environmentally friendly alternative to solve the disposal problems of rubber factory waste, since the C/N ratio, pH, moisture content, temperature, and nutrients of the finished products are acceptable for agricultural use.
Keywords: Composting, rubber waste, C/N ratio, sludge, cellulolytic microbial activator.
4970 Method of Intelligent Fault Diagnosis of Preload Loss for Single Nut Ball Screws through the Sensed Vibration Signals
Authors: Yi-Cheng Huang, Yan-Chen Shin
Abstract:
This paper proposes a method for diagnosing ball screw preload loss through the Hilbert-Huang transform (HHT) and multiscale entropy (MSE). The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were predesigned, manufactured, and tested experimentally. Signal patterns are discussed and revealed using empirical mode decomposition (EMD) with the Hilbert spectrum. Different preload features are extracted and discriminated using HHT. The development of irregularity in a ball screw with preload loss is determined and abstracted using MSE based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of the health of the ball screw is also possible based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. This diagnosis method thus provides an effective and convenient means of prognosis for preload loss.
Keywords: Empirical mode decomposition, Hilbert-Huang transform, multiscale entropy, preload loss, single-nut ball screw.
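The multiscale entropy computation can be sketched as follows: the signal is coarse-grained at each scale and the sample entropy of each coarse-grained series is taken. The embedding dimension, tolerance and the simplified template counting are generic choices, not necessarily those used by the authors.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with r given relative to the signal's standard deviation."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)            # matched pairs, self-matches excluded
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Coarse-grain the signal at each scale (non-overlapping means), then take its sample entropy."""
    mse = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n], float).reshape(-1, tau).mean(axis=1)
        mse.append(sample_entropy(coarse))
    return np.array(mse)

print(multiscale_entropy(np.random.randn(1200), max_scale=4))   # white noise: entropy falls with scale
```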
4969 A Study on using N-Pattern Chains of Design Patterns based on Software Quality Metrics
Authors: Niloofar Khedri, Masoud Rahgozar, MahmoudReza Hashemi
Abstract:
Design patterns describe good solutions to common and recurring problems in program design. Applying design patterns in software design and implementation has significant effects on software quality metrics such as flexibility, usability, reusability, scalability and robustness. There is no standard rule for using design patterns: in some situations a pattern applied to a specific problem itself uses another pattern. In this paper, we study the effect of using chains of patterns on software quality metrics.
Keywords: Design patterns, design patterns' relationships, software quality metrics, software engineering.
4968 Emulation Model in Architectural Education
Authors: Ö. Şenyiğit, A. Çolak
Abstract:
In architectural education, it is of great importance for students to know the parameters through which they can conduct their designs and make those designs effective. Therefore, an empirical application study was carried out through a design activity using the emulation model to support the designs and design approaches of architectural students. During the investigation period, studies were done on the basic design elements and principles of the fall semester, and the emulation model, one of the design methods that constitute the subject of the study, was structured in three phases: recognition, interpretation and application. As a result of the study, it was observed that when students were given a key method during the design process, their awareness increased and their designs improved as well.
Keywords: Basic design, design education, design methods, emulation.
4967 A TFETI Domain Decomposition Solver for Von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening
Authors: Martin Cermak, Stanislav Sysala
Abstract:
In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve this nonlinear problem in parallel on supercomputers, decreasing the solution time and making problems with millions of DOFs tractable. In our approach we consider an associated elastoplastic model with the von Mises plastic criterion and a combined linear isotropic-kinematic hardening law. This model is discretized by the implicit Euler method in time and by the finite element method in space. The resulting system of nonlinear equations has a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB.
Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.
4966 A Genetic-Neural-Network Modeling Approach for Self-Heating in GaN High Electron Mobility Transistors
Authors: Anwar Jarndal
Abstract:
In this paper, a genetic-neural-network (GNN) based large-signal model for GaN HEMTs is presented along with its parameter extraction procedure. The model is easy to construct and implement in CAD software and requires only DC and S-parameter measurements. An improved decomposition technique is used to model the self-heating effect. Two GNN models are constructed to simulate the isothermal drain current and the power dissipation, respectively. The two models are then composed to simulate the drain current. The modeling procedure was applied to a packaged GaN-on-Si HEMT, and the developed model is validated by comparing its large-signal simulation with measured data. A very good agreement between the simulation and measurement is obtained.
Keywords: GaN HEMT, computer-aided design & modeling, neural networks, genetic optimization.
4965 On the Application of Meta-Design Techniques in Hardware Design Domain
Authors: R. Damaševičius
Abstract:
System-level design based on high-level abstractions is becoming increasingly important in hardware and embedded system design. This paper analyzes meta-design techniques oriented at developing meta-programs and meta-models for well-understood domains. Meta-design techniques include meta-programming and meta-modeling. At the programming level of the design process, meta-design means developing generic components that are usable in a wider context of application than the original domain components. At the modeling level, meta-design means developing design patterns that describe general solutions to common recurring design problems, and meta-models that describe the relationships between different types of design models and abstractions. The paper describes and evaluates the implementation of meta-design in the hardware design domain using object-oriented and meta-programming techniques. The presented ideas are illustrated with a case study.
Keywords: Design patterns, meta-design, meta-modeling, meta-programming.
4964 Decomposition of Homeomorphism on Topological Spaces
Authors: Ahmet Z. Ozcelik, Serkan Narli
Abstract:
In this study, two new classes of generalized homeomorphisms are introduced, and it is shown that one of these classes has a group structure. Moreover, some properties of these two homeomorphisms are obtained.
Keywords: Generalized closed set, homeomorphism, gsg-homeomorphism, sgs-homeomorphism.
4963 An Implementation of MacMahon's Partition Analysis in Ordering the Lower Bound of Processing Elements for the Algorithm of LU Decomposition
Authors: Halil Snopce, Ilir Spahiu, Lavdrim Elmazi
Abstract:
Many scientific and engineering problems require the solution of large systems of linear equations of the form Ax = b in an effective manner. LU decomposition offers a good choice for solving this problem. Our approach is to find the lower bound on the number of processing elements needed for this purpose. Here the so-called Omega calculus is used as a computational method for solving problems via their corresponding Diophantine relation. From the corresponding algorithm, a system of linear Diophantine equalities is formed using the domain of computation, which is given by the set of lattice points inside the polyhedron. Then the Mathematica program DiophantineGF.m is run. This program calculates the generating function from which it is possible to find the number of solutions to the system of Diophantine equalities, which in fact gives the lower bound on the number of processors needed for the corresponding algorithm. A mathematical explanation of the problem is given as well.
Keywords: Generating function, lattice points in a polyhedron, lower bound of processor elements, system of Diophantine equations, Omega calculus.