Search results for: bilateral minimum filter

222 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications

Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong

Abstract:

This research aims to study a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed in the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. In analogy with the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the self-duality exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions, and we show that the solution obtained from our process is the closest to the origin. An illustrative numerical example supports our results.
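
For readers unfamiliar with the generalized projection invoked above, the standard definition (due to Alber) for a smooth, strictly convex, reflexive Banach space E with normalized duality mapping J is the following; this is background material, not a restatement of the authors' specific iteration scheme.

```latex
% Lyapunov functional and generalized projection (Alber), for a smooth,
% strictly convex, reflexive Banach space E with duality mapping J: E -> E*.
\[
  \phi(y,x) = \lVert y \rVert^{2} - 2\langle y, Jx \rangle + \lVert x \rVert^{2},
  \qquad
  \Pi_{C}(x) = \arg\min_{y \in C} \phi(y,x).
\]
% In a Hilbert space J is the identity, \phi(y,x) = ||y - x||^2, and
% \Pi_C reduces to the ordinary metric projection onto C.
```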

Keywords: σ-asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space.

221 Intelligent Temperature Controller for Water-Bath System

Authors: Om Prakash Verma, Rajesh Singla, Rajesh Kumar

Abstract:

Conventional controllers usually require prior knowledge of a mathematical model of the process, and inaccuracy in the mathematical model degrades the performance of the process, especially for non-linear and complex control problems. The process used here is the Water-Bath system, which is widely used and nonlinear to some extent. For the Water-Bath system, it is necessary to attain the desired temperature within a specified period of time while avoiding overshoot and absolute error and providing good temperature tracking capability; otherwise the process is disturbed.

To overcome the above difficulties, two intelligent controllers, Fuzzy Logic (FL) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are proposed in this paper. The fuzzy controller is designed to work with knowledge in the form of linguistic control rules. However, the translation of these linguistic rules into the framework of fuzzy set theory depends on the choice of certain parameters, for which no formal method is known. To design the ANFIS, a Fuzzy Inference System is combined with the learning capability of a Neural Network.

The analysis shows that ANFIS is best suited for adaptive temperature control of the above system. Compared to PID and FLC, ANFIS produces a stable control signal and much better temperature tracking capability, with almost zero overshoot and minimum absolute error.

Keywords: PID Controller, FLC, ANFIS, Non-Linear Control System, Water-Bath System, MATLAB-7.

220 High Accuracy ESPRIT-TLS Technique for Wind Turbine Fault Discrimination

Authors: Saad Chakkor, Mostafa Baghouri, Abderrahmane Hajraoui

Abstract:

The ESPRIT-TLS method is a good choice for high-resolution fault detection in induction machines, being highly effective in frequency and amplitude identification. However, it has a high computational complexity, which hampers its implementation in real-time fault diagnosis. To avoid this problem, a Fast-ESPRIT algorithm combining an IIR band-pass filtering technique, a decimation technique, and the original ESPRIT-TLS method is employed to accurately extract frequencies and their magnitudes from the wind-turbine stator current at lower computational cost. The proposed algorithm was applied to meet the wind-turbine machine's need for online, fast, and proactive condition monitoring. This type of remote and periodic maintenance provides an acceptable machine lifetime, minimizes downtime, and maximizes productivity. The developed technique was evaluated by computer simulations under many fault scenarios. The results prove the performance of Fast-ESPRIT, offering rapid, high-resolution harmonic recognition with minimum computation time and low memory cost.

Keywords: Spectral Estimation, ESPRIT-TLS, Real Time, Diagnosis, Wind Turbine Faults, Band-Pass Filtering, Decimation.

219 A New Brazilian Friction-Resistant Low Alloy High Strength Steel – A Life Testing Approach

Authors: D. I. De Souza, G. P. Azevedo, R. Rocha

Abstract:

In this paper, we develop a sequential life-test approach applied to a modified low-alloy high-strength steel part used in highway overpasses in Brazil. We consider two possible underlying sampling distributions: the Normal and the Inverse Weibull models. The minimum life is taken to be zero. We use the two underlying models to analyze a fatigue life-test situation and compare the results obtained from both. Since a major chemical component of this low-alloy high-strength steel part has been changed, little information is available about the possible values of the parameters of the corresponding Normal and Inverse Weibull underlying sampling distributions. To estimate the shape and scale parameters of these two sampling models, we use a maximum likelihood approach for censored failure data. We also develop a truncation mechanism for the Inverse Weibull and Normal models, providing rules to truncate a sequential life-testing situation while making one of the two possible decisions at the moment of truncation, that is, accepting or rejecting the null hypothesis H0. An example develops the proposed truncated sequential life-testing approach for the Inverse Weibull and Normal models.

Keywords: Sequential life testing, normal and inverse Weibull models, maximum likelihood approach, truncation mechanism.

218 Finding Pareto Optimal Front for the Multi-Mode Time, Cost Quality Trade-off in Project Scheduling

Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo

Abstract:

Project managers are ultimately responsible for the overall characteristics of a project; that is, they should deliver the project on time, with minimum cost and maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and managers would benefit from any scientific decision-support tool. Our work tries to determine a set of optimal solutions (rather than a single optimal solution) from which the project manager can select the preferred choice for running the project. In this paper, the problem notated in project scheduling as (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective, and the purpose is to find the Pareto optimal front of time, cost, and quality of a project (curve:quality,time,cost) whose activities belong to a start-to-finish activity relationship network (cpm) and can be done in different possible modes (mu) that are non-continuous or discrete (disc), each mode having a different cost, time, and quality. The project is constrained by a non-renewable resource, i.e., money (1,T). Because the problem is NP-hard, a meta-heuristic is developed to solve it, based on a version of the genetic algorithm specially adapted to multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.

Keywords: FastPGA, Multi-Execution Activity Mode, Pareto Optimality, Project Scheduling, Time-Cost-Quality Trade-Off.

217 The Kinetic of Biogas Production Rate from Cattle Manure in Batch Mode

Authors: Budiyono, I N. Widiasa, S. Johari, Sunarso

Abstract:

In this study, the kinetics of biogas production were studied in a series of laboratory experiments using the rumen fluid of ruminant animals as inoculum. Cattle manure as substrate was inoculated with rumen fluid in anaerobic biodigesters. Laboratory experiments using 400 ml biodigesters were performed in batch operation mode. One hundred grams of fresh cattle manure was fed to each biodigester and mixed with rumen fluid at a manure-to-rumen weight ratio of 1:1 (MR11). The operating temperatures were varied between room temperature and 38.5 °C. The cumulative volume of biogas produced was used to measure biodigester performance. The research showed that the rumen fluid inoculated into the biodigester had a significant effect on biogas production (P<0.05): the rumen fluid inoculum increased the biogas production rate and efficiency two to three times compared with the manure substrate without rumen fluid. With the rumen fluid inoculum, the kinetic parameters of biogas production, i.e., the biogas production rate constant (U), the maximum biogas production (A), and the minimum time to produce biogas (λ), were 3.89 ml/(gVS·day), 172.51 ml/gVS, and 7.25 days, respectively, while the substrate without rumen fluid gave U, A, and λ of 1.74 ml/(gVS·day), 73.81 ml/gVS, and 14.75 days, respectively. Future work will study the dynamics of biogas production when both the rumen inoculum and manure are fed in a continuous system.
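
The three kinetic parameters reported here (maximum production A, rate constant U, and lag time λ) match the modified Gompertz model commonly fitted to batch biogas data; assuming that form, the cumulative production B(t) would read:

```latex
% Modified Gompertz model for cumulative biogas production B(t) in ml/gVS,
% with A = maximum production, U = maximum production rate,
% lambda = lag time, and e = exp(1).
\[
  B(t) = A \exp\!\left\{ -\exp\!\left[ \frac{U\,e}{A}\,(\lambda - t) + 1 \right] \right\}
\]
```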

Keywords: rumen fluid, inoculum, anaerobic digestion, biogas production.

216 MaxMin Share Based Medium Access for Attaining Fairness and Channel Utilization in Mobile Adhoc Networks

Authors: P. Priakanth, P. Thangaraj

Abstract:

Due to the complex network architecture, the multihop feature of mobile ad hoc networks poses additional problems for users. When the traffic load at each node increases, the additional contention due to the traffic pattern can cause nodes close to the destination to starve nodes farther from it; moreover, the capacity of the network may be unable to satisfy the total user demand, resulting in an unfairness problem. In this paper, we propose an algorithm to compute the optimal MAC-layer bandwidth assigned to each flow in the network. The contention area of the bottleneck links determines the fair time share needed to calculate the maximum transmission rate allowed for each flow. To utilize the network resources fully, we compute two optimal rates, namely the maximum fair share and the minimum fair share. The maximum fair share is used to limit the input rate of the flows that cross the bottleneck links' contention area when flows are not allocated the optimal transmission rate, and the next-highest fair share is then calculated. Through simulation results, we show that the proposed protocol achieves improved fair share and throughput with reduced delay.
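
As an illustration of the max-min fairness principle the protocol builds on (a generic progressive-filling sketch, not the authors' MAC-layer algorithm), consider:

```python
def max_min_fair_share(demands, capacity):
    """Progressive filling: every flow gets min(demand, fair share);
    capacity left unused by underloaded flows is redistributed."""
    alloc = {}
    remaining = dict(demands)         # flows not yet satisfied
    cap = capacity
    while remaining and cap > 0:
        share = cap / len(remaining)  # equal split of leftover capacity
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:             # no flow fits: all get the equal share
            for f in remaining:
                alloc[f] = share
            return alloc
        for f, d in satisfied.items():
            alloc[f] = d              # grant the full demand
            cap -= d
            del remaining[f]
    for f in remaining:
        alloc[f] = 0.0
    return alloc

# Capacity 10 with demands 2, 4 and 10 yields allocations 2, 4 and 4.
print(max_min_fair_share({"a": 2, "b": 4, "c": 10}, 10.0))
```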

Keywords: MAC-layer, MANETs, Multihop, optimal rate, Transmission.

215 Measurement of Lead Pollution in the Air of Babylon Governorate/Iraq during Year 2010

Authors: Khalid Safaa Hashim Al Khalidy, Ali Jalil Abdul Kareem Chabuk, Majid Mohammed Ali Kadhim

Abstract:

This research studies the lead pollution in the air of Babylon governorate, which results mainly from vehicle exhausts in addition to industrial and human activities. The number of vehicles in Babylon governorate increased significantly after 2003, resulting in increased lead emissions into the air. Lead emissions were measured at seven stations distributed randomly across Babylon governorate: the Industrial (Al-Sena'ay) Quarter, 60 Street (near the Babylon sewer directorate), 40 Street (near the first intersection), Al-Hashmia city, Al-Mahaweel city, and Al-Musayab city, in addition to a station in Sayd Idris village in Abugharaq district (an agricultural station for comparison). The concentrations measured at these stations were compared with the Environmental Protection Agency (EPA) standard limit of 2 μg/m3. The results show that the average lead concentration in Babylon governorate during 2010 was 3.13 μg/m3, which is greater than the standard limit. The maximum lead concentration, 6.41 μg/m3, was recorded in the Industrial (Al-Sena'ay) Quarter during April, while the minimum concentration, 0.36 μg/m3, was recorded at the agricultural station (Abugharaq) during December.

Keywords: Lead, pollution, lead concentration

213 Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary

Authors: Erika Pigliapoco, Valerio Freschi, Alessandro Bogliolo

Abstract:

This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it uses graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and perform better than letter-based rule trees.
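
The abstract does not spell out the linear segmentation algorithm; one plausible greedy longest-match sketch over a fixed grapheme inventory (the inventory below is hypothetical) is:

```python
# Greedy longest-match segmentation of a word into graphemes. The
# multi-letter inventory is hypothetical; the paper infers its units
# from a phonetic dictionary.
GRAPHEMES = {"tch", "igh", "sh", "ch", "th", "ea", "ou"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Scan left to right, taking the longest known grapheme at each
    position, falling back to a single letter: O(len(word) * MAX_LEN)."""
    units, i = [], 0
    while i < len(word):
        for k in range(min(MAX_LEN, len(word) - i), 0, -1):
            if k == 1 or word[i:i + k] in GRAPHEMES:
                units.append(word[i:i + k])
                i += k
                break
    return units

print(segment("night"))  # ['n', 'igh', 't']
print(segment("catch"))  # ['c', 'a', 'tch']
```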

Keywords: Automatic phonetic transcription, pronunciation rules, hierarchical tree inference.

212 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic

Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam

Abstract:

In recent years, research on knowledge sources, decision support systems, data mining, and the procedure of knowledge discovery in databases has grown in importance, and each of these aspects affects the others. In this article, we merge an information source and a knowledge source to suggest a knowledge-based system, within the limits of knowledge management, that stores and retrieves knowledge to manage information and improve decision making and resources. We use data mining and the Apriori algorithm in the knowledge discovery procedure. One problem with the Apriori algorithm is that the user must specify the minimum support threshold for the regularities. Imagine a user who wants to apply the Apriori algorithm to a database with millions of transactions: the user cannot have the necessary knowledge of all existing transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to put the data into different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically.
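
To make the minimum-support issue concrete, here is a bare-bones Apriori pass with a fixed threshold (frequent itemsets only; the paper's fuzzy-logic clustering and automatic threshold suggestion are not reproduced):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return itemsets whose support (fraction of transactions that
    contain them) is >= min_support, the very parameter the paper
    argues a user cannot choose well for an unfamiliar database."""
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n
    current = {frozenset([x]) for t in transactions for x in t}
    current = {s for s in current if support(s) >= min_support}
    frequent, k = {}, 1
    while current:
        frequent.update({s: support(s) for s in current})
        # join step: combine k-itemsets into (k+1)-item candidates
        candidates = {a | b for a, b in combinations(current, 2)
                      if len(a | b) == k + 1}
        current = {c for c in candidates if support(c) >= min_support}
        k += 1
    return frequent

baskets = [frozenset(t) for t in
           [("milk", "bread"), ("milk", "eggs"), ("milk", "bread", "eggs")]]
print(apriori(baskets, min_support=0.5))
```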

Keywords: Decision support system, data mining, knowledge discovery, data discovery, fuzzy logic.

211 The Shaping of a Triangle Steel Plate into an Equilateral Vertical Steel by Finite-Element Modeling

Authors: Tsung-Chia Chen

Abstract:

The orthogonal processes that shape a triangular steel plate into an equilateral vertical steel section are examined by an incremental elasto-plastic finite-element method based on an updated Lagrangian formulation. The highly non-linear problems due to geometric changes, inelastic constitutive behavior, and boundary conditions that vary with deformation are taken into account in an incremental manner. On the contact boundary, a modified Coulomb friction model is considered. A weighting factor, r-minimum, is employed to limit the step size of each loading increment so that a linear relation holds. In particular, selective reduced integration was adopted to formulate the stiffness matrix. The simulated geometries clearly demonstrate the verticality of the process up to unloading. A series of experiments and simulations were performed to validate the formulation in the theory, leading to the development of the computer codes. The whole deformation history and the distributions of stress, strain, and thickness during the forming process were obtained by carefully considering the moving boundary condition in the finite-element method. This modeling can therefore be used to judge whether an equilateral vertical steel section can be shaped successfully. The present work may be expected to improve understanding of the formation of the equilateral vertical steel section.

Keywords: Elasto-plastic, finite element, orthogonal pressing process, vertical steel.

210 Study of Heat Transfer in the Poly Ethylene Fluidized Bed Reactor Numerically and Experimentally

Authors: Mahdi Hamzehei

Abstract:

In this research, heat transfer in a polyethylene fluidized-bed reactor without reaction was studied experimentally and computationally at different superficial gas velocities. A multifluid Eulerian computational model incorporating the kinetic theory for solid particles was developed and used to simulate heat-conducting gas-solid flows in a fluidized-bed configuration. Momentum exchange coefficients were evaluated using the Syamlal-O'Brien drag functions. Temperature distributions of the different phases in the reactor were also computed. Good agreement was found between the model predictions and the experimentally obtained data for the bed expansion ratio as well as the qualitative gas-solid flow patterns. The simulation and experimental results showed that the gas temperature decreases as it moves upward in the reactor, while the solid-particle temperature increases. The pressure drop and temperature distribution predicted by the simulations were in good agreement with the experimental measurements at superficial gas velocities higher than the minimum fluidization velocity, and the predicted time-averaged local voidage profiles were in reasonable agreement with the experimental results. The study showed that the computational model was capable of predicting the heat transfer and hydrodynamic behavior of gas-solid fluidized-bed flows with reasonable accuracy.

Keywords: Gas-solid flows, fluidized bed, Hydrodynamics, Heat transfer, Turbulence model, CFD

209 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients

Authors: Mbainaibeye Jérôme, Noureddine Ellouze

Abstract:

The wavelet transform is a very powerful tool for image compression. One of its advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign; it is generally assumed that there is no compression gain to be obtained from sign coding. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding, assuming that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5, with the same assumption for the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed, and simulations are performed on three standard grey-scale images: Lena, Barbara, and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The results obtained are compared with the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared with other codecs in the literature and proves very successful in terms of PSNR.

Keywords: Image compression, wavelet transform, sign coding, magnitude coding.

208 Comparison of Automated Zone Design Census Output Areas with Existing Output Areas in South Africa

Authors: T. Mokhele, O. Mutanga, F. Ahmed

Abstract:

South Africa is one of the few countries that have stopped using the same Enumeration Areas (EAs) for both census enumeration and dissemination. The advantage of this change is that confidentiality can be addressed in census dissemination, since the geographic unit for collection is designed mainly to ensure that it can be covered by one enumerator. The objective of this paper was to evaluate the performance of automated zone-design output areas against non-zone-design geographies, using the 2001 census data, and to some extent the 2011 census, as the main input. The Automated Zone-design Tool (AZTool) census output areas were compared with the Small Area Layers (SALs) and SubPlaces in terms of confidentiality limit, population distribution, degree of homogeneity, and shape compactness. Further, SPSS was employed to validate the AZTool output results. The results showed that the AZTool-developed output areas outperform the existing official SALs and SubPlaces with regard to minimum population threshold and population distribution, and to some extent homogeneity. It was therefore concluded that the AZTool program provides a new alternative for creating optimized census output areas for the dissemination of population census data in South Africa.

Keywords: AZTool, enumeration areas, small area layers, South Africa.

207 Long-Term Structural Behavior of Resilient Materials for Reduction of Floor Impact Sound

Authors: J. Y. Lee, J. Kim, H. J. Chang, J. M. Kim

Abstract:

The tendency toward living in apartment houses is increasing in densely populated countries. However, some residents of apartment houses are bothered by noise from the dwellings above, and to reduce noise pollution communities are increasingly imposing bylaws that cover limits on floor impact sound, minimum floor thickness, and floor soundproofing solutions. This research focused on the long-term deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program consisted of testing nine floor sound insulation specimens subjected to a sustained load for 45 days. Two main parameters were considered in the experimental investigation: three types of resilient materials and the magnitude of the load. The test results indicated that the structural behavior of the floor sound insulation systems under long-term load was quite different from that of the systems under short-term load. The loading period increased the deflection of the floor sound insulation systems, and the rate of increase of the long-term deflection of the systems with ethylene vinyl acetate was smaller than that of the systems with low-density ethylene polystyrene.

Keywords: Resilient materials, floor sound insulation systems, long-term deflection, sustained load, noise pollution.

206 Detection of Linkages Between Extreme Flow Measures and Climate Indices

Authors: Mohammed Sharif, Donald Burn

Abstract:

Large-scale climate signals and their teleconnections can influence hydro-meteorological variables on a local scale. Several extreme flow and timing measures, including high-flow and low-flow measures, from 62 hydrometric stations in Canada are investigated to detect possible linkages with several large-scale climate indices. The streamflow data used in this study are derived from the Canadian Reference Hydrometric Basin Network and are characterized by relatively pristine and stable land-use conditions, with a minimum of 40 years of record. A composite analysis approach was used to identify linkages between the extreme flow and timing measures and the climate indices. The approach involves determining the 10 highest and 10 lowest values of each climate index in the data record, and examining the extreme flow and timing measures for each station in the years associated with the 10 largest values and the years associated with the 10 smallest values. In each case, a resampling approach was applied to determine whether the 10 values of the extreme flow measures differed significantly from the series mean. Results indicate that several stations are affected by the large-scale climate indices considered in this study, and they allow the determination of any relationship between stations that exhibit a statistically significant trend and stations whose extreme measures exhibit a linkage with the climate indices.
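
A minimal version of the composite/resampling test described above, on hypothetical data (the paper uses CRHBN station records and observed climate indices):

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_test(flow, index, k=10, n_boot=5000, alpha=0.05):
    """Mean of the flow measure in the k years with the largest
    climate-index values, tested against a resampled null of
    random k-year means."""
    top_years = np.argsort(index)[-k:]           # k largest index values
    observed = flow[top_years].mean()
    null = np.array([rng.choice(flow, k, replace=False).mean()
                     for _ in range(n_boot)])    # null k-year means
    shift = abs(observed - flow.mean())
    p = np.mean(np.abs(null - flow.mean()) >= shift)
    return observed, p < alpha                   # significant linkage?

flow = rng.gamma(2.0, 50.0, size=40)   # hypothetical 40-year record
index = rng.normal(size=40)            # hypothetical climate index
print(composite_test(flow, index))
```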

Keywords: flood analysis, low-flow events, climate change, trend analysis, Canada

205 Simulation Data Management Approach for Developing Adaptronic Systems – The W-Model Methodology

Authors: Roland S. Nattermann, Reiner Anderl

Abstract:

Existing process models for the development of mechatronic systems prescribe largely parallel activities in the detailed development, carried out largely independently in the various disciplines involved. A new process model extends existing models for the development of adaptronic systems. This approach is based on intermediate integration and on abstract modeling of the adaptronic system. Based on this system model, a simulation of the global system behavior, due to external and internal factors or forces, is developed. For the intermediate integration a special data management system is used, which, according to the presented approach, has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. The concept divides the functions into six layers. In the first layer, a system model is created that decomposes the adaptronic system according to its components and the various technical disciplines; in addition, the parameters and properties of the system are modeled and linked to the requirements and the system model. The modeled parameters and properties form a network that is analyzed in the second layer. From this analysis, the adjustments to individual components necessary for specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior; this simulation is a precursor to the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and the necessary adjustments to other components are calculated. The remaining layers of the concept cover the automatic calculation of system reliability, the "normal" PDM functionality, and the integration of discipline-specific data into the system model. A prototypical implementation of such data management, with the addition of automated system development, is being implemented using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.

Keywords: Adaptronic, data management, LOEWE-Centre AdRIA.

204 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, no existing evaluation characterizes quantum secure communication (QSC) protocols in a complete, general manner. This paper addresses the lack of such an evaluation by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of realizing the QSC protocol in practice. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation for a QSC protocol is proposed as the minimum function taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
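
In symbols, with E, S, and P denoting the efficiency, security, and practicality scores and r_i the error rate induced by the i-th attack, the evaluation described above reads:

```latex
\[
  S = \min_{i} r_{i},
  \qquad
  O = \frac{E + S + P}{3},
\]
% where i runs over the measure-and-resend, intercept-and-resend,
% probe, and entanglement-swapping attacks.
```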

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

203 Transformability in Post-Earthquake Houses in Iran: with Special Focus on Lar City

Authors: M. Parva, K. Dola, F. Pour Rahimian

Abstract:

Earthquakes are considered among the most catastrophic disasters in Iran, in terms of both short-term and long-term hazards. Due to the particular financial and time constraints in Iran, quickly constructed post-earthquake houses (PEHs) do not fulfill the minimum requirements of comfortable dwellings, and people often transform PEHs after they move in. However, a lack of understanding of the process, motivation, and results of housing transformation leads to the construction of houses unsuitable for future transformation, which are eventually demolished or abandoned. This study investigated housing transformations in the natural setting of post-earthquake Lar. The paper reports the results of a survey comparing housing transformation under normal conditions with post-earthquake housing transformation, in order to reveal the factors that affect post-earthquake housing transformation in Iran. The findings support the use of a combination of "temporary" and "permanent" housing reconstruction models in Iran to provide victims with basic but permanent post-disaster dwellings. It is also suggested that needs for future transformation should be predicted and addressed during the early stages of design and development. This study contributes to both research and practice regarding post-earthquake housing reconstruction in Iran by proposing new design approaches and guidelines.

Keywords: Housing transformation, Iran, Lar, post-earthquake housing.

202 Method for Tuning Level Control Loops Based on Internal Model Control and Closed Loop Step Test Data

Authors: Arnaud Nougues

Abstract:

This paper describes a two-stage methodology derived from IMC (Internal Model Control) for tuning a PID (Proportional-Integral-Derivative) controller for levels or other integrating processes in an industrial environment. The focus is on ease of use and implementation speed, which are critical for an industrial application: tuning can be done with minimum effort and without time-consuming open-loop step tests on the plant. The first stage of the method applies to levels only: the vessel residence time is calculated from the equipment dimensions and used to derive a set of preliminary PI (Proportional-Integral) settings with IMC. The second stage, re-tuning in closed loop, applies to levels as well as other integrating processes: a tuning correction mechanism has been developed based on a series of closed-loop simulations with model errors. The tuning correction is derived from a simple closed-loop step test and the application of a generic correlation between the observed overshoot and the integral-time correction. A spin-off of the method is that an estimate of the vessel residence time (for levels) or the open-loop process gain (for other integrating processes) is obtained from the closed-loop data.
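
A sketch of the first stage only, using the textbook IMC result for a pure integrator G(s) = K/s (the paper's closed-loop correction correlation is not reproduced, and the fraction-of-range scaling is an assumption):

```python
def imc_pi_level(residence_time_s, lam_s):
    """Preliminary IMC PI settings for an integrating level loop.

    Assumes level and controller output are both in fraction-of-range
    units, so the integrating gain is K = 1/residence_time, where
    residence time = vessel volume / maximum throughput. lam_s is the
    desired closed-loop time constant (the single tuning knob).
    Standard IMC result for G(s) = K/s: Kc = 2/(K*lam), Ti = 2*lam.
    """
    K = 1.0 / residence_time_s
    Kc = 2.0 / (K * lam_s)
    Ti = 2.0 * lam_s
    return Kc, Ti

# Example: a vessel holding 600 s of throughput, lambda = 300 s.
Kc, Ti = imc_pi_level(600.0, 300.0)
print(f"Kc = {Kc:.2f}, Ti = {Ti:.0f} s")  # Kc = 4.00, Ti = 600 s
```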

Keywords: closed-loop model identification, IMC-PID tuning method, integrating process control, on-line PID tuning adaptation

201 Modelling Dengue Fever (DF) and Dengue Haemorrhagic Fever (DHF) Outbreak Using Poisson and Negative Binomial Model

Authors: W. Y. Wan Fairos, W. H. Wan Azaki, L. Mohamad Alias, Y. Bee Wah

Abstract:

Dengue fever has become a major concern for health authorities all over the world, particularly in tropical countries, which are experiencing the most worrying outbreaks of dengue fever (DF) and dengue haemorrhagic fever (DHF). DF and DHF epidemics have thus become main causes of hospital admissions and deaths in Malaysia. This paper therefore attempts to examine the environmental factors that may have influenced the recent dengue outbreak. The aim of this study is twofold: firstly, to establish a statistical model describing the relationship between the number of dengue cases and a range of explanatory variables, and secondly, to identify the lag at which each explanatory variable affects dengue incidence the most. The explanatory variables are the level of cloud cover, percentage of relative humidity, amount of rainfall, maximum temperature, minimum temperature, and wind speed. Poisson and Negative Binomial regression analyses were used in this study. The results of the analyses on 915 observations (daily data from July 2006 to December 2008) reveal that the climatic factors comprising daily temperature and wind speed were found to significantly influence the incidence of dengue fever 2 and 3 weeks after their occurrence. The effect of humidity, on the other hand, appears to be significant only after 2 weeks.
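
A minimal statsmodels sketch of the two regressions with lagged weather covariates (file and column names are hypothetical; the significant lags reported are 2 and 3 weeks):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dengue_daily.csv")     # hypothetical daily data set
df["tmax_l14"] = df["tmax"].shift(14)    # 2-week lag of max temperature
df["wind_l21"] = df["wind"].shift(21)    # 3-week lag of wind speed
df["humid_l14"] = df["humidity"].shift(14)
df = df.dropna()

X = sm.add_constant(df[["tmax_l14", "wind_l21", "humid_l14"]])
poisson_fit = sm.GLM(df["cases"], X, family=sm.families.Poisson()).fit()
# The negative binomial relaxes the Poisson mean == variance restriction,
# which overdispersed dengue counts typically violate.
negbin_fit = sm.GLM(df["cases"], X,
                    family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(poisson_fit.summary())
print(negbin_fit.summary())
```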

Keywords: Dengue Fever, Dengue Hemorrhagic Fever, Negative Binomial Regression model, Poisson Regression model.

200 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique to estimate the periodogram and frequency of a sinusoidal model for unknown noisy time-variant or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, such that Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time-observational and repeated circular patterns. A monthly average oscillating temperature series, measured in degrees Celsius (°C) from 1901 to 2014, was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a period of two minutes is required to complete a cycle of temperature change from one particular degree Celsius value to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate than that for known noisy events.
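
The underlying single-frequency model and non-informative prior, stated generically (the paper's exact parameterization and number of harmonics are not given in the abstract):

```latex
% Sinusoidal model with Gaussian noise and the non-informative prior:
\[
  y_t = A\cos(\omega t) + B\sin(\omega t) + \varepsilon_t,
  \qquad \varepsilon_t \sim \mathcal{N}(0,\sigma^2),
  \qquad \pi(A,B,\sigma) \propto \frac{1}{\sigma},
\]
% under which the marginal posterior of the frequency \omega is
% governed by the periodogram of the series.
```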

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

199 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those with lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric: the boundary penalty term in the graph cut algorithm is defined by a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, the graph is created using these weights between its nodes, and the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
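
A sketch of the boundary term as described: a Gaussian-weighted patch distance replacing the usual single-pixel intensity difference (the patch radius and sigma are hypothetical choices):

```python
import numpy as np

def patch(img, y, x, r):
    """(2r+1) x (2r+1) patch centred on (y, x), edge-padded."""
    p = np.pad(img, r, mode="edge")
    return p[y:y + 2 * r + 1, x:x + 2 * r + 1]

def boundary_weight(img, p1, p2, r=2, sigma=10.0):
    """Edge capacity between neighbouring pixels p1 and p2: large when
    the surrounding patches are similar, small across a boundary.
    Replaces the standard |I_p - I_q| pixel term of graph cuts."""
    d2 = np.mean((patch(img, *p1, r) - patch(img, *p2, r)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

img = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
print(boundary_weight(img, (10, 10), (10, 11)))
```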

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

198 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants

Authors: Malinwo Estone Ayikpa

Abstract:

Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors, and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, the integration of PV on distribution networks has increased over recent years, to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining active power generation with reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by a primal-dual interior-point method. Several simulation cases have been carried out varying the size and location of the PV systems, and the results give a detailed view of the impact of distributed PV generation on distribution systems.

Keywords: Distribution system, losses, photovoltaic generation, primal-dual interior point method, reactive power control.

197 Effect of Twin Cavities on the Axially Loaded Pile in Clay

Authors: Ali A. Al-Jazaairry, Tahsin T. Sabbagh

Abstract:

The presence of cavities in soil predictably induces ground deformation and changes in soil stress, which might influence adjacent existing pile foundations; however, the effect of twin cavities on a nearby pile needs to be understood. This research attempts to identify the behaviour of piles subjected to axial load and embedded in cavitied clayey soil. A series of finite-element models was built to investigate the performance of piled foundations located in such soils, and the validity of the numerical simulation was evaluated by comparing it with an available field test and an alternative analytical model. The study covered parameters such as the size and depth of the twin cavities, the spacing between them, and their eccentricity from the pile axis, as these affect the performance of a pile under axial load. In each case examined, a critical value was found at which the presence of the cavities had minimum impact on the behaviour of the pile. Load-displacement relationships for the parameters affecting pile behaviour are presented to provide helpful information for designing piled foundations situated near twin underground cavities. It was concluded that the presence of cavities within the soil mass reduces the ultimate capacity of the pile, and this reduction differs according to the size and location of the cavities.

Keywords: Axial load, clay, finite element, pile, twin cavities, ultimate capacity.

196 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off on reaching a certain resolution. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction down to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image, and a subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. The new model has been tested on different databases, including the ORL database, the Yale database, and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction down to an effective and suitable face image resolution, and (2) retention of appropriate DCT coefficients to achieve the best recognition results under varying image pose, intensity, and illumination level.
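
A sketch of the double reduction (the decimation factor, DCT block size, and retained-coefficient pattern are hypothetical; the paper selects them empirically per database):

```python
import numpy as np
from scipy.fft import dctn

def face_features(img, decim=2, keep=12):
    """Double dimension reduction: (1) decimate the image, then
    (2) keep a keep x keep block of low-to-mid-frequency 2D-DCT
    coefficients as the feature vector."""
    small = img[::decim, ::decim].astype(np.float64)  # naive decimation
    coeffs = dctn(small, norm="ortho")                # 2D DCT-II
    return coeffs[:keep, :keep].ravel()               # keep^2 features

img = np.random.default_rng(0).integers(0, 256, (112, 92))  # ORL image size
print(face_features(img).shape)  # (144,)
```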

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

195 Using the Nerlovian Adjustment Model to Assess the Response of Farmers to Price and Other Related Factors: Evidence from Sierra Leone Rice Cultivation

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Alfred V. Gborie

Abstract:

The goal of this study was to increase awareness of the description and assessment of rice acreage response and to offer mechanisms for agricultural policy scrutiny. The ordinary least squares (OLS) technique was utilized to determine the coefficients of the acreage response models for the rice varieties. The magnitudes of the coefficients (λ) of both the ROK lagged and NERICA lagged acreages were positive and highly significant, which indicates that the farmers' adjustment rate was very low. Regarding the lagged actual price for both the ROK and NERICA rice varieties, the short-run price elasticities were lower than the long-run ones, suggesting a long-term adjustment of the acreage under the crop.
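
The Nerlovian partial-adjustment structure behind these statements, in its usual reduced form (with the lagged-acreage coefficient written λ as in this abstract):

```latex
% Desired acreage responds to lagged price; actual acreage adjusts
% only partially toward it each season (adjustment rate delta):
\[
  A_t^{*} = \alpha + \beta P_{t-1}, \qquad
  A_t - A_{t-1} = \delta\,\bigl(A_t^{*} - A_{t-1}\bigr), \quad 0 < \delta \le 1,
\]
\[
  A_t = \alpha\delta + \beta\delta\,P_{t-1} + (1-\delta)\,A_{t-1} + u_t,
  \qquad \lambda \equiv 1 - \delta .
\]
% A large, significant lambda means a small adjustment rate delta, and
% the long-run price elasticity exceeds the short-run one by the
% factor 1/(1 - lambda).
```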

However, the apparent recommendations for policy reform are to liberalize farm-gate prices and to decrease government involvement in the agricultural sector, especially in the acquisition of agricultural inputs. Future research should center on how this might best be realized. The necessary conditions should be made available to the private sector by minimizing price volatility. In line with structural reforms, it is necessary to convey output prices to farmers with minimum distortion, and there is a need to eliminate price subsidies and controls, which generate market distortion in addition to huge financial costs.

Keywords: Acreage response, rate of adjustment, rice varieties, Sierra Leone.

194 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image to detect both hypotheses: dark and clear defects. The existence and/or size of these defects are the gauge for classifying the quality grade of the stone products. The parameter tuning possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the defect size allowed.

Keywords: Automatic detection, wavelets, defects, fracture lines.

193 Intelligent Path Planning for Rescue Robot

Authors: Sohrab Khanmohammadi, Raana Soltani Zarrin

Abstract:

In this paper, a heuristic method for simultaneous rescue-robot path planning and mission scheduling is introduced, based on project management techniques, multi-criteria decision making, and artificial potential field path planning. Groups of injured people are trapped in a disaster situation and are categorized into several groups based on the severity of their condition. A rescue robot, whose ultimate objective is to reach the injured groups and provide preliminary aid through a path with minimum risk, has to perform certain tasks on its way towards the targets before the arrival of the rescue team. A decision value is assigned to each target based on the overall degree of satisfaction of the criteria and duties of the robot toward the target and on the importance of rescuing each target given its category and the number of injured people. The resulting decision value defines the strength of the attractive potential field of each target. Dangerous environmental parameters are defined as obstacles whose risk determines the strength of the repulsive potential field of each obstacle. Moreover, negative and positive energies are assigned to the targets and obstacles, and these vary with respect to the factors involved. The simulation results show that the paths generated for two case studies with certain differences in environmental conditions and other risk factors differ considerably.
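
A minimal attractive/repulsive potential-field step (single target, point obstacles; the gains, influence radius, and step clipping are hypothetical simplifications of the paper's decision-value weighting):

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, rho0=2.0,
             lr=0.05, max_step=0.2):
    """One gradient-descent step on U = U_att + U_rep, with
    U_att = 0.5 * k_att * ||pos - goal||^2 and Khatib-style repulsion
    active only within the influence radius rho0 of each obstacle."""
    force = -k_att * (pos - goal)                 # pull toward the goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 0 < d < rho0:                          # push away from obstacle
            force += k_rep * (1/d - 1/rho0) / d**2 * (pos - obs) / d
    step = lr * force
    n = np.linalg.norm(step)
    if n > max_step:                              # clip for stability
        step *= max_step / n
    return pos + step

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.5])]                # offset from the line
for _ in range(1000):
    pos = apf_step(pos, goal, obstacles)
print(np.round(pos, 2))  # ends near the goal, having skirted the obstacle
```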

Keywords: Artificial potential field, GERT, path planning
