Search results for: systemic methods.
3264 Best Option for Countercyclical Capital Buffer Implementation - Scenarios for Baltic States
Authors: Ģirts Brasliņš, Ilja Arefjevs, Nadežda Tarakanova
Abstract:
The objective of the countercyclical capital buffer is to encourage banks to build up buffers in good times that can be drawn down in bad times. The aim of the report is to assess the decisions banks would derive from three approaches: the aggregate credit-to-GDP ratio, credit growth, and banking sector profits. The approaches are implemented for Estonia, Latvia and Lithuania for the period 2000-2012. The report compares the three approaches and analyses their relevance to the Baltic States by testing the correlation between growth in the studied variables and growth of the corresponding gaps. Methods used in the empirical part of the report are econometric analysis as well as economic analysis, development indicators, relative and absolute indicators and other methods. The research outcome is a cross-Baltic comparison of two alternative approaches to establish or release a countercyclical capital buffer by banks and their implications for each Baltic country.
Keywords: Basel III, countercyclical capital buffer, banks, credit growth, Baltic States.
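The first approach above is, in Basel III guidance, typically operationalised as the gap between the credit-to-GDP ratio and its one-sided Hodrick-Prescott trend (lambda = 400,000 for quarterly data). A minimal sketch, assuming quarterly series; the credit and GDP numbers below are synthetic placeholders, not Baltic data.

```python
# Sketch of the credit-to-GDP gap behind the first approach; data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

def credit_to_gdp_gap(credit, gdp, lamb=400_000, min_obs=12):
    """Gap = ratio minus a one-sided (recursively re-estimated) HP trend."""
    ratio = 100.0 * credit / gdp
    gap = pd.Series(np.nan, index=ratio.index)
    for t in range(min_obs, len(ratio)):
        # One-sided filter: use only data available up to quarter t.
        _, trend = hpfilter(ratio.iloc[: t + 1], lamb=lamb)
        gap.iloc[t] = ratio.iloc[t] - trend.iloc[-1]
    return gap

idx = pd.period_range("2000Q1", "2012Q4", freq="Q")
credit = pd.Series(np.linspace(5, 20, len(idx)), index=idx)   # hypothetical
gdp = pd.Series(np.linspace(8, 22, len(idx)), index=idx)      # hypothetical
print(credit_to_gdp_gap(credit, gdp).tail())
```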
3263 Transformer Top-Oil Temperature Modeling and Simulation
Authors: T. C. B. N. Assunção, J. L. Silvino, P. Resende
Abstract:
The winding hot-spot temperature is one of the most critical parameters affecting the useful life of power transformers. It can be calculated as a function of the top-oil temperature, which in turn can be estimated from measured ambient temperature and transformer loading data. This paper proposes estimating the top-oil temperature with a method based on the Least Squares Support Vector Machines (LS-SVM) approach. The estimated top-oil temperature is compared with measured data from a power transformer in operation. The results are also compared with methods based on the IEEE Standard C57.91-1995/2000 and on Artificial Neural Networks. It is shown that the LS-SVM approach performs better than the methods based on the IEEE Standard C57.91-1995/2000 and artificial neural networks.
Keywords: Artificial Neural Networks, Hot-spot Temperature, Least Squares Support Vector, Top-oil Temperature.
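For readers unfamiliar with LS-SVM regression, the estimator reduces to solving a single linear (KKT) system rather than a quadratic program. A minimal sketch with an RBF kernel on synthetic stand-ins for (ambient temperature, loading) inputs; the hyperparameters are illustrative, not the paper's.

```python
# Minimal LS-SVM regression sketch (RBF kernel, closed-form dual solution).
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

# Toy stand-in for (ambient temperature, load) -> top-oil temperature:
X = np.random.rand(50, 2)
y = 30 + 25 * X[:, 0] + 15 * X[:, 1] ** 2 + np.random.randn(50)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:5]))
```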
3262 A New Approach of Fuzzy Methods for Evaluating of Hydrological Data
Authors: Nasser Shamskia, Seyyed Habib Rahmati, Hassan Haleh, Seyyedeh Hoda Rahmati
Abstract:
The main design criteria for most hydraulic constructions are essentially based on runoff or water discharge. Two important such measures are runoff and return period. Mostly, these measures are calculated or estimated from stochastic data. Another feature of hydrological data is their impreciseness. Therefore, in order to deal with uncertainty and impreciseness, a new fuzzy method of evaluating hydrological measures, based on Buckley's estimation method, is developed. The method introduces triangular fuzzy numbers for the different measures, in which both the uncertainty and the impreciseness concepts are considered. Besides, since another important consideration in most hydrological studies is the comparison of a measure across different months or years, a new fuzzy ranking method, consistent with the special form of the proposed fuzzy numbers, is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on a simple example and a real case study.
Keywords: Fuzzy discharge, fuzzy estimation, fuzzy ranking method, hydrological data.
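A sketch of the Buckley-style construction, under the assumption (standard in Buckley's method) that the alpha-cut of the fuzzy estimate is the (1 - alpha) confidence interval; the discharge sample is hypothetical.

```python
# Buckley-style triangular fuzzy estimate of a mean discharge: the base
# (0-cut) is approximated by the widest confidence interval considered.
import numpy as np
from scipy import stats

def buckley_triangular(sample, alpha0=0.01):
    """Return (left, peak, right) of a triangular fuzzy estimate of the mean."""
    x = np.asarray(sample, dtype=float)
    mean = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))
    half = stats.t.ppf(1 - alpha0 / 2, df=len(x) - 1) * se   # (1-alpha0) CI
    return mean - half, mean, mean + half

monthly_discharge = [42.0, 55.3, 39.8, 61.2, 47.5, 52.1]     # hypothetical m^3/s
print(buckley_triangular(monthly_discharge))
```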
3261 Monotonicity of Dependence Concepts from Independent Random Vector into Dependent Random Vector
Authors: Guangpu Chen
Abstract:
When the failure function is monotone, monotonic reliability methods can be used to greatly simplify and facilitate the reliability computations. However, these methods often work in a transformed iso-probabilistic space. To this end, a monotonic simulator or transformation is needed so that the transformed failure function is still monotone. This note first proves that the output distribution of the failure function is invariant under the transformation. It then presents some conditions under which the transformed function is still monotone in the newly obtained space. These conditions concern the copulas and the dependence concepts. In many engineering applications, Gaussian copulas are used to approximate the real-world copulas when the available information on the random variables is limited to the set of marginal distributions and the covariances. This note therefore focuses on the conditional monotonicity of the commonly used transformation from an independent random vector into a dependent random vector with Gaussian copulas.
Keywords: Monotonic, Rosenblatt, Nataf transformation, dependence concepts, completely positive matrices, Gaussian copulas.
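The transformation in question can be sketched as follows: an independent standard-normal vector is made dependent through the Gaussian copula and then mapped through inverse marginal CDFs (a Nataf-type construction). The marginals and correlation below are illustrative, not from the note.

```python
# Independent standard normals -> dependent vector with a Gaussian copula.
import numpy as np
from scipy import stats

rho = np.array([[1.0, 0.6],
                [0.6, 1.0]])            # correlation of the Gaussian copula
L = np.linalg.cholesky(rho)

u_indep = np.random.randn(10_000, 2)    # independent standard normals
z = u_indep @ L.T                       # correlated standard normals
u = stats.norm.cdf(z)                   # Gaussian-copula samples in [0,1]^2

# Map through inverse marginal CDFs (lognormal and Gumbel as examples):
x1 = stats.lognorm.ppf(u[:, 0], s=0.25, scale=np.exp(3.0))
x2 = stats.gumbel_r.ppf(u[:, 1], loc=10.0, scale=2.0)
print(np.corrcoef(x1, x2)[0, 1])        # induced correlation of the pair
```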
3260 Radiochemical Purity of 68Ga-BCA-Peptides: Separation of All 68Ga Species with a Single iTLC Strip
Authors: Anton A. Larenkov, Alesya Ya Maruk
Abstract:
In the present study, a highly effective single-strip iTLC method for the determination of radiochemical purity (RCP) of 68Ga-BCA-peptides was developed, with no double development, change of eluents or other additional manipulation. The method uses iTLC-SG strips and the commonly used eluent TFAaq. (3-5 % (v/v)). It allows each of the key radiochemical forms of 68Ga (colloidal, bound, ionic) to be determined separately, with peak separation of no less than 4 σ: Rf = 0.0-0.1 for 68Ga-colloid; Rf = 0.5-0.6 for 68Ga-BCA-peptides; Rf = 0.9-1.0 for ionic 68Ga. The method is simple and fast: for a developing length of 75 mm, only 4-6 min is required (versus 18-20 min for the pharmacopoeial method). The method has been tested on various compounds (including 68Ga-DOTA-TOC, 68Ga-DOTA-TATE, 68Ga-NODAGA-RGD2, etc.). Cross-validation for every specific form of 68Ga showed good correlation between the developed method and the control (pharmacopoeial) methods. The method can become a convenient and much more informative replacement for pharmacopoeial methods, including HPLC.
Keywords: DOTA-TATE, 68Ga, quality control, radiochemical purity, radiopharmaceuticals, iTLC.
3259 Phytoremediation of Cd and Pb by Four Tropical Timber Species Grown on an Ex-tin Mine in Peninsular Malaysia
Authors: Lai Hoe Ang, Lai Kuen Tang, Wai Mun Ho, Ting Fui Hui, Gary W. Theseira
Abstract:
Heavy metal contamination in tin tailings has generated interest in scientific approaches to their remediation. One such approach is phytoremediation, which uses tree species to extract the heavy metals from contaminated soils. Tin tailings comprise slime and sand tailings. This paper reports only the findings for four timber species, namely Acacia mangium, Hopea odorata, Intsia palembanica and Swietenia macrophylla, on the removal of cadmium (Cd) and lead (Pb) from slime tailings. The methods employed for sampling and soil analysis are established methods. Six trees of each species were randomly selected from a 0.25 ha plot for extraction and determination of their heavy metals. The soil samples were collected systematically on a 5 x 5 m grid within each plot. Results showed that the concentrations of heavy metals in soils and trees varied by species. Higher concentrations of heavy metals were found in the stem than in the primary roots of all the species. A. mangium accumulated the highest total amount of Pb on a per-hectare basis.
Keywords: Cd, Pb, phytoremediation of slime tailings, timber species.
3258 Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses
Authors: K. Chinda, P. Luangpaiboon
Abstract:
This paper presents a comparative study of coded-data methods for assessing the benefit of concealing natural data that constitute a trade secret. The influence of the number of replicates (rep), treatment effects (τ) and standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulation under the specified process conditions with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine and logit methods. The differences in F statistic between coded and natural data (Fc-Fn) and the hypothesis testing results were determined. The experimental results indicate that the Box-Cox results differ significantly from the natural data for smaller numbers of replicates, and the method seems improper when a negative lambda has been assigned. On the other hand, the arcsine and logit transformations are more robust and clearly provide more precise numerical results. In addition, alternative ways to select the lambda in the power transformation are offered to achieve more appropriate outcomes.
Keywords: Experimental designs, Box-Cox, arcsine, logit transformations.
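A sketch of the simulation scheme described, assuming one-way ANOVA on CRD data: the F statistic is computed on the natural data and on Box-Cox-coded data, and the difference Fc-Fn is inspected. Parameter values are illustrative, not the paper's design points.

```python
# Simulated CRD data: compare ANOVA F on natural vs Box-Cox coded data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rep, tau, sigma = 5, [0.0, 1.0, 2.0], 1.0       # replicates, effects, sd
groups = [10.0 + t + sigma * rng.standard_normal(rep) for t in tau]

f_nat, _ = stats.f_oneway(*groups)              # F on natural data

pooled = np.concatenate(groups)
coded, lam = stats.boxcox(pooled)               # requires positive data
coded_groups = np.split(coded, len(tau))        # same group order/sizes
f_cod, _ = stats.f_oneway(*coded_groups)

print(f"lambda={lam:.2f}  Fn={f_nat:.2f}  Fc={f_cod:.2f}  Fc-Fn={f_cod - f_nat:.2f}")
```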
3257 Context Modeling and Context-Aware Service Adaptation for Pervasive Computing Systems
Authors: Moeiz Miraoui, Chakib Tadj, Chokri ben Amar
Abstract:
Devices in a pervasive computing system (PCS) are characterized by their context-awareness, which permits them to provide proactively adapted services to the user and applications. To do so, context must be well understood and modeled in an appropriate form that enhances its sharing between devices and provides a high level of abstraction. The most interesting methods for modeling context are those based on ontologies; however, the majority of the proposed methods fail to propose a generic context ontology, which limits their usability and keeps them specific to a particular domain. The adaptation task must be done automatically, without explicit intervention by the user. Devices of a PCS must acquire some intelligence that permits them to sense the current context and trigger the appropriate service, or provide a service in a better-suited form. In this paper we propose a generic service ontology for context modeling and a context-aware service adaptation based on a service-oriented definition of context.
Keywords: Pervasive computing system, context, context-awareness, service, context modeling, ontology, adaptation, machine learning.
3256 Phase Equilibrium of Volatile Organic Compounds in Polymeric Solvents Using Group Contribution Methods
Authors: E. Muzenda
Abstract:
Group contribution methods such as UNIFAC are of major interest to researchers and engineers involved in synthesis, feasibility studies, design and optimization of separation processes, as well as other applications of industrial use. Reliable knowledge of phase equilibrium behavior is crucial for predicting the fate of a chemical in the environment and for other applications. The objective of this study was to predict the solubility of selected volatile organic compounds (VOCs) in glycol polymers and biodiesel. Measurements can be expensive and time consuming, hence the need for thermodynamic models. The infinite dilution activity coefficients obtained in this study compare very well with those published in the literature obtained through measurements. It is suggested that in preliminary design or feasibility studies of absorption systems for the abatement of volatile organic compounds, prediction procedures should be implemented, while accurate fluid phase equilibrium data should be obtained from experiment.
Keywords: Volatile organic compounds, prediction, phase equilibrium, environmental, infinite dilution.
3255 A Model to Determine Atmospheric Stability and its Correlation with CO Concentration
Authors: Kh. Ashrafi, Gh. A. Hoshyaripour
Abstract:
Atmospheric stability plays the most important role in the transport and dispersion of air pollutants. Different methods, of varying complexity, are used for stability determination. Most of these methods are based on the relative magnitude of convective and mechanical turbulence in atmospheric motions. The Richardson number, the Monin-Obukhov length, the Pasquill-Gifford stability classification and the Pasquill-Turner stability classification are the most common parameters and methods. The Pasquill-Turner Method (PTM), which is employed in this study, makes use of observations of wind speed, insolation and the time of day to classify atmospheric stability with distinguishable indices. In this study, a model is presented to determine atmospheric stability conditions using the PTM. As a case study, meteorological data from Mehrabad station in Tehran from 2000 to 2005 are applied to the model. Three different categories are considered to deduce the pattern of stability conditions. First, the overall pattern of stability classification is obtained; the results show that the atmosphere is stable, neutral and unstable 38.77%, 27.26% and 33.97% of the time, respectively. It is also observed that days are mostly unstable (66.50%) while nights are mostly stable (72.55%). Second, monthly and seasonal patterns are derived; the results indicate that the relative frequency of stable conditions decreases from January to June and increases from June to December, while the results for unstable conditions behave in exactly the opposite manner. Autumn is the most stable season, with a relative frequency of stable conditions of 50.69%, versus 42.79%, 34.38% and 27.08% for winter, summer and spring, respectively. The hourly stability pattern is the third category; it points out that unstable conditions are dominant from approximately 03-15 GMT and 04-12 GMT for the warm and cold seasons, respectively. Finally, the correlation between atmospheric stability and CO concentration is obtained.
Keywords: Atmospheric stability, Pasquill-Turner classification, convective turbulence, mechanical turbulence, Tehran.
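The PTM itself combines a net-radiation index with wind speed; as a rough illustration of the idea, the sketch below uses the common simplified Pasquill-Gifford lookup (wind speed plus insolation by day, cloud cover by night), not the authors' exact index tables.

```python
# Simplified Pasquill-type stability classifier (A = very unstable .. F = very
# stable). The lookup is the textbook Pasquill-Gifford table with the split
# classes (A-B, etc.) resolved to single letters; it stands in for the PTM.
def stability_class(wind_ms, daytime, insolation="moderate", cloudy_night=False):
    day = {  # wind-speed bin -> class by strong/moderate/slight insolation
        0: {"strong": "A", "moderate": "A", "slight": "B"},
        2: {"strong": "A", "moderate": "B", "slight": "C"},
        3: {"strong": "B", "moderate": "B", "slight": "C"},
        5: {"strong": "C", "moderate": "C", "slight": "D"},
        6: {"strong": "C", "moderate": "D", "slight": "D"},
    }
    night = {0: ("F", "F"), 2: ("E", "F"), 3: ("D", "E"),
             5: ("D", "D"), 6: ("D", "D")}   # (>=4/8 cloud, <=3/8 cloud)
    bin_ = max(b for b in (0, 2, 3, 5, 6) if wind_ms >= b)
    if daytime:
        return day[bin_][insolation]
    return night[bin_][0 if cloudy_night else 1]

print(stability_class(2.5, daytime=True, insolation="strong"))   # -> 'A'
print(stability_class(2.5, daytime=False, cloudy_night=False))   # -> 'F'
```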
3254 BIM Application Research Based on the Main Entrance and Garden Area Project of Shanghai Disneyland
Authors: Ying Yuken, Pengfei Wang, Zhang Qilin, Xiao Ben
Abstract:
Based on the main entrance and garden area (ME&G) project of Shanghai Disneyland, this paper introduces the application of BIM technology in this kind of low-rise comprehensive building with complex facade, electromechanical and decoration systems. BIM technology is applied to the whole process of design, construction and completion of the project. With the construction of a BIM application framework for the whole project, the key points of the BIM modeling methods for the different systems and the integration and coordination of the BIM models are elaborated in detail. The specific application methods of BIM technology in similar complex low-rise building projects are sorted out. Finally, the paper summarizes the benefits of applying BIM technology and puts forward some suggestions for the BIM management mode and practical application of similar projects in the future.
Keywords: BIM, complex low-rise building, BIM modeling, model integration and coordination, 3D scanning.
3253 Application of Post-Stack and Pre-Stack Seismic Inversion for Prediction of Hydrocarbon Reservoirs in a Persian Gulf Gas Field
Authors: Nastaran Moosavi, Mohammad Mokhtari
Abstract:
Seismic inversion is a technique that has been in use for years; its main goal is to estimate and model the physical characteristics of rocks and fluids. Generally, it is a combination of seismic and well-log data. Seismic inversion can be carried out through different methods; we have conducted and compared post-stack and pre-stack seismic inversion on real data from one of the fields in the Persian Gulf. Pre-stack seismic inversion can transform seismic data into rock-physics parameters such as P-impedance, S-impedance and density, while post-stack seismic inversion can estimate only P-impedance. These parameters can then be used in reservoir identification. Based on the results of inverting the seismic data, a gas reservoir was detected in one of the hydrocarbon fields in the south of Iran (Persian Gulf). Comparing post-stack and pre-stack seismic inversion, it can be concluded that pre-stack seismic inversion provides more reliable and detailed information for the identification and prediction of hydrocarbon reservoirs.
Keywords: Density, P-impedance, S-impedance, post-stack seismic inversion, pre-stack seismic inversion.
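Post-stack inversion is often introduced through its simplest recursive (trace-integration) form, in which P-impedance is rebuilt from a reflectivity series. A sketch on a synthetic log; real workflows add wavelet removal and a low-frequency model, neither of which is shown here.

```python
# Recursive post-stack impedance inversion on a synthetic reflectivity series.
import numpy as np

def impedance_from_reflectivity(r, z0):
    """Z_{i+1} = Z_i * (1 + r_i) / (1 - r_i), starting from impedance z0."""
    z = np.empty(len(r) + 1)
    z[0] = z0
    for i, ri in enumerate(r):
        z[i + 1] = z[i] * (1.0 + ri) / (1.0 - ri)
    return z

# Round-trip check on a synthetic impedance log:
z_true = np.array([4.5e6, 4.8e6, 5.6e6, 5.2e6, 6.1e6])   # kg/(m^2 s)
refl = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
print(impedance_from_reflectivity(refl, z_true[0]))       # recovers z_true
```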
3252 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its utilization as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS materials from the local market, a red and a grey one, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed under the same heating rates. The experiments showed that the red XPS released more heat than the grey XPS and that both materials exhibit two mass-loss stages. Consequently, the kinetic parameters for the red XPS were higher than for the grey XPS. A comparative evaluation of activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, different activation energy profiles, resulting from different chemical pathways, appeared when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.
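The KAS evaluation reduces to a linear regression per conversion level: ln(β/T²) against 1/T across heating rates, with slope -Ea/R. A sketch with illustrative temperatures, not the paper's measurements.

```python
# Kissinger-Akahira-Sunose: apparent activation energy at a fixed conversion.
import numpy as np

R = 8.314                                   # J/(mol K)
beta = np.array([5.0, 10.0, 20.0, 40.0])    # heating rates, K/min
T = np.array([641.0, 653.0, 666.0, 680.0])  # T at alpha = 0.5, K (illustrative)

# ln(beta / T^2) = const - Ea / (R * T)  ->  slope of fit gives -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(beta / T**2), 1)
Ea = -slope * R / 1000.0
print(f"KAS apparent activation energy at alpha=0.5: {Ea:.1f} kJ/mol")
```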
3251 Face Recognition Based On Vector Quantization Using Fuzzy Neuro Clustering
Authors: Elizabeth B. Varghese, M. Wilscy
Abstract:
A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. Many algorithms have been proposed for face recognition. Vector Quantization (VQ) based face recognition is a novel approach. Here, a new codebook generation for VQ-based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network that incorporates a fuzzy learning rule into a competitive neural network. The performance of the proposed algorithm is demonstrated using the publicly available AT&T, Yale and Indian Face databases and a small face database, the DCSKU database, created in our lab. On all the databases, the proposed approach achieved a higher recognition rate than most existing methods. In terms of Equal Error Rate (EER), the proposed codebook is also better than the existing methods.
Keywords: Face Recognition, Vector Quantization, Integrated Adaptive Fuzzy Clustering, Self-Organizing Map.
3250 Optimization of Transmission Lines Loading in TNEP Using Decimal Codification Based GA
Authors: H. Shayeghi, M. Mahdavi
Abstract:
Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Up to now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. But in all of these methods, the lines' adequacy rate at the end of the planning horizon has not been considered; that is, the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning has been implemented by merging a lines-loading parameter into the STNEP and inserting investment cost into the fitness function constraints using a genetic algorithm. The expanded network will possess maximum adequacy to supply the load demand, and the transmission lines will become overloaded later. Finally, an adequacy index can be defined and used to compare designs that have different investment costs and adequacy rates. In this paper, the proposed idea has been tested on the Garver network. The results show that the network will possess maximum efficiency economically.
Keywords: Adequacy Optimization, Transmission Expansion Planning, DCGA.
3249 Methods for Distinction of Cattle Using Supervised Learning
Authors: Radoslav Židek, Veronika Šidlová, Radovan Kasarda, Birgit Fuerst-Waltl
Abstract:
Machine learning represents a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. The data can present identification patterns, which are used to classify them into groups. The result of the analysis is a pattern that can be used to identify a data set without needing the input data used to create the pattern. Important requirements in this process are careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. In the case of missing pedigree information, other methods can be used to trace an animal's origin. The genetic diversity written in genetic data holds relatively useful information for identifying animals originating from individual countries. We conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and for identifying an individual.
Keywords: Genetic data, Pinzgau cattle, supervised learning.
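A sketch of the supervised-learning step under stated assumptions: genotype vectors labelled by country of origin, with a random-forest classifier standing in for whatever learner the authors used; the SNP data are simulated, not real Pinzgau genotypes.

```python
# Classify animals by origin from simulated SNP genotypes (allele counts 0/1/2).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_animals, n_snps = 200, 50
country = rng.integers(0, 2, n_animals)                  # two origins
freq = np.where(country[:, None] == 0, 0.3, 0.6)         # diverged allele freqs
genotypes = rng.binomial(2, freq * np.ones((1, n_snps)), (n_animals, n_snps))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, genotypes, country, cv=5).mean())   # CV accuracy
```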
3248 Adaption Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements
Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath
Abstract:
Where human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimum vocabulary, and their pronunciation variations are stored at the front of their memory for ready reference, while machines keep the entire pronunciation dictionary for ready reference. Supervised methods are used for the preparation of pronunciation dictionaries; these take large amounts of manual effort, cost and time, and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach of learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. The performance of the system is measured using an adaptation model, and the precision is found to be better than 86 percent.
Keywords: Pronunciation variations, dynamic programming, machine learning, natural language processing.
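The abstract does not spell out Dynamic Phone Warping, so the following is a sketch in its spirit only: a dynamic-programming alignment of two phone sequences, with a hypothetical substitution-cost table standing in for a real phonemic-distance matrix.

```python
# DP alignment of phone sequences; substitution costs are placeholders.
def phone_distance(a, b, sub_cost=None, indel=1.0):
    sub_cost = sub_cost or {}
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * indel
    for j in range(1, m + 1):
        D[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = 0.0 if a[i-1] == b[j-1] else sub_cost.get((a[i-1], b[j-1]), 1.0)
            D[i][j] = min(D[i-1][j-1] + s,       # substitute / match
                          D[i-1][j] + indel,     # delete
                          D[i][j-1] + indel)     # insert
    return D[n][m]

# Two pronunciations of "tomato" in ARPAbet-like phones:
v1 = ["T", "AH", "M", "EY", "T", "OW"]
v2 = ["T", "AH", "M", "AA", "T", "OW"]
print(phone_distance(v1, v2, sub_cost={("EY", "AA"): 0.4}))   # -> 0.4
```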
3247 Exploring the Potential of Phase Change Memories as an Alternative to DRAM Technology
Authors: Venkataraman Krishnaswami, Venkatasubramanian Viswanathan
Abstract:
Scalability poses a severe threat to the existing DRAM technology. The capacitors that are used for storing and sensing charge in DRAM are generally not scaled beyond 42 nm. This is because the capacitors must be sufficiently large for reliable sensing and charge storage. This leaves DRAM memory scaling in jeopardy, as charge sensing and storage mechanisms become extremely difficult. In this paper we provide an overview of the potential and the possibilities of using Phase Change Memory (PCM) as an alternative to the existing DRAM technology. The main challenges that we encounter in using PCM are its limited endurance, high access latencies, and higher dynamic energy consumption than that of conventional DRAM. We then provide an overview of various methods that can be employed to overcome these drawbacks. Hybrid memories involving both PCM and DRAM can be used to achieve good tradeoffs between access latency and storage density. We conclude by presenting the results of these methods, which make PCM a potential replacement for the current DRAM technology.
Keywords: DRAM, Phase Change Memory.
3246 Adaptive Noise Reduction Algorithm for Speech Enhancement
Authors: M. Kalamani, S. Valarmathy, M. Krishnamoorthi
Abstract:
In this paper, a Least Mean Square (LMS) adaptive noise reduction algorithm is proposed to enhance the speech signal from noisy speech. Here, the speech signal is enhanced by varying the step size as a function of the input signal. Objective and subjective measures are made under various noises for the proposed and existing algorithms. From the experimental results, it is seen that the proposed LMS adaptive noise reduction algorithm reduces the Mean Square Error (MSE) and Log Spectral Distance (LSD) compared with the earlier methods under various noise conditions with different input SNR levels. In addition, the proposed algorithm increases the Peak Signal to Noise Ratio (PSNR) and segmental SNR improvement (ΔSNRseg) values and improves the Mean Opinion Score (MOS) compared with the various existing LMS adaptive noise reduction algorithms. From these experimental results, it is observed that the proposed LMS adaptive noise reduction algorithm reduces speech distortion and residual noise compared with the existing methods.
Keywords: LMS, speech enhancement, speech quality, residual noise.
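The abstract does not give the exact variable step-size rule, so the sketch below uses an NLMS-style normalization (step size scaled by input power) as a stand-in, in an adaptive noise-cancellation arrangement with a reference noise input; all signals are synthetic.

```python
# LMS noise canceller with an input-dependent (normalized) step size.
import numpy as np

def vs_lms(noisy, reference, order=8, mu=0.5, eps=1e-6):
    """Return the enhanced signal: noisy speech minus filtered reference noise."""
    n = len(noisy)
    w = np.zeros(order)
    out = np.zeros(n)
    for k in range(order, n):
        x = reference[k - order:k][::-1]      # reference-noise tap vector
        e = noisy[k] - w @ x                  # error = enhanced sample
        w += (mu / (eps + x @ x)) * e * x     # step scaled by input power
        out[k] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
speech = np.sin(2 * np.pi * 300 * t)          # toy stand-in for speech
noise = rng.standard_normal(len(t))
enhanced = vs_lms(speech + 0.5 * noise, noise)
print(np.mean((enhanced[1000:] - speech[1000:]) ** 2))   # residual MSE
```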
3245 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal, which helps in understanding the information in the speech signal. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with Universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.
3244 Parenting Styles and Their Relation to Videogame Addiction
Authors: Petr Květon, Martin Jelínek
Abstract:
We try to identify the role of various aspects of parenting style in the phenomenon of videogame playing addiction. Relevant self-report questionnaires were part of a wider set of methods focused on constructs related to videogame playing. The battery of methods was administered in school settings in paper-and-pencil form. The research sample consisted of 333 (166 male, 167 female) elementary and high school students aged between 10 and 19 years (m=14.98, sd=1.77). Using stepwise regression analysis, we assessed the influence of demographic variables (gender and age) and parenting styles. Age and gender together explained 26.3% of game addiction variance (F(2,330)=58.81, p<.01). Adding four aspects of parenting style (inconsistency, involvement, control, and warmth) explained another 10.2% of variance (∆F(4,326)=13.09, p<.01). The significant predictors were the gender of the respondent, with males scoring higher on the game addiction scale (B=0.70, p<.01); age (β=-0.18, p<.01), with younger children showing a higher level of addiction; and parental inconsistency (β=0.30, p<.01), with higher inconsistency in upbringing associated with more developed game playing addiction.
Keywords: Gender, parenting styles, video games, addiction.
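The reported ΔR² and ΔF values come from a two-step (hierarchical) regression: demographics entered first, parenting-style scales second, with the R² change F-tested. A sketch of that computation on simulated stand-ins for the questionnaire scores; variable names are chosen for illustration.

```python
# Two-step hierarchical regression with an R^2-change F test.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 333
gender = rng.integers(0, 2, n)                 # 1 = male
age = rng.uniform(10, 19, n)
incons, involv, control, warmth = rng.standard_normal((4, n))
addiction = 0.7*gender - 0.18*age + 0.3*incons + rng.standard_normal(n)

step1 = sm.OLS(addiction, sm.add_constant(np.column_stack([gender, age]))).fit()
X2 = np.column_stack([gender, age, incons, involv, control, warmth])
step2 = sm.OLS(addiction, sm.add_constant(X2)).fit()

df_num, df_den = 4, n - X2.shape[1] - 1        # 4 added predictors; df = 326
dR2 = step2.rsquared - step1.rsquared
F = (dR2 / df_num) / ((1 - step2.rsquared) / df_den)
print(f"dR2={dR2:.3f}, F({df_num},{df_den})={F:.2f}, "
      f"p={1 - stats.f.cdf(F, df_num, df_den):.4f}")
```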
3243 Glass Bottle Inspector Based on Machine Vision
Authors: Huanjun Liu, Yaonan Wang, Feng Duan
Abstract:
This paper studies an intelligent glass bottle inspector based on machine vision, replacing manual inspection. The system structure is illustrated in detail. The paper presents a method based on the watershed transform to segment the possible defective regions and extract features of the bottle wall by rules. Wavelet transforms are then used to extract features of the bottle finish from images. After feature extraction, a fuzzy support vector machine ensemble is put forward as the classifier. To ensure that the fuzzy support vector machines have good classification ability, a GA-based ensemble method is used to combine the several fuzzy support vector machines. The experiments demonstrate that using this inspector to inspect glass bottles, the accuracy rate may reach above 97.5%.
Keywords: Intelligent inspection, support vector machines, ensemble methods, watershed transform, wavelet transform.
3242 Zero Inflated Models for Overdispersed Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Zero-inflated models are usually used for modeling count data with excess zeros, where the excess zeros could be structural zeros or zeros that occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and the health sciences, including dental epidemiology. The most popular zero-inflated models used by many researchers are the zero-inflated Poisson and zero-inflated negative binomial models. In addition, zero-inflated generalized Poisson and zero-inflated double Poisson models are also discussed and found in some of the literature. Recently, the zero-inflated inverse trinomial model and the zero-inflated strict arcsine model have been advocated and proven to serve as alternative models for overdispersed count data caused by excessive zeros and unobserved heterogeneity. The purpose of this paper is to review some related literature and provide a variety of examples from different disciplines of the application of zero-inflated models. Different model selection methods used in model comparison are discussed.
Keywords: Overdispersed count data, model selection methods, likelihood ratio, AIC, BIC.
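A sketch of fitting and comparing a plain Poisson and a zero-inflated Poisson on simulated counts with structural zeros, using the AIC/BIC criteria named in the keywords; it assumes statsmodels' ZeroInflatedPoisson class, and the data-generating values are illustrative.

```python
# Poisson vs zero-inflated Poisson on counts with 30% structural zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 1000
x = rng.standard_normal(n)
lam = np.exp(0.5 + 0.4 * x)
counts = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))

X = sm.add_constant(x)
pois = sm.Poisson(counts, X).fit(disp=0)
zip_ = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1))).fit(disp=0)

print(f"Poisson AIC={pois.aic:.1f} BIC={pois.bic:.1f}")
print(f"ZIP     AIC={zip_.aic:.1f} BIC={zip_.bic:.1f}")   # ZIP should win here
```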
3241 Analytical and Experimental Methods of Design for Supersonic Two-Stage Ejectors
Authors: S. Daneshmand, C. Aghanajafi, A. Bahrami
Abstract:
In this paper, supersonic ejectors are studied experimentally and analytically. An ejector is a device that uses the energy of one fluid to move another fluid. This device works like a vacuum pump without the use of a piston, rotor or any other moving component. An ejector contains an active nozzle, a passive nozzle, a mixing chamber and a diffuser. Since the fluid viscosity is large and the flow in the mixing chamber is turbulent and three-dimensional, numerical methods consume much time and cost to analyze the flow in ejectors. Therefore, this paper presents a simple analytical method based on the precise governing equations of fluid mechanics. Based on the derived analytical relations, a computer code has been prepared to analyze the flow in the different components of the ejector. An experiment has been performed in the supersonic regime 1.5
3240 A Retrievable Genetic Algorithm for Efficient Solving of Sudoku Puzzles
Authors: Seyed Mehran Kazemi, Bahare Fatemi
Abstract:
Sudoku is a logic-based combinatorial puzzle game which is popular among people of different ages. Due to this popularity, computer software is being developed to generate and solve Sudoku puzzles with different levels of difficulty. Several methods and algorithms have been proposed and used in different programs to solve Sudoku puzzles efficiently. Various search methods, such as stochastic local search, have been applied to this problem. The Genetic Algorithm (GA) is one of the algorithms which has been applied to this problem in different forms and in several works in the literature. In these works, chromosomes with little or no information were considered, and the results obtained were not promising. In this paper, we propose a new way of applying a GA to this problem which uses more-informed chromosomes than other works in the literature. We optimize the parameters of our GA using puzzles with different levels of difficulty. We then use the optimized values of the parameters to solve various puzzles and compare our results to another GA-based method for solving Sudoku puzzles.
Keywords: Genetic algorithm, optimization, solving Sudoku puzzles, stochastic local search.
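A compact GA sketch in the spirit described: each chromosome keeps every row a valid permutation that respects the givens (a "more-informed" encoding than raw digit strings), fitness counts column and box conflicts, and crossover recombines whole rows. All operators and parameters are illustrative; this is not the authors' exact design.

```python
# GA for Sudoku: puzzles are 9 lists of 9 ints, 0 for blanks.
import random

def fill_rows(puzzle):
    """Complete each row with its missing digits in random order."""
    grid = []
    for row in puzzle:
        missing = [d for d in range(1, 10) if d not in row]
        random.shuffle(missing)
        it = iter(missing)
        grid.append([c if c else next(it) for c in row])
    return grid

def conflicts(g):
    """Count duplicate-induced conflicts in columns and 3x3 boxes."""
    bad = 0
    for j in range(9):
        bad += 9 - len({g[i][j] for i in range(9)})
    for bi in range(0, 9, 3):
        for bj in range(0, 9, 3):
            bad += 9 - len({g[bi+i][bj+j] for i in range(3) for j in range(3)})
    return bad

def mutate(g, puzzle):
    """Swap two non-given cells within one row (keeps rows valid)."""
    i = random.randrange(9)
    free = [j for j in range(9) if puzzle[i][j] == 0]
    if len(free) >= 2:
        a, b = random.sample(free, 2)
        g[i][a], g[i][b] = g[i][b], g[i][a]

def solve(puzzle, pop_size=300, gens=2000):
    pop = [fill_rows(puzzle) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=conflicts)
        if conflicts(pop[0]) == 0:
            return pop[0]                       # solved grid
        elite = pop[: pop_size // 5]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, 9)
            child = [row[:] for row in p1[:cut] + p2[cut:]]   # row crossover
            mutate(child, puzzle)
            children.append(child)
        pop = elite + children
    return min(pop, key=conflicts)              # best effort if not solved
```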
3239 Peakwise Smoothing of Data Models using Wavelets
Authors: D Sudheer Reddy, N Gopal Reddy, P V Radhadevi, J Saibaba, Geeta Varadan
Abstract:
Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window-smoothing methods were developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion or wavelet expansion also gives smoothed data. Wavelets also smooth the data to a great extent by thresholding the wavelet coefficients. Almost all smoothing methods destroy the peaks and flatten them when the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology, called peakwise smoothing, that will smooth the data to any desired level without losing the major peak features.
Keywords: Smoothing, moving average, peakwise smoothing, spatial density models, planar shape models, wavelets.
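To make the peak-flattening point concrete, the sketch below contrasts a wide moving average with a Savitzky-Golay filter on a noisy narrow peak; the paper's peakwise method itself is not reproduced, only the baseline behaviour it improves upon.

```python
# Peak preservation: moving average vs Savitzky-Golay on a noisy narrow peak.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(-3, 3, 301)
signal = np.exp(-x**2 / 0.05)                        # narrow unit-height peak
noisy = signal + 0.05 * np.random.default_rng(4).standard_normal(x.size)

window = 31
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")
savgol = savgol_filter(noisy, window_length=window, polyorder=4)

print(f"true peak 1.00  moving-average peak {moving_avg.max():.2f}  "
      f"Savitzky-Golay peak {savgol.max():.2f}")     # SG keeps the peak higher
```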
3238 Revisiting Domestication and Foreignisation Methods: Translating the Quran by the Hybrid Approach
Authors: Aladdin Al-Tarawneh
Abstract:
The Quran, as the sacred book of Islam, considered the literal word of God (Allah) in Arabic, is widely translated into many languages; however, the foreignising, or literal, approach excessively stains the quality and discredits the final product in the eyes of its receptors. Such an approach fails to capture the intended meaning of the Quran and to communicate it in any language. Therefore, this study proposes a different approach, one that combines existing approaches according to a hybrid model. Indeed, this study challenges the binary adherence that is prevalent in Translation Studies (TS) in general and in the translation of the Quran in particular. Drawing on the fact that the Quran can be communicated in any language in terms of meaning, and that the translation is not sacred, this paper approaches the translation of the Quran by blending different methods, such as domestication and foreignisation, in a systematic way, avoiding the binary choice made by many translators. To this end, the paper has a conceptual part that seeks to elucidate and clarify the main methods employed in TS, and to criticise and modify them in order to propose the new hybrid approach (the hybrid model) for translating the Quran, that is, the deductive method. To support and validate the outcome of the previous part, a comparative model is employed to highlight the differences between the suggested translation and other widely used ones, that is, the inductive method. By applying this methodology, the paper shows the deficiency in communicating the original meaning of the Quran under the foreignising approach. In conclusion, the paper suggests that producing a Quran translation has to take into account the adoption of many techniques to express the meaning of the Quran as understood in the original, and to offer this understanding in English in the most native-like manner, to serve the intended target readers.
Keywords: Quran translation, hybrid approach, domestication, foreignisation, hybrid model.
3237 Effectiveness of Business Software Systems Development and Enhancement Projects versus Work Effort Estimation Methods
Authors: Beata Czarnacka-Chrobot
Abstract:
Execution of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) is characterized by exceptionally low effectiveness, leading to considerable financial losses. The general reason for the low effectiveness of such projects is that they are inappropriately managed. One factor of proper BSS D&EP management is a suitable (reliable and objective) method of estimating project work effort, since this is what determines correct estimation of the project's major attributes: cost and duration. A BSS D&EP is usually considered to be accomplished effectively if a product of the planned functionality is delivered without cost and time overruns. The goal of this paper is to prove that the choice of approach to BSS D&EP work effort estimation has a considerable influence on the effectiveness of such projects' execution.
Keywords: Business software systems, development and enhancement projects, effectiveness, work effort estimation methods, software product size, software product functionality, project duration, project cost.
3236 A Usability Testing Approach to Evaluate User-Interfaces in Business Administration
Authors: Salaheddin Odeh, Ibrahim O. Adwan
Abstract:
This interdisciplinary study is an investigation to evaluate user interfaces in business administration. The study is implemented on two computerized business administration systems with two distinctive user interfaces, so that differences between the two systems can be determined. Both systems, a commercial one and a prototype developed for the purpose of this study, deal with the ordering of supplies, tendering procedures, issuing purchase orders, controlling the movement of stock against its actual balance on the shelves, and editing the corresponding tabulations. In the second, proposed system, modern computer graphics and multimedia issues were taken into consideration to cover the drawbacks of the first system. To highlight differences between the two investigated systems with regard to chosen standard quality criteria, the study employs various statistical techniques and methods to evaluate the users' interaction with both systems. The study variables are divided into two groups: independent variables representing the interfaces of the two systems, and dependent variables embracing efficiency, effectiveness, satisfaction, error rate, etc.
Keywords: Evaluation and usability testing, software prototyping, statistical methods, user-interface design.
3235 The Feedback Control for Distributed Systems
Authors: Kamil Aida-zade, C. Ardil
Abstract:
We study the problem of synthesizing lumped-source controls for objects with distributed parameters on the basis of continuous observation of the phase state at given points of the object. In the proposed approach, the phase state space (phase space) is partitioned beforehand, at the observable points, into given subsets (zones). The synthesized control actions are taken from the class of piecewise constant functions. The current values of the control actions are determined by the subset of the phase space that contains the aggregate of the current states of the object at the observable points (in these states the control actions take constant values). In this paper such synthesized control actions are called zone control actions. A technique to obtain optimal values of the zone control actions with the use of smooth optimization methods is given. To this end, formulas for the gradient of the objective functional in the space of zone control actions are obtained.
Keywords: Feedback control, distributed systems, smooth optimization methods, lumped control synthesis.